neo-0.7.2/0002700013464101346420000000000013511307751010471 5ustar yohyohneo-0.7.2/LICENSE.txt0000600013464101346420000000273513507452453012330 0ustar yohyohCopyright (c) 2010-2018, Neo authors and contributors All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the names of the copyright holders nor the names of the contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. neo-0.7.2/MANIFEST.in0000600013464101346420000000030713507452453012234 0ustar yohyohinclude README.rst include LICENSE.txt include CITATION.rst prune drafts include examples/*.py recursive-include doc * prune doc/build exclude doc/source/images/*.svg exclude doc/source/images/*.dia neo-0.7.2/PKG-INFO0000600013464101346420000001213113511307751011564 0ustar yohyohMetadata-Version: 2.1 Name: neo Version: 0.7.2 Summary: Neo is a package for representing electrophysiology data in Python, together with support for reading a wide range of neurophysiology file formats Home-page: http://neuralensemble.org/neo Author: Neo authors and contributors Author-email: samuel.garcia@cnrs.fr License: BSD-3-Clause Description: === Neo === Neo is a Python package for working with electrophysiology data in Python, together with support for reading a wide range of neurophysiology file formats, including Spike2, NeuroExplorer, AlphaOmega, Axon, Blackrock, Plexon, Tdt, and support for writing to a subset of these formats plus non-proprietary formats including HDF5. The goal of Neo is to improve interoperability between Python tools for analyzing, visualizing and generating electrophysiology data by providing a common, shared object model. In order to be as lightweight a dependency as possible, Neo is deliberately limited to represention of data, with no functions for data analysis or visualization. Neo is used by a number of other software tools, including SpykeViewer_ (data analysis and visualization), Elephant_ (data analysis), the G-node_ suite (databasing), PyNN_ (simulations), tridesclous_ (spike sorting) and ephyviewer_ (data visualization). OpenElectrophy_ (data analysis and visualization) uses an older version of neo. 
Neo implements a hierarchical data model well adapted to intracellular and extracellular electrophysiology and EEG data with support for multi-electrodes (for example tetrodes). Neo's data objects build on the quantities package, which in turn builds on NumPy by adding support for physical dimensions. Thus Neo objects behave just like normal NumPy arrays, but with additional metadata, checks for dimensional consistency and automatic unit conversion. A project with similar aims but for neuroimaging file formats is `NiBabel`_. Code status ----------- .. image:: https://travis-ci.org/NeuralEnsemble/python-neo.png?branch=master :target: https://travis-ci.org/NeuralEnsemble/python-neo :alt: Unit Test Status (TravisCI) .. image:: https://circleci.com/gh/NeuralEnsemble/python-neo.svg?style=svg :target: https://circleci.com/gh/NeuralEnsemble/python-neo :alt: Unit Test Status (CircleCI) .. image:: https://coveralls.io/repos/NeuralEnsemble/python-neo/badge.png :target: https://coveralls.io/r/NeuralEnsemble/python-neo :alt: Unit Test Coverage .. image:: https://requires.io/github/NeuralEnsemble/python-neo/requirements.png?branch=master :target: https://requires.io/github/NeuralEnsemble/python-neo/requirements/?branch=master :alt: Requirements Status More information ---------------- - Home page: http://neuralensemble.org/neo - Mailing list: https://groups.google.com/forum/?fromgroups#!forum/neuralensemble - Documentation: http://neo.readthedocs.io/ - Bug reports: https://github.com/NeuralEnsemble/python-neo/issues For installation instructions, see doc/source/install.rst To cite Neo in publications, see CITATION.txt :copyright: Copyright 2010-2018 by the Neo team, see doc/source/authors.rst. :license: 3-Clause Revised BSD License, see LICENSE.txt for details. .. _OpenElectrophy: https://github.com/OpenElectrophy/OpenElectrophy .. _Elephant: http://neuralensemble.org/elephant .. _G-node: http://www.g-node.org/ .. _Neuroshare: http://neuroshare.org/ .. _SpykeViewer: https://spyke-viewer.readthedocs.org/en/latest/ .. _NiBabel: http://nipy.sourceforge.net/nibabel/ .. _PyNN: http://neuralensemble.org/PyNN .. _quantities: http://pypi.python.org/pypi/quantities .. _`NeuralEnsemble mailing list`: http://groups.google.com/group/neuralensemble .. _`issue tracker`: https://github.c .. _tridesclous: https://github.com/tridesclous/tridesclous .. 
_ephyviewer: https://github.com/NeuralEnsemble/ephyviewer Platform: UNKNOWN Classifier: Development Status :: 4 - Beta Classifier: Intended Audience :: Science/Research Classifier: License :: OSI Approved :: BSD License Classifier: Natural Language :: English Classifier: Operating System :: OS Independent Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.3 Classifier: Programming Language :: Python :: 3.4 Classifier: Programming Language :: Python :: 3.5 Classifier: Programming Language :: Python :: 3.6 Classifier: Topic :: Scientific/Engineering Provides-Extra: stimfitio Provides-Extra: hdf5io Provides-Extra: kwikio Provides-Extra: nixio Provides-Extra: neomatlabio Provides-Extra: igorproio neo-0.7.2/README.rst0000600013464101346420000000662513507452453012176 0ustar yohyoh=== Neo === Neo is a Python package for working with electrophysiology data in Python, together with support for reading a wide range of neurophysiology file formats, including Spike2, NeuroExplorer, AlphaOmega, Axon, Blackrock, Plexon, Tdt, and support for writing to a subset of these formats plus non-proprietary formats including HDF5. The goal of Neo is to improve interoperability between Python tools for analyzing, visualizing and generating electrophysiology data by providing a common, shared object model. In order to be as lightweight a dependency as possible, Neo is deliberately limited to represention of data, with no functions for data analysis or visualization. Neo is used by a number of other software tools, including SpykeViewer_ (data analysis and visualization), Elephant_ (data analysis), the G-node_ suite (databasing), PyNN_ (simulations), tridesclous_ (spike sorting) and ephyviewer_ (data visualization). OpenElectrophy_ (data analysis and visualization) uses an older version of neo. Neo implements a hierarchical data model well adapted to intracellular and extracellular electrophysiology and EEG data with support for multi-electrodes (for example tetrodes). Neo's data objects build on the quantities package, which in turn builds on NumPy by adding support for physical dimensions. Thus Neo objects behave just like normal NumPy arrays, but with additional metadata, checks for dimensional consistency and automatic unit conversion. A project with similar aims but for neuroimaging file formats is `NiBabel`_. Code status ----------- .. image:: https://travis-ci.org/NeuralEnsemble/python-neo.png?branch=master :target: https://travis-ci.org/NeuralEnsemble/python-neo :alt: Unit Test Status (TravisCI) .. image:: https://circleci.com/gh/NeuralEnsemble/python-neo.svg?style=svg :target: https://circleci.com/gh/NeuralEnsemble/python-neo :alt: Unit Test Status (CircleCI) .. image:: https://coveralls.io/repos/NeuralEnsemble/python-neo/badge.png :target: https://coveralls.io/r/NeuralEnsemble/python-neo :alt: Unit Test Coverage .. 
image:: https://requires.io/github/NeuralEnsemble/python-neo/requirements.png?branch=master :target: https://requires.io/github/NeuralEnsemble/python-neo/requirements/?branch=master :alt: Requirements Status More information ---------------- - Home page: http://neuralensemble.org/neo - Mailing list: https://groups.google.com/forum/?fromgroups#!forum/neuralensemble - Documentation: http://neo.readthedocs.io/ - Bug reports: https://github.com/NeuralEnsemble/python-neo/issues For installation instructions, see doc/source/install.rst To cite Neo in publications, see CITATION.txt :copyright: Copyright 2010-2018 by the Neo team, see doc/source/authors.rst. :license: 3-Clause Revised BSD License, see LICENSE.txt for details. .. _OpenElectrophy: https://github.com/OpenElectrophy/OpenElectrophy .. _Elephant: http://neuralensemble.org/elephant .. _G-node: http://www.g-node.org/ .. _Neuroshare: http://neuroshare.org/ .. _SpykeViewer: https://spyke-viewer.readthedocs.org/en/latest/ .. _NiBabel: http://nipy.sourceforge.net/nibabel/ .. _PyNN: http://neuralensemble.org/PyNN .. _quantities: http://pypi.python.org/pypi/quantities .. _`NeuralEnsemble mailing list`: http://groups.google.com/group/neuralensemble .. _`issue tracker`: https://github.c .. _tridesclous: https://github.com/tridesclous/tridesclous .. _ephyviewer: https://github.com/NeuralEnsemble/ephyviewer neo-0.7.2/doc/0002700013464101346420000000000013511307751011236 5ustar yohyohneo-0.7.2/doc/Makefile0000600013464101346420000000606413420077704012705 0ustar yohyoh# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = BUILDDIR = build # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source .PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex changes linkcheck doctest help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: -rm -rf $(BUILDDIR)/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." 
qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/neo.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/neo.qhc" latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ "run these through (pdf)latex." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." neo-0.7.2/doc/make.bat0000600013464101346420000000577513420077704012662 0ustar yohyoh@ECHO OFF REM Command file for Sphinx documentation set SPHINXBUILD=sphinx-build set BUILDDIR=build set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% source if NOT "%PAPER%" == "" ( set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS% ) if "%1" == "" goto help if "%1" == "help" ( :help echo.Please use `make ^` where ^ is one of echo. html to make standalone HTML files echo. dirhtml to make HTML files named index.html in directories echo. pickle to make pickle files echo. json to make JSON files echo. htmlhelp to make HTML files and a HTML help project echo. qthelp to make HTML files and a qthelp project echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter echo. changes to make an overview over all changed/added/deprecated items echo. linkcheck to check all external links for integrity echo. doctest to run all doctests embedded in the documentation if enabled goto end ) if "%1" == "clean" ( for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i del /q /s %BUILDDIR%\* goto end ) if "%1" == "html" ( %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html echo. echo.Build finished. The HTML pages are in %BUILDDIR%/html. goto end ) if "%1" == "dirhtml" ( %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml echo. echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml. goto end ) if "%1" == "pickle" ( %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle echo. echo.Build finished; now you can process the pickle files. goto end ) if "%1" == "json" ( %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json echo. echo.Build finished; now you can process the JSON files. goto end ) if "%1" == "htmlhelp" ( %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp echo. echo.Build finished; now you can run HTML Help Workshop with the ^ .hhp project file in %BUILDDIR%/htmlhelp. goto end ) if "%1" == "qthelp" ( %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp echo. echo.Build finished; now you can run "qcollectiongenerator" with the ^ .qhcp project file in %BUILDDIR%/qthelp, like this: echo.^> qcollectiongenerator %BUILDDIR%\qthelp\neo.qhcp echo.To view the help file: echo.^> assistant -collectionFile %BUILDDIR%\qthelp\neo.ghc goto end ) if "%1" == "latex" ( %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex echo. echo.Build finished; the LaTeX files are in %BUILDDIR%/latex. 
goto end ) if "%1" == "changes" ( %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes echo. echo.The overview file is in %BUILDDIR%/changes. goto end ) if "%1" == "linkcheck" ( %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck echo. echo.Link check complete; look for any errors in the above output ^ or in %BUILDDIR%/linkcheck/output.txt. goto end ) if "%1" == "doctest" ( %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest echo. echo.Testing of doctests in the sources finished, look at the ^ results in %BUILDDIR%/doctest/output.txt. goto end ) :end neo-0.7.2/doc/old_stuffs/0002700013464101346420000000000013511307751013406 5ustar yohyohneo-0.7.2/doc/old_stuffs/gif2011workshop.rst0000600013464101346420000000766213507452453017026 0ustar yohyoh************************************ Gif 2011 workshop decisions ************************************ This have been writtent before neo 2 implementation just after the wokshop. Not every hting is up to date. After a workshop in GIF we are happy to present the following improvements: =========================================================================== 1. We made a few renames of objects - "Neuron" into "Unit" - "RecordingPoint" into "RecordingChannel" to remove electrophysiological (or other) dependencies and keep generality. 2. For every object we specified mandatory attributes and recommended attributes. For every attribute we define a python-based data type. The changes are reflected in the diagram #FIXME with red (mandatory) and blue (recommended) attributes indicated. 3. New objects are required for operational performance (memory allocation) and logical consistency (neo eeg, etc): - AnalogSignalArray - IrregularlySampledAnalogSignal - EventArray - EpochArray - RecordingChannelGroup Attributes and parent objects are available on the diagram #FIXME 4. Due to some logical considerations we remove the link between "RecordingChannel" and "Spiketrain". "SpikeTrain" now depends on "Unit", which in its turn connects to "RecordingChannel". For inconsistency reasons we removed link between "SpikeTrain" and a "Spike" ("SpikeTrain" is an object containing numpy array of spikes, but not a container of "Spike" objects - which is performance-unefficient). The same idea is applied to AnalogSignal / AnalogSignalArray, Event / EventArray etc. All changes are relected in # FIXME 5. In order to implement flexibility and embed user-defined metadata into the NEO objects we decided to assign "annotations" dictionnary to very NEO object. This attribute is optional; user may add key-value pairs to it according to its scientific needs. 6. The decision is made to use "quantities" package for objects, representing data arrays with units. "Quantities" is a stable (at least for python2.6) package, presented in pypi, easy-embeddable into NEO object model. Points of implementation are presented in the diagram # FIXME 7. We postpone the solution of object ID management inside NEO. 8. In AnalogSignal - t_stop become a property (for consistency reasons). 9. In order to provie a support for "advanced" object load we decided to include parameters - lazy (True/False) - cascade (True/False) in the BaseIO class. These parameters are valid for every method, provided by the IO (.read_segment() etc.). If "lazy" is True, the IO does not load data array, and makes array load otherwise. "Cascade" parameter regulates load of object relations. 10. We postpone the question of data analysis storage till the next NEO congress. 
Analysis objects are free for the moment. 11. We stay with Python 2.6 / 2.7 support. Python 3 to be considered in a later discussions. New object diagram discussed =============================================== .. image:: images/neo_UML_French_workshop.png :height: 500 px :align: center Actions to be performed: =============================================================== promotion: at g-node: philipp, andrey in neuralesemble: andrew within incf network: andrew thomas at posters: all logo: samuel paper: next year in the web: pypi object struture: common: samuel draft: yann andrey tree diagram: philipp florant io: ExampleIO : samuel HDF5 IO: andrey doc: first page: andrew thomas object disription: samuel draft+ andrew io user/ io dev: samuel example/cookbook: andrey script, samuel NeuroConvert, doctest unitest: andrew packaging: samuel account for more licence: BSD-3-Clause copyright: CNRS, GNode, University of Provence hosting test data: Philipp Other questions discussed: =========================== - consistency in names of object attributes and get/set functions neo-0.7.2/doc/old_stuffs/specific_annotations.rst0000600013464101346420000000161613507452453020353 0ustar yohyoh.. _specific_annotations: ******************** Specific annotations ******************** Introduction ------------ Neo imposes and recommends some attributes for all objects, and also provides the *annotations* dict for all objects to deal with any kind of extensions. This flexible feature allow Neo objects to be customized for many use cases. While any names can be used for annotations, interoperability will be improved if there is some consistency in naming. Here we suggest some conventions for annotation names. Patch clamp ----------- .. todo: TODO Network simultaion ------------------ Spike sorting ------------- **SpikeTrain.annotations['waveform_features']** : when spike sorting the waveform is reduced to a smaller dimensional space with PCA or wavelets. This attribute is the projected matrice. NxM (N spike number, M features number. KlustakwikIO supports this feature. neo-0.7.2/doc/source/0002700013464101346420000000000013511307751012536 5ustar yohyohneo-0.7.2/doc/source/api_reference.rst0000600013464101346420000000020313420077704016053 0ustar yohyohAPI Reference ============= .. automodule:: neo.core .. testsetup:: * from neo import SpikeTrain import quantities as pqneo-0.7.2/doc/source/authors.rst0000600013464101346420000000464513507452453014773 0ustar yohyoh======================== Authors and contributors ======================== The following people have contributed code and/or ideas to the current version of Neo. The institutional affiliations are those at the time of the contribution, and may not be the current affiliation of a contributor. * Samuel Garcia [1] * Andrew Davison [2] * Chris Rodgers [3] * Pierre Yger [2] * Yann Mahnoun [4] * Luc Estabanez [2] * Andrey Sobolev [5] * Thierry Brizzi [2] * Florent Jaillet [6] * Philipp Rautenberg [5] * Thomas Wachtler [5] * Cyril Dejean [7] * Robert Pröpper [8] * Domenico Guarino [2] * Achilleas Koutsou [5] * Erik Li [9] * Georg Raiser [10] * Joffrey Gonin [2] * Kyler Brown [?] 
* Mikkel Elle Lepperød [11] * C Daniel Meliza [12] * Julia Sprenger [13] * Maximilian Schmidt [13] * Johanna Senk [13] * Carlos Canova [13] * Hélissande Fragnaud [2] * Mark Hollenbeck [14] * Mieszko Grodzicki * Rick Gerkin [15] * Matthieu Sénoville [2] * Chadwick Boulay [16] * Björn Müller [13] * William Hart [17] * erikli(github) * Jeffrey Gill [18] * Lucas (lkoelman@github) * Mark Histed * Mike Sintsov * Scott W Harden [19] 1. Centre de Recherche en Neuroscience de Lyon, CNRS UMR5292 - INSERM U1028 - Universite Claude Bernard Lyon 1 2. Unité de Neuroscience, Information et Complexité, CNRS UPR 3293, Gif-sur-Yvette, France 3. University of California, Berkeley 4. Laboratoire de Neurosciences Intégratives et Adaptatives, CNRS UMR 6149 - Université de Provence, Marseille, France 5. G-Node, Ludwig-Maximilians-Universität, Munich, Germany 6. Institut de Neurosciences de la Timone, CNRS UMR 7289 - Université d'Aix-Marseille, Marseille, France 7. Centre de Neurosciences Integratives et Cignitives, UMR 5228 - CNRS - Université Bordeaux I - Université Bordeaux II 8. Neural Information Processing Group, TU Berlin, Germany 9. Department of Neurobiology & Anatomy, Drexel University College of Medicine, Philadelphia, PA, USA 10. University of Konstanz, Konstanz, Germany 11. Centre for Integrative Neuroplasticity (CINPLA), University of Oslo, Norway 12. University of Virginia 13. INM-6, Forschungszentrum Jülich, Germany 14. University of Texas at Austin 15. Arizona State University 16. Ottawa Hospital Research Institute, Canada 17. Swinburne University of Technology, Australia 18. Case Western Reserve University (CWRU) · Department of Biology 19. Harden Technologies, LLC If we've somehow missed you off the list we're very sorry - please let us know. neo-0.7.2/doc/source/conf.py0000600013464101346420000001556013507452453014051 0ustar yohyoh# -*- coding: utf-8 -*- # # neo documentation build configuration file, created by # sphinx-quickstart on Fri Feb 25 14:18:12 2011. # # This file is execfile()d with the current directory set to its containing # dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import os import sys from distutils.version import LooseVersion with open("../../neo/version.py") as fp: d = {} exec(fp.read(), d) neo_release = d['version'] neo_version = '.'.join(str(e) for e in LooseVersion(neo_release).version[:2]) AUTHORS = u'Neo authors and contributors ' # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. # sys.path.append(os.path.abspath('.')) # -- General configuration ---------------------------------------------------- # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = ['sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.todo'] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. # source_encoding = 'utf-8' # The master toctree document. master_doc = 'index' # General information about the project. 
project = u'Neo' copyright = u'2010-2018, ' + AUTHORS # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = neo_version # The full version, including alpha/beta/rc tags. release = neo_release # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: # today = '' # Else, today_fmt is used as the format for a strftime call. # today_fmt = '%B %d, %Y' # List of documents that shouldn't be included in the build. # unused_docs = [] # List of directories, relative to source directory, that shouldn't be searched # for source files. exclude_trees = [] # The reST default role (used for this markup: `text`) # to use for all documents. # default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. # add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). # add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. # show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. # modindex_common_prefix = [] # -- Options for HTML output -------------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. # html_theme = 'default' html_theme = 'sphinxdoc' # html_theme = 'haiku' # html_theme = 'scrolls' # html_theme = 'agogo' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. # html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. # html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". # html_title = None # A shorter title for the navigation bar. Default is the same as html_title. # html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. html_logo = 'images/neologo_light.png' # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. # html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # html_use_smartypants = True # Custom sidebar templates, maps document names to template names. # html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. # html_additional_pages = {} # If false, no module index is generated. 
# html_use_modindex = True # If false, no index is generated. # html_use_index = True # If true, the index is split into individual pages for each letter. # html_split_index = False # If true, links to the reST sources are added to the pages. # html_show_sourcelink = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. # html_use_opensearch = '' # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). # html_file_suffix = '' # Output file base name for HTML help builder. htmlhelp_basename = 'neodoc' # -- Options for LaTeX output ------------------------------------------------- # The paper size ('letter' or 'a4'). # latex_paper_size = 'letter' # The font size ('10pt', '11pt' or '12pt'). # latex_font_size = '10pt' # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, # documentclass [howto/manual]). latex_documents = [('index', 'neo.tex', u'Neo Documentation', AUTHORS, 'manual')] # The name of an image file (relative to this directory) to place at the # top of the title page. # latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. # latex_use_parts = False # Additional stuff for the LaTeX preamble. # latex_preamble = '' # Documents to append as an appendix to all manuals. # latex_appendices = [] # If false, no module index is generated. # latex_use_modindex = True todo_include_todos = True # set to False before releasing documentation rst_epilog = """ .. |neo_github_url| replace:: https://github.com/NeuralEnsemble/python-neo/archive/neo-{0}.zip """.format(neo_release) neo-0.7.2/doc/source/core.rst0000600013464101346420000002626313507452453014236 0ustar yohyoh******** Neo core ******** .. currentmodule:: neo.core This figure shows the main data types in Neo: .. image:: images/base_schematic.png :height: 500 px :alt: Illustration of the main Neo data types :align: center Neo objects fall into three categories: data objects, container objects and grouping objects. Data objects ------------ These objects directly represent data as arrays of numerical values with associated metadata (units, sampling frequency, etc.). * :py:class:`AnalogSignal`: A regular sampling of a single- or multi-channel continuous analog signal. * :py:class:`IrregularlySampledSignal`: A non-regular sampling of a single- or multi-channel continuous analog signal. * :py:class:`SpikeTrain`: A set of action potentials (spikes) emitted by the same unit in a period of time (with optional waveforms). * :py:class:`Event`: An array of time points representing one or more events in the data. * :py:class:`Epoch`: An array of time intervals representing one or more periods of time in the data. Container objects ----------------- There is a simple hierarchy of containers: * :py:class:`Segment`: A container for heterogeneous discrete or continous data sharing a common clock (time basis) but not necessarily the same sampling rate, start time or end time. A :py:class:`Segment` can be considered as equivalent to a "trial", "episode", "run", "recording", etc., depending on the experimental context. May contain any of the data objects. * :py:class:`Block`: The top-level container gathering all of the data, discrete and continuous, for a given recording session. Contains :class:`Segment`, :class:`Unit` and :class:`ChannelIndex` objects. 
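A minimal session can be assembled from these containers directly. The following sketch (using arbitrary placeholder names and random data, with the ``quantities`` package imported as ``pq``) shows the usual pattern: create a :py:class:`Block`, add one :py:class:`Segment` per trial, and append the data objects to each segment::

    import numpy as np
    import quantities as pq
    from neo import Block, Segment, AnalogSignal, SpikeTrain

    blk = Block(name='example session')
    for i in range(3):                      # one segment per trial
        seg = Segment(name='trial %d' % i)
        blk.segments.append(seg)
        # a single-channel signal, 1 s long, sampled at 1 kHz (random values)
        sig = AnalogSignal(np.random.rand(1000, 1) * pq.mV,
                           sampling_rate=1 * pq.kHz)
        seg.analogsignals.append(sig)
        # a few spike times falling within the same 1 s period
        st = SpikeTrain([0.1, 0.4, 0.7] * pq.s, t_stop=1.0 * pq.s)
        seg.spiketrains.append(st)

The grouping objects described next cut across this hierarchy, linking signals and spike trains that belong together across segments.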
Grouping objects ---------------- These objects express the relationships between data items, such as which signals were recorded on which electrodes, which spike trains were obtained from which membrane potential signals, etc. They contain references to data objects that cut across the simple container hierarchy. * :py:class:`ChannelIndex`: A set of indices into :py:class:`AnalogSignal` objects, representing logical and/or physical recording channels. This has two uses: 1. for linking :py:class:`AnalogSignal` objects recorded from the same (multi)electrode across several :py:class:`Segment`\s. 2. for spike sorting of extracellular signals, where spikes may be recorded on more than one recording channel, and the :py:class:`ChannelIndex` can be used to associate each :py:class:`Unit` with the group of recording channels from which it was obtained. * :py:class:`Unit`: links the :class:`SpikeTrain` objects within a :class:`Block`, possibly across multiple Segments, that were emitted by the same cell. A :class:`Unit` is linked to the :class:`ChannelIndex` object from which the spikes were detected. NumPy compatibility =================== Neo data objects inherit from :py:class:`Quantity`, which in turn inherits from NumPy :py:class:`ndarray`. This means that a Neo :py:class:`AnalogSignal` is also a :py:class:`Quantity` and an array, giving you access to all of the methods available for those objects. For example, you can pass a :py:class:`SpikeTrain` directly to the :py:func:`numpy.histogram` function, or an :py:class:`AnalogSignal` directly to the :py:func:`numpy.std` function. If you want to get a numpy.ndarray you use magnitude and rescale from quantities:: >>> np_sig = neo_analogsignal.rescale('mV').magnitude >>> np_times = neo_analogsignal.times.rescale('s').magnitude Relationships between objects ============================= Container objects like :py:class:`Block` or :py:class:`Segment` are gateways to access other objects. For example, a :class:`Block` can access a :class:`Segment` with:: >>> bl = Block() >>> bl.segments # gives a list of segments A :class:`Segment` can access the :class:`AnalogSignal` objects that it contains with:: >>> seg = Segment() >>> seg.analogsignals # gives a list of AnalogSignals In the :ref:`neo_diagram` below, these *one to many* relationships are represented by cyan arrows. In general, an object can access its children with an attribute *childname+s* in lower case, e.g. * :attr:`Block.segments` * :attr:`Segments.analogsignals` * :attr:`Segments.spiketrains` * :attr:`Block.channel_indexes` These relationships are bi-directional, i.e. a child object can access its parent: * :attr:`Segment.block` * :attr:`AnalogSignal.segment` * :attr:`SpikeTrain.segment` * :attr:`ChannelIndex.block` Here is an example showing these relationships in use:: from neo.io import AxonIO import urllib url = "https://portal.g-node.org/neo/axon/File_axon_3.abf" filename = './test.abf' urllib.urlretrieve(url, filename) r = AxonIO(filename=filename) bl = r.read() # read the entire file > a Block print(bl) print(bl.segments) # child access for seg in bl.segments: print(seg) print(seg.block) # parent access In some cases, a one-to-many relationship is sufficient. 
Here is a simple example with tetrodes, in which each tetrode has its own group.:: from neo import Block, ChannelIndex bl = Block() # the four tetrodes for i in range(4): chx = ChannelIndex(name='Tetrode %d' % i, index=[0, 1, 2, 3]) bl.channelindexes.append(chx) # now we load the data and associate it with the created channels # ... Now consider a more complex example: a 1x4 silicon probe, with a neuron on channels 0,1,2 and another neuron on channels 1,2,3. We create a group for each neuron to hold the :class:`Unit` object associated with this spike sorting group. Each group also contains the channels on which that neuron spiked. The relationship is many-to-many because channels 1 and 2 occur in multiple groups.:: bl = Block(name='probe data') # one group for each neuron chx0 = ChannelIndex(name='Group 0', index=[0, 1, 2]) bl.channelindexes.append(chx0) chx1 = ChannelIndex(name='Group 1', index=[1, 2, 3]) bl.channelindexes.append(chx1) # now we add the spiketrain from Unit 0 to chx0 # and add the spiketrain from Unit 1 to chx1 # ... Note that because neurons are sorted from groups of channels in this situation, it is natural that the :py:class:`ChannelIndex` contains a reference to the :py:class:`Unit` object. That unit then contains references to its spiketrains. Also note that recording channels can be identified by names/labels as well as, or instead of, integer indices. See :doc:`usecases` for more examples of how the different objects may be used. .. _neo_diagram: Neo diagram =========== Object: * With a star = inherits from :class:`Quantity` Attributes: * In red = required * In white = recommended Relationship: * In cyan = one to many * In yellow = properties (deduced from other relationships) .. image:: images/simple_generated_diagram.png :width: 750 px :download:`Click here for a better quality SVG diagram <./images/simple_generated_diagram.svg>` For more details, see the :doc:`api_reference`. Initialization ============== Neo objects are initialized with "required", "recommended", and "additional" arguments. - Required arguments MUST be provided at the time of initialization. They are used in the construction of the object. - Recommended arguments may be provided at the time of initialization. They are accessible as Python attributes. They can also be set or modified after initialization. - Additional arguments are defined by the user and are not part of the Neo object model. A primary goal of the Neo project is extensibility. These additional arguments are entries in an attribute of the object: a Python dict called :py:attr:`annotations`. Note : Neo annotations are not the same as the *__annotations__* attribute introduced in Python 3.6. Example: SpikeTrain ------------------- :py:class:`SpikeTrain` is a :py:class:`Quantity`, which is a NumPy array containing values with physical dimensions. The spike times are a required attribute, because the dimensionality of the spike times determines the way in which the :py:class:`Quantity` is constructed. Here is how you initialize a :py:class:`SpikeTrain` with required arguments:: >>> import neo >>> st = neo.SpikeTrain([3, 4, 5], units='sec', t_stop=10.0) >>> print(st) [ 3. 4. 5.] s You will see the spike times printed in a nice format including the units. Because `st` "is a" :py:class:`Quantity` array with units of seconds, it absolutely must have this information at the time of initialization. 
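If the units are omitted (and the times are not already a :py:class:`Quantity`), the constructor cannot build the underlying array and raises an error rather than guessing. A short sketch (the exact error message may vary between versions)::

    >>> neo.SpikeTrain([3, 4, 5], t_stop=10.0)   # no units given -> raises ValueError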
You can specify the spike times with a keyword argument too:: >>> st = neo.SpikeTrain(times=[3, 4, 5], units='sec', t_stop=10.0) The spike times could also be in a NumPy array. If it is not specified, :attr:`t_start` is assumed to be zero, but another value can easily be specified:: >>> st = neo.SpikeTrain(times=[3, 4, 5], units='sec', t_start=1.0, t_stop=10.0) >>> st.t_start array(1.0) * s Recommended attributes must be specified as keyword arguments, not positional arguments. Finally, let's consider "additional arguments". These are the ones you define for your experiment:: >>> st = neo.SpikeTrain(times=[3, 4, 5], units='sec', t_stop=10.0, rat_name='Fred') >>> print(st.annotations) {'rat_name': 'Fred'} Because ``rat_name`` is not part of the Neo object model, it is placed in the dict :py:attr:`annotations`. This dict can be modified as necessary by your code. Annotations ----------- As well as adding annotations as "additional" arguments when an object is constructed, objects may be annotated using the :meth:`annotate` method possessed by all Neo core objects, e.g.:: >>> seg = Segment() >>> seg.annotate(stimulus="step pulse", amplitude=10*nA) >>> print(seg.annotations) {'amplitude': array(10.0) * nA, 'stimulus': 'step pulse'} Since annotations may be written to a file or database, there are some limitations on the data types of annotations: they must be "simple" types or containers (lists, dicts, tuples, NumPy arrays) of simple types, where the simple types are ``integer``, ``float``, ``complex``, ``Quantity``, ``string``, ``date``, ``time`` and ``datetime``. Array Annotations ----------------- Next to "regular" annotations there is also a way to annotate arrays of values in order to create annotations with one value per data point. Using this feature, called Array Annotations, the consistency of those annotations with the actual data is ensured. Apart from adding those on object construction, Array Annotations can also be added using the :meth:`array_annotate` method provided by all Neo data objects, e.g.:: >>> sptr = SpikeTrain(times=[1, 2, 3]*pq.s, t_stop=3*pq.s) >>> sptr.array_annotate(index=[0, 1, 2], relevant=[True, False, True]) >>> print(sptr.array_annotations) {'index': array([0, 1, 2]), 'relevant': array([ True, False, True])} Since Array Annotations may be written to a file or database, there are some limitations on the data types of arrays: they must be 1-dimensional (i.e. not nested) and contain the same types as annotations: ``integer``, ``float``, ``complex``, ``Quantity``, ``string``, ``date``, ``time`` and ``datetime``. neo-0.7.2/doc/source/developers_guide.rst0000600013464101346420000002451713507452453016633 0ustar yohyoh================= Developers' guide ================= These instructions are for developing on a Unix-like platform, e.g. Linux or Mac OS X, with the bash shell. If you develop on Windows, please get in touch. Mailing lists ------------- General discussion of Neo development takes place in the `NeuralEnsemble Google group`_. Discussion of issues specific to a particular ticket in the issue tracker should take place on the tracker. Using the issue tracker ----------------------- If you find a bug in Neo, please create a new ticket on the `issue tracker`_, setting the type to "defect". Choose a name that is as specific as possible to the problem you've found, and in the description give as much information as you think is necessary to recreate the problem. 
The best way to do this is to create the shortest possible Python script that demonstrates the problem, and attach the file to the ticket. If you have an idea for an improvement to Neo, create a ticket with type "enhancement". If you already have an implementation of the idea, create a patch (see below) and attach it to the ticket. To keep track of changes to the code and to tickets, you can register for a GitHub account and then set to watch the repository at `GitHub Repository`_ (see https://help.github.com/articles/watching-repositories/). Requirements ------------ * Python_ 2.7, 3.4 or later * numpy_ >= 1.7.1 * quantities_ >= 0.9.0 * nose_ >= 0.11.1 (for running tests) * Sphinx_ >= 0.6.4 (for building documentation) * (optional) tox_ >= 0.9 (makes it easier to test with multiple Python versions) * (optional) coverage_ >= 2.85 (for measuring test coverage) * (optional) scipy >= 0.12 (for MatlabIO) * (optional) h5py >= 2.5 (for KwikIO, NeoHdf5IO) We strongly recommend you develop within a virtual environment (from virtualenv, venv or conda). It is best to have at least one virtual environment with Python 2.7 and one with Python 3.x. Getting the source code ----------------------- We use the Git version control system. The best way to contribute is through GitHub_. You will first need a GitHub account, and you should then fork the repository at `GitHub Repository`_ (see http://help.github.com/fork-a-repo/). To get a local copy of the repository:: $ cd /some/directory $ git clone git@github.com:/python-neo.git Now you need to make sure that the ``neo`` package is on your PYTHONPATH. You can do this either by installing Neo:: $ cd python-neo $ python setup.py install $ python3 setup.py install (if you do this, you will have to re-run ``setup.py install`` any time you make changes to the code) *or* by creating symbolic links from somewhere on your PYTHONPATH, for example:: $ ln -s python-neo/neo $ export PYTHONPATH=/some/directory:${PYTHONPATH} An alternate solution is to install Neo with the *develop* option, this avoids reinstalling when there are changes in the code:: $ sudo python setup.py develop or using the "-e" option to pip:: $ pip install -e python-neo To update to the latest version from the repository:: $ git pull Running the test suite ---------------------- Before you make any changes, run the test suite to make sure all the tests pass on your system:: $ cd neo/test With Python 2.7 or 3.x:: $ python -m unittest discover $ python3 -m unittest discover If you have nose installed:: $ nosetests At the end, if you see "OK", then all the tests passed (or were skipped because certain dependencies are not installed), otherwise it will report on tests that failed or produced errors. To run tests from an individual file:: $ python test_analogsignal.py $ python3 test_analogsignal.py Writing tests ------------- You should try to write automated tests for any new code that you add. If you have found a bug and want to fix it, first write a test that isolates the bug (and that therefore fails with the existing codebase). Then apply your fix and check that the test now passes. To see how well the tests cover the code base, run:: $ nosetests --with-coverage --cover-package=neo --cover-erase Working on the documentation ---------------------------- All modules, classes, functions, and methods (including private and subclassed builtin methods) should have docstrings. Please see `PEP257`_ for a description of docstring conventions. 
Module docstrings should explain briefly what functions or classes are present. Detailed descriptions can be left for the docstrings of the respective functions or classes. Private functions do not need to be explained here. Class docstrings should include an explanation of the purpose of the class and, when applicable, how it relates to standard neuroscientific data. They should also include at least one example, which should be written so it can be run as-is from a clean newly-started Python interactive session (that means all imports should be included). Finally, they should include a list of all arguments, attributes, and properties, with explanations. Properties that return data calculated from other data should explain what calculation is done. A list of methods is not needed, since documentation will be generated from the method docstrings. Method and function docstrings should include an explanation for what the method or function does. If this may not be clear, one or more examples may be included. Examples that are only a few lines do not need to include imports or setup, but more complicated examples should have them. Examples can be tested easily using the iPython `%doctest_mode` magic. This will strip >>> and ... from the beginning of each line of the example, so the example can be copied and pasted as-is. The documentation is written in `reStructuredText`_, using the `Sphinx`_ documentation system. Any mention of another Neo module, class, attribute, method, or function should be properly marked up so automatic links can be generated. The same goes for quantities or numpy. To build the documentation:: $ cd python-neo/doc $ make html Then open `some/directory/python-neo/doc/build/html/index.html` in your browser. Committing your changes ----------------------- Once you are happy with your changes, **run the test suite again to check that you have not introduced any new bugs**. It is also recommended to check your code with a code checking program, such as `pyflakes`_ or `flake8`_. Then you can commit them to your local repository:: $ git commit -m 'informative commit message' If this is your first commit to the project, please add your name and affiliation/employer to :file:`doc/source/authors.rst` You can then push your changes to your online repository on GitHub:: $ git push Once you think your changes are ready to be included in the main Neo repository, open a pull request on GitHub (see https://help.github.com/articles/using-pull-requests). Python version -------------- Neo core should work with both Python 2.7 and Python 3 (version 3.4 or newer). Neo IO modules should ideally work with both Python 2 and 3, but certain modules may only work with one or the other (see :doc:`install`). So far, we have managed to write code that works with both Python 2 and 3. Mainly this involves avoiding the ``print`` statement (use ``logging.info`` instead), and putting ``from __future__ import division`` at the beginning of any file that uses division. If in doubt, `Porting to Python 3`_ by Lennart Regebro is an excellent resource. The most important thing to remember is to run tests with at least one version of Python 2 and at least one version of Python 3. There is generally no problem in having multiple versions of Python installed on your computer at once: e.g., on Ubuntu Python 2 is available as `python` and Python 3 as `python3`, while on Arch Linux Python 2 is `python2` and Python 3 `python`. See `PEP394`_ for more on this. 
Using virtual environments makes this very straightforward. Coding standards and style -------------------------- All code should conform as much as possible to `PEP 8`_, and should run with Python 2.7, and 3.4 or newer. You can use the `pep8`_ program to check the code for PEP 8 conformity. You can also use `flake8`_, which combines pep8 and pyflakes. However, the pep8 and flake8 programs do not check for all PEP 8 issues. In particular, they do not check that the import statements are in the correct order. Also, please do not use ``from xyz import *``. This is slow, can lead to conflicts, and makes it difficult for code analysis software. Making a release ---------------- .. TODO: discuss branching/tagging policy. Add a section in :file:`/doc/source/whatisnew.rst` for the release. First check that the version string (in :file:`neo/version.py`) is correct. To build a source package:: $ python setup.py sdist Tag the release in the Git repository and push it:: $ git tag $ git push --tags origin $ git push --tags upstream To upload the package to `PyPI`_ (currently Samuel Garcia, Andrew Davison, Michael Denker and Julia Sprenger have the necessary permissions to do this):: $ twine upload dist/neo-0.X.Y.tar.gz .. talk about readthedocs .. make a release branch If you want to develop your own IO module ----------------------------------------- See :ref:`io_dev_guide` for implementation of a new IO. .. _Python: http://www.python.org .. _nose: http://somethingaboutorange.com/mrl/projects/nose/ .. _unittest2: http://pypi.python.org/pypi/unittest2 .. _Setuptools: https://pypi.python.org/pypi/setuptools/ .. _tox: http://codespeak.net/tox/ .. _coverage: http://nedbatchelder.com/code/coverage/ .. _`PEP 8`: http://www.python.org/dev/peps/pep-0008/ .. _`issue tracker`: https://github.com/NeuralEnsemble/python-neo/issues .. _`Porting to Python 3`: http://python3porting.com/ .. _`NeuralEnsemble Google group`: http://groups.google.com/group/neuralensemble .. _reStructuredText: http://docutils.sourceforge.net/rst.html .. _Sphinx: http://sphinx.pocoo.org/ .. _numpy: http://numpy.scipy.org/ .. _quantities: http://pypi.python.org/pypi/quantities .. _PEP257: http://www.python.org/dev/peps/pep-0257/ .. _PEP394: http://www.python.org/dev/peps/pep-0394/ .. _PyPI: http://pypi.python.org .. _GitHub: http://github.com .. _`GitHub Repository`: https://github.com/NeuralEnsemble/python-neo/ .. _pep8: https://pypi.python.org/pypi/pep8 .. _flake8: https://pypi.python.org/pypi/flake8/ .. _pyflakes: https://pypi.python.org/pypi/pyflakes/ neo-0.7.2/doc/source/examples.rst0000600013464101346420000000044713420077704015114 0ustar yohyoh**************** Examples **************** .. currentmodule:: neo Introduction ============= A set of examples in :file:`neo/examples/` illustrates the use of Neo classes. .. literalinclude:: ../../examples/read_files.py .. literalinclude:: ../../examples/simple_plot_with_matplotlib.py neo-0.7.2/doc/source/images/0002700013464101346420000000000013511307751014003 5ustar yohyohneo-0.7.2/doc/source/images/base_schematic.png0000600013464101346420000016635313420077704017462 0ustar yohyohPNG  IHDR_KosBIT|d pHYsaa?itEXtSoftwarewww.inkscape.org< IDATxwTgfwY FAA-**^=-jb0bb7&؍I+QQl("ǹ.݅ggs]=s>X; """"5R ."""""""""1P ."""""""""1P ."""""""""1P ."""""""""1P ."""""""""1P ."""""""""1P ."""""""""1P ."""""""""1P ."""""""""1P ."""""""""1r]bfϘSq63;(3f#""""WQG&QXVӁv͍m{&7ΎH!80Մ~ffrkڹO"""""@O P mp?vG w_fvp{P .""""[N5Yޏ^N03kvl[36f6$""""[NGEkܽ23{̖ l=hfƶfG3{X,VL3la~VP]▴!""""][ lcQdf? 
ki^v%G\+ ^3KYNyG{ Ix1/aQ1z/4+M<9xēWX.WLQi O& =_H77Ú;R~X@\)r[HqWNު Osx"-e.VOmF^EV-J7D45E)r[@hԨT>.%OL TzWO(y |G?-qYTE`cvmZ舜H^R&ē&}ta?F(ti/sfqT#e0'b>QHRJ'+7o3{4^TtVEsĕRJmEZ1ؿ\`l,r˕{h̢BZ#RJ)4Vb '&SpHSˢTi RJ)l,u&RxYJE(T`c Qk_,p*K!^)]L+qYTn+?f0+ :yGi5Pm,ruA 6Ete]X>}#=XKv!+dqT wf0yf8šRl.[eM'w4RZZ- 2diY^n+`YvE6?˲l/.-q [g: B>7l6T=gY/`V`^ʤJ^b@\)RYXd kOcƘX7Wbc>gYƘt*w|3CcL6` \ x&Q%1f\fx^UncLc8$, w4\m0Qqe1YnUge.n:HQeH"@\)Uj{a;YO>e:.f"N yG<)kwW3*&ws'cl,2"H'7dwsGL~}NAc:s뙔ezKn_}Lezq5:㠀%o:?g+p(ߗmJ)ꪥo~%vca6/%4WJ)T1f9ϱ\I ˵nZ[`@>HkĕRJ)U6.p3l8OДF\)REccL<-'ο 1UdISJ)*:ē"@m34WJ)TIXd*qRJ)BqRJ)@qRJ)@qRJ)@qRJ)@qRJ)@qRJ)@qRJ)@qRJ)@qRJ)@qRJ)@qRJ)@qRJ)@qRJ)@qRJ)@qRJ)@qRJ)@qRJ)@qRJ)@qRJ)@qRJ)ʠPJm:3[r>.RJJR 8܅P}.RJJRrB);J)%h rM9n)FHRΌ(wRjK@\)Xky"~4l lUc Zÿ3Hk榗}TZ7S1ǁ9Hֽv(4c}ޞkk_̾+R`_a@u5\9ZwZ[rk$'Hkkpe; X^cf~g,sy- x8xcf eGB2 x;{O,٩P?x3c `rimۻv(_oEW+o?|6(=~=k/K!oBØ{CEbCjyiP1R[_^}S5,T g(sdf`AÀA:/yH 0Y["\s:3ŢaMI&3߇\RspGqT;t?l-#J OY 63&3J)ֈ+:0a V7y0J,966ޯ3saJ)T5JmexTTka5 0ߦ` a@O,o c@ӶCP&rX~͟¢inRJ+ٯk3]1byX 4XKF @c!eXQAcP44XK1RTְ+R*a ް)Ch$RJ夁R}k%gT.b ϥ fޜ7ku5Zpm8ЈQ 6Ww+g^n,,\ KEneƌݭtVYl@А^Z[4,e9hVY_[~lv> ^T MM@zڿlra_+4fo)` Y|EO=f`M:ϾfhmFNz >\3A^OHϲ2l {Of Ag)7}H-G1k\)}cP+5rFC`nX췲Z ޕ#FZP؅r>v:}%8n6d4Y"ipn |t>(O5JE=Vf̜OUխߕ<lD|7 ۽}mITYcZk)F[g.3}ioy*uuQ!'muJ7 1, ?N%:3eT[=VJm34Wp4fv?aR=f4V1ڰgj \hjWR[ ĕ ,gtA83 Yy5GT4Y)u}`x܅PJ)Th T`[.RJ)GSSNW J)*.WJ)R 4WJ)R 4WJ)R 4W* Tu*Rj릍5Uי݀QR`1Wӽ:cMC~338 ۯ3!ܗf;'יtV|>=T:?p;ul>v=?{-'}ث?E@ehV<ļ'p6PVI^}WJ)Tnz u.t2pUw י O@,xȯ3ӺH.z `p/},p2GW?uas])R9hf!3KS^ ϺWc -;~ܫ -_g.EjpA+ERcj*l.R農חz5vCxWce:p_gx5} zR;μיa]yRJ)U Uy~ I59~̫s.eū:s7M$ 㿅31z$>3)HN _V}b~2&+>IWx}<י;#5J)*" UeH.=e߉~j67,4l7_kpMRJ)U$Z#zϯ3FzLY/eou$Pp 0ӫ؇:ה: Ʈjl_g <י ]V##dzHL^ua^t?י{~\ RJ"qq?$~ X < |8)G`}k^S|onDK܈"A,BY@jsF^}0x>WJ)T/5Wj~R= yk. 9.ͲL:l]Zڌү3a* IDATf7&W~0 x:Էx#T\ެ{_gnG.w7_ B &S=3sƛruAw5S̩OlS3򙮬`N"3sʣf攷Mw33ʜ]nif:-)SW}ޥPJ)PQ嵬e,3.F۫uZ6*Uo5J)RJJ)RJJ)RJJ)RJ6TJ~ozƜȆ59^Tιlo~0rt2>na51gJgrz1ߕTZl ߤ.¼M4FW]1Sv xӲUU-Uobٰ1kǔdJ^cXko-wYTf9Z;em1g֚rekaf>^_?mk{~4ikڞnek-0V`BOPc#kmIRJ)@qRJ)@qRJ)@k*Usͷ| n<]c@=941`eXꯛ42̍vARJq Q cra8 &x >|d53xPJ)ThRUoҏ#NcKRJ"@\rm*XG~fοnآeRJ)R[¹w^Tp6Hò/0s\ `n0edOp98-gCk,F-jRJ)U+%raV` 9lF`:j$< s(0^TꗡRJ@\-R|/z3cx0h&f%h)uRJ)U+%8ʏB)/qzZ+H .'.DDDDGA\ C"""" .;|x6%t!"""R "\_\|Z ]tHq=2pqBDDDEzQMwZ't9""" *Z\+o !jmd#;sGs䏞Z悱vצ|gN 1j]y>;ϾąZ]k|C7߷'Zqs[_(6sݩZG \ǁCg[s@û"c9е4#GO6ew]F C>PJf:_G -/ߛj:+Aܽk:6[sw}HtZ7eF\&?}-f.3s>Hffg{`ԓٙ 3ͱa>˯ 4b&Nc&`ucY^ݝnU<`V9Ckcf>+qjOW61m?c}Z۽JijHoRk8GE˲Ɲ0T_7,o܉S`^|eo7e.40jŲW;a(`d>W≇7xnߕyG0\eEVokN\'_|oa9+P`oXxXIod,=m=qrr‡IN̻ݻd85u._Ͷ^c'M muM2|C-csUs+XfXXki E=hݷx6?fҤsܯߛͅ#XƝ0y,ƀ"/=7c+,]1&ް5rV2[xAS#'DW4 c9fݸ{VWXwNGtҾ^w. [SL鷶8=gW,]t7aа&L/EsG砢z8;V.[[>ꤓ1ؚ{_j{#q\;/)%B[H?r0` `#V,ꎖS/SNn(0{n*ف(oFbKU 6ۂN;GF+unSYawi 6+s\CWE(ݟQ3< 8mu^5iD`vCe@!m9hv%pPmk8 {x%+}d`6k @|ΊSafj`$/rMrWVmhsKjr$@h|Iu^䚊ˀf3>[Qx`0@`uQaEι.™V9=78*b1/KSbu]*c[+L2y+Vzޗ NokSG[qD{Ev&s66j=dn֖ǀ4r'0h:StSZ{,z)rEvNs7CV99M# 3@=P[Nj䀦=)9̠Z[A=Q}OEDDdg .^&VP Σ+KӀ.UDDѪ)"}\/0ןx8m1،X) z?xYiH_4FcKvbl$3'_v^~zB@A\9hzh,%\/wfo46/mkTW74YyǢ'O>W>Q~GoXg?~mTPՊ]#aU][?hР  {W ~Æ V{W+v=6oCEo1M1KGZ5=m{sȠ1՛6\zF &Ԑ~|~Cﭪta7ohРA[;xV>y]_]sʰar6n~ŷ7lxA_#Y~˺'ްaC}~ewm5t̛jiVٓm `h|j&;dJG#W^K/U9p`ݪѣ}֬ jV,_e&aEW9j'bۻy@iɭNfb٢ sNeƑM߿tuQܿdݿbQ;wo3k%!<"3v3kJ7Й5Mg,?Y/`,]8)*SkA~9xO+gٗZiDU1ɇ~ooO{ѣN .";)D:l@û"c9l}֗?ڔEvQ?C>PJV|m%sa5Q:V13!hKhixixy9KkqEDDDDP @A\DDDD$qEDDDDP @A\DDDD$qEDDDDP @A\DDDD$qEDDDDP @A\DDDD$qEDDDDP @A\DDDD$qEDDDDP @A\DDDD$qEDDDDP @A\DDDD$qEDDDDP @A\DDDD$qEDDDDP @A\DDDD$qEzȒC"KƄCDDD#(tmܽ!xjwSr)`2Si;mrDDDdg .}_L=nW,L=~~;mS_nx9=HϢ . 
%K:ȒYy?Y<[1KE0#KzR~%CM%z,{г+ɖQS.K%CogHq{r_R>uȒud6XzLi>^-.|S0;8x_DDzs5fXd >O03ɸids?zL[kRǒMN6'OS+Y2>!dKN ,o \CӘٙ yoO^|'dmoOV?⃧Ak[kcsl3t!ۓ#mQ.z<7XYrj]]8J63VIKSqZ`I"KzN5fNQ B=lu6t" ] ?&:r[dIb?$I/D,쎱Rϑ-id#K>]ZRDDz }hHI=)0%dD7dSd~YWwURפ![{d2K ~bnvQC"Ҝt "KjZk3.ݣ }%kAdg|:$.ko.0lM#K,5pi|G_3G;A{Mii>OHq l5noh<muH=~$%bz\z_d|$3V%""ۣ ."ݮ2ȿGG|&G$tq dȒww!xu)9e%w\FA\D(}2%oq`sb\:F"K*z쮖z<!%H "L*#u%ݵ㇞jMuz|.pq:+oH= | YraDD- "TȖ5Mdv}%p'7ky8X <Yu]%6/D,E$yU/j|qts-o<Yi~YCWH=~ 8l՘;"Kv\H ."Tbld4վ5|ͩH=r "r?VFrJ=~ 9 ZH'ewe=VWe{Cl]oUEzqv4W/Q9.d p,ؾ@ঢ়f Qgf5;OQG9ץܮƥZȼaGqqTE- y`w(Be[2ٚF|!T-e_GC#"WDp!& )v%Ek&ǷGLGz|IZ!Y1YK! TiG ٫ʽvz6)HP|p%rs<_.I=~4Ȓ=sS!k %S5tqr;=!;K7_QW3Q>e' ."=­8վ&͗ثΛns.z3P@6o<}Ȓ+,ydN@&g*mR'[{`adMSa#5%5il,+j ]mzBW"=8y|s=vh(m1쫥z0%[_UA݇%Oii>~7G-קg mFA\Dz~y| }?-R7/,i<':@iz9;Ú""ٷ,9v\dnী XʮZ<8xU`qx}gRׄED/P@?xNqS? ffDrJ;αvD@|*p;=f/~;\Y.DDdgUSD$;#Ku1/^q%/wqڃ#JLn,y:">k(%E$})yM<˃8@"K.n,9pO}=QG$dLۓC7>.t-sݕ_îC{M ."9(K<ՒEggSH)؞9@ǯFl~@6\ YðCZ׼qt ]FO!?lӞ<0$b@j;{%.DDdg="Dܵz;2R/"-۷iR,WT9Sۓ#;'ʾ{9uΟg7Syg?^@?X_ ."A, VvXϼk持+ -ۀP&sx</7|'9:AT5V᭓ *0 @A\D R8,î*j~}ݷW Ȓ%{.`N>gb&:[ͱj#g}oO,lR^#."RFq`/o;Hԡ TZ?i[iɉى(BʣG; ljnP >cMˀ>/P-"=NlBі6HCJMT/AuRyD> ">ji~ud?bGC!o ."҅n #Wu&F #s8[ I)t|6g`Gg[4dlH9jbW UXc%Y2iC+cd¿@2Ļ0H ."Nuw7Ss<6/r' #~j.ͮ;''P`Udc%gE "aRӁ%vv<NA\DY[=acC׳ 1+QdZ߸?Hu"";#."NU`"X ρoݴWؖ6䀊L|]3~(6kkP#WqK[lI=,< #;q49H_ ."~yvCDE(VM(8^l[PR\[n{\Zl3r[n7ojo,L=~%d"ٞ4qv2Vgivm7U%ӀU-Fm( 7~??t'.>"t)K{qt?{k IsEDz"uY/g-Q遊; U3Bі 5g*#."n~I#~[=^[]%co2ȮSDPig>@> H ,z!\Y"5fΈyfg~һs_MU͟6QSO .""ەzY2l%=ɦ56XD%g9|mNcwhێ<=,IA\DDTZIA)*EEzFßgL9>G=vG2:XSDD%I@?HK*p+/ElMA\D,9>dL{ۧ\T9 YrJgGOhG|'CHv0#H=ػ  : ""#ƓySDDvY2i,>DDz%qmpk߃0>JA\Do[= y1: connectionstyle = "arc3,rad=0.7" else: connectionstyle = "arc3,rad=-0.7" annotate(ax=ax, coord1=(x1, y1), coord2=(x2, y2), connectionstyle=connectionstyle, color=color[r], alpha=alpha[r]) # draw boxes for name, pos in rect_pos.items(): htotal = all_h[name] obj = objs[name] allrelationship = (list(getattr(obj, '_child_containers', [])) + list(getattr(obj, '_multi_parent_containers', []))) rect = Rectangle(pos, rect_width, htotal, facecolor='w', edgecolor='k', linewidth=2.) ax.add_patch(rect) # title green pos2 = pos[0], pos[1] + htotal - line_heigth * 1.5 rect = Rectangle(pos2, rect_width, line_heigth * 1.5, facecolor='g', edgecolor='k', alpha=.5, linewidth=2.) 
        ax.add_patch(rect)

        # single relationship
        relationship = getattr(obj, '_single_child_objects', [])
        pos2 = pos[1] + htotal - line_heigth * (1.5 + len(relationship))
        rect_height = len(relationship) * line_heigth
        rect = Rectangle((pos[0], pos2), rect_width, rect_height,
                         facecolor='c', edgecolor='k', alpha=.5)
        ax.add_patch(rect)

        # multi relationship
        relationship = (list(getattr(obj, '_multi_child_objects', []))
                        + list(getattr(obj, '_multi_parent_containers', [])))
        pos2 = (pos[1] + htotal - line_heigth * (1.5 + len(relationship))
                - rect_height)
        rect_height = len(relationship) * line_heigth
        rect = Rectangle((pos[0], pos2), rect_width, rect_height,
                         facecolor='m', edgecolor='k', alpha=.5)
        ax.add_patch(rect)

        # necessary attr
        pos2 = (pos[1] + htotal
                - line_heigth * (1.5 + len(allrelationship) + len(obj._necessary_attrs)))
        rect = Rectangle((pos[0], pos2), rect_width,
                         line_heigth * len(obj._necessary_attrs),
                         facecolor='r', edgecolor='k', alpha=.5)
        ax.add_patch(rect)

        # name
        if hasattr(obj, '_quantity_attr'):
            post = '* '
        else:
            post = ''
        ax.text(pos[0] + rect_width / 2., pos[1] + htotal - line_heigth * 1.5 / 2.,
                name + post,
                horizontalalignment='center', verticalalignment='center',
                fontsize=fontsize + 2,
                fontproperties=FontProperties(weight='bold'),
                )

        # relationship
        for i, relat in enumerate(allrelationship):
            ax.text(pos[0] + left_text_shift,
                    pos[1] + htotal - line_heigth * (i + 2),
                    relat + ': list',
                    horizontalalignment='left', verticalalignment='center',
                    fontsize=fontsize,
                    )

        # attributes
        for i, attr in enumerate(obj._all_attrs):
            attrname, attrtype = attr[0], attr[1]
            t1 = attrname
            if (hasattr(obj, '_quantity_attr')
                    and obj._quantity_attr == attrname):
                t1 = attrname + '(object itself)'
            else:
                t1 = attrname
            if attrtype == pq.Quantity:
                if attr[2] == 0:
                    t2 = 'Quantity scalar'
                else:
                    t2 = 'Quantity %dD' % attr[2]
            elif attrtype == np.ndarray:
                t2 = "np.ndarray %dD dt='%s'" % (attr[2], attr[3].kind)
            elif attrtype == datetime:
                t2 = 'datetime'
            else:
                t2 = attrtype.__name__
            t = t1 + ' : ' + t2
            ax.text(pos[0] + left_text_shift,
                    pos[1] + htotal - line_heigth * (i + len(allrelationship) + 2),
                    t,
                    horizontalalignment='left', verticalalignment='center',
                    fontsize=fontsize,
                    )

    xlim, ylim = figsize
    ax.set_xlim(0, xlim)
    ax.set_ylim(0, ylim)
    ax.set_xticks([])
    ax.set_yticks([])
    fig.savefig(filename, dpi=dpi)


def generate_diagram_simple():
    figsize = (18, 12)
    rw = rect_width = 3.
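    # Each rect_pos entry below places one Neo class box on the canvas: the key
    # is the class name handed to the object construction earlier in the file,
    # and the value is the (x, y) position of the box's lower-left corner;
    # columns are spaced horizontally by rect_width * blank_fact. A reduced
    # diagram could presumably be rendered the same way, for example
    # (hypothetical output name, not part of the package):
    #     generate_diagram('reduced_diagram.png',
    #                      {'Block': (0.5, 4), 'Segment': (4.1, 0.5)},
    #                      rect_width, figsize)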
    bf = blank_fact = 1.2
    rect_pos = {'Block': (.5 + rw * bf * 0, 4),
                'Segment': (.5 + rw * bf * 1, .5),
                'Event': (.5 + rw * bf * 4, 3.0),
                'Epoch': (.5 + rw * bf * 4, 1.0),
                'ChannelIndex': (.5 + rw * bf * 1, 7.5),
                'Unit': (.5 + rw * bf * 2., 9.9),
                'SpikeTrain': (.5 + rw * bf * 3, 7.5),
                'IrregularlySampledSignal': (.5 + rw * bf * 3, 0.5),
                'AnalogSignal': (.5 + rw * bf * 3, 4.9),
                }
    generate_diagram('simple_generated_diagram.svg', rect_pos, rect_width, figsize)
    generate_diagram('simple_generated_diagram.png', rect_pos, rect_width, figsize)


if __name__ == '__main__':
    generate_diagram_simple()
    pyplot.show()

neo-0.7.2/doc/source/images/multi_segment_diagram.png
[binary PNG data omitted (Inkscape-exported figure)]

neo-0.7.2/doc/source/images/multi_segment_diagram_spiketrain.png
[binary PNG data omitted (Inkscape-exported figure)]
092`8BRwJJ>!馂efffffVך r`q%M|9,[]k&ZQcWΕ`Ab G'%]U9VN8#=w6kUQ=u$iĜۄǓf2>.I:r33333>X KzC+飢dI_I3)?PI914ue#b=I_gffffff5'F26"63E; p_D,"taO*III:J48DD :^F)ID@ĶHcI[uE^ USo+R8`RunH-{ѿ l_ #bo7.F|*"Bҧʊ:O<F 7M s<$i,M#XqVpo3lz|_Sj녜2'fffffV%ȓևIv]{ G^Db-*45ڒI#jRٱt('fffffVF Czwd]]q ]W][JD$0딷u3U:m-"v&%nx̬34L#b: #8(c:+W47k+NyϤv ׵2:NťFaMI=OK#b,0^%~Ʋ9i$s<0X.fl }'<<&}&/wDZr8gF^ xf*"!}U}}\6=sj>i'chcX rDL"~ȑ]|lVWW甭[/dt}+(lP?2et+y<=ԟ̭6_[W=Uҵ9iDl%ն#]xKeI"$g} $')~lx1A[nUID8<;dC"b9ỲH1y∸S\ A "r40*࣒[=a42,sWhԂ|(ނy̡gq=mFۓ/m*KD \HZ7Pk#꿈JXֽٗ 1=BGG8rGV9:AΩ=X.#h4 #u~{_^Adfw6Hn؂4ˁc}5i gȟ>&?"m Kk7= m֮"b2'R㤿GB\?ڟ.o>24ٹ' 弟դyzi_7=Sq\D'e; "hFOk(1#KK' =klnY7H_Z%imTrk^h)9~8(bD ,;9<Lޛ4ׁ >_u? KZ"?fV#"qWoVKmDl  u<J哳edoK<DW& 5"^1rH&w'7cD\Au)p!d*XV3`T.AtuA9*݆)onX3ƳH׻][r|Ô)`*OcFĈO?;75Y$-t% 4>)"d`fmDΑt2i Y]f,Y+h4:xrQWY9@m[ c0} >M2 #IW!k=O XyV "bK`#=4[Fu+NZ@?#mMIڢǖŤ cGja^_R?&">Nj,|ݠ9~#pB:HIWY/Iu]t󕴠;:>8e@].]_KH}l^ySof5>0bq0IZ2@m[:[zݫ݊0Dedz3EN~DvR;{vsl`}Pͺ-&4᳼}v(iD|8X;yEƲf-ց gT-+9Z7lir~?k+"b{z|URaݛ%=\L~MrW-HZc芈mzq#brDL)@eɻѕWq{Dv)g&&+LDsn|U:9J}2/1x&Ws8/iXwb.rbEvKfFlG{su#}Vכqq|[a?\+ro/m A87h!%#f@SIٝn^;c>Ul xf#1D}HV& cজo3У65I ykE cU{5kCSS#b!osv 8>ݏ#MU>?^4zzƚjO.00AκS\SfͺEBAr=I=z:p^1|o0q  _ndCFe?|qmO}YvJW$=5ؘlx5A/=٧U+ Qu$_F?DmLB2|c֑Aqk6t /CiO.iu  #GW4ޥ-֮ ̑4T̬8GKmA%Ȼ}-=5~(p offffzxͷkmMy;_KL=δ~+Km϶*fffffC^lym5ݮv-1+&wu^\'O(m63333: G$SDԗiϮrʝ$-=x#pp 84^J{vZw b*l,YGX=~kdx}C?AN/&6l %#?tξ:h&_hX- |mx[*_oȶ|t1^J E{J}#}'}5ڙ;8;++m~AZ}ֶ߫}IK 6SaƽHkoyJ`xxBm"'ȭcvO+vEJ~O#y}`[zfNWvlfffff}kogCᑻ!%3c{!K z@g[#avZ2MS>T֝ޞtVd݈_|&]D,*׮fffffV{u+񹤱TPPkvHciFز^DlE:Pkbefi2= 5jw̌ <\ekFn, 1PJ7VkDZ5RjiZNY7Dq ԣH3%ɿcfffffmQ1&J0r$VHz`X51f&RLrfIZSގnAu23333(A I%-&9;T*6fa$hD^SޛY5JL{mX8wWGľ֨: i$ BOv"b+zgVN lև7ݓ̬C5Z)M\On.bܽc:<V;y=`%kE^rʭy)/*A~V$kCZ ࡂdӪ?=Fz+KP]M7s=7t/io.;&He}j-ʛt Vi2ZI¢9pMbrٷV WҜ_BEH`*'śPB&=A D_Ė'Iӓ$Pl X#sg2{KDB w6F;A.^nE@K ׀xS4,mu:N̬ G~{ue_8TjPN&6sll@ I ZX>_03333֨y_v~zCGҷ]0Ēx/i6=Y*" |A5e Y[k gAx?TK+摤%OfӀaa6$ތSJ̬FwxP_/x$Nv4Nv&3{*)߱(hԂ|UfNE7_IߪS.ʈ8/%W@7kN̬-4J NJ[[Uq^SyQLҝ߽THZF'fffffV G;_o퉸?".#u]WIkӢIȵ]wjA,Qlsv+cq?MM1ZVdfffffVQ 1~q,]|Ve@$Ȓ^'XBpk<MRؚ}cfffffc1&w#`:0i`ӰrEZUC{%xj"Y;A!"$u"c !%+VŖ:;ZlymQ(y4qU1%fw.✪o`!5rւofffffCPuC&&%i;1dx꛳I-㓁#mnuJ,Y[H%i;W`e:X]_^!"*$= @bVcfffffVmu#bkz L*zFR^v0J>"n'(4&]ړ$lA#%lu\ߒ?[IWJlꍈx-OͦU-iqKD{ XS#mnD#)NeQ}1.S~rr33333Rtlf`LKA$ D,;vC#b7M^Qv }R҂FREңߵʎ̬b9l <D,@:O=~ HcoNkKb"bB8ƒzIFafffffVeuScaƽHwHZTޮ9a6"btӞFD2"v)EDLof <fmfffffmy[FE x9/,Z}6#ipINd%3."%֝~?0xx}֥d3ߕSu@DlYvg#Jҳe~sd>VzDDҼ88+sKI+s̚*w9#Ix_fbwͦn/,U+ if/c$)">C~W"U|*ikʷ%#23333Sw_Iߌ_dSo_ocR?"]!IńZIGMA5UOU|D ONg"'C`]g33333zIlIg{%i}7"Mtˇbr\Su) [ɱY,'IIKM",R+59UϖM瑴|Sʌ:Yӧ딿%"v(5`F3[u'U$ .j8VZo+å7'k P;Z@wD*ρ%}IVcfffff _ Iw*Sқ$u(![҂Vafffff2OLI:333333+iuffffffVw6333333  plffffffx.3+ADL$M|- 0x W&KqteQ~r5xEm9Y^6Ji~/glCty?űqVtyUMPBIHh!^Bޥ  "XPQeYP@+H @(?*HJRa'3;v<}{ΜN`uaf66+>>&Y:d{3EҪ?Kt=BAH:8T#e r,|9#I{٣v(i-`s@yo5Km3z.|̕t]_txbrǂBtO$]ޟI3;n4*NN&Ioz:>=v_WcR;p)>iK3OKm3蚄AՐtogWvz\,*yyDqnv&wK̞*aF`0)p]l~mAFu08_a5: | X: 9OSnW)u@`u\~ئQ3`hsNuG3<` n!T2Ϥ?.<i'f%m܁5Ocx"p08.ʴ>"U`m_buV-HJdf+)V#YfAV (3;]yVgwW'z$WSzS $0|)}|E`S/0mW2ewx X,-nf ̦)Wmf{#oP[$ 촺 Sa::h#` p]_p3N_ $⢘kwu_:ܴaaߒO90)Ǖ67Q޵ĥi˓&sp W3efUSLtXugfSSӸ&7Z4$Իx͠iz=nR8mAw`" p$@3+rIJ@\ݜP"9#G~F*HޗtS$_#35헭@Q=3+^t IX>o[YqAIn n[%&$| vyp5ni3+4tBAÀusm \"]3{ =S,G<xAhI#,2i?Ì%!7=AX;?^K7[)~&J<H hdRʲ[IxVsG(@[ 4{~ pB n1U I;[D^/956,;i^J"jh;u2e3DLffmAwfRxx4AҒx%ԯ*PT3cfivN#l<̒kuʍr淁I:2*„\I{JzJRs. j,KPalofgzvis|EZLN~+P'w蹒8f]cf_Vį!R.Iv ϧz+ȇtWR4̬O'>QB&{̬m)}ۥx=ͬXn栛 rAWh3nIMDAog٧'>fi6 ˬ"J9^ qCͬM ~O2.G w.ټJ7S5X/iN@]?5;fе9{lI3(>"wdDRVYL˯O,NHZ7Ś(Tnj տ'E IZ"`;a}Ҋm{UYÀ%oP?X8A A1k9?8Ʋ~2tMH3?<|!wSK,m`rI'R o_x/"\K707%$ cfOWCA. 
ffS~=ߧz*ɻav׽$Nǚ٘ދqJ4Ŏ}J`7jBq8CW%7AVN?WKI{fvwιC?C=4/hM3{"hXBA"l7(ʡZw$-|-~R)|4-xǝ Z=3mlfsw5H<Ƣ g̽65i @G_u@RFʪpH;)*žuYϣT螰u/fbA2A#yھ6pS^xOoCUQǣgS: ^|) /ȷ\'R]Ih'ב!8p7@7ryg.}nĮWs_Ny-̮nBcܯfY5v I)YXbW\)o|EyxǬ>3I+HԬѼ\aENi|C'AAkz IIZOCLf2K i1 x\9'.(\c%}8A9 (I >Y0+ѸB4U *IJ*χQV@Tm--e <:#(%bcAu`a3|Af;h50`<_RcY%E<,^]R3 S\GcUS)TZG+UJE: Zgp߹1ǩcqs%N#HAb lD̬ۃ"T6 ӘX|S_l)-śHZ !Ox3{T3ƛqdڦUyxjflf4T<[R    CȟiGQ{\BaT7_TF`_[04w>`3ޫ,fUM&@n^   V-K*VnNfi?|}c@US%N-qA:>L^<л%XJ-14,*ڬzx:nK~˟PooK%V\ iq;OUjh?ߗh H|אo$~V cSY&CGOEgb>z]{8}I}]>ʡU1\{MZۿxƣl%EU"Eky KO8,f6tLE$N*!$-GklBШ eA9)*]a&YAAA!2 ?4MjF vW odjN_WAP<mJ/Z,JE!|NFLE]}xRAAAcbМuX\.u<3BՈ|'#DIw{Kɩv4o:2V YW(L~7*,G͑Ol0J0<'7rڡpd~*Jj*ԏg(zPkA`ZZ݃s\qϗ!sno;(N-ΐFWR=%}s{x}[>:L3A'rQqs%3[sI:Υ pc2Jh}X' {+oY}A=*ݧ`X d:nwoXoߪ,N0A|Pv?sݫeej?f2T#~#aƫWq pKovWEg}4SfLjQW~CUZI fv;pj:<&IQ@o|H.V籠B[ jf]{A+,G1&Ӫ$י$ ;f gbAA KT*f0pSxN䝁pS%ˀF#wVWӬo.'Im'xr#̈́fPn;H=AAA tڤR+?تbfWWIZ6xWHZZ9l'p? Sݏ$G1x_c%/wz=8[*jAAA'R r3{<[-T1ߌY~ \n~`VXKߥ:HYQ,vAA8z̷ )d?Z18Yb`>92}xɌ1E-p-^){=2H$`ci&AA= [Agff>T=)3QĺYBe90uPD>   (L %7z.i^PUW   QUYҀ6p\j' rAA KQYo%ц%m< (G*R3Ϭ=P   hXJYA^)I.QI!=E|gJ3) wAAA)l!2Ixjtp1f̘PG AAAа KF-iU4i4p_ ۢVPG(AAA4,(ȳ0wHIZ);Cu< X`N?D 3QH^R E%o9p{%4zAPHXo9*~AP+$-.oeXҁl@)pS[p޼7\`7ˮd qw0p ׏ VS)"i7$`}<</ט[|Ϳkf{(@wv&[T@`Io=IC{+}$?ݜܻ*كEdl l5g0M6Jg}m7I~ , ~'v1̾^De `ߛgTNؾg ߁~ PTA6%3r _Aˊ+cXlx3ʲK(u>MR9E NP$]0ԛJY1pF|bnA~Q)fvvAZ^wc[\bۥ>tq޲.i33ց~|_S:/ڜҁ6.@yF7\ 3?O>7_f4r כyέTs)#io\9 gfk^f?L0L;)/\9jfp\`Y_xKzfm&? .`Am W|W4l3 ̦n+A_+P]ݝ| Xٖ̆f5~IIt ;ƕf}v1Up3;y_'Oy)fJfT6'_Õ7 Kil2s~ |0|'1[ IDATVO)y 9V"XWfbl2n\U$-5yY?gג6'I.0ZR_3۹O݂πoJuy{ڪ] \9N7WsΓ >rg `?3;k 9yWy4G +Ǹ%9hPJV %m3;(.k2C _XeN>E痭=|+}j-HPsVNz| =kQ ?!"AUSCm7'sQW)24؞O9bt/oUiq7Tt?^#N)%j{Ogn.v;`'EIV_|]k,JO% afߐTk}=,?Jl~50.=I#xԀqiT#WFi? (ÁƧ?f5'+Zxq+1ن=W] SY)]~xAP [%]&iKvBRΛ% MW$N3xK*%d8⃼"l YoYƿfy3^u=LzdLܴOz$"XWuW|LPtsG>~n <+ !vID:*itM:rs-BvGte$Ik(ߜVGj'YHF2HI+xϗpL HHa&ӹs83TE96Ө- hf(} Id-il7Ҭ3W_gi$t56Jhi I0v̮x$ l&Orέ(iy3{lN!UNƒ+@d(}LO]\z3$>L' WXT?& t'-ZCͬRA|8>y{x(> W"b"Aj,8!6Aԕ̀Iz xgq[^e9.LT,{ z眿xOx֍ xjLtJɃ=p(\*Oh>Cӕb<ȳrTi M (wQl51,4[j:_ JM.p}{̞L.+fvg{$-G('}fLKo=4o甿 \OBAd| LNr?kDqnpC5;i6G>̞~俞3,sgoe;~6-xܔ岤 RK Ml3^ E~y__7z!%l$ ׂB~{p F{1_ix_S,]`e_/nԧ{47?vᶽyԩZkX X t]{p Zl4鐯bx. ` c`JZ L_w m4+ZW.fv abJȰ>[m @{q_ԛo Tr7y#hEKq`=\/'&Hbfg g٘Z}$K@{fA iۀRZU`\YlfڵEkVa3S |6:p*y (DaRfvc2^'5Nw4r#Ig5kg| Ii+l%++y} ꓞ>lSgV&Mp{A]n;sU n@/Xo w/C/g޳ `&SvD{}kgf'G-[Oy+禗sJhzT[l ycƉ8pŬkvew'9͌_ 8ӳ*kpM<qe |̬7:S|XgS0 IhA9C [eHSêH^ RS)z.8 8~A>/Vrԃ>O%.صu-  R係ZxcM@z^` " 켪H]9!Jx#+h3{_1;`(Z /{-1ɴW[  OQ;񯉁KþGaCY0 \r|$r͆4_jXl2nf&V<=|Ƚȟ.޼NY cFm%(G5JwAA4yhF{2`=gB`ag@&`.j0Au49?py' EhfWٖHR+Ю;`zJAAFI rRtt >.\@BfYŤ.pe!$M&y+;JZܬ(Hý!@CҾ\]8YAAAJ]Antf <xͬ!:)J2]Qι~xrx3 W3>Qw&}`_m`";$ƃd˔vLJuf( ON.P^ 6>bc*?4OY{5;y*Ep;{wMڿT}XVvpfΰy=σʑ;F*ρi}rsru%?*}7mLJ5YmyjtHAH)H[wam޸앵1 2"U y/<]+>o_d+kFi?.E߮>h{Mi"ufԙ~] mRX EbM#2oa]łLl.%]rPv'P<m2xLNK,8|ru%l6Ulf۶ -[;P% v*/PCl*`1Ou4:W's&0L+Z)s~4x_Y6Jlcp oYxl15(2b*~   5bd"}2(9XZ<lĬ U6qjL-D 2u = nl9` )(~:/aY?@;K?H[!/lAA= V%Ú#99sU -i%t]%<⫦Gwʍlݎ,$5vy44:3Sl>*cmOV+2`*ww+IAA9d+SGߢM5܄s&LZY(s]Gg~U:4ZN<fKJN2p'Z ۉL`^a{puã행}G96  g󅝴M0n],h006]l_3h/3*XH+)Z_klZ1Q ڿRAAA#U ;"K4۬9}dfM%,=6m$BA/&Yq|AAAT K.c|2kf}Y_4}$0= 37OpI#j-OAA=A<3?1fRBCXu=xxG:-i 0%S4ؚ@KAł3<_cY #iU``U<d<Cf~8cߟ&YJc1<]ږgcӕr 5~ sA}߯ByxnrRof%c쯽ÃD1j;I#/> uj/NP $ GWZ3;T+EKZ  II{ٴ2nZ?@D,$iG64 "'p "2 K ;wB}O1gx HZw<ꛋI:~ lDycY!F)nt-pg:ؙƦ]9$S'`ߣ`G7K6f%`Kc1[~w`HӬj~*pȟ* fYl֕QNg0}Iܝa%`=dONs6dbBHZ48} >=`AI^Ry 4%?/MkkIlǧ/Mk %#+3{BЍ ~w?Yp\!>FZ=Xw˗$c6kSm˸eH'R>W>$- \N\=3kwVFЀvdx~!+pԩz[dX}ƍ-[y'&,u7;(Nf̈́)`r@:Y|y ?.,~2 _e?J0 :(.(8h@2fOϙk*ٸj L=[E_h]e71{? 
U0-Mbήb?AP2f\+PM!vʞ}e'3*M~vUnns`'`NmI6SZ_3Y'96M*So\-$-afprn ݙ=a$4#9x3IqQf;:R^LNV\ASJd3 8ysoq/?0[.2%Z/E y _IWKFyZ\6H `kj:\mRS3\A#x2Ot%*MI'ί8AP-J$mVOa"ojnNǣ̊B݈ 湐lfz2v2ou\4"_S)z.wS&H$>dɔ-N:>ofW)[$~Z {%&i:MU?1&=<%MIWK(^&5W`z5I-R:8: )hYE<].t3,k< Ε{*R&vd`H.S Qg#X W9Kw屒v#`93]oyw Vc} @E)c=8!d$ݦf?hAS2|xJlIՓS,_:(9~8Ag0;;$ ӞlX*ז4+.E۩)QbNtd}ے"o|5WKҐ'w n]\'iY|lWcrQ ifS:/ G&ˠQ,iGb}>} i0FVA/P@ٶi(v 77.Jnnt25oDlo CuN 虤 GvMx`3]oa{~ %lfBPt )G{Q -d<_*/a( KZ~`abk0}Hl IDATY;-t:7xI|l,0uki /6=gmlf2O3)pᅩ/v%cf/Hz 7fIͬT3J47_ٻ&ل܍03 /\Ů^<)ix;[R1>5o.}pFA'3 O* 򋴦eu|,O ݁4c;b3+u1)UvV JqYڜÌWUR/#4ghڥ+p m^4XsYH|bfycۤ|+Mt#HZ妁Ja)`MLϊ!E).-I `[ڿnfBYkefvt;8RAuIiO'FS@MsU<`3+\ҪLlؾ?33['ZR_5Xq5C-f3mflvF2 Uڐzw/lLt$-%RHJ} G+< I%-.W? 5e t$-$g_i. vh}9\+ yD%mroTK_r̖2:>Z]vWlzCtF/41k*E{TNܬójo_g>#;mH8jc?~̫3ʢV3Ae|+_('`sd6x  TAwg8pBڐ43z8xc ԻkMA^X=Ĥ_ae`KݓߛDI7lb6JӮ]s65.:in&nlIdjn𓦦_[ڋT;3Y2`*8p-ԿqBAs>c̬$fp>5˼Nݑqxw\9]n5O<1r_cEF/D}as ^ƣuA.t+i$~'n.`)jgHٜE0A;~ZZb`6 8Sҩ=II\_u%)W188_/mU<_/#pe /OTf- \+jnfS6ͱ+!h=/4\b?~xci+)T3?3;Ͽp%9 ojJW0qC'ex-#1St`WWm# |ZΤUmK̬h Ŗ-Hx J (H5䣵J,<zWfr]+DSԥB754Zܬ=#p?[_Z]\Uo9 ?i`ъA44fvAJQwg{c Y^̝M\ WIYIm ~JKMG67ka<ܦ&,l M_ij:O#0/f-ʂ67kc+7WsS+v-   I ϟ'.8ʢ44Rx`o57dEStf| "S̪WH d;˧v+VF,@,7k%Y6 螬ޙMQY֦5tX ̃*Tقb>8fmV!Tw;N!R Z HJDR, ` JH&!@BI@BIHN&wN=k^s{>sٜ6JRZd6o3{FܶŪLϒ `#wfAE=G6q)-"j+rIE]B# YqBRPʫ5^i#*x^YnnBztx W;"]IJ}XG8i5)vm@kŻ)ZUN usޢWq5AR `O6,i̬ԕ]k(oy v͡M&=oB''vKm|z"ag8g"adǦDƦx3zA"aS֭|=; y}1ǿZq~АHص-9qfbt^I ~cfq4Ilj\%I|$ªSb9ګ|WcdIڐ0u~B"B 3.m۪eگ(aW6L&uk"jй}#L_/I:E5 yɤΧjdv9s-o KJ#M3-f>f>u&4*:E?wy4σ]=k4ǘٽG[ѵLp)GlYkvHآdR%:9C]ϗGGK$*;|>dR-CI HX ;s9 ރ%@KFR'3k̿e~Be6z5_A)_nS$Қ։ݛL ()ɤ&FlB0[a;I-%^g:s9Jf!֛l~_|ٱg72<@(u=-qWZ7GQ>CҁfVDŽN&7B Yq B5*I%\ tF<9sεW˓tIQ"^Cu2J8)e>sfYI_lQE5oE5+/\3wkcK&ՃH@x)-98%˺wKjw ߟ2%:.}p.ʊNv.!s 'a:;~I_r">"jXQ]t:V<`?.{ml&0M`:ٲ_*\X:wIzfV0v̖IKu:@gOB &Mb6HJXjf/~/ccy:KEh4@ۊkO!4_&f%!i+=^f6;mQ\?p@u40ZҰ+5P%\/(1&ICl|ƪI4s}($Iou7¬鳁syڽٚFuu+#l!+{2mW@\iHD{M\&'Gʹ &2\{6439_ιE%<׈fly8NXt9:ѹ@ށn',l7 Ifl?,]z\V`f IHB UJh[cqafK|!\ŭ2WYLp"-I}$ ;⥆&%n$u4XRj\|ETjJD]I@FQR`FlIVTLg=gfDt/(Bz#Bv,zbyL(/P⪮:#f]AQ+D)نI/tJ=ު(ymէW2v, ދ\~AHӍ0g`$)bEȸ ߣR=[-m / m+!+'rK~\df>4NҵzY*a(ρ1\-:!'p*0) IE2gE<!W_C=/w Ն d3[4{+WS|VGY]⅁+H;5?ro )3Dh?,"?`|TR=7Ѱ82- :fcf!8$WTεQfv gEH39s`mnfF [&TpهbI8'H:!'K`1Si\B(=kf&zID/qڐC)bl-)ndIG1t{=Qɺ YvƕG,}G3efvDM0 3peV._F%e+FƶyٸhzF&3g •u\3{~jfn {[O͋Kdfz.-JOv3!\\ޝ[;J'moQ*3lIx2lQƲ%fvp?*IՆB<,Y 0vI.A_Y# &v͉Y`+ceKs@\`fu3;u90p8SKtṰJ?$3n3 FRŰJ*21z 1f3 aDjtȹ!@te8Ayfv;oj70[5m8b?$n fY穬$5HY҉.tynՈ6Y6̖,wu<7h2g,$M3kԅ5rnKw;N[ ].J=y,#ݣ26̱' ?n\vyaTX6 րQ#Ͽ5̖av*k+QԌ,J.KAFK\$IJ**5:`-+q~  $OԢlݿW9Is];{̐衿'`Rt7*$I kfpvnچ@6=$Fl@k]{n4.sF3 [-Y&iK%##4&$>ڑQkQvz7D]tp{tE9sD\_b7?{3K/S`UY3 #TKũ~- CdҾ ꗤ)fVGߨX`- XBũJS47FM%|;WϿ#F{2w~=,g{4qgfvwB;'=Z&0/.-/6vW; =jf3S$|A| fN[ql#1̲eWq||5>&$%>JAsdgY v5p{ 2KN!J+aBcII2n&.y5,9Gk0p.nE(ҝ{yY^si_৒FFu'F[0h,|(1~`ڢ4TWl*#\>?dBY`hs9ћ$xPެpg6sr!i ¨nte~؁1ajG~u!ߟ@xJGUp9%{ "fل*|(۰S3{VU@_BY^CcH60޸]x!Km4Y#G-|^yN*}Ѻ>EG\b-|Bok%rDӞ)j5BtŋG"t'LUړh:pM(c=BryqZ ݙ[߶v dI=~+F㆙*IU s ↭x@0캾IcV7dm1ք>؆pEpZm^ېR$6RNòP3;(ro‡dWl2r٭JBWT󉫃l6!@gyf4,6@5pqk(n-=VoAͲ,7r?HqW` IDATIsV:ӒϕT9|TOV/0z/By;2WD꧉ˢ`[}a&{Ed~׉zމnεyf6QSgЊhdg0%V~k j K: $5ׁU:fKfUeyf?RVRj+EidA{)M}e92bHJHws?s8=ڱڐ,iХG ބ߁w_5`Jg5eɬƧM3edP]cX)L4,Gq*ƃ1:?cZkj) !Q=]I ϵ]I%ޟAH OX.kwVL2VSZ&.1YOt-ǘVcHCVl]z3^48ëk$$6hfvo4I=/3M.E"K'|0bg5bBmM7D=De~DrDv l6 $N+~ Qj?$+m U~Rcu'wwQUڡӖ-6RYy9jVsl}Ʊ1V+1J, ;smoR獣cWcRb 6:nhR%w}#Z-9q)c2$r6l$>c45A!mFTl<.UD3]7E?cf7T1R8ӵpQb ! 
={\?[z:/œHz,c .Gr9sE%V_ fIR3Sت .Mw-4t-yfy=+ڄ0lUތwhJgwV:80]Lük{؟s9sU9f0k:ҡuنWcIW<"3kz ͘UmpfZ8 ;oR~zCs9PȜ=u?E@0+d7̬YG bV}eJ.yff<%/g5J#IY>HD+5g#o%s9s@{6!56ځS2sFF=`Mb"liy/1˫Pt!j3Ť6ŪhX7+թFXcݗ @v9s.]aY9;bzr9*-aҸEZY5PM [<XSҝ9WJ@R1~DܦڮP,9sեKŝ9ڐBɴf]_b:P dP^1˟0b{g^taǪ'V;œٴ.;~)Qj 16MUor QڲHc]Mńe%Ѳg˅%{% >ZcAtzGaxM*4:2ÅςU8nctrvP-kIs2ޝ}CM*|&гcBCm6J ق_)W eNbI&ܟ\ dљP"W/6!xXx(574OH- eTcqpxw[ lou :Nk~YcVooRg㍲ÁcU8hz5ܵkkz@6+H 2CQZ\Jq ńRGz/B9=ybMOy#(Ixb3`Q_s9s52OMړ-^~Sc Q]oG,x2Hީ;X2gs9sSd@Ҫ@_ .DyikcؚYó+M@UPs(WR#an`K@h38.OF ^"Z/-Ŝ)\<'ʜs9眫55%tfЩ:bٟ4y<̚}}m@\}5>CF 1[~ |W m %/)D8sˀ+9sι6/oY^ڃO^`(G\7xhi/$ 6Y30ގG; hhs6ѹ͠9ڴuS֭Ҙ؞ n#BN,[>'dc>:3cI +BׁwR $}'/$[F$w%o#10jSwE6&JJxť0zm̬qIKOY21G3[1Zoj M7&b|J-1&S؋pέOU7p7z};D^6_&&vY҃{iuN-, ~?=Bo>X$/嬿\]F˺ĬDf-`&c~+֒|e"ӏ`т`rRtwUoǕ9f{(Ih|Ol-)s8?5O2 X-iP y.*p9<۹vf13-9GK̞7 }j-I?"$HYӜ IkFv=vf6qXp οۣ۷p Bi3 Ձ d LX[Dۚl|}xuyy'$ +eK7a"D,ה"@ځjÉw٬j\OcةjQj,. '=zͲbt%%z |#fՇf9̵G=LH:&6Y y2bN{gY/aZ\`%=-i.2[;7Y6˹u}4Knw$'f*f\)pIV3,s $>I:_ҿ%},yIE]RHzx|k< ofD@N].K"_w% l|xuul-Tp縨.9K?$;c{Sߎ:eI08}ow}q `f%L⨝kK؃sizr֝Ԧل5I{:!~I\]3 ѐ=G7mWfN֕N@Y¹x]_jIGK]@a %5g9ʓԕS".h{ܫwx(+5/E+%BЫ\35 ;C3ae_+l7 c@v.}DIgNNf6'̾/e/By3[LHt0Lބ=c*aѽ?E&gI PwW i XKR*v,y+xҁbfJ2fŒu~#L5Fq|s>Y;"ȋ_W!'S9]3%KR/יY=}~omFY8%n/6ՊFj输fLIh \4!{] }OjaεKȋ ^Z̊Ci>3mOH4UpB籤 ㍀8`&ГpƕlK8rM-wˀr&r/Lxr; i<]1bJ4HH5F]Q2hÑIڔ0ˀQ%߹'iH)ќT7+UH'`!b+'IJgE9u MہS |^<^aʃ-n} l(lrF'ffó$X3+4)+N\fa#qVTf{؞mwcxO3t3%v'cv&|f~!Yx3[5FxϽ>9?\MS3$Fy_ܐI_/if$3˼O—[R lہӁ%cfGO. \ \VW\E41!D}!óJET{`000" 7[bfUSM)73$}Do$#RI.7JR\@4'ɧJ`S`͘>̷$N(lBTe%UCޛp`kf<5炘}~%iwBK ekDنzC2ʥaƄ&`2fi!dXԚ@SI{GF6!G~^VhI;lLH5x>sF1پp(},Ksb.p5I#| (sU,qRWڭGcahc4ѫ7|NXcm1i,,[ fM,Y,-53h8W\Mͬ. 8#I& GtIՎZ[|ۻA`—ߘՎ$0|y-IgYܔ W$ $\JمU s<^4-j5``Tʵ1-/<_w41oN0%xaL'c_*W-?MŹփ2f+26))NJ'˯I*fROb8:s9b-;:Cb&C*[K`f̬I߁|I``D'[/ĊF>?+uvҤt!IRclaT؄YA5Z`=Kτd(P{X^&~E1P&Ϯ;s9&yHKY8]6B| ˳*M%A*D;@.GCmcB6߆x2Ēհ8 `LȬZky% :j^ /ڝs9[A!ff,^]{}C1NԦG򹽇ө1`pjݗ|lHjRCICXB:^ ،P!ZY9sι(kkŤq}q )~ i 3D>u ;.4/Hq &{]yɲM_H}Y\%um]wf6 C(Is9PH'Xm6Y7H3ELwq0Y05cf6/(:'FBWK΁]lISlg0@0)]^c;@fOJGڦݒ]5MIe4s9syR >Y~j%`4/Ʊ4yC.JY5]yNH5s>!)jW iQ]?f3AߴU#Ϧf./Zꏣf/IJ_ M{('%8VKJҖخ-JJFjZ')ݟZ5#)(+գRM[K%-ϮNgi7W;Zf%6=,) OJ+uKu++uB,)]]*ML߮l ǒH$5هWW:=N4,3!p6=̷Fdk r2~GƼп _" 9Ϭl2Z-z!G:ROނt1lnUa2D輴rIМlZVb_vmQ2նN+>JfQxLxJZR6l3|i:O;߫"-KoM5: B+H;T o]`xjaRx +dg ӇI6^83aVTš:e\R: Z,) 0Sqs9kJ01fǐ@uh@n8HRC|X@ڨC%^!dpa$ˤP9sε{-R#i[`<~a̾WʀK)F{y]Y֍{%f *g/(pyf^$np- 99s9 l K : 95˖ 2QTQ qmnIDAT , 05C~veJo\`礴#p p|R0Q:s9W6%fqo`h\ K%9J>|4ރ\{@.ZH _f[%ªX(}Nގۨnn d) s9sB_ )̙ ڒ<0Wɺl* M%mQx !i+IgIZ;~}Ѣ+\qe";s9*.V 'x~廴13}Ԃ "̓^^U'FJz@5n,08(fvK"-7s9s5#ܟk_>sRX߇of6 l"0*fVz] #~(vw+s9sZlfxqo賡?!sVtjk1 MMhl*E;ef_fY} 99s9Wrq sE\&NDw44 GZ%=#a0 =W0yG ָG+K{{R*s9sL҅#fN{KOiԋֱSVaڧ1UW6S*HavL5^P1l;bVvpH펙͒t%J+s9sX `fIڟg&Ʈozhq ̝ ;Ì%dm ' މлX >[S2Z YwX  8 mٗ@SU"r9sε 9)fx9Yf6M1/-t\퉙-nfךY\(s9+/?_, UXګ 2o;s9*l6P=2efos9s xS3!l4,9s9\}r 3[w(VaEw``Of$z Ep:e`~ҬWpM]~({{[ =cp7w@>?f kH @%P d2T* @J @%P d2T* @J @%P d2T7=8p=(sXk{2=T/շWoG{_Mo@qUݟN.G k 2pڳ֧pG98qfz'(,=IENDB`neo-0.7.2/doc/source/images/neo_UML_French_workshop.png0000600013464101346420000052743713420077704021253 0ustar yohyohPNG  IHDRq`jK+1bKGD IDATxyXT?0,.0(hh`J)JFOZ4oY}]wC321MٔQy800303}^55{sf{0SWuExxxkA!B!B!B!BifkTJi7=!B!BQO\B!B!BièB!B!B0j%B!B!6q !B!B! F\B!B!BièpttlQX|""B!B!???hkkvB!B! 
*//ɓ'1~x|rTVVr!H0qDDDD48 &&&Wxe3k׮'akkx޽q-8!B!B!qې8|ᇰŶm0uTu_/^͛Ι3۷q)رWll,"##LzT޽HMMEjj*ϟ544ɓ'KԩSF^???ǧo~Eaa!RRR,]A{ŗ_~$lll''' 77#8!B!B!qۀWbРAnܸ?s΅gcر bDEE?b1Ν~ǴiӠmmmr/z Ř8q"՟H"AAA8u2j066f\(Bs8w6n܈hjjӦM;SgffݹuK"4YGϞ=۷o#..eee 'B!B!Dݨ'n"=='O͛!tRuiMIIZ>}BǎS 1`~~>b !55R65⸩Ґ&5///t7n@HHЩS' !B!BiMԈFt3f@TT_x{{sS :>w\8p޷.\`ԨQشiʐ{bҤI haa }-^\pwwGPPɈ+֭[i&xxxػw/\]]>N!B!BHkF6H"`ݺuHII\~mڵ K.H$B>}x=Uߏ\tΘ4i/^P\ ɓXP aĈ Ĉ#xy Dcǎ!,, ƍ#G ݺuH$„ .YW^ży >N!B!BHk0eyÇ___ZB{B!B!D'.!B!B!aԈK!B!B!m5B!B!BHFB!B!҆Q#.!B!B!aԈK!B!B!m5K\с-M;wO>KQ!B!B! 5Nyy9q ѣG!B!B!zԈKgO<%%%8rH+GF!B!BF\mmmD"byo޼ oooXZZBKK ͛7ìY`kk mmmbȐ!n;ww^NB!B!vJ /^@aa!Μ90`@~Gx{{ۗ={?<== &N2.mEE_{aĈ ʿwƎX[[#** ={lB!B!ҮPO\t`1cFy*++BNBYYN: *̛7kϑ/_F^1c //;wƥKB!B!+Q#.ٳuVo߾, ///5 wHOO <k׮ 0~x.}]3f@NNvhtMgI!B!BHGׯc سg ++ fj4ONN\,pϳmCJJJoF !B!B7F\СΝ ---P\\,5m ^ڄbqssP#xܹ`wB!B!fp b˖-AOOOjgggX[[~w?~%%%8q.^F߾}ѯ_?7_5k 77ňDTTTwxb7!B!B!5na3sssq! Į]j@__ӦMCuu5P(Ğ={gnnCCC?^j\MMM8qCPӨj*=!B!B!m5@ m۶!88z*`nnMMMcҤIr <==ƍCLL fΜ X MMMcݻuuuO? qF|;aB!B!BSd|}}j !JBaB!B!B!B!҆Q#.!B!B!aԈK!B!B!m5B!B!BHFB!B!҆Q#.!B!B!aԈK!B!B!m1Uŋ}vuUGQDB!B!_zB!B!BHFܡC|}}=2!B!BQ=K!B!B!m5B!B!BHFB!B!҆FСC-.g۶mDǎp|b1 Z6nܨ{Ľx"._:`Xn]_e>|^z5‰';???4+6___hii!%%ڵkGիWcbqCCC+W"!!A/TTT >>Æ Cii)`ooBOc_^_-((7o涏9ӧ+\η~?^~}SϤK. wIII͚^W\iڵk|?3ۇI&$ɓ'Duu\ݻ  44ѣJ"kƍ!Jegg'N ::ZVoݱc֬Y(..ƅ ş_nݺ׀ QRdffݹ}UUUrͅ@ sqF@SS6mZⓇP(P%OPJO׏B 7߿_5BQl߾vttDXX }WU'O`Ν Ry]68fcc.]x RRRxi~'gΜit>[ųgp%xaa!|||/୷Rz<׮]ڀkbb+++ٳgx1[0rHDGG{JB!q1qD@`` ƌSaٳglXXX5ڡ10RkkkA$)\9c(**;///8qXd JKKW3e 5%c->#CII ĉغu+[]]T,B^'=zW_}mS#iOΝ;` :U/NNN iV9777dggϞ=-䉏GXXmۆJ5󺻻ŋ߿b .wEhh(UQQ>c޽P^^///<|fff* Fi֬YpssXzz:BBByfӧOヘhhh,.BQ9s M$A,C,CWW&&&v9mۇl<;wW맲R:u---DGGAAAAɈ\{{{899!<<@|Mu(--EEE @Orseaa 5+"ק9!ѣ)]mŋ<==U#!K.Xr%k۝;w:th0=/'' 222˜1cj܊ ̘1׀kll~ ڀ zBHHn޼ X/.. |;v@rr2SNpsry?\)P("00A.P3M_9s:s>!DU̙={@(bʕMNꍸu͞=#G666H$ u%''zkȐ!H$8v H0n8^CCCŤI eؑ#GPUUnݺA$a„ ?8v8" K aĈ :uРAq"""آExe_vM8 ֬soΟ?;_C"u!=^vvv*Gy/_.4^oooUɳgϘ;v4ӧOJ_~ip/b ^Nq5Ǔ'Ox1Y[[vHDIj)!;v 6ؿo>& q[}N\B!}Ç=ynWZdǏsvRGj7o")) 022Bn뱤lţGPPPW^AOOb-w hkkC$pppeDVVf>}W^j.UZ?ESmllЧO{)*##׮]ǏQRR333L>Rӫԕ7n ==BNMINNƵkא@kkkZRQQ?Auu5LLLAA___-$.Gpp0Ԏ;PVVm~zwsY+#$$;wTJp3M6qۧOMlmmaddM?W>bBi/dΑfοڵk}vx(&1n:n_=dʥ_t)c6=qϜ9ܘ/ƺ@6mĊhOܻwUV7| Fuƶl***>n:&,322b쯿R{-wc?3cLm^֊a#Fh+|;w䥽>c7oѣGK]נlڴiٳg ʻz*oZԩSYff\Dvv6 bƍ^+]]]6k,h9YYY2ߏƍkq}kq#={f\zUm@Z|bZZZ2H$,##+k?RM=v?_תCuokUUU7kjj6q>|+_WWxEeVUUN:ʽ|R⭬M H?rH^\u/ݻwCՈ˘\NB!rH$x뭷YlРAR8p{޻woO!TVVFD~kkk >Bp-n: Uҥ x H$999so7n`„ z*UYY'ѣG>===9z =;w&SE-TUUq{{{!...\@YY֭[y/k}wXd o%&L.] ˗/~odd$>#nX `llܸq1^.^Kpqq4557np>EB!ݻCKK ׹JNNܹsq,+ysttt0f HLLą sW0o}~^444aL<@ߏ5k <<\eu*ӟvvvVxJ4440dz\]][T.~S(Νy :vwwoH!DyM@=q !r0!זz2Vp~C;.88;^{▗ⱱa'N`UUUR󔔔~(iٷo_}v&MHHhc믿n4ѣGyi'Ollj+={֠%KXIIIiiilԨ[F IDATQ @^쉫{XMOC>5Hͼ>=]uuu~_|ɛVDנ~|,zxxoܸxyBCCH$b'KMMmﯿb|/^l26E=}:}CQQ[`/&}vuxxxpսY] K-]t))Z/lz5ŋyΜ9SW߿Rjܹj媽nv?%Q;!B4ewQAA~i.^̙3*lݺ7oqML2ҿNbD"ܹ>A/zsm޼ ~{njjF 6>P[͵n:qK.Ŗ-[ X,ƹs0tP^GeR5k˗ _HjnnǏcĉ-:RW\zӃ!kP_II Nü}>^x3fR{+F {9s:uj^ؕ|sV^z\@cÆ R˫Pgkc7\żMyOŋ\-]۷!y̝;HJJHKK]40j0f̙Osi8::6Ny`ͮC]BHD7^5,7n,--TTTV 8z4ЫJB_}ioHmT*l/^q&t{쁦fۘyL{yInw7صkZCn R|y4&&& k'''n1;"44|ƍCݹ+W,EEE P(l2_]rۗ.]TeԨQ=z4O?ڵk|6Rʭ_Nz<\?}k5nݺG߶J,(I#.Q???n*8~… x }uӶW;>}mO< hňoBKKۮۛ >͕\(9x,X WEϞ=Լ_~ףv…2SNh5~`.y4r5 6={l=zJckJdd$JKK3gn9#"" VjHWgH$RJ˩︥꾦(44PTypy^s@JBF)Szu888R qqqiC~̘1Xv-<==aoo[[[333 ^?z( `gg{}^ 4hD"ܐ;k.˹/~H$X` H Ho*~y򇇇C"`ĉ ܝ>}={Dn+VK.?td_?Bi+F 䍖9q2]xJݹs7oF`` <==ÇՕ{ >[ YIg0u/22~/>E^ÑJm+ݒFq5;/k ȑ#JW{FQQ5WSLm_zU)1 >>Xv-W~رc۷oFW"::cǎ144ɓ'ѻwoc᰷… ѫW/Xr%T󃟟_QQx 6 HHH=BBB  e^Z]!X~=)>#OޠN{To!CR$5=uILYfaݺu={Ν+ƍWWW 0@(9߿=AϞ=ΫyUq/!G8jݻYTq +]Ƹ{ >mOb&j6l؀3gpW;R722^.,,TJˑ5^@R筲zۘ0aS @i{UM7o,--add j*,,l'/L6ill 333!22044H$BPPN:۳g1vXoa$&&޽{! 
憡CrqbL8qqqr/O-եKhkk@ @aaB˺?#iXܹsɸ|4Yݩ455[mj̙33fhVFx̙3ر#h^o0p@1[e6G9$MMMVUZ{njjPWKkj*4 2Rz57cLdGu^;Ou[׫W/ޢnjߏ'O(ܴ4޶2~Z>MCjč;>k׮UKRLL0+F*cdxb=;wy?,|YeihyF>˺Mک}GjFyxx:tyꍸ;v@ǎѱcG={K,tuD[XXKJjސsdff*ԋ\<6664Ν!!H~KΣ# QYYBQ6<lG{{{r={ӷgu۠ ׯ-p ĉC^^=TҐ!C%KظJ૯B*sNRU_ߪu_-R֥w6֭㽮>37m+^Sꗣb]XѣIw^|HJJBll,׆B -- YYYڑ!!!xn޼ kkk9s`jj\ܾ}NŽ;#o w^:.+>Ddd$p "YQQQʗ_ڑ/_DLL N:5r֎l޽{7푚T̟?_u5v FcٟT[Ns'>YwƍDvv6rss,u"++KfL>?&N Ę1c0uT5]4%;; %'5SNHIIQ>077c EEEܗκ=$'NK,i0DE H*OMo Ƙ7]?Bȿ]ȫ~>E{I3{lÇ^^[Y**//7N.]%׏̊lee-[`ӦM˗qU_ z \rÆ É';Ve*竢O[ E^(k$Yz~mXoι}m$ ϟ۷-vA{YCujx.))it1 D_Osu\E4VdxFM>gl۶MkF>x ,\jjuqq1p]@,cܹ#y+xIIjϭvaOdggعs'կ_?|' okk wwwq2ɥ;w.8~ @Oy'׷KIjj*㥥@߾}!S]-,,QR4#՟|;PΐiӦAWW@͐544x`ۻÑ+++y?,O?rr5x)4伖Ǝ/QQQ(,,_իWzTTT_GU)|*eaݹ 5)ڐ\~=g ;Ls:O?矷ϊވ[ٳ`9r$\]]accD5k4Hܬ/GAUUuH &~]|k.,]"}Qhɱcp 8߿?w| 0bbĈ 4h&O bC D"cD 4#7RWJ;v_۲7x+߻w{!ʚ@(6l@bb"<<{}f++>Y7222db|X%_ L5j``m޽ܿRbKݺu8͝ה#G 66V:Z۲~:ě1^֭[yMG?涳T`Ŋ ^$%%a}gVzO#GzFDDs*:::(ʒQF!99'Or"޿?rssѱcG8;;cҤIXxⓥ/+S12ZMgdtS#WZ899ddwll,CCC>>>RW{z J:[vmzÄ׮]Dhyy544XbbbySJ^'.c}XYffݻy邃 :A/z.޺uKsa_W5kRfs}.]dƮ2z^2؎;xi{ɞ>}h7n0###η%=qq ^1pB^ٓ'OfM 9rdrO\{%ijj1VVVƜyu?C7۪MqqJ o7yX~:&ModlֳgO^=nnnD#_kφ[94!By <(//'$uA|7 ~O`ooQQQ 344K^h9 @MOb:OWYY"##uVZ Gnv&L7gիakkiӦ5HC̙3׮]c26m//&4vZSE͵f;v[g˖-7z9@zz:̙WVoWUK ÇUܻwo|ܽƒ#$$pqqQ˼?ǎ57v['`Ν>MMM5feի[h P3c8z( -|TPP777̙3--!!aaaزe p1Df*ˢEm6nQHx{{#447gϱb |w>ZJfYYYx1(R;7w%-[& [ B^ԈK!۷oG\\OEE6mڄM6;w?d$%%5(aر ߿?DDD]ی L26661b젡$$$ &&jVm ,\k|,++6n܈QFϟ?Ǎ7pETUU֯_J" ݻwK.ŲeУG;w1ׯ֭[|~ 婢水DXXLj5 xw`oor… x 88K.UZz?QF!!!e˖aٲeDUUokkk|!۵h+:G'BDDΟ?cǢW^ HJJBddduV۷5BW /<5z;ynH-c þ} ѵkWtx1oڀZ8sBCաC9sÆ Ë/L#Gٸto13---|W `nLEM<{Á0k,7깗:uOc{12ǎkh[wZeSugbƌ\W^ٳ8{<:::ؾ};͛PUF[[֭ٳ[;z[naҥ >ӑd & 44\ЧOӧOԼ"##722w}QF<6B!|!NСCv&M==&`ƌ{.l٢^u&1d(Am6 2ukkk#22ׯI틳g6XЮ){Ň~>}@C鯇ZZZ˗Je޽c6zq9[N%1uYZZٳܹsѽ{wD"D"888`ԩ8}4akkM7QظYC]ucٲeMޛ:`̙{mpk͜9oFk IDAT!;ÈEtuu1e\t ΝSKn-777bƩӧVB'`uQÇ__2`a̙-? nQ93g΄D" ZTN>+WC{5=LQ ܺu =B~~>^xVŞ>}7n %%҂1 333+\v (,,b1ݻbq䖔@[[ܔFFF^fKeff"::jx7ѫW/ơ{z9Ĩt(wt ڋrܼyrrr zlU|q}<}сzC[[Dii)._'O ''ꄨSjl"MÓ'O])FUyϏBv(ZVVV8qС 0dVe-emmӧv*:}4\GG}QKm6l ڡ9ijjja%tuuaBMM0e߿_+W2!H0qDDDD48>x`,^GFΝ1~xr0h D"q+*((?`bbnAbt=r/KK/>> ,@RR$ $ B!O7oę3gqAKK#"B!uFܦ\x/_Fzz:RSSSCdee5T4Z~ll,"##LPq 6 EEEXf xy}}}QXX!++ k׮P$&&b׮]x;__^ݻwHMMB! {\icccjnŋU!Bivӈ;o\( @+++ <33@jS̝;eeeg͚ :J]!By}]|˖-3ttt```H===1۷o:`VBȿJS# qI|1118us000@ii){nTUUhz6!jA1fL:x Ϟ=gggB-񙛛1"n䂂5\600Ǐqa뾮 ]To]'ONNڌ#GJBII ={Zŵo>kعs'1d8,[ NNN\.DzeΝq+Vipi?_011[o&BDDDDm ͛y:""'ԩSO[n'$ /^ QCAz6 gϞ=Uz}2 ;w7m ˗/m(~M={4}"""""""IEE*++1p@H$ܸqIII4iVkann\xQ:jv DDDDDDDDDm=֯_www;;vlꈋCuu5 J1e=iצfM+#""QoYooox{{K~CDTKDDDDDDDDDԆq .QA\"zqF;K,Azz:AuXDDDDDDDDqu`֬YXjU n$1vjv%accDP$m|%''W^Zq>= 8vX{K!"""""""z:?022umεkא޽{:6IJJJx=zSSSxzzu8Wbƍظq#z̜9}}bWmj&_gzxx ** ptt d2q=pqqT*En#`ʔ)033"##*mΆ+R)Q\\V666 ^^^HJJӷ#F`ѢE?~Mڼhޏי ,@xx}ȑ#ѧO̚5Kx饗`nnΝ; wܩӏÇcʕZ[[~/++ Ǖ+Wd˗/0e"44G|||"aʕذarss0ҥKxtR?""""""4rZמZ] iS+&&k֬+Wkkk@@@ʐ|T*DEE{`ӦMsΜ9;;;1W_EΝQ\\_زeO>FByy9~m RFRSSqe 66VL9s&F2^}ZYMo_5rssq;v숄8{,}vӔh5z?ھرcĭ[ik׮P\\7B__N/^JjRX|}RĶm͛7 kkk̙3)))ɓ'c(,,1c ݱuVT*:tAAA077ת߉hT* ((_ BD‹/^xŋ/^MjGzzhZ9i6+o5nl尦 ZBB6)OSڬ\nl5,nٲ_ŋklkKPԹWPP0a;w\ܿ_-?_o%%%666=[[: _\\ DV^.LMMA"J}_N: Gtnj\B]vZŧ)]m_ӔۘΝ;e2+VAΝ;#44ӥѓi崦ÚVj2hx6+ZוJJ!!!bYMkrYlj9:?,((^^^Pxxx`ڴi6 DR瞝$ 233!J-gkkz,,, Hp XYYT*,--jK.RIMEEEb|ptt4?Mnݺ___8pH$Xxqˊ6g+s֦6mޯڠtyy9ꩽ JB.]O|ޕ5+/\;ñpB[Vgg:+Y ]95+Gׯ?&QΝryj@+|&T*\.\. 
d2q888`„ Gyy9GKSSS,Z3#Cu>GERR`jj OOOn+ѣ1sL۷Y'"""""gөSUNimVkZ٪覮5u z2ZЊbccB@@@k6y饗>HaLkB\\\KT*a¨QD"Pv*DDDgϞmȉY+V:j|\\\_ Bnn`bb"'N(̛7O(++A\"驩RA \.$$$)))T*Ab!55U- ~iqΛ7Ox0`лwoP(--*ZJ-ω'SSS?^5kp=!??_޽鮮Ν;’%K 5]׮]ηS/s;vLoY-AXX?R 6`̘1uuV:QkٷovލaÆ!88šNNNJ2e 3;v1}tWFKRZ7rH( ۷۷oBĉԬ vpp@Ϟ=Vr[nպ]v4h2ZVN>Ni-z8T{ ///TVVB*bpqquXY[׵kWDDD ""BסӧO7.ɰsӵ@6`ɓ'ƌ_ aϞ=ͪ_.#%%neh}x .=???]3st6=Qg: ~p&.=Qyyyظq#Ǝ%K == :,"""""""vmDrr2z꥓r9lll Hét?j}7n'|ѣGCP 22ǎs_~uDDDDDDDDmS \v ݻwaddJQQ{SRRDѣǭ[p\z7nƍѣGc̙۷{ADDDDDDD~~~صkW⥗^9:w 777ܹsGL/--EPP,,, {.\+R)<<<0uTZ {zH!,,LGEaڵ+&O @VV ϟ+W@P@P`Zñr&??MbccP(兤:ŧM*//_|)S8r ヸ8!..+WĆ t.]ǥKt5""""""zF|g%.\X'ڮlֵYYe<3q׮] NCCCBOOӧOGTT֭[9s&ƌt>}8pXYYY5j*** GGGlڴ  A 6 F>}T*+V ;;qEӧ%-G3p\ozci?j^ ___:tbɘ1c^~e[V" nnn裏pQ۷ UVaժU2df̘iӦ[n=""""""z_;v쀧g Lve=jnL266ƭ[T*"@~~>RSSqFtRHLL~Ûo 5 GV{022#z{{{H$i? addDž -ҿʚ5qi_Grr2>3@zz:p}c֭PT8t+}}}#::* ))) 1~DFFBP@"ŋ/^x5f̘'g!>>>P(~:ϟBKV&ki&2e `eeHq',ole6N-O烸[l%,--_c˗/kUNJ+0h xyys Ń< &@/;w8{b}?g@WUUjUVVj_[#"""""""$(JtRĦM@(JL>Y χJBTT_}Ut_-[5kʕ+ǵ4?sL1eeeXz5;tmT\|պ4:N!((^^^Pxxx`ڴi077'||2 / vvvH$̄T*SK.EEEhWGGG ~mI$- M|ھ:wɓ'1j(XYYСC8r9ECv UWWmPRR"q;"""jMǏ,\""j15+/\;ñpB7rmܹsСr9Νbjyѯ_?@NfezJJJ+ӵQ@WS|&T*\.\. d2f֫&իWߌ:0aT#  6Ç8}4ӵ_Sڲ_moѤvkGmrYmkt Daa!vލɓ'(C IDATRRR0gX[[{ZYAp ,Y7nn݊.^g"""DDDDDDԠ#F[{G톔@ (..WP49>mVkqVS|&nms?r AYYLLL0{lqqqXlPQQ\e˖A&'NllTMk>>>pvv z-jտ999(,,lR5:v?ިĚ5k9r$T*JJJ:G헹9$&&">>GERR`jj OOOnAѣ1sLW!"""""ԩS:m͏r7nWT*XZZ6CSuTt ooo_~)s?(uaϞ=Կeee4fuɓ'-M|?>ϟ7n`ؿ?N8xNjyv ̘1C aDDDDDDDW{e / ޯr?9rrrpEL4Icfff7n6l؀;v͛%KZ$+׭[3g ==nnn-R?65+ :wݻ… 8v֮]밈-!,, yyyؿ?+ 4psskֿM+W6?]v!$$066ٳhѢ_tmO; 9;;^Bee%R)6o Ǯͺv69QkqF5LfM+5HIIi4OZZZiole6YyN-]???=vLo_ .Ӥ5Vk•K8qw^ݻ5&TLDDDDDDDM+5EOQZu AxήXe""""""""z8 .Q֪ikݺuHpڨ+V:6DDDDDDDDԦM ޽{w^]4MDDDDDDDDmڵkuQƒ͈ڰVl]L\"""""""""6DDDDDDDDDDmq0a%"""""""""j8KDDDDDDDDDԆq .QA\"""""""""6DDDDDDDDDDmq0a%"""""""""j8KDDDDDDDDDԆ:""""޽{0H ǏD"uԈ]իUq&.QƙDDDDD$ ]CntNp&.QA\"""""""""6DDDDDDDDDDmmVK 9<0)™DDDDDDDDDDmq xAξ%"""j㒓ѫWǪ#88-add0ęDDDDDb>...J֭bbbĴL2fffBdd$#F`ѢE?~::={)lll*1=66 ^^^HJJӧR2 wZ',l۶ }ޚ҉A\"""""zlXt)gaڴi֭V=BBBi&ܹsgΜXW_EΝQ\\_زeZHMM˗QPPX@>}T*m68::BTBT"$$D,۱cG$$$?ٳg6Rӧ7!77PTSGLL ֬Y+W ##֘:u*lڴ bMDDDt .5ۉ'W^yO?aܹ022Ҙ;v@PP^|E&M}6Ұ|rtrs!0|p\pA>grxyy!33Iϡ󑚚7cǎJGbbbuѩS'XXXo6.][";;}nݺ1͛r ƍ޾I^o%%%666=[[:MMMMԺ~-֮]\Zo [AԦP(mw4h~WdffMJ'""3qb+VD"iZ`C̘1C˺_Ƶk͛7C.#,, gϞ*x4([oH$qxORҲIqJ$P[HCT"((޼aggDLq+|_=xIIIE=?cӦMΆt"""z:p1fΜ4:u 2 򊸥sb<}{aƍ 6vbbbݤpuݯ@ee%(wxYs988`„ Gyy9 ''ZOaKz*bbb0zhӉAAA9sڽ](""""zZ) ^QQQxV&MBtt4pUcb]v!$$066ٳhѢ& 8;;oBCCaood;vZّ#GBRzzzHHH@Ϟ=ŁfMl2899r˖-Ӫŋ7;}FC<>3f}.΄:u*6oތ.]Ϝ9kud2F+Wbذajugffbݺu駟PXXccc֭rYYBBBc„ ضmZDDDDAOM D```yr9RRRԩSjߣmoϞ=@DDD}ZtG?Z;v/wŻ+O9KDDD :}4֬Y'N&&&ڵ+\\\0gL2E!ԯ 0uT :TqO\#!!f͂9R)f̘FFFOOO8qҥ ХKx{{#==b]'Nٳg1k,rÆ DDDDϠ'N`رꫯP\\jܹsYYYؿ8x,_zj^gϞu8DDDp&SlڵXvmYXX`Ϟ=6lť_~BBVQ~zŋT%""""Vs&X9r$̙={`ƍuʜ;wӧO-`cc@\tN]v0`L:D>>y&`Ĉ -\P,3.DDDDDjr9233QRRwwwx{{cС:t(OII7|`D駟PUULƎ[lAVVV1^=.] 
v.0m4OhRloB*B*b(((@uu5 P(PA\"""""jU _~ AQQQ(//Gnn.= &H,R-jByy9{jyĭYsωw=G klД,,,СC)Q9 x4cwXd fΜ)?VVV⽀q[^^V#//јjkk??|NzSb єNDDDQ7o>=Μ9={b={ ${Ay駟pQ߿6oތ-ޏTggeeq.q;!tR'o՛%=e``m۶Ë/.]n޼ xWļkܹ֬sCL>666f-7|SbkÇC"@|Oo߆T*m}L\"""""j56l@dd$F Àc߾}j_z%>}3f̀JKKO>Vk!&&NNN022B,y+H Z/M)CPԻ={81߿?222УG]BDDDԦ 6 Æ kRA!..Nc7o/^g~W8y$Mܪ !!Nyccz_`,XX#>>Pj %""""իW1|pK.(//Gii) 7oqDDDDõ9DDDDDT/N:ڵk{.`2dC$"""j%""""uQLv$99zj4Opp0"""Z)""""""""""z8)###]ADDDDDDDDD-3q[vUo GGG888 ++K-ñrJ{YYYP(?>\BB۷ybccP(兤$ݻ7ЫW/DFF aaabR2 w^>6 /@* ~~~j'j>ѳ,&&k֬+Wkkk/BRӧJ%mGGG(J(JyT*1}zۭDVVPQQll۶ !77PTSO}ic޼yxQZZ-[ 55U-]5N_~N:I/++{"vFFFpttD^`ooD2ܽ{p:v… nݺǎĔ)S|'"""jN8ػw/ݫhڷ]@DD8lق~Ç:u NNNb^B000xJ#h„ b- :Fee%<3ⶀ xyyBCCiӦlllJ$f#H B D"Aff&RiH$(,,;w'>_pڥBee%~w:tY=k8̙aÆ5|NN Msuu!˱uV1mȑP(طooB'jn\\T)S //I57._ L bjYZgiiii7vpSoɓ', ooo_~)޿sY&aΝ_c ;&~ǽ{>ѳ3qڨC#|s}g%.\X'=88X<6\:ס)f͚UV5Lrr2z6)ik^KDDDD[~=v઱@չ5dggwO:6C#""z|K:#::Za ~~~صkWipvvp:y+WֹBׯcP(Xt B///$%%59R2 j[fi WWWHRX-=::={)lll*@VV ϟ+W@P@P`ZŧMƞOrr2z ''' 033CXXXOCO˗7=-8KDDDDԆ9rǎõkנT*z:y.^JU~RRJ%t@TbӦMbz`` J%OެPVV\CR!**JӧOǨQP^^~;vDBBO={ a>}T*m68::BTBT"$$D)TVV"++ fff@vv6mۆ&=zEEEM~FDDDO n@DDDDԆ͛7 '֭[>|… ر#G.\N||2Ο?ÇC__pssS ~Bfffħݻի!HPVVwj~s߳͛#GHDDD qt`˖-xoÇßN:'''p 7ol`QPP0axf6!H`cc#޳WbڵͅJKK*iC__```^UUoY|~>>͓J011L&J;w>s֩kx7H044ѣG;88`„ Gyy9 ''Z &c IDATJZ!Hpƍz_QRRҬ*|痗עm'ma,ၨ(xzzRɴN1bZ ,@xx?ɵN6.(()S`ff+++DFF-1b-Zǣk׮+Vu(DD-{2ЩS:Lɳ\۷sΡC;w.<ŋfhhь .ԩ9'޾}?~wb2e'5q[' Bhrڜ<k5))) jTmbq.ѳwi0Ym5-@ۓ-M'ڠgyy9U_{r&a 5)T*XZZ6iPl;w'+Q] QpHRq1x`jk<]`=zh<#""}|={ԛvFr4~)kf;&~ǽ{2 ;wlVDDDDD"t3 .Q5tP@wf͂B{ァudXjSԩ6nج6Z _ڵ+߯01cpq]As%""""gZ`` tFGDD:"]ܲa׮]fffݻw>|8V\YoyDEEppp@VVAAAL&CppږSHRt 111Z+;;JpwwGqqZztt4z SSS ""UUU,( ̟?W\BBůMX( xyy!))I-dNNNի"##aff0گqڴ#GرcvJ%V^]'ŋR#&&k֬+Wkkk@@@ʐ|T*DEE{yTǾ ( *bP\w5 AыcQhDIQ#J@Q}IJEYAe 0lsΜU]릻O?/_ܱ֭{xxX|y9444W^ƍ8x Hطo_`۶m033CJJ RRRm_Wl퍔xxxȌkܿ۶mCyy\ǯJ}:~u?B!p B!js]͛ok׮StX$&&BCC@߱c|||0n8.;)) oɓ'JܲH$sV'޽;aff޽{<(**LKǏB!Q#.!B! i&,_PXX'O"00p\5;;uXSuU=$6lXx?wwwʗ'RRR||n]YYYOǏB!Q#;ԙE"30\͡3̹Bi||| ;w.Ǝ>={-K$6.Wc]np]B 99c(((&1P )S`~F>]7cL!uCBH{Cc8w\x{{UaB!@(B$A$SNWZ_~/^͛Vcҥ ǏǢEPPPx1bcc<سgΜ9ɓ'* Chh( %%qqq\ׯQZZ!++X@e'O 77׵}S[?и?BZ=Bjmf-9p]ꚙVkf3L4 ҥ lmmK͌ =r]3 ϟ1cNNNRǎɓ'#88I!"ˬY`ee%3?= ajj PǏz+,, ѣB!&N4.[lA@@B!tˏ={`ee??? 2K3449sȑ#kamm WWWC$a֭r_Ç)"""}vb„ r<#VRR(899Ç ~A(vҞ>}'B]]ϥӨL|(smϧ۶mܹsq5dztBH;H`iӦ_~!C^z,φξ455٬Yd}޽Z eof1LMM٦MёM2B6qUCyk{>} [r%ѣ`?#涫/=gX||C!D';T5&͛Cȓ_kO޺bnfiׯ_ B,ZlpcǎCJJ x<lmmѩS]vmж6lT믿l2|޾B!BHkv%X[[cڴiڵ+]srٱc|||0n8r8::/H__ǵ=CVCe=v ˗/ÇuVܿ>}:M'_4Y3x3 Fik5  ΐH$pwwƍ5Ըe%%%@I˄B!_]k9b ,#= a<}Æ YGnn.cR8 ''G*_mg>ߐOA?ݻwQ\\ܠtBHB],\50cLjaygL+栩 6QFI}gf Տٳg`ffuB!u5BڦSb„ 8|0BBBwww`Сr100@rr:vMW| mmZǍ%sUo+))믿:u*~'מB/N f䟙卿.'ODjj*~㩢"fƭ u֡W^ŋi B!.t^^^8}4\---L6Mj8bϞ=8s ޷'OcXn]vťAq|(qc\7bݺu4iRSSk. !56YXCC?\\\  L[W [[_;w`ĈWWW пܹso'B~c߿ Ut8Iiii0111+X!aF . >>z%vz"7orc:%&&">>iB! :kr9~~~ĻM1c 7֔F޽9"B!E"d8;;B!!!ׯ"B7FyB!z"ӧɓ'H$x1fϞ!>}:v-3ŋ077:b1j6lX SSS8;;˜ǘ1c`ll '''߿kkkB888 ''Gjϟ]v@aa!z_?m4}LMMѣG055)o8!BZ5B!Ҋ={@JJ V\Y#σYc7RRRQkwAll,S1{=<<`gg,_OV,#??HOOGff&VXrH,^IIIزe |HB!Y͙3zzzyoڵk7|www=bIII}6N< %%%888֖.==HLL`ѢE7oߠArJ@" ..:uCSoB!%B!شiGb$._U.";;YPSS㖕PZZ ǃ>nhh-?}0~xn(___K?sL333׏C6%%qqq\ƏEq ?Fll,g֭y&oߎ0̟?_p-tuu*~())AZZÍ%M&""_?x%JKK_~QЧO| !77/^ĪUyfEIi&ԈK!B͚5 VVV2>CX|y<?[e055EDDoSSSL0A"""gXYYC J Cyy9zP'"-- p-| >_+4u077H$֭[i㯿Ç򙙙8|#ס<.]Gѣ֭[ D_W\38NGڵk8x fϞ ee5r b1^z7n8k׮ٳ رcѡC>9! H`tHu }C!B$XPE"BCC& !mVjj*^ҥKUVq+W毿bL__ b1{Aܹf̘D"L]]ذsy]ܘ.LGGk׮Ity6m4zٱcٳ'SQQagGe&MKOOQÇk;s swwgLUU0D"/"W]G͚5K!??_~^~1---qDDD0[[[|>SVVf&&&ח_V_C焬>fn̙3e{rj;=FŕUP}}aaazO<\\\:SWWg^^^5_g>Bw!B!BP(]vcbbₒ())AOOϞ=q1\pgԩSpvv$W\۷1rH9rӦMCYY';;ѣGq!L2F|\wsyn߾ 4x T[o]8<~Ŋ2ԔMj|v.] 
D"!55v™3g?@UUא 9'VXXsqW\)3\/\hyjm ԩ$ p=\z]&D4!B!H/_DVV\M' (++?JJJ`hh$AII Dqq1<:HMYYgϞD"}PVVƍ+| ++ 999[8v^~2?˗2JOOGyy9ʆݻ7(UUU\rϟz*X 99ϟ?HMMšC =T?>RgرR Be:{^3dCI&ӧx)T'$MАo!B!֮]kr{;vp wO֭[:tT}>CnݺDX,P9?7X,[ BBӧOQ^^gϞ!??9rH6mڄAuテ5111\DZsNdffD!Yퟩy>'nNo M9b…PRR|駈9sgnP#.!B!Lrrrs}U#)P9YUCYueee(((۫W/gggs"H*PfU\#*&&&Ri nĭސYgeeeRuImgdd$Ոkdd%%%seo߾rN`Ǐu&k!Da9T?>/^@JJ W_Uu=bY>r^UWQQQg!D6NcWmcB!BOVBqq1BCCsxzz")) ŵ>KtU*om Tڕz*@}Fz^U?o~oǏsr"5B!"4_#G… qE0!PVVX,o޼ҥK V8tN>m'HqF|g*{GVz*/_"66O*?@dd$p={Э[7 8޸5ʝ={k8z*n޼v]8DFFŋ {j_j7={Ǐ8|0fϞ3g68Ճ 1 ""1( 9'jSڵ ...8rn޼'N˗Izܓ'OrqYϊ+Go֏;AQ#.!B!† `ooSSS|爏; ޽[aD";#::{nԶ &:t_>m۶A ͛77ntuu === ==@e;vp?_jttt'''ܿ|>[lGEE<<<wwwTTTHׇc*8p {{{!Ƿ~ \ܚXd ׸ -v*ɓ1l0aϞ=())ip666ܹ'O\=qKCΉڼ}|~7L:CŤI^{SC+OOOn3f@[[s̑BL=z\qҞQ#.!B!b۶m=z4D"VUUowСCann`$$$(zڍ ;w|puumH׵kW.?L4 W^'uϟիW۷/z]N07n3 󡦦+++n)SҥKpss|>ttt₋/bʔ)r۷ `a֭=zt3׿]vGPVVF=zlj`*ݻw" }􁪪*w_\tChjj"66666PRR˗ѧOlٲS 9'j#|~m'5会8q"BBB   p 6:0c hjjB(Nqn1B!BL,3,44T!={0'''&0WWW^z%MEEx" `6[z5{cPf;//͜9u҅uܙ͙3~K0`b666O>ۛqO|8+,,VVV,((Q޽{ǚ}1& ْ%Kj_BB311a:::3fbb¶m֨8Iӥ2ظMўI$v5VQQߘi)LJ6x{{{{Ƿ@tB!KEEE0k,g<{ 'TUUxEHH`ܹڵ+ sss :YЛB,#??HOOGff&VX!'-- {.wqigF.]?ĦM駟K\~3qQ$''#11\ښ5k!HidTHII۷/RRRm6!%%)))o8 i/RSS1l0{ҥ Nr#$$D!BHEB!{'/555xyy(..n݊L8q>>>ؖQRRlقL;v͛7~6l@VVVUzz:bcc~zhhh@(bѢE8xT>OOO|tbB>}_|TTT Cqر>>>7nYʟ={6 Э[7888 11Kر#򐒒5z/1??_!c!yt SNEΝ"| o 2D!BH_vc (1h EA!K666z*Iӧ*W)//K.ܲ$ ʱcR  ''G7ܹ3,PZZʽ W_}gggH$cƍf!nݺq_$#<<6! AB*44TgBQQ9H8qgϞٳg1|;:u\qDEE!77K2dzcܸqʕ+[nx{.BaS 0xPO|B!홪*<==鉂oӧ`ĉpwwԸ1\|8pTc={bܐ ǏǢE?@SS?ƃ<عs'fΜ eeelݺ...uuu=֭Î;]va…ܶ裏1c 777n ɓݻ7LLLPQQo7n\>@  ..Fjжħ'O 77]vmp|B!́k]f" (/yfEA!(&|||\kkk:uW_}s6*VB!`'BC~)6oތM6a޼yBiuH\tgfcccӳ yooow:lOOwV'!-%-- &&&0669%&&w>E 4O?8v&MhQ~Y!Bi!22GA[[Ji?-W5BZ򆐶q !B1NkM377o!:v-G R DϨB!VF\yݺuҎt@6 #F `bbPQB!B!BH j}}'Ŕ$vUcUݻׂQB!B!4 (: B 5=z˗/K_ !B⥥a9r$.\/1!c|2-ZHmmm899a޽ԠK>>z(:F۷o_uAMJ'Bi a9kyCU޽[aZ"xukyݻw%yϵ֔s)ZV^bҥ޽;lmmL!66f͂\]]W^),VBqIϞ=(۷/^}HSNЀX,FvvTyׯ_ǴiӠ@]]]5{.fΜ ###(++CCCÇG\\\G.]WWu@hh(SN(~aJ!SSS̙3\=y!_~xAIII$B!9b۶m=z4D"VUUowСCann`$$$(zH+s՛-yo{4E˭e[^˗W^2d{K,בm۶eeetuuÇ͛7B0pK.eULJH\^UUUtuuudŌ1~7/>ώ9•{I"3ƍkmddT#ԩS\+I^ܫW/җ,Y0 fjj444<&&&իW1c?W۷}Ysiۍy1lӦMBQ|gBEE0sI vE >_YXXիW0 eX,nָ̙3Y.]XΝٜ9sׯ0///fccüYQQĄB!aL0faaԘ1۹s'fmm͖-[Ɔ ”T7ofLUU%KRnjÖ/_&O>&XBB\u=&XNNW͛7*+((`1&N444>|8+,,qXPPjݻW־}  lɒ%5/!!0󙉉 311a۶mkTmAjj*\] 7=z4322jԵT׹Vߵ~}6l0ƌ&MľK.ks[z^ԶE g>dWfRwم XEEr233ن ׾ijj2vqVRRlqL݋{Jc&MBXX@,#22n1c |'(++"##1yd;v }' <sAqq1믿F@@TTTpyԨ[UUW\Aqq1QPPcǎԩO1cƠsGTT~ge+++ٳ@jj\Θ3gr [8tf̘sss899ĉ ?uuusG":u^x0! h"cB|pDFFĉg*@'''xzzbԩԔ=ǃ-lmm?"..BBB!CW^-bС<<'K/zы^֭[zK5FII a>>>cǎRu谐iii KLL֝:uq :1~Wfiicŋ㱿K_jO?5kkkO?q}}}Y```j7fxZ/l}N.++c\zPPg߯VSʜ9sd~'Çsr{ߵ垸o{\f?3ϯRmZ}Gť;88HZu.S~Kӽ݋fggW;vX}!??4U'n3Yjʫ-s[*"Hj;CCCnY"HM2&oU2@ll,rrrFZZ`_T~k8}eWs[cpvvkݦhƍuV 2=@pj4h ȩz;BHgccWhgʧOT{@.]e---H$c2Ơϥp6lXqtܙ[(--?~k֬Arr2|>?wweYGm> IIIx1Fͥ᫯3$ ݱqF(++ϻko{] jjjܲR%Y% x<@EyZn=݋dST|ضmέ͛xxx`С*?//Dxx8Ο?/&`ll }}z{*ڝ;w5*ǏFdn6tuu'zT#l)peXZZ<C{]Nq׀kee(ȑ#:u2.\?nfTWWGuS$BYF!^x{8gٳ?>Ǝ[p ՕK ˥U ~7R^rQw <wޅP(5_ffԲʟxl777aZWv؁SVªU7nرcއ(Bww;s}5禍 sܹO 555L2G^^8 5kϞ=///XXX°xb_H$ŋ888YYY:u*K.m6:v숙3gb\#lقBSS+W+6CCC|ppp!0r?vX_~ѣTڝ;w|t 3g΄k2?~gϞ5*N pqqAii)[7Ñ\tQQQիP.O|puu9:uꄯ smTe4Zwε0@KK ֘0aToo{Q^0|p >?"!!˗/aiilիW:6mԯOi Hob3;;;.o \f̘455 IDATP(dL"H}5tttg:::Ņ]zFwa3f`"|ƬعsjCniӦIwpp;vF=АѣZ˖߲ү^lll2ҥ 5k_md |> &6۴iC!BZLǫyؘ7nYNhh(b;ҀXXX;5;ibnҤIlVt/jWZZbbbٳ&1[[[ž>}~wn߾M#m5k4j\®]r ~Y^xx8kzYOj[[6 /_~ƌnM3!BHS# HKKCdd$9A [[ΝÝ;wPin޼.]{HLLD|||t/jy|>ܯHbcc WWW(:ӧOovt/"4c 1\t WtHB!B!8j%B!B! (: B 5B!"4_#G… qE0!Bc /_ƢE  '''ݻtIGB!"++ 6l=LMM#>>ƺ7o*:TF޽RH$>x<^=!"E1ckoʵ666ؽ{;W^ދ]|/?tRt Aff&YfAOOǫW2! 
Ml6qDEAQ@zz#!BZ\} eew!/d^߷_BB"""p덌OOOtBXXhprrBǎ'ȏы^j/OOOF!glϞ=ɉ  suueaaaիWRTTT/f`` ‚^=x1Xhh(bqƝfΜɺt:w̙^~cƍL]]qO8 Yyyy3Ƙ5F͌#W^BB311a:::3fbb¶mm&N444>|8+,,VVV,((Q޽{ ̴}1& ْ%Kj&XBBck1˜SSSclΝ\ړ'O LGG2w6lSSSc~!0a/m뻖W0nj&M$U~}gmm͖-[Ɔ ”kyffnnTUU[d +--;u/{a3,>>=|^YZZJgg.\`2d6l`vvvqjjj2vqVRRܾ}`0'ҥKAi_֮] JB!)**‘#G'N OOOL:2x-~G!""QQQHHH@pp01dիEA,C  99:tVXkbȐ!֭?ӧ:Ի};w` C߾}h5[f |>$ \%%?055/c l؈Aaa!;###<cWQRRBii1aРApvvF.]0w\Ƚ}Kk֥ju5s->}|2sssb999r՟'ĩ^!H񠧧ǥW_޿*U@/&ߋ?#F@$wq-w/-]{e]Zm۶G0o<,[ 7nhtyyyؾ};Ν͛7KD" }QWk䍼g!JRrB!m)={gϞ1vzS\j8\.ػ󰦎@P$DDҲl,h`EaSP*EoVk7ὊzE\E+HEk(""PYf%d; y'Kuu5qQbaŨWHL+}1o>>>HsI/?~qqq8{,Y)SBdggVa$&H\  ԳgOرc(//ǁHII3oooŋBSL|W044ĸq{nTUU ?3p L} \]]E@HH֭[שAAtHNNFii)v >F$%%|r|G5jQZZ sss|{.޽owLL ޼y333hhh`ҤIx>>>>СCEnfr,򂅅x<vMegg2d|||%GAA*&6o OOOhhhУ`ȑ011A\\ |r'~(>'NDDDBBB[[[QQQ OOO,Zqqqq8pÇ ,1߅ rzj|BEeOlܸ|>cƌApp0ƌ86]D baȑؾ};JJJ`޽{`aa[[[CWWHJJׯOmuC]և3Y0===zQbdSNa~e6mBYYsss<|{FLL V\, <ޯVVVG>}Zk…صkv)rk[-  MYYLFFFhȑ#?=ڍ >>/:e:Yw\RGN:QQQb\\\f+Wږ̛7='߻w/Ott4LLL!r1h aX|9z-4XMM  . |N///ɓpЧO899"u?O<رc'''L:ϧ#""0` òe(w9۷oǜ9sD5 >>>R'gChh(&L[ rEAׯBBBp%aӦMpvv& CwMzŋpuu⨈ ĉ~zz[/__ wU6Drr2lllP[[7oRh% @@@ĖЀ5 ͅ)n eeeCEEPRR֬Y 6f***ʂH9gbΜ9㏑{aј6mؠ055:gϞ… طoeggX8~8҂  ͣn-[~},,,1MϪl@CC444}vX[[wuX{LUU}YWA աN\@@%^[[ۑ%裏УGb000Bmm-^x3gbt'QTT 899-=ܹsߡ!C`ҤIB#ၻwU۷cccclwbjkkUVu8  B~:>gUSbԩ]A;MwΝ4wHң `ffFvae,ͦ566ӧ &!b ^@ ;УGVUUGoCCCذa fQSS@ w9xNl^ZZZx9^~!  4Ot. tsćJ700`/<'XkWRbjE`X{.444p1vX梭 rheqjx{{ѣpwwŋ Lq8u8^z*DAMMtAAA]7 E$>t"t秺:\.x<^26l\]]i Z*hkk+q 1abp8(((@^^&NHMMall^zh/E%ݻ7|>lق={ ''hhh!CbPVV$:6lllPWW2&~K()) u666ٳg%R1x`" x_=y8y$ @'' l {QSS f"k=zv72AΉ;c |μb c90`2*7&&K.nܹsQ[[ uuuL>^^^r/;}0}tp\͍N300ƍa``.1c].푞???'O G]]VVVBCSSvvvHKKÐ!CD8ܝAA2#>>d/^Dxx8 Çwq]Cڂ:FK 1((ZZZ r8 .]RȚ@.ډ.1---MfSRR¡CĦ]rEq>|s9%tU.fbb/үCCCK˗eѢE8pH'fffEjj*6o8  %UUUHLLDll,222zwwwڢGǏ0C VVV]\ ;())Ann. $u2AAR)uuD 7nPx[l9s=AAĻٳg8x &M}}}#==l6^^^AEEbbbrJlڴ |2BBBbڵ5֮]vj555 6\.7oބ )))xhjjy<8::bѢE?~}IhpH {{{\gHhIII"cAA?H';LII .\^`֯_|  /^ 66Ì3pssÁP^^cǎ={:b ۷oGqq1Ο?`hkk޽{Xj,,,0b>}SZXf `߿P111󃒒[dgg̙3ӧO DQQSSSsΥDBBoܸqb;###n:@|>^xWYY|>|>;vsxܼy7o={"ʫgΜNP,X^7 111:u*^|Ǐ### yDO?xZu666.+Yf!** .ě7o+4QTT 899G{Y(A#AAхqUO?UV1:ӧ &Zmף044er<<p eee4440N> 6l6555"HGR}}}d磠***7nb ^@ ;=ٓ'OpѮ .][ AtKBk B^x'O">>HOOGzz:-Z>Ñכ7o8$$$:|p 0G+BBBjWݻw!vSSS <III8q: %vjx{{ѣpwwŋm2ړ?Á'> pرc|{FLL \\\+Wϟ>}tj=  ZƼy0o>Dll,bcc} ÇX\./u}JVmzGIIIh֖-[e˖I/--LWRRu</^ŋlrORAP8uTDEEMsqq5kSSS"''Gh{{{\R\888@CC|>B鎎BΟ?ˏЫW/˖-Ccc# ''&&&7o=z`޽t011D)&M޽{CWW˗/Z-add7771ydp8NNNxH9O'O`رЀNG۷oǜ9sD5 >>>000x,͆BCC1alݺU(]__NNN;.  EׯBBBp%aӦMpvv3.\@vv6:  vHH$''"R8?Ȁ+F555X`,--QTT$u: ((Hl3g΄.*++b̘10224P;;;DGGmذl6PQQAVVEʑ~̙31p==ӦMӥSϞ=Å o>kNhDK WWW?~  fddx"|,,, ١p  ڭC666 GێqBYY|>NNN -Zܽ{]eUWW4ܾ}x={6;&ԉ+yD4555TWWߞ۶p9PQQ!C0i$}>o߆:厱޽{-VZ  E:q f_!AA(t ;wtttp),^~/oۅB~mnFRO3x<LLLׯ_@QP"SB BXC`ǧGokPLTUU($溺:yiii k    xt۸}6\\\?үvX, ۷/(³gm555BlN*X|9QTT@P%rlmLhkkbVZZ yp8~: w,kznLۇIիWagg'jjjdq9    &҉uuup\5- Æ 7|#ֈ!##Ch \|_~a\f}}=0dUd?PUU%WzqaӦMxJJJ OOOy7bWUUUd?q$>>-[;t# :[455˗tG˗/EF666[lٳgEcrFAp\;88 PWWի 9r$JKKQUU%%%$$$`HMMDEEaܹсOE11;;sEmm-1}txyy'dٷoO. [[[iLG]]]#==]dߓ'O G]]VVVBCSSvvvHKKÐ!CD8<&N(wlAARVVx#33x"add@___ߴFwiuPPV.ﮘOGΟ C"]bZZZ%-eiiWJ<\B[gj:$6ʕ+RxHII%:""B-l4|Euhh(^|)w,Z#̌Zb ]}uuAm۶A]]2UUUHLLDll,222wP8z(?~0~~~Z(FII rss1hР.# K.ﮘOW? 
xP(W_Q ͛7ׯ+<իWSK.Ux P;wJߟ@EGGwzLQVtt4]GAA ~WR(-TUU)///*&&ᅤijj._LPBmheeET^^EQ[иwEYXXP={jhh/7nehhHM8z)߿O_?(777JCC۷/l2Nͥ^zQǏ&OL}tzYY5i$JSSrȑ#::Ô1A}"ʿS}5e?x<u=F>BUVVyݼyٳ'3FΎZb:"}d?iݻw266Klؘ200PFFF튕 BQܹClll:n ѹ ZQRR… ڵ, ,%ݻwwuAVR/^ɓGrr2^zPQQ|}}g=b NNNزe 222ܻwVªU0|p 0S꠩ؠ055ł }`ggh1>^3gBWW?1fa???8;;˸z*|4Z6lFEETTTeee:= tmʿH$''nY8p 8,\pax{{CSSQ˃%:C%Sx 'AA H'`۶mV䫯)6̛7 Ϟ=#Gň.>___|r_LVVVǎ;p9 >>7o͛7gϞExx8<"Ӓ iii}6TUU0{l;v /FAA~7fcԨQ=zPjjjFQQ,,,ĸLs@ %֬Y…  MǴ};[GuAćF   #X,X,~g=}4`bb(~Zh^zϕ TUU(h}}}TVV***bG~+VСC>} 88qL:obb"5I|;w***7nޑ}{|AѝN\  B.Ѡ(߈TUU`cҤI8xԑɭy?>憃˗>|8=!!!BYYYtLVb|˗EEE d<ǷLv6X,mEN///ʃ <<<בF3__5t&s8xzz8|0H2%}`jWA!AAgϞűcP^^ 3f@OOވŋ/( ꫯ`hhqaݨ~gƍ4_QFkرXj\]]%'Z/]M2mGAć 00Bbb"bcc$$%%W^pww-qQ !++N7nχ\.ƌ555yfxzz֭CHH ** s΅0}t,Z>6&&rppp~*4R4;;sEmm-1}txyy#GDii) 0w `mm 3334Yۢ@Q̭I{:Lspp,,,УGv6]zO<0p0a„ )JÇ:$1mݿ_rKo[@@vu s%*66 >|H>|HQ[nԨ۷osIjȑ2~IIIԀ:VN6ydj޽rӧO).K͞=@ܹSj^*::0eEGGS(nj9j~@(,Y|||r>|eeeQLihhP@ N>ݥ1%+},,,(uuufSԜ9sbP\.f͚E]tI 6P>zܼyZf f )Gy>kAOOO]9Լ86t۩QFQ,KgddD-[q|}4ydj˖-lݕsǏӟ{<ȃ})>p}y\"СCXn]Ey233y9rrr.I&uaO'''X,p8.ѣG\޾sN>}Zhb۷Oo&񮀆Ԡwdlٲg…Xn k.,YSq-~aĈ*GѤp޽.[}]ׯ_? $$O裏\x6lxg\pP6⋮ n}W,AtWNqE ? 퓗GwCSS AII ajj 0****(,,|||f$i鰬 ,_{4i(!::ǏGff&222*#H$''x ti8Æ C\\.\8|0uඇHgp۷Odigii")0ߓ'OpE())aʔ)X~=~G vOIǷpuuƍCqamGڵk߻7nHw[ƍCSS򐖖&rqo߿[=:z>4IlΝSE Ccc#/4'1k,XYY!77ӦMCqq1ʰsN@غu+x<.] y>N???޽ŋ.h*ݝv8q`ĉDaٲeXl[)ᅲfaaA/֝ amm١pV߉(AtlvuN |E>ڸq#\C $$Zp!S!)t2Atܻ4$އSS,DRjۣsN@GGNŋ~zzzs]]]Rsi8;;QQQBJ+ӧ &&&&={6^zŸ+V`С@>},R~G hhhTVVb ^j`` t, 48e8wTTT0n8<bb: []]mUUUbof~0_>}\.dqmY竅ޟZZZ,<|@ȵ֫<'N˗/ ?~sΥӯ]L׃AMM ?Fdd$F-y['O; 466?DRRp)zc֬Yx!^z;w7oɗhL|l۶ .]˗/56>_]S\:xCBBwƽ{ӧOo-7okCm Ҹ\-G}ĨN/!;1Z>zh2OiP_~#FÇQRR!++ wPSS#(  D:q[w2zҊ Up?m97/_b!00Pd?I, wEQQP\\bF+$~: Wګo߾( Ϟ=ϙ Kg?Á'>Ç#00P$,Pj*իpPWW'kbĶYK޲WZZ*o߾d`mm-QXXȨ=; ÇuԩS1~xĄPYYzQcZ IDATagϞ555X|98vĘ1|466BYYx$Hbϟc(++o; W,Yr<彭6k٣GBlgtZXX @-ei^G^ JJJ8z(oڵKPD_~~ncc#uǏc۶mN$J.**ѫE^ꑕ{үwE}8qys̡{=#99Y4M={DVV~Wf v   {8NCCQ]]\.1c a׮]gذapuu;>TTTcЀ!Cb III"(I*&L@hh(6o  !|HMMallLUUU~ejj kkkDGGc…(**BFF=+i/Ϝ9Gee%nݺ%WMMMx5= ˗PRRuHw={YYYByؠeeeBne_ ]]]0~G޽d1et0,߇wMpp0ZE MCC4)Swww:t8uN*r?sذax)޼yr`8}4ۇR~ZSu}w7nSܾ}l6>cƌb/sN'ڵkشiΜ9*͟#Gʕ+ɑ8ݻw*mб~z|'@_.OHNNӱ9߽{3eرtΝpܽ{Wh1~I KN8PSSCbbHRb>|8d\[n@]+k֬Z:Z~=4/d"YiwvhP9eda3  HfƌŦ}'=z4JKKNi@b&6o OOO444`ݺu 6n>p\3FcbbtRѷ,"|ֹs碶>}:_Ga֬Yؿ?8N1mb `mmMaɓB+++ssshjjiii2dPGzzJگ`aauuu^ZCPV|>F2|go4i?MM/)ahhNwٳG9s/_-$x">3Ԁĉ"PijjoFn; CKcaXr-Ƥ2 01;[/:f@|;WTTTx_!>>̤/(\x022@ :FKقZnWןiiI AѽHuwwƍ2xb,^XdUer~'IQB---qU3/--LWRRӥ'fff /Zĕ~@s-8ҥŦ1}"VŮEjj*<<<s5۷Of>or[SRRP]]-4/2 }7nevvvHHHN<>Lf\i'Јötb#u4h \mU^kf@gܢMoO_}͛۷oӟ޽{Gv$h}>ڎlZm T1m4|<)))rcpD..VWWBQ=e͖9WZ=d,))Ke]lyfaҺSYZ,ڡm&aܖ}ΪXddd͛74pww-qQ<~aaa 9`eeŵPbРA]G@@@ ?};PąPD!9q . ;;[mGYYYtBvu-[`Μ9)yI}]NKhq0I^~MTK :FFF8Csƞ?x=t12_۷"yϙ3Æ qY\~ UhAycm֙S;m4z:|}}E:AammMw^xGAuu5vI/֑%٢ {@ ˗/rm6<YYYʒ8RÇHII_ 44N[f zI(^֖~MC]]P=Z#2… 򂗗?.aÆ#^Ù3gd^DR;̙3~{λٳg8x &M}}}#==l6^^^AEEbbbrJlڴ |2BBBbڵ|k׮KgЫW/˖-z:::bѢE?~<&4^<}&MB޽˗ӝ@pqq)SK(//ɓpЧO899tztt4LLLၤ$e {{{zyE>Ɂ ͛Gы > m"#""^ +())… 077Whݥ~iX~}Wѭ$$$ zᢖ/B+n+ir4ϧ[&L=[=<<.V՞huTT7oжL4@6ј7n}~333yɩSbȑc…blH1cK,%Ks8\3=cbEYY)D~'pssk("EÇŋ70c h(//DZc+ԡ4999a(.. mmmܻwVF!u꒎DBBoܸq"p8s SDGGu43gD>}usN:? 6lFEE*++&4< EEERg_V-UY#->KKKaϞ=055;w$_FBB^ kkkhhht A.0!B !. 00r ˗/pqq5kSSSzʱ={ 88׮]޲ N\Bnׯ_GUUMա@KK^[QdUho>宍y"ǏDAA9Μ9GGG(++ʕ+4hT*Ҹ#33۷/l6 OOO\|YhYf!22fffѣlmmqqzŢ;6mڄ˗cȑֆ2zcڵFeSSm9rjk׮@͛7ٵkݻXbccyߟÇ{=ضmQQQdee)e=zClݺAjt_7U+;;???oߎ7ӧVE 8h4ӕҔM6m` 33S)P6ԄY˗/ 11OOO `4R5Wg+Z7GM|ڵk\pSO)YLjr ee{Tԃs*TW^?P/R|)t:6|Iwɓ'IHHP6gU[.x<(oFM!EO[lYÑԼysǹ|2m///̙C֭?+++ǪUJiʆ j$-ה͛7s)eyu޽ggge*o?K}UjР_"""8x c߾}ԫWaÆ)?mؾ}њ;wfرzڵk֭[%ʭ[(((MLQfU...XYYǀvvv 8%KvZ]ƺuxׁKRǁر#ZRJ֌6GMO3hР0վ9jsrrʕ+dee)n 0|p BLL ˖-cʔ)zٳr-kˆAj i޼9Fݻٳ?ÇSY\RF!? 
=b7oΝ;-wPݾj_V@Znnn/P&QYqGDVVDDDGLL 111Mnf\tI]vzƎKΝ=-ZGI-pppg6l̈#(((>Pfoذ)SФIׯyהk >}0dxLBnn.֌?h~FVVu!**:pUWw_ŋo}sGM|}חcmm[oU1S~}ƎرcINNf9/:-4iGח,?ΐ!C>R}eN˖-} ׯW^^7иqcC- VRʖ,Ybx' رc 'OVkhժаaCCV V~vvaĉ&Mlll :t0^(ްbŊΝ;eC5GCmCmET(B!CٙPBCCIII!22]vѽ{wz=^qr 7nL֭9{, _QR:uS@K=W՚ܟkZ,t֬Y̚5K<;88~zuѫ eB!_ B!*JNe۷ͳKRR>>>`kk˲eҥCSB!Dm!B!BBqFbԨQmB!2[R?5qn?\!D핗۷hB!EAܰC!Mݠ`$Bp{nM BG\^^۷o0*$>>C!7xcB԰ݻtժM6WHv ~5I$ !U'%%^_aа[f͚ԩSk8!̙C^^]Lzz:DFFr b帺G5gpp05bҥ5CMjljBM+svIXXgΜLBZޞtZ!Ljll&B!D1YYYDGGA\\ooouFvv6۷oҥK,]KҮ];z=cǎs4r˜9s'|F㨩_Wjaaa%[.xzz`Zn]CU^z={Vߟm۶pyڶm[q$%%|rKRRRsZ{W_}={Hlʃ蛫+U^:5B!5-//M61l0\\\ !66KKK|}} '33pΝ˒%KHJJÄYp!]tK.,\Wk[lACLLLxyy`iӦ -[u_|Eiܸ1ܸqCvժUt5k3w}߾}yxquueСܺuڿz*Æ '''fϞ ={>}`kk/Ws\sݻ7sU]ZN>Ncԩ\pN{{.)))lܸ+T۶m;w_Oƍ~AA\!xDiZ̬ͮYPTUkΝtرBƍ`ǎӠA5 ,[TBBBptt~coߞ={*3˪Z`` Ɍ3y֭>… +K -ZKKK233~:K.BaÆDEEpqYf\ILLիlٲڟ8q"7?iiie>̩-?P7`޽͛WCTѣ7Ns޽~h&N(ʅ5Fque[[ !m4Maرp<==Yz5iiiݻ UOOOVZ2 IDATEZZ#((s n W_}IOO\^k׮4jGGG&v$''+uuZTz^u֥wޜ={J̙3zj4i;vŋϼXZZҿy:S_/OlذAuU2q쌛#F ((H9btީS3f ... f $$$0~xZlIݺuiذ!#..N9رc9f͚aee~~~;v̨.ڸ8F=O?rg}F_>O=_|E}]x1x{{s73Facc/׮]3cÆ %z饗:._ y&L`ΝK ???֯_^jL2ƍGdd$+++իNcJ?s+({BQ 7jԨ2p1Zʤܹ'|mұcGfϞnnnЯ_?:ud4k@ s)R<ͥa?Xt:I999舃ܾ}D M!MIIaΨQxWݾBW߯6t:]޽;>>>4nܘ޽ٳZ-: 6z{ , 1_@ff&F })jej{| K }ӧ\vMrunJϞ=W/ٳ'7oѣG_صkرL v111{RDGG+Y`Ο?ϝ;w_1b'N(w}||۷y&;wdʔ)JyͶw}^`ApϏkגSj?G֭[hZׯϥKXn< 7oެPsnyBGqŒRPXՂN>n̙3|Fkp!?T/5)R<դQ5uJKLNN6@nn.IIIV񊦐N<'xV\ն/*,, P_R^=bcc ŅaÆi&K޻wX^y\\\:t(6mу DhhѣGkŲֱg={cǎw^ϏٳgJrr2AAA 9#hsZZM4iӦ 233 5RW6~Lo4M~ݸqt=U 2e wޥE$&&r~4h̙3~M<;w;޽{ԩ+baaAtt4w!:::uUݺu%33͛7SXX[hزe ,[BY z\|YYn/֭[*ںqFm*˽I\ 8z(}2[`=}4yyy$%%ٳt钒%P^}O!j'B3JA-RV beST[nMݺuiӦ ;vEh4rsssN:sΪz)jSDնQL ijj*gҥ4l[[[f̘Attts+B_̙3͍aÆU}!DiРر 6nСC&L@f#"",)#Gxiٲ%ddeeѹsg}Ν;ykf8pK.7#^z+ pssSC˳惺+++o;;;Ȓ%Ks/_fݺu16mSOdO~G>yy7TǦ}sĕ+WʪP|ŋB~HII]vٳG矹r 'xyy+~-UO)juSK}2dѬeSsWw_ŋ%}sGM|}חˆR&\~uτiiifgtnJZ֨E[:EiժQN̙3P%?(,,4je˖Fu-[baa{ٲ噍[|PčB >>>&7,kY4S+ϹyOuҊvA\!c)((BBBbF)Pv 9E)xzz2`۷Faִ-RXWZZM6-5vP~xhюiӦuSVeh4$$$`kk[)J0Ѡzu/^L:SNdd$9r"""s]]]ӣGkY\e9x`efRbVs=zU*ә>}zZ}yݓO>i!K/d\j/SeoN:FR)))%Ws!** &?mV|/W-k9݄2z] W\Ef\[M0`| nZyyy[|ٴtqeW^DEEʮ]>|xVWE-{BQ;r Bǒ-ZV5CQV bu$77իW+)E*hJeSDQ{Jlٲ%fƌz/^,n-T,OOO> _ػwoBn΄ߓ̒%K(q&pE8q$~*C4hCSxխ[ev۷4ݻwWVرcÜLVXwgٵ?#o6ׯ_'??sA"##y&۷o'66l]tQcccBkhF\\,[%6n܈/{رc0qDƏ_8VwWWWmVnUHyBIf !{&L'e VgjOOOOzz:Çg޼yFIq42)樽?S,y뭷 Q9s&m۶Uv.m~g?u리YDmBkg;SfYjJmO$|||(((֖e˖ѥK~!ϟcǎ;v^z駟۷4hU4ȑ#3H׮]ùsR+bժUQXXXbc\KKKVZj2%>SN֭[Opyի*3M-KKK>&MğmaaYgXwww6l@pp0ofΝ%2dH۷/gq]oVgii=!dW.T jR+:beV͛7n0:Ã&hOs)jT&S,כ):C)g̘۷ݾB imPScԨQ5ꡭ_oo ܹs9x /"?#}qqqdffRn]:uꄻ;'NT2dǏg|S^=:ul4͑#GXh"''ܹsݻ꘧LFaѢE`0Xz5;v0nlE|бcG.\ŋIOOGࠜH߾}Y|9_}.]Vs=ǫZٿ?:'O~ۛӧWhfoE=!}dW!D8qƍӺuk%EtѢE5B!#iѢEe~֚;w.s-q{K.e>/ҫW/KLppѱI&Jv]tܹsʌ]v)iР5mڴaٲe&/o{.uŸq*Uoy 5?C!ă%B!*T裐b+B!~.]wдiSsfV!jBԩSj}S)Bmy_,%%HvEWzB!^>|8ǏXXXжm[3g[BB! DFFɑ#G0 :t˗^ߟ=zpLFXt0kmJV gSΝ; ̙3 UQC7o^bs2!x B!deeMDDqqqܻw֭l߾K.tR.]Jv;Ν;p/'00utveΜ9ÓO>i򼇵B!EtB!BԴ<6mİapqq!$$X,--%<۷gϞٳZh",--,] sRRR8t dgg*e$''3fdee1p@<==O撔Djj*iii,X\;wކ2럩:uDrr2~)mڴ!99dLRq !B#)!BZkѢE2C43rHuxzzb kÉĉ8q˗qvvt';;dڷo{s? 
>Ã_99{, 6`ƌL6ŋ 77Wuݻw9p:uRݻ7?̦ӧO3l0n޼INN:y橞Iﱰcǎ|ڪժU1n8eƑ/j0~xN8Anh޼9M4aժU_={2rHԯ_\!Dm׷o_~G{=ϟڰ0z-|||D׳b _7n|@fff/`…888pU[~pyI=>9{,ǎرc5[ud`(A\!0cݺuݻ][s `VfG)VVV=zD6lHTT]vxxxЦMMf/jL<'xo30n8< +++Scƌa.!%,,̹y&v"22{Kll,^^^3|pu=ضmQQQl<=zСCnʠA 5ѣ=˗/ 11OOO @@@rN?hڴi%** ___&O̎;;k4p|h0 jO`0,^h+ 0z FZ*>>3gΔt2+fzv @FJWW heS<j!!!ygNE_/nnn 6L)W+ՠA'//DDDpAǾ}WÆ Cc.`~`۶ml߾hsΌ;^Ovغu+[n>8p;ҪU+lllW9}Ǐnݺ^#F kSOrJ^}UZl1c1\xs /駟fРA7+++0`@UW\!++lO8q[l҇Bjk򂂂 $$///F `4p ȬMb0n͛hHHHֶzBBB:t(Ge֭ni0F`0Rd&B!xi&  !!!bii/dffܹsYd III>|P\\\8< .K.t҅ rj;##_|{{{7n;7nP_~tԉqmhdnà߿?3gTHB6tZj:tgggf͚EaamdL昋T|aVV嵥eT=ӼU_6mҥ [l 998e˖ <3fŋٿrի9qk֬!<<^{ #~r YYY5ҾBTB2ٓ͛7W[}-s@m Pgܸq̟?è2ֶLD4 F1h >\f] ૯2^iaH!~饗Xvm\|Pܼyh֬&L`߾} :7;AFk4Yl|7oi߾=={dϞ=҇EaiiIff&ׯ_gҥXXX¡CHHH ;;?P) $991cƘl'++'| 77$RSSIKKcʆB~)mڴ!99dLRupi֬ 6$**?Ǐm4H&xObb"W^U?!SZZ{T7_y&L@^*|ŋ(q_~t:mƚ5kt 2Du۶mcƍՋ`zaTν{h۶- 6N:Ŝ9sƆ>}ѣߜO>Ҿ}{Z-W.wBQd9!R*OL6˗/dɒR9$''Ӿ}{ +3裏?L{9<<}ӓׯ]o S͕̙3ԭ[(ɓ→3f'eiiIf͘1cgs-[ɓK7u̥韹P4wwO߾}I {иqcBBB{9,W}:t/0Z+SGjӞ={@բذaCcj#5&B BˡC|2ɼ%)+򫯾bʔ)ƍ;v͛gvMRRgϞ5J4B SD͕J*7׿N>n̙3|C)7fO^^ov6j~PvrSwؑ~m۶)l޼???e6]UիǏ7:Ƿ~WM?s)jg*ż~?BCC(q\ 4V]v`0?חzKHH... 6M6aV{+ CeӦMܾ}=z(3J Dhh,ܣG*1w{{{/_ٳg9v{%**ƥѴirKTT=z0zR|ãTRSSWPiKdggٳIMM%99*ݜHU$n5ɆMB!Aq„ɓ'ӬY36m;v(qNnnn3y׮]KPP& /`tĉqqqyxzzx(e֭[DإKҰaClmm1cI&\ZnMݺuiӦ ;vEh4rssU_<hJkk1L_3gbX5?ߢ65ŻrsJZ{`ĉjٕH#33\V^͈#FR k׮姟~bʕ  ݺuh4|?܆L...XYYmUɆMB!AB`֬YQm۶0ĵkTqUzmYYYxٳE%999zM*/[hkU2JT,,,(,,TXXo>>>dffYbu֭]~Fc4B)wx"VVV 8Ш}*k{ژ-??{{{cYYY%gbnO]㴪ޟ5ƍܽ{Jۏ#{{{ "++h"""#&&lll[ndgg}ve ]vzƎKΝHL2\?~|8SGZZYYYԩS(:tJȦMg[n3sLڶm˭[j̜9 Y[[[obo-Z࣏>ӓ-Z>ktNyo3#>cFAAA|}z]~+x mO--յ*?*T?ӓmS]6m` //O,dyoJA6U=#F`\xuuKmooO~~~sݿ"Rss 69}1ڵkWj_㫊g~~>ח*ԩS:u*DFFɑ#G ""B9^Yf)!Rmh"?P^𨈹 LmdHol6}tO^⸹'6 !YNAXEբjAy]4`dff??ub9i$6n_ ܟuvfPuML )ퟚDlYڴiC.]5rau|?qD6mΝ;0aB?}2%6),,$33O>3fwڕ|ӍE̥_Y)_AAA:u/s3}z '99%KQXM !BLJB<&LOj 4̛7oWK111:TKݵŮ] 0*իCVVV?a#!)!o`9~x]W>&OVPB0j(6lPjÇi߾=vvvpݛsz}DD]t֖VZn:ի 6 ;;;={ц~v˂ M6lْӧOp郭-\~-[)uy}kСC_j:tgggf͚Eaa!OF1uT.\NCӱfptt`n߾]kJJJ  wwwF+*Xl'O6:Ov9r$踝 <~qAΝ;O?ގ;Om6^}U6oތ_zxy{{K7h vɐ!CT-?2+L ⊇ñcj:Eu ߩSťCBQE&OLf̀d-ڵk䤬9uC2i$v͖ڵ+w'&&믿r,,,ݽXYYgĞ={V)+>9@c)߿g*3f`ڴi)|/XYYưaÌΩLEN:5Z*Qֽ{w}]FAff&qqqX[[_fÆ ܻw呖ѣOK֭ϯPBB!fE /ץKh޼yM:ʕ+iҤ M4aL>]yW4 a]WIoVVggg嘋䁹t:]cׯ_G]ߢE ձQ@yg<<33'''mTj#tδ44iҖ>jڴ)<ձ۷oFӍ-`0yh4oܿ 7n\ͱ'??ѣGٺuk6gĈl޼/Tef̘- .,|ׯ̺B >nݺEHHHM!j^xAq?yXUaPHQLq+ S㚙vM!TBpf9cZ(! 7T}91AZܮsY#F"BQ6+++[[2 R[ѣGcaaʕ+2dHs:uꄟ_4ƍcСҧO9~8ښ_~E駟rMyLaɤ~QPP@aa!b-[tv:[lI߾}dɒ%jRSSpB>ʻLJKfRRRرcqss#??7n,^'N?ri /H۶mu{wiڴi4PyĉdeeSعs'ǏĤϞ=!SBAqCuĚ5kBzj#F"u[FD!C|||ppp֖^z:k׮ҦM,--ywbcc>}:[FKQgڵ9[[[:vȀ*>vvvxzzw^?u3gd޽4hЀ]2k,ʑ#Gtr~縻Kaa!:tuQ^97nܠM6ʱ3g2sLJ/` !GqjZb׮]\ra)yBCC !̙3ˣ^zE!8e?~AWTx+,,24 ;w,Z}III嶹rȑr >|UV|1c3f̨rqڲnݺ 6\P>GFFr!bʔ)|g:J*be-fff믿LLLh49+꣢rssILLdɒ%5rC6B!B!BCCq%%%%ѻwoWf}HNNY l,]2Ӊ! #;qB!xD6l Bnj Rѱru҅TVZM0ĤIt !*Ov !Dn݊c޽{7ߔ:>m4Z_!BeM[P]666رcdgg+jkd'BϯRxsuu%--5kLZZiii?>033#33,/^3Fŏ?ȦMXbW\>}[npB<ۇoq}r/\@zz.NVaŊڵK=44\._oFzz:ͫ8B!B'׭[Xv-~~~4k {{{ILL"ēNq(Ï?Ȃ X`,*֭[RҠkIJJb̙ԯ_Føqؼyy!!!abbBhh([ni?u<ձc کlH{a阛~]vxb6llڴR!Bh44o޼JjRDD3fx{_pԤ|>s'""$>|HNN{{{&Ow}ÇJr IDAT5jÇ9hhhŊ@BbbPtaZn]1f͚;C@@ӠhZ7o'++K缒ylmmm,Տ666ܾ}{X5kk֬@Q}*-B!ģwUΝ;GvGXXQ _{o(}vbccٹs*̌~1l0 ..xRRRXr%+WCѿN\a?< _W%Çuڵ+* +++󁢝Æ S]6oޜ0.^smHH~˰aðArMt^i4irݟ_QFq)N:/I9ORj4Z?;v;vsNy7nJō7c4iD缒iiڴi'?? }(,?##C9V-ZR8}J~~B!Vm۶4hЀ͛3c ߿{yy1e郣# Щz}i8<]v ___ ܹsDD@@[l)5k{/V:?խalwe۶maggСCټy3w套^bŊ\~]v1vXlllh۶-̙3:u~VZq5-[Zbٜ:uSN\a/^, ڹsg4iرcYlxyy/pQ5;wdܻwSSS5kFFF6l믿? @ݻlݺV[Ut䲲 KKKlmmh4γڵkdggӸqJ >, 4~omm/̢EOy&Ѽ:ׯ]#GR^=V^uƍ:zK:{,ΣSNSe7ʕ+T5CmذAU$=z4Z*k׮ҦM,--yw0aA&''3~xrssdȑ?~2qsssILLdɒ%^_R'''8|Ν;g[[[֭[WB!'^^^9ro:پ};QQQ\|333rrr9/SSSחG_LT*U:%UN!CͿ? 
u=իܹs/1f՜W:5ԔnݺСC.\H||<[nѣ=z7|oooΫZ}IN"..7oժ!!!W>INNcǎCA+$L\\7oFVE=z48-Rܿ* .a<ϟ_n[6mjLLLطo_'MT\B!(((e1u˖-0z{{{ٿ?{V쌻;->رc:tH~CDǐks*u$ _06JŠA4hܹ86m>>> LII!>>X.\hݼƚBqQU֖]*ǍG\\yyy__~h^x-Zpu6oLRR@.ݍ7G}Ty甔2S2[EJe*ؔy޾ѫ`B!0~!>>>888`kkK^j dLaa!>ӦMieԨQʿ맳y@_nݺNvv6&&&$$$жm[ 꿶oH z }?խQWXZZ2d ۷ٶm޽$4i< M6ҥKʵ͚5#((1111LEZZZmP [oiru]huIHH9믿֚+M66mTꫯ*6L9~EZK*TxjF=_JjmLLs5B4ikf00YB:Jp}222*& /~=:55#FRm{fܹ6m͛7߮B( :!ܗ_~i;uBu/ YB:ի;wvUx^XX{DQ=z=m۶QXXXj`ٴoߞ'*p..ذa B!jBQ IIIښΝ;ǚ5kt\8}4nb…FZZÆ +dvťK~.ÇEnn. ,`mHre~7ә7o^>y_HNNYfo̘14jԈ,~G6mĊ+aÆѽ{wxwIJJ*G^^׷9}r/\@zzz UVb vڥnB!Iv-֮]͚5ޞpum%ēNqܺu4T*XZZ*Ek8}sQ^=qqqJEnnrNHHfff֭[ktS^=<==9<lcuؑǏ۞[Nn}ٳӧcnnT+sBt`ƌi47oJ2jƺƞeƯ=ںu+...?DDDÇ!::cooɓxabq!9qO+VEÇiݺ5fw! L?gݾ};QQQ\|333rrrV(޵kffr8g֖g t),, 33Jzt~6Ty_~}*}KTqj4o\9fooLBR;88ُZp,n߾ͽ{j,7oq}*sBtxR Z{QAA۷o'66;w*ׯÆ #00 ∏'%%+WrJ:t(!!!xzz!j,>ݻ5ZFF/^B!ē-&&9s搝̈́ ޽{s|}}޽;?#)))ܻwD\]]`ܸq#^5#FPҊEΝ9{,/^}l޼YyM\~8%G&**J0qyFř3g蝿ڞyzzҧOJg3{TQ))) 8? weW_)5HLLLx饗&((HI666̛7yO?O\\,[e˖DHHÆ ã n #F"ꢼڢhtILLŅg}VZ~}[B<<<-[пJűvZFIzXz5iܜӻwNgggYh|ǎСCeJ-[o߾DFFdj5\p,ּ,ZO?7o믿̯C0ydؿ?;w͍|nܸkg^n,:uϏ>RtR֬YCJJ ;v`Ĉ@?!x9ONN8::ҡCFF2BBBظq#/^ue&MSژxFDDDEGGcU)YܜÇY͍WҳgO4iAPT!)) VK.]Q53;;;y&zёSEuzɡC8r>>>:v뿶/b.\ՠ>+K=(FWWWغu+fܹsOxxxpe*]2|p|qwwGeƍǓFTTQQQB@@A}fN"c䮰^{͈d͚5<3F1zhZjUxrr2Ǐ'77KKKFI`` Pj~ҫWJCݹq_3gN{Æ Yd iӦЭ[7ĄڶmKbbAc2j(lmmڵ+؊⏍enݚ4 ӧO7o}[~=ǏI&XXX0rHL\qFƎ˺uPռ⋥ưӓ{2|2o+\8MMM%##y+^cǎ 0@O!Gys ۤmqesb%0(ɓ'ӧ9997бcGN:… ٿ?4jԈ>}0|y^~e_š輡ohѢgϞ%>>˗/(oӧO,XwLYs+yoիILL}f͚R9E/kCyoTUCu@T@Z:_֑[ÐU*ZSSSu@Xp!lݺGrQ|M>|8̧N"..7oժ!!!Wآ={c 8._L=c,_:0rmذȑ,ۧI&+MBQ l2h49r_~uzWʐBa$N2AAA6|;T^{{GòeXB<ɒ Ic o~-ҁ 2(ZlY浯 @:u 60fcAvv6ٳG٭{}ƏϽ{ppp899q<==ߙ>}:|M]^=Ki׮>d֬Y,Z_͛7+)馃& IDATҥ ۶mΝ;su*nMW\1þ}V6~MnynC:sڂaÆ)mT۷O9>bĈR^xԱgyF h[jMKKӉѣy>}(4hfffBׯVr/}xҾPmPPG5s-{nA)/_\4ؿ~g݃<ˊ+h.]4CNN6::Zۯ_?QFpmRR g>_UyLt4Suq0vXƎKVV7oF燹⑒E'XPP *3___wΏ?HJJ #11WWWOOO飓2%%#Fɉl̊ƆŋZ{fܹ6m͛7߮B<~:t(;wƆsαyf?7o?<*{O9k+V`ƌl߾?XYY^x-Zpu6oLRRoff&7n$55>SQ /#7nϐ:byxBC"S,88777>2s=GzpvvT*ڵӰaC"##4iREԪėϞ={駟077Ã* QWvm۶QXXXj`ٴoߞ'ҨQZA!ɓ>>ryyy\7qD"##ue޼yL˖-III`ժUmۖ мysf̘KIIɉ^{_~E'(}D@@[l)8p obʔ)GGG Pf^˗^潳ۛ2ۡ(ٳm/ϕ+Wݻ7VVVx{{ĉFEƍ%"";wTz!5oѢEt֍ƍcjj3<;?*܉#PXYYݻ+%_sTT> G!$$-Z_\]]k&_Zf׮]xyyajj?@v픔B!*'++K}.Z 3j(0aJ͛[Z* V[@ˠjiѢ*ӧOceeU4nJEFF-##CAX˰aӧ=ϏΝ;WR4lܸؐz*={ٙI&H1c`ggGVV7oޤW^8::2uTdjt҅طok׮-w~~~~lݺ3f.OӥKtRk7n^n{\\qqqzږ^ 66}-k<==J/wZE}BBp-lB\\~-j@U6 񤓝O+++4 KKKlmmŻ^ թS'z2v5e˖ۗHvT|屶LJKRXXO?Ď;4x]t3gbjjUϩK]*5 >}F'))3gR~}4 ƍEX9ӓ봟:u KKK}rر#Ǐ/=77̝߳gӧO0ŋiذ!VVVDFFiӦJ#B񴈈(K[#*hh޼MFů1'&???퉈 ))Ctt4ޞɓ'wCc-DE'ѣ/322lڵ+iFիnݺƍOprr_~˃hݺ5VVV 8+WT*]K.akkˤI0`@7}GPT:;4VyKoNϞ=h4899~zݻW18uHKhР򳩩)Qecc۷k,q%B!D #00arUۧ9$$$ꫯҬY3F_Çׯp9ϟ+7odʕK8::׿#GBN _n[RR+*fbbRkne.Vl </B9~mg[[[֭[W*āϑ:y4 _<)N,baff虗EUp- —_~?*SI[%ũCnܸIOOI&GV_9XXXhq5I}"O}) xyyy1qDƌa`ٲeJ{Ν9{,/^}l޼KKKœ9sf„ t];%%l>3gǏƍ155ŅݻwzO>,\F_Q3d~¸޽ݻ㫯RLLLx饗&((H)B EH͛Ǽy駟'..T-[Ʋeprr"$$aÆa QdWTRSn޼Itt4}ύ7J.vYSNU*Wm'k֬!%%;v(K>Yd jT.\P !B<*sPQC0DEEaffFff&>|U:gģW+eT*]ve&R4ѣlܸx҈"** {{{\\\ەj}V:!j, !zGUVoܸcDzn:j5/Nٳ Ciӆ^z<~!>>>888`kk[%SXZZ;0a(uIzz:٘@۶mILL`?&M`aaȑ#2eJen vvvxzzw^^9~m I}2rHlmmرcOON֭)((@0}J#B+sY:,,,uiiiiӦ:U}S O߽{Xp!˗/tȝ0af*7!<==dѢEyժUe7e>2qsssILLdɒ%#OBR+1w__5kdff?MUu=>?cܹ+S^'O2l0uޒ4Vѣƒkt۷oO˖-6dٽ-YBQP/^uI,]*=d !l푒 0})_Wu: j?K.C޽ hz1$֙5ԔnݺСC.\H||<[nѣ=z7|ooo9ȩScƍ)[jEHH+Orr2;vuBYBQ!WI*W!D*~e|GFF"5*^CUT!11}YeoKQ:177gݻR_ULڥR4h ;w_9x ӦMLJB)))˅ >5XSV"Blll:wLddR彲4 '##J{PM#""a5B!&m۶R9۰aaaaKQ3aW@_!uͦNԩSKuLʊÇ3|prssٲe 7ndϞ=呗GF dK;l!j,Jz*Ν]vO5%,,L?b6la!DgB!666;cǒ͛h4annnxd ĠA3fLC{?~)<9Ӈ VjUVGqU1bQQQ[E믳}v>|ȫʿJ'j7bjj wʊANNNNN̙3+}ҽ{w~GRRRwÜ9sf„ t]g|///:wٳgx"۷gXZZE9Fř3gWjO5k;!B!?~mmڴaB<4ikf00Y}Jݻ`aaX`A.\յ}7lܸؐz*={ٙI&4~hh(\| Ƽy* /8gffbnnÇW3\]]IKK3(Btt4;v͍\#TTVcǎq1#G#Dݖnx"fԨQ0a|}}:t(͛7WP~ήRT*Z3[2d_~%T*NJAAߢE T*OUyZ w:;U*Z~j:.MjuYFF5>B!H(j R#[nebff?Vbر3 
[binary PNG image data]
neo-0.7.2/doc/source/images/neologo.png0000600013464101346420000144320013420077704016160 0ustar yohyoh[binary PNG image data]
9LHMI*LMSnҍzHm\Уa)!!הKH8c=yD |dh_ ;8(BX'%tnG4,8RXK}aKWA]HrN^Ya!6V]k=XHbrbmc#Su'^!2sE.)YI뀭A x|c wn?Z$h*-C(lH ˣS5[l7.oОgS`N8mHE6^8#QZM7$WPhC.b" 66(*׈o\cy``a^X%hgm.PR2+8v B)_q{Y!ej{fi+cV\T72!smUIˑ5x}ȭCzlĺ۹hkP~˄9PÙB B9KD̓h1h7pods[Nø6WtXPD\ٖU(V7 g*̖X{U{K ѽluvCzvȒ `80J8lāp8 {ṕh8 C qZd!š.a 믬MI"&19L^!]68ef\q;Z%&v[ PC./羛1O-aYcD^]@uC}i44^@D^m 7 Kh]$jU&ǺCD66c^Y(ֺA DsWsԜՓ ұn;d~'qpa J)X *9ci$Ps9>|w_TDa?}^xR;'qq?OdƋ돛iR%>m?3ـ w̭,,l!rsM23Ѩ ""[)<T>cD`TM=Fh?w{#.qFA$"dSu6[ :Jp/~Jfn7xeSeV}MStsLMݲlǹ_$=׻u,J#< 5Vlu{X* 3Qs.ТPDp.I .sSDp^ŞdlFGtx,VGoVNtC6k*.Uk nLC[u6FLwIآ-IDZ |^Soŀ,]^rpϢJn݁JdiC'lo۽c wD8[SNKrH}%%5;A /T]VxG7_6#x㎢hIJo.ĤVa]AkX_tA&A!aP yk)̑}I\!e\VڏxlG#NV^,mUvuP$\)K*4PB6#D1-<3 Y5m=` Fsb-KKTk):3%[-OSuƇ6ҡ:%Nm|\qC)x?gY {*B =ּ՞93[_x I| 1`זCM*x0VvDLgOXRJ8%-|xH$Y)(=ӂ1v_e`OM#0;t DS7PDpk oA)@kVmg3|hYZx eמ)!4FnvIc7u} uIfdT|zdZ-kGE!j kh9=}w\?S$cE%qkw׌#*g:L2F\H5 @]q ^Fẘy,xeGK.ya}A|9~YZaG|j^u~}F*@TD^ibx`{!r]aQovDC/mL7 jxK87& D|m}Mطuc ZkJ~F/axuw# cDHXdH GHcc;BHNtIx3a=7f{l1@/MbG*Ht7,WC n[U.õdߒ[Gy9uk*u ^؃5$^ ƀIȗH5,)" Mb*W2Ҋ!MT&`x.N` cd0`h)Ɓ6m# \MeG6rU%3E YSDu<2:z_>w@e6H5]JDl i#QU\bR%{͒QsyLnKU|0bgvmr Kq)tLBVXڐ]Eju0"IL5%^9>&Z>U4eǀ$C1aO|BCHdD@foa/%rܻI$ -"jPk4g{$Fb鎩)4asK Hq do]z^Xv,C-K*Ke-*ȇ8AFPj$4"@{) JxB~֟IqҥG8!=Y- 'I*.Nf{Ye@%.^uUEԕG^=Kj|x5I&tܹ'xR M_ݻV=4 ,XDM+Zg"l7m@r"Qz"8ӹMs5nPmaCL1]z󄹇 IDAT#B{p9)ѻH͒go/N~ NrX5ISZp3Y$*P*gu.@ۑx7#x/j|4[AygҎr?"FxkX (vD8 ɝEDH*ݠt' "D$l!iMA vY \sC[iTy8A--")W\iuq2+HKuRizaK[9\p_6%,A6`o%H5U[q$dҊEV+>H^Va{lTN=ۈx*HoQu]4:oGZypގ|h:Tfq?KDYK#hc6PsYÂ8mOV~Z<|N ұNwafˬ*WIσ\>*$~l/Kn|gM=6 q~Tm=Rbu%ZT%p&F-uy֟IIq\@ڥB>ϟԧI=4 ;&\/,V؋F1lBau.1@p )b{>O3']aaG?ʟCBUA*Ǣٿtλ"W*vS÷-f R'׵bޙ"N.w[;Lkmڦ43.4B-U & ѐ b;/亊iZ*F&Cm=Z!N b.#TQq>ûAsN2\DٖW9" 4|clFh;^uԑu!Z3&t\3>d@ 97f$ !-~6W^\ny=xdXy^گV»!Jj᱊",-kwt*tdLQ. +<6# A\#uqF'Átyw~gFX[-HڝT(_sjS2.+ X4F-I$(-5Y3Od:,W(˅u T^uIJQyE7.D*p"7u>J< nEƍ[k-`>DfBЀۛh.`a5&$Q%-2hQ1)HSي$@Hk lzܥ"{g7Wjzqe*gɝjAFMT6Oi {5iEH$DOYVӈYcxI*&W*@2|ވ& -c+Y8̌o i.ňt_cgi#m5IyVLG@%]D UK Gwo/DM 07 ƌPp, ,-#EcSج ڰ-jiM9'Hxk'K^*_^ltq=hJDzK+^δcz'"|!9C+mQU-CiQ O\N}؞-P7|vؘ>kIg@9َ֙5ҾgX/O0סԶp b h#1ĆGY2 3#? AKu,Z6jZ9*IS)C kXlms=O$Nn>w޻ږe]1Ɯksߏzwwi㎉cEQ0H!%DFID@P@H)R+ݶn֭{sk9VCSv>Cý={|cݓ8xPկ~Mѯ7Jׂ% JM I8sF[iuYU?O<\)gbfW襝<`ھr>7rhDͿiu`mI$ xDS9qmmfc71"{\Tc67Mi)H24F$cYyY퉥4[𵊱@i^^E GR| ԨW+[HWԡzj.aJF':X5eh/n{#fTPx^lu}t~ a d23- 'T^dD/0kGJ{v~j?71"u^Ipr>tS=oΙ.! `xy׼70b)SisT%p%__la upߩz"v床Q>|\fMH#';Nu[%ta-OZC1KԹQkW<Ǘ)`O4Zo-Jģgm_XvSdM(jwnIkSÐg·0 r M|g=%˨7d$IRNTd Jq\kPğM+P!BlF;FG6j=oCסL3ekEyn#9Dİs[4 *%ZBP Ѣ!B0{Zk%M^:}>g2Ruoי̌&C.3gIՅO~TOȑШ^N†hQZs<0(oS\Gk&xZqwAhEv/zK!ؙ@(tlGiؘͤB##a 炢R<ox"X73 45?n5WZ*x4i݆7E'*em->l9w2> vAa# BĻhveJfqn{-6Jc] $!|_gbYY@wuW/z8>TJ)ٳ—~75m> @NsdjDm&ljQ>_|0rƬ?W3 Vx2r~[ݸ<_bseހDHc:Z[>P-<}ZK4j!ī鼬b;sDG;ߢ; Epxzc#w4l{NWlt|)qyUXD \L*pD$p,u  H0#K sAP#ŕ-5.Pqc ,%I] |+ުy?`f6(RKo >14ΉLJj #Pj[vx&v3g 0-_V~H9vz|CpUOwUp.tTwM݌%Z7㳾8 \8h˹>ziNo?̽<֖xj'=!Qe1*-d6n`E"ɗE(k3;0[>H{7x(F t&L;Y:'\̒g3vׄd7c3NFՂ`v_8F<]D?I?7r涚FmCcTU*݁u8fS;E龅b>W(L$ݭːުlw_IFG԰^_˹yewR_ m#&wב8]Doaת̚{ه3TBai${c$IUT1WFD τ.8BH"+Dd$U(Vr+j^WYV0v_+@mW d]p½ޓ$ {D8AD"w^݃-~ 6B%5s8:lw?&"&+g_2DTֶݛRV$i)ĮÃ7 IJh?Cp4JHXµAnzaB\W%Zm54pR^⛤,ryQN4:B) OHJysu|bk8dJƯF~e:%צ.u]9'E YY=H ޾ONgbYYg#Z| _X,g,6z^a,a?j7\ݬ\($ޏ5@jH8hq3O~7.8MYsi>Լ*9.5Ro 5Y<|wZm]ikH- 7N͈ƔI% {B!uS= :aNJO^ܚYBPA`L\n9ŁSJ I3Cbl:Nc5f$E̙EYzCu*8MX+#pjwc'%vx 8-Tl  l7 ? 
<4y@v)`|Ã#xkufBVX ʎ+*xS.Bŋ"a8䐞IS;y|LY)_),GqsQu 4rt~6Xuj/Y442Dc{@Pل2D2`M-=SўӇH̲~򢄸nzB[)-\Q5Occ/Z~r%7>J |e_93IGZg~gQp?hEZEcT `loHTqԀHQ)#۴Mel96?/o埗ў6t(b٫#Qnhfn"k]A퀞xi8 fPR]ipsT.&KvMSkh2e^EAB\]$;Oݴ[gv_K+Cm rRo:צʚb;bPd6܀G:̥gNؒˊp~˲G[ ,(p#4\DTAKLkgjkdD[|F TEPGtY}D4ՀU&g [X*0]<^)/zhS&uݦW~ Jg} QDIG2h{gb ]z^'ಷġ;׳#3U04I誝 ~,cՀ#Էvv/@x9DBz|~D zpĦUjAA-]Dd%Wfb}~ΙXgqg qnfs9~F]Mu_\t|\*p [_&!r_>^9 CvUw_Ag?s~`0lY^MO|AP@M{p$D K穷fdrIVC7Jnl[` 72 oP҇DW`TOu`H/"E{Q'xke,ު02 ""m&Ƣ(ۣZ.&¿'vOe>@-@'FҘ]D呁E ÐMP0ӭꨡD֪Mܛk4C &!!Fd_xmm#qt{/ zJ@9V$@SW:4$IjyV%tڝjٙD*ZnOt ]QbNͳS'M fP0Wɘ]ZvF$ uK3`jI -4QʃKvs*`6GڤuPǕx98u]ݕXRLBѴ9Z#[ovMNisj \i9 љhugqj~ _8Y|N7o~i r& {@~oȓNiǃ?.8zJ f3,&ܽG?-]Ohn0?MUHr$aR׌ xc9F'fum*u@MuK&jFi_ZO¤YoLh# cYLdumvbDMpVZ[YR+krT!P=꣥JaM4am[:9D67 7$<o TC%dn,zęZ kFE1-nְU@|[hhFSpa`ܕ2Gʀ4㩍*ԏ 85MFF?ז4HBd&Ja-L؈i=^EZ.0Q3] _^S Uudц2yAW&!Rcbt&Z.sm1I!9Ej[!9v7v*  cQw7)8ԊtbQR Vl5oZs"ǃC[#nSo K$}&M܅\o'pQ4Ob)@&LASh nk$&dCf-7?@<&Q%rҴ2%A zYX0r !3 y~`N}10Tvf$ȽL\hb_UY1_lġESk CMJ5>Iapx<VmRAuchX0c0#mxsw> f{?ş&WO@2i>SlZڸZRii[.H>QUo 1Qbՙq%E5\ėtHąDh\Z /jawtS4AZ,4$Sb`v{m 1`$THbckA4'Bۥq3rtn4Ȳ~nMQtUz6_0X>7- 8N% ~K:d-d4*SMOC%>#'=8 A9v(wB ?¨n 'T%PntC%\;Hijf=A$~75޽żã T*W4y[9f_,?WM*C <yV^񤩖 @m#9#S儼ƊgN⍴uEjf{= dvq O+Iw[Ԍ;yk ɂ A~zw./lĉقJ #dd "gN)ՊISCBPS7Ae&A /ˆziCfPDq24{^ңpXhP+jxV̋=V?64Ĕ*mq#VwS;b9?~`%51.X"8+cK˝ZܯA,dS@^ؖ:&|HT}غұFZjdB`hhizy-y5S>2p⵺&iHpü\^ uhIl"a*\%Ձנ(iEBRJ{e;6)-Ćklz #KE;{J}̸UFiWHZf*mcRS5uN&ΐL9)bnSB4@h43*4 hlK&O WH}Dͼr{I h4꿲v+t 썒mM넫<%[{5R`/FˮԼy4Ƚ:BtË% pN|'M}oEO(c] b̔`Ƨ3/tLDDC~i_OO۸ iΖkSy}rQ%pͮ!7QT;v&R-M.ҴR 3eU+8+}}}k XJebA;cY0\+;(ZP秨4zZN ;irƫMҾcYi/y>PD?1Š9heq-@mTsG߾O#N0?%7fdxGHD@ nE̹ sJpJr*+)-x)m64<[Ҝ1!#)^]ͼG]. Ýqyi{[NMmDMXx[7-//9kEˋHD7pÖ]SDy#˼MeCuiH/]K(Pc$kLrXięljO/8 e,"_F dQm[q^7sܪ7^e@T rK^^ҸnmZZ1㲶s!WB,K@y梔&3FCSs}6_N]*D,f_P p4,Kkހ}D/'I0T_&VaNDD$ ?Sp;c2ܧU' uo|H"z9}ŞPL3GKp R$TzzR(]P9__ )p*T'͆CetQk"NwD"WZTo9+āqó'Fՠ;6N3J(r9kBUl [>ۓ?WGWA۩)"G:iB$?auHWBBo.,cKE#tG r7ATXp`CSpNlp][*KERI:F;ӡ"po?W!VRd%a X5} y }bR?V?9ݟ&A?jOik7PB$L%T.U!DQhű)ҸC[ )R%חvryL5dGoi"X^FB 5HՃT[=QtY;53QݒҤBY\䇲^Ƚ1N]T48NXC<񘅐"\Wwּ8#C72u?w$"0\9`B.6| Yt?G;$f%ZŭO^.-ShoEDF2Rc$yTk_fޟZP4K"L-Y8-JV̒ZE/$Vt0A'\yͮo鰝aƙXgqg?pgq#qx`0-ۻm)(>4|UJ5Hp~qCBPZXJͧh]~=A3fT}^I1mQgQp qZC$mV '=N,݉E!f΄:Q 8E!?wd㝢c~HUq +yc1ak?%&.x $DR}&r* v~^UjH 6EM1h**8s 1 T· 33!"Lֹ,&'exO^;EƖE՞QpY0ev*ehLӖog^w%T_9 n!]&Ta;maӦv!!As'sIr.ŁLMß^:U21ſr.X-7] uPqQDb_f[ +aN'li:%GjJ[AfWuuUIHetɆ1,1% $ry Γ8$F.FۈXefߪz{9yX46tB*ԩuZk51۰TD3eB2){*V-r+מ7,"M;}^.X,5TnpȌ+ynʦ]}3I299)Q"K٭W[{IPCö|$ rWt?[C]?ϰ8bg1~dR+L'/A?kcvR%PפЗJ`& q;){IpC@=c+w6O#\T߈ tA?G-1݂fЄ&2]T>@%4 >dz0-;)oRSIN ,hQ;w=L4i'/nh:e;U˰NRL26b҅ESUEߙ 9<0Ob2RW*`/Aކwб!>ӽY-kbKpn;s,ynHhCY ɁkBc? 7R.VkOn4~RO a SGKL[LyE&5i K`ű2?3J0kBcHmKB#OflS%z|xor,p|PE6]OS`־4[,Y̒SRPG1&3LJ%dh.J{o|1+PƄk*܆ Wθ[X,;%gE.b˵ hP,} 0/t ]s_oR>^BwU 0?bta62N"U* ˱7_rTT'_Ѕ=kR>YIJgCʮpqGTX{C#Zf72yP|i q`bڕPG_쐼 5\Nwy϶s P5A"u՗l9r=g}1鴙p>rICU-7sVp{vBDИن h?~AQlYw.d,*q'DhMSr4,\u<#Z9E6qiIse\:'H*YBUc˳,!Kpd%l.]%;%؃';"@hKrGjxˢ׵?]d{>+}&HvIvoN6|T#FɔP&oJO$ txywg耕,8靴!Rٿ*\mzF )"Q*%gJ 9NYj 2^P=]>RD,awGL 5ۥ+4 <#1<s[椤~H?tXd`pYJe,UO6p>U b܊\`p.t ]軬ŝ7|s٥x ]BzL\n,({Q]LMFR378bx:2 w;b'ٱ/|^ ~诵c?c)w|x!9? y#̰f沣K~ϴSB""@ߍg.4TYP `rC D,A>(IXu aq^Ϣ}r ٓ B/?bD V1yJU&|(䣩 iݑu[w-?LzrbUVFlH ~ OA!̙^x  tTCRO'V|KO_YW".r8c~=&V!* &-O r)"*axs-O; UoXdWfH,li4TRQ!5-TNN djCHg@I)B *R]uNx~SJxiuHϺkC: ! 
"H]PQU_z$kRlȞX!NSA+p.B}2R' ~a IRN+Zou}fgQޥ`##E"Rr1\̒UVcK[/WY7Ux6jKB_''.JCv,,GdbA5ѓL)--?{xެ@ HݣS9Z~yqܪ>d<:!zL~*>Dٯxa"3ydA?G}|, /)s~!̹/ p:Z}B#1-̙,D랁tyrF&S:C!*u;mm8T@,jb`PþP8bH>̌tXG_< >&afn;D|ns\"kg ζ֧t2v_=[8r7*CaS H`pKa΀R ]B?9Cl|'.4ߍՆCY9W)K{}w.?s-|f||^n2 vM=Hw &w>߀39*%T<~o&(n6JPr[_r,N{e,Py*teYM+y^L z$0,Dm2ivnDhbCrMRs1BmrLT*h$䐜~{e,bD\);oOrSCvѽvo"Omk Cq:1%2 q"Uȟ/| /J^cg%9KG'd`wmh?:|؛p*ޖ*h#pZ?捥 勵F?lQ"?7ʡ}7ڕm m&Uoi4qv?@;3wB̵m@}eYI3y#bbg儣<ǽhLt@xd=\qS8' Lܬ{eCYMtfix#J uv LSb϶GzSOlH*m&I#-DD{>mE8K ~gOֽAo39( MER1 _iCeHBtk政KNsU.;h ":r ӻg>9*06C  5#qpI:E&tRU݉ fhWK>"r\ d0z ~=% Gp;l72ALsuw"LGQکϪp BvݑNj١k"L g(,`%O# W*(I,E^Rܹ'ؕsW{gPdC1>L 6{)'pҸuŁκ!τBdiV!-:G)lXgY Pn( 7"K8w*s9}AdzG,ANQRDȔ3РFGs4˄ %~.| ]B7Gr̅.tﶾ<89fxu^Bk6ncrG_;cA(DqtLyZ`"bfy]mi Ibu :C'H\R<5۶=k`$C,|ge}}-jra.\ Ujkp '3d-&6+8Faݻ)3Ry37w-{^e#?to*nqhڋ@%Im9,(Yt6Y6͏nG1(#۷)a, Cd쒇 -QT5n(ߝwg>Z:WUDd%Lһ~Yzo+tʩd;Z;>%~Q[PomEI  `ͪG`؀OЁ•omg?g5Id|:-<طyApMõ(.7p5~4?VV ?34n]tQiFPҾ]P\=0­u!zpȧMIcfT 5y>0YHPĕle IV:&!"ڽ9' O DJ)x"FuPp%zOiXRd~BXmSY'3C>8:~c9z~p0:$I2q,|x;voIkޗD$i@9|3j٬GVX,*GvTN/FigֲA0u7wzȒVbD-I)`l$ _<=-K@vmOAQĩ"ZtN{1SE }a5?R3łvek%kƱ[g~tLpʡ,U=-)E|;w-T!NO|s_~w)UFb~WU{?=h !a*%Q~6Ʊߤ_iYbPyI"Kk"^xPk+4܉B[FPЅ4jw"'m~g z}M%MiH8q@ERkdM8IlAkƊ%5F*g$- 2j(wP-6kjFAiJfJe F8}d+qQ$2֋#2,0"3X_;`*(-F{mPi1&ԫVn(O0F^ =yHltW8UihX[ tN$ؙߛ3|Kubk˿:ve58Peށ`u8sJy)L\ͥu@IjJAr{,D0D_4f.t }7if?x ]w,~4?x}3{Z (Vj] PpE.[;ow(ǝ;dG緞G_\]%s8?>> d /f[y}]> }J 6u*\.A=OZ݂oy~K+W|%Fad.9j8~y4W4L~d+_P|ċ;U =U0qx+}6T;Eb:|+5KhhR53_F!>SK2r!oxDG햏"B'` (]MS:?G@gd}^pS((̌%s; 1ҎuZJP u#LUoG$FZf3~'XhS, 3wwT y17:JTzS8g, l[r,IϹVoAP g?]_W ۅE4 NG89d8`P* J=,⧖B4~po% 5 B^Rxl^2tۆhxZ**t!.v9`N4eĆ>TfQ\kX=c!*wIWvڟ5>C 6QD{+!/m72(li -pє\RO҅2C4KlnJRAdicdq0̻jtE':4D :q5Oi;"1owb)9MTqE3a{wGGWݵ_GnQ}_+0cOy؇.]'[- wVM\Gh *j>nVV߂dl̼Dqբh)ieE37T˶}~m!.?/w-;ҙXҊoƪFCWHQrWK'?/5q88*K옐,oz ,F. tmdQ(Sa~o rycvdWQW֬YLՈDgFWsx&vПU`Lvj M ⒔na>K_(% Yҥ{jɈ7Y u㛹A}])v<™'Q.ܮ4qMg M!4\>4IZ# 9$$'W:= "sڜ^6]nttXam%AW3NRW{IԲPޯ zʓ1ADQu<^ @SHF0U"*'UGcs+?SGH/1_8[]ڛpOIaԻ3oT yWQ|Z]{;Њw! 25z:_KGԉ;IuL2`^d[#uQ&,zr}&+ŲRC)(H _^Bwysѯ53 ]BElY~;T椡 ab*ZU,B%!J#RlBX냬U4X+ )D) 8k5 ]ha:u7'ߘk+jX/swU"]K%:l.Hp7@p~y vk#|8{egL]U#Un ;^G&"XI'ARҞ6 %qiP^"%*@m!k578 qq r^C@ft4:HV|,Ms;\:j>6']TZR8x؍*":=މH)̔Lj3 6KYEK{I9xm,*U2zLٔxVo!~ sI<&t~IDZ={C@C#4Ps|6ҾP8ɨ;+x?i}ce2 M)wVS #!qbϯ g _ï|v,~ā t ͍Xdg a;BT_4\;2@")T,iL}(h'QCQ1}FUU3}2i(!۔MZV]<#fIJY@bi>"UaN7!9E^^WbN;F~H3j1쮖(5q0he93~v[C_ !5F?΢A[ "te +{2z ʸuTj"tD ޟ3fYKҌPwRRoK^p$;sa`OsNW|2?nyH'|fߞy bpq-DWE2 U|5荁n4wz\.5sIcfNI<{i/'h-f2QGVr(D~lGH^*I1t"1kWt8V%㌔qy$ DO TμxՉ5s6sG2g>ԥm>83ƮlM/&orNi31r "EeT.U$%\ld.̤ۘ+фĭ/sZ lD' N" 8MU2@p}A2"/)`cY<{!C'^^i[ 1]t}?_&-4 |z(E)|`KP+V!Ѡ͞kTBDY v=! 
P8zԍQ2Dͼ0IrkwC uMM-* 3 rF8sE KW6,y~ss0{ ]Be-n;, ]w>%EJZWFa*kmgdHGk̥敒coǝu7a=glp)F%u.3 uxHyu5G.Qf#:: 5U +_'Ʀ$r^FZ^1^_%o7~]/Kgz'!$DF?qFJb@M$@y B"8jHh\:DK><Ȩ or<־=x퍬D2sX\w<tA%IN8OaC0އ->c(Oj"CEo/t;Vs Z*} g|h*Sk;H-݁|MÕܑ|YI SAKZm rk'M-O0T_ e6 $47˚/!jta Hd›h} gԹPzI0ҰPGD,,bbJUH3ip.QN i\؅>DE1o1ӽIV/ĕ <^*+5Wߜ;7dHYm;}ju kL<~cfw-UH^?YbL NKAnV|o#WGgu/k(WP?aˈx0!"N>~ئWt*ύMm*$ts{eW +=`M3lIJG$96 W~pԱ0,3"E9cOGڈ1}eM"INrb}z+ajęsӍǁ'#T IDAT%%Mˣ*wFSSj S"nB>_T  (%s!RiީY0u)M8lVe2C1G`ۃ3@Rnv"0 [r`%#_M(v֗cuw27mLD(i;sofb3Ph/\nl$,cKY4KR%o;AO@dMKh8IЈ!VL)W~aе0 gŽ#.ȑnW`,?Md.pg,jz3O~ \g 9#/1lD7q>v*rQj<9UݢD=u(jbzc,3eqYHڌR7W^?wS|)-OY2vU҅%2S+?[Ʊ,1fZEeUD}b%*^˥W:k3_ÔpmïS;x[E Z$k~#c!HX?H=(f5<[Gӌ!J &=i@`$|#~\؃ uv(x&8K[)9 'Hi9#QLH}ˆ}ܬ<ٽ⋎9D'wCuXj=Uzve]>$m{ [؍0/\c980 GK߉~wR$Dޯl?7;3Cӄ 鮡vD~+j*|[-w%2м Aښ_f/3_X&I2cd&wT˻#DiQ]dc @#wb#.qwަ/ ۍЩUeVKUd 4nB0媽 ,Hng9GDGcB+i~yr驉aWꝤ<7AJs Yә3h W,5JĪcz\C$g۵̃L2ld *?J`ptD쇷 h=Ō(Gx}g7[Y1"ca@$]70i= 񼩫7ةÙy D,o/IMA†~epB~B߰ ypiŭ!,Y_2Nb\hk~!wRF5г[Z}u[&>Hv$^C@U`g+cdץ$d/1g M-5)J!ABZhzJ؟j1N9o<Q2?-Ie KRNEwC@0)la{yBQAn0m  YI}MpKR5)]'U*P[K׀!A|a¶v1pYXvL%СDau~gğJOrZw|ۼ<53A5f xYlAe 9 ga,9spAat J&|N"f3g-vL0ꡲއ!麿o#ptJO`>e"}bVL66);xAr7Y5s{ZJB<z[Y[9gQRxJ$<]}I3B`ă@~8VDκpD҂aMe d[dVu+Ev0O"ia}KP?_bc}(s 2-lS"" >&&}yY84η&}oBT3Fgb=<*5Ǭ|mO'2)CgG|617}h=>Ķܔa-we$g N*=oiUph1-tUe%w'[>Y7ߏ> ]_ݱ |A ˢu+1*rY|Rd*2s#A3r|;A)!2:H!>tAIGx6sd\_ꋘ d#Q©'  ܀HL<]V76.D`_܅X73{3Ǒ2u*NpMw_'ƛ+pmϘX+ҟl(;2#eKڋN_8+÷eNRՊCP݌rfTxMɈ^Q Y8[(gK kNz޲",!2o $ktR쩦EZ#am0FZѳQD&dR淚/ΠUz #\_'" 銔|Bsĩ.KTo@xD{ v0#GooFkuo4Kj䁱įH }@(I+%db1֗8qv[yd}/\t>{H³qw]U)~~~o bYL7 +F+iz.* zi7ɧSF""]AItOD;hvf7IvIG&hl9xyͣv>NwUUiN>y(8!6+?Uy+II QUnj_00>* ΒHy!o,j#fϓ>mFB䉷WU`IG̜\+PdY{C$d!%L( RIoޞʵ|KP5`" I"WK%єā{FIdO|.[BX9׳܂fbm:+CS_qLx*Cp53KZUyOZ9IM&PftVPKpFȏڴu:,OKu&-+=ȋ.߂$ Ǩ=@+Ȍh1Rxfh\[Mw$#Nm)̅ND]d kqn)w ? 3蒠mI;d`~9Y79҈f8Jg$eDIuٞ"t)] v WuYsiHcL?x愫%ӫ_8\KhǪ ~b&o\`m\fjnhH'aoLPS3-EEoU.C;^)w`];'mh3Ȍ+s2uJ(A a\z{w$>U1Hd2ف$θi( 7_1vvHPgk4}=CSD%%dzm 5p_u|FiOp͡4U`\{{@P:*H@IH?BAŽK:K!t~~Owx3Re+˹Ӣf )Q$iFvYe!gm|Чgsr#+(W5 &0*tB_E QsOt|*3KYDn}?R7Lٽn m98lRM CS;h_*~&Dcd.psp''pzUSGJ͇5囅7k@cGⶩTʥ;E 3OP!pcgB8g><Rd|x K.:.l.]3eik&xƶޗS\8egNr檰fj#7X8oۧ;.1}-镕MSeݺζ$&< )Cr,m:۶ױ^wAPM .@ B,I&DF9geD"T* ]H:ʋRYLG6Q˶o9?kXJڣ&WĎKf~x(JZht߻*HZ8|t(It\;dG<ўmzRƣ'<"o3y=%0xP~Pxaާ**(+͜@*zYS~e#Z t!Ӱ\ Ji?d !j.HcrGoy1JMQ{$ѷ{qؒ4l)]kKY$ O3gEry+7_Z`#DP?ٮ^E)\;<&YÆ4$ ]dJY-{q:doX"b aLDyFhEXG|ap ?ox7Go/:BfNhF$p0 Wqr&.t }i wƥL U=w=˷/ !zsȵf!\N?vʯn~m+!t݀]%脈Ucqا{pOs\2-UmNQ ~H/æWgz9@܍7Zh슨n Z)8wك.p$QiDeo&JVI"3k*AJijbDb!pfphRY \ m;YCf./ǟa*KO>>zzowym[;YZd%vį"G'kPcZ>"H3dyფ ]|3jZd;~eb]3f޻("T zD*lcS;ݳ606gr2 ISox;:^-N=jwM IDAT)Aј>u凌?}v9:)gX/C$дix&R[pnZ2|rMY!u^I15$ᣙczZHnY|{% %3-69fPR!ѭQO.,3 K+NQ@7t@:&+e_>߁rvN;2cu Uf^S(ޭ>֨O/E7sM~:?19~]UdBNh>*'T;\,i5q+rVE5׳l:"L6P|Sk H'^rhU x>kHԓ4F+X F_5߫}OT'aT%\ 6WXBE;ٕ=5Ñτu T8L$Dd-sCU9Uсrd>y\>~wvu )R:Hd"x e3piCa-$Ge^>0kA: IVf/+PFb^g˴QF:IH Hx~%v֏g}l|k2bkVv!t^[z$u!Ku[o[ty7?3[`hR?=^@)_l z2%y5?t$KlF? N89LەyNkyQ,AגWfe 8J5ţScc)׻^-DG" &|獣 A&EcmbR3;Mĩ}Ŷ la$!.k+bݭ'<>"Quc [!mDHi@1&$׵0uJ2k-k8 V[z2$LCq@gm[MI% h N 2ښgmɹPW!q IXL];ڤ[[MW!D*%2K@pl.9w$--Vg-uqjߝ ιHkhԧuͲ:Ue5k/E0a/"Pn')XFDG:9",eVJW mz0Ɏ/Y-m>C$ͼvj**s8R1ki|M~2N91yzےhFR 2V`m==l/zJ g_~n>*WօWտBG]+3s1\=3Q.j I`{\ ӃJX=GEI%Zbi F3~+A>jAAbovqu2x TՅ~M-:)95uMk9Ncy'/!O+} >ے"ѼEd8kXrUV$if VLJIWưrKME}<@)&i&d33y.шb`,Ч?x(C]A7##N-V.qS?+#6~\t{}o{kۺuZks5m.~9U.B HD/}Pb0) # ODbA@Y\JTSgٷ_m޿CkU= H?̇5Zo}BŇ-W֐ׄMOi!(?;V~`\e!ټ: ?NlBd0}yL3憷T\#"m nYdN,m҆rljnEXn!`n$w]Cp#!NGzIpm]f6C"CO!_XW@;69L$ {Z/,[Ѷs\G&QU3[~sW _DDy4nDHII*[o7G9( X4^{~$==d.z:Wzdאw8{o'2.l$MQƃ.Э[,x@1"=oQ (.,y;`%у(d6 qd1Ȱcz**kYF%X ?9Lö해|J _\hϓEk|1T6jtSE)PJ@#x?YyN]V*]e'H+Ny.,P'mI|#-پJ&Tl:/Dĉ{ƛ5BeoGڑp9fa$tIg9<ߠ$+M=Xq1 1Nؾ? 
WJ1cI[ AaA |+|gƱW ix{C.1} jR=û0t>VSPp2_M9BZO֔( 7woWM.# 5Eq LaӲkK6TL”2fX)9̋k%_eVy-"$Ɍ%IOǓ?:uL~4;92}kDlF&Íyt|\h%>3@Kf?T?pŇm *}-җ8{(;± .g]+ׯKi*ZKy{ 7a^|ԛ2qjPXlZf(r9¡_O"L',͈А&͗vpqo2CCsHUi, m#t\B4eyEzBYtYI-e[d d m&5QkM\pXbIKDyTרzfuG&ra MN^>>uctLB$ļ8~&28}mKB`ٱ^RJ"RsHB_mJ܂3\F`sU+ % i'x8rUV{/&'Np}P"z _ICe@M50 EdIQ&W̰ 22A7K"F*g 䥾$.dLb TU1@Ծ4<.EyiX,.'^%e@(NHHE0.PyJSYy./U ~ \w-d& 铤'\T70F[lw[t8ߥ, W}8Sny~;J76U.'G7wZ 4?发Y[UR LŊ6i:Υftk0W "ZSٹ9J,sG"JG)(Uo/^& pv%^duƜlEߕ HW/gjBڤe}1RԺ1.)KB\ apd! |0"ӧN,_[V0jg8Lc Wkц WRVňe3In~Ϟ[^ /g-+CD-tޚHIvESeo) M<>+[#.DJ]Q+!=  ,YD z*rY ~ Izz'ueY!YqLae-.'A)ӅUp8s@;4R;jY4;tB W1H'2^ qS݂)@ײY ֝ _9 6 Qw%Rp'(H>۲ /TG7P7 JK C&l)ʦ "\:׹jw9׹~ާK)ea//Zk??1r"GÅʋסgO'1I$^P3X9Q. K=!$‘Iz-z*z;wֲ<[}[uL ;/!IMW“*ٖ1_˸I=%̷Ss؁|1oNT|OlCO>F&xB /Jd߉ⴝI ' %W j\QAR7]tzq&==MNC-R2 !ʛt;CC/!GxƼa1s\'clf sWW>||)gguv(c(';JE#WexNI"IB^e3$otRmv=+7+PŏpDVw,^ Nx| \կ.7xے+tl(F^?}g$uP-9[h&E), K]oҾwU69QK]b]7(^,nO\yr>qن#.}1|D@_a?*dH;+fQtK߿ gC z*bˢ j=KbWhp[2bہ+ʢ7$/N,0P.&+ˈ0b/?L[],Ecé'NI:#7a?ٟ)=hdNrB}I85y|:U=$M̠`2 V9n2[_pM}@t)z+x^,s>sn,KqNch <<`1 IDAT|W}Y}g)l؎\\m, Y. Λ;t2R]<Œ }8]GP/; &QSǘV8e#&w$#zS g%vY8Ə <7ل94^wC/S66͕mo.j7lti/\9}R*6=qt KήQi_2/ذۭ 7]h&9O<̚R*tknri-5dwE^rަ}p!&vΫC@K4PgDDӢ`i-w7.+ "i8)oܡ<(p2|=?wh7y6P2iHj*. NÏ}lKЫL>)*yj ޷sb(8n;]T36nO4UQұ)5ے F?X;B ٰPח P9o߁=-a m?[+ߑ~`,܆Ƭ\CΪcTsi=|䲩xʮqMKHwΎs:gُm쓷Z<9LE,?躊^X> \@.HqTIOvx%3踆.?ϵ$ PMo 9vNI}m8$waJY]~-ʑۙ\Kydn5ʳ1Ctbg(߆S۪km* Co t^.A7C~|=7Nڙ=!%Dž1{-2g|yN`Ȅqx)0ʧGxe].R,@bwR9ʈ3 A+.ET0!BR'qON1gJ(4Dq`vG[>=e[#r6葈;(Ymܬ?׹u| 3 {\D_yaYXV?vkV8{auq3)iM P׭rh$6'.<_fCA(8^ukk,\RH{~st{Q+;5mmd/`_>|]D5?A+5hپ)8g/l=~/N(?#U4'}BdCs|8WI}˦-zpBX knee_NJ)w2:T, ZN ;jG)q$%1uW|h_Df}jc{I?UB|4ff׫,8IR D0:-}FfA$g |ou fdڗ7j˕{ݙ jtvۏ5CZyx70}7_{n\f B&3 Dei5x[.a*A_l^iSͪŎ . ea}HUHJ,^&hTD"*|W UK&n]ٔ1^dž><%_7Q-vֆWc$F㞆^%cbagCf} R$I "B ﷅIh=@kT`9w ˂CьX/PWĒ'@r@[A4F\V08 ?<#'o80[Oux*?pyI9p;/aS%xYx|Cِz##B.WTne1m+on./vpJF-Pc+kӽ=g RHiHB\يDDACo@H}2yD6 -j8_ _i$`r\D \U4S luX(ʒOߪÿ*=_k~jSWȇ,/ A!TzfoxO6QjY70?ps~yգ[G1Ka\Ϸq[9ٕ 4gmNJ/8É&WE݁yxT:=""!oW(woL8ל=doGDnRa˨K$uLFx·yK' ":Zk}uG F#V+ #)i| ݐ^ " R+׉8F,"yYs\{=7I8:ׯB9""`ŷn\r 2ں\vs;X˪,2TUz&iY@Or~Yk`! *CEm8_9#mjK9?@I\( u b) q(NnmM=N6|"h6i*P$, IwvrM<,t>ka؅5f9~XzN}۶TVJeAA Ryz'+! 
۳LeY$g,aF0#BV/ +:DӉءU*~4>Z3bn7 9$CE9t]+~ˠYn,m6}G Dɝfڂ+\A,c5^3^YtHHؿ9Vos#yAb;r?q9;Rݿl +K1tאtYnJ7TaȈ̄U4=Q gG dm=}3ؾ18ahvLA⽹6uXaod׳O7`t<2-8`f 3\5?;DvMˑ< u*c<Зr/ NAѷ|e\ӑ.@]D0!?xEu+a\ct6+Qq[R*n6:UBq, \ke$l@OwOJYB  hrL?vg ʒ$RoC_-4rJ!.: f7ڧ-d@Q.δM2}X-^Ka^]N۵mrzh$T|$עF.)M+l: Ry \YKtս؉Xt܈:j:̘!҂vV,,W2[KɟOcͶ2 _;8x0r<9[\p5cO[& -tVR@D}$>w99 :;ܿf43&h:Y݌Q,ݽo< ٤aBof/+INz' ]]2/r;nYNHj* <` )Gư^W‹FYS$ 5e\ݨdVAe_^FčCR]v KJؒ@jWa*lO׬pn-W'lv\q?wI2w4Q :ZyY5W)%ojT[y|>]?PIo(o:!;;^ E>JZa$LR$giۖ3Gg8k) ,'dNlG:/?liq:7us}GX o;?hv2s럘-"~we!lJ G^X/ 4zk` &<0 BɏR$94{v$_G}kaeز,2;N`6@U7CŨ?Ɂނ/mHF}q(jx}Nche 3G?z6鐟s I&.{q@P,νwmgNJQ]TK)ıT:]<$C)cqdʈcUacQ|v ڨcR=[ ]+ SufGGjZe)KiD3πNV"&b-Uv) I!!Yo^?o:xTT+B',G>Ąx6rL',I-[us~kB\ ȍIbIJS^Td-:^v 2f,}D2fʇ2) }m7S=:C:շ"]d[>:dW-_2^[HSM&9ΈLwbZ`l4Њh޼LW&RcpK1Lk8Y{^HΓvfT cIS#V/F{tƓQ/eҸcf@j^ywȑ].R!y0e 6-6$F6qݷ/e[Jl,^|`l"M`$ŕm`"h L i#FI"gCmkWË{mƐYگoEo}i?2C{F8[b`÷_l 0:տ?Fka 3[W+V&"yj~0?oc,JՉYLIxqd=-/Bw:Frxo<D?|{ 4?FP2L-f,+{Ƙsχu#MȲD"GVrr/97Q"E 1'Ȏ8"#p!40 4}j]}O߷֚s]E-.7]jk_x̄OA4* %ݏF_?kwv a :{"qW ) P"`s@zͻis9}<:Eqb !&n$eW[b.'+O\0I<ѐ2 jYOAۛh sibxhKRLcr# !.3KNj<)3ŋ댮 s]E"pF֟ g8b{-,<?wSs33|2~_ֱ K; 1h8(ĩL~ VHaNPshS k B҈/Fe.CqQFG`_6yq\rf~ ?ulC)H'6VHԮl$u+~ 6ËοNG#4QPrZoi )k&ا}WߧI֊ǒ܍\sZtQv }kA-R2!FW%:2i/;8☳# XE,&n$eF{8y'BfH a8v:Hڧ.kbBޤTQ[auF#ᘠGHѴ60eNMTGT6ܣtSG}R]}ƳI^7'::&io!S2E~T:h51Fe016|kx)/Gd@;՝v侸!G/hUp¹Fpb=_B;/LHlGBlWx0v/}kRX ݭa5,8Z肋rJ*|v@(&n`D҉~,-Cr Y%=h8<'w+e,frgnin9hmǔGl^|]Xg3 $B@o>X S)ny, ݟ~~ x:>RVq:"sFzFi!ȒDֹ~3 iH(am 0{`Eo豒2Mt+tz.+S;Fm,ZcXAUIY\O&oC Ih+Dv)i*_@_M\q&rf+ ;.ӟ ?] vasŸ^<@q в_bUEK|ĒS!I")@iPɋ+:J8G=pXj:'NGwdK#+̳g{|~CeF`?Cɗ{A,exuHs9"{R_R!BtQG3am=s{XD؋X$BO|8@$Z,eH7Q %D+7!.VzX&*b@dD&ɱ^1y_};g2cx2C"5M7)uyQ# 6˗bnM$/:O.{p%?p*c]OZE`MB󌱡h?~C"2̱5Vj+R2:SO4]먴%℗:;HwUA&ו_{3 g8ß )+^{y 헒YFC /0M}1fm'@D@xíp\5k H6fۓ6p 2ԛV=*;ng8"K[5ZRC*V&RSJ$28"ҨiS*wnfwx#;ȧS!A&4-R40 62VCa \*S˸:kxSYٲ9ԗV((7zv7jYmQGligs# u2 Oias +v-z]|uF} O2! B#YJ+d TT|Ad]ootV;9V6er(F!E;w 81m@]H0} x)GLDoD5k{ ]ȎЄصrzG{uՔґr xCtD‰r˾!Q9I|H]%Sd|:bTXs 9L&IYdҜ"SV:'Ŕ =Wɚ -cm"[;3^PAB 1"]ٸD"h ֣Zpfd)4u 2 ڸᄴ[\ۤuE"W_iJHN 6Pv@jSQVt+<"t$+kt$h%=~7<ɘSפ? 
M;OOU;_} }l>S悂zixȃ]W2ȹOVGnм ^k&1fROh>P2T"Yjn-5!ȹwGCC?MiM}.32nG÷ ++Y\ϴxS3f*tᔶ뮭Kk&sO1]mZXF֟ gff=+y"}/g _c,[77} S42üN_ɫ;mlI*.*  X9ˊ!a|Ȧ +2wD,W^u@YGD{Zi$ȵwV_բZHCsE~j?"WB;Ie"E$*f%#4;S~P x"|:eN&HZʋouOpˮCEfM;mjѽD(Aݛ-඼S`D4ɥ qfMj$:))+ݨktf+¶LJ9-;IMY4px%D PMIRUܐznߏOEDbVAxDrǞJff_izB{:I6M$ܪ:V8%L MNsD4vomfW`G:t֋K1,#͖ʁr^r҄:԰:l s$ g=B|!J:p(J:0e(F 4T1uԅHb-+z}}S)P_{ӄtl..5S 2֕95իY.UKYô( !ɼ(|Dxff*~"vZe?MR_$R{Baʅc }iZkD)lޱ\}:5t%BHhFˉ>\FD,p=-ٔyK\%evǰ|v%6c?J;R^cgB2!XB?uqIQw" )M{CJ&R!9}-7s"v'͵O_/IBBJ}5mzZhn!NYBzAlCCM;Vu]c&w6qnKn ®65)t`u8VReNK# քmJK)EO6D|"iV rCy}zS؃֤hc:gw*;RISfӨv de7}CA e$q1>\DrjBҘ arOJ?Ú*8!!ft*P휟p<66p_8?^kcQY>I}Om1Jww IE]y }52"O'NF]_Ax3 T/8,2ZUݟ_Q -3/" :H^׭C)!.h$zɹ܇D~ 䑘\f%jٙP-i!3,H Uj E~-]#ͳ|NyPS]O ۚXI3Sb+N:ZjqtxbߺU* ,= :b%Cx g׭ (I8VMosdzڗblk=RzmlĒ]hD{"ƌh{t%"X] 焒ݩm+QҔ ٜ'"I~^pؑm#ndiTD˜s;yxhj]|4Ȭa{#Z ,m.ɡw9@K(/lIx_j^'Jw#)M,"CtҤ%?:JGLM~::]p_3=2zMH*?n<Њyna2{}lE.Veb.ȻIdugX C}Yy qLZ?{'Z72j|ɇ^omwDZd*a.>frs)$YHY ez݆8iQO۠д9QhHHd,`+bbJZj5d8LnkUFW|gљ{UEy*0/K'Qg!wͫ40m4[x̙[3_`r pPuo!"fu9(-xu='GWe'(K#-%RUJɒ?Z$}O[_ϋV:m-l:YOz?UT]]AbeRf>I~69%ז@'.\ء>b_@ "eif92P=Ff9eD猬?p5;;;| uR9H)=5!yJܟ3bd5X5$ "& \.ES`vK*{2amq::MD0N.9P"5EJIzZG8I HI3pt>Te1H8pl~ /䲓GVg*~]$# Hei߀ҤLO"7MSt⿃KҜ]B7#fQh` {-&Iz j##G(rCph~K\2ݢgf"1N1_%%+e4^kh+: ݂ߙ j&>iANt+7k蕎A]* ??fTG~t~yͿZOh#]WZ@PnMS*ZFi:DPudTds1q }]Pմ$(E9«_e=z\s]7ʣ=+(LⴂM7DKD.Дƛ2:* _Ys ;66:Ğ3`е" ݘ>Wc .M9_hjD0g7 kK<~Z ]HZrroYZHռĂǢW)"FfM5A,%^L1 $Z{=wX 7 -Ӻ=dmEsSGB _Aȓ_@E azt|C=ۀ'_uwyֳMQ4ա=_5πLNԊd9U!IbJVZ.=\q.淉u}JO i_{Գ3 ^t)df8/_re![k#;w|_43 𧄥仾뻖1ɻZI>$L.&7q%=nգ%0rH[zR:E.Bh1ǭD=#(KX(sM6LHhޝ 80K2"]'Wbs~Sl[+6H&IjHi{S:V QMQIeJ{7ό-͜ kv<Ojk:!a:]ҖJm`x.j.]Ks6!:QXcp Ռ,rR ZM xϋ#F#72(5t’΀ΓWkh%E6P.Hކ.….bm9SZ=rbQZ +QIH|ra6DH#S㨽5Z=S@Tu| i,CiUV=%__/i9\_ k(3ÙgA]h 4ICvzA7/5pkw$ Ō>?S><< sj ~F\,6[?vn'',:YFsFF ζ j`DRvo'5bn/ 7[45CbΚtx:jcp#roE]5UnOW:TPTR2Ru]jT-XU.i37( R6[B|1if5 &HD:x:`o9jߚ:gOՆ9#\649]{|Edww7AZ,eúfim"ۍKrb&礭wIΘp;=4i+:pZa2x N+o Ϸ Aζ|gr,fVnyIߘW*&Ɔ::Ćc80`d9$IPyc˹&0=N#5Ş1hr\gA:m9FF"/yBٴsHx萚{*ib2VΎB'x;xqg(B9fSMTP<]N+X> $42qG澖نH)ݻ-FJJi,HKkjm|klZֶҝ#S % *BE9a=P#MT]59PkJ^B,4 wBwXoӔMg]c4ͶZ{yߪ\}lH E Q$+H YB|B"G(F" | Db 0'o'>9st~sfΜ9SZU>ϾZWeEB<Ш׆yut/#-t '[l=IV;Zx5wsj\aZ՝FANOm 7b0\Z ɸڲkCIM\ 86SFJs۪)$X[! yxOPx C- KH)'1|.:Z n g$mdUj׭|LUߊiO[c53?%Z4lB j\irn4WfYt3wajY1<Ԝ5+N1z̄'(԰%J&?͇gn9S7DZ9j.y ؜#7wHoll_smTJvf5r8ɭ)sڈ!qYY[v>^.p |3D,ل/bUwk̊S6z|Rkqr1J<Ԛa]ꈧ1DS {:OG@0o]pdPl&q4fiScŷmZFU䗈PN\^V' m, U՟s& {= 1& clDj ?:Y>^͗'bDM{z5Se!۟OFjyNU۞S{ ^jP%u=fsB="V-Wc3#y17zkq}J[ȵ6\V./{" 헷[u2AnR+*iF eԈj ӆK0zj X RKEe#0U,:t/;;DfnR>?͛땉VMlע K%x %-|iTa-R eI`V1@a*Yg6w/$)ܒJ$*V*ky{"WB}0ڹlbh#4Bۀ܀f壹^t~r#.Wύ\$˵[Z"K46lE1S49"q!}<پS3+0ƫ5Vv3CkHۍALа;0,sR[`;owa_]rK-u֊ڥv7.&y O esÅVk@R)WX)f<k jMkfHk pMHyNZcA=Vbԓ96!${) "sDoZyt,b rf'WU0<,4ܽA{%؊.b=ieʽQZv qMQ3R_6GƓ qﱥHx7\7@(T1B9g> E-T %9݁#&*5ѽy_ \эh/ \hʵ^y 'g4ts;u?o8/=V=0a̰y"ݚ#y3e8F.Ll^~ `%-mH?{ҕf#EUu !k|x{SܠD˺#[El ]!$:a*(<9qpsǧ}+S=\IANӳlYԈ$0f !PbS{x H0+ ɩT\KBsY$D5*θ[ - >Nl"B5馸,CƌГ멌ym5: cO]%m LzVmNȑ%E喛t> GQ#S=62Sp.zu>^CCtH #%O*,4XD{('ce}>ѳ3PvvqjkL MeN*3eHqX|Է2m.!RbMܥ4Ʌ^jfK4SŹώ8uE<g?O$#Kw?OMo᪜W!FCS%]nӵg# A=7^v3'xnA:ޜ-8nA?:0 q-1 y%[ܻ|xLhGUdaó:kMuƏ,M\\V8X@OV5l/VD% 0F,dWWi`VXZgTAp*l\=τ,C7vu" z4$.z k"eBHٱ򲝇IJ;'Q4dZH6vxd5! $]IEdAT!Kf{JwޓXWtrQ3.wOTPܨHkŢ= ƷqJw2z/7L$hU'´vĢF| X3C;/9˩4w)O%DB8Q;f 6!@dޯ$Ӛ~f8 r>+ <"z2̙]@k-#Zqk}ù_>6rCN?z9sdcFER DH] CQaPdK]^\ԽǢ%ؠB4F/Wѿ7r/G"R:ܛ^%x x3m1cn)O <=s7 =)4V#% ;iR3{%ɄBޏ66rXۚ_Y V4! 
Emr=UIaPցtcp2iВNP!+10+ oqB~'D$Pz%Pkɕ<[t{M|EC }ş^z]>ѝ*%_ 铹 lQL>yH ȯ%o*h3}cµry"q'/3\bNWڮ.%FB8A 0?Ykj (?#rʷWMwH#mhuAu'e jxw/ ͈?";N1hVH"1l5e23d 8eTFbV E3Bu'J\*sg Eޤޝ9A:є?lG*.N O*W\FҲgO, I+%',XjI75&w"/, $i(B}A̬Տ getH hw9 S]BoH(χ p~o][.>$ҡweWE(.>61sLQm&w>?V ep.noq?ÙӼ Gdq'/ $;-xsAfj^sz,͟Px wSBS[g ?S4 " f*೯-f DjjTNFq~\0́`³DK$cPq k PD #!##9[] d>H  ʒ:⤔[2qPڣ}g2  xsظEhM*`GDuhN n2GFO=!ɑ߸HITy5ҿW׉JT>ԡˍt챊Xz[{z3D#8}+S]nڱg,Eػ8Dww'G+d`IU+J[I%(uJ*I=i<>5ى{naDfM"!1iXii7 ڇ|o?ϟFkU#{ý#xMy-r{;>XRc҄3_"$$2uN0\[J#[mL\%Wrz7 iLg1g9n#2WPp%м5J|U07v[k>-c*v~~Ĝ&$s{B‘EQIwMIŨd(b=H##ˊė-;{kf䤝u~Z|r!>t'N8 u!0Ǣ@C@M KXM].p |Ǣ}=׵R|x)8EϾGm!lN:<ŷxstJܯY֋6oM>$ }ZeE{a5"I%k@xL4mFb$JEΗ,>ϢߣpW|(͖̞r:BIdI,=M,sMqt {E({_.!).CVn=hq':$+qo9 loT;wkŝ]݀_JVY_?O@*[$2yTrqHQPBxDEw^#4 J^#uۭr9c֐%P7`>'` Tw-se&$&uh]ihvʹhEoƾMYti+rdQa- t1I I%RA㽷yFAڜPӑּjtY \;g|D< Uu^wONqѳ.Β[?|f?u[z/|%g$:#RB%˽̑ޫFP x C<ͥ"J9dc+H )2y~ }޺uP 5)}r̠5m_{ʢ-s-EB2\'2Fc0/~~b,0ILْwu3&Ep"r)NhΚfdaCzSۏ{1Oq" JH+bOpX8+:*62_ْho"3G,a7=ZRͻy8E3BS"]‹yrǝVQan G*JW't/|( ~n,?'RhnPtKb,! 67L9ȟOx Og֙$sΓ0cx6*ӤrƆt|D< c_/pI)Hݺu;/zNx/yE{I I 1@m_*oEic\27R k[C.$j=JەߏwWê%}DK2FVAavqܮe}"'z֘"H\t0D$%^O34r J(8N 8i` J3"+".)0X"@ mI)ĻN b9PH"qBFJ^_ IuׁE=wƹi=~,ʫH ܉;M3H7B@\i}z6 /p |L[U~3.*_dֿH)uLH1kCCk]kڭ;|F8!M$= gҼ`xQYVR2e 4OCM+1^.ȑ t IDATf>8[~ _euM#[dfܴT[ 䈓fIWFzQ#"4ep;wY=j':dOe**RYf`F$XJ^LS7I!x ,)O,@3h@e ""5J(9Ç*Ӿ:FX4mٶ,i<ߙ"?)㺙u$YyŽDH*~}HuqJ&yD#LR@3>ȵI;b$>⬒8H>(K@  垘Aõ/\1ʣ:|c*ffH !傰zd,[svT*lBVX]O" yv0 )z鎑vQuAJ`pМvgyr5xt 9d9)"ji $ 9i&J]7L/$p U@H Dsj>E50iWg}9A}y>EkVF'bD}e|b)'%rWFQ%%#axy1s>?"] f#$a$!yZקN&>pt&-|4#z5ڍ-޹%F QYO~)d"7YƐȐڬ^l=ŢSPDlMy>*[SfYviuzz*k[1b?Od&%i6op%**nc;LOcUݰB߾YY CЁN=M ne}i]F5H)m f5P̛썋C77kWW*5Oe.`Ð*Ac+Olx(8v=#WI C~(<,󻱐9sC 'b"Iy+BPWqCLd.pyJgg+c_xᅳ=^g 0|XeϤo\yR Z~G-D^vI0;rcu^{>ЬEȕ^ܺs7#4Jń_\sL>^#b\2 NXhiC?c\ryPu U[d/HO:w~qp_*Ai`YSF'aJM 8-T<[vf%6/,@ߺcw)՘XIuTU 4QQox*9Е'Ark^#p=G$e_e NDfPGyOLb3,{X/4FdA@eϝS]qdv pbLD|+FD@4Zi8ɏe}n_s>ުV=xw&d"ug3btck~jR7԰=~^܇$F&BbՓseOF*xxՇŒ㥕5ۚ* /UT%MMLcn-&nH a9PNXTxh,AF~`r1t߀b%*6!\* ^d"-mBvqW~0+؎JB׎Od\#1mnueәGjiYa# *v Yϛ5$l?rtۈ!2R# E!E &`7]KuliSNs!*6]:{k-igWYp0N8͍F`=qKKOG*#qFEZL?#M,?#qG\UbYfmuDafNV[GL~o~7"}8 \lc7x/w<96]mvNH1ˡl Pth{a'ۻZk p_nA [e.&}^V=sJk]c'j}$<*V(*מܰA D g'/{_=\ISQD &fH&<}zkrs`Q2zIm䤦,1}.2]]Ўl_3˺HS^? "%X"x:+S0޼#tDdJ eG(  1Wo$T@+e+i pZLW|N8pW,ec(A+-SKYf[gD9lg^)_ mDjqع+Ј=ڭHDu.cmĠ$ &z+çuxعSQd ^M/P Ȋpne./j/g>[']zlfVx_xeM u &8$kV/ 9yiZ} {x.&! ]{P`E3܍fKT["i&+?;4O,i2h'+޹_1C9WNJI]lamBjppzXx97(*0iLjס_d*.d_ܴW(ݓҦEt/oy4 21}(xrҾ:}UHMg$!!{VUD~|MIYzՖÇIvۏeǕ[kE}NfVVX7,Qƌӭcy1 8 6ala `mlM-`--jQj(R$UYYYy'$ JIDeUyrkG|kӸY4/ S(֖ԻDƝ򪧐t; ;4A ;#t^"ʱņF&Σwث [&LԺ6?Gt|27CQ1t#&潧oTdfOb:Uu?Z+/}"YwnlˏO}SߣM_kbpL(QpȤ@Qyiޕqzsv5ll{6kJ7H)MR[C(\2=Cw׿x ʾ.[09@kPyVTy'Nsu.<򃉛wE Vprd^ ɳKqH[ af|8X9#.0%\, `IB덹W }݋%R<0B5s1RpJ`0ƙzΊ/x?Z:yڤOڨ(PK("҈*AϪ8׹#FBbH<vtc8G hBrȔ4=i=?-*e B'QHnV"99N MAOU.'yƼ ǕyQOJK>qꫩ~2wA(ClU~rn)yL*Y正{X2Y [XɜB^pFEnuv[@FnaHܞڿ*%݀$<CecSWĞ?+[L$bh?7+(K^ u/x(PQTJ ,4E_+<={s6̸o/5ǥ:̌Q. a%nZv bHEWt:T/.BOGKQӕNfګS{eA+7y:1huGI78ŏj7 ]glCPt77a O8&giMela8ֳw6nɦBv ._0"DU9Q-чB>'j}1[lT[ksvR /iCYR8Zs_~֪fTkId<;wcp'Tǔ DJDR|}{9?8P(:C#Oh8Qۂ9oϥMB4$"8kH(*>.<@ #z" U_!o*Kw0qOy8ñ2D5sMx%ǭ[>ƭÑpH{棻:ϐ$CVuTVfX/!&:r]÷6~k ܀ =Iz<i'CFڲ2S$9HMcS L EH߅{IuG U!3cYNE2 iHR0/WlL#&10R< ބ ڿx{&4Nڟ,-j@oxedTkHE8v~ӃYAsg7C .];^DDAS0 ?z** \sb曈(tpB٫"k EOqqG=wj N&LAE5㫇 Jֵ!{u)|fqwnyl"UuK*c#%.&AnɅ))] D { Y*,C.ͻi@R-\ vi5Lb2.pHd+ENUeKMB| Fa$)eYAЉʊ\N:Y`A4mV;e}J'1`-oGڄ2bݐ0Ah^:z3P1Ż}yr=>H#HywӚ8+foX-c;oO<ӌa-2O+[hP2UE,[VbUiY}BC)6++'Y}I,$ve`*ē!T8!.*| ]Ƅ-QÈR^HS )#~|+g柴r .$K| oԦ<oc%ixDY>to Oćoie"q?@th*8$ ipƵ-p 6V0)c{u}X7EKщ5݉24tn/ٜ'%csߖՐJDkꁣc s8U  ~f4s M$6=L~,frlcyOԧ>uzJ?+j-};Fnt {;tg*쵦AUPxrk.LA nx{uɝnz?f86ؠ+1% q`UW9 b]ZV7zไoW3 ,=]|qG'^X?&Rv]=HfUk[R187ޜZ\T! 
(uqjE0 rlaqr*;0w: ! l9|E4-fep,ee>ZRBZve@B8˃-F5'EJ@XpmqA;DaI< 9ĐP J!qtM4&A& l"D+Gj REdҦ¿i oQTcF3[0 .k?uajzD)3)֚qߢE&z$M< q4}]~ ><<%`0zl_` "s|f",Xh _֦R)1sX1)M ߪ{du}'eEPi4t@^ث~.*w7/~!?FV,Хx(-դ5#O r,4&{<jq:ҽ4ADM lr^kn]9tY8Ev >@D"~^n] Oٶ~-FF9c2O_ijǔ&IUy>%BfPYH k\{Mz [I@PP8 kPC\՛sL;( oAA8~ss!AC+4 3,*HkQҔD϶$gЖR49$љU^N{`(7uO8@T UmNG #6G0{8%uU84FRS9f1ADԵ 'w\7A4 d\ܷjFr|c`** '44zPq aj]5=yFaZErRsN(Z(qja f%'ͦgjj q一}R`W015y TQIcVX87*bAN$‡L&7C|w*4Qu1w qQbH.dQ#h GP@-lTI.\pDqVN_w 6Ls\gUFϽiJ4=q☯$RnB8"hq)ndUhkƻm~̖㭷ުnc~7锎_'_YاF}r%u/뚶tuX'Mt޽Fp?gժ]去0ۀ2w$cd49䜈f iZ~LQ U5A,&B]RZ_A|TWAB'.MY:vG 4x$tG҅ -x4T=QU4$HI)=`"bQVm0!PBT{L|߆9Z1^6|j=b'\sG2l]i+`Ɔ ZIľ?F H;- W,ZK%"!"ݕW̩Dcf$R 'R#'|L .#\^moGp#p߳!1ƛǾWUi=.;V-/:xr; v$.v AAJs]x&w/H4oJzN7#”sdJY< K:pG*=^jTA[U8/kR@"ªrU X.`W rv&kkv1x -oZO<(>^ǥj_k!*&OU=LǛC}๣_z6٧ECp(SDW0{ @]SYvS51nQYmW!F~l?ITKD'ҺD#2?m?q/y~@yHoI͵Oֵոқ"{S/7ZׂM@-Ic%-1CH)hq҈ v#FSAO3ܓA Ү㬸XESE֛aH2hi8{]I\2rsn{m$Ac›}MCoUwI=tAuoqDw,!m 3n n$ 72`ªv1k@;^J,P' HEmƢ,x9BXilUvǓTif(%ԺtֵαDW GOO.\9x(|Fák'OXEP6ß-{=WM+>_1-qӸwڵqka-tQ *Eno2G$t}8mg +_?7T֤*eXպWb|zKO _]K$]7l}-+Ѵ뼼ˇ\49t` B< Zp e˹B|LsT+;o/ugJAwl1@EUn 2oYͤv+bOC" Hrw@$[6 'ts&I^[1@jՊU9]& )ډh m1[XFF\U ~(}AJ 3Vk4(vaG :u0W[xn_xǥKDYDH$#DU}͸Tڲ Xll: 0R9Հ$PHà .L.~LINëHW؊ů|Pf[|6UGsoLfhYxv %7D2`4KXu,_AK<Kxe֬t'_/='oSJ3}KEqr"#$Gz~uqnIWTq1e>SRՆFM0۬Ç] 0hħ_5}\=Iɶf;_5OyEeҠ%DzWhl#` >c _961f.ow[͘n`0f Eyoz^ѭG^_Mܺc)JU 2ThtW.':Zs^xa3"|av}繯j'R@ɰ@'4mQ{a!q bךm[gc;,v%~Ӻ2~Pۖp0`)7 7QYZ5H9ɤl%ڵdxC/hLVD$iI]Z;fD`v9]!p \Cti ]6֢94ZDvU:~f^bWX`|l D!pob'< Tva8mzoX,:bm OԴ=O'**{eQBDrANJ&J-UAzRED/%#nHN!Hn$`H<q[C^ϋR^[hsAG# )i+ĕMюEt 7Ѫq(NAz,aNkG4{;}Atb?x>@ -" ,8R~ȑgv#u?ꎠ\olЂs(BjFշ}>.mߙĉCutHiZ\{I:}d<6pMDucAjz;.PM.ϿB:؅!!ׅ= $bs I!i>F'.#FHh1*L}.ŋ@ h;-زuHp7lZhϐ>]t, piR;;2O#}.RTx?m&-N|V|[hgE]V] whw {!\=$ŌkeK APNQaǑľ;9 hau'Yx9յ~㽱]z9V"k7H!ǃmB7޲wyG|Pѩy33Y+_`v KG^fRųD߿K<ˑ$edLffzЧr^{n4D*n{mD^ü㷵 8ZQ.&[; 5CdísHQhjl2fїk4{{k,8ȉO}pqQMYQvks +S^,45 Ʃoi8.<|~מ۲G8SK ,-L4%ôO=5IGB5۟*l1o$U^NZRGis\$69ɛX:(!MaMhBKI{* ]K6zNS %2sDm.$?BN*xSH nN?Cj1KɶHۄ n{IcΩ.e,Cr䬂4ֵIƎ'4tE䪂 / #v "^p*a BXH q2ljq89 ȵ1v6|%tP#kc lCNPTaM|9'Ա9|&8h5jv"/f?0oj-$KKKAa}?b+^D(8QzQc84%-'9rk8a28jw?|kŽ!GV}h؂ "RJ̰i\@.yv$6]l sK.q4O|wop6hϱcHbU^}N35 (6L2VF ܭftRۼGS0m8d>F^㽒A"'±hkS$V+cx^ v[,DdAO]>ˌ(;fwFØ|&E )sJA ܤKS Cgǧ\巚=0fU; ,l _}|$uڲTS $H6u5CBUu"n[nYnJMބ[ccN@1y:~Ȱ|NC0AicQdD՛׎ڃs5A;mRy^J-Qc Mk d~J98O\FZem ň'pJJxxWi@\K㘗bK6H s6ӂspkD\PգJVLcj^x~>=HuߕOnuj7-/~+.Kڷ)‹i,p@I-E4Ģ0[:mHY5 &2iqz{!IOr$ԡ^o6cC>Ξn [ Vv'VʌKrڲi Fb,cАDJ/(^bȚK$$/:@fY0ЁZL k ~9IA seشZkX48 ],LdT6eBZ@k005`@j(, sf44SYߦ\(-e-6y~sQBw^9ΪmwůR~'2x)Z/& v EP}qh0\m.hMN|PY`lǜ/z|!nX $=6]ad>_CIPu.>;0LK*O\!lhV3'?ifֳկ~No 6GD۳$y3έ@FHH,%+aÏ6~9vX, %Ҵ$H kwϩ[~YM5@ \Nש}[??hqI AЋ ˬ${ m^o<ӧm&';Uh׿u3kxۤMfN67j2Hv2Ԍuz@q2Rͦ)u QKy&M)jEhm$R$QF0YdH>:"-z$UvoF;|d@RJja[ . .'Dj=v D,+̡Ȏ[ ,IطynmO8AoQ^j` m`G#x7 }LX5oƧ-;Xp,$ cdaP?liԠXE)!yύ{)X2q*Zn|Ij* tn[RDWX@&c{ ث#rŤqSeDĩO~} 3r?#|2rwn&m,"pj^4]Ml45o|%nw4뿾z*s:\k>.os DHTIYմM-ϳ}NJFHV+o+( մqjaa>͙2t.a* . ^"ryIVJ[;x:S&L-[J Rd8'WQd'6;͛o8ormvU %JևuHL#,5浚ar'YXF·ѐ5(HKgЗ9"Fp\;I"K#kC8$?Ku*Qjt@i2pJL+"kac;FHR-( 2aêTorIPy;Di<;ћZ,N:RΣH1;<02a}RhnVZ^W!՚0E Vê%qLS:YoꑣyVW3 wNVutov<1S  Qqqj}486 :cQo  w3KEQ!ɲ1HyZ-3d2\zM{aɲr`p nA:MA˺}ٵD +NtJC4YۯPq@9䴽#@ Q2:r?x}ҕMGa]MSWp0+7\ɧ(9taI^D' $3j('5s}$Mq$. 1Ps5\va8Yv)/.Ʀwcwv/|6`O&IFiqs?7;~ m~@u?*" WsfzVvI&! \"VNUөglɥw-G%]! dR38'. 
(߃ǒu(kg2kUD;Ipp -c0\%cfܡWb/E'Vhkc'g~~ux6ՁcPE;`hj\bx SP:)0t4/<ikf;>Mg#.o$*t#Nu#ϕCIS eXOndRIpmNljM-F DL*`,M9\Ȉ'UnHdEax5R ~!x ƃb^Cpds J[#y@Ђz^\ĎRd,/9izri,r=D=hZѬ0daP4 眇Q-E\L\ a(QrpB6Q#G7x6#YL  pk$d2VuB:uv{PkM)1[ ?au/b߈h?k(g fȓͧtxaFif:U8/U>A(&p}{`;I 41@i#Vc MV IEb&峿CWtnf𞲥LfF`t,WbvfFdl dȩA٦t:J8TGX+X0pEEoTps㰇~dAj1`X_$ZÉW, ۻX oY^|Cͭ`l%RAֺ*rI q<"pE 1jUKW~\I0M=A2-OD%]or4ל9]Y.&N0;iEMA.CRje5AM Ut-^ep**R Pp'&^-Xs3VJbuk$~l0rc"[)ߦQtRrXta⚱ kDmj5f":W-˺Hx:֦A,* cB<}1"S٤ap?契50z{*"LkT1|&3Cﯴ~2e1ɳ;pT_lHXmE]F|˔G>+/ɪ|0@܀$@eZQ}D2q olלBnEw`vNp=[-#<"2f}mGÇ'jǪo{di >M]璉-׾E݊N Hx9e`gp;e舽;0(TP-C)ae,/OjA̞ş3$G^}Zˮ8U>J Lu9xxKTS&^y\e x[  t^3q-ԙ8txrS#d:p?>>N!m9;9|Ks@ D_>˟3ucg>EGƻVܾѕX :LVSFTz6Qr]QR.OL@(Ɣ4Ƒ[;QL:|ګ ֶl+sQ!m&h@.zlHꍈ y7 `V`MߌA" T;nZG)psu9׬+s"UByʼRΧts`LCgf`ٜqN#*9 /AfbNpq`8yy/SMh%,`hLh ((y*bA 4q>|jW>tZ98Mf.Y2Yw,):'%L:_i` V#{6,k5,M130jҔ*F~<'wax$+99\X3_i۬PcDv%~t$Nj-ηTZfuᬠLi n fiwv#~8J)9gWWMIw\JI_w7t>,h8~_wے-%EU26oL5!ݛeRjrG0g{=}&."l>L̾o_|ۥkPS9{i(5n* Kz&yT۾ێn/w~q⧇[Oe~ LzA0(HbSd*_Y(f0K^7QI.uk:@ys fHwznƭu'shW ^F_a)ic6mAP@FKG@䦀vRϚo9ؔMѥ >8~*DNz8^V~l]Y/cU.uFzox#E!SòIE5*& 1B(Npʸ3]$+nAo^PQƍliԴ1$[=AEEZs Fj&BN0=P]VRbD)LHA#9Q(ԭ"n;sKɆYqkwZ5ň+'2x } /pL'a׺Y1B>5Cͫ2SSdTSKu֌O*kc65SQEh&tpHn!+wbIsv#vv᣿>h@ض䜾ɿL~Ż,wI9~\qf@qW,@f|~ڦBN&=\ w+'Es3xCb~Ot+6 ]pR:l~Dn(fXp}ju\M˚E*)܃7:GmX9R6EsG=1K_ .^k^0;l ω\:ֺ&6pֻVaJ"ZB{J Ib6/L>e2جk5_{6ÚaC>-UUpIp(fMuq+פF?f!#6z $R{x&:1u (Agپ6H$͆Z9E_"6Ygi؄&ftO@4#nd!̨%6Ʃx }3̑Y$LqXJ']e!Sl~.1Ϥ\AZ͞ʵ+cOt#q-FMZՋy!ݡqPRe9TҁY8ȥT/5H&@UxMenѢڀ%l5˂-A9ZvfE95.`V6 ~er2_xyOơۅ藔u::5H9馍?9᧰{R:—V&"a+y%n] ƌD5Rp 3daO8ߢȨvs\ND`SY:ɞ| _UVw `o_iv?[k__ﺎBj. '=-|gx?֯|+p`xJPF&nŕy#Sp3Q H)r,Ǭ)>zH|ÑBb2ԭ V0PWp◪>_'3*bx9uwOXQ~Q|*0lAѱ6i˰f%@0VӃ:J%TN#Xq6||R|=qzQQM-{-j@ը?= ݎf%qdO+̲1+8gһFMciG]Z-Z.%A]#J5\A7²E|/ل2&~4%LDIk_O7/~͡%e*cXzwqVnK2ɭKݐ2-4&" IDAT5Rrt(:fpe>dk(31` StSs#Z*4wS`LL#%ǥK>嗉`ʬ7zs3!,| Z?SYKC 9g&=㣞B2)!XZe/m621M,)9хU:nVrTۄU*%W-dvFI9ң['.pILc,Z%drN3#o!mn" u iנP{8爤RыU59hO뤗ltƊ_e:d-q#2┬Ϯ~O,L£po9>Y]^ӤLE}0';5[Lns9n+!-;e;^͖_SnMlof?ƹ-v΃/=m.ꊇPs|cyn!??ϟ?<[;|wQQkUz) V؞w}*VgzIg8aQDZӥr%/q#/H_>VlRD%Hډ D1C= PleX F7;}W^y3}?4#exLA2nĻ*R+.X.TL.p‚˲ 208@XzJt0.*]4 6'% _X2laJJco4RMb-o7BTjK6Yj*o|S X{j 7 6+XFZ+noN:\% /pCe]QdU01A  /MUaСNʒ1oO@VCU ڲg*1LwF$_[&}0cFLX+GrD.y~ľ`Ap:H^LoR¹/Jv^:l_Kp-M-\$ALfX&dɭTԖܠ]CvL@)Lbu|IVP+74 _zZ=3o~Ý<45Sr|JAcC 1{$}+_~ݝᮀ_ك7q4{63ݭ4gH)VT%KX+OfPT7P/^jkd"ߋ%umОkQ 꽓/.wZ3XOy,8k(DU#&7j-C-=OUlfg8[2lSXL_$Qy|C1@/.@z P\fʹy{v%ߢm=|7G0UyPk6_En8>Y4'uwTI$QëqX@қV//[\G:մÆ;<|– "Ed ~d,df?+B-))ED7pB;HFckBZf ms5s%ܗ(4fHAOLs R2lդ0gD *,WE2KUf XY{$V`$TjqKXQ*+0NT%EXAL %HHbjJY3j$ uV pa=y4 7W c'ǝxYu% M37/ {9as2mp282s"#vDXgs znVɔ^ϟmN$򏗜䀘іb-8''t'? /LLg7O?~_pswbmN=O~ZkXIV,jc-6K'/}K.\nAXv19eOFi&@Dn0sG1C]8 a'-UQ094qEbF?3'^n(q?c.qR %4{jԠK9=e!At+f†X&S򿇍rrV;=XDSSFZy@HOLSPp'VF+9K+T#<͜}p~J38 pV&'37$?>"l.{;"KrHvNěcfqmT[l;4=DQ1"> y/~2W\.󜷷|b-~*Cءk#T Bq??:݀y3o52όYwui)&5dόc.1+LdZ, L7v"*]ڄ:j( 5s!''RzXfr=[t \f)Q@Zq q:Gz9WŊ&UU0 •k=3NyOoy_hN yܸ_ d^rW%NA44YtMcs(aӮOzW L7?R$|6"~YVH}?,g{a&I$A2&.!#MjI'SY?Yk$ˌۭ*sJ}w,(-CdS'+U)ICO+©Yu3'+|5? ;ejn@O'V"d9%v?:ޅGKxu,=jV[?~:"D2"K(h_s{-I@WEZkk05*@|>~cEy0קtR;tzhv",2%n&+]?iJE{]wq6~-xw,̪;hwN//ʯʛ?lq~驧\p i?}s"❆!HZf؝<"m\Aƚ=0os#GD+c|.޻(Þ>꓌#&Fˁub-;ZV '%.eWFetmodI.^k}>o;0N3-q֮vSi$Ι⤄457gje( -tܤ?bXdI#GٍCt>vJYZ[z85#{0GLЦCz'%#A!HB| dԑJo>:eXpy:9q)$RX7u!Zza<9!I!KJ{}bQqgjCkFk)a$ʚz#44KD$L]3k0C2RR"nz< - 7J 'V[{"tyIm݆Zon'^MzBLVN@)vhs&8cF8;ո\`=J)tGc&=E ]0__ڽrL>l*RSvwI̋kwrt[qlh2&́L =|$NikGW%eoTD`c1̀*uu KiZd$Ƒut4 ᒙ-vtgm|dvUxc0Gf,hfר`u,1pz뱺ĤPmVDXV9;bF*(^B=.dҏ1tY^2]"TKӒ!RWYxOAS۶ . 
-_,owP#"dSf5RrO}%4*vDj#3}Ci% %K:JRjƓUꏌ(y.|S^]NE$1JBÈ]R251cPs񡔏 ~^oϚdhTb-ug>sNpJK_?hRʖb Ӭ2Z|ݻ[~w 3 K)"}f̄eu` \}񯶲>i#<(\`xpYDϔ+ʓhɛVۘ)ma?T:Q5Ǟ;p]>dl~**EDEňݤGjx<|<塪{|yi͍U}q|T+AuGnw78x3k~=_lxO!s~d,H5ؒ[lŻg\YֿO/Lٳ[~-+8Z?OGFD`fچۦ-qGpWӄhh -_2f)3rN TBa ܄'D %Hsg_yr1YSǒ \bЂ hr m1A Uc= &*"^Sf\1.#fd '%RLGn?yL4(dcWg)Xxg՟$2-ɻd-39˨ Ȉ!X w4.(}*btA(-$DΥp$im%o:kT8Dp ټfxQL>HM[1ymhE/ 8hZTdxO xd+"Bl,Xv~T9S4hT(+1][|H}%&sQS$>ՃgߏyJBLգ2%cVkX'#|-Nl@ Asza"PZfKH)dMjԖ ru&oƠ \!Lh2f /A4H*)Rӡ*N #P,2MG`eݻ5;I5&S~ew)G9!OF.֛;~ͥ&$_UѹF&3䙥IDrB9 ƾpҎ$0-y ELW!diY%u XUu IDAT$:Rv57ɑhO" ]gm)~^uEbRȄ1"P`{i &c{_AUsR$ oĈ͸Oh*2!,p(F,blez.@2"6MWrF֓޸jJ>Hb d 6exᡔ_yKdؐXB"p^DQRHI~ yW6w(Vj ?o[l,~N6HTj*I\Q$R9 RNՆf &-$t\c#949HgDdϸ =grva:0M<{Quhwܡ`/؋IG9P)͵ -$;(2qBG8D^^Z5va!:)jk3$3*P}ZΌbJ#pUn7VV!>[)g2zqq# ؾ0;KlfBSHj"A,%%QgMᙉz{V|U_<8 ne4bYT -E`6;p*]HѦ.PJ]ǰcylҏ DAk@ik=,_;My8MWzU N}6i|ɐ"TFILDbm7^!)> Wd[lC)3d[ԿH=Μ|-: EL/OpNݿ>rErsXd,C$;-ʞpٸBSs/yښO=&{e'PG ~a y3őWek^ =Zȝ{IWH̡!]J\cuj+L|QS)P (J$R§d6plӊ"sGz~yYL Ԥ9ҡx ^QNaER,Ɏeqs8|^N,PPeVYxW]14&Q͡oKfxy^!_ FރSsrك9cJRHU(qI4$S/NLj;s:*tBq?KWLL%7М voF7mUIM_3ϧx7)!:&3=@a7l+W3O"q8*46D оd)c 8'A'%A _T$@4 Hv^(x+ߵMx!S1Q&b$gAdV^=&M.&*9 ox uLaJnIBxhJn D@äj'LN(  JZ6>MwF '5EN2[f.DAd h|F`Pa"q }DIMOkه9hb+X6ĝ%ђu2D6\@#h2 a.Є*{ì}f keIHM$+.$|%_y]:lB3T)yZOp%O ff(3r@ 0CRHb-uh}hWշwwTuG3c>%?+aO&qmr8?Wj2cr{gʝc6Ff0Cw\ v9dLsώH#G#o 8FFColW9^ViJSFG>1P1!EњRR eLg>y",cهK#]%f`OZ= KTD)q7tU}Ub/1pns06qvaTQIVȡ;=[,YaTAѩ }#AM3'Msa&8,a-F SjSNl~^9 fIDs:/?sA1[?,늈PXs%VI'0tz)MdqptzB"6SeK8yAx9@O=3%lS-' ,$-@.B#]bq@eRˎ e(TD9~Y|A *L/W=x9U@Yu#nL%v<|M]9\"8%>݃%< nrUT#X$\y}]DzsRE$E`XȽCUs đ&>[LՅ4~%,K2f;:H$H1]:- iU㴲Ef:\|D##Wi;Bh0isGBQi7*Px "˔e|~ :ڞ5Oы i#Q>ڨH0o5 @ud:m*%b-~2TԧǔvZDE2ZkĶuo-?8w]wʕ}c?K❵#$٨_r*E1-3#4){pv*Oۦϫ;T-V7&=跔{8xH21~ou+?:ZDLnU< S&t D'#23#eh޶<4p soW2'&;2_6_ jdJ)pf{;BGf@*va?gGKBfQ(d6DZNo  U:C 0w,Xe.p1Ҭe"-{"7EnEئH9e>_Z[`a􈩄X3vɥ-|=n咝'birfp qO4MX92Nɖgӿ `Mue QzL8Mi$QlP`22 #4lzHCEueu` 냏szw.' > XёujpRFbwgv0'u!mlʫpXrҝ( w׌ 5묞InJr^GMf0H֒DL9sڼ#=,­"K.Y%lܵ' b-u//_d 1E/H@U9>-I^Jgu/|fw~K_?}X[c0n°rѐIK!$1dWQI!Je>f/y+y6x3{Si ntw`!7>}Üؾ a,FCM ͼc2DdgG^Ej&dr_̈́gk0$.;7*3['OjJսU! C2%RNf z-j1B=R$BS&eK.t9}5bv,Ć3)^s"^$?lɼ "& 6ÔIO QsE|18bh׍Ud '.wF5'v2 T$LDP31R~Ҵˆe ?]H)ɔD[hWW,b20ݱxj"0$*S J+A-|,<؁Ҏ'3hhIƙOƿ%qr@$ͦOރgaM=2#6#ū'1v zɪI?f;S#)AV@>+%TqK,I` 5e(NN˻PadyhJ ":d|:"qxҢDg&26gz,J8XVҽ+!h'Z!r9)U)] 8~սL1p-9 Qz,{,hTiJ!GDĐȴ4$Lu*>IQn ^?`i>R.{OJ-Kĸ&/4NCwC8&L*/zF՚$w1yQ:lBN3 $JdT\9JNw.^Y^q5.eͮ~Qo-]Bg)ӟ4oS??oܸ6fv-7LZگگoV)6}>2:p, xS@J E oNb@͖_S;N)77?M:l'-(4Eq_T>k-U ;z,P<_&W . ENTjK+:;OlLMXfhJLrHl$2js(YY7*U]O%t^ N}̹m1q pjLW2XaAc6F\i6GgPmq"k;t[2 #ʽn͂H줤`^s6ƌ@&kSel,+u.&.8og~80g_qrEȐa^B_3G!>FMא]Ʌh9Y8;ERw)Gxau26#'NY#%!(W1*DVlĩM*̜p2T@eCƙbA( u'chsSFϸ1*5Y 6EȶlЫXE"2C.s !%$6D@Ri֕$bȶ$0tbz'WpsXI[Soq}DPH-@EG`1q2%.!6>xO/E.94xZT#1X',6eM_i #!JGF\:^6ſ.\<вv^-TN;ޛږ]ysnW}Y H5e0M zHңȂlXq#%L z /4wh)RQEVܪ۟sZs?sso3P(O^k9O}uyb9hA`z{h`7,p謷ݰ *<<1nT[p,s,'Wi@byy|kRg_|_+gN]kkʯ? }qN\:k>Oga>rXw;w`rpT5r~H??7/}ی:8{%m[MD&5̼e͔cݣuУ'dcX'ǩʄ璿bo{c3wfIOZ,|:Zi”F=bHny-Uǫ1ZQ`UXdPh6\)s `yk7( _3WG/;[B,ܿߘ-K)nu3X.N`%&!Rl.sE#Yy2ZX rqak(P l7PLԙWlB-=c܈dJ#{zULG7VMnDeψLyBVPM > CKV&/Cr>3|?h:X3[2tb5qU2Ap#:}z|ej:v2HaBF$Y;L̶}~4'/ '&ya w`$(L70+h.mMаUEF.EUu5[`E{؎TZ2)jLZ$;lX|m"V",Y? [s<<> Z_+r}?4ktk]oz>ybytNʭ[7'dD: (fn]2) ^ {6݉f+l ǖ3j c4n3eMoy4spc%` k5{gwpHY7_v .+%2Uf%\ l$$p^Kbi G \DOaƌߟ2J.I (\p ްxeMŦy F9T 7;p7;)W,]N60 |ف IDATF1 OSM cZK2HKD[ɘ̊鑦E9uA'ƶhw(,թ[Rj"ƥ[SB9l5Ni֭v<<?B~^cW<qXATJJ+b?=O֗O§`߹me / ӟ.H nVT(ͪЉBEy͠8iӃ}0_J[p՞H^LFz#mNiC Q$[p .o:K)ZD$3*G: ,]L[u;Gմ9@. 
Ab&|mh Gqjʧ7&i i )FT5W|`Z<znK0HJR(I;nuQ:%a*: Tl0sK2v[PMEDb-m*o]E瓙qSfIKbƞ(&x#:dsl/u* 4 ŞMfbt&MSSԯiM}FMh]3L6Pl0lj mFr`&PB y:tm5p.τekvagFU sي.1jYFZanIMz4LiS8-b;/[\;2ߢa۱:&cOeL)̤1Rn~w+mʼn'6jv 11f>3KGг 2' 7aIyzSD8-^LGP`;JBt/ȤE@4/t$" PzV >a9JL[ޞ8iP4`x ^x4 3dKp_~>zpc u ,u:f&.?l,<#7񙙙OǷ[wPZ /;qu:TmbXr Pw}9O&1[-ODF3@reSV!`vOF7o g%.PQaZ X%+2u:CFr4k+"pJNͰ]Y!LɅ&sCa: -ݘ DKSo&P%ke.!\%KUi.ٶX" 97fƢw#ГrКXE)=f{O ӷ#I۳jjbD5_\I Vf mĚ,pn֕1ڣ] h[>m= H6rt̶s[92SAXء!OFHx އUgL|I"uh qd}g_~<=i!rrmF/n!/s΁q;x [a]Kr8PcyU܂l䐸e5)@df8t'oJW[ H-ؑ @p 6F+uhAK<3} w&(`W)G-"%`?hi Fb,X:` cPЄz;qql/~],>Cq3 "?'?S?u^VmN)gN]sϽWm}iÅTL䓍>eeLaw!{OgЋ KKH>նL]͋ |;*y) NkoK%Fi tDžcn[)b" A u$%weeM'}H&JzץMAE i;Gi]'OxU'w(Oոt y\rCDE IEK` %r4$ o epJ0Qء=/> X#}l[)U˩nlCzfbF> 8FҌA ƑetJ#nd1͘n&YX&r7;.މ)A0ѡKLD&Mi1l.ʋA'Q#ߓ앫nNZӷ*Le 9x2e/7aPhh%+h4 [@@Y]BbwH`f2DP \ƞp)*l(>ig_iǿuorYyIoEtCV?~>Qu58 ^4 3eJ^I˚`iS)QtY./#Ifcedp#|j[Iֻs6 d*bfTF{•SO'w8c)0*' dћFc67\C|c3g(: dnj2A#3qd&8CNMJ\I+S)H@1 !@8.t6f1\.M]4zwLTmż Cɶo4FvXweV{سZ[ڽ)F-g>_qk/'akmydv-7vuW&&3Bymwo6K_# ?mI<ͨQ*rl1 7$R3/} k[c1QyAQ Emcw~ r=ӫYO<dxZ[+yͿ׋li[JCR^XD!ۖ@<;쓇lmv t| MUn3ϙQ {*ymF~>5Duld1qO$0|nUxNwpiyp+aX(NdK43q6I|3vq ?]z3|>x0癳,`gq?a>ShE˧˓O7F@F+?P5O.NHܻgɃׯj?3<;1<#~}|) O&Eg'm'~(x~ TTI*J,c g`¢[& :b%8g(mfB j:7ߝQv3mKW!pE2BwC/J"co²umYr5\\ ph<$]iLx6yfҨު%,]+PgBnWUL73]#7[{m*MT8hR?J]9ixX! iuڰs9?5s.ftH{7~fl|`p=TE&Lni\!GOE`S! F%gG7>k%S".caA, o aFrܗ %̐\d"49&^@OB޻V,o)l=XQ_As+jj3O *p%&܊3`Qf]96w s$-H{FGȹXq/L~hi;_/yUJs3̏؏}v2c{F4a]/]ǝ=״ &Q'eu$ݾ}PW<^{m%&TQIGOg  LX[w (-߇T J3]Ws]{yEi`C.eq)ya5,- ޤ\3ҨbcA5(ŏkJlЈm+ NRL,C* Xx  ڍG5ew^xl//v%R\vX3yhkxg .Gyݳ{Eꭋ_Gm2Uk橡\N]Qʙ fXgTHzBDj_U/t@sk%7LodT2@=Uk! U]g;_/}q[k`vj'[ n샢v2Gc~$9CE>{N T^&^5'YERRK+\'ϋgӦK3Fic{F F" `gM]~tst{y=]}6@_yaMZ)̮MGW*Tu",];4m-[ЛYcAMȦwiڽk .&+^ G(Ref!^|7٦)̴36mlmvƣ+< &6ryNfouV&Cw:Õd~bô.qLNAVoc41~)WmxKq[S+M'PE3a1oqJU12BxNIRHWUE 黤?O72ed#y{oxw&b 5`09kL 8@юR ').dV9]q|k)Ly2}nÊzu$vѮ=+nפ2LcLX1?7`6XRYp(,"VReKJN8 3U&vGK>ѸbN %ˈ%adţ9UK)|,9f >lbgSj!:gZ[ؑlJ)rf"Ir.֟y1Π̒կ}'lOt3un<_PP+hQ9h󃙞Ƌ+[|;8D%>e|9vck;+ ea# ,E)I!S}@3Y?/eψa(ؼ[('e,qAXTM)T0˔2$3  (II EG)وIg <SFLhgK=bsdfn1X d>v'nMΠ44Ě|nX@OG Yb} _j0WiA7tI#mXQvl!ȉ8i{+m:\s{r?7Dg5Rq;7W)?u5x$uxyw(BѴ)̄$pb).K3g=Yc-F4%#=p?,ʴTYgHeF~|Ǧp[ꀏ]4ʛXK 0 %&J5}By 9܁KP > h]tYTlDKg!N2b;v[1pn&׌mW?݂)Id _p\E{fA]<=5 t_\fAe݂-*\k}.YYY*TnZnlpG8xC6*D )hhF =6Mx-Qn nURny(kE[[#%Nđ즳Lh;qc`%GT5`Ʋ6%{HwvoQǮ"kwfw}ݽaj3?3x;^rhr^#` ~uR^vjK?Bww9>;s!J>$}c\6 t\Y w .^E#>SA\6SNSKU7l2Shs3͜}iK!7cXNp @Q De-S2! 6)2q=Z9{Q7`zW aYThK)n|)뿎vȆ6H+uι `ɐVj -82{4+Nr^°+cC̪رh.l#,II^<$\~g4qGv-qTx^Sx9IGʞB=%u GU<v̂Sc fZEx6:bC^K/d/hVH # 7a;}U'֊qќXaV.R!F"iYpQڎq 6HQ_;9CP`cߙ6Ӊ&y+^/Qo 1kXÊj\M[! 
UxIXIFaWC>:2^-g %B;3f `7D]G7 VJF?qуEǍ=s7ֲb:naKR`}Ӽ6ϜV 3yrK[k$ B̢vxvtZ啘o\iD{2$c 33RF@*`w09p2`mkKt{,׆ [@ t ^[dQ BxFmY'-}A;4kݽ\؃@QN9Tx~<fĀmuԀւd[~#ٟK%jsՐDOg e)f{ԕbD*+c0:j*gv;u];/¯߻wqH97h8>Oo;3 }ٙ;0ߔ?|xC`T=|f!Ȍt`{֨)vl^) 4Sԃ;w0zN21=_ ZdqoP'+Z4XB<M킏T&j$jye:\Ǘ"MXv19x \ue5+8RXuTGg0ǒQEAFWþ_sĉw:\#C1< ؐN iwN@9LW;mUEZ)8b%Boۿۥ73-p'#HD%\:9118jhGk?Fj9~ #%EؙQ>F͊F!lOZ߱1oxΩ٦X2av骑p1EF)k&rPO6DJD8~~k,ɭQZafgumC` v}IjZ{x4p1kPPDFqEs ]e[aNJ/2rnd]ljQ+LM] ihFr'k6XKm6]ٖԾQ&_Vyv߂4A7N\(ybҟN [ LXSXT`F U)l3" 9Upk:q4>;G* GWbzx?^|~^>qGRu 1ݬ<8h* 2 „ڗl^TScl03ZIrTԥ*܁s`%RJH㡚$̙-^>Ǐx;j:Ǖ9=QSt,`yԣX݉pzlBrP`j$P%m^{ݮ;,oryňpICY-<|}{O??lHax_80y,ƈ)a/H;⩅xܛ΃3.'3w?nvxgakvme5I f!^gEK] ᩝ<_Cu;&G+L=/R;3&T}^ 2)Md= S&[tboJ,+' 4H4ɜ0mܼ erg?Ngo@x[ҽblt/W^kp%vQL5ڜ獫i0]-6.-6yl̬(/RfıM}'g>zs$ lVΊF!swsvG57;rn@r<0K}(X֝1P=害ZGm_OYdVlf$8%QBUTZ%fUg^2\v4lzYAˑSy8gx@/gwVxʑIutxhBMo%q4zBd o<ӂP)pt.û]qNt"ug>T}BrG6fPG h` Dyo~9O/9/=334Yf1t1ZYxN!h\(XvipUKuxHKg#, :,ysk1ZFbM To^>294r̿k7nI H[O@I"E@(uMwRSMVJFʱS'PgMsodL,H1#׸ WrpΘa g7r"C.x/UנL8XI :as0z#AYʚѼlpMGth/Vs.OO@2Ǔf]6깰.EP1 .AX-gmJyQ9:rvq[)v*k6kcj}ϋ# tc$K9aMj֤֎'l/v46 +؆@ǽK`@-N>bҰCm磇y#wauh;VN?O?ٿhG&٦Qj;}:)=9{nMTg%X.x^K=??:ª#Fd}l{ xO]ĹhfDs3O쫟ϻ4̂=??%P={  J)1}Z8L!B(T#st_f`ПvDK#'m*7KnmN=rCl̬3 ֔D-g):͝f IDATH)b셸c-TmN2ԀJ%jsIPs]3ܫh6矙-ܟ1N)tS^̸Ml {)JjdXըy/u嚕r5c!O~Fvb͆j4 Ya fɎ̰T* WT=zÆE8l*BTF12)!`ڣާ茘Zgdua;1*׀ 5F l ̖tTӘRLT,r)JIüg 8/2oOIsAenڗ6`3ą)Pļa,QO{sy #s]Ia tt ="\/?P1%ۦ#ng|гoL=3>|q=܃TN+H3]n mwU cܞsu +=6StPx,'O;M'd%=Tsii ubg 0F[⤷{_rޑ;>6Az֦3=s;~6Ԩ0O6wz*Ⱦ^BfJE ) 1w߅QS*6LF6؊e^EhVIj1C7\zqs QʭoD{sw:~Me-gmDA1TZp_nġM=Tַ>9ijh0L\t |ɿ?cp&?otoV^fZǸ5Ĩ:V*4:"*~r?3]l4<{^Yo^,܂%Q:Cs`oD)dYG݀VO&n,m|?T&|a?;y@Sn Hnra>jStMBrZtDēp@iDDe,d[(EuC=c3 &Ċ8n1'9ROA c9J=N jtɴ,P[-5L 0H$[u Ti\_x0Y^4yT3eC^D'zҬ dd;51t+0+FzulSCbjXQw#n=]LŝsץDN;FR,Ps=7aRJ޼Dދ/؍;Gq|J\8?=N'ʹ'Fx,Z$VO؃mi܂>2i%gM_;X+<~jM\P4leFϟetF|6҉qB= 45lKשm}n)تC&6ƀj g쌬W!Jf񬉡%}1L;3:)=ޛz'ᘵhzY߶z3E^MΝ;9睲~*$Og`ϩ+Ӆbx g6ɺc-EɞIjz"t[Lg:~ًؑ ̰=,-8%k69sbVq_%&Nj;~h*ꫯ~?c޸8ݬE8ljXl)slO@6N$k$,& 5έfBQ(>zEj[4;<\gbr56yFD 6bVIy渪S;Wzא4BLLb3*83f̲YwS ϑw83&*‘5v>D\tFUXMyo4(T9xAϻ }lD63["Ā-cG:18MހS qN(Q녪 I4MNzݗ%Vܳij*5ikd`)O#V" b5aqTx#VISvNco!Ιmc)GըVjXs{Gs M'>YC`f1qf*!Dδ+pd@#SFd9'R}9J̉ްT_}$XA"9 N3È;-dA1p+if7zʬ*"alQ{5J ju5ݢ1nn$q.9/Wj.8%pٜ0&xw/3^4_XDF1u,\ p#F.6Dt# ^zƨY{JCA5^ h/LxJH)6m׭q<[\c9xޅR}tJ bZÈ`Jh&*>s ^>Z)cFRYSf=~Xxo1DNMkHbD͆+č0S*#r50++RG*ޔ _Y8*`kc 1ccl #V6|jgZSk;~v>~~oիW[DL}bl>_~p7; jM.-f5~˱nҜ(nZjp ulOt*'pk!n_B/.K&Z:b}p2Uw ? ,^95?SCI)(٪[kG>v^립w|ӟ?C8o M49 (օNML(=lD'l&dO/a)ƕ\qlh.4C1ٴKfaB"rPͦ\tƞA&\k f hڧkfAӵf㕘D`DR"Ubq$< h | ܨ0/lЩi(QQgҞ%K_V`f^GkeY|s޻vjݸ11&[.' ~$q?P`)@Rc.( EJH@HdEON$Wnˮڗ{c̵ UTUݻ1|{ަ1*Z5ɮFm+{5~F!c e(F&P=80Fi LMidd7#[pb YK܂Ғ$aτcm¦YQ!=dBp2HVsRvFF!jWkCy'wYN\91|;FKl;5ۭ1iVy xS O-H=1Nx 2uΙ=;kwŢjf8 s07FZM.X=so3eTm>^}Ɖ J"K&ensq`CHmΉ$[q׈chY'05Tc5n38ѩsۃ(nhBkIa [Mq[6刄5]`ZR!l JEH;>*q_fX-Fbnzȭϰx"CY!$3SK&A}Id$W;$!i`Y00yǙMȳ̯$[̂5bCs;}qXL4d/TÑq,aխCil+ok[C~闞|&|7}D9uK:::z{я~t{d{R2}ą ?OOw]R#5ybQ5vY<;g6b9NAuSCy,U ֠e݁'kuu\}OOn][x:88x'[lQ`6qpbP&Ao| y:,9.%-O68}Gsl4qb=|id65k8NWY:Qk5~S3f6% 6 mP.mNSNa-1v6wXԼf̼I HV6­%u 5mn9b=ƓE6V:!4bBz6jRo 1}eڀh7O^<,LU.^2tFY )b`ś{Kuac2vlNL#F9Jxˆ͞\xB"TVE<'vu#ga L$nk|[Hk0M hA4k)  bM~!ٗ& /*+(3u kuV3фFk020(:5vMC*aϘR-R sbm2jQIQqJ 6Gv]}d _d"z˺:laI•(]b،uS:^hy]٤> y:"82Tko淪L.ݬ Jիh<Ʋ1R羈XWn7R^8Hu:SBQW_upƠ3vS֛g(+:oS^xmmKr΍f==U)6BWj\kf=xP80u&> G?c~-5"NxffJktW׼t -^RX1[eᙡ+Kλ_̛ oz'_^JH٫O~zOͼNJ YxpTt$QDFH&7Ejd\u1t}"N}f&;ȩb ӫIj.:ozqκgM,C KX8΍6)Sp(`rǡj(9).1[͠G1K:f/; 2>shṋ%mdOۤg |}'mnؼO;͌fܝ3'%&@%%^%$Mfܢ}0"NO*Ս&Tuξ`|.HH^8i0.1N{.lܣW_^:݂\aa蜌eW[6Y,QW\ {%d $X%sg9ǡkg2U+TM>(h'0;UHN|*"N5. 
IDAT7r+oCAzڬf OӢ2ֆsa(7YӴ)̦{Tn4@.#exjOQ<'/ېVUDLڛRQ6vzEGt [7.Ӕy.3N,oYFO[gz霧[f:y}9ns}l%RbV`,X" !,c- 8 M ߝ;vbI^\ j U4B:[Z"׮ZS|^|VT,{;7fm(q|.tmБlHT=S\NTH-<#)w|>{_mmo8 0$HOqhT]NwN*8v^,#et\\?9O1FX:HK2p˻ '5V5IԞ'?q Sڸui֛W JR>kw`)[_ :nR)FX 9Sd%3)}iv"Wr9%c˝u*[fg7pd Ⱥܿ 1jP ʾlK";@ȕ2&GZ^ IM/6E?^&pG5RHX{Yk9gjTdVvKȜd;ԽhLb8[cu:X+u頣9$#dnmpOIMN#AtupH2`H-Q !22`,:҉kd̔^O0xae7Ys\;N<aGkIX0Q^Bv0 x&'`<˨ ;uxu?^u2ըdAI%5Gֻ+:V0zU߭HK\MǪ^Huc}tqx.H͸ej*ã<ʻ+QR}Eҗ7̟} ͫbQIbN}պx8̆C'w>H:?¥Djd/8;n`՚`HM%[|D̾ebTӔ`M&:n:)Zltjo7sIzOQjh,Hťr/p5~3 e,6mF^ڸKBO{DŽ UqH׹0H `t W[`RyIÎKo符dV{IjɶjQ.me8 fIӑDTYXK`~,#\I/;'M;c̀/" ԵaA5L^[pՕc|[㻥=xYI?"{YWsc5[3ZbʡaYUhXbdzXOw>"_[=ߋ)6nq$ IknN^ZODqaMm잒ɷmmk[ۺ7-g~gRJ)S}]O>ASꁔ0 [~[ۺg+2~3X識t JV]}':<>ǫӣ.o$ "wb<'5ߌ̱aј s3{i|kOd=̰n] ^Bx hwf =ΙsK{n{ΎH̬ sULSxSW>dNfy, A_X![Ɍ(:E! Ҧ0qx 0*^^Ց배X`DcaXS*acU cx=2^3Na]îsvI{#VKw=KK6oNT DH9$%Xq/ny y198Vݩm2cox%78TCu{fݗqZ^~X _$}cƩ .%o25 n2&7X8 nujUκ!|}oee j@81Qd#Oԥo  wSĦ;_4m]w048N1w:j$c' ؗcڛFNd^NT7!4niChl,bԍ1C=xThj~snaTj )<;FN4L%KnX&[ډ 6 U84|CX}.GMͪSӫqah1+X6+qR |F!-s!b3lf$ ě;y ttI.DI3v;pM) }z{?lacӌ ˕;GAfX:KXre )tG .ժtb"ƺc М80ue /?&^Z4vϓ]![W=Uц=)ֶկʯ4QeSx}mr!h?<pXLuxƪQh/[uПS.nBR?؇kn&}}3-qQ?Ʉ,Rgӛqr[o)v7~7>яR9ıN` {=؅M~~fav޽Q3',op.jUCWs$ *oU# 3>@v-`U,& flfӈk8;{͆FAy$u91~'鶬8pEv22 ֛\Hmc//ɸNgY2 7@RU5 | sY0s#2jm^FZ7||'fF!3XOji ǒ嶽q$+dFM|hҋɩp7^ FHĚ.8OݥC|_:c+&Ep^<_c썶D;{6}?g ]x1ng3Ӆ9Ϲy%ewj zyI;NY$"?v1á5- pA*"ո77B2|U^D41,$ TQG5סZWn߹r3p9Es)߫6W]W61jW?4c"$䖪nG߆ELH`.P1K *Tbrl/\g؀Wd1G4bk'Pguq6BbebX d}a"$>O/)2[uL y'vi`I# 3`lPjSy7Yܕ~0:DXb6Lp,'#N=vSj|]Dxd'w<簙S564;L];eyg]_=0FoF47&t &\J==֛ϦRLěM7#Sx-Oс o7EjP %X@$Jխh2}W@;EA4 wA' 1:@WkWNu$0cmc43Yx kOqv&yE6%0H{.3,u¨Uޓ15X±q¸Qa8'fs-}$47z8Ċ$4.vy zę2豳EpZ icC肋p81MZkiݧkس΁R4#7(G3OJ BbC 30GYfKFߴzVIáq^tok#pH* Sd6t*N *ڄO5Fr - 0XZ7fΈv|Jt\1NBoDF0s#.M_zSKKxhOF$RsәZ!.psED8a$ir ~\.K7ES+0k%lzB>dTcW TŦ 1Z|_(>  9 6t?˪yU$o7m^ޖmmk[V{ҥKw~~.Il^_v~>򑏜;wcb`[YRBsaίOm;p֕#aWYb6"k!Уs`"/ V ZΡ9 N4KቿHs)f֛PM$Oĕ+W%4a=>fCh2lv`w)춁ѾNF>kvX*da4 N[ =| ;ՋkZm+`tbkEW{IxuxD^iP6IV^OJ}/뼓N7zqYt"i\F2E\Mʶ<z}: Fj#-S}=W*+vW ?xӞ/ol~b4¡ZkM^nDlŚmmk[s#L߼{fbzȏܺu`b`[v*!(?S/xs0GH̬0H"+7p'Us2VJXQ"vn{}-ʯώ9`{1mm[}~>m>웫d{d39#4f@ d;M") ~IAm`wŔN>Gl (\ @c([OƷB3؁}"Lsv$ցP1gF^oُ w6QdaSa>aOUϓ/ggv%wM9tin&ֻ&g_Ҭ3vnxg|W 0c-w va+X)伍T)ԧ` νWiN8l4Ρ Tm{:lTg\&\u]NF`مO|&z( lu*M,YτcN{`>l WB_)<vHI +T_x'ЋQqifD-1KXq(5Nj1Mt!D IDATOG<.ܢ1kmZ+XYOz2v·o`p6"K1-hW73yyf0eC &n{ ,4ha'&nhPiO)ǘ 1Z$5PayĀ ffsHNPb-Kf=tԻ-3LJ) R2$#ȤRvO7g amX84˼D3D;׻ۼ;V_3B33$wDde~ ĜhĪC/|z0LJ\}ğOͬ³Y"$"9:4݂_ >_5[ GX7v=M\>ٙ52.Co|"*JʊYլFM C4ou<{j{gm=4n+dN4 4]8dxC<:jT8#brFuG06> eڠEtR K6idMfEN.ǘ% { {kCh<@ƒ̃Nt ](W_EҘ EL5-K6%Z!E`Iĝ'FEե\g0 zuוNU 1r8랅qg :HIUpvː$y1(VWbLt]DLnBqQ 3\RTZoM<8P:,w}jNn7U w =h3[Pf0;؋HkF=K9jG c!C~2&d tE{$2k+Fr |_MF*(W:c&t0̻N-LȔ!J0?39Zkim-mԅ}19Uu\ǧl{]sztݒ$UJiokݐRA??u\Im󁧜ZlC 1q}|nb/6:6^_b€61bdݗ ?g6GncUx5 J⓿~IU4%d65vvMa01\*rpscM*Wնp4*!U4p;oEEJ+0v9mA!9dڸU0{9וvTk7*-bՠրtv̦v@JUG%aj6\+ 8(˞ E L&%.ᗍ*LS!pjGi#4]Vk [K34mJϗ|8!v Q!-0a:F?XCJ&G 9ca=S$@YzgehssVGU8B lW], s{bU?.8<'ЃaH "qCN.K6O&x[-[[ͤa<| ,'80;h80l0P"A{xf?,R/R:H0Vwx8FX9k҉Xj+: }6tpJO01fA=٤9I+ czhWLnrqZA-]fUڌH" lmM@ܮ6Zo}q΋{֙b.f#iiTShg#bU,U?243_=4 Dlg䨓Y׈_9AocK(OOB<뷱mlconJ)wooM Ĭ_jD:r?￿mlcߴJ-'mg簥yI3F#o(Ƅ8% hq"{b?ʛ[Q_̯\AΤuҦ_6QJYYݯ_ mocCr81#sb؉*dɖVAHc{S_j@bL?Re1@ڀ*gI{U Hlv`ňVjpԓ')_\f`ӵ)MGҴk[CoUn}J ۇ /8Ύ̔kyFQfvݯj%D%쮾)mK Eĺ)`b)Q4SP ,XB8Ğn<|,Wݍ[>) BXoIğL֏*Y/7Гg}bjJ_jy4qB s W.\'LY:ml֜~yf VJ+2Zu{V)cGy.;)uN{#8J<< ̋lbp1bxf܀$WcVNV̠V_t;_Q5A Y{~AZHfCiGԱ&nME)Vj3S1M t39A+E2km_ y7֬ZkvŅw4H8% .+@9Vb;'͆>0{q_9SE"r#/?/S_TF3W<,J.0ћ.;KDV(T.IVYx wKo'MuiŸGo}Ӛ@놞ƫFkռDy{z:Ǿ_3 4 -I4(|.kITm G ^BTf:R)d$KaF \$iBcM Zmx^e57<`*LejjDoip8!:;,Jr;VP! 
To}4ΕnUh60UcqWEB_6/k`F&8 WDf(1KP%Xa fBn9<-UH;&ё#Q&ddt$SA\.j%*shvӗdO2{U~p*b&QLa?1Tpa̭LJ)U>`N#%x#JK' XPa(kë{4`Вr'VV.ʡzn$;ʶk(Rpj~S+ӳ=fz¸he% 2Z`ы (L*U+0Ka]vd;:*[7r4ڂMm o.jLnBV0̊AB\E&b?ȢhdEלP2|H"-jJF?h*S47է & nNCRj!YGOB, ejv0#LI hky(X 7;1`S|b$/V wIj`ҔRU$5ih7ÛSf6Rĥ#ngPk P}=ݮWzV%-m`/,lfES fPL=w e1TA-5 ~<}Au5]ͥ+gUI3hGp>"Ox]^t뷱ml㍒7 <5Ms__Y'M~|0}۶am|S5LJ x+39St|mˆTspZRWVP^R=-\{ S^t`c.'osm!6wZk<)A#lr$`ӊb>hK@Ԯc gP(idkLT*N-dC6aCo<$UYs,slEG9Ki LDnb4^v,pNN{Ҁ)9~kҶf!iU4+̠(YJce]m(U #,θk,~/G jA sU͖jZ8p7*XT< :) @ɩc ԋjQ7'N-<.C I°P d"vp/thl\^:k5؃Z¡5)7gNB d:+{A`3eF9۰V.ycq N-袸yQxU 7ž0-zXRP=dJK1gJ1Tے9o(%B3 fGoNLD4"P6\Gsl/(yۆ X5ޤoc/2C51N#&F-,u!f=-M;sI.4I:chW)pXeDs.% [Q\' ֋"S2&TiVDs3hLQ ))o 5Jb)naKhZ{=a$R&cTrŵ®̓E(AcMAciwWNU-6339q7eK$h `G8<UKjkc*.#<`-*7`1\TF_*1>4 ߷_j'.r6+"P7lw2 G?_>O*RI6C5Q\H?뫤/?~;)s>KuXTPua_6[V~> )p#XJÿ_~ً ׀Uu}//r}lϷݢY&ЪtZՈDf0)4=FRT0R֩R,nuA־pp}U{ni zV|LZ5 [ ZΘGy.lـbTzEK4*UBAZ0#j>+@΀4 y3¡=`cinm LNE ^eP U#%"0[-VMR!rً:">e4G~n]zLLV>b'x!qV7 F Ttqj)vYF-%;]GyƟCca4NHq$4Lמ0|iGӈ*cA-2шc&+PԚf-KYg,PoYx<?+?A+ScN1…W`&D"&f̰= GGkZ%?`ji䨕v\ /D&.baGw;SòSl-52f U{9qGW8vA}CЮ4Ü. x5 !Ugc81ֽ5WX0:Șñ%Z8+U/t^dhan@Cf cDWD'ؑ7=rc;{6 ){]HC6 ]c;* J1!3 pvέ,Izt͹-8ZVgpLO<7n9o7%JPnIH??nKsJvQ!-N CL'^Ȼ H:.fp/"t!~2ip ٓ\9~~C=hZ¶Zx H8%Ֆu#_Uve(US5+ؼB! & I1`E86 . 3h+Y^fpz2z7Nn-O1`M))0}>3L m|ð ºAID^~;9~hSXlx䃚 ,4fkY򀃍N.ۭ^ l3dj;,1-աnC%#58]E 8vCƃ0dbX3k!Gkdb2{H ŚCӳǛ5e/ 'ųE 0߈Ghs 7h/5< _aHg.|]v+| EpTxB= ӯ$! LaB $v;o~mS ]P(+A8Ow?N1K3Y8X:?6; Y^GoC'h _uʙYfP֏]͖U.hvQp&>L@4;"NS@襂E""-l][_ZIb80dKaOMC9SC]Py_T- Cz.wX6 "QvJb(o  9dX%NЂ. K VTEKqIėNwfiHNolSJ@;iC^qіw IDAT7f#a|/*0o>Ҩ`F|j6#MV$ǹɒLV;RoȚYlxJj㾶,p_}C ;; 4A L$v\rK(D^|hhPS9M꒙V꯭TY'r\W}S؇,=s+c׽{ThVzXX!%ᯈO7tQy z:+ы%lw}gK4KL 4'gmt>kq ,2x ʎ"V㒟/]JO2/dVMhZVl dhJ5]6.\N6DP*hWsWW3lymPȾ5` ;^M:iǙ9- J>:sg71K)h+^} CLլ3;Kޘ vs{cceyl Mva``k{+*BGe0vvbs)ѣ%L=ͧ$A1D@c=UΕa$FuQqܝõ-X17ZEN&VDH0qk[KY"5oəNsgc=3S [Cz=v m0waǢ-Tf]QY(##N"-D:SpW?, dŖ%%v@=[G*+!\˷vU%fMG 9o}V~66kDbfxZb j8֜5 ??3[K\TAuUc;r;LJO~򓅞3׶V<=lqJZos^RiPY/:j*/Sx4K -J'1ի{x X:$QͱQ'O|[ ֳꦣ-# eXkN(aV eՌR- iI!y"Jc"[UV5hFr^܀*aᲐ vD#s^LTUh0%<,U<&N e7Xl͕R8fl,U<73r Ud;9ZԘJ\[QJu6;1Ƽ(ZHj&M`kUU-Q,%bZJ&>6!, CBH=R^BVRQb wmX #!ɂsL4hVI+ A*v&=$!OeL|5ɥ@E⅂V y fRŒJ"%{(JWbA6L!⅍\v㔙vձVNR'vv }鯰 /V v3]+%"[ Ji/y?)$²hwV<|Oa&zڧI}S.Xsxo_sZ#V¹R#UK8HϮ!\ dd9G]4\͗{upyexѰ\؇rWѐT:!MH(CBݖUO,jl 0u5nRuo@ml f~뷀mo4OM_G衰grz#n|%ii8fJkEqWRz'XMk~n{1%\cJ^XQrRH;dnRvl`;ؾl_v?_{0z~fDCyas4Y|lf;ƞl5fi)1 q`0]G1\wd</$b,ehB w nJmD;PV}j~ 4pKRC(s0OԋY g@{Y3T-~NU|*X С>eưd ȡ(fE+Q 5xJ DQ kݦz:jbç Xơ&v a'eqݹ|8V VK801XJ Ypz8A?1,yٱ&6`6mlpvE菚Y{VufP,d}//Uي|`C]Sm*s_*e'=㖞5lux\d貱˰JQ(EI?wkm30kwxԹc-I4w7nܨQqgAA`LjaUK^rhU~lHKLDUk}bf(b;9L w =tW2FL*str9&F<@:;O8@ng.M,&fliROX8CM/IUE`ZHk^c /v`^n>?'|`Vvj h%L;Ild oĬ g;`T11݈=q, R 2,C}5)Ghqf8LɔMF&1&)5F<tNˑɶ̕W*QEn FȔ+kih)Pw]{໓e#1,Tƥ'm?_'fF'LŽP̴V͐=`)H)֍8w\xMc^#.q&C5 Bު'P(Kx,suD9ӏt:/ Cg(9%+XZiB3Cwo']1 "a>'FmbEPYuZ6ﷱ+Xۦt6^'Io~wwy^S4 6 'q\gbׄp< C/R=҄yڍ3jR~ɟ>WE-kum$R'~rIvA aC Yeܱ4105&ANT% ?ɨ<` lZb'醳~k^\I6f]}lf0'Uf,Q%7;0`}mYW6}1)xIV $J&潓̠Cbp4L$K>@oa*va7ةrМiWɍ`H;jBΗZv SQCU)MrV b<96N'zT1GFVEf=F7Mr);P(gƛW7X{Ȭ#]<*#OձtD|fSknUteÊr e|7݅=lWbN_"ȦH7D|}V<1INx[;(s̓4i[,y^S%pNjncܐizkk~F ځt7'MFDMk^o>he5ٶCq NP }D5x!OюbN05me%W 2S"ZK;S1fihR?FQCNOMRI  ל9g|G߸WZb ޭe,{I<K٭3:'2Ȭ3:85H‰)kzhL" xe,`FAjb(ܖf\v3%*Ejи^VI[H[adjd\ƾjstùQ +"<=m;76FŬ'l6??}߯UXl>ImVx>Wg-1vՑ?/|V!+˨7a$QT,x"4ݰ5Y8V\=|R>G+~;MJC//nz09O&#"ațL%4@L^~Fk \Tf1dyqPJH%A2@m݂@ÀTS`ؚl?Jp Rh5 [Dآݶ!4jYm,Euω~*NE*D̼'OFر#_k}?4N#Y &1@Z!XTa~(=*'ž۞Bj䆋ZF#ʵWO3jQ48CTNы^Ck Z6a\Jܴg@g3氨Pxɇct ӈ 抂"A:6Q;pMfFUG4d[ & !/[*4ǖN $ֽWpưgcrJW3:ͻ6dTj,.T-\]wk9׎"g'̺S.UVDHa)Yvj)--±:ޖQ.%*>Ӕ?D. 
Mh,qNjPڄ *%*MԒ)JT0P50]/Mo.c>:G{cί idrir40ZW(n8tfR"G)áib|sy۔83MLtN̪ɨn>V:f(<,T"Ř݃CXYu+XRO;;0E6F'I NTvҠn/8[/ԜD_ԝ|w.+D2\\Ի^=oZe^NSsA%5m 63݃ H}D37TބJ³{24K<'fof IDATq$6Jph.ա8y"b9+>'_ǃ3{gT>N.\^{Ye 񨕏0N$JH[$FnYys~U~+_w݌8Ӳ;̗T-oyZDϖt9mI[TEFTBd)ugr3>cM(d9 d HgA?5qsbdV5R0;7{F[޷ma4tAP'zQ%[!`d#fX&+ldH6T6Fbt4ݕVO$>>Ë5:du!X!VR5X/ _Nn5Bӑu)eCHż5ڄ>X97$t?'@h7޶>!MݖubɲMy|4OD8Xe^K Z2W`9Z>t)U#ٌyƘ3Νs88x=<º45hLD!:z}2a"Ij+Y2érYs<,'ʅz~;*$r:x[bi"G+fsm4X6f00f#LL%@qKGfak~9U]jy\hYp] ]uf4`e8a/F扻)]cFm|Ky?R'}Hywcl`ta=c 3Yl0_gnO3:ΣKg]2 aIk>E=5akV06=-yS8O}Z~uV["-R[W^ pR"[/?23;_d{0s1F(f{_jɂ$0+j!XQJ]cܮ{|8x/g]clwX]b_ZԞ>眧-oFńKP;k׮/k^stto_S:ZDp$ㇻzܨO_;֨/hu T:E*ã_K#^W=T"@|x=C]Wmo{[ hmldηj JlD:|׾.5*2[eu&| f]>B ,DXJ%ECHItl&͂O!RE:U_ՁsT4#DVxx;n碌5r!>*W2[Ȋ3$)a@Add1A̙YwWWB^kaJdgq'-l`tιM9~-?tdh NQ}WݰB]v*ݨOHb WqS!"x\9"u m9ؤT䆅&H$ɒUo~!]{3UIum!/PRpyWj$9AGw& Q+]>z->4|Oɮ@wb X6\9x.YO%qΫ4kc=j +a/KHQ@!01VEy fcsSmNr721/]j,Y"[,w>tOވˑffQ]P IcYthv-%q NoVMTN(S\{fZs#wgǼYÏR˷D}{-{%U[:CsgqU}mm }[aq92**D_!qʪ+PSyi4%st0 b|FGŲFǤ#tB9_I4J.Ĺ뙷MR,GƭPX%KIУ3',s7+$5EP/51Ku!AXJf :/ K CorqCGz?3Ri]|kY3kCA[3B䌧!h!1'3,ҽ)@36}zef )lmG3{jU9(^ū;^ ؉.v_#7ϕNZo?ɉ 3Nw{ u2oVygu6\_J95qCsrrW_ Su&)v[ų}E h{k7^o 䟧_ż1\El\x]PR*&Lj3j~rK 3Q:EBuR$ "\@C4Nu" SD6GXev[rs۪kKA-:N{N pE8F*}{zI$l3~;}ARo E (ZFz/3M46 qak/Y[EXOY-vZ㚏 ,ih_p=SUF*n[@(FQ =eqD|mZ6+M\щK 9*D'#EOb6 MIߞw"kB.@T6eSWgcL cՂ[h1~X2+|`10؇lk]י,"EBtMMM sq/d}Ke:KX<&3t^Oev.홯y]ƿ`9cϘեx-xHp;'#Y2Z){wD\Qt`.$!6 s?n|<~uy)]뼯Ip2"Smp]\?$f>3! U`85$PU^CVV [XؠmwDrs,e݉Sq=Yd9D*N1!7J]7v{"Oe)N8uy?Pmhkm:#g:&֒D"]kRۡڞ]|w]鋛/[Te7o|k_Wp_T]Uݪ-fȳ+l0UVm%;93bXC=Tלg8.??s?sväVui%UXI9扣?yūzo E_ތM2˚xo]e"MR1b[I ?-# 4AgꨢoU K9ms3blְ ΐ 0|Nj IJ -t=t M? v*N l,L t F1iЭB qd333ٸM ͺ&ZVP?3]%=<8'%,.[-8Jw&Μ84ȄՀp¶p+!ELRxJsPv=1ܕHxeLYh݃gRDzXr(r0[ZٗĢ0S(id0QWg{͌bbx/1Sʈ2hg¥eP%1~CBh!Z庿p_<]؁'-BCW+XFUb!0# h66ghvq ;:ԋ~,ňā3Ԋ5vƓ 7ħhN @G>fLHȰfq ¶qwV#5Ckt3q(.^$JQ/Mmrq#" `^l_"͍8guK-Sɫm]V&ا̌8,]{#0j%eӟK:Nl(Ejcfx n *'DZfgx.1|X-pTͰ/3v]b_UMo3:=̪ݿw/}Kgb+*ng#T˴dfPF]~v )R,x\&ДW'''@->w zIh]p9? %"z1&yH}yS 4Wv<Ǔ0Nv4z$WyJW𾔍U[U\*Uy[*:UС1Qu+w8MVh1wtqe+JsC%4f0fDrQIċcq žة1kv?X-yT?I垬j[H:b4oVjrH%W$#|j 93sME&th[ F}_[0!a1Pt],L1_CP:yeh2܍.EŒW|1P3=UcLiy`8-bΌGᣤOgP6JTh>#?76Csi$'[f75TX݇eY98+Y=ng9[-빙ʳ ^@U,%WBwD3n(r\ >wKOwa V~m68|7, ÓwWfB]^t v'iNGnHg[+5/=dM潫 \2_jF?ccwKՐ|Xs1k_?<ԑ|nTGX'g~_OL(/wn[8fN4O'dfRY9L+WJ` "!mXXC-Y(,\"PtXDZrLGzT"GeUs ru9/6qTlw AX.=1O3]zD.GO0‰u69mPGDX'padL:q'}Xάo!%oD] ѡA
=bo[Iͭ%,+;:F75t,):όO)])"ov [p!O|C[܁9Oۯyk 3K)m6i/}oo3ZshFf_oى<8wA ۘ;x>WG 5t]PWQR(@ՂJ*z=" 0ܕYx ~D#Vp_Kaa:8 ; g*iAa|ZFX:uMY `-:zbcv c!2",Jpǐ'Uu{<Ж8e s%o(էΠY76Lx,~ܩFabee{H*ګ]ݮB!Z5Zv@Q̞ӌxh V89Iؓ/-%6c|g ,; (Xt_Oi* FWulUjۓT`.FˇIϵ|y#pW=pvػև#7бhGXYOMqWF/|k5jcP6ssIE \J̰h,y͆j)Vg,kn p@UDp5cXlk \ ]}tj:<~ X?4g}uK2:9>>>WtJTLEikEV CX!ւʘ ]_Η1 E^AYož.-LuEb3VJSͳ/^ ?Lz"E k8!3~k=ȥGIZYtXy!EĽtJ<^p*}rC ֕ȌTH"?ni4sa~a꿊a.v]|ّRes['{SՊ|ɟ?o~w+ O}_򓟜@+s_*!i*@dxfF O)ӊafOŽQ cj% 0TJP8;I6<38зaF`u=QSaS8 n`p=8V- v]wGX+sbYf(S":Lq$Щڨf KV-Qo .TKAZɔVf!Po. ';xCtX 3cjk棗&O$0Zuf89=tm& LhL]o[oFIɽq4@k<(h @g>XU a4Qm lqRcBe:,&[Sq=s,eXv ]6;0j+ռD;FJn'n],lx1R'C,U~=I(]H!pi@􈹘Q,YR"4s͠AI [KDP_؇C*\aI vdg|2#mc!RVtFOlZ$kG>ǃfH$NDQQAݲ Wx}5|k m3zk0$9eK(pj83I,2DB IDAT#1y+aKWR ma6ĦJ^B;p@&VS%0d]C0Kػ >gE.̜o[E./#6g 9*\4vWso]b򵳩,4++/{˾$Z_?Tҧ?o {Ϣ'>7???LL=}n!QM#&U|]wϯ^]1$_*MZ//899ޙecJ@ą;ct:(VXxܔn1h`g93c]0%Q NaBiEZDc٨ykzgvak鶣Nq<i42 5fk43RmGD-:oP_ʘ(ɧZq=st8#7N$u8@(`Q-Yc}2|J2v ѱa&ʶWmQO^ctF0̆P9uc1tFHJ!DgE했ixm{Jm}kXLӯH @v5IMTڊbP9@vK@jjS)!:b^fɬ!pk!zǷ&ObԔz+z?yn@r[5&#|-"8jBbcMƔ{oNڨ#DJ_C trS8f9ء彜ygF_8/ ~K(BEщRHh.cf)6-^XZ;?d8ݍDi νZU oC|^dVTH|g=-u7;>0+F1^͌8Auvϻ nv]iPeU8zh[LL3+w|u{'~'xZ3[㪽dI,:uvUrO(m;J)Ϳ7o}[)o2;|wn d=/{~׿ۧtbooSJ|;_mo;oqFKōU{trӟ5xZH1ps ( U-BDJEj&cL4 SWi:2Յ3%;/P.Kt︢&ˆ$/d^Id1u& #4v\$,1Jzbf"A¼GF?,(M8c}%Bk؀âϪ(^70k1sQ[C|eIiH 3,P J â1DSޅE^Y]+pr UE<4&9P:aFpZW׈Hz!,IIeaE[S =]Xxt ږM|+m^ϧqHɅo9l#좿gN:t7/9pq?LfNs4Ǣ|kWW~gʣM:SZ̍CoDCB:(,7M|{w3>}]S.=ڔԥY_ahns뭗> }5-/p+;wPy?e`9:SXC`/i L,-/R^zTv^UI kBQfCIA+ ؠMlʠ0kTFƘ[ kj s7фd1Tz:덓ͳgPm3-gMЍ|t{E3>C%Ljsϩ1b24VagSP39CQqa k4 /ERyBkl5 mx-/[X_hPJtb"ra.J6*[ڕ{<@S ̶LKP c'w5PF5I.9ūٟ^^ SܺEߎX?cP{DFpMJ)or!lLP;}ҧ[=jEvhsʣ% (Yk:2hhβbwR{.v4Nm۾/*UtG4R?K/}}ߛk׮աhfp=>dsA+4ƭiڬP5ַկ~o|#<—$miOo6vW. 8SWx,8"zi4 De]pGb o/d]"ٟ׿~~*UVw 1sy@$Ie9 %4 {n޼x;տW(e ~ww@mW$ϴn6}W mo{_UJy&j뺺Ї~G~~~TιNi3dIiĩ 2~7 o׿M2,(Š|Y{ 6tF,jW8ϕ̒i_\b%q}tMd hmla,W*U+B.fٔ3}[L^-:ӑ-vâ0Z6p.Q~ST~unsYrcڣZ8 U2cVα]YHZF^`o)nu[VD`z\ Zc#6n 9ZYZbVx B{$@Smbo* =QIU>"xHLUTC3{(SHuΏ{I\5\dwQ4+, ʑ5cZL5Dil޻r]j˹mwNZH!AK@? ʶtvN΃ejnl abZݍօĔDQ%<}[kU9~j9dG1&7׭U|l])8qXװ5 3gDXn tFfR틚"*]wE,NdkW^rmZ3~+\3:զuj +Sˎc,V} ЃB5!{$I.u JKM&W$ec%;1eapGos)+ ˢ3:Lr;gf(m\z"ePHkx+XjWEhvQ+*s:CjRtɼ^,CTb̺5 A c"@w}^w{a7\7T|i_)PHc"|G*we~PqN.c_L }&%㋈`D]/p~hB#,xM)SSy]B$g4pmkߊ۱۱߲1ˣUD̗9q|+ܔ??˗G~G~G[>k73з&UĬ#wq\>c?#?É1.+v}b?m` ^BfC2K\(͏L&EǮ v[X"E?I2ւm!XXYETH ,;, xde}4ore;1N/n^(!&# Zӿbi_ ^k'+hv2). 
4YcZlñCbmon" `*h̝ '&uB;)F/^(sۑ9,H{`&OeQq o VL3vϔE\(9&R[kS?٩&MzM.MNY5S lm5FvcHyiMJ:+W W D悱hW/{|N{JCB^Fșz/!:k>u/z N1B )L I#Fa@6EʾWCԟ;5X%)Ն'5Q p v\ P4ˆhe GЈu z;+KN 0E*I4SicZl|#jF c/8 !Y@45=hni) 0>9VIy_>_XY"7ՔZ˭]bH$ 4k|Y| }޿ꀈe/ B4$JNR;/ZX#\1QbW|JD0h,e@I J֚0PO qUE{/<1N5YryЋiۭW؊۱۱ڋ㉴>cs2j>|ӟ'?Y1/w89OuC믛>37z׻.]4kv3cԟ^Osca:~w?O|b3vSnus|dO_k-ۤopDJͳ_;vݘ zG{=C@ݷKs֫Pr7n7OB\tXTBln]]㒪+z ZH pYcX1Um:My3ͼI*xOĀSgyYik:GEEP)Xى'iMUzv‰&ɖ"&= cۉŵ4%:*vGeGNB2b ʰ*^Ʊb>ivقPYB5<% 2 &/m V27IaDEW#kh61 s(ȢPNLN6ublƶMmNe)3fOqlrnq \iQk(ӆ5!^{, yb ȰCqBjf pjXLFY+et?,.C;x)UIj'YyN/|&4}}^C&F7u:ճʿ8+i;718OY&k!s7\CرlcnZ3e'"Ja bbɢ"%o5mx%i)6j-'4h_ aʊ,['ÍZ6ʽ간M-D30TCDa\*& 7C{7ŅH^8/' $zgJ.!W'v;`/:%쁭 H܀#"4{|zD*gӛ=,)89,K6h2DS3s 7&Kf|;bvlvlǷbT8Cu}|$J׽QNt}k??{?C??UU4ea眿կ>Gy'`2n2v3>QNl5KGn+6Ѭ՗96*z׻~Uq~m6gOO/‡?x`{%Rt.~뻲ȶOٛY%OG_~H4lp){=#b22"ehۨYrvY Gs"Ny9J)2* 1p9iFFUKiT +fce!BHZ銝vW%z͊%lX.XTQhy0&:Xˇ`,ec-iiҀ-ј%p  IDAT[`Y_K  $n5]&ґBU5q*gOu' ъ^L`jz~3N;xr֧ 8G?ׄTs&5ӊڔPS%QSUvkX%jՌ\5Yq,rԈM>a:v5({h0 RI\!R`X?rL\ ;%ᚑUW9TsFVYԯEqF 6ǜ*f<|T꼆i is95_3L3+ePZ4SXo5*„aYܣ!;ٲȈҞEns'b ZB6:KX<1-;>h ʭoZ/$A%KkH!k,rJT- TTQێkdf3'h,놘(܋4׆*!e|;?+ox^i0s%4n*%ڵ#dK*f/G€Jyp2HZQ>ZtbRŻ_')^PkO$YVQ̄Va]=gZjg޲!k)Yr,P \* [M3!nbj9Iz,MHߴاD$ 1RFcT'=h0Jh+N)kTP(8QF'+fS~`Sv;m0zEԮsF-lna=Z10&"A/֣ެ# DS!u떈kXlLSS&FƹƸ,^Q0*h.ٺkmsiAQ6X5Eh 59 ,34^Pifbҋ3a3t A$A:>"vF.wGJ\2#tN1q崆ZIjqmQ؎؎?=\sk׮Z,߬2|[}g}}C) .]+导΂|<ַ]^a0S??́ϛs"⟛fuZ|?OzsߔS\ -ts rsoE%pbYf헼gJNBAxBfdz{Tpۡx}PQRcGHت!ҚCB _8Wؚ4SEgQI kɥIxFX8, '&XFSF+Y5Z#E6&*32Ք YRD%曲HƢdn-' Y9@d4|@ WCVv@^lUn_ 4ϻD2b);JX Sda^"Vwt6愯:*ВOYj+ԎJBYBRc~;9A`kh%LZJ-e75n{m1Vؓb0(Q\K[*K-*ixtԫ&NaX RRTR&s!CTZ[[~օG0.,>l';"gP ֆ0NZLHq\RJ}f +yݧ.Q>\|cY=Z}Z--ZBotig(p]w a~~r ‹5݉E`s"^kiA ijHe=tM@ j[Iu?o؎؎W_yU>{#<ׯ_>7۱y˱)!R׫.==/›9ng Tκ{\I`~Ʌ><+^-s^."jz'>{ٖO>}ǔF0og?~y"m s}_~}84 CJiN>Z/YlJ*7~o zћ-`$pP6geB3WU'=Mj.1b _` cL9L]i2n kF77D/"r9J+yD)5{J\T:+9tiFjKh&dJLKXU`*n/\ ^q!rh5e5qu+5 g WgmuX0#KH7D%kBi&)M @XrRuz@I]J9]DR+RYȄ;V,1s_蠘YndAd- Zd [ =^LA$XCGSV#!G%*wh#KI؅VxӼҠuNJ'utPkާV՛6֙:%S2pw< if`3qgd]<#23ӎZHa] @`Zvʹ)]JAb}ZFkK 8MQ{8>:-|), (ĸ ]ace7)z p.u`qJek! iBoptJ;;Nl؊۱۱JI)ُ9緾7<99]mO緼 XƜ+W\/xwY:9ͣ}wu惸^ŬݗR~fvʕ/| 8>#ⳟ+WdQ~~'~杶1'=Ԫ//|c䯼xۭ_җ/x*9ی*~Cv=@'~'v/c!\JD;LE ]-ec/ž" GN( z cS4*/-&GHLDlgdk$NE/F8`v`%N`}PTy;=6?atOh󻱼ۜY$)3%|XYtZ1tJ(/U*%5H\F5iUEJe;ESi:׉‘-X"\X{>C@g掎.W3lrtG-: 3Uwvgr T34w4;.*p-Jfa忉ë(dX0Jd-d (d,g/YCp0.G@'5GTf͊Քu7K. [@gfF*-k?Cߐ̽:a HXWDЌ9)URkvs e 뉅/J";ֱ%/?,h5h:u=Y}q 58X:}$-#qp%{N KkK]ܞ]?ZӟaulWP==!ށ(׭X۱x<*UEmo{G?ѿ7j9ΗY2,SO=u?>=o|}ozӛ{^K.HUw{ޗsիW+yjLN)=|tt09RNjg.7ķԾs. #|qQZԓO>ySquȘ]z9Αۙ̈́qݿw_/>%Lk~]#ͼxg6!HE6 'Up{@K}(`I4%Llѭr0I;'"pRj 3}5ZE+=J5, CXx;>OUtLι;vfC#soW~Wyl:=&Ee6m3*hX۟S624광7)~[xmWh8+%SUU}5JB[IA UOoi IH3wvǦgP]VُОqSekTZ ɍVru#Cp \6̢  <Y\S ^0+2X׀SwY虣eiWT+JwMԹ";!q[vlvlǷfyݪ_?O?˗c[qsf) })myB75͟R_߈7o}s}_ ;Y,s[&zxI{ң>ZE~꼪Vz._OMob2V_kBc=>>hvLww\#Ї>7?ϴIe5%#\}b_'vKUt7QUg*5!UyCh -ĀaלݾjUV,ffZTz` &qb4W=W)uQCc|mx(RFqոGhN9-!npd! >q7k~ oy^*K}Tޱ X*ؤ[n^}ӫB27!7F-وgWrFUh#84ܯYwҧWt~]1a`Mӯ Z#%`V|vh_/_kXNF$7.]Ԉu`j/EVB:؅pvT[@^R گ nJƦ uͷ&jS͙UT fQ8[SiH>uNDL_O5s-[_"1s1xc6bCmvEi0JwX3ژ{1Ȳv:q pahp9[&97!zAR[=no'fvq:c;c;%:,΄ʯ[o[Ju-MY&̜so99?L57})?PY a^M̠;V|G뺪V v|{/*KtG?xքnl~9E6"}'x'䟼~,ݮ9Ͽ DBnqA'zcGر#Q4ɵ$'d? 
!XB KF\I.3#A N:,1 CYٷxz/&S< Ө(Z!\vv%BnlPcfm MMgIv`rrP`֦Q2wɳW1\*)i*'PnY IDAT)V+eb02Rb Xr% 5FeQnH~IgrbO18u) Q?ksoI^M/$zU rY*8?N [ɀQaU돬WnX>='"\3 lau˔̱K潘B*xx)%TFce~ܕhkEV-E:E E$DRX5녻qPL׿Y0iWYj1 8WGvy+혫&0Ѵė`ia>&Ժ< [zB  ]I J^K*A&21`us^DN5Dzb',YMV3Z̊Uq]iJR-xuaܽkNT)NfZwLu˶؎؎W^P:-O<˗{issY4JɨNZsJ=Qa3A{~bEߤSf:T{?Vn80};comc^fA_GR׼~r{=v|1kR(?~WySX,y.'&{0V{3`fv*Uh'Z FDA t8KDɃLf%A`: HʶV`ծ S|Ui6#ZC@%S;;+ (;cbNi%b<{Sm$MjUvVw*ԣih`KbIͲJҟq9žϠLgg(L/Y|p*&SQlsrŎ=cg{E-8nKЫZ*ޔܽϝU7߹  i:8 ULn1㭪z$ĬFr:~=_Ih$nW WK:'bMUc.}%"3ͩ_:8YK*_U|9x^r-uKKB+XVhecDY,ٝ #18B7#v>8:0ٽ3_ԘZm;0 sKY7AfF!G}DjYdYd+bqwoa% (DY)xX{*3$I-}ܒy&!Gnav¬ۊ۱۱4{M k?`nA aWEioiԧ96`f͹;QY{y7г{{w.;xG?[Fn~/>iSSLjnM==}:ܢ̢v{u{g25$O[8sʕxϏOw??KK|aRnТą L.QWn) 's+YN}Q2<)pQ^QGPؑCottt1z.%sEDȓ21U >NH 0T']M_î 'H&2EANs4PnE"f7n֊Pv<.]"_X;03u:o?WRrFIP93Abru\ ӄ`EC88tMK] #aB +c֖+7?̲; EQSN@QW,2{ҮK nBEi YWH M9V[1˕(4Lb9#.er+` ,Kl|hen{eiG yDtѲKD>"(u*#T];F(اu=Q^JmQ V-Kk',e1J/|L˦O nPCTÙ^cSWWԄwq= .ʢao؎؎̨Zɬ/{챷m礖٠cqG_Mm?8yc\,ݭzlKksǸY&FjQ3DŽL&3R8/ˠ[(*FIRĂX`o+W, \|:“U SkE$snr4K%ByMS}cS·cqum_̹ \~zw'b^dtFgZ&jCkCӼOl&]yg&&r[X`h>2;cX\ެoںʎw` Ss|#ʝgg}믻7A;oLQqG+:^3Ei(N K4ޞv*l)g~q&4zzV{t3ñD}؇ Ү5y6h!J ,D1ê#47*UCMINBHS4Ḽ],^tAPZKu-硶7k3# Vj|k@!ShStq%X_qǔԼzm2EDo ƃR:qC\׈%a%PZY.EDP84a9K1W ,,rĠX3\̎/%¦YX2:lׁ5(^g'Ē%""(Ӣm]xGuNnh^NN`Pju >3e9x[º#K7^ڿ9`u`{+((KKYE-y޵_Uַܤ_a c55wJo;c;W<=Uu3 %vD2)@â([@)&nlؖ9D?|aH2BM)0bF#*X#ssN}rUgd gd-4zzNS?y[(5’~/ſ{n J],q_n[- uc{m-W(ܺuWW[;ahӄ|} ֶDpo#+:gY  Lܿsu?};6znW*C\u(ql8T<߷SFcO-V6lL;4}K_'?/8meۉh ʬ:GtKf U-o[xa;;Me1#"4i5Ȳio" Tyu"r)5t nF{1+NX1L첚*{&͞f6<7*BgVFZ.hkkEZX G#&|]lzflBY-[C1UpXLb%BRTzavP/Q o- wid2hksm!)l:]z9Cs,E4N̹e`d7xv`ھ,G72uBubv`T4{-x-Bmr5%" WIN'%+sZ^%&FT&ԩkȰY8(AdhK45ܪ(b4[&/d\+!W}BDH t,&q[[-\ʍc0=v鎚IRT\@Z^PjԖ&_؈qwy@(,;v z 3w0Ő,`EBYn1!kOHKR!Q旼ٌ,F{ndl 2U]Q.:MLnp;r>iz%tUI,Ť9”|rܹ߸ύsNQ 8W~pWI9ȸfv0^}}ޤ;~]h*9pb[H4J%V 6m͹M iiWT Q*BjA*,_o7 ҶkS]·MJzTur..Z#Z];JU"ԗ6?__zoVml؛`1(3"&\yDKk6looܧ?g}p^ >f΢ay>,J#+ Km[%vϚk6qgGd6Ls LM%te*UE]#+%#/fWc Mm9=„>BN Ns;QjOZx5=hTۖ>7hU4G:}Xh nJ}7~IUwB⍑P2{]ZnHT׊4V@(vKR|Qtv7n[GS]m DNW>hjƑ+29kuh•ᚌSϒ5 ksm vt~ Kcϝ[qo-u mrAT*#\W}UUJn//Aku@*vf'&cVmB)\sWhm'\M <[Cܡ4pmFlZ+ddk"EhЖ+}/I[pִǴ5qKY:|g.F׺$NiG-3#TeR0!h p-8i7Dk'-s. 2G=vNfE-f_R:՝s~O|{{;~{f uݮjb=%P0uyIxHs7n X59?oͿ'>^j\#J,׼HiRTrjxu t=%dWtUj;:aE7":u^Đ"Z&m4KRa1qLO_ $MDnWuHȪM [MWIm(Jq#ߊ#\'C+LtKWij^`+L6wDd"/{fǫI%LCa9B=ښ5+nuSJ, aFiM-2`  ~J}ofn )5I8oX^w\.{$-q˄iA'=̇;GqUpͬo2~\jZOVԪ')%D{'JCU- qʏ.Np/ yZ8ė,+%&ھJ1Gڷ6*lg6sghgOF4͆evy^9ǰO`,B\|S#Bu rmp;ye%-vUJVꨱ`U J11 8#;!؛c zqNÙq!< +OyăʗU.׻z_+C2N!jL~p'^^t2j2ea[x(_7䘷Q_z Bk|N†s+XqVmVtZkSd7}+r<\5!mէ?&ID?~g~jj gL2Kptڱ{rMrm»KwٿZt(xtOp+n &;pXrt!YU"pus]߽? 
?廿ϾϿo/VUS0bز"TIE&ZZ#Z/>3<̧>;iǔEZ;c؊$*T\kD*VXDZ)3{*M 8Q/oQLMAK$;<^)"wY^'sd_h^bJ(2ߐnm@ Tf}hV̫7O|yd I$#l)Ufg\_]j9n :ӨD/M/mlFX V:+ވ瞮9ZrN)bue\%AM gpx9˨ \he IDATIlaˢr9W$ڤ6XfchoGDS dDJvD>ZUZ|Q??8~WIJjȻX+6DNK-Hg[֙m|ٷ!~33|&Β |v4&n]zRd5M4MwgôXgp,|;7?[?3?5ŷ|ן/}zæ09ɍR"VDs6~SG#?>ԧ}ٯ|+I\9$֩LfkyuSVܻ;meq;;le1Hg\fn55:WojYL;xI JYL^0 ndlz+(ZbGyW-E͞18Fw6%h:`v5-hьVFt6kui4!'iZEoR,!h1` z@7T- 7j=syF?TE$Vw%o<1 Ql4uK\Bc/kJvfInƖ~ uYeGU.q'*bRL= CD5Tܞq1ZW:fp)1aT,B]$vŻ!cTea 9F8M#=B _fF [v˹{OaڱXPk97$MQ=[v?lDV]Jа6\;,kf eC{wMMڀ5o,:p%`l&J͊*9T4͸c}Ђ/L#eDmQ 5]rlhVo$aOk]1K" KNcj0Iu_~:k|nG*yL~HqVOZbPn*@D88oєL(D;OO^т-8޲qxY~|߿\dn[upj??]]_|6,nv([Zg0]Tꤗ$=8`56x4M\P b |7۾w|w}ϟu~++(ko.tQ6+dej"8j,9d_~gy?|s{^w7_WjuFQqdTe31wxE;ZN@m; ye֩5YU AH ۢ Ӑrh)2sU4Cz?n6'LPx7骹Kf;gwvUwDJu"lRZ;P!0}^Ghs5]#Vsj&Tj)ˬ#͎cd2\]LVFf :6oMTK&HTFY- "\#y@N֟h&gf-vHSwQ5429$JrYj rs̙K!PDTZk3LI][,$𔹮\I5ĕ9 Iv8"jZaébU%"al.ɇTs> 6?;yzkLWFk+{zP0J!n}O=v _U#닉HzGh7fDi;aط 0>?r-GVr+ӊOaorVsXzbR*;̎9&+.^+ڰSo Ԝ&57|V?CrտWG[h'}3<#?#?|Skz۵ڌ:5Z,uw QU ׍N[٨[pבW:|WaBkVկ*aV{#E@W+I3S6nX6Klڬ"gԓ+W/|g?\}0[j ]o}۾l͟SzLgw}[V&gw^)s}_/~q=L,i(o֎Mvb2iŞr55jfR*(J]y -G P#m7ΫٽR*cUTLD0+bi.LL"3kRSք'kߪzہ.YQ%4.L\P?7vCK?9wZ5ɤGhf^ C,X5%3vP`B0:/[SLMB%~[@]G3X8JR o\q juÖe݁T5WdqIo6R&|Dۇ '~2[49Jg0(Vv#!NZⲟ[B=hD7.#9mŕ=YU(OKh-v~R@{U5*>ܚy7o캪N; Lr#4*Lo]1֣@k^63(޻ZC4BJ/EbyN>喌;_gWzmU{K%O4˛`LV-z yng}2ǘ9GI޼AI:RuʶBtPA:e艛 [hW֮,t{NRdxRo{H&G|֥TPfR)('Z88I0j_Woo69h4̞z3,Yk8]2SdK`pE"qPЁZ%axKtJemf[rd!Y/Sҷߒ{f Nkqߦ*MƎ I8AԱ|w6;+:Hae?1=)U pgZ.@ca.\դkܒ$@NʰZS]Uv%Gke+8'OKCߵϤC?Q6o:D0VK.K"Jg5@$kZc-M8X5z>9֦Ce,`sW; .9,PD5<&[X'HanTә~Dܨp#8؊-u<${SF!j&;x~OKzd2jq3kskBv#O `J-;!ɪ񕏔04$l1glZ?M^5LjDdVl`հUɨ56a>;888baM&"{WW_+`+FSJiߴRo3n۔__;?ӪT~;~'~'Xܴ=ڛ<4>|&NMzUYkrECS` 5laReahO25hOm (5u,ܔƲfp.hѵr9:I6<ڞΨpJ#;m`*W57ɹXSk"JaX?FZdML{w|5F2u8'5ӝÒ(шwh+vh U98 ]ܬԶ0N#?02ksF=pdڙL&с?\Y^}IMuvkOae*~.Mfsk ]+*K#a檉ս=f_H/B\KHts:IUX sn㯻pHߨg'Bfd>ˤu%(">2NN;O;շ̙YROypZ 5_ ZQP<'};SC6+3Gh"EbfmdM]]QINf3&[e7 rĺYן}`~g[>]o W ([.]J'&HA=Cyhhq7/|Ylo"VA?G˺C)U5P[+}\nۼ~ R[}+:gHY4[+<5ߚ7r%A$dVp*oN.` a۶e{d*i@TlLm~U'ExudEe^Oge׆Л+Smt\ j7 yzfZx2tXq6SɌ8Xin߾я~ci%5a>q[1[h~x5~-CuŶ~h K)KQpoi''̮1"=ʉ%#)$ˆuQS)h4皎n^0|+ l̹H.\9:SDS@hXrRtΨZ6r39uyֳRDu OӴhl}ko = nP+2."쬝؅F5 h~1x\$'MkHuv$ hpblzhL4pn^y$vk.&$%»WZ]*=\O]gD+/onP죃'bcEx)'.Z榇O78[,eHKb \ϕh_Ę9#):hȈy9ws* "~ndҭjD&5&-asM't*{jJ:o]|OaA2UӅ77Z>12\]|7Oa^֦p=s%fiUL][ɫ#ZkդIVׇL٤UQv\u[9Ŵ7/Z*+"8x9oϕMVAzmUX >O7)aeq:/&__^*2ftJ;4/ݝ;5lE`TrIETFFQvͼ T,3qiM]w]]؍{Nhay:H·[fw PLm$M3t~E6lX'ZkUke[YE1Ʋ*T4"^VzXp]4WlŒ"qjw{#ea3TaKɄ&F{m̗m=yZu߆S4 R{7!, )ܞIGpdwmVmkV2}JIPǺ4'[Ɯ-i_몇jbE:Iqq?MfȢZo߾/?Sz9 =*v5vW}sc8~?owR`WI.}P5vVƬhj=łEѺp1wNpnXd6Aкks%ѽmܶns|&WVʁuO8R,iͰ>Yx7HT;OclJͪ"f.ˇT:,\ +mc 1m'.w;b _x`å}EC 4]%H%gd^6Ԣ fU#ݯo?RI BDC7g=I^\+//p\5d>!%%)-}5x͊"m*^Ap_}hnS8N=LpXZPMum{" W^<PdEVjlLPe7dpAw} VDi4[<kujty&3:p IDATw!51kt ,8 6+ }pZ8҂Fk#t%Е[ f٬ӝ1TOؙޒ5 DFJ悚ǍVgANdھ/4[rڜ$z㡉>ڡںОϙZBGfgT qRֻ\2]VfChWri{\=zc ‡̛8Aޡ*cv .J$c_n"ER6p&΃[p'<-U X'gfih5aO\H`d,Ͳ{%("!Yi֜ЦMq(QC!/C]Clpi [(qqG3WE_ЇZex[Cbbx|<S+Tgy}n{6eOc߃lzߴN^HbЭ!ngjÀxޙn QVc+;Q9úKQ)A 5Ź_=ukrsjɞ kU&{Ё3tlIuuRq _KYwhۭ(fM Hx"7ZZ`SCs8AOYNh"%bC7p.n;C` [Z6E3q6fh+Rk=3ʭXaK9 d&vppQ8/ h-͓擒qqqЧ\_ljSd;㓟w^xᅇ˖x'q|5F%>}GxfRJfݿw/)$Y#3*= !V㋆d[(a9Ti0"1Ŵz>o9`gv=q1W v`$ Úz]h;SF%4Q<%R8!iFT7*}PX)׵ƀr1X:1)܂UӲް9Kwaߒv1HEEmѡ2PI(v톡Xw=X&й$Έ{QȦՁzknVQKX'O1ECW}5R/W4]\S{nz*Uz ց=9%E/:TYA52ZyRY^F䛵͌Fbl@R=IXJ@7܇bj\fR9VPSQ[խ<́->)x!if, Ldw۩Q]m?%{[IgGmR NGxl:׳?Knz y$,t /Ʉv&}$ie3W7&juR݋z^jB:y>ѫ pWJ{Aa ⌇]z/t[T'ckvװ]#Ͷ{bULu @3=,Bx!]_Q)n;s%-k)Ug>C!N ^ WtlI7x%]MZGNIXȽ-zd*|)PAM^E@kkJZwjLl1ۋO/TǺg,XR gO_eR6V4NrpXrM?I;|W $2L'.>I1W8 l\z>?P ǓNJW-u33-go"oة5u4J!DꁨF{ya<0gӧO~c'?__?KHl6I0ƻ1NiOs?s>MS ٿ)x<~ӧO|/~Dž-m=)cOPPEk[ I耎&2\cREjz ;M{*4I?kpTXJ$J5b2N2Y!m8ݢiGylDsx-ݰ{ݥۧK23uOy̆@A>9x֦eYz /]I­  ab>>f,9]-,* !ݧ 
O݃wALd[6He.Ug{[:h42UD0ʔN^KT~,. m+Gk̵DQ xT`T~6]^4y2[S+^j^p\~jJҒx6xrnq-|2퍒P;1ɊF2ee)SвYD$S:#LI.J H^T}a˚AHS8u97 8B}slƍpؐsqa/P/&9B͛[߂+vNܭ~^LJ)χJx\pY-4Йu`*ϮNkϵYN&mcxQo>(g*ƣ^X!xb -9(Uؚ͛!\AJU8r_Rkz/~t7+J_%zk{Yk1ze.C,bZFJ4:GW͓.%;:¹E^~+/i7 n7p)=$1IhT W ?0n;Z;E~˷|?'~'_˟gxJ/eM{/ܚi'-ga֙I>ݳk>W߿n34tbRYzi-Ȑ. u_4)#3c N_}=}Pε4sc֢#Vcw٥)I^+e r.Xt)Gw^&R&껬OV4gH=$0m' H/PBϪah r!L$rvӥ2}z}ܒ-~d=1󺹅[d׺G pIN njo Uu&:ے;%[dW`Cz,XOXu1S¹a\Zogqt$[| $j)W}HKAr2"l% U)Ɗ⌣9KJ])gfPxhc{q K&j YTzZ+|xE+aQULw$[B^AMp&[`*rO/k''Ҁ+Uy5`I5#Q2*?osu\/ͥg2C#q:^X0xU$O~?O~ӟ*(23NŢpf)_(iٜ&ꃳxqԥW\ooW-7wH_vAuE_[h"Mxq"/ #诚d[57wZȂYD3yը{$;Zֳcim)۱< ܊kFg':|9jM}"OkY,_!:t׀z2\gr!t(ƴ": r=,7 1 SWA8ټښA::sSmݹh}_|߫[+$YL,x9HNwy~ahf0䷝e{Iͭm5wIj͙8oyõ>/h+Ύ>#*V,i5Ǹ2"wQw}/U ? FA)v }@FnCӾJOֲrHt/ i_)V}U/[bk+e}+zrN[-~$=xT?fCc֦5)1~l7+bKL=o*]];;ք 5bA%'0:_G+pqUE';4+xj[pؗs]V_ފ4*i3{Oqx/s}gP8Y\*5^aibU;]h3U |e2)ichlcc*R݋o-yOm$^/Z9CĥR>YJ;7Ğ㌼ ^c#ѧI9ySy%&Fm"9'[fzx}6.RX?AߍZU^wm׭jlr&>B#b'ů(KKm=#⊼v{̊0<*n÷=o5IqԏrlWM@+6G `f䆼4A7{CmQ{B? 6E~gYƱsnV\$ >v◿X{g 5=rf:ߦ`pR`<0'tOY6]/ȅ wmAz_Aj)fU_5R6h?"9q@[k~cL uaХS/^.f]Av N~g‹x#rԏ)~ 5d,VZL[[ңEj~xn~|_] yLɥmW::9Դ mnfWxK`]'X4sDa˸.n55U.>F(uvi ;^U^ʷLJio͂pO }i7M5TdBU*J& ޣG|K)fso,EV[LWr Lо,wnnN2҈KTugX2ka*~;@#ha<14q xm a#{ʊpAFp"Geŗ% N4֜xch ѬhDV΁bG>6ͱ#/`(g:is!.id!(ZSBE$o' (O"sf &iae;?{~ WQ߮y'9fLs p<{a<0"I/54__K~]⓿f77?{K=M[8'h4˲Ĥ6OͿ7_snlrt.ƶ}r)vV9j6fv4h_8Ţy<; [鄼Jo8qO-rA m[ >}YV܎\KTVKWu#"=8/< IDATG1q-Ff*X:nbܖRswkT^c݉Yo:ޜx}}rYIRP!ã@օlU/ҩc(2:ɥ "|i.zC7`yOrd ּ-4G&#3-.~aL"ǓHwf*3g5袻ߩ1#bO{X\EgrxqW>^AB]-Մ>-{)T1XXļ"w!^(\143e+WȡUuoFJ\K;#G;ZOp(6ZdXcwql9;bPYIXɦMXF=ƿծpi{Wd#FZ(>-fВ-XN|jFNؘ T7fd*ZC#&pLIj5JaAdßv,jċIs!Նjxm 1c7 e&W/bpr~aKބpsys [Jp?;ӑKi2riFYrV]CCpv3M_}%Ń7|sv@V:'&ů5ZeJ g}ʱhE }+|ZyW @}M:Ԛ|j=t,"ΔI Y ̵HI Z7&Sn6؍L<=M ܮeųƻ 4<[厼[8%?w[i =NaNXVeRar'ɛ֙Pl9$m\źθ\YF%>J;* !RfĤOh؉`0d[<ݮjs GPݑw5GEF݅i?Tjz~ >!J2k֙X=1hYozH{8}Q+.[=UUoc|B;kE5]υ lɝc)r׃ =(\SwEGRUYȱt???]5lg;bWkv;GNr>v_ÆeRD,qSl<'eλC\SF;SO7l5%<6&iY.ph7U$Ug{6e_3hb\Ymc﷝=qY=]+^ UV&xIYYrwܦhZ{6_X0xQEea|3?#?~klɩi?~a{P?Eaݼu3$7|7|'O5!Z}pVEJae\tr?Əfg\Ot  tW`k4ɒ";4v\n{kg8m虚[/,Mƍ-&^&k:fh˕X]}.A){3ĿiqKuֳ;I!]>GCY XGb/n3О'cSNqc=OʄтC7Zn8;O 64pCY;=8XIZpKoDEgasw'9qF^ܚvGE 9݌4Qql;1**qAQY&:ܧFK>97|J^BvFdi+a}LsfEAM/3q^/z>BB Bn,mlPa&<FKR YOt怆 R?rO+"&XװwɖƗ,kW#N|o2-Sc!+ʲF1 $HSXprh,{M7WG-w=$i(Per*z/A]mS2\' 6C.Dg؃Lzί$;'M\46fk#²7y3^t[E?&Yoq_7I)|ǪUKD3i.Kx P;{fKjl1pb Bu>?QҿO}M'B}'O>~7~?R0>4e hКEl߶,3M;n4e,KpxJ.kūW8`n;8k;ם9 Qn53c73xәc+pP O3 l=\@]xj%܇~樴{dFl@ax{tXQlt:o!tEϺw!\̢|%o2x'S=crHH.}kV=g.ξƦ2V#.ԙLq~5Pi&z۹׌G+=h=U13L7`^9,!)+z-KO"94nlhqzTy╈`cB{O'дCEX|ޞ4iՠ0 b'YmeR+gQ%ܢ7s:< jFݧŲMA7Ԃ&"ʊ Њ95\AL#ɞF8{&*TJ~}u&<.Z@GI^Sr}y?۫*Cջ_Z`* ڊpGbbz|i 5 U 52Nq49&hJ{[v^r;0DU >V]8܇0sb4ve©*Z⓴kZ6Y^ .eądC99ORI i{ |#\j_5ҫғz$q6h)<ClTmh&i1uFӉuڄx͚ NkxԎ'~$KxF|(k'}$M[kCǴ㥮d}cO}SKy }w#0>J2<<7џz)6<}P۾O?я~D??_'6UĘ/[O}ͣ?o"6q=`wCIX]@v6Jg7u|M!OqIϤz[xW6[S#vp[x= $ ,x lwp su'zh=Er Mёt CK'XGnĂʃ| Ɋ=hI-t x#$ V4solB>M]:!_"p/ֳCz!xBۮ|ef\).Iz̀촕.-"8;9M 㧪a6fN$M pȃk61a'n}&SƜUtRd8P7|.7?]?{cbbAOǣuc.t{s ogu&ӧQ񦁾.#kW>%Pt薸V< t OLPj]PTu)Axr6"5U1,9)PZ=-t!&7b=`wOMK ̞i-%^>dƔMu.Mm9fgG-e⯄>pW Wch!.+{;5V}z cfKn=L :l$7S4 q Y>;]oD_OOOɶ&t+kvpi.+|ǚU;íoc`#![13#ݎ^SGwָ0Nu0wp'b.[| R{kՐACGqtq6RW$S=q~/˸_u&p~VqTҳ}|yh]ipMHMUOpa*rJ|j/G#8e;5Nz[/jnK͡L]{н2ZE95VQ 71i{vb?$~5^֐؇F@Mn]G/[&v\,{k/v4JGCT1ϴ-[H+eК]9 G{Gf=ʲ'!r ᫺Ij aLt[zG}eXo_;huL>#a ~;5,66SG99EaCLt FFl!+IHq0G],Jf|0qCmA&>(דgɪǧ<EƖOn:]h#NHG#TsE^\Ra<0>ࣰ$'ɾTۿ__ A ͇y? P940ϣK^MW[6o7a9lO|W{`ҴL8jӼKɎDt= "_e神s/#B@ȶzopw&AJ!D(l;\M`$ ֡ e 0'GO$۟yPV,`YFzOj^2l0+K:a]G$_F4AS@Py˷V@ =„h^fgw"5}evج"=T-d؍giy= Ū=1Lۋuar@*Y#b/Mk'o<Rlж{ j6 <֐R"]6B<6Bh3\s&[I.K?(m-#@Xrd\ڻ+1$-ܢ}Hcp+XU\֓b 8f.n L7#4]ݝZj& fP+g?O*`D3Cֹ1 l[ 짷Q'I\_tl4Bm ߭cY;}?ʴIb5i%k&/`Jfd&䝹Y'FRfF$r"d<}$犋C*.kAk. 
ʷu)aךʳJ^?ns|k%73]cg~="V&?(APC-@9UN"n{k_ ;tH6YI(sG;`79tww\*.&쐦D;9BeGUڧ̀pMl+>,=*;K ?0|l^- IDAT@74}|~~}}=Z4x'c컿_{~ߩŬ4,:'?Jfк~P1mi{鏦+ wyoo\bvv;zC&OƐ΀VL[ŵlQrqٟ)fX$2s W eWϯ4{4Ө%8'o]HwGH|j}I_+q [RǑWW,%n 7j?P>^LJx`TPݱQ8lY{2^| U"!Y'"0^|+LZ0kF8g~mγ>>UN%i!6ĸ*wv'$H-_ L .H-PDA@ EniP:QnH$PPNJ]u>^k9ާ/ƘssNUry*{ϵ֜c{F' miңVܒV\[ *MʌcTl7pi6mFnѺu=59"=ڪ-rx4[l$9& mŨyt4F)9uп~^xg~Y|}C}{yǮWu#U4%J6br :E!E9c^G/__O*.M5˰8Gt 1ʠB,Z:5>ܜ̸e(20vZVd[Bl?-'1G2-*!_/ֺݝ\d૲|}BF7@w;bOJ b[=]f1(@qMPDU`,Rə4ͨ7KWULrw.=z!{1GVv'7WFWr.39RH4]d|ڀܤiO9na8 wުSHE9FHE kBKQHj*}L=~jN1& 4mkSfּijKp`sC~bkF)s'ƨ=cvʪ) *PGA7HGYg% a˼%$(3,+Dqi*mEꡘi-8Rnr -}2;?PD2w[ >:lR0DBnŅ +qґxS@inEG>;N1{Ȧ@ T3Fv|%zV]|>$<`+V~mlP7Ɏ$0pƒRvhd 3k쪼SKkBZD/gz»[r\gbSF<I[tͭbpfj(SM\$`VDvcC]b'b_r!wKuC.3?4\>J&<ѭɬ֊TkV^FJ@ 6٘NU nbv]*.h(E%MF,RbI}Sf֤ߺWÛVt9+~=~?}x瞜 V3?v^m_'c{G:][nvM3N(>˸o1̆rMVj\7*8-?lǦ5ub3ǍLvEYh6/^ɴ*N@dH2uMLpaoL=ѾimJ =GgmU'vq?I{l6 -JS9~8fGx$t.} -D`-bōRc|_`Qn5k}9ޢdyr i{XI l6T{+ʫj\yI\+Ft?R"ۉNլ*hNE%HTp)34#K!+)V 2g&Hם͘H$ZB"P0C@vX˺K< yP[b_Y7uj3ۊ% %2`GlL*5 e^ADPno8{nIlW/VA6s31Rj; <ٕ{ZP!dɠapj; s29%uTǜre敹[an!k$lF`6EZzɗX:G-6&4 Ц< Cf)|[{x)mox[h$?EJR^{ Ħ=4$hs#֜|w]}}}?? < 9(P(Uzw}7?/~ᓟo, 9MEEg=# `q:c|dgb,yh"iٰ>PT(_I-ÒM~ެ~RvW1PϽDg+#(Z\ _Q<='-AR䧉ftHS=Wn b<^-KD%J ay|2M14l@ [TMWҍc4&ֵkInk 捋9,EFtR@ŹPpޠF Sp :xN 鬸Sa5[s)=Ov\)%]ѷAݟʢDtXT[H7lpalɰciik7ywoLSL(Mefֆ%${ {+#FG4AM[9+]M0 Fõ{IA-U =P[#abݵ-V̌/=xϢW_7p5G8zrU^ZG<1$?"pJ[f6OoUSz3b)QڪUF8G*m)c Vf1_u9/W.P+'X`!瓮GK[u#g4)MU.p2(k^pBځr-d;ilUSՕo]TǾr7O\S6xȷ]['O}`v|5R@7b: 8.7hLtyt}JqIxQ\ ٢99ZO 8^ӎlq RAl驌ɖ5;!˭ 5H-7rv)fG x<[4cnGkjnE] fkShjZGT2D nF{z`/WQj qx Ub(yK`#5aK|io `3/oeW-P͙Ah:P3{tp[>WTʶA-ЕA@ѕ%׸Rݱ= %BBVdv Dxh^N8&_%ë1ppUԒ]i칞vz,v)Jv`HQh݌[\ZZF`ఄ'lx ~faPFߖvKUiy?v:k#v8~̽%t^ޡX '̒VcW&:"7B%D[>krhশ6kA! {iSN+{c_Y:4Wz tnKP?=4/_GM%8*ۼҊ&{GsB #N([4cbqhkEB<[hFrZOwhҢ@z:O 86hNju#qx S JMi^;o//}>6MFY9svֿ$eɯe[p-R޽NPkp%%íd]#bTj׾?~?ADKv&cpf<Ôv0uQWۜyь8K64t g@ BfIi^ ՋRئ!@YtcnXE]噂)=1EYGN-t'NoN^ yy6w#/S98l8Qsھ5 8<5*y|EFbkGO.w͍S9FyԷl\h7\i$۪4"eЮEy==tX&XPZ;L<NX J2!>ăsH}sDcutʒϱΖnQ97H^=|sm+$Ԅ&Û Q\ +P52K`G& 58v )$ɗ8zQklb iGLsmvhOS4kV>~Jr0lbLp)@zCKwfx^bxj>( h>oQOf`%sBf#ߢS@L܇\z VA, Elfj&s4%S IAjD𬎉)}x ꢣ5Ƀ.9#l9[{Ӡ:ҏQ+ Y}8y QU%<`ىޢ-b5G1p0\.-:\Bɱ'&<7S4jt]nJrFB#;a#e)HDaTN~C+.x ~ j.SpBĦЀ١UCT jmVo`s۪}/naqKQ9le ՛ΛV&7ݿc?cMhJh#VLk>XZ(V;>rQ+Jg)Jwrbh2;qܢ@ rD%-싼ԅԽw -~d'Pn0au(}ZNoS;'3t|j^Lf&BBUfv ].+4WMOH[-}$k Tk 5$ s!vTvq,DȂcqp#7\ϗ܂;5e^ e0l+3:AMj{+0Ōc3p(fի2X#|3LHKRٵpv-"&aL  WG'Ư&74m oBR+6|Y :xmurrB|jo?C?|_O|+OjJk=~w|w}?_JqݮfiVyy)"J7*ͯkm< yl!ښEd!CS4Ԅl?sǤ(Dmektn ă8=mp.(vt ep'-]^ż*XҤ|s$8nU7ZD#˜;,+g<^|񑫵g1Ǘ%5|6ENFFvWRdHډthz D-9I?gbn+q*Ni)FSKzDZhE#m)p@} H3@XCCc YQ}\2N/{6^q%V/?4Y8Z޸͚kSL)V*_4Zv# 4 Fpc"ɹ[oB5d"׌ڥs rnlO}g?O}3^%ݻw{?';g}g}إY 륽RnV}{:#~Xq<ހdg\޸Q"|ܱ\Yv(Ǝ|G~oͿO?ܿM,Qiu0G1Uf}}[Mi"ْuM5#;if n[ϩ v68mȷMAt+|r;Cqً*[mpxjȵPrD@A[K)Ԋ qmM pܠ0_&djۮD:s/GpF՟Ǘ^l/׺OH~mz3VJД>pu&Ido|m:5;.!JZ KKXE]ź65YMI+{o9n/݌gQTRQ ؖTA DDF2}ZstJxB^ұVk7wm.P6$lяp"Bx"Ue4AmƝhDY;1(,nߙXC; .˨Ggt/ج%x's@%n"W%K*n9K+_͝TDʱ098aTbWW7Xw'_*@BOf&39$W0U/Vz&Ê[VD3x-?r p|&˓wy%\ΥĵϠ+ceb*2h~Pd's@H'=DdWn[cj]֤2) UgiP"\5\'׭ûIkL%g.d6ywVjYSGj ֢BTz슿Lx T u<|~[igEIi9}{ۿoͿ'?_NiWi*'ak;)Qg5=(;U}bA[(}SQnɟɟɟ>??jTPL*"){yi昛Rt7šI0Ԙv›H˧ܤnf$mۉt,gy@L S <ϖ17Oe7A0ӷn$Ɇ16Sdޚه/+{_&^:sHJ߁Fnm16Js"&H4<=b(Ѓ=6@6]EȢ;M lSf1vi*b4!P$ gg% 4)p:qqu5>#>oo//r g>HɦzƮe8(*ٯ_\ ᱌_ta% 0Mӊowoۿ۾?w?3ϼ]:ͷ8HĿyuM_7x^;Jq;~2D_\mKA:ed9 s߿ ixf ;ݸS0JuDMxHe[j>e<%RoK~@U{sȝ{nJХ@m aBʖ 9v :s*R4k~bCEr20.vpn`TŔ,+W!;Nt$G"X4p#L"۪-ފ‘Oko๙-;cPpxt=L|WpP;ǸJpiiu!zPGk!JbA,W_i%G1c'O=FsfX:epr&Fc2/75=Pk]j&zJ}V%ZBqBHM/VlE/ָ݄^|KWEG$:դj9"&M ڱWzd)W嬨d$ qf+٬O) LIىT(l(f$%Vl4s3q2{g 8ўzf#;g){[1`DJQO5F*Лv ?5ڠs%?jz=KN _??>_'Kaj:m18'){mU I "@m}ң7T z0H0tL ݙ=ڋ}cؖ5FO;ɥ%mSJ]ܚ|$ƾlVo E.T*L!N\.; Q87nԩ8 0gK.9&EX_k3<%ʴ[m&KmڥW::6(098ZٞKtk@ 
9D5.hcvbgR;rˣ.͝|m0&Iy'/F̩VZLsTݑؚjB?>itxYE0ȶ7.d!tsPO4khʙv]  K64U]krqW\OWyu+M"#jff3Y3z(%sF~. ȬdC{ `aVY4\B-+IO[pxFqAlWE1!N{S(ʍѢ+ ^T}fLh9_;hNQPhMT-!Gz>6ɸ7f#ƠdD;7; ' 23 9$O-nճ:+L۵ =U bVeMI*L0ŏʅPOJU-=kR8 9U3>q^>5sE8*'4^~q+uc 7! !.B YV AI*nfU\[2O g<<<ƩRHכ}{_)C>}oog>i[[?v{uuO67|7|+8~C-cvy>Art%[j T&#N4BS)Y v=ľ5-\[VrrhvSa==6Y- Ii_`E $qC,h4ṉ?VX ړM4C>7%RO>5[ F]0YsFX@nXZfӞ1vC'0;7<[P(ymE%y=,H1+V Whf y6AyM uY6i}+Nbu9po'8"K}Q1)I{ÂwoKdy=e i2ztyo Iэd 4h6l@qHH#sʌwk#ݘ=볶+U=\WR漞Aм?hQgr1[};{i([10P)dy:@6s۾ӋjD4 Ͱ9 j Eq !cj@&틬^70_)1(g5z\7֧RF]+k?:I4nooof/Zkok߿51Kn{ jn-^oOӛ9C"2=Cɪ?g?||d7qòq1)֠ ɪ&U{|"ccuAj VN2ܰe`4 2$Y^،[BB! azgk1"T+6p#p?L> Po-|+6XC,m_S\w<ߥbl`Ql}.-Ti_WV }t+tD&!|_yڛW֧RF]^ʯm=`so޶wܿh5 I_/fJ9_{L{ymv[SmS ׏7E-umTޑ?X??_G*ek_ 5S7 ¶_ {HCNfJ1Ѳ! TOA #VAϡ[A:E. RЀ:XHm1L>zH`q־k( XaɈ>y)+Nwޏ&,؀ށ=fM\7e'ZQ]:'Y =| [Y>^*z>J^j`#zLA {"Q 4pyݥKp@R"flaleB 7 J1nc4DsO'h'&\%hx$.F6BzW3PIr"t CI}I",'$ 0p' IDATL{DGq9zX,UPHK`W/x%| ;R6 `G]%`fVhmw#n~%L~m:B70 \!'x#iV6 <f p6DdMq፡ tƨ=F 6G:.O~y}& p;bdRodž+kRߛ'niR+!\NŠH!hlT=c>O 1U0AfX@g\'-hXo c,!s3Ϯkܝfn.è `Ԡ>C$>a0MEpe0g ͵c>RJeMF_o诅=VDrm`} ӯ`7O{ ?m15.7ڿ]is{KןI}JHױ8ds$чg}gZ??e[@ u.™ZUm|ŵ$)NHI"qҧSnFl R>M!fxZӱ(5dO_~ k:KDCD-XɝdT^icW(ۛ {4lYY}vO/(79@41Q QrV2B އ.^AMd#莾y&I֧|?<=M k d?ۆZB7[jU)p#dwF6`.JMt!&L?xr6E3>Iv?.cAu[a\XEl#~~vw%NB6ڌ6*lOvG۴Y ` bb߱AS(ނ _YwvDL@1 $8};`"6Z#jSߩپPq ڠυM~MtE ֗ea߆5uɡU8#ba `‹Cm+<(&F4x^Onh{_ٷD*+ E_!5\+6ٳ~$m>MfXRJ<^_Skb۞6m}sO^ۜkf8|^M=-Mk&}ò/JYǺݝS`m4)B1/Vseg}_|?O.3tN$I}ֈxe4s$(3Hлnc lE;Pf6u6'8 { 3PoY }8=\ ' ki(9!(RPL2nN31·M޻z=c\lX [oa- z нJ/_,$4`1ϽgBo` % ]o;9#FӵIj7ܲk"x8Јpj-j(I6 5ro3^:?( ^JIa=䢠6]P!l8O9bݗ!?>$S e3\#17&P' ,\ 4u[@C ౷22x rp}{AUڝpD+  Gqi5I~x.;A; ]{yD>":ƊDM!oy5ܞt]KZ$8E~k=Y=\F\JM{0cEAL&炎zX?ilRP,g" 3K/,K1fMi}N L>Ê6DdL?4q;nhZp3RA_% R>@ 8K*l^!`P@7x~ۤ}wXMd}  E\XBKhm ٵB @+Ұ@H<?3O)_'wyK׌~_^yg.҇iWDy`ez7ޑh7?w?/~6]VJZQ!s톯~1s9V@YWǛb0]< T.BQ?XxN3*r5?E!B 8 `U+0ђͩ6oˆ*"zd`oQIٰRPQQgģD+hR#.dܦF)}{OG?r!琼C x/V`gpj3U|D5]Lm |B`PM7F;X NגA16c BV| <4 +KijZGv'- D8oL@z}#~S:qӫK#%Ek74 JU3dnX(6pgcQ {=48kGFo߃`,B[ {S1G\-FD^Z#%l&;(16أdt`sSD,Pi%1Ɗƾk{7Bʢ/Pm@x"JCg+L|dux ou5 _P Kc|=ԛ^3 L1^${F<֊8{ld vMcG|^buOv)e/*qp SS_th?ڛŰ;?6o~ SJ)wZw/R.o ?Ii?W?ٿ/ſ|zzO|;\`?tM X\'ƌ/T<*8Mf8 g"|S٤ mh mm#JDh#ϒ4h-ʋC%hpIa+swkϔ~ |"fls`}ZDg7\7\{ըAkқCOjOFܰ!D?1W`Y,&,У0 a4^߀8 o ZDL\Bl 8Ju̷s]Ki MprA\P/_V`k 6Pb-yxӻ6ƽ_$xZ`S}45" #7ޣ8A ;# AhxA<:^2pN p6klCY>J*bgr*GCl-Xij:= E8 lwn֊V,J룢Cq߇'eoqqACw"̩ Tr \he~[ʹaY3mP(ո| y0ɚ$e>ڥԷ&3viV U[A/0j6o^۴LXδb<#&`YXT/7`H"3U=!XR s>nTʹNJMЃ?h||+`qlmV;Hհyi~a89ݜќi67ո,LbbQ^[i,VߖTbhH CɃ;FQPq3 n4 \a{;-Q承36?f ٢n9@(iDl'.hcj RL]!U.>((t R5^j]J4hNx<^g~mu2O)RJm 6niq$j >0h!:??Ͻ5$ۑǪ%Z`)+|=}wdǶY;t@Etb:J/l~9c+(Oy~_! lxr{40FZ x@Q6ƭ0Yl3+g_QlBygMn_xo:KzOOiAE-';=iť!*ʌl(j am8p'GO;ztRڊO譞O [O oV-m,A̘8TM_pZp;HN^qǽܰq>aPQg&֕KPe3YS6h݆voxEW^n# tVgL3.o {Wwx~Xpy'opR74,LTql&mnop! v|aByO NĪm 8?GZ@4=&'Qh`Nџ{lHd=]^odȱ^=|e +fWp€uvm =p QmoVjU1Em\/K>#+JO`*( ~ӊ;&6`Eӄ_cX/54`PMf0 X,8G%9aG+ "p벇چmakXA d@0`RsQZ,? NJ)Rn+ PQ+ (w6lOSA^K>qyʹ6ArB. p ~>g44de}v( MD:h~7c6Kd&U?ֆqU/ u Ŵ_: 8w 0T. KLV XX*8L9Qj! lawf #ڸv|w VX -̘VlB0UԉSB(eF5o'g&]/X @\TD;;lZaO,uraFd@#>a04Ee'ep0-9}*{qbU҄rA h|( aϜgLٵπX>b)<x#",S0& ǃW 3+aQgۜ5} 8eUe|4a.4 5;fNMiXP%¸ŵ8_<X/ aٝ ~=h%l$ AH0V`IAL_Pǣ/N;p~8 l^g,Q]z5y *T% i2yPu5"ΑYQ(uvFp7FVR9=/+B 8`P1CRU&R䊛 x PW {챻Q `G$]#/{zNIb[peq\1U|Y('O:r鄧x<^9 cm*LQ$]<-چ-GǣV5Qk{s U3f`Ayg5}w|:s)_7Q@ `\V4P,ǘ E.퇓]adc>з zxuAzB GDhd0@D HPMc"ƛ~V|9-0vR'CBF/pmV&˝iCcm61WՔw sθ+֧RJ)L^R)eAF{1W^+"q$G\'Ea,4W8 RGu\5Vx [xXhjٗ_RB;!#9W>, 3_*E[hG"`Z@T3zu5N4ylx z\\/9fM*w:2C IZa[Q;8L_ xShM.$Qzbhd ݾLkk-Au߾FRMr܁^9|Xec$xS9`Ta({JثKC*o @6 EAh0Q~^~MEx~`gyĻOHE_oUJI! <ڏ_g3*cL#nqs?) 
wB@JPP5]IDATG~M&tUTv=t_Ky8VH} >؋?\7#/~yv |6N/FcKY&aȰ>RJ)RJiBCGH pNJ)Q9`6RJ)RJѕRJoZ)RJ)RJhއŎl.RJx 68)RJ)RJûm?KRJ) RJ)RJ)7y}f)1֧RJ)RJ)2O)B,RJ)RJ).Rz)YYRJ)RJ)tzuc٧R&ٔRJ)RJ)RJeRJ)RJ)RJeXRJ)RJ)RJ) SJ)RJ)RJ)a}J)RJ)RJ)2O)RJ)RJ)^X)RJ)RJ) ˰>RJ)RJ)Rza֧RJ)RJ)RJ/,RJ)RJ)RJeXRJ)RJ)RJ) SJ)RJ)RJ)a}J)RJ)RJ)2O)RJ)RJ)^X)RJ)RJ) ˰>RJ)RJ)Rza֧RJ)RJ)RJ/,RJ)RJ)RJeXRJ)RJ)RJ) SJ)RJ)RJ)a}J)RJ)RJ)2O)RJ)RJ)^X)RJ)RJ) ˰>RJ)RJ)Rza֧RJ)RJ)RJ/,RJ)RJ)RJeXRJ)RJ)RJ) SJ)RJ)RJ)a}J)RJ)RJ)2O)RJ)RJ)^X)RJ)RJ) ˰>RJ)RJ)Rza֧RJ)RJ)RJ/,RJ)RJ)RJeXRJ)RJ)RJ) SJ)RJ)RJ)L^zIENDB`neo-0.7.2/doc/source/images/neologo_light.png0000600013464101346420000001505713420077704017353 0ustar yohyohPNG  IHDR)WsRGB pHYs  tIME  ;0IDATxy՝?UTT˪bI1NNwfrNzLw:ct:=3풰hm"@B|珷(%8!3_ws{ߒm:3Kp΀ j3ﴐ=o"/ ^WWW^^Ғ1hРE +ŏDxNɹ+Z[[c=;vj2do|xsoRMMͫ:o޼e˖%<999gϾ{Ǎ;o߾~ן}w%ƻ;kaa(l={Pϝ;w]]zO۶m{ǏhwAC۽kwZV$tԖ瞻zaÆޜ$s=ww>%%3 ;} ^H W]  02ڡ R!ڠQ_CBC-ԚfH:Hu[qDg! 2P7!AZb- @A+[.70T6C+Ch43-F"Ǥ&S)3+HRYY޽{/\>}6lP\\~XTTԯ_l UUU=x̙SO^7|ֿ` D 2IٱkhmR'FW0\l4'@8aH@.dA0P 9iV4u.Ҡ AЌn;] PdgK8/A( b,`8bfQhD68 U[\Ճ:>8Fe˖ׯ ?333_tEx뭷|cwoL܉->~YYY?ӋEccC/_~vg;w/˓#GUD4j)Ӧ2 geeb,G#Br*J70 ii1hOQ|j3V @i Ud|hsPb n:Ȳst,jq|IfȁL|eCԙBһZ J`;@ 20%%zǷ9ƍ}q+++=cwe'Idʕ=L>=Hk$VTT G#ݾ"|Z#1o~O=[ֳ>۽eŊ3g<7}5.ӑ9k֬Gye޼y?OzT\򇙄mD9r?]_ Fri' 4 J؅ aF\5P1RȮHz[)**0aBoVK/=&=nrɒ%'Mpgm2_QQѽ{-E(_ };^d1(@ ,^dnwB P >xQ,b"n7W* JL16xG'o0xe[M1hD*ц(zۮb Nћ|,gUf5^P.G<1w\wP%3u"2jԨ^ (#2utg[pg qv'饗^:-==SWsGTs[t,@J0YRΣ44蚠CN! y?ᯂEa+w&u@3i>Ku!|%Y STu)GE;;zVrrrz%%jlnn>m=ֵk6442*GwU+Wu(pʴ)?3)ͼTE@ QSDEgd/La"Sx6"Vpuh9 Gt~͐eaXm]!l |-ٲ]-E4⤽qYc+њ(d) WC 8,M/):9YGOS?] #F+Ǝmr8Ǩ6!X!1?َ]VDn2WU(EW!:y!Z!1BH~\4Hȃa',i5$xU{L&jVQ¹OFpevJmP,ERћӿm۶?AiGv F_nԩXVVt"E3cQFSL Q@ǜcD\h4xX+J—C0-piטD8#ԉO Mv'jZ©{*I7tSNNʞIxl{棵u=bn""+r8lYa-O0mb*Z+Jv\*9\f>UhBEDڮxMRMyLE{Yv7;QM +چE/|Ȃ*H>ppU|p̘1_lYWfU&b'+~nkjKGk 3f-M#Brd~#Gy/WB+SS6GawHEQ6+Z%F$(NS73B.`5j'4ؘЂ^WhR9ߞ>p 6GΝ;O= eΊhkp>X}G4X4+z5NQybt( o%Pz#8*/k:Jq*]NmN@ЈڥL#Dx E"_$hcD›hCv?qlx~ڵ_[(:{9}ZM E;M #|6Q"VJ|-S(#i9h3+L1Df]!f+\&41W)r-1wnB^uGaf.@jG(&cVk"Mljc*++?SpyۛK 7?(f(\ 8Ah;F@í)s皉6F=O.C2 ].X 70^A!3ԙ?$sҬȶs(IX1N$4Q~\CĹ~dգ(S|Q&E怈V֑A[T ~ ax7*z ~a D?2E4S[}%K̞={wWM197xJ%W#?p…/>2`,jZ'04]!r3,0Dm%3˷82!(!&]l5t\b&v #d\n>:t (<skJiGRmܸqΜ9>lT;555555|_$Hw~~[o/O_en߽ފI2QY{7o|s.[qYH]dN jI*zhHu z9aA #pA\(x|$ ID¹@x# 82,Cʍ!ŤBLwT$<1j'e.,ZhѢEf4iR׵?j+-+XmOFQZ^^{G!#)_I* \VC[{-=P ofh,ӊr݂K92Y(GE㤳hsPpx}&B::n[~dsQNns*fHP5~B1cƌ}T&Yv믿*//_pO>iӦf̘q5̜9 ++RJҽ{{F^[0H& "CE6쵟n4d֤ܳjP`@RFf)4I Y(,* a -8m 7[m0\ C'd!)XτA8Gr 4VL׬A['PY󢟩ﱜ֭۽{wmmm"((((--=묳|W] ׷׷'$a_4G>tƝ38C-*PIENDB`neo-0.7.2/doc/source/images/simple_generated_diagram.png0000600013464101346420000103445013420077704021514 0ustar yohyohPNG  IHDRsBIT|d pHYsaa?i IDATxwx}V^B"#a#+X"hjqԟt@jEE-mmVXEDmR%L$}#pb$\W~:ǐd @XIK/}ЦMj¾}*+++8,.@! !F0B@B a#@! !F0B@B a#@! !F0B@B a#@! !F0B@B a#@! !F0B@B a#@! !F0B@B a#@! !F0B@B a#@! !F0B@B a#@! !F0B@B a#@! !F0B@B a#@! !F0B@[ вvP,>>^wu@A@$g+cP#4͊GNI RDdD+rWbIf+B LEDFhƟf 89ϑ2vB a#Phgz-I97so87 -}q}$ileJrKďn'|0dnn7KmP7uFgJzq~S'Y 82BQ'<<9E1.?w[n04n0qDԍ[苲2=ݫ <E@CO~U2MTi#x#cljVڟ/Ţ'{ $N!R^gZ[T{ڭʀ_6 x%K+D[Uwn[ׇPٛrj%C415];O1-Xe ;g~"?LBBзdMZ]W4 (]2IL 򺽒 O>{3wW%Y?aSΡ}8gg+bC.hvP[QL5r!UBjuh e@vv١/Rfɚ&Cy6Y3IҐC4dIRlJ/{(bu u)@C@GUU[TQVrgUnI2E{TX}~r.nճVT|zSkY+۫}G[XǦ1^_nثʒJ6%'9}u3e[NZw_S:>2.R~c-;!פfjÆ&_KRnܨE.+?O@ I5kZQST]MadU ]C׫_Z$]١^Tf)"-B34nۜȝVg-[\EEUee~VIgotGrr{}N.JJj֏14USCkTSGVk &ɧ衊$=AL5bbd9#]0x:|McJaVdei@ttKZ@y*+[M2 CKJ\=;TOLc׭Su jQQz@E@^GE>[T@MI@@} N(huFeektZVkl(%%SR/6I.>4Hv{Bn5"6VMQ%yTAxe/UII)!ll1T_:ӊ&lZ6xҘ*v}4h:GD4$-*([4! NDWMUV~#5F[K))iF}[Q 6.Wvm&CBI}v^mѺֆ8EO'$0 s:tRd4^Cǣ| -B'*eoڐLK$r؇nݪ-ZК@DjW4%'_se5y;o~[송 Ҡ薻 h݁e3 Mtƍ-^B]О5VkfKW$ݥO盦mެee>}4"6Yj4MS2@M@K_X,c Hɔyomt_r={fu,m ! jݡrebPtdnO88sKmCKO׵;6cd?콾f?A2Je? 
ɚ߳nFܽ[52n" Ns@kUUIa'TNgWĜ! I[tfܶJ22^"\ ǎmֱ9c>\X/pjD N|!隍a0xD_ N'|rIz dD(>~b8֕G6:8z,ͻtxddLSqq:wf5kSe[+*^Hm[~N\h' @0̀-,dW fs8vo7e:t8^a[׿ W_^:o8jIq>D픩ͤ9:Q'm~}}p( hm6-:tٛ_?J^^qh--R7!Cdm!pPu۝--QK>_*& 9Nhvֿn[% S={6S#of JHOר\&N$=Z.Lf *4u5:t/U{n|&zRMˊMMUMp]VU/@wZzktI҃zIO{,x3~-} ݫ.RKSW_]QL/9zt髅 uӒ{QƍJ%yyN)wFg*|F'#GΑtO{?UgI@Cݝh Sn~m[(Z!DWQ;zdwW$5F㕔t"":ueg2n[%MLJ'=`X,nE&&jѯGK7ܠ;UgzOILJ$UYc!wEv~z{bRRVU\O< }+*TQ:x>x}pFr~Ky"eu|Qs֙jjK|np⣏j׊zk5w0}0gNA?szw>7OO\t-[֬ͣ#kI/Pr̕x:z T+.$-YR3v쨷/jӽzl33uOj_ֿK,^<9U;t8츮x]m).-MgxcNR F4<_?& O55:OI4x$Uŋ*sz:YX]^Xo׭;?FQ:suݻ{RטVIvPW#ct_5oJ*4!>Wb**4r8:ɕÑrzbi?صKVVSFN'?m驔i>3G'LPiP+,P#g^:Ug\w6o֊^id6_5}\y:_?^q*ݷOۖ-ӹwݥL%edHV.Xngd $Rmr%t"0'۱c橺L] Sdb^͛5ۂ<9fsu*"%3З/duKozK[>P1))JE7 \\{vݻu$1;[7g䉯? 6+55UVB55{dw_.D11%-ٯTS0\ޭzjQvD}Sϯ ƌӧ1 CC^9mLԈɓ5替?@9uFLb-v]贬,3x׹zsN:?׊^GsjЕWk|Ϛgѷݦ]+V,/u#hӤݽvZ UY@b B;ifʾ۝#|RyE3Ni0(I^ߴI)~ۥDk>4%@^ 7%ݷk~ұ:8)h&t]*TR ߑ[tkp%'_sNy8(ISnU z\[S~mɹ29Yek!]랝;CT|.*,\U%z 3LIIret J]/ @kgܳ %=*/EY@aQy>_Wk55ptRl- J7۶՛Z"鶴4k'}确˿oIv{_˥w׌;dotw!CXm!ڬC]_' 3Z1-ZiGMMpېp[*+O߆8,R %%w\m v%L8kRRBRp&|ur8HmCJ^`m*Smk=z[,UBb.kUQQQ[,zwoz IwoۦKim8)1ܦ/,T^]ZkvV I]NбcH9C\j\.$t*ϑqq95U"4%yI/iB@Vkpuk;<zVUUiڴiJOOԈ#ӜfϞ 9NuY_~/jʔ)$.>>ի\. vgd~کS%)'Ge>_!Z r bNu >5ثo@@ܾ]uJCUV3 C۶mC=G}T>֬Y;#xLeel6Ν?cnVM8QoV]wu|WJJ_Ojǎrݒ$hc?{H>#u] ~s֬Ysĺ^ܦ$iȐ!;5k,M0AYYYfffa]z^ɩ! 8v| !C@Z]JJZe8(IsvV_o_ݻSTCTSS#Iz4uT]zzj*͘1#z]qqqijߡs sNv9SO=uJ2efϞW^yEÆ SZZΝ{L Ko"#umJlu><`Hzd̆hq]UVR^o""N58P#Zm0(I>oƃVI&&qaocjΜ9}oV\;11Qzv(LlNahڴi6m٣o>} &4MUMI[^Q.IJ ]a1-+;5""NǓ-թċp~%\U%=N%u͚v 믿,k1wyVbb}~'=tEgVbb6o,P4u'Y(]ݡCO^[%={wJh1;UZ\On}>HMS[oA11Nu;Vw}x [_U]]}Ņ^1cիW/kʕСMvsm6~w}|[oU1bbbbdsΑT$͟?_cƌQJJv*nځmeeL#bcCWpt <*-R+Un^Y,JLD8(IohOα:wMA-0&;:uN|P_2224}FmYhZ͙3Gƍӝwީ-[hĈGSa{ ~Η2O؟i$F @]_g; } w 28i>_?Vq񇒬r8RUSSG%%MPll,s~ɩW,](!Tԅ>&ppRjjmUTQMSt`%%MyZ+P=peи曵dɒ:X<@p„ Zb,L@͝;כmz d )gUVnTAbә!\^obcr8o'ݫIrZPP^W@@iii4h 5|(/]"{QX%3//t I~JJ迒LEDtQM.Y,N%%W\(YPyʔ|oqqEm BRS7JHHPrr`i7nbbbɓ'4zqqnFr)33S3f̨7+r)55U?x$I'Oѣ/W^r\ڻwo)FMf >\.Kgq֮]+Iڵk222$I\p,Kp)FW\>[JII߮F۰a:,EEE)++K+V8ٷfϞ 9NuY_~/jʔ)$.n>e5,fӤd;Aq >B9*,|Gkpbqf\JNTZ&5$UIܹs5o<͘1C?7h~nݪѣGpW_O?>L_}iӦ/w|uzDÆ ӢE?YC^WR픦[l#<~XK,QǺ馛pBEDDh񪬬TΝoHϟ+V7lr\wyZpz!˺馛{uiz7t:5iҤ`cX4k֬#Ђ ci5|h„ {%I˗/׊+t}m׵;uRU^?p Tד iUQʾT Cn>IōRLPY,P"M/ZА41̦|zu]wi钤cǪk׮{ԣG-^8CÆ 5h ^Z~~JF}oZԡCqZrFuVǹUtM&MP7#x<36!A|)/O?OK aU@-: IJUT􁊋?4UWoK K^I$%ٻw8'v7w}.r~|>|> 4Hqqq=s駟֎;]g˖-O#֓yVݚ4rHZ=dպ⋃TZիW;~(89^7w w}WfҚ5kk-yaZ)&e z VU9]$YTS[}t\Pآ6^TDDB'//ORmG\] :) 4sL9z_eeeڻw$驧ҥ^wѣ ?BIRjjIII9ڛF״JLL;]Qל `ʔ)={^y 6Liii;w1{<7SFbQa,t ߑW*3SOJǟqoCӋe3 ]^lcǎ h7NJJ]wݥիWZjnfISsΟ?_Zf233uW@I;3n';ut\c6 ^f*ahڴiڼyvڥ)ShZdIHpx.F?{U=3齓 "*vwU{ւ++ʺUuW/PlБPBIB:H2L ubss$$s~<ΐDDDDD%EDDD:-G,G<<􌤢bᄇO"(hfs[fڋ (@||o0$h9??+W2|ph_UU >|8_|Eq0 aÆ5;O?MXX۷oN|n--Q^xDDDDD:N0 ).^VO7""8l<<|H{Q1nȽ<<<>}:f"""ꫯb6}O2|pJn&BBBؿ?_|gϦwޜ{\uU/L.]߿?&g}[n///&OLUU| E/Ã8{96mo?7nj3ĉ>}:Gs3x*<<"ƌ@޽?>cǎ%**De?iDFrΝT:ӄCָDDDDsSH'pTPX%8V|}SUaX q:9zE-Xsx衇~?M7Ā:ujj^zrJnV&MĜ9sڵQx7+o~l믿Ʒf])SUV1ydnl6"d25X߼\s5TUU矻f^{kryqיV.]oZ\}Oz [Y!va0qqX&XU\<."""""mF B3Le~$tV`8vwEES\e))qcD""""ҙ)A("""r0 '8rd1v{!X98\@Hhfowa-/,ģu,iA9Ef3ڌZ""""g((oX*+3 ,l$ԍHgi0 ʶrb#f($$d4`;x6Ft&ּ5uEDDD=bTDDD4tZ).^MIZ,||⩪ڋ@pXz;wHuCDˊ+3gg$&&bۙ,4"&VRj[iOJH),,|BچꫳqFSPҽ{wu#߼y`ذatޝO>/oٲ?+ .DΝ˜9s\c{o\7 wqg}6J>\{DFF2l0V^9>e5kpsw]uUݻwଳΪ>4??~Y#"wqXs=MTe&%EDmHDDUU^6/~~ntr^wެ[uX,\qm~~~L05k;~εyfV+'Ntm gȑ: /O>MJl͚X)))̞=I&ڬJKd%EN##5p:qKJt2JHf3H;:a֒'+// pz :srr W)U/Au¹:T'N^^^ۮ &kiӦQTTkٳf.ǔ39TeW> Qix#;;X[TTTн{wfϞpDgϫȟ~"f`X@k ssD""""PDDL`ulq;Za%0> tތq4&9[me9yNK"ݷqRVXʆ_}htl2\TV)>\LDB{ R/|`!6bgVVR4}]MX fQYVIPd}Øsҟc[߿f8NzՓ'ok߰elU6v/mt~0ذd'wO.NpRƧ0[՟ó=mLjfoN\k.k99Dxzr]MpTp0f% [^׼9_yzy972ZDDDD貭r?ddrv6j>.zW 8d2QxUbϻ[O:Ixyubٺl+ ڶh"v<}e:9rU9u˒W-f]y7GOu? 
&(*/T'/{:Gi=l6q8@OO~ݥ EE1"0!..a4*''A~~'i2^^DDDDPPDDZR~y3UUu kdVf? ݇UVa-N㢻. o,OHt^^8Nrvk`rJ~_՛ so8|w}*6W 7I3&|Nei%D&ER|_0fm c-lX$>u[ڭvÕTU/KplY Lϛ]wޞ}!]BXH?Pomos%#"8xx~ k[[Bsl3W)p}Q?,S,̵٘KYYzy.]9:UtDNlw!"iZ,$QY0خ%EDm/+cڵө,t=6L V6&G4z:GFZZ_&Ʊo>}تVh>av178~-c:i(7gk]#"6s%KتllYǚDzZ 0(@ tmNUQ>̬*r}~lbRڞ1j>3afbWdlWD{+ KDDDD5PDDZ{)sTxL$z{:c;?rJR&k5ZVP٣e'Kfe\m1SD|[rMc-CMH?@1Χp2+ΛJzcO~`6q:|ڝxx[[ X@UUZ;!33y>3aut""~fmzE=t$"",WqP]pB#lF?Q׍b7):TDei%:ET(ʫػq/ldKS!]B\Ö[0L|} M7߼ }bU()`OLf7!3Юs$;Vy8se/lnKy׏.=ݛߑ?UIgʄMX_ZXEڱJJX[R;wreDbbwEDJ3k/""""r S6c׮$axxe2)A|}ޥ ,yuI1,%f`D FbǪTT00hV#XcE1,w 7Lj'5wsLjѓwb߇ʲ:ώ;&Kՙd2*r[ T]'nAl[܌\V.ZI/E%-~m9rou `16w80\mR;f|p0>LOO05:^&||A멷Mi Su~>.?׭=E&Frws=00 ?<<= Lj\1 "l,Yh2&?6! /@__"#ɳZC,8xWXP2 !9t^^ɾqz""y{W?(+a/ǽ4K%P DDd)A(""'n={6zcCC$,̽A_EDcJ nDv$ߌݻɱZy{w"/.^VƿssYp WXds} ⶘$PEDNkqG'֘DDN hED=t%ADDNS{r/%'7 66405۸}'L)3\{ƳM\ޝ׭ Wn.:D٢qYQ\=;wrMd$Ӣ9O-HEDNKu~j/"ۜ{PUP0DDخr^Iu11 pwX""m.חSSsslfuSfCCyOs˸P:Uaɻ1vFW={sLgU-(""""K Bi֢f3O%%/v))\Ɖj ૂn1~ 7t—sy19@ Uۖ.?3vz>6""WCDDDD:%EDEJJxa솁x4!h,"'05:T<;-fz|<gӰä'hٺ?qŖ-$ZŋSHRDD볼}%ƒHĔ3Sa>~cLSwo@"""RꊈH5Ճ ӓvuwX""]tmpF;Ppp ߿?""Zt@nh*+ϡCÎ UHJb9N""H̄͛)w8\->[FL9cHBǓVZʿ\ v׍'r|h~ˀIDDNޤEGRy?)?ښZavuJ OUYam&7W4)9'^@puN8ޖ[GHtH8swRr#zJ"""""$aݘnm͸mtlSD9+T.ܸVk:trm6?s& d99,>r3M' ~󂃙""tmee ww8r?ٕ>;/ %4TI)SH T}lXgdoF6=hբrj+[὇9곰VZ)/*?esN%EDDN""ҤE|vRfsf`2m)q8#T<՟&56ndgE t3328XU=0b2lfRD""[Q?rriz?okڏH9l]Y-y{.Kx:?M TW'KGolh lj#ny"(+,Ȏ;(:T'9@pT0QGXzOcwդH'>=??+RV|}4 IDAT97CD\Ax1z.}mZn%)%I'>ve Qrpӯϗ4""rB6G32\F$x{3%:YfɳEO. Nbub(`;ٵz7Ͻ!Ijl f:ջ[0FU[TF B9nll??? `?DDduةZB==v`~/|w w0m6>0{cpSt47EGdgNn.vh]x ״} .~TD]şQ¿ 2rclrTVW">9.OŇxYH?@Uy[}-P-dndE8Nÿ́DxbWrp%C0n _c h4Ʀ.)? {w_LX01 #G/L6I3&|Nei%D&F}o;Wd=ϢًsnRDDDN""ҨJY{`}}K%"rXXԿ?ݹ88'6lAj:?140}br2 䕬,WUa o?~TDlN'LWrP&DToϦo69o-YZnw#1yI̿i>;މliˢ"}]lU6Cś V2!^0yh2 Z=u(LTХ{n/'o k}hNC2~~|"#yHe^ƈ+FPWrR%"""GN#9#l@Egu;m*Fw/z/X= =zyxsbEmt3ݺpTZȴ4Rӆ{zxrqX&Ҍ}kV+332b9SdTT02-231tɤ$& 2ML~l2zg_{61=b괶?O}cƺņõ^S r (~?p$21豳]KKY_~u>~{%.˿~ɁX+?TVۧ1&G, 7q u##{6dDDDA {1?WFF6[qkz%5ӂlk #k"2L&%&ˋ[1hx)ii|3x0p?pƇQQ(r8GG3=._Jk))tN'7ez{379kmή]yܭQJCbƺ~ee,_~NuYs̚yf:}w ==ic*m->n|u9l>Kb[/ o&E->oѤ`ބLyiJSPDDRV%Z'\nOA'JGzPt4A{|9-L!ˋnt6ȷ8wz4mWw__^HNfvR:l(-drmL+oMGED(۹oNͭH^{vXtsEo>bz{?/JGfM{~^6gmdg{럾DXl& 0 ﯷ| S'ˋ˱@u9WwL)+:HѻO'jqR}wЀ&qp1ζcvrU5kxg40W/o߼ѣY?ݷz5>NNf{]r<ӯ/οL8'EE|C=,ۗW?onOٿn+~xJmBx8BGm=@؍YZPnZ,LaaNMQQxLִN2+nLDLVR ky&9h|fݛw=`2j^?d}}O~o6g,7tӵ}W[MO|=FhpilvKG꿑{nżtKԹ7$ 4:ݹlλx^ETA(""<?5lO-NKL::O䓙3z 6?Ǯ1ߌsޟ'3KI}Wc-?gt9>~GDPɡt?2g?caDP?rƍr%׎N'n h55<(̛2?;*,`Ǫm?l~S@o{,0 TߝϏDeY%iH[V@κ 󿨳ϛq7Q]qx3=StiE11L|`"o&%GJH_Nzp d2Hš;pC)+,kQPIESPDDȪd~vz ƅ;zRo yqpv-_Π+76G #v`6ѥwoιN׶>]Tg_&Suy`fx#&L qk QURR'69cuj*oĶFap_?j0~G0?; (t rrx#'_GED]3Vo~jk3wvd|-ՎKG3f$w!c]9; "yX2o]o>N]r%߼i;K:9jyXX+|G3!]OXCE-f"KWLu_︍p]cXغl+ +R1YM1N %EDᬹ.xwߝrl}bu(F]7Ebf̔12qS^Ro[HtO,}.ݻ0q]q]{1O9옎%^%^Rg]C.K4'KDDDچ 弙q!! qwX 兽 %9s=<{Wbzv{:'5n͛+cK/e߷1DAn0y9+BkT/s{ g(ݛ5M}h<>#e:&nĪWDM9 ğ pP] 5,(-Ç+9tě+EDDD !ǝw. 
65&Nš*bb2z̈kr쬽{y$#\4L }?r$5cݸQ׳ȑs^""'SQ׮囚=L&->0瓓0Kg}"""" ffd`b0x\t-bܹ$]ͺP|㟇w/K &"9'$_Xkג};ǵ+cǒtY\6gIכ\CK/%W/ .$ "^ޝL7EhB-ċɄyz='LJ^s͟zD^fnVEv{(\S\͛ϓII\ѡMD[ɣ579z?@;a4""OEE,iw\v׸q>ÚfЕWrwœO]o;o'o.{3w:hx-|D&>}/Եp`8'T}c^CٰhYYxxy7d7_x\}:\J''%1=>7rr?9V+&h4YX(\V[II\EBl/|qlxX^LNƫ,i 0֭[Gjjc7~=?p08i+/>4em2p\u lߎAI5pyD|atNn.ǮJTWݜHn>>JL.]("gEE\u+V6yOjcSl*;֛l9EDœ{P]EDQQJKK""ݚb$=L&~*A9u7GGAXL&3Oy3G#lfjL g3w&VV2-=Ukv6""r˪\솁H `tM7*57l`E9]ZeCaݷ/O%%L^a g-'Pp}ɽ]`|<A l6n޾ώ(ѻ7m\5h8^d?mc {`HH'第 T_\}41SmDDz*57$afe%glzfkd%j^1)x%+ ӵWC@ɼ,^殮]HW{-"jT#[ī={rkLLW6 2)wP͏'"r*PPDtH'tfoL@WDD;,i8Vpƍl-+k4IxjeTZK aP@@{yʺxyl<߲Sf&v;' _y#'GU?"r2 x(#jр nǟfWf gni7AY+X ^|4D' d3!4=U}ܜ-Z`Ѱ&zzz mAe[ I vnחQQ{10CBN\w%:#~~k<=7nտlX/]ח1q^ٹ޷]UrNoۇ,X˪U|z9yK]z~hd$cw~rrZ~~H}}嗘-YBuXOFAA! B$A1*-N |dg38ZsLlߊ,ڵ•?T[7N._~_q%{\,Гۺ5#-,\ `*'TʫքtȮVQwfL| ښL˗+,T )j?>>kgͽ+4Ֆ7i7BD}?vr#Q@v޳ {uKtF_:E#HH$Jeљ3ޝs˭[aN %Νz~L:3ǣW/*3:wfsP=6nl$a]z~pd$w2  }}ν:g'Of pq hv-%AA_p B)YMK*M5gMVs'F֍Jx3<75{O}=ƶشl(BTKb6rX`ݲ%/<sJ%3 }@0339;B_1̈JSMF~y5JK6 HߘjR=\:TaȐGݷT)S0S{7f ?Aɶԣ|ۯtrd2:Vʛ_k&=A&"A( Otʀ7llN'M۴i09k~OL"'wwd,SF* 毖-$ ^&&21dz:3##j`j*{SSbc#6   -tUU`Ov9(/_zڴq ^޽)퍍Դ)ߜ9CvQS۵cI~y:_<ݼ}6z4'燏 /;LJYǎAh+ns6&zݯ\ BA;_AgTn!0G]F̄vH$Ɨ_ݣG1n0n?w,J9ˁyz5EyyH54虘e@")j]\P31awQϱ%KcT@zL ~=8o!!5RD?}ܾp<-fX1JJb ۪:A}۶ _0/Y_{ d=== "Nb)Lٱ OYWtd₂4or',Q&4+&@Mxav6:(-)!)8_Su#Gi޷o!z07ն)O f{˖ :$ -,x̌_<*"IB((,m~ёhIŔS(NH7QPVU,uuۺ5 p61\OJCP(({Xt44ΐ6ᇪ( U%?;~AHڼs'swB_K 1r$CIL y \zM*;4m˪U4_\g{{@?SRX|,;ǎY,8}{Qvrn^_3Ã|=ݺ[!3U+<,,0\Oǵ"CmmUںR%dmǨ.1C{g)Z2II&5e24-!;fj_72?SU.ijhHMH˫TR5tj_gg+4P۲%fwXcl ±Ij;Ƃ PDPP(*:Z5ZB'I  NtEd7vV㗕ȱoU‚痵W(oۖ,o1Amkё_F 9iYTk䥧?֓p*V͛uTղ ݱB^gt7ԊR(WCCF)+r֍z*AҭZP!СI7nsg „k0~<TOJ$ Ean.GKyڽu;ooL퟉Wݴ)F2+(̟-[2)KhHLeKKV'$et4٥j^R|F[X;_RJ/6sqq:ߜ=)*-;rrښO'-?>*ѩJRBMvFFtb||*j/%bȚK(9GQi)rYCvTz_1RnHh8ykdGUJWە B2Kch U׫W+ g(IAl] 1޾TYS*U}^GMw7lNn.ᩩDךHcf%AF  π11P|VcR""ᇘ99O>S@dLS@.x~ϙC7ĹkWt 9t3kָ씔Js 77'[y^ZmVz]Js׮ ]7rnzth;v,}>Xm075*먺97\߷^ӧcټ9ZzzXT UgX+V\,gQ{17no 6eмy <c,-ѐHOˁq7n >IBc tr❦M*:U2%j* k8+(&&|Bkс'Cq53P(dTKևW NUeLFQ$_2A[Hd$rG|3{y߇^NN5nF޺U BCj <9N)e_VAy ggum(>S5 Ƕu{ihiIݻՖ礤`,73#7-y=r$S{۰K[ݮ9y\g]9B)Sk8>[S\PP{t<>3/^={0qpϷ"JL˾m[^ݼYX+W80o^nC`gV$;7n{rrC(;a;2 8Sy \Đb*B+,fk䔖RBos==ڷ*+Dm}--kڔiiY @D͚I.UUy[[suvǦMA (:_mUF^s$)'GmZ2j߮^qj)G(UQZ~~b.o]DQTZwWc& vscUIE)( mڠȈԼJm)bKJАJVؿm_][M ڴa]@;n`#Lf?=%A! BA11h%u~LT*-*B!WJT=/)*RUB||7umg9)) Tٴjō ?quꛛidHmY8l(),t2 Y9\jcӲ%O'12VT>,M< ""UeV: U+^~y _ Abe6uiiʫ KNf;|hgg͚aR 'Wn`a1kZ\07g˵k gLuuͱ{6߯P(֢JOgOx8I\-[Ц n3\@O/STZJv rs-#m>}ЖɘM x>9z>NNV6^]]tδ}LKr97RRIIaݰa?%%qr"WΉ̌BB8.-1Һp35E*+kՊ&::X0j=FsXx3Aj=e:?k n'+ZM߾]]ђɘgood6UZ_/GG򊋙ocZ[U߇DooGGCZs32RBFX*.JK9((|\/]BOS s,  M$Ab7+%Eu#TKя鐎:FFشlɩUե ;+ʪV ޣбsg/X 0wvSRPv2r>GsfLDߧcc߶-.ݺq.yv/LRH!=bYY8t耞).J7КUyũ_ԩd%%q_Yab/Թ36mD[#+Zѣ8Kwwr97n wwϟgLزxO pZ qq\۽Cژ99r 45Affkݚ]WCB+G^tFܺŹ,7hE@Bw\GGޱE)M>“zNC_'FY[~ =՗)>>˄],(`K/Z6Y\2*WvspĄ g^twGGCBA+ ;Glf&rtdI~u ruEOS7oR{_~fھ}rrbE,0lX+cm`55iii ǹ=z`/9,z}f.ܽm-. 
=Uc9 }}etwpv~LLhiiIK jþh&+H$t.ё&-l!H0֦9<t耱eA!H˗i[eN%AfX) +7666$)_|qw##g$ޞ3gr1utdؒ%9&}Μ᷉)6E^R¡ w/1sty̍ 96{'N`15*_}E He2{f9U* ,\HfBv>>txU.Ã'8naaas'vľ/ M,':a~'pWv>]ˉe˸{7:@;Sn_~sJNf–-\ImL?{Gz\ZZy{w,,7[kM0o3Znq|R⃂MM‚Ckt47qחԨ(%%$ eߏDBbΞV'r*E $Т$a9BTfFF_vTK{ `"gg^g ;wJ\N) 44ٲ%=a$&3;z3qnkܸz_ŷyn|ةSIs'7+9l9g>/7q?^\VN+Wg85v8K17BkgYis AxZ%9ՐhF2 ]߀ |n'.eV`*Bfo&NAj%32p*EryIu͛3Ʀ!C{,*lMNfvTL*zz|Ts&bO"#YZ&U+Њ5A˪U;e eqYY 彃8 glj#tr~Jxj*|K}~})*;,>{k|ァ> “ʕ+bQAժx0R};M>| vmڠۤ Wrh3rC%?L1Nx8ErPG{&kӆAAԐ$TSS$L"UkkXZSB,)81U^43{77 <u:g*-`mj771cDWYu\٥H*}ݧOlW*peRB7mԔ Æ L BAPAi)k(U,HxiFiQǡ"7- }SSڎK߇P_ pݛOiť oӆ$a6 0 o6tN[*=;;&Z[<.11pʓSSi\GGfۣ]aNYA,]b='HՕS2gOaOk0AA ShGJ %%hH$liV#GtW z]gߧ:Aꪓ1~ "Tm pg0I`\GG޲۷_Bj(J˿b}b"?6oN'p.4AxTVլP4dwVt66nAA"eAx W}(L{F;sAiod)oo d2joZx8k*,b=+HKUP@ߠ PX؀ ӣT`fDRLJFFo/  DP)s%;K)8ӣQc% <| HC$7o3$pa:t`V>ΔΟgyl,%ru3 NVI /\wqqac)oolbhAAAx Sfu||Jw6s ue`i#I͛YjϞ֭9ަ .ȓ˙ץKh0G˗9ߤuwGK)  L\M g||0Ԭ5IAD+bc,]owRˣ[` CBSTԐa xz:._V~jAC^^kg'AA$ OM)hH$HAAI鑑,IBMx9ZZ-Ud\ϟgM|<ʿ߂S(/.AA䔖::\j׎~   <DP)P(XGybB4[FIAxrqm[Q,i6lۛzz5+KKyM_̥ RCEr9Sy/"9ޤ ڵý'AAAH <%s(rקauv͜Ɇ1c;:>w_SŅme]ڧG{:_ѭ'/>8|]\z]o +X޵kz۷kL³EW>>XkkZI8-1'ݚ4!}{Vb z ^\`1 "ށXi۶DS"{ eC@@sc` :W;ɨz]:W-Y…G~to^tFR__NTX>Ϗ*+hJOoPAAx@"A(_|Qkgט*? IDAT<Nj|aD1и~#T8mǍ_~[C%P$䨫?>>#IAD?%$4X\O |9^jvTlLLD.Wsrh{2粲PPQ!ָ?ww4&BCYXz]s8]pD]֭#uCrW=tnm 9t0˗N S < ؓ^T_*er'Ifˑ"yiq1R yTRXvcTh #kkAh::mۖPRCiJDK--6hTs57WmR Iaaȏx4lЀv߽˸7(U dݪML;]M#>0d&lZXZ'C-ٹX˿ο֑>{)wٳgfrd") A;kf$hNlW6#Fngfbc`-MitF+WǨQqCdðaWAGCa ϟ#|| ڬ NT qd₂4or', (7/-d^X{ޖl洏WRwR):'L"a-,,-~OE?g˝;ruee8Ax啖2)4m))@mb-1:ۣw^UvCbMXj*)ʪ5C0￱g~Ϟx(˓rrK^ٹUZӧʟF!DG[0Օ`]CetyiðaL8ڊ? >ʺ}(={bc`@DZs-*bA}x㽼=nb~QAAh4[ ]est04<Y\^X.]kgd0y6Ub/5*?d?1`]Ù5kv-I!!D/7nTU99 냪X h߶-R ΛW-yܵ+>8w튡%Wwԩ8xOe׮zPTxՋ/ҭjr9:t;wVN_cpH O?1fjqNJ⠯/ 0Cie1w=8a>$?{::vzȈ_~gTŠyhصm]'ǢK54pޝPTU$(Ϗ[ZIϴn&ذ/xnD;wF'QZ\\iz&&TkW)f3;9Y 9.gɱŋ9p!cヴJ2'k==矑dtv<aYkkC+W+,$T C]㐗Mݻ)bH$R__a>#   Ax]r#QQӫD̬yP0;R BAnj*97*m:c\a=j?&M 䠯/%:PŮȑ^`LL}<7mRw8aG&vvCUEor8Oҥ(*SC|.˗k?LL*:w526t^πD*eM[v͜Iiq1N;3hJ r*I+-ө٨s9wŋ9q#֯GȈc㏫% Ơ9l'W`ܹXѴڕΘWHΜֶjMK&CqC*0M~U&z;##z-ٙCy;Ghۖ}!M-Ͼ76۷3ۛ{LOsqq`cAA BA'4(,}* kl蛙_!Z./-2--J$33k~رc3t"ղcSGGoO_of֦Gɿֱ{L=^,P(8p&=+ -UKl+G(wh(XԷjn m7 O&0)_^ S ek|r(Ƽ۱#fV8>Ⱦ Ą_U-xEAbpA'TvI ( /Z*7N;S\P@Ia!6ZUuk%%Dw',J2[E=[l[kG޽7^ŅgP(HU]RXX)̄*ŽSYQc۪n8z_rh( j)ĵ{wڽ2w##նl&KO'. 
aϱ9&Oƪy6-[ktJ Pv45kr夫7{DE "9 LJ-Z`ƭ҅;*d֙_RT.Fyzbrtd͛d}σ{g D d ɀ40Sʍ ƍϷޢԩX{xPGRHyy5 ktaϬYS49bffH*̶׏9w8tH#DDԵ+7mIJe8t@d݃j9dtڌQ?Möuk==ijsgzet̅M0i DʕYYU;n虘`-[ȠĉjXyxڽ;{>g˪q'pt"Z&䦦raӦJTV\~h<\107B973v&1 --1i ]1Pk@j+,Q+0>>xD"5͙Ūx@ծ7ٞͱՄB#K*,dеk\ɩT]\{.8Alv=aalbi-̂n„6mp35%Ϝ%{ffHLtuianN/GG򊋙ocZ[kffww45iceU)UvF.Va0^$g0pn#mmGEݿF۶ kB-X'NCC IItё9䗔Am:Bkkd2Ws7w9;;1e|??gccʹاwƠ zlg?p61aL˖}Xu_|w!31[ce' __=<3k/eǡ.Wruˆ.ZDl{-Ңv-Ƶ|r~^Z;ΞE9:b5@-&qx;u⥥Kku9ߟ i1`ZU:u9rmH c9l41䫯8PFQZBɽ~7۵#pv|-vN^TؼMeg\{`ˈWB_Sdj+ Kz?{ŵ>pEEAlH+^rcl11McM/&XPcCTn0Ѩ(XEQAwv.<O;33==$>Z4--ԉ^W9c}M~ʢlNK +(gX1(hx:w ''lB <<@[ǁښCB}; }VͿ5SnᩩjOcr&;w4iRmWnϺu3\Y~ptG_K烂y3:{vxm9;Hf-ހjJ^Ŋ?d޽t43]oF=\Vѣ((`ȦMYV@Fjݚ، ^>x[#6ǎeJ׮@ŨŠ`Ν]ߘ9;LJ_ÙkJ%_PC  A{I+ȩ7/Ņ!ٳC7-y,}MӢ #-?Ѕ Yla}f , (iJqCL t34䈫+vbcU@rS1)i:6,UNޞO t8  =Z97zF.-eDt4/#Fb4PRɪ52VOct20Xl   ?-Bl}ʆk-ؠ5 dƍrelj.6l`}),C\Yt2՗x~>SS踉w"]?_W-[z&}{Rj` vJ}Xp]f]rcbBn IꪒyNfg?T`KjA)ؘ"9(  -Гղ,*OOWʋ>%&Hmonp"ncy=^p.}֭&eɓD5ɾ66&պuW4Y[[6t\z;#WQ$aГё==qׯwUROd$o&$P(.+cDT[nV[>ʊXh(2 %6FD4>"#{bQ+[oeqp0{z5KC}Ke?s  hH 23)P(.}ML4ѣܜ.VVCxHer9:􋪕}lmI]3fѪU tw6 jߞ?nnq0GU2:2۴XWI+2ΎWkSSYRmԬzիkG~w.W(ftQ+O=ݻ/ ոK٤dϥKMG%9'^$V,׏QQ h{gxK}Kނ  7JZEu눬c\`ɓtVWE|՟b'U-;@Iǥ'NI~x)x.Na(ya^;t3Զ}>(zyCh(vkԴc, ⥞=yo_֜;̀z yKDZ6tP#9':}J$la^{ Qw|4t(F::#Hz,3F5^ o=JK<⢊eSTӺu@[ucպ5ߺU699Lܱ;zH"xƜg,g@&(XPp/Us:!_mNW۴D`QLLR[,I=Ջ׮Qr2zyLvX@-fQTTʮn'=4UsnWuH49nn ~.9'9©d hgjK={}So5:1}{yQ$&RP7Fo9m;8!+|}yյjVd0,k̃da77GDii GDp㭷T%޽5sir%+ s==t#F`Tt2~;l@[zvueŐ!u^?`\mm 7n‚'yM~Ħ#JdaC2cG xEEt03~xӳ^ӆ~ ҝ;kk3}{=CC6W3Ghks7ݧ  H 4sj%н֓fQƼ~?mP*3ߟ[11acDy=a@Tê}$df."әܵ+C;vjv6>ޙ3IM/><É׷xe*s_Tr_Mec\vFn;`*|9c,~h /hb"\]}#?[Y⯓n?g{PB쌍eQjrdivee|YLOS==&w(U0)+37na66& B_=3g%2ɉ;|uj>@Eܪ[_ٽ4zm[X\6s/^LK~OW*eEǎLfŋ\,,X\)*Ox8kto2KIaa|>t&>3>n沰gO:S\ň-[|e6 nIqAAxDPX-Y!j`4PGG-R5G᭼<ژZ_8ggIIdImCBTy!J%Wkz ߯Ζ`չ+> MK*e#QtXeOaK UTʘjzLmw8GIWͫw%+Km;G `js;nU]-Ѵ510JeJCSS訖쬖 OccB.l8ICo `Jl,PwV%ꫪѶ9z13D=j mٴBw;x>OKsvnzޝeQQ8ݯ33+)aVZ$1?_9v{uK  {QChJ o[6jjG{J$l҅r߽KY;@\Ш(&|',MVs4\xgeKΘhW˖X.ً sGLĿ[7QMAhyvN`^?|ܒQVW?NyP,ƳQ]n7TJl&ںj5FRVcmc#N!99Lߵ\mۚZN+jgffzz}9>xTue2Jk$x8v ]ӧ~JK8Ƈ/[YQc[L 22߈~mrll8GC̘> p|syE+K^1A1\A/Lje]$:^|v1a|ϟնbež=s'_=II]̊'Y T}qVȈ<<{cG\'Os8!);wr7ry36o/re/5:.03 CCU\fzzxٱIkt4ظΒsugGssod4h \2 Umw!#79yǖh3^Ӈbs8!ObŋliIŋuڴe2<%dtu!u])%% "ƿoiֳ'ВHj9\\Lp֦Y*OhRJJ\ѺmΝEr^B~I$kOzAɲٻMsrOMU-߷n՚co_SUc:ubr.eobBfaڶW}-T\{SvU5GDps=X6Sucf\ۡ)yy6hurEqWKAO< 4S[nFL 37Ǻr'U͒u[f͢T.gF@"#:eJ8v,ߟu+ ̍ hAAUԑ8q"A/7T*Q<@ch]7k[&pp.{gΤjOe'M=/Ͽ Z\[L|r4ߌEGssԵ12]--qqcmm3vbƮ]y:33k3j}a_Ϙm-,xӳ35%pL23oWn„Zѣ((`ȦMYQ7y-ǎEK*lynH++>6L(''td2ە+|_lR۟ee|W^p0H"6V}fGwyĥS\^{<g?[9y"#Ð9G99ۡ9HJb]x8+N|sgo6/U~3g<" ͓Py1A)U(9sj [tV4C7-y,}M✸v!6t47t81_1ciEx- (iՊj#ͻQ\Lpn*7 =GZXEmh䕗R|<\/鱧{w:z +voyilؕ+Xחoup`ĉq* ǓBd`{9tk̏z >9988MTKX+O"a;N SFl&Ljg˭\~;P79M77:| ]]Ynm͎{<gf-2 hedČY뫺,fׅ 䔔hn΢~_yrA/~FN}nhz|os Ye  BAh;LU}+퍱V˚:V$Ӝ }37nC3M$J`ŋii1Bl:sӓ\ʗ7oYr4Bϟg #W^XV-#₁LFyg;#z啔WRa gAO$AwR}cA)dJ235 B)dDhZ&\/W* ww<^:R)_891ܜg/^$\YRBpqrQrɥ2"*؂Y66lmQς Y BAf$0#gZ8KKZp#T*%75奏?t8 Z{(9EE߹Ow۵#/nެNIEőQQI7CSl%qz̅ ɩ諕%G7MjQCQ\]dogHE2ς Yoa DBwU9>GHՈLAܞԲH$>st$ iiY*|!ݳ'4S.u^Fsʒ=Eчϐ(kcGݮx~ς -!LEfR`7IA$HX۹3S|ee "qԓI$,qpww9URBpy聜e@DYee+HxE$qÆ TJnnCz,Ù8q"ڵk5W?P[v5R)jҥKhcΚ5 GGG hժӦM#11QmR)R]]]9r$~~~(5 ДDPؗVe; smm̛7j:tRڶm0&fooϲe4ILL_{HRfϞ]k'NkR)k@ F* @NNK.%9*ꢣ?~ѣy7y4[] ֭[B^T˖/_d BRϡCIJJbذa婶H$5N8?Hǎy>|8M   ̀RdoF啽%D++L%-5n~ .oѿ <),,$66 6c&LPqdee|r Dj_%''3h 122… l߾dڴi{(Ԧ+G|## S+I̾xݺ4JGC=z͛{*J} ,9{wLL4gKr83I+(HJquerߟC>~o|̙GQ-DRk{mS޶mtԉSN1fr+++8&MČ31b+WdM  BAf ,/ʞ)q-|AA(&ԥ辖7s5mۆCۺӻwo|||x c̙3&}Ą 2dgΜ_~Mz,y18ꊳA#  ޽rq&TH$ݶ-yxFWޒ""X#J6( = U&eT8.OcǎMAAj-ӑJpiƌ-fff <0 ]]]B~mӣO>>}ZT*?d˖-HR|}}$/ v,Tw}K/%ƍHRv̙3122ٙǏS\\/9o޽xxx`hh "**a.}vEFFFɬYfѥK֬YZSúuԪ"+䄁]v嫯RLU%ӧO3vXڵ+WmYj:DrrrU~Jd2rrrׯА͛77YT"/kDo___O? 
pDPTk kKgţI۷o}}}xg(JGΝ111aر7K4TH\[[[lllx7(Qĉx{{c``-oC‘JOXXX`ooϊ+ԶW$k@@NNN3w\8~80uT2?QQQ9ccc͙7o999YYY̟?;;;qttd| j<_h׮̚5K-f???R)aaa}}}j58T8 !F*W^}}}\\\ |||ʊcrF\V^Mv066*~}ЪU+wި}.^Ν;׊>cǎwQs7|Cnn.۷oop_6mwUv*Ç#J^SJJ fffdu'6hSg7ZYebBt^LL#[Y_j*3.\@T"9h)zO}_@vv6ɤp*F:-[#͛L2b>v]v1l0P*L:/_qttdȑ={WWWƎKHH?#P|G}?!CvUVQ^^Ν;T-ZD= cǎL> `hhH@@̞=t̙31bbӦM6ɩ*iveO:Uk$\ucƌ!$$D5^c*Α#GxWYb_um,XСC ٙ3griժ'$$ZQٳ?!$$gbjjɓk%wڅL&cڴi6ܸq_tЁ#G3Æ #==ׯ7j{AAH 432T%$&[[?e6lYի0MLLdժU|駬[0^{5zM6Ö-[xw駟Tۜ:uÇL`` 1[nwykGQ^^?s?jܩ?!!իWW_K/o͒%K_8y$K,Q}.>>Î;駟8uZ~5kp1,YR+qZӼyD"ɓܹ~/:{gY`vBWWѣGs=ǼyسgzzzL:UsT*0al۶ŋ+ׯ3c vŖ-[d 8Bs & ݻwmذ8瓐T*lٲs6/gggڶm˹s&$$9s0dKx뭷Tkݺ5{`͚5w^WG{.\ŋ/1))9s ͟.nnd𼞐jTKݺ3:I wF.Z|w&/_F (xv/ܞ9ބ111G{gΜgϞ-ݻ7ZZ3<,Z#F0d~gZnΝ;ח]vSXXÇ>}:Ǐѣ3|pmۆ_~%}kkkzM.]eϞ=[^xqqAUҩS'֯_ϰaQuh?~<6lW&;;|K?\.W%8###111O?e;e˖1xzD"AKK;Ю]z׷mۖbUtҲe͚5L</h"6lPk o3l06l@IIԩ;iӆ޽{ӻwotttjO>tQT:FF4 =Ps2n8qܹEb :~~~@DD;vP%Q/_AAAdz"** 777BCCyWU=xsB[[R柉"00/Bm7x^z oooY~=oFx1cxyyq99p.\ &&F5̬{r9>>>X[[saN:B<ٻwZY{s׏͛7F3>3lll0a-sĥdbhAWWDFRV9ʪqqpwo#Hxuk05.Ģ"ˁ%% #{wF=H}|:IJR}pݝ-`6y{{s)Ā(**ԩSL>ӧO{Ǒ#GHMMU%W(1co6pAʘ4iP<١C:UѢ0d2j?3ϫm;z:Q򀏏jܪ%ޣGrrr?>=LNo4&wǵkrUJJJCΎ_wyׯs)N?>Ç͛|嗌5++{~VAGK Aаj R> /_LjtYҥ jkҽ{w aɒ%J=½_%jǫjp(,,$$$Sl~8&MB. SSSUNwww>s~z45[_jJV׭:VOڙӿZ Nu~thh(:tP%ѪfNRRӧOUVhkkchhHaaa{ҷoZAD*y?>;wT5m޼?^mYU^ mfId2ҵkW6nHϞ= mT “ԔݺչN )!ZGA3\b^%@BV߸T6+J_C$bޜ;wrΜ9àATϝ;`/_ɓ'9<...jsN4"(EܹCRRSfQSjj*浞wlllT%Q/Kgp5tttTӹsgIHH`ȑX[[pBAnݚz߸q===́UqW^a߿ϳxZsDSuxXRselق=;tAEΞ={2qD:Dvvvz}AAH h؞ ը0933@-U^|oSD7MVV 5 HqƽN穭[cwTZV^^JܹsKs?~}L֬YΝ;Yti̬sRСCjuuuk=fee}w^ '|°aٳ\6w\bbbd˖-j4\vQo5XUA'J hPVYrrT B90VD%RSSVhw/U4A`ffD"O>Q+T}U;~~~r@Ɲeii?:_Y5ke˘2e ׯ_ಪM!##m۪}_~ڒL.5$&&ǟ_VVVH$zj.o>MLL:u*7orUGҥKܺuKUִ.JּyuqqqaС\xџiCrq1qC*)aLt4'=<0oB-ƌzq1ҸTXH u̡$)W(xeT"܌ (-ྏ# B {Wm #F;wΎ-[ԙlr|CCCCBB|Ɲ9!Cg`ŊiwٳjŊ+C!ɺ@޽e8r??>F"--ɓ'cllǽRaff3\^dop:ֳ'3.\ 8+KmIwn.vu|J p}wI&&ruDCۛ۷%٪U <<|6l@TTϜ9CJ(9GGGbccС-[d-Z-[0uTGz-1b5bʕ=zԨ7fI IDATL6 ̙CjjjR-amnC\\h4j֬_קO"""S-Yv-K,m۶*Us1uT۷Q=]ƞ={ʕ+lܸ/r=W!B<yBB:V_ oZfĉt;vǬ^3f>Y>DeI&ѬY3ECs)V^͒%K(^ZΓ4fjժENٳ'Μ={~cCpp0;wJ*hZM<}UӗK99s&֔)S"E_Mhh(?S߶m[ҥ 'NÇ;'''}ts=~駌5ӧO3i$-jTwKbn} ӓ?czN]J CVwCիr~GN:ŲeEO?%88>}Э[76o̞={ʔ*UbŊpB\\\ptt$ "E9@mNTTtn߾͂ 8|0K.}Z"חKiiqvQiZ7qfT\:qᷪUv$SM:dҨ?+af&jUaάHpp0\,^~oSbEMƤI+Qaaalٲ(N||D[*c<$$[2zhwڵ9rJ+Ws5|7ؽ{7#Fo߾<~rѪU+J*@Rn]6mڄsjF/YgѢE 0dTºueJ j:/{RXz5.^^^̚5vvv1p@z-׿=T[[[4u֬\͛x>5׏. ;v@Rnx+W&Mb s#GdٲehтowyG_FV?_ҰaCʔ)уٳg3yd.\ТE޿xyij?>$T/^CeJy{X}[&Hde_w22h}0 wG6%JJl_ùE r]Yva0ghƍikk˄ r85ٳ={eKPv@ݺuұZjY5jD\\73bFaY[[߿ߡfk:zժUٽ{wrۆrai&4MY$t|}}4ömr-#BC({^B!o͛4y:T&U_4Yeʔ… xzzM)ij2|"""@8CL4RWSk>e 曼V9]&%q+# 3#{1xtݍt/ͫv||wTT#KёXTU&%q֖]ժN-=ǁ{ rO!%5zs#G5'x13tva4B!^n BB}.?g( J(RR%4ٟf͚v^Z{&334Z-׮]6Ϧ"wfi4;tK/鈴W~~LT x3kii߿\VH5̟R.<~l:uxr|vB!B<_օ]!x~+S sv.*vZ[xTBՕĉl2+ii<|ժd-O/ JŐeH#GHԷ2V_  >pF#'TZB!>yBBk/..YYfDRJaWA!˔L9w.ۺ ȃt>r2 EQևbS7otJ IÇ8M4oo>*S%B!x=ӭBqCLB! ^^f)|?j֤$W\F=+KpP!BQ $@(lٖBQ*}}i쌹qf }2Ϝ)誉<(fmͺUZluZFVKLϻw r&>x@f== ^B!B׋mu kyp*ik[5B!F&ՆF +ɕ*JenrK\)*?#oIpP!BQ$@(HQ6ܺE?ɬU*8;rBSښMؘI󻙌;;U=GSh 8]l8  !B! 
~KiiBwބv5m̘15ZMttteӧVq j5GaWlڴ~eˢV7o^2cƌAVVDԯ_s #FbŊdرO=ژ1c 3l`FEak˦@>0i8ZuyShQ֬Iu''pٳtLN~FF'}<] gI3gΤlٲXYY1x`VZELLsA2{lU#%J୷Zƌí[͞UυB!ċFBQyHEz'$$СC\^ ݆ 8vmڴ,_\]]IHH`׮]ϴhт3fȩS؆z/ ۯ_?~קާMGG`Ra?R ̤)ȿҶDOww޸A9sa ,mӻwo2x`V\k5aYDD| ZbڵDGGJzزeK+%%lBӶ P!xXvu-@r-e Javϱ&"'g[#)S_ri4G-x]6{~澛><==%̒7_Gd[ JMsr2VZ-}/_DvVVTCNDEVR=15/V8L, g(O8(DDDȈg`Ʋh"w_޾}{ZlIϞ=9y6fA1XjL§V%33gT#!#OBQ@Eaӭ[C dttޝ4j*|||(Z(mڴgx{{|1oߎZfΝiӆ"Eǚ5kUP#G2~xqss㣏>"diР3x`:iׯӱcGTK,ɶMRR͛7ŅEhܳV={61ŋL27Ψ.ϒ%K;r.{ qFpppiӦdפI ɉrѿ߿\fΜ|@%h۶-gΜAVl2vJѢE8p`Yn]lu 1%.]4#GdΝ=zb9EQqqqdɒ|hZ jZ9LGeՕI^^fe[ofɓ[)/*Oʖe}ժ8ZYevF$o:x,HpP`ΝnwwwiԨ3&vrʨjgö6`>}3g<2e=oMY/,2e },^$,Xmۖ#Fk.CQ222r1=zl/ OU0lY"Jʖj@f^.tD>5/^}5jeoo$Ԓ5/tcL>{E˭mگ_?Oʕ+IHH`ժUnݚ@HHHuֹ+<<+Wͺu#==={ě[.%J`Ν9ǴٳgiҤ .$..J*ѨQ#Ο?oMN5jGBBf2̷GEnHKK#>>ި ,K.888=!Cgf̘ƍ5jQܞr.^HPPseՄѦMݹs駟~z*UdqB!Bٷo"yEmɇ Zϕ(TJ*bpE(ϟ/:ubcccVUӕq)۶mST*2i$ׯ+jZYp~Y卶SEi׮ҤI (ڵ3**W^UEQF)S&7Ry)(VT*믿GQEv횢R<CQ[y򵍮~Z2ZW9<<<[~7ER)!C(Jzz~٧~T*7ߌ ?Pjrus 1*{iER)={XOK},]T_nJŊsܗ!o(ѣGJLCr]{ڋ''g!x۷OBQ@޾m4lƆ2͕+Wrac4_/\zUlΜ9TR4 F⤙oQW%Jdɒ\xQLRf/%CܹHFFrrr.X[[|ŋlٲߟ8nܸ}?~s>Q TX*U?7oF'qbb"-[Oܶml 5j(:Z2[s-S*V;v4J˵pBz9?-%.Νڵko^Ɔ-Zi|@bbb?cƌyⅦQ3t[H&^EY }s<}# s^%޽K<DPPP,\Ш YG޹s'!!!ԯ_;vY4hݻIMMSNFu %))<ѩW%Jn۶-iii>|9 DQڵk{)S4 'NLgѧO6mڤ… )[, bʔ)̞=;[ƈ< =xCR|y4 ͛7gFӬ !BR !DPZM\\d.Vҥs,lUXl ]vY{m K2-S[njӧATJܹsv\j 6Jxx8JEҐ>K&۵+YgZMų'۾MիWӴnݚ%Kh"٣0'z.iާO6nիWٹs'O5tYwww\dNȒRJQjZx.īƆ UR*ۃ<̤0H-^LV*_WB__ [؅WVr233mn֟wv%8( IAM -]jժ2eP^=ݫ_Ν;j4lؐ sNN<ɕ+WЧ2/E… 1 RZj+ͯ%Jh8{2Ν#_ gDEE￳w^}y7ÃEYž={̙3i׮Fۛ֯_/n>#::?-[w^5kBνBunhBeA#.]7ߴX.N+Wq|eVz64J믿I&{Rܺu EQ}}}YblݺO?wy'zӵk׌>kZn޼gƒ{7=ƍj,_\?޽{fg)Po< Ã9=M6Yܘ ^v ___rfرcʱL޽xy;8: &5} {l DϹDQn Ij=X&Mbm@.669gݻ49xTqKpPm󢇇(®]>|8ڵҥKÇgÆ X[[ϭ[oصk...Y16oޜ-Y1-sk旍 cݺuL<9={pM}V[[[l#o޼wjj*7ndFY(n߾my4j5Rn]N9dРA_,YhVexxxӼcmmͱcΞ=khZgΟ?ع-Sw>|HիoժUcܸq￧SN 8ǨLXX>tڕ-[G=wތ7;;;~\}\-Z7776l_ƞ={P۷ogf͚111omm9r$%K$00Zmtu g̘AXXnnn/_ҥKWMxR{ͤS.]HݗBES:MT϶@}M "?|}}Yx1W///=sܹC˖- rܿ'OhggGٱc'N̶쫯Ņ#GҿNK.|79(O6fڴioiӆ;m6vޭO/3sLWN߾}dȑx{{ݓŋSD ~mkMQ]]]oC߿mskm[[[u뵨_>tԉw}ZͦM?K|Or3g$((>s BpppB:Sʾ}^zaE!^Iݣ}*`kL2\pOO\ˆWK޽9y$;v(SbEV^MV :"xR33ipaNث^Q. Ν '%`V6 668LhCxz酅QlYi-11 J-pvvfҥ]WBrۿ Bm 2oʗ/O˖- JBʊuhFAZ-VO5X$cM2GZ-قK0!^t* C͚55k3gdѢEZٳټy3 (!B Z Tҋ֮]Ǐͮsrr*!xڕ,JrduZ`ի;:Y|WN<JcV3}9 ZҾ}{-7 6B!yB)Sc2%̀5i|ZuOZZ?^hԦ34Egd닍ZfŋSx®B!xE!-nt"HBB!/?i)ÜPѣw@%̭t:TA^Cr2_B!6  !sղ]}PMA+I?%BljS ^X:)KRN݌ :Ѕ@cggml M<ȭ􂯬B!B9w}\\ 6B!(H66Z"VV2J[&)TqBzIÇ9tQp U5doo$Q.JW!BB<'{p̀_:eʔaرϡ'%%1cp֭ggݻDFF兝e˖e\zP}vlMY||<111K.e˖LFصkQVQXYYB5ŋϤB=kQE cPs(3vII$ܽZ #ZMY;;vWNM''LÇ޷|B!"$@(I{Fd@UGªxB* ٳg̏BTT  s!!!2tP6mٲe u… V۷7d[nz㉎~&ǜ>}:KfΜ9I&M8|pvbѭ[7-[F@@3LuZk(M#l}[?Tkcذ5(Vŋu .Qg~\5('Q՜:uk?cƌ] lBӦMqvvɉ W2{L۷3f O.*W!xHP!w`4K%HMMsY___*V"/8q;v`ή]x1,*f<ٯJLL oM6ex{{g+[Nj׮M&M8tʕw!SR JQ_+gqГ'wydht/߼iup`S` Nٶq"ߟ>J-$kÐ&V5j@2e *Bcbbh޼9%Kdɒ%o n殑}6QQQ/MP!xIP!{|HShU%dttޝ4SS[VR%Kmޞ5kyf2*T//TRTRSN=ٶ*{MHHVLJEҦM._/s} 7t۾};7rʨjg ;6&gϞSN舟}]{^:gK(>vׯ+ٱ5(Z- hNNUFtt4.j">>YfhHOO`̙|у5k»cСSBZnMrrJ"::#G0|f͚@t?3 0w4JĉL0I&O?o>>C}`mm͔)SX~= bܸqjԨqGBBN;vYfW_xbxx8W\!::u1x`ŤI ɉrѿ߿Vzu֯_?}3ӧOӥK\\\prrcǎ-y` <ggg,vFaY… ՋH`` /6Fȑ#HIHHЗ1FVd1)))xyyЬY3j5^^^ߨjĉjdΣ[Fai޼9EŅMMlBqttd9W!(,s!xju=е@"E F/>V˗_~IϞ=^t L0AG5k֌g2fZlɍ75k&L`4oޜƏ/dgʕ5k_QLs(ܼyD<==Wdd;3fЗ_>o&::䄟AAAgÇӪU+bbbˊ+ƻȑ#quu%11_~6mYAX[[:B<+g!M^ٲeըQ̤ZjTP}QF +VKY'>>?OOO<==h4Ԯ]ⱼ(Y$w1*״iS<<|g}F^Xn]cȐ!ٓx,Y|@zO>!>>S_͙3g)KO>GZ {{{._СC)W/^dZ ݻwS^= gׯBr刉AV3vXڴiÁ,eپ}; 6swAPPÐaaaرC̴crs\+ݻ{ уA1rHƎKΝIII52+ԩ3f̠VZCݺu~򱱱TP5Z`'Ofʔ)TR+WaҰ'99 RV-bbbpttdǎ\r|a%JpaFJ?DDDпKO!(l B =Lł$@+WЫW/ϟ?ϕ+Wܹ.]0`E!99GQTtܙE-ZILLbŊ @V_scKFFW^ 9s0}tN:Ǐq>$!!A[XQFLXXAAADFFrM4iGu6>Y53t! Fbb"7o/?y>@Re͚5O_az=Ջ t GG#G}rcU+ggG? 
^h%WRɉtחO3`PښAAT'x_rs mxI=i$9s aŅ Y,YB\\gΜ֖-Zb V\*իWٲe%Ju$%%siժ`F'>>]ƢE>}:>գlٲ6m4h.hI5$447nPD ԩdu43`Ki&}{f͚xyyzju2R.`jN2e5LH``~y 8qقdÇk׮RfMCppkdHQ>}0l0OFAQ-Zd`޽4oޜ>@90** OOOnݪ>iٲNn? 3|pB!^bT!w `kKQk铑7nPti/ӳ_pqww'==ׯ[,_gЕ+W(Y2++ls̙c#T7džne˖1`ڵkǚ5kػw/Eq-Z-}APT;wKRZ5SLի޽{sw~9MꓗΝ;whٲ%޽]v0w|{ȑ#8psmSRRС۷7zGqMݟBPCggR{ɓlukzp _MSVVl 7C, sqaWj`mi&Y[*D~/^eҿiCe˖,Y*Tdu1툥RX"u`W7oFw[iӦ[[[SR%}V$2 +Yn|~wBBBpvvƆF( 'Nq_vvvddd>>>9%ߎzL8oooh4DGG}?t /!tڕǏzj kdJJJjժn:Ǝ˾}kӭ[7$?nhΜ9TR4 F2BB`s5e ZӇQS+Wq|4mڔ5j`3*I&hw^Kbccuִk.zW5ppp`ڵfׯ]ZOq{/cz=r /SNԩS@e^x{{̂ Xp!4i$}ݾ}6mł U۷D//D*e#y .0ig5`V)0g!ɉ?Wm aS SRՁv IDATlذWWW)U-Z@:b+VF1t;vjŋs|KNtm{0$xiZnMɒ%Yh{a\~ccccԙ/))n̙3ן;wlќL:(ILL$<|8(NȚ`F#Ammm:Az+WG 60WO>_sk&JŐ!C8v)))DDD0l0}g%J8I=??e.a!/w'X{X>h&PM̕Zfĉt;vԧ>}:E/?Ņ rJ֯_끬A1bT*~~~̛7ǏtR̽hӦ ~~~t҅'CƎQyEQm ƍ3x`Ms!55hʕ+V;w.ݺuٙ7xI&ѬY3ECs)V^͒%K̤e˖Sre߿ĉ֣P&MPTpĉܹ`F?Ofĉ0g}ٺuRti>CFٳg5kcã?L2i $PC%4EAuAbE  h""˂˲!HIhB'@3s@I$2I'Üww&a=nwȑ#yX~=s-pݢ R5k֌ǏGѼys¨UVYfOYd k&""jժ=-iG?vrM6nǮ]BZZoiii,_\"(-Zz:NwGRx,n'z(!JIa֭TZkL%۴ ؛=}Oڅ+M^رcƍl"VQsr'?MvEEGGc\͝8qkr_cgdd7yawhݻwgɒ%lٲHA԰axypn?SN&9NV\?ٳ,[ʗ1Y(&E޽;aaa 4~EQn]N;֭[ݻ7ݺu?gɅ~.e⧈HyH 55 %fȐ!l6MG}DPP=zp}|G7d4jԈO?={cx{{3sLN8Att4}(}3 %Kr[oŰa܎7 㢷 ƨQرcSL0 ~zţ>:z̜9_ot҅L4AtҢE ^}UO`` ݺu_kt:/lʕYz5/ nӤI6nVzۛGyL˖-]ԨQ}ұcGk/zm޷o_ O?͉'ܭ~Uܹ3aPR%5jw͓O>Y3EÒ-a|!GP2O_-o]j*}q{6kƟ/Taq?[$|j.K1mڴ}4hЀG*~u]D&&44)S7?.GW^3f àvL6`֭Cb4M\riX]vi{1-b ku|\ 0kժeΜ9۷ٿKNiٳͺu_sզb1'1+W4Mbo<ج^k6l1b ƒ}Xb٣GrʦaaoYl'0Wn֨QÜ2e9~xAcRSSAUT1iӦ8&4M3''QaQ߷i_|a6nvif||i_u;w٩S'3((ȬTپ}{sѢEnǬ[μM3((ȼ[;v\s1{1388 1Gmn=Jg-!!4'f9MufIP׊p G!Cxb~5kp-''~9s\Vi%)~%ec;v*X^^ll׎\;I9MݥΌFJ=YYtٰ=ƍy8,c<} :]vzRvFF1R~Yq)׾]WʫiӦop,~z)I iin-A@E#Rz͛Hrr2]wkWrPDNa5baa^j7YFlG<-nĮ3nƌpDDDJeb "n4:ׯa*'GҡreLDID֟9JY@bdx԰.$|hvP% KZ:u">>_СC3gKJܹs;w(A("R֦w Ra>>GDDDʟ6|جn-IJZj)GV>8M![IW D}˖Im ۾ap͚ Q\Xn )HDDDD%EDJ,丵TB5X 8P:@D6kG_R,3Mgǎڬ[U+=԰$LWm۰)I(rQ111ADDD|] "!-aЮreE#""" ^*׻93v;}p8 ;}%'n[Zdjn0ᬬ H8s>74M Ycq]Pڴw%%HIBB,"""RtJ-gϺ6 b=Tb4Oppɜڲ@lGz:lHӉ3OSNURBm6Ӻ[$9~܃щm?t:/zHEH ؞/4iRB4!""H+= V>ͤ={<Z;E Hqo3AlTq~>Iؤ$II|$5oСt҅oMRrezÇ]Ǥ1j(5j?p{UVaXXj={$ 6mڰqFRRR;\2QQQu;7==ѣG/;v,pLI:u*RvmGFFseذax{{cX\s_ӤI8pUQDDDd`RR -[ܾ=u|}K?,d@Rݒ*UbQx[*$a q& ȷ-Z`O)`&)))[08q\ԨQ8:uԩS̙3@yĈiW^JZ@//٪!6'C+5!瓄M&m̿RR<xHJ233/5jv~-7u^UT)8{qك76o\Æ cԩ,Xƌ3tnnTDDDD赈=#@[d@"4)oZ |3Y,f6mܒ n9T\`׽{w~G^{5/_(Vy ٳ'ݺuof٬ZE3zyy1eʔb]oȑ}:zgt9bi;uĪU:t(}e|嗴om2n8f͚u]Gllyh&"r K<{sucǎL20ի>ϹX[~/g$''Szu:t@>}.zp̞=~"##/830d̙)b|ڼ9-97q:čUp_͚ Xvis݃xqc}QfcEtLHݎO OLdY˖,\Q}z#Ϫa///∋s;vnr;~\7/^x^xbŞqbȐ! 2Ǽ⋼nmN"""":0t,""iTZ<_n@7⥁INN&,,x: TRS߰2ib!m[x ;%1RRܒ9&{:Nkť+4O"#(Z.HD _oׯWQ+q8;-9PGAv+רQ8KL$ 4Mw|oolE dit4 7=]<A EY"""R]{BC |s22cGvs bVdhB DFhy@D"""""RV)A("rXaD{MkrWw'B/enmuT*ULPոqv#G<EJ\xn4RPDDD<,jhl <￳̙RB{nZ&M58o1^탷m'Oz ")k [Ϟ%'O.h﹀DDDDkf(4韘2tlތ3_ӿ3vmEUM_j%m7 %EDDDD3 [ كP+EDD34 "tYY ٺգΦƍ9iև IDAT8ϷY!!԰ &MNnٴ ODDDDD}Tcr&l†4`EGg͟Δcmt4$#3[7mqEDDDD(e^HyȀD""""rqu|}yB3IIvneEV٪mRZQˋ[$m%4HJ?MʳӠAOɓSN9}4'OfϞ=nsb,6l 66֣EDDDr)A("r6 hK""Kj՘X^v'p,;A[,9佒06:Wת>>ukxyN`yJ #vyc=K.-9'O/ӧk׮R+,"""eIUxXvӤW8& p?קGPPx$ؿ^Sܱ@f͸jիzm~~U+|,` |x0U|t,''IXXZBBBСCIq~""""WS񰤳gɻc 4֬Be8DDa 2P@`ݬKM*ޓA׭ˠ556,ƒo vre&cƌ!((}B]7nW^СC9gѓ'OPV-hذ!ƍs{ кukU "|iCҥKϟO&Mc*pVHHHCѮ];֯_޽{[nbnV_oߟP}QL='))Ν;@LL k׮җNJDDԮ]~ܹs6lX,^\EDDD@ B˶;3@[C__DRqhC""rl,j+ز3v{^Ο7m"nww/ĖRגnAA,$_c<\3f0k,ƍǧ~͛7o;vХKl6}WfcF/B\\?8q"<s{},^z FNNpۙ6m/"K.%((q4Ç_íٳg]6 . 
..kײhѢB|x{{_ /0|^CpB|}}8p+ X,^͛/رc׿E\\dggӧOƏ5kXv-&LpU"""ryy:(d!g}|<Mő;1eZnoptڕ>4}Y̒%K5j{n6oz9s2|p^z%ofT%8FMXXӦM[oeݮGݞK^:t/3,Xv}ǣ>ʄ eݻoo hX4i&M*cƌDEEq~SfV+իWwHұref6l_wtkw23yߙY"r&l V?DFX&Pa߽e +Z><+“O>رc޽;5ӨQ#,YJ6jԈ۳qFZjźuxGҥ|s=y\w4MRRRXf 7dO<#Gn <<>'x֭[yъ ,[ǮR s&MYfcyNWNv_.VOϞ=] `Nرc%ED.lU X/H'!%Mw9ŸU(,KIq,@EGSK_I=0egJ;Lޛ7sL Q P.j;v}ڼիVrXQFpVZQJ֯_OVhݺ53fjҳgOW?M:rEiذa~yc SN\֭c֭s?KԽ ´iӆXCLLLS/'"""RTbTD2<_2ޒKG=VXA~p8vvxwؽ{uG^;[Ҩڴiòeˈ%!!X{AjCD3 y͚QۻCxhvdd\5>:|MKZ=3V-WQ'pcFRR_ʎ#GUqs{]uq&OfsIMMevmL8F8qjպh|5Vrk[fy ԩSY`۷',,3f~""""%A Bp0+@ʋ^ phCDDJ_uϛ7/P2NڲB.> ݸW!af& q 8cظJJG+|72`|I֭[C=\ѣGIHHaÆ 0Ǐ POz% f͚zZjs䐒rɾbGf۶mݻaÆ1vX.]xDDDD.F BːEB1 \ + pgt ♺u_;s{1͛qY}n`W\V`~d$7TBnAr+#ޛ7p\t:u7|j~p+kٽ{w)SX?M6L2LKfͨUE)j)M-攔~gڷoB&eվ}{{-Zik׮ȱ\-ueԩTVm۶nyDDDDJ""`v6^yY 8DDsbקC` w5i"?VNMHu8]{hnZW6,Z,&2 -IKj*woقƥtxyySO3}t{ bq+ ?yd~ EXr%fU;wW_'˗/g̘1ԨQ( _䣏>bĈ|,^aÆqYuS7waҥvmTTaÆPfMTڵkټys1zh222ݻ7}?0wuˋ)S뜑#GsϱdV\O?ӧڵ+M6 ..x8') ^D2ro5 "&L $$VZo:Ѿ}{ Z*'66MҹsgHTTN^{,֭˭zѩS'̙Þ={^C f1m4>#ѣk˜aNP'|¨QHLL$**e˖ٳgsq7W`߾}̝;aÆ믿 pmuޗ_~s=޽{]se`9?XO?M||<*U;d̙αl6Νb0yd&NxCy1{lnxgׯg}srDž^?W"ǎq{RRv+;8-Z\tݻvs}XoۖRȠCBvUGgϊHT߯DD*+A("r9 ;32ږFGvrNE _y!Wyмnܘ =o#ܻu[>ҟG*w~;sοFZQeԺ5ThlYE7l :uJ#dJn fFÆNmn"E5tPVꑉBCZTkhwۮ]u8y3rr\{Y@oTi* Q5x`61|v}:qqqoߞza߿Coի뜇zL? &`oSDD]LI={.x(V}5k^`1V+ߵlIu8m*l ٺ~~z4ke"""R)A("R v aq~#<:,,S!]PFF~~~jժ熅ߥf@`ZXT@D"ۓIIE:vtR~~W9*) mt47m؀|nټ >Us٢'ꑈBVڃ :aÆ4E*'gnv $''iii<ÄM7ݎX,9`+4a2d}Gn\\SRR;w& ֮][P\SN%""___j׮M~`ܹ߃7u{Сt҅ӤI8pUQDDsנan׿xb^y}]>s~":t( .חss, Ez͛/رc׿E\\dggӧOƏ5kXvۙ6m/"K.jժźi2bv=|hɓe U'8< SYg+""pA^^XAtС͚5 ڵkGDDK,/EOLLd…|W 0ݻS^=^yO:q||cǎ'3bzԩCƍ/\yNWNv_.xjj^󊏏gϞ96p@׿#""رcG)))YHGDDʷ7Q6XԢ5}|R&ߧcb1wӳZ5lofÆ$I >t6*p9( Yȕ Bb8edU/͵+VгgO|}}vBCCiڴ) >[oyfݻ-88뮻HϥG7k rrr\3M6,[Xkޜ֯wkw32xfnfikʓl:{yJ¯gھ}<[$\=?,"""ZA("R,vYVPa nݺRFDDl؛[mJ Uaj]fh믤:k+,j%L|PXt8N=L۷nUңZ5O',"""rmPPDNrs!ؗr!=:t!lIId;]q\}|xiSܲŭ|s95KzRGOH`{z:έ*{:t ESYDSPD텶k9N=cwⷴ4WrVFE[;BCt8=JwnZ:%zM)VhAup:qvݲjEJEDDDSPDRQj֬ 8q82Η]mib,_ CE"Rvj&9~Y&0Y3]kٸ1>u8g:ܿm+4MwVܪN}Wϳy>DHQY"J b'%ӷ՞_ߺ܆a) {.ڵPUnUoo>Sx+9TJ3F V:ŜC\ƪUTGcuy:Fh,":sڃ23 =}6[H0 (9(""RJNnOJ"pw0* ^w ɰBm.v_yQ#GxGR)99IDDDDZQm"Rq:j>_c||z*4kΘ, ^ȴjvdЖ-؝Bϓ+wKlض 4/~\%EDH%F'3s{jSX|T= EDDj[|%'?2iSzgY4^ڿbY@5i8Hia99՞}`t"""׎=ܿmyXk5J=6<ߠ ={̙RI<レ5VSvADDDD*RPDJb ,LFv?We*U`d"""׎lIIdw?5j䱸ԩC@ -[RkR\4ɳi LJxȕѨH:X6 tc̙a1Mc;\kț^XX+ xY,|ڼy>ؑ{=xLޱC\) Urg9w`Zx02kǢcx}i҄ȀĔWC??^/d L߷iix\d@ۏpѣ;|sATP)Te%dgx{b^t` /PDDػۦg7n("M veEA)*CTP>`CHMש$73Mvܙzbdܓ{9غ|ҊrIAޛs8jt] ?_gQٖQ2m"%4 bvG>.h%+ pdcfDDDbVݸԢ{jk^p<==$FIu5HX5 }MDDDDD+B"%wm?/;#B^p8 #1P3HM=1;""᫭x}frS$xxȲmy-\tUh>)W[-6bK%^&"Z"\X413,Fp?\ձ@DDƏ{{p0珫qDJ-9-qJFUi 0bQy]А}I%(@kWp?f8a`*RR(:l̎(pmA,(^}}$FxA.ˋ\\Ѐ@δ DDK CB>\epm̌(;\F&ă TVZ[ qF/AM̾?gĉ DDKW%P(v ` %eA/9""u뭭53m^L_ ڔmkJ <6Ӄ?ڗQchb{y0D #/3  QL""D3zI\Q]iJ7-2jf Bkŋqnne=E/DDDDDB"%9btwv!i,ŵVi8E]]B;-QF}67ےO#u|tϞ>X $"ZX|)"`4M ׻ ؘQr>/fxp&d*;X[WqAS<<؈1U#-RRp{uu N<3>n_RDDDDD B"%9HA>{icvDDD+8{HE W$ :_޸GXF&4-@J8'77w/a;"""""J(,-Ga3HLUp13""64Ѣ w7)&,Uu cc$FwԠ傄Ic |DDDDD˕\gDD(fga?U_YΉLӀM"%H숈cc7㖪*lzmimMIեQFEhhّŁTYnGx||twۛQ`hbv;hen#Ul]ecfDDDk8ƹ8 23BZKJ㱌5 F[]iQ8"%?!lokk6eDDDDD8X $"ZXVT |{("14iczA{Ni⢆j{Pl}1Fp&EM ṉ ;Ң8ɂ|$'R@>ac= """"Z:݃HLF(Yx%1˺&tڕ֚:&5KJb] kkQtBѽ~?n;5""""!|ĨM E`TPQx<[!i6fGDD}>|156(I]]V*;jhG0k::lʊA,Wuu[xF!*E~#4m=ctx<6fFDDB6$ @˅Wu"~iSbкL\^Tdq.G-B"%Z#F5m>nrax(6fGDDֆ=>JRm[NdCe1h].$̎mq-KbbhDcŒU̍2Ӵ HR*<:3#""JNO==ה㸴=r\}ۮ(x$ 25Յ.%""""!BMd9=㩅dۘQUU|r&4l/)+$WWuf-~iQxkz:V\Gbw) DDK^ ݃#  Ixľ(O44`LU-k9Eni:Ilzs30ɔrԸݑQ~?Q""""" (8d>o@!s2Tunw <#""J>FGiM n[rW7ᰜjJS͑&uuᕩ)"""""3,-Q$;=hA8<Eɉt}d  kKLǛrJeU[#57cJH12j\%""""`hb03 A"px.W#""J.a{BW8Ʋ09=;gmi^:jo+-W`5|_`IA{AnJ""tSW^5(ABJ6|hގ@ j7saHeY]5jTpiSItc-(ʾc[zzG:3O"%J˃Ah*|]EDQUu.W\r#""JON▞KKpeI IM+5:AeWZ'n*Xp^}=GѺ!y:I.&{pGbA}z t5Ztǃ8Zt^] y 
]]hI%8-3 @ 5JDDDD DDK$ b"cĨahtZ B6fGDD{icvDDDq5Z[8EWEEB<=1 ٕʼn"wTWGί? 퉈B"%JR4u| 2D0<ݕ6fGDD4 [N$=\YZjWZIME6͘PU;Ң8rn^> `Wٝњ`hB>/1 KQ bv{{,GQ) fװID:@ck(Ԅ M{Ѣƍx,:GFQҢ8rkUȨџi"""""Zq<3%"Z:qP0؎px SAXxo=-xhx8jwʰEm"~V[Go"˚2X:&QF?4Qah<æp]L2M>^ 0=086fGDDUnj-z׋+;65=X  ~cWZGޕ@li(H8鬛N.PPH4M4E숈Z[1QEؼ2Gƍ**.˖ ;IʭUUp7,-{ {Sƙ: HL pJ(==> D[^Ң27WVFÆmm6dDM3oADDDDIB"ep/2ft*A:aP\K\UGv@Smʌ(t54XN$n7()+-: qLJeԨxqjʮ(|GxQADDDDIB"e,A8 @+to)SmcfDDD;0UbhѸ$ ۼ$ija6uDS[w-:DDDDxJD t%@P}B3-px.W)|2#""JlNOnKqPnWZoNKEQ]vEqMiitaapχٝachpx3Y@:EɎ CAK(Ai74`oQ@{1ָsSE\ <_mmMuiP^ Y;ֆ8'"""":BE ْR$ AW(J\.DDDt(n.2Ӛ˲]i2:3k%'/UWc`р--DDDDDtX $"Z\N;@_A u!ꇢFbiB'n($""ŵV{%&8#;gn`ORtH.+*Bmpko/~Ң8\==2fG?4NL؝!chii8 4M>_R r #""JPiFh ;9;(;kj,}y#ۖ\0!Qbh%慣`psYpxP;\K\UGvWCmʌ(q70g&& _Q.-9ygf&ڰE&>>ږŏZKJ `)mvEDDDDtHX $"Z=zjk!qhB+upei)-:AǮ(% w`j DDː(~m vh8d9+3M׻_DDD|RRی׊QCl?w::0N18Z ؘi- -Eb}6f0@ fPa8p:Km̎(Li.mj J\.\U߫-I gC lɉmpTDnjB('-B"eEk}qvGy0؅pxck$eYwPpjf&͵+-Z#iVUYMc::lʊMHWH|~]?fDDDDDbhfug@A& i0 숈i1"‚(97b^4vEq$SQ}d@_(NX $"Z<JA0BB3,-Uu ̾䈈x~j2ZTp}EnҢ5& 3a"כ}I21)uKGk*@8<5^Tl$b&RL*bY{N8{9.- Ϸtj:6ڕApWM\4񙦦G DD˔(~.B4(*9״)Hnwm%ocR,k,4j=:]pys3To׊!b~?SXhORwN»231v1 JmUUPWFy- DD ?Fa跡PB3-qMd*[󜈈6A]E:ќ[uķmɇO˅!`s)nx"xKDt b:n:#P 㩃,yNDDD?ePQcʼn ĕ%%jؘ]iQygf&Ά"aY""""X $":nwEDLk8fo`Batx r$""9Yb>[XhOR0\\ b=/RK 7qy0{""""" DD^kM~䠪c=8 NgADDȾڊ]55E.сy% ߫R`ߏJLǃAľϗ>N9J+f\IPt} ]׻IDDDى <84d 8/7'fdؕ%㱜\ !ǚ9AUHe3_ni7!""""ZX $":4IK&o$y-cDu} ts ""JdaSX%ʖ(1I[00֞Ң8(x||sJ"""" DDS(:o|0֠@0T33M rODDzA\aY7NPBҢwKU%2 \ىWW۔Yq[U޵k>wJKK111aWDD˖N "C!!t! w:1F]#hdY:EɂQ*%""J_hijXoUSY:4[^\RPG:SuwEExLiYY8=+ ٵ*?}NLL`*4^("!!p.2%Uo(UmԴqī DDDy|l Xb⤌ {qmY~58}^ZpE[ٺվ(RU/4Յ** 8=/@DB0 DD25bo (EB\eO""d6 | "`E\tN\YRttDc?ߓ8>D8ˋ^nƧ Q:qbK n&CvADDsQ3.hZuBp8r,qUQ#oUKDD ~ۋ`0jˑdc@|"'h )$0L[[u!"""5!!*t:BT\4` $YuUHDDDsa|2K犸~/${iax[Z2%2VTa oa """"Z,"IP@3MV|"h$ZMARt>ŷ\p]p{u5dEr-jk+ThK PvC OOL؝#<&": 5Ojt}(E]R(J(>3{ϋəI+OZUegkhqW_]iQ ,ODDDDDkA;"DVvC 4*2e0 si:LS]b!""J&i IԬ,;3OC#]zE4Ҳn識|oۜMl3>g,_zQ6gDDDxxFBDt*]. 5x 5m@dy%cPL8Wd?DDD##%&\a!@R~XUm/lMinč,Pk鲌Nލg|Fzǰǧ|xΉko-{!ƚ{O7ytFp:׉Wl}Vs\.AbǛI\.!+""!at*2M~?ޜ" ip,qM@Zڛ!IQ2 yR% Wٔ'[^\RPqk-*Begzd&&paCMM (s t`s Zc{-=OYMmGA w 7Ͼlham/6#-' yy}uq5a[lȐ(~@HDt*ܱ s :io( sKF *[%;˰חp "`̕~tE]]TD[0`˂*k{ iy@FFd"[%g~L8 oCǫUJ5vyXvy'o;0 -ehC"w}&E"ꅢX: #Ng❸pttX"foTa]i:tʒ|#R70%%ue0F[0j́emT͆fS#mx7fO;.|~^t|Occ;=(; ys>Z0;TLԽ'{oL1M;ۉW*;k:2 2QwRN zǐ1om6_s?>>嬷@ bk'x|: #/_5·#0 =zǐ?q2chz'^Kho5“ARtIsȵA?ϞH`zd=8}p[""B"$( Uɔ` NfdxHK{ AZ%o!dX{q ?"7"Z%_..Ə{{14Q\ގmaGKkQ<2=| 5\ WMC<yP~_uB}N݆s0cAk:5Ebu 0]h(aW#8{ײO߸{݃4һ.=h [X $":lUnwTZyX'#B IJ5mU|ȯMDD^,݃ xoVmy$\W^O55Ebia욙))6fG+M7Mgj h !bv-K :rA`p4Єp ;L/ʎ.Cێ6t`eU1O//ѵ )rOt?W4،ѽ[?HqНƩ O 15'Iav"nyR ^|Ř_qLEc_?~\^J(ŶSEߌNk}-RtqڧO?Ǣ~8}Ǡ:mglwO(T^^dgv 7㉻RC/>~w|~* ʎ*_~xl>y''z!aq.4 (t:cnPTu4jAMUE>&""JFi>4qkU6Nt[SvzQxoG^exqڧNqgކnKDD@HDt*n`5U [ka*~nPPdԦ+O o)> l幨zSUYEsٽ-aq8C_b(atcUCzM7@ # @.""C!\ii0; uOlff"]gmmGs8u<6:1h:eAiPNq?G+9=Nr)k|ٽx]>2Fľe={{__]kl;e=A/H{}F-fNJcrhӣÍS.>z2c 6ubksbn_݉uh~9쐡73l A=OkQl,Y1MΙ =8ȋwW#l J8EN!_YS^=gj {ߘVU01zhA0,C$A{pnn.NF*gJozή@ffоݐGvuThEV@tZt-hxލtQssJr"MQލLO7|W} q ȋDDD혞}"Z;33 p9kl E  J:f/NHO]55`jc>5~zwxzn:(+TUӿx:~1_C/EuEp26o;q1\)s[l:iANidEFN83 IDATi>YKCW}'s"#=7"r\r ν܃(J"y8#mw>E_I,rbW}|=m K_u}KR :|亏>~7]vB!i;""Z;wr(J8"%%fqP윞^EP:W8!za05eI> ŵlutՋֆl]1+4M윙 TRL2 @Ӊ qNn.6yM;s bc; .^n6d[ \).7=$Qchȅ4^o(*Si0x9BhK--cQA(śc@S ܼ<K"{}>zh #. {-87/M<榸g&v?;2u>lj^Uv=+׶[߹uUKDDD+B"덌!U G uCQ6YNY|ع%["(r:ی(nlKIGss񻡡qmm8;'(ڙ^jWdp~8]3c(ù877Ȁ̟7%Lq @zn:*9߭ C703L8=Gw*$""! 
p"n4Q_4B(X…` >s%icHI9b9% Uŵ GQbPCڲ2nh( ށ|оLo(+ َL`"xv6E_(+\{Ʊ8c|DDDX $"Z!Ǧ%"338 BP4@ҢrqT~7vuaF5XQ//SxpqA~o"局u0x~j2nqgdp!8%BQrtr""" qkOO;rqA~]i3 ø߸3]A<9>=:%윜%M """"ZOX $"Z!Gx1M&h`@K\ӦqDѱ"y%owtD )_Y IboDNJN\^TD'Mttv53ALe @hJ \Ty<6fLDDDDX $"Z!G^A 6@ u?$d-֗>~10`Y{PpLj* D%%c7v\^\Q>g4 Ɲ13)) rsqTJ (@DDDDtP, MZ_8QUEip,qUYE]K׾J(mp8b\i錻.,\W혞}}p~È%fs0U\!?Ѳv'@D,A^_u.B47B]7 ?{T=ps]P *CEPA'ƅ{{2p~SR]MRZ̒]2/ta$B!.[sv*̋qM֩Ȥ<kk/q>++lmZ2d!MPU'bcbZ=X3큋/2;6F#xڴ9UUewA_"+ZUu`th o/TU=uywBuU^xa!%I !D3X/9Xceej9Zxe9{aeq !m/JQEv0y;)JX[IIөyl;r++Yɧii()`89P@qSY^k1)9npAezT;IP!*I !D3jf=L`(Zɸ{R!h[tD DHB{W¸i85'𳵵\pUUّerS i4wvHhpjDvnpA;b^OFfBJBь;8 BgzӨ<]ɲ簶6EB!ڂĔi~~[&(!Z=HryըNUy=1O:whlM]QL>MM%j<&/_j;;n;!DN:U[B!$C!VK|Qy粲$@EQLF6B!K^ ƭ5^ mh oSi"=rKhUeKn.oN% ZPߟC2N x?-<4s粥{xB,@o0w[yK̝η՜[9~^+/65!vlXjheƟ ˹q?ܹ rBܿE_ہto|߾,ص?\ 殑>RX^΋[p~MB,`5Թi%hE3w.{\lK{} !-ABfɉRu5"ps@/,+& (5vvABq0%ZsAAAX&(!Zdoo%$p؝ !,TW * :UplQL0-<5dfnK̮{拏gnը(]]w^6 ?&$Alxd&$|<sW\#qw߾Xbbw\qpccl,֯goZ_"mv1OFjEni{!,=t048B^OƮX(UEN3|8zb6poìl.9WBX%.ͬ+Ϟ7QԚ"ʼz VƧcB!.fH]Z iDU^ ǍczӴ4 "IT?rs,-5٨n܂֊$oo'EQoP4>47mZ. h)FTvȾsZ7'أGﳨiW3<4wiJw//3c""[fwc(Ryki{!ylkбCSRwI7um=z0j2mƫ#F!! B!hfUBռ1k%KKQ bzwPzB!ĥD괠SCCq#q Eg{{bjUTSRx3<붴rYZX5Ղy( >>tT֑Lܸii yvPTڻ7 T%uZN֖gewj*y]@UŅڵ9w~~~,7}?abbHɉz`V[v0CCq3-ÞTڸ}ii80{w5 :twdeˋ/ƎuWB^~ȦiY]5VRYɋ[ñcdחF`6**xafV8ANi)ann<ɽ;k0c b^\OΜ[y5* 9ÇalKLM8V!C޼j~M7&B, )?߸77gJa|.|sp$IK3I8( [*,!ZFQ'NTޝ 7UUٞϻv ՂPX(L~tuj4w&`r4+ZT*br['OިQ;;3`g֖'7n䎟~berʘ3|8~NN֭WT ƻ%!q;eqҥ\'VXӛ6SVƊ Lxb`\\xsF-q>ZUU#rqaqt4-[Ʃ&rbsUعȅ y+W|:3=R iL@ ] ݺ1oH*zfd[=GgcbkU_S-x'3M[Q{{\]ߙu deQ33vǎMQ`7/ǓT^nXT iQj*MM`0@URH@jA( 7oի'GQobzIDKJLL~.NLNf?2o_9vueR7 QTeuZG[kt'NǫN̢".)!>7^XhuuֹRcXkpcÝw֭(T%=?='F5U{-I쬬M6C&<w;;V8;-kеXE>}:ABO={X=y2ݽꛌ}ZBqiB>NNPuѼtNۛ̑B!.y[rsٔk2:;3B\ 4^ Sc$fLmwnr2geI6y.8^^X7%h~]==i$*zƍLY}WoٚKJ@)j8SZj2DfuF^ONi)~-k-.&V[˳Yaz$<=$![ZdPNg0kzZ֨\M6a!!,fojjlZLÑ[ZIF1kczkMw۷73wF]wG:m^qq9;uBќE0$$d2ZP؊†N:peH[Ǚs縫O n`z:Vd<6xpϪ'x骫Bh ¼< !DiV_pڋH{Q!u99*JQݝ+Ek5!kCBi ȃ7L۬,NJti)5k|MDoo oUZrgf2?ܣad=\] Z#S{MPggܱʾ:,v-[*$Թs\oxh(RSMƞ}|9LZa!lĤ=Ld4]\xsll_?ձ##¸zR:䖖'5/GGNƭ?Eln.酅T=\V8A__lzIˆt$ˋgg8:v%Ņ<=rz0sbnݩ\u6ĉyyt#F$j8ذw\<8p .l罝;ѿ?v5.S׮,:xlaXH?<ɉgMё߿Rd2]<;o_. 4۶̢"n\@nՋ]))ǂ\\swْvWU2hFW^XB.@Urll/BXJyI9䂡Nqaee664~~!=QU4ש*o[*,!.*wJBƱ< s|))tƪ?O5FL?? 
"NĽX:9k۶^TD{{DDVu+\=Ol؀-OFF-)d~Wh5K Z[75[u.$s89y4ܸ['ɻFֲ[nhV=X{8֙WOīQQ߱B, 㯓'M۶r:dDGGuk* /lTW{t~xrFΕ0gp& 7C IDATs_''k#F4) @o0S\55ը{=jlԐ!}&ع3g{ֻ7]q[g\A8}mۆ(Lݛ:v䡵ko]u~cR?k֜T&"X9ÇruYQ6İ!&+odZ>R-U)~oX5WWW lly?Z:!hSZΎ%%ps F'BX/tɘ0ÃPBuw|QoYUUّ#Gg_Q^+9X ΝI乐Ic9.O`36-aȑ,ص pHf&ލ-:4v۲;vUWZr[umtLY C#Z^ 3>9QQLSB!ڭbT!ژ2:;8J{Q!ΞXIɘ0Ã^NNJF !!̊1T답F`lJJbQVTU/hU]\x68=<<hml1FE3; yFc56۰\;^Kٯ3iO>i}7hҲW!ho$A(r8;BQZN!bWU^7^U#ՃBu&&]Y T%RYNnr2) ډA=0ÃgtumE bm6!vd>a!-NBa!NjKѽ[[K#BXďYY.-5nT aV <K~5s֌n%UdPZ1Z!B!H1w\|ܘbVٗhSY66ޖG!hu:UT qJBzql_ h` #B!$A(ڬK4dLQu}s]+28\].CQ4^A!hgVde[Vf2fEGG%Dw~>/Ǜ$ Q+B!$A( Eƕ@Bt9\1 G&D} *NG!hu:Q0zA!ٔR~vx8V M!Ba|ZBLp:SߙJ+ςQ 0ś,k)!nif&&A+6//K&2a.?p?ssd:P@UQv&B!BIh S'Nl?@q^Wٹr'G7%'5UU=ܳgk_$D'PS->>\y畄kp?qXr:=ܽnd޹KU!pΪ>̎ga׮I!h5$$HFx9>_ϝ3V $TUQlS6'1B!MhWzcryUX6{GM32Ɍ$fw SߝjLGW3.[RPB|t<}LJGcdwJr™ aB!ZŒ RMƬ[==&ՃBp);&1h(UۼJx%!C2Zxn8B!B&-FE!#2yO|ܸϻޮUAWoW&8 /NÉqeژ W&2)DN>Υg[[~~rܢm3EE)J #BusUTKb&;F{9jnM AX޽;<'z`Qۑ a)K̝Aj(5!ܹĵks֭LOtl㚥Kq?7dE<~a=GuR2lJ|y^B!D{# Bn(brwley%9i9]覣yl =GȞ0RLXcRw?w;ú93Őۆ~EI˞^FI~ ֶLyc H*+c/KBt2**LƬtAq))at߻U٨ 隴50zG׊O|ׂd4Ϗ]KCiW8a0X| -&z$0iJmf̝w^YFEgBQB\yǕuz$uVY*f-ΥTݕ( ]-s.q,|@x ))(!M?8qI0$'3LJ>NNG!hz=s{=(.UqҌ 4P/1fx8C\]ϻYLI\zvq[Klke^tY5k1UdIךBZ䷓h @hP%'N靧0v@6xi:ǬFP}tGw3^w<ɾ\1B!ڡȮ4R*QKAbYYBb5jٙl<<țII&s, vHf&On޴4a<;t(Sz`|?'fXHw* wo4§Wxd$E^7rqW&67-[(*NrX<xOߚȯ&G ww'!/?dm ~9u / f s|QE]O'Nbk˳C;5B |w뭬<~11ڽ;Ə;Xq(qCDM4"sf 3֬+2?`p>=\x:W 㑑|kٻEU޽ys-|{fZPs7HvI :t!Cwo:K_87fdÃ@HQ/l|k?o͍uwAÐ ~crrln 1mXt it粀VL5x~fvhn.:**q#őVXH3H25ԩ_3CM!4IvI5++*kp9 22QUD D2 N*B/]Ս?$7=-wcmgKoz SRx#!=(y-1^2HJtTU 5gt&S(4cLy zs8Y\,-|۩q}G/oo0kYY䗗O)(ww'wQ|CQbxh(o11KKP'1PZ=osg5VV\ש?8'OU\iӌppoHf&||},;u#'Y}$=9[\̲Çhh* $ &MK3SUd^X܀t'ՕY_ΈP߷!.7/hZB!.6 mW}uH )CQ4<(I? Ne]|:tսDbѠhLf3յB!'~i;||AeWTvr2Ri0Ի)Lllx5,i>>X7jom^^FWjTUU>IMe^/EۧQ6y'o]WS32,OF&Ã2[:{xر8;cPU.ZDYn:-mZ\$mz՝kP(t'IRc}eKy88pIj7>7/gTǎ,|Ig?RnZm%%,fItt5)wv&1?Ǔ <{;wjTs粀\x&$[sdnoqKn,=t[wgC &޳53/faTU{9ގLZF;uw4ڻ7Ooڄ(;;xۛ$TVroq[l㏸zu`ʊ<7sҥ<8h>9>{bdPWTBW/]ʟwŕM]'Gs5TT0wolZ~>q#_M]Ć |s'ݽXx E&s , Oomm7䄫>=&sUէFE1gO\.0g5ktp@mliV}N_*}%,]ʣ_5ۓ6ҥcB!WK32Ȭ4nEfPUVdeTl,YRUD` O7{]NoGGK)/gٳL7䄗mFzQ[}X>aq+Vpy` n^~jsOԫ27ƍd1cG7ۿ0xb\lmy22mIIM~٘1۾GpMx8_tÖ,1Yѣyt:^j^ї_* yzĉ Qw361矱jo9 sm0wo֮ؑmB!. ߿5(t8BѠͧ<Յ KzCJy$0`]! 
&M"<ܐ$Ν;UBB222uVEDDd2iժUM9c#3x(EѠA`z?kL@@]Z.]RΝkm9(5+00^^^ޤg<-AIIIJJJҹs甝d(&& "3x B@cǎ?jt֭wvemw^"IY۾{ڴk׮Pt Iݻu;D lGxȟoZ$m޼YFkN:ŋ*((Д)SSllfϞk׮SNZzݻFG;vԴiӴ` uY+VO%.]&ILL m*??_ׯ_׈#$I$٬#G:YdVxȟɟ$ ޶:t.\_]?L&էO&sƍz뭷[ni*((feeBIIIz4|8p@...M>&[TUU<l[N Q^^"""$I RJJ876$U(,,ޱ1*--d҅ ]ܾ}[AAA={p8?-+hَ97hN7|ƍ2ͺqㆦOnC `ϴ{5`tQsҶm۔.Ţj߾}޽{xȟB3-::ZݻͺH569P <Ӳuz?kժSpl΁!+++S``;ȟ8:gh(*; E F; -=w@Ku Z. '!-3P BP BP BP BP BP BP BP BP BP BP BP BP BP BP BP BP BP BP BP BP BP BP $}j׭uIENDB`neo-0.7.2/doc/source/index.rst0000600013464101346420000000621413507452453014407 0ustar yohyoh.. module:: neo .. image:: images/neologo.png :width: 600 px Neo is a Python package for working with electrophysiology data in Python, together with support for reading a wide range of neurophysiology file formats, including Spike2, NeuroExplorer, AlphaOmega, Axon, Blackrock, Plexon, Tdt, Igor Pro, and support for writing to a subset of these formats plus non-proprietary formats including Kwik and HDF5. The goal of Neo is to improve interoperability between Python tools for analyzing, visualizing and generating electrophysiology data, by providing a common, shared object model. In order to be as lightweight a dependency as possible, Neo is deliberately limited to represention of data, with no functions for data analysis or visualization. Neo is used by a number of other software tools, including SpykeViewer_ (data analysis and visualization), Elephant_ (data analysis), the G-node_ suite (databasing), PyNN_ (simulations), tridesclous_ (spike sorting) and ephyviewer_ (data visualization). OpenElectrophy_ (data analysis and visualization) used an older version of Neo. Neo implements a hierarchical data model well adapted to intracellular and extracellular electrophysiology and EEG data with support for multi-electrodes (for example tetrodes). Neo's data objects build on the quantities_ package, which in turn builds on NumPy by adding support for physical dimensions. Thus Neo objects behave just like normal NumPy arrays, but with additional metadata, checks for dimensional consistency and automatic unit conversion. A project with similar aims but for neuroimaging file formats is `NiBabel`_. Documentation ------------- .. toctree:: :maxdepth: 1 install core usecases io rawio examples api_reference whatisnew developers_guide io_developers_guide authors License ------- Neo is free software, distributed under a 3-clause Revised BSD licence (BSD-3-Clause). Support ------- If you have problems installing the software or questions about usage, documentation or anything else related to Neo, you can post to the `NeuralEnsemble mailing list`_. If you find a bug, please create a ticket in our `issue tracker`_. Contributing ------------ Any feedback is gladly received and highly appreciated! Neo is a community project, and all contributions are welcomed - see the :doc:`developers_guide` for more information. `Source code `_ is on GitHub. Citation -------- .. include:: ../../CITATION.txt .. _OpenElectrophy: https://github.com/OpenElectrophy/OpenElectrophy .. _Elephant: http://neuralensemble.org/elephant .. _G-node: http://www.g-node.org/ .. _Neuroshare: http://neuroshare.org/ .. _SpykeViewer: https://spyke-viewer.readthedocs.org/en/latest/ .. _NiBabel: http://nipy.sourceforge.net/nibabel/ .. _PyNN: http://neuralensemble.org/PyNN .. _quantities: http://pypi.python.org/pypi/quantities .. _`NeuralEnsemble mailing list`: http://groups.google.com/group/neuralensemble .. _`issue tracker`: https://github.com/NeuralEnsemble/python-neo/issues .. 
.. _ephyviewer: https://github.com/NeuralEnsemble/ephyviewer

neo-0.7.2/doc/source/install.rst

************
Installation
************

Neo is a pure Python package, so it should be easy to get it running on any system.

Dependencies
============

* Python_ >= 2.7
* numpy_ >= 1.7.1
* quantities_ >= 0.12.1

For Debian/Ubuntu, you can install these using::

    $ apt-get install python-numpy python-pip
    $ pip install quantities

You may need to run these as root. For other operating systems, you can download installers from
the links above, or use a scientific Python distribution such as Anaconda_.

Certain IO modules have additional dependencies. If these are not satisfied, Neo will still
install, but the IO module that uses them will fail on loading:

* scipy >= 0.12.0 for NeoMatlabIO
* h5py >= 2.5 for Hdf5IO, KwikIO
* klusta for KwikIO
* igor >= 0.2 for IgorIO
* nixio >= 1.5 for NixIO
* stfio for StimfitIO


Installing from the Python Package Index
=========================================

.. warning:: alpha and beta releases cannot be installed from PyPI.

If you have pip_ installed::

    $ pip install neo

This will automatically download and install the latest release (again, you may need to have
administrator privileges on the machine you are installing on).

To download and install manually, download: |neo_github_url|

Then:

.. parsed-literal::

    $ unzip neo-|release|.zip
    $ cd neo-|release|
    $ python setup.py install

or::

    $ python3 setup.py install

depending on which version of Python you are using.


Installing from source
======================

To install the latest version of Neo from the Git repository::

    $ git clone git://github.com/NeuralEnsemble/python-neo.git
    $ cd python-neo
    $ python setup.py install


.. _`Python`: http://python.org/
.. _`numpy`: http://numpy.scipy.org/
.. _`quantities`: http://pypi.python.org/pypi/quantities
.. _`pip`: http://pypi.python.org/pypi/pip
.. _`setuptools`: http://pypi.python.org/pypi/setuptools
.. _Anaconda: https://www.continuum.io/downloads

neo-0.7.2/doc/source/io.rst

******
Neo IO
******

.. currentmodule:: neo

Preamble
========

The Neo :mod:`io` module aims to provide an exhaustive way of loading and saving several widely
used data formats in electrophysiology. The more these heterogeneous formats are supported, the
easier it will be to manipulate them as Neo objects in a similar way. Therefore the IO set of
classes proposes a simple and flexible IO API that fits many format specifications. It is not
only file-oriented; it can also read/write objects from a database.

At the moment, there are 3 families of IO modules:

    1. for reading closed manufacturers' formats (Spike2, Plexon, AlphaOmega, BlackRock, Axon, ...)
    2. for reading(/writing) formats from open source tools (KlustaKwik, Elan, WinEdr, WinWcp, ...)
    3. for reading/writing Neo structure in neutral formats (HDF5, .mat, ...) but with Neo
       structure inside (NeoHDF5, NeoMatlab, ...)

Combining **1** for reading and **3** for writing is a good example of use: converting your
datasets to a more standard format when you want to share/collaborate.
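As a minimal sketch of such a conversion (the file names here are placeholders, and any other
pair of readable and writable IO classes can be combined in the same way)::

    >>> from neo.io import Spike2IO, NeoMatlabIO
    >>>
    >>> # read the whole content of a proprietary file: a list of Block objects
    >>> reader = Spike2IO(filename='myfile.smr')
    >>> blocks = reader.read()
    >>>
    >>> # write the first Block to a neutral format that collaborators can reload
    >>> writer = NeoMatlabIO(filename='myfile.mat')
    >>> writer.write_block(blocks[0])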
Introduction
============

There is an intrinsic structure in the different Neo objects, which can be seen as a hierarchy
with cross-links. See :doc:`core`. The highest level object is the :class:`Block` object, which
is the high-level container able to encapsulate all the others.

A :class:`Block` therefore has a list of :class:`Segment` objects, which can, in some file
formats, be accessed individually. Depending on the file format, i.e. whether or not it is
streamable, the whole :class:`Block` may need to be loaded, but sometimes particular
:class:`Segment` objects can be accessed individually.

Within a :class:`Segment`, the same hierarchical organisation applies. A :class:`Segment` embeds
several objects, such as :class:`SpikeTrain`, :class:`AnalogSignal`,
:class:`IrregularlySampledSignal`, :class:`Epoch`, :class:`Event` (basically, all the different
Neo objects).

Depending on the file format, these objects can sometimes be loaded separately, without the need
to load the whole file. If possible, a file IO therefore provides distinct methods allowing you
to load only the particular objects that may be present in the file.

The basic idea of each IO file format is to have, as much as possible, read/write methods for the
individual encapsulated objects, and otherwise to provide a read/write method that returns the
object at the highest level of the hierarchy (by default, a :class:`Block` or a :class:`Segment`).

The :mod:`neo.io` API is a balance between full flexibility for the user (all :meth:`read_XXX`
methods are enabled) and simple, clean and understandable code for the developer (few
:meth:`read_XXX` methods are enabled). This means that not all IOs offer the full flexibility
for partial reading of data files.


One format = one class
======================

The basic syntax is as follows. If you want to load a file format that is implemented in a
generic :class:`MyFormatIO` class::

    >>> from neo.io import MyFormatIO
    >>> reader = MyFormatIO(filename="myfile.dat")

You can replace :class:`MyFormatIO` by any implemented class, see :ref:`list_of_io`.


Modes
=====

An IO can be based on a single file, a directory containing files, or a database.
This is described in the :attr:`mode` attribute of the IO class::

    >>> from neo.io import MyFormatIO
    >>> MyFormatIO.mode
    'file'

For *file* mode the *filename* keyword argument is necessary.
For *directory* mode the *dirname* keyword argument is necessary.

For example::

    >>> reader = io.PlexonIO(filename='File_plexon_1.plx')
    >>> reader = io.TdtIO(dirname='aep_05')


Supported objects/readable objects
==================================

To know what types of object are supported by a given IO interface::

    >>> MyFormatIO.supported_objects
    [Segment, AnalogSignal, SpikeTrain, Event, Spike]

"Supported" does not mean that the objects can be read directly. For instance, many formats
support :class:`AnalogSignal` but do not allow it to be loaded directly; rather, to access the
:class:`AnalogSignal` objects, you must read a :class:`Segment`::

    >>> seg = reader.read_segment()
    >>> print(seg.analogsignals)
    >>> print(seg.analogsignals[0])

To get a list of directly readable objects::

    >>> MyFormatIO.readable_objects
    [Segment]

The first element of the previous list is the highest level for reading the file. This means that
the IO has a :meth:`read_segment` method::

    >>> seg = reader.read_segment()
    >>> type(seg)
    neo.core.Segment

All IOs have a :meth:`read()` method that returns a list of :class:`Block` objects (representing
the whole content of the file)::

    >>> bl = reader.read()
    >>> print(bl[0].segments[0])
    neo.core.Segment


Lazy option (deprecated)
========================

In some cases you may not want to load everything into memory because it could be too big. For
this scenario, some IOs implement ``lazy=True/False``.
With ``lazy=True`` all arrays will have a size of zero, but all the metadata will be loaded. The
*lazy_shape* attribute is added to all array-like objects (AnalogSignal,
IrregularlySampledSignal, SpikeTrain, Epoch, Event). In this case, *lazy_shape* is a tuple that
has the same value as *shape* with ``lazy=False``. To know whether a class supports lazy mode,
use ``ClassIO.support_lazy``.

By default (if not specified), ``lazy=False``, i.e. all data is loaded.

The lazy option will be removed in future Neo versions. Similar functionality will be implemented
using proxy objects.

Example of lazy loading::

    >>> seg = reader.read_segment(lazy=False)
    >>> print(seg.analogsignals[0].shape)       # this is (N, M)
    >>> seg = reader.read_segment(lazy=True)
    >>> print(seg.analogsignals[0].shape)       # this is 0, the AnalogSignal is empty
    >>> print(seg.analogsignals[0].lazy_shape)  # this is (N, M)


.. _neo_io_API:

Details of API
==============

The :mod:`neo.io` API is designed to be simple and intuitive:

- each file format has an IO class (for example, for Spike2 files you have a :class:`Spike2IO`
  class).
- each IO class inherits from the :class:`BaseIO` class.
- each IO class can read or write directly one or several Neo objects (for example
  :class:`Segment`, :class:`Block`, ...): see the :attr:`readable_objects` and
  :attr:`writable_objects` attributes of the IO class.
- each IO class supports part of the :mod:`neo.core` hierarchy, though not necessarily all of it
  (see :attr:`supported_objects`).
- each IO class has a :meth:`read()` method that returns a list of :class:`Block` objects. If the
  IO only supports :class:`Segment` reading, the list will contain one block with all segments
  from the file.
- each IO class that supports writing has a :meth:`write()` method that takes as a parameter a
  list of blocks, a single block or a single segment, depending on the IO's
  :attr:`writable_objects`.
- each IO is able to do a *lazy* load: all metadata (e.g. :attr:`sampling_rate`) are read, but not
  the actual numerical data. The *lazy_shape* attribute is added to provide information on the
  real size.
- each IO is able to save and load all required attributes (metadata) of the objects it supports.
- each IO can freely add user-defined or manufacturer-defined metadata to the
  :attr:`annotations` attribute of an object.


If you want to develop your own IO
==================================

See :doc:`io_developers_guide` for information on how to implement a new IO.


.. _list_of_io:

List of implemented formats
===========================

.. automodule:: neo.io


Logging
=======

:mod:`neo` uses the standard Python :mod:`logging` module for logging.
All :mod:`neo.io` classes have logging set up by default, although not all classes produce log
messages. The logger name is the same as the fully qualified class name, e.g.
:class:`neo.io.hdf5io.NeoHdf5IO`.
By default, only log messages that are critically important for users are displayed, so users
should not disable log messages unless they are sure they know what they are doing. However, if
you wish to disable the messages, you can do so::

    >>> import logging
    >>>
    >>> logger = logging.getLogger('neo')
    >>> logger.setLevel(100)

Some IO classes provide additional information that might be interesting to advanced users.
To enable these messages, do the following::

    >>> import logging
    >>>
    >>> logger = logging.getLogger('neo')
    >>> logger.setLevel(logging.INFO)

It is also possible to log to a file in addition to the terminal::

    >>> import logging
    >>>
    >>> logger = logging.getLogger('neo')
    >>> handler = logging.FileHandler('filename.log')
    >>> logger.addHandler(handler)

To only log to the terminal::

    >>> import logging
    >>> from neo import logging_handler
    >>>
    >>> logger = logging.getLogger('neo')
    >>> handler = logging.FileHandler('filename.log')
    >>> logger.addHandler(handler)
    >>>
    >>> logging_handler.setLevel(100)

This can also be done for individual IO classes::

    >>> import logging
    >>>
    >>> logger = logging.getLogger('neo.io.hdf5io.NeoHdf5IO')
    >>> handler = logging.FileHandler('filename.log')
    >>> logger.addHandler(handler)

Individual IO classes can have their loggers disabled as well::

    >>> import logging
    >>>
    >>> logger = logging.getLogger('neo.io.hdf5io.NeoHdf5IO')
    >>> logger.setLevel(100)

And more detailed logging messages can be enabled for individual IO classes::

    >>> import logging
    >>>
    >>> logger = logging.getLogger('neo.io.hdf5io.NeoHdf5IO')
    >>> logger.setLevel(logging.INFO)

The default handler, which is used to print logs to the command line, is stored in
:attr:`neo.logging_handler`. This example changes how the log text is displayed::

    >>> import logging
    >>> from neo import logging_handler
    >>>
    >>> formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    >>> logging_handler.setFormatter(formatter)

For more complex logging, please see the documentation for the logging_ module.

.. note:: If you wish to implement more advanced logging as described in the documentation for
          the logging_ module or elsewhere on the internet, please do so before calling any
          :mod:`neo` functions or initializing any :mod:`neo` classes. This is because the
          default handler is created when :mod:`neo` is imported, but it is not attached to the
          :mod:`neo` logger until a class that uses logging is initialized or a function that
          uses logging is called. Further, the handler is only attached if there are no handlers
          already attached to the root logger or the :mod:`neo` logger, so adding your own logger
          will override the default one. Additional functions and/or classes may get logging
          during bugfix releases, so code relying on particular modules not having logging may
          break at any time without warning.

.. _`logging`: http://docs.python.org/library/logging.html

neo-0.7.2/doc/source/io_developers_guide.rst

.. _io_dev_guide:

********************
IO developers' guide
********************

.. _io_guiline:

Guidelines for IO implementation
================================

There are two ways to add a new IO module:

* by directly adding a new IO class in a module within :mod:`neo.io`: the reader/writer will deal
  directly with Neo objects
* by adding a RawIO class in a module within :mod:`neo.rawio`: the reader should work with raw
  buffers from the file and provide some internal headers for the scale/units/name/...
  You can then generate an IO module simply by inheriting from your RawIO class and from
  :class:`neo.io.BaseFromRaw`

For read-only classes, we encourage you to write a :class:`RawIO` class, because it allows slice
reading, and is generally much quicker and easier (although only for reading) than implementing a
full IO class. For read/write classes you can mix the two levels: neo.rawio for reading and
neo.io for writing.
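The amount of glue code needed for the second route is small. The following sketch is modelled on
the :class:`ExampleRawIO`/:class:`ExampleIO` pair shipped with Neo (the full files are reproduced
at the end of this guide); the class names and attribute values are illustrative, not a template
you must copy exactly::

    from neo.io.basefromrawio import BaseFromRaw
    from neo.rawio.examplerawio import ExampleRawIO


    class ExampleIO(ExampleRawIO, BaseFromRaw):
        """Read-only IO assembled from a RawIO class.

        ExampleRawIO does the low-level reading; BaseFromRaw turns its raw
        buffers and headers into Neo objects (Block, Segment, signals, ...).
        """
        name = 'example IO'
        description = 'Fake IO for documentation purposes'
        # controls how channels are grouped into AnalogSignal objects
        _prefered_signal_group_mode = 'group-by-same-units'

        def __init__(self, filename=''):
            ExampleRawIO.__init__(self, filename=filename)
            BaseFromRaw.__init__(self, filename)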
Recipe to develop an IO module for a new data format:

1. Fully understand the object model. See :doc:`core`. If in doubt ask the `mailing list`_.
2. Fully understand :mod:`neo.rawio.examplerawio`. It is a fake IO used to explain the API. If in doubt ask the list.
3. Copy/paste ``examplerawio.py`` and choose clear file and class names for your IO.
4. Implement all methods that **raise(NotImplementedError)** in :mod:`neo.rawio.baserawio`. Return None when the object is not supported (spike/waveform).
5. Write good docstrings. List dependencies, including minimum version numbers.
6. Add your class to :mod:`neo.rawio.__init__`. Keep imports inside ``try/except`` for dependency reasons.
7. Create a class in :file:`neo/io/`.
8. Add your class to :mod:`neo.io.__init__`. Keep imports inside ``try/except`` for dependency reasons.
9. Create an account at https://gin.g-node.org and deposit files in :file:`NeuralEnsemble/ephy_testing_data`.
10. Write tests in :file:`neo/rawio/test_xxxxxrawio.py`. You must at least pass the standard tests (inherited from :class:`BaseTestRawIO`). See :file:`test_examplerawio.py`.
11. Write a similar test in :file:`neo/test/iotest/test_xxxxxio.py`. See :file:`test_exampleio.py`.
12. Make a pull request when all tests pass.

Miscellaneous
=============

* If your IO supports several versions of a format (like ABF1, ABF2), upload all possible file versions to the gin.g-node.org test file repository (for test coverage).
* :py:func:`neo.core.Block.create_many_to_one_relationship` offers a utility to complete the hierarchy when all one-to-many relationships have been created.
* In the docstring, explain where you obtained the file format specification if it is a closed one.
* If your IO is based on a database mapper, keep in mind that the returned object MUST be detached, because this object can be written to another URL for copying.

Tests
=====

:py:class:`neo.rawio.tests.common_rawio_test.BaseTestRawIO` and :py:class:`neo.test.iotest.common_io_test.BaseTestIO` provide standard tests. To use these you need to upload some sample data files at `gin-gnode`_. They will be publicly accessible for testing Neo. These tests:

* check compliance with the schema: hierarchy, attribute types, ...
* for IO modules able to both write and read data, compare a generated dataset with the same data after a write/read cycle.

The test scripts download all files from `gin-gnode`_ and store them locally in ``/tmp/files_for_tests/``. Subsequent test runs use the previously downloaded files, rather than trying to download them each time.

Each test must have at least one class that inherits ``BaseTestRawIO`` and that has 3 attributes:

* ``rawioclass``: the class
* ``entities_to_test``: a list of files (or directories) to be tested one by one
* ``files_to_download``: a list of files to download (sometimes bigger than ``entities_to_test``)

Here is an example test script taken from the distribution: :file:`test_axonrawio.py`:

.. literalinclude:: ../../neo/rawio/tests/test_axonrawio.py

Logging
=======

All IO classes have logging set up by default, using the standard :mod:`logging` module. The logger name is the same as the fully qualified class name, e.g. :class:`neo.io.hdf5io.NeoHdf5IO`. The :attr:`class.logger` attribute holds the logger for easy access.

There are generally 3 types of situations in which an IO class should use a logger:

* Recoverable errors with the file that the users need to be notified about. In this case, please use :meth:`logger.warning` or :meth:`logger.error`.
  If there is an exception associated with the issue, you can use :meth:`logger.exception` in the exception handler to automatically include a backtrace with the log. By default, all users will see messages at this level, so please restrict it only to problems the user absolutely needs to know about.
* Informational messages that advanced users might want to see in order to get some insight into the file. In this case, please use :meth:`logger.info`.
* Messages useful to developers to fix problems with the IO class. In this case, please use :meth:`logger.debug`.

A log handler is automatically added to :mod:`neo`, so please do not use your own handler. Please use the :attr:`class.logger` attribute for accessing the logger inside the class rather than :meth:`logging.getLogger`. Please do not log directly to the root logger (e.g. :meth:`logging.warning`); use the class's logger instead (:meth:`class.logger.warning`).

In the tests for the IO class, if you intentionally test broken files, please disable logs by setting the logging level to `100`.

ExampleIO
=========

.. autoclass:: neo.rawio.ExampleRawIO

.. autoclass:: neo.io.ExampleIO

Here is the entire file:

.. literalinclude:: ../../neo/rawio/examplerawio.py

.. literalinclude:: ../../neo/io/exampleio.py

.. _`mailing list`: http://groups.google.com/group/neuralensemble
.. _gin-gnode: https://web.gin.g-node.org/NeuralEnsemble/ephy_testing_data
neo-0.7.2/doc/source/rawio.rst0000600013464101346420000001764613507452453014424 0ustar yohyoh*********
Neo RawIO
*********

.. currentmodule:: neo.rawio

.. _neo_rawio_API:

For performance and memory consumption reasons a new layer has been added to Neo. In brief:

* **neo.io** is the user-oriented read/write layer. Reading consists of getting a tree of Neo objects from a data source (file, URL, or directory). When reading, all Neo objects are correctly scaled to the correct units. Writing consists of making a set of Neo objects persistent in a file format.
* **neo.rawio** is a low-level layer for reading data only. Reading consists of getting NumPy buffers (often int16/int64) of signals/spikes/events. Scaling to real values (microV, times, ...) is done in a second step. Here the underlying objects must be consistent across Blocks and Segments for a given data source.

The neo.rawio API has been added for developers. It is close to what a C API for reading data would look like, but in Python/NumPy. Not all IOs are implemented in :mod:`neo.rawio`, but all classes implemented in :mod:`neo.rawio` are also available in :mod:`neo.io`.

Possible uses of the :mod:`neo.rawio` API are:

* fast reading of chunks of signals in int16, with the scaling to physical units (uV) done on a GPU while zooming. This should improve bandwidth from disk to RAM and from RAM to GPU memory.
* loading only a small chunk of data for heavy computations. For instance the spike sorting module tridesclous_ does this.

The :mod:`neo.rawio` API is less flexible than :mod:`neo.io` and has some limitations:

* read-only
* AnalogSignals must have the same characteristics across all Blocks and Segments: ``sampling_rate``, ``shape[1]``, ``dtype``
* AnalogSignals should all have the same value of ``sampling_rate``, otherwise they won't be read at the same time.
* Units must have SpikeTrains, even if empty, across all Blocks and Segments
* Epoch and Event are processed the same way (with ``durations=None`` for Event).
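In a nutshell, and glossing over many details, the difference between the two layers looks like this (a minimal sketch only, using the same Plexon test file as in the examples and the Basic usage section below)::

    >>> import neo
    >>> # neo.io: returns a tree of Neo objects, already scaled to physical units
    >>> reader = neo.io.PlexonIO(filename='File_plexon_3.plx')
    >>> blocks = reader.read(lazy=False)

    >>> # neo.rawio: returns raw NumPy buffers; scaling is an explicit second step
    >>> from neo.rawio import PlexonRawIO
    >>> rawreader = PlexonRawIO(filename='File_plexon_3.plx')
    >>> rawreader.parse_header()
    >>> raw_chunk = rawreader.get_analogsignal_chunk(block_index=0, seg_index=0,
    ...                                              i_start=0, i_stop=1024,
    ...                                              channel_indexes=None)
    >>> float_chunk = rawreader.rescale_signal_raw_to_float(raw_chunk, dtype='float64')
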
For an intuitive comparison of :mod:`neo.io` and :mod:`neo.rawio` see:

* :file:`example/read_file_neo_io.py`
* :file:`example/read_file_neo_rawio.py`

One potential benefit of the :mod:`neo.rawio` API is that a developer should be able to code a new RawIO class with little knowledge of the Neo tree of objects or of the :mod:`quantities` package.

Basic usage
===========

First create a reader from a class::

    >>> from neo.rawio import PlexonRawIO
    >>> reader = PlexonRawIO(filename='File_plexon_3.plx')

Then browse the internal header and display information::

    >>> reader.parse_header()
    >>> print(reader)
    PlexonRawIO: File_plexon_3.plx
    nb_block: 1
    nb_segment: [1]
    signal_channels: [V1]
    unit_channels: [Wspk1u, Wspk2u, Wspk4u, Wspk5u ... Wspk29u Wspk30u Wspk31u Wspk32u]
    event_channels: []

You get the number of blocks and segments per block. You have information about channels: **signal_channels**, **unit_channels**, **event_channels**.

All this information is internally available in the *header* dict::

    >>> for k, v in reader.header.items():
    ...     print(k, v)
    signal_channels [('V1', 0, 1000., 'int16', '', 2.44140625, 0., 0)]
    event_channels []
    nb_segment [1]
    nb_block 1
    unit_channels [('Wspk1u', 'ch1#0', '', 0.00146484, 0., 0, 30000.) ('Wspk2u', 'ch2#0', '', 0.00146484, 0., 0, 30000.)
    ...

Read signal chunks of data and scale them::

    >>> channel_indexes = None  # could be channel_indexes = [0]
    >>> raw_sigs = reader.get_analogsignal_chunk(block_index=0, seg_index=0,
    ...                                          i_start=1024, i_stop=2048,
    ...                                          channel_indexes=channel_indexes)
    >>> float_sigs = reader.rescale_signal_raw_to_float(raw_sigs, dtype='float64')
    >>> sampling_rate = reader.get_signal_sampling_rate()
    >>> t_start = reader.get_signal_t_start(block_index=0, seg_index=0)
    >>> units = reader.header['signal_channels'][0]['units']
    >>> print(raw_sigs.shape, raw_sigs.dtype)
    >>> print(float_sigs.shape, float_sigs.dtype)
    >>> print(sampling_rate, t_start, units)
    (1024, 1) int16
    (1024, 1) float64
    1000.0 0.0 V

There are 3 ways to select a subset of channels: by index (0-based), by id or by name. Selection by index is never ambiguous (0 to n-1, inclusive), whereas for some IOs the channel_names (and sometimes the channel_ids) are not guaranteed to be unique; in such cases selecting by name or id would raise an error.

Example with BlackrockRawIO for the file FileSpec2.3001::

    >>> raw_sigs = reader.get_analogsignal_chunk(channel_indexes=None)  # Take all channels
    >>> raw_sigs1 = reader.get_analogsignal_chunk(channel_indexes=[0, 2, 4])  # Take channels 0, 2 and 4
    >>> raw_sigs2 = reader.get_analogsignal_chunk(channel_ids=[1, 3, 5])  # Same but with their ids (1-based)
    >>> raw_sigs3 = reader.get_analogsignal_chunk(channel_names=['chan1', 'chan3', 'chan5'])  # Same but with their names
    >>> print(raw_sigs1.shape[1], raw_sigs2.shape[1], raw_sigs3.shape[1])
    3, 3, 3

Inspect the unit channels. Each unit channel gives one SpikeTrain for each Segment. Note that for many formats a physical channel can carry several units after spike sorting, so nb_unit can be larger than the number of physical or signal channels::

    >>> nb_unit = reader.unit_channels_count()
    >>> print('nb_unit', nb_unit)
    nb_unit 30
    >>> for unit_index in range(nb_unit):
    ...     nb_spike = reader.spike_count(block_index=0, seg_index=0, unit_index=unit_index)
    ...     print('unit_index', unit_index, 'nb_spike', nb_spike)
    unit_index 0 nb_spike 701
    unit_index 1 nb_spike 716
    unit_index 2 nb_spike 69
    unit_index 3 nb_spike 12
    unit_index 4 nb_spike 95
    unit_index 5 nb_spike 37
    unit_index 6 nb_spike 25
    unit_index 7 nb_spike 15
    unit_index 8 nb_spike 33
    ...
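The buffers read through :mod:`neo.rawio` can also be wrapped into regular Neo objects by hand when needed. The following is only a minimal sketch, reusing the variables from the signal-reading example above (``float_sigs``, ``sampling_rate``, ``t_start``, ``units``) and assuming the usual convention that the sampling rate is returned in Hz and the start time in seconds::

    >>> import quantities as pq
    >>> import neo
    >>> # build an AnalogSignal from the rescaled chunk, attaching units and timing metadata
    >>> anasig = neo.AnalogSignal(float_sigs, units=units,
    ...                           sampling_rate=sampling_rate * pq.Hz,
    ...                           t_start=t_start * pq.s)
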
Get spike timestamps only between 0 and 10 seconds and convert them to spike times:: >>> spike_timestamps = reader.spike_timestamps(block_index=0, seg_index=0, unit_index=0, t_start=0., t_stop=10.) >>> print(spike_timestamps.shape, spike_timestamps.dtype, spike_timestamps[:5]) (424,) int64 [ 90 420 708 1020 1310] >>> spike_times = reader.rescale_spike_timestamp( spike_timestamps, dtype='float64') >>> print(spike_times.shape, spike_times.dtype, spike_times[:5]) (424,) float64 [ 0.003 0.014 0.0236 0.034 0.04366667] Get spike waveforms between 0 and 10 s:: >>> raw_waveforms = reader.spike_raw_waveforms( block_index=0, seg_index=0, unit_index=0, t_start=0., t_stop=10.) >>> print(raw_waveforms.shape, raw_waveforms.dtype, raw_waveforms[0,0,:4]) (424, 1, 64) int16 [-449 -206 34 40] >>> float_waveforms = reader.rescale_waveforms_to_float(raw_waveforms, dtype='float32', unit_index=0) >>> print(float_waveforms.shape, float_waveforms.dtype, float_waveforms[0,0,:4]) (424, 1, 64) float32 [-0.65771484 -0.30175781 0.04980469 0.05859375] Count events per channel:: >>> reader = PlexonRawIO(filename='File_plexon_2.plx') >>> reader.parse_header() >>> nb_event_channel = reader.event_channels_count() nb_event_channel 28 >>> print('nb_event_channel', nb_event_channel) >>> for chan_index in range(nb_event_channel): ... nb_event = reader.event_count(block_index=0, seg_index=0, event_channel_index=chan_index) ... print('chan_index',chan_index, 'nb_event', nb_event) chan_index 0 nb_event 1 chan_index 1 nb_event 0 chan_index 2 nb_event 0 chan_index 3 nb_event 0 ... Read event timestamps and times for chanindex=0 and with time limits (t_start=None, t_stop=None):: >>> ev_timestamps, ev_durations, ev_labels = reader.event_timestamps(block_index=0, seg_index=0, event_channel_index=0, t_start=None, t_stop=None) >>> print(ev_timestamps, ev_durations, ev_labels) [1268] None ['0'] >>> ev_times = reader.rescale_event_timestamp(ev_timestamps, dtype='float64') >>> print(ev_times) [ 0.0317] .. _tridesclous: https://github.com/tridesclous/tridesclous neo-0.7.2/doc/source/releases/0002700013464101346420000000000013511307751014341 5ustar yohyohneo-0.7.2/doc/source/releases/0.5.0.rst0000600013464101346420000001360113507452453015541 0ustar yohyoh======================= Neo 0.5.0 release notes ======================= 22nd March 2017 For Neo 0.5, we have taken the opportunity to simplify the Neo object model. Although this will require an initial time investment for anyone who has written code with an earlier version of Neo, the benefits will be greater simplicity, both in your own code and within the Neo code base, which should allow us to move more quickly in fixing bugs, improving performance and adding new features. More detail on these changes follows: Merging of "single-value" and "array" versions of data classes ============================================================== In previous versions of Neo, we had :class:`AnalogSignal` for one-dimensional (single channel) signals, and :class:`AnalogSignalArray` for two-dimensional (multi-channel) signals. In Neo 0.5.0, these have been merged under the name :class:`AnalogSignal`. :class:`AnalogSignal` has the same behaviour as the old :class:`AnalogSignalArray`. It is still possible to create an :class:`AnalogSignal` from a one-dimensional array, but this will be converted to an array with shape `(n, 1)`, e.g.: .. code-block:: python >>> signal = neo.AnalogSignal([0.0, 0.1, 0.2, 0.5, 0.6, 0.5, 0.4, 0.3, 0.0], ... sampling_rate=10*kHz, ... 
units=nA) >>> signal.shape (9, 1) Multi-channel arrays are created as before, but using :class:`AnalogSignal` instead of :class:`AnalogSignalArray`: .. code-block:: python >>> signal = neo.AnalogSignal([[0.0, 0.1, 0.2, 0.5, 0.6, 0.5, 0.4, 0.3, 0.0], ... [0.0, 0.2, 0.4, 0.7, 0.9, 0.8, 0.7, 0.6, 0.3]], ... sampling_rate=10*kHz, ... units=nA) >>> signal.shape (9, 2) Similarly, the :class:`Epoch` and :class:`EpochArray` classes have been merged into an array-valued class :class:`Epoch`, ditto for :class:`Event` and :class:`EventArray`, and the :class:`Spike` class, whose main function was to contain the waveform data for an individual spike, has been suppressed; waveform data are now available as the :attr:`waveforms` attribute of the :class:`SpikeTrain` class. Recording channels ================== As a consequence of the removal of "single-value" data classes, information on recording channels and the relationship between analog signals and spike trains is also stored differently. In Neo 0.5, we have introduced a new class, :class:`ChannelIndex`, which replaces both :class:`RecordingChannel` and :class:`RecordingChannelGroup`. In older versions of Neo, a :class:`RecordingChannel` object held metadata about a logical recording channel (a name and/or integer index) together with references to one or more :class:`AnalogSignal`\s recorded on that channel at different points in time (different :class:`Segment`\s); redundantly, the :class:`AnalogSignal` also had a :attr:`channel_index` attribute, which could be used in addition to or instead of creating a :class:`RecordingChannel`. Metadata about :class:`AnalogSignalArray`\s could be contained in a :class:`RecordingChannelGroup` in a similar way, i.e. :class:`RecordingChannelGroup` functioned as an array-valued version of :class:`RecordingChannel`, but :class:`RecordingChannelGroup` could also be used to group together individual :class:`RecordingChannel` objects. With Neo 0.5, information about the channel names and ids of an :class:`AnalogSignal` is contained in a :class:`ChannelIndex`, e.g.: .. code-block:: python >>> signal = neo.AnalogSignal([[0.0, 0.1, 0.2, 0.5, 0.6, 0.5, 0.4, 0.3, 0.0], ... [0.0, 0.2, 0.4, 0.7, 0.9, 0.8, 0.7, 0.6, 0.3]], ... [0.0, 0.1, 0.3, 0.6, 0.8, 0.7, 0.6, 0.5, 0.3]], ... sampling_rate=10*kHz, ... units=nA) >>> channels = neo.ChannelIndex(index=[0, 1, 2], ... channel_names=["chan1", "chan2", "chan3"]) >>> signal.channel_index = channels In this use, it replaces :class:`RecordingChannel`. :class:`ChannelIndex` may also be used to group together a subset of the channels of a multi-channel signal, for example: .. code-block:: python >>> channel_group = neo.ChannelIndex(index=[0, 2]) >>> channel_group.analogsignals.append(signal) >>> unit = neo.Unit() # will contain the spike train recorded from channels 0 and 2. >>> unit.channel_index = channel_group Checklist for updating code from 0.3/0.4 to 0.5 =============================================== To update your code from Neo 0.3/0.4 to 0.5, run through the following checklist: 1. Change all usages of :class:`AnalogSignalArray` to :class:`AnalogSignal`. 2. Change all usages of :class:`EpochArray` to :class:`Epoch`. 3. Change all usages of :class:`EventArray` to :class:`Event`. 4. Where you have a list of (single channel) :class:`AnalogSignal`\s all of the same length, consider converting them to a single, multi-channel :class:`AnalogSignal`. 5. Replace :class:`RecordingChannel` and :class:`RecordingChannelGroup` with :class:`ChannelIndex`. .. 
note:: in points 1-3, the data structure is still an array, it just has a shorter name. Other changes ============= * added :class:`NixIO` (`about the NIX format`_) * added :class:`IgorIO` * added :class:`NestIO` (for data files produced by the `NEST simulator`_) * :class:`NeoHdf5IO` is now read-only. It will read data files produced by earlier versions of Neo, but another HDF5-based IO, e.g. :class:`NixIO`, should be used for writing data. * many fixes/improvements to existing IO modules. All IO modules should now work with Python 3. .. https://github.com/NeuralEnsemble/python-neo/issues?utf8=✓&q=is%3Aissue%20is%3Aclosed%20created%3A%3E2014-02-01%20 .. _`about the NIX format`: https://github.com/G-Node/nix/wiki .. _`NEST simulator`: http://nest-simulator.org neo-0.7.2/doc/source/releases/0.5.1.rst0000600013464101346420000000225513507452453015545 0ustar yohyoh======================= Neo 0.5.1 release notes ======================= 4th May 2017 * Fixes to :class:`AxonIO` (thanks to @erikli and @cjfraz) and :class:`NeuroExplorerIO` (thanks to Mark Hollenbeck) * Fixes to pickling of :class:`Epoch` and :class:`Event` objects (thanks to Hélissande Fragnaud) * Added methods :meth:`as_array()` and :meth:`as_quantity()` to Neo data objects to simplify the common tasks of turning a Neo data object back into a plain Numpy array * Added :class:`NeuralynxIO`, which reads standard Neuralynx output files in ncs, nev, nse and ntt format (thanks to Julia Sprenger and Carlos Canova). * Added the :attr:`extras_require` field to setup.py, to clearly document the requirements for different io modules. For example, this allows you to run :command:`pip install neo[neomatlabio]` and have the extra dependency needed for the :mod:`neomatlabio` module (scipy in this case) be automatically installed. * Fixed a bug where slicing an :class:`AnalogSignal` did not modify the linked :class:`ChannelIndex`. (Full `list of closed issues`_) .. _`list of closed issues`: https://github.com/NeuralEnsemble/python-neo/issues?q=is%3Aissue+milestone%3A0.5.1+is%3Aclosed neo-0.7.2/doc/source/releases/0.5.2.rst0000600013464101346420000000151313507452453015542 0ustar yohyoh======================= Neo 0.5.2 release notes ======================= 27th September 2017 * Removed support for Python 2.6 * Pickling :class:`AnalogSignal` and :class:`SpikeTrain` now preserves parent objects * Added NSDFIO, which reads and writes NSDF files * Fixes and improvements to PlexonIO, NixIO, BlackrockIO, NeuralynxIO, IgorIO, ElanIO, MicromedIO, TdtIO and others. Thanks to Michael Denker, Achilleas Koutsou, Mieszko Grodzicki, Samuel Garcia, Julia Sprenger, Andrew Davison, Rohan Shah, Richard C Gerkin, Mieszko Grodzicki, Mikkel Elle Lepperød, Joffrey Gonin, Hélissande Fragnaud, Elodie Legouée and Matthieu Sénoville for their contributions to this release. (Full `list of closed issues`_) .. _`list of closed issues`: https://github.com/NeuralEnsemble/python-neo/issues?q=is%3Aissue+milestone%3A0.5.2+is%3Aclosed neo-0.7.2/doc/source/releases/0.6.0.rst0000600013464101346420000000353113507452453015543 0ustar yohyoh======================= Neo 0.6.0 release notes ======================= XXth March 2018 This is a draft. 
Major changes: * Introduced :mod:`neo.rawio`: a low-level reader for various data formats * Added continuous integration for all IOs using CircleCI (previously only :mod:`neo.core` was tested, using Travis CI) * Moved the test file repository to https://web.gin.g-node.org/NeuralEnsemble/ephy_testing_data - this makes it easier for people to contribute new files for testing. Other important changes: * Added :func:`time_index()` and :func:`splice()` methods to :class:`AnalogSignal` * IO fixes and improvements: Blackrock, TDT, Axon, Spike2, Brainvision, Neuralynx * Implemented `__deepcopy__` for all data classes * New IO: BCI2000 * Lots of PEP8 fixes! * Implemented `__getitem__` for :class:`Epoch` * Removed "cascade" support from all IOs * Removed lazy loading except for IOs based on rawio * Marked lazy option as deprecated * Added :func:`time_slice` in read_segment() for IOs based on rawio * Made :attr:`SpikeTrain.times` return a :class:`Quantity` instead of a :class:`SpikeTrain` * Raise a :class:`ValueError` if ``t_stop`` is earlier than ``t_start`` when creating an empty :class:`SpikeTrain` * Changed filter behaviour to return all objects if no filter parameters are specified * Fix pickling/unpickling of :class:`Events` Deprecated IO classes: * :class:`KlustaKwikIO` (use :class:`KwikIO` instead) * :class:`PyNNTextIO`, :class:`PyNNNumpyIO` (Full `list of closed issues`_) Thanks to Björn Müller, Andrew Davison, Achilleas Koutsou, Chadwick Boulay, Julia Sprenger, Matthieu Senoville, Michael Denker and especially Samuel Garcia for their contributions to this release. .. _`list of closed issues`: https://github.com/NeuralEnsemble/python-neo/issues?q=is%3Aissue+milestone%3A0.6.0+is%3Aclosed neo-0.7.2/doc/source/releases/0.7.0.rst0000600013464101346420000000203213507452453015537 0ustar yohyoh======================= Neo 0.7.0 release notes ======================= 26th November 2018 Main added features: * array annotations Other features: * `Event.to_epoch()` * Change the behaviour of `SpikeTrain.__add__` and `SpikeTrain.__sub__` * bug fix for `Epoch.time_slice()` New IO classes: * RawMCSRawIO (raw multi channel system file format) * OpenEphys format * Intanrawio (both RHD and RHS) * AxographIO Many bug fixes and improvements in IO: * AxonIO * WinWCPIO * NixIO * ElphyIO * Spike2IO * NeoMatlab * NeuralynxIO * BlackrockIO (V2.3) * NixIO (rewritten) Removed: * PyNNIO (Full `list of closed issues`_) Thanks to Achilleas Koutsou, Andrew Davison, Björn Müller, Chadwick Boulay, erikli, Jeffrey Gill, Julia Sprenger, Lucas (lkoelman), Mark Histed, Michael Denker, Mike Sintsov, Samuel Garcia, Scott W Harden and William Hart for their contributions to this release. .. _`list of closed issues`: https://github.com/NeuralEnsemble/python-neo/issues?q=is%3Aissue+milestone%3A0.7.0+is%3Aclosed neo-0.7.2/doc/source/usecases.rst0000600013464101346420000001675113507452453015122 0ustar yohyoh***************** Typical use cases ***************** Recording multiple trials from multiple channels ================================================ In this example we suppose that we have recorded from an 8-channel probe, and that we have recorded three trials/episodes. We therefore have a total of 8 x 3 = 24 signals, grouped into three :class:`AnalogSignal` objects, one per trial. Our entire dataset is contained in a :class:`Block`, which in turn contains: * 3 :class:`Segment` objects, each representing data from a single trial, * 1 :class:`ChannelIndex`. .. 
image:: images/multi_segment_diagram.png
   :width: 75%
   :align: center

:class:`Segment` and :class:`ChannelIndex` objects provide two different ways to access the data, corresponding respectively, in this scenario, to access by **time** and by **space**.

.. note:: Segments do not always represent trials; they can be used for many purposes: segments could represent parallel recordings for different subjects, or different steps in a current clamp protocol.

**Temporal (by segment)**

In this case you want to go through your data in order, perhaps because you want to correlate the neural response with the stimulus that was delivered in each segment. In this example, we're averaging over the channels.

.. doctest::

    import numpy as np
    from matplotlib import pyplot as plt

    for seg in block.segments:
        print("Analyzing segment %d" % seg.index)
        avg = np.mean(seg.analogsignals[0], axis=1)
        plt.figure()
        plt.plot(avg)
        plt.title("Peak response in segment %d: %f" % (seg.index, avg.max()))

**Spatial (by channel)**

In this case you want to go through your data by channel location and average over time. Perhaps you want to see which physical location produces the strongest response, and every stimulus was the same:

.. doctest::

    # We assume that our block has only 1 ChannelIndex
    chx = block.channel_indexes[0]

    siglist = [sig[:, chx.index] for sig in chx.analogsignals]
    avg = np.mean(siglist, axis=0)

    plt.figure()
    for index, name in zip(chx.index, chx.channel_names):
        plt.plot(avg[:, index])
        plt.title("Average response on channel %s: %s" % (index, name))

**Mixed example**

Combining simultaneously the two approaches of descending the hierarchy temporally and spatially can be tricky. Here's an example. Let's say you saw something interesting on the 6th channel (index 5) on even-numbered trials during the experiment and you want to follow up. What was the average response?

.. doctest::

    index = chx.index[5]
    avg = np.mean([seg.analogsignals[0][:, index] for seg in block.segments[::2]], axis=1)
    plt.plot(avg)

Recording spikes from multiple tetrodes
=======================================

Here is a similar example in which we have recorded with two tetrodes and extracted spikes from the extracellular signals. The spike times are contained in :class:`SpikeTrain` objects. Again, our data set is contained in a :class:`Block`, which contains:

* 3 :class:`Segments` (one per trial).
* 2 :class:`ChannelIndexes` (one per tetrode), which contain:

  * 2 :class:`Unit` objects (= 2 neurons) for the first :class:`ChannelIndex`
  * 5 :class:`Units` for the second :class:`ChannelIndex`.

In total we have 3 x 7 = 21 :class:`SpikeTrains` in this :class:`Block`.

.. image:: images/multi_segment_diagram_spiketrain.png
   :width: 75%
   :align: center

There are three ways to access the :class:`SpikeTrain` data:

* by :class:`Segment`
* by :class:`ChannelIndex`
* by :class:`Unit`

**By Segment**

In this example, each :class:`Segment` represents data from one trial, and we want a PSTH for each trial from all units combined:

.. doctest::

    for seg in block.segments:
        print("Analyzing segment %d" % seg.index)
        stlist = [st - st.t_start for st in seg.spiketrains]
        plt.figure()
        count, bins = np.histogram(np.hstack(stlist))
        plt.bar(bins[:-1], count, width=bins[1] - bins[0])
        plt.title("PSTH in segment %d" % seg.index)

**By Unit**

Now we can calculate the PSTH averaged over trials for each unit, using the :attr:`block.list_units` property:
.. doctest::

    for unit in block.list_units:
        stlist = [st - st.t_start for st in unit.spiketrains]
        plt.figure()
        count, bins = np.histogram(np.hstack(stlist))
        plt.bar(bins[:-1], count, width=bins[1] - bins[0])
        plt.title("PSTH of unit %s" % unit.name)

**By ChannelIndex**

Here we calculate a PSTH averaged over trials by channel location, blending all units:

.. doctest::

    for chx in block.channel_indexes:
        stlist = []
        for unit in chx.units:
            stlist.extend([st - st.t_start for st in unit.spiketrains])
        plt.figure()
        count, bins = np.histogram(np.hstack(stlist))
        plt.bar(bins[:-1], count, width=bins[1] - bins[0])
        plt.title("PSTH blend of tetrode %s" % chx.name)

Spike sorting
=============

Spike sorting is the process of detecting and classifying high-frequency deflections ("spikes") on a group of physically nearby recording channels.

For example, let's say you have defined a ChannelIndex for a tetrode containing 4 separate channels. Here is an example showing (with fake data) how you could iterate over the contained signals and extract spike times. (Of course in reality you would use a more sophisticated algorithm.)

.. doctest::

    # generate some fake data
    seg = Segment()
    seg.analogsignals.append(
        AnalogSignal([[0.1, 0.1, 0.1, 0.1],
                      [-2.0, -2.0, -2.0, -2.0],
                      [0.1, 0.1, 0.1, 0.1],
                      [-0.1, -0.1, -0.1, -0.1],
                      [-0.1, -0.1, -0.1, -0.1],
                      [-3.0, -3.0, -3.0, -3.0],
                      [0.1, 0.1, 0.1, 0.1],
                      [0.1, 0.1, 0.1, 0.1]],
                     sampling_rate=1000*Hz, units='V'))
    chx = ChannelIndex(index=[0, 1, 2, 3])
    chx.analogsignals.append(seg.analogsignals[0])

    # extract spike trains from each channel
    st_list = []
    for signal in chx.analogsignals:
        # use a simple threshold detector
        spike_mask = np.where(np.min(signal.magnitude, axis=1) < -1.0)[0]

        # create a spike train
        spike_times = signal.times[spike_mask]
        st = neo.SpikeTrain(spike_times, t_start=signal.t_start, t_stop=signal.t_stop)

        # remember the spike waveforms
        wf_list = []
        for spike_idx in spike_mask:
            wf_list.append(signal[spike_idx-1:spike_idx+2, :])
        st.waveforms = np.array(wf_list)

        st_list.append(st)

At this point, we have a list of spiketrain objects. We could simply create a single Unit object, assign all spike trains to it, and then assign the Unit to the group on which we detected it.

.. doctest::

    u = Unit()
    u.spiketrains = st_list
    chx.units.append(u)

Now the recording channel group (tetrode) contains a list of analogsignals, and a single Unit object containing all of the detected spiketrains from those signals.

Further processing could assign each of the detected spikes to an independent source, a putative single neuron. (This processing is outside the scope of Neo. There are many open-source toolboxes to do it, for instance our sister project OpenElectrophy.)

In that case we would create a separate Unit for each cluster, assign its spiketrains to it, and then store all the units in the original recording channel group.

.. EEG

.. Network simulations
neo-0.7.2/doc/source/whatisnew.rst0000600013464101346420000000471613507452453015306 0ustar yohyoh=============
Release notes
=============

.. toctree::
   :maxdepth: 1

   releases/0.7.0.rst
   releases/0.6.0.rst
   releases/0.5.2.rst
   releases/0.5.1.rst
   releases/0.5.0.rst

.. releases/0.2.0.rst
.. releases/0.2.1.rst
.. releases/0.3.0.rst
.. releases/0.3.1.rst
.. releases/0.3.2.rst
.. releases/0.3.3.rst

Version 0.4.0
-------------
* added StimfitIO
* added KwikIO
* significant improvements to AxonIO, BlackrockIO, BrainwareSrcIO, NeuroshareIO, PlexonIO, Spike2IO, TdtIO
* many test suite improvements
* Container base class

Version 0.3.3
-------------
* fix a bug in PlexonIO where some EventArrays only load 1 element.
* fix a bug in BrainwareSrcIO for segments with no spikes.

Version 0.3.2
-------------
* cleanup of IO test code, with additional helper functions and methods
* added BrainwareDamIO
* added BrainwareF32IO
* added BrainwareSrcIO

Version 0.3.1
-------------
* lazy/cascading improvements
* load_lazy_object() added in neo.io
* added NeuroscopeIO

Version 0.3.0
-------------
* various bug fixes in neo.io
* added ElphyIO
* SpikeTrain performance improved
* An IO class can now return a list of Blocks (see read_all_blocks in IOs)
* Python 3 compatibility improved

Version 0.2.1
-------------
* assorted bug fixes
* added :func:`time_slice()` method to the :class:`SpikeTrain` and :class:`AnalogSignalArray` classes.
* improvements to annotation data type handling
* added PickleIO, allowing saving Neo objects in the Python pickle format.
* added ElphyIO (see http://www.unic.cnrs-gif.fr/software.html)
* added BrainVisionIO (see http://www.brainvision.com/)
* improvements to PlexonIO
* added :func:`merge()` method to the :class:`Block` and :class:`Segment` classes
* development was mostly moved to GitHub, although the issue tracker is still at neuralensemble.org/neo

Version 0.2.0
-------------
New features compared to neo 0.1:

* new, more consistent schema.
* new objects: RecordingChannelGroup, EventArray, AnalogSignalArray, EpochArray
* Neuron is now Unit
* use the quantities_ module for everything that can have units.
* Some objects directly inherit from Quantity: SpikeTrain, AnalogSignal, AnalogSignalArray, instead of having an attribute for data.
* Attributes are classified into 3 categories: necessary, recommended, free.
* lazy and cascade keywords are added to all IOs
* Python 3 support
* better tests

.. _quantities: http://pypi.python.org/pypi/quantities
neo-0.7.2/examples/0002700013464101346420000000000013511307751012307 5ustar yohyohneo-0.7.2/examples/generated_data.py0000600013464101346420000001145213507452453015620 0ustar yohyoh# -*- coding: utf-8 -*-
"""
This is an example for creating simple plots from various Neo structures.
It includes a function that generates toy data.
"""

from __future__ import division  # Use same division in Python 2 and 3

import numpy as np
import quantities as pq
from matplotlib import pyplot as plt

import neo


def generate_block(n_segments=3, n_channels=4, n_units=3, data_samples=1000,
                   feature_samples=100):
    """
    Generate a block with a single recording channel group and a number of
    segments, recording channels and units with associated analog signals
    and spike trains.
""" feature_len = feature_samples / data_samples # Create Block to contain all generated data block = neo.Block() # Create multiple Segments block.segments = [neo.Segment(index=i) for i in range(n_segments)] # Create multiple ChannelIndexes block.channel_indexes = [neo.ChannelIndex(name='C%d' % i, index=i) for i in range(n_channels)] # Attach multiple Units to each ChannelIndex for channel_idx in block.channel_indexes: channel_idx.units = [neo.Unit('U%d' % i) for i in range(n_units)] # Create synthetic data for seg in block.segments: feature_pos = np.random.randint(0, data_samples - feature_samples) # Analog signals: Noise with a single sinewave feature wave = 3 * np.sin(np.linspace(0, 2 * np.pi, feature_samples)) for channel_idx in block.channel_indexes: sig = np.random.randn(data_samples) sig[feature_pos:feature_pos + feature_samples] += wave signal = neo.AnalogSignal(sig * pq.mV, sampling_rate=1 * pq.kHz) seg.analogsignals.append(signal) channel_idx.analogsignals.append(signal) # Spike trains: Random spike times with elevated rate in short period feature_time = feature_pos / data_samples for u in channel_idx.units: random_spikes = np.random.rand(20) feature_spikes = np.random.rand(5) * feature_len + feature_time spikes = np.hstack([random_spikes, feature_spikes]) train = neo.SpikeTrain(spikes * pq.s, 1 * pq.s) seg.spiketrains.append(train) u.spiketrains.append(train) block.create_many_to_one_relationship() return block block = generate_block() # In this example, we treat each segment in turn, averaging over the channels # in each: for seg in block.segments: print("Analysing segment %d" % seg.index) siglist = seg.analogsignals time_points = siglist[0].times avg = np.mean(siglist, axis=0) # Average over signals of Segment plt.figure() plt.plot(time_points, avg) plt.title("Peak response in segment %d: %f" % (seg.index, avg.max())) # The second alternative is spatial traversal of the data (by channel), with # averaging over trials. For example, perhaps you wish to see which physical # location produces the strongest response, and each stimulus was the same: # There are multiple ChannelIndex objects connected to the block, each # corresponding to a a physical electrode for channel_idx in block.channel_indexes: print("Analysing channel %d: %s" % (channel_idx.index, channel_idx.name)) siglist = channel_idx.analogsignals time_points = siglist[0].times avg = np.mean(siglist, axis=0) # Average over signals of RecordingChannel plt.figure() plt.plot(time_points, avg) plt.title("Average response on channel %d" % channel_idx.index) # There are three ways to access the spike train data: by Segment, # by ChannelIndex or by Unit. # By Segment. In this example, each Segment represents data from one trial, # and we want a peristimulus time histogram (PSTH) for each trial from all # Units combined: for seg in block.segments: print("Analysing segment %d" % seg.index) stlist = [st - st.t_start for st in seg.spiketrains] count, bins = np.histogram(np.hstack(stlist)) plt.figure() plt.bar(bins[:-1], count, width=bins[1] - bins[0]) plt.title("PSTH in segment %d" % seg.index) # By Unit. Now we can calculate the PSTH averaged over trials for each Unit: for unit in block.list_units: stlist = [st - st.t_start for st in unit.spiketrains] count, bins = np.histogram(np.hstack(stlist)) plt.figure() plt.bar(bins[:-1], count, width=bins[1] - bins[0]) plt.title("PSTH of unit %s" % unit.name) # By ChannelIndex. 
Here we calculate a PSTH averaged over trials by # channel location, blending all Units: for chx in block.channel_indexes: stlist = [] for unit in chx.units: stlist.extend([st - st.t_start for st in unit.spiketrains]) count, bins = np.histogram(np.hstack(stlist)) plt.figure() plt.bar(bins[:-1], count, width=bins[1] - bins[0]) plt.title("PSTH blend of recording channel group %s" % chx.name) plt.show() neo-0.7.2/examples/read_files_neo_io.py0000600013464101346420000000216613507452453016320 0ustar yohyoh# -*- coding: utf-8 -*- """ This is an example for reading files with neo.io """ import urllib import neo url_repo = 'https://web.gin.g-node.org/NeuralEnsemble/ephy_testing_data/raw/master/' # Plexon files distantfile = url_repo + 'plexon/File_plexon_3.plx' localfile = './File_plexon_3.plx' urllib.request.urlretrieve(distantfile, localfile) # create a reader reader = neo.io.PlexonIO(filename='File_plexon_3.plx') # read the blocks blks = reader.read(lazy=False) print(blks) # access to segments for blk in blks: for seg in blk.segments: print(seg) for asig in seg.analogsignals: print(asig) for st in seg.spiketrains: print(st) # CED Spike2 files distantfile = url_repo + 'spike2/File_spike2_1.smr' localfile = './File_spike2_1.smr' urllib.request.urlretrieve(distantfile, localfile) # create a reader reader = neo.io.Spike2IO(filename='File_spike2_1.smr') # read the block bl = reader.read(lazy=False)[0] print(bl) # access to segments for seg in bl.segments: print(seg) for asig in seg.analogsignals: print(asig) for st in seg.spiketrains: print(st) neo-0.7.2/examples/read_files_neo_rawio.py0000600013464101346420000000620613507452453017031 0ustar yohyoh# -*- coding: utf-8 -*- """ This is an example for reading files with neo.rawio compare with read_files_neo_io.py """ import urllib from neo.rawio import PlexonRawIO # Get Plexon files distantfile = 'https://portal.g-node.org/neo/plexon/File_plexon_3.plx' localfile = './File_plexon_3.plx' urllib.request.urlretrieve(distantfile, localfile) # create a reader reader = PlexonRawIO(filename='File_plexon_3.plx') reader.parse_header() print(reader) print(reader.header) # Read signal chunks channel_indexes = None # could be channel_indexes = [0] raw_sigs = reader.get_analogsignal_chunk(block_index=0, seg_index=0, i_start=1024, i_stop=2048, channel_indexes=channel_indexes) float_sigs = reader.rescale_signal_raw_to_float(raw_sigs, dtype='float64') sampling_rate = reader.get_signal_sampling_rate() t_start = reader.get_signal_t_start(block_index=0, seg_index=0) units = reader.header['signal_channels'][0]['units'] print(raw_sigs.shape, raw_sigs.dtype) print(float_sigs.shape, float_sigs.dtype) print(sampling_rate, t_start, units) # Count unit and spike per units nb_unit = reader.unit_channels_count() print('nb_unit', nb_unit) for unit_index in range(nb_unit): nb_spike = reader.spike_count(block_index=0, seg_index=0, unit_index=unit_index) print('unit_index', unit_index, 'nb_spike', nb_spike) # Read spike times spike_timestamps = reader.get_spike_timestamps(block_index=0, seg_index=0, unit_index=0, t_start=0., t_stop=10.) print(spike_timestamps.shape, spike_timestamps.dtype, spike_timestamps[:5]) spike_times = reader.rescale_spike_timestamp(spike_timestamps, dtype='float64') print(spike_times.shape, spike_times.dtype, spike_times[:5]) # Read spike waveforms raw_waveforms = reader.get_spike_raw_waveforms(block_index=0, seg_index=0, unit_index=0, t_start=0., t_stop=10.) 
print(raw_waveforms.shape, raw_waveforms.dtype, raw_waveforms[0, 0, :4]) float_waveforms = reader.rescale_waveforms_to_float(raw_waveforms, dtype='float32', unit_index=0) print(float_waveforms.shape, float_waveforms.dtype, float_waveforms[0, 0, :4]) # Read event timestamps and times (take anotehr file) distantfile = 'https://portal.g-node.org/neo/plexon/File_plexon_2.plx' localfile = './File_plexon_2.plx' urllib.request.urlretrieve(distantfile, localfile) # Count event per channel reader = PlexonRawIO(filename='File_plexon_2.plx') reader.parse_header() nb_event_channel = reader.event_channels_count() print('nb_event_channel', nb_event_channel) for chan_index in range(nb_event_channel): nb_event = reader.event_count(block_index=0, seg_index=0, event_channel_index=chan_index) print('chan_index', chan_index, 'nb_event', nb_event) ev_timestamps, ev_durations, ev_labels = reader.get_event_timestamps(block_index=0, seg_index=0, event_channel_index=0, t_start=None, t_stop=None) print(ev_timestamps, ev_durations, ev_labels) ev_times = reader.rescale_event_timestamp(ev_timestamps, dtype='float64') print(ev_times) neo-0.7.2/examples/simple_plot_with_matplotlib.py0000600013464101346420000000241013507452453020474 0ustar yohyoh# -*- coding: utf-8 -*- """ This is an example for plotting a Neo object with matplotlib. """ import urllib import numpy as np import quantities as pq from matplotlib import pyplot import neo url = 'https://portal.g-node.org/neo/' # distantfile = url + 'neuroexplorer/File_neuroexplorer_2.nex' # localfile = 'File_neuroexplorer_2.nex' distantfile = 'https://portal.g-node.org/neo/plexon/File_plexon_3.plx' localfile = './File_plexon_3.plx' urllib.request.urlretrieve(distantfile, localfile) # reader = neo.io.NeuroExplorerIO(filename='File_neuroexplorer_2.nex') reader = neo.io.PlexonIO(filename='File_plexon_3.plx') bl = reader.read(lazy=False)[0] for seg in bl.segments: print("SEG: " + str(seg.file_origin)) fig = pyplot.figure() ax1 = fig.add_subplot(2, 1, 1) ax2 = fig.add_subplot(2, 1, 2) ax1.set_title(seg.file_origin) ax1.set_ylabel('arbitrary units') mint = 0 * pq.s maxt = np.inf * pq.s for i, asig in enumerate(seg.analogsignals): times = asig.times.rescale('s').magnitude asig = asig.magnitude ax1.plot(times, asig) trains = [st.rescale('s').magnitude for st in seg.spiketrains] colors = pyplot.cm.jet(np.linspace(0, 1, len(seg.spiketrains))) ax2.eventplot(trains, colors=colors) pyplot.show() neo-0.7.2/neo/0002700013464101346420000000000013511307751011252 5ustar yohyohneo-0.7.2/neo/__init__.py0000600013464101346420000000054013507452453013367 0ustar yohyoh# -*- coding: utf-8 -*- ''' Neo is a package for representing electrophysiology data in Python, together with support for reading a wide range of neurophysiology file formats ''' import logging logging_handler = logging.StreamHandler() from neo.core import * # ~ import neo.rawio from neo.io import * from neo.version import version as __version__ neo-0.7.2/neo/core/0002700013464101346420000000000013511307751012202 5ustar yohyohneo-0.7.2/neo/core/__init__.py0000600013464101346420000000262613477534627014340 0ustar yohyoh# -*- coding: utf-8 -*- """ :mod:`neo.core` provides classes for storing common electrophysiological data types. Some of these classes contain raw data, such as spike trains or analog signals, while others are containers to organize other classes (including both data classes and other container classes). Classes from :mod:`neo.io` return nested data structures containing one or more class from this module. Classes: .. 
autoclass:: Block .. autoclass:: Segment .. autoclass:: ChannelIndex .. autoclass:: Unit .. autoclass:: AnalogSignal .. autoclass:: IrregularlySampledSignal .. autoclass:: Event .. autoclass:: Epoch .. autoclass:: SpikeTrain """ # needed for python 3 compatibility from __future__ import absolute_import, division, print_function from neo.core.block import Block from neo.core.segment import Segment from neo.core.channelindex import ChannelIndex from neo.core.unit import Unit from neo.core.analogsignal import AnalogSignal from neo.core.irregularlysampledsignal import IrregularlySampledSignal from neo.core.event import Event from neo.core.epoch import Epoch from neo.core.spiketrain import SpikeTrain # Block should always be first in this list objectlist = [Block, Segment, ChannelIndex, AnalogSignal, IrregularlySampledSignal, Event, Epoch, Unit, SpikeTrain] objectnames = [ob.__name__ for ob in objectlist] class_by_name = dict(zip(objectnames, objectlist)) neo-0.7.2/neo/core/analogsignal.py0000600013464101346420000005174013507452453015227 0ustar yohyoh# -*- coding: utf-8 -*- ''' This module implements :class:`AnalogSignal`, an array of analog signals. :class:`AnalogSignal` inherits from :class:`basesignal.BaseSignal` which derives from :class:`BaseNeo`, and from :class:`quantites.Quantity`which in turn inherits from :class:`numpy.array`. Inheritance from :class:`numpy.array` is explained here: http://docs.scipy.org/doc/numpy/user/basics.subclassing.html In brief: * Initialization of a new object from constructor happens in :meth:`__new__`. This is where user-specified attributes are set. * :meth:`__array_finalize__` is called for all new objects, including those created by slicing. This is where attributes are copied over from the old object. ''' # needed for python 3 compatibility from __future__ import absolute_import, division, print_function import logging import numpy as np import quantities as pq from neo.core.baseneo import BaseNeo, MergeError, merge_annotations from neo.core.dataobject import DataObject from neo.core.channelindex import ChannelIndex from copy import copy, deepcopy from neo.core.basesignal import BaseSignal logger = logging.getLogger("Neo") def _get_sampling_rate(sampling_rate, sampling_period): ''' Gets the sampling_rate from either the sampling_period or the sampling_rate, or makes sure they match if both are specified ''' if sampling_period is None: if sampling_rate is None: raise ValueError("You must provide either the sampling rate or " + "sampling period") elif sampling_rate is None: sampling_rate = 1.0 / sampling_period elif sampling_period != 1.0 / sampling_rate: raise ValueError('The sampling_rate has to be 1/sampling_period') if not hasattr(sampling_rate, 'units'): raise TypeError("Sampling rate/sampling period must have units") return sampling_rate def _new_AnalogSignalArray(cls, signal, units=None, dtype=None, copy=True, t_start=0 * pq.s, sampling_rate=None, sampling_period=None, name=None, file_origin=None, description=None, array_annotations=None, annotations=None, channel_index=None, segment=None): ''' A function to map AnalogSignal.__new__ to function that does not do the unit checking. This is needed for pickle to work. 
''' obj = cls(signal=signal, units=units, dtype=dtype, copy=copy, t_start=t_start, sampling_rate=sampling_rate, sampling_period=sampling_period, name=name, file_origin=file_origin, description=description, array_annotations=array_annotations, **annotations) obj.channel_index = channel_index obj.segment = segment return obj class AnalogSignal(BaseSignal): ''' Array of one or more continuous analog signals. A representation of several continuous, analog signals that have the same duration, sampling rate and start time. Basically, it is a 2D array: dim 0 is time, dim 1 is channel index Inherits from :class:`quantities.Quantity`, which in turn inherits from :class:`numpy.ndarray`. *Usage*:: >>> from neo.core import AnalogSignal >>> import quantities as pq >>> >>> sigarr = AnalogSignal([[1, 2, 3], [4, 5, 6]], units='V', ... sampling_rate=1*pq.Hz) >>> >>> sigarr >>> sigarr[:,1] >>> sigarr[1, 1] array(5) * V *Required attributes/properties*: :signal: (quantity array 2D, numpy array 2D, or list (data, channel)) The data itself. :units: (quantity units) Required if the signal is a list or NumPy array, not if it is a :class:`Quantity` :t_start: (quantity scalar) Time when signal begins :sampling_rate: *or* **sampling_period** (quantity scalar) Number of samples per unit time or interval between two samples. If both are specified, they are checked for consistency. *Recommended attributes/properties*: :name: (str) A label for the dataset. :description: (str) Text description. :file_origin: (str) Filesystem path or URL of the original data file. *Optional attributes/properties*: :dtype: (numpy dtype or str) Override the dtype of the signal array. :copy: (bool) True by default. :array_annotations: (dict) Dict mapping strings to numpy arrays containing annotations \ for all data points Note: Any other additional arguments are assumed to be user-specific metadata and stored in :attr:`annotations`. *Properties available on this object*: :sampling_rate: (quantity scalar) Number of samples per unit time. (1/:attr:`sampling_period`) :sampling_period: (quantity scalar) Interval between two samples. (1/:attr:`quantity scalar`) :duration: (Quantity) Signal duration, read-only. (size * :attr:`sampling_period`) :t_stop: (quantity scalar) Time when signal ends, read-only. (:attr:`t_start` + :attr:`duration`) :times: (quantity 1D) The time points of each sample of the signal, read-only. (:attr:`t_start` + arange(:attr:`shape`[0])/:attr:`sampling_rate`) :channel_index: access to the channel_index attribute of the principal ChannelIndex associated with this signal. *Slicing*: :class:`AnalogSignal` objects can be sliced. When taking a single column (dimension 0, e.g. [0, :]) or a single element, a :class:`~quantities.Quantity` is returned. Otherwise an :class:`AnalogSignal` (actually a view) is returned, with the same metadata, except that :attr:`t_start` is changed if the start index along dimension 1 is greater than 1. Note that slicing an :class:`AnalogSignal` may give a different result to slicing the underlying NumPy array since signals are always two-dimensional. 
*Operations available on this object*: == != + * / ''' _single_parent_objects = ('Segment', 'ChannelIndex') _quantity_attr = 'signal' _necessary_attrs = (('signal', pq.Quantity, 2), ('sampling_rate', pq.Quantity, 0), ('t_start', pq.Quantity, 0)) _recommended_attrs = BaseNeo._recommended_attrs def __new__(cls, signal, units=None, dtype=None, copy=True, t_start=0 * pq.s, sampling_rate=None, sampling_period=None, name=None, file_origin=None, description=None, array_annotations=None, **annotations): ''' Constructs new :class:`AnalogSignal` from data. This is called whenever a new class:`AnalogSignal` is created from the constructor, but not when slicing. __array_finalize__ is called on the new object. ''' signal = cls._rescale(signal, units=units) obj = pq.Quantity(signal, units=units, dtype=dtype, copy=copy).view(cls) if obj.ndim == 1: obj.shape = (-1, 1) if t_start is None: raise ValueError('t_start cannot be None') obj._t_start = t_start obj._sampling_rate = _get_sampling_rate(sampling_rate, sampling_period) obj.segment = None obj.channel_index = None return obj def __init__(self, signal, units=None, dtype=None, copy=True, t_start=0 * pq.s, sampling_rate=None, sampling_period=None, name=None, file_origin=None, description=None, array_annotations=None, **annotations): ''' Initializes a newly constructed :class:`AnalogSignal` instance. ''' # This method is only called when constructing a new AnalogSignal, # not when slicing or viewing. We use the same call signature # as __new__ for documentation purposes. Anything not in the call # signature is stored in annotations. # Calls parent __init__, which grabs universally recommended # attributes and sets up self.annotations DataObject.__init__(self, name=name, file_origin=file_origin, description=description, array_annotations=array_annotations, **annotations) def __reduce__(self): ''' Map the __new__ function onto _new_AnalogSignalArray, so that pickle works ''' return _new_AnalogSignalArray, (self.__class__, np.array(self), self.units, self.dtype, True, self.t_start, self.sampling_rate, self.sampling_period, self.name, self.file_origin, self.description, self.array_annotations, self.annotations, self.channel_index, self.segment) def _array_finalize_spec(self, obj): ''' Set default values for attributes specific to :class:`AnalogSignal`. Common attributes are defined in :meth:`__array_finalize__` in :class:`basesignal.BaseSignal`), which is called every time a new signal is created and calls this method. ''' self._t_start = getattr(obj, '_t_start', 0 * pq.s) self._sampling_rate = getattr(obj, '_sampling_rate', None) return obj def __deepcopy__(self, memo): cls = self.__class__ new_signal = cls(np.array(self), units=self.units, dtype=self.dtype, t_start=self.t_start, sampling_rate=self.sampling_rate, sampling_period=self.sampling_period, name=self.name, file_origin=self.file_origin, description=self.description) new_signal.__dict__.update(self.__dict__) memo[id(self)] = new_signal for k, v in self.__dict__.items(): try: setattr(new_signal, k, deepcopy(v, memo)) except TypeError: setattr(new_signal, k, v) return new_signal def __repr__(self): ''' Returns a string representing the :class:`AnalogSignal`. ''' return ('<%s(%s, [%s, %s], sampling rate: %s)>' % (self.__class__.__name__, super(AnalogSignal, self).__repr__(), self.t_start, self.t_stop, self.sampling_rate)) def get_channel_index(self): """ """ if self.channel_index: return self.channel_index.index else: return None def __getitem__(self, i): ''' Get the item or slice :attr:`i`. 
''' if isinstance(i, (int, np.integer)): # a single point in time across all channels obj = super(AnalogSignal, self).__getitem__(i) obj = pq.Quantity(obj.magnitude, units=obj.units) elif isinstance(i, tuple): obj = super(AnalogSignal, self).__getitem__(i) j, k = i if isinstance(j, (int, np.integer)): # extract a quantity array obj = pq.Quantity(obj.magnitude, units=obj.units) else: if isinstance(j, slice): if j.start: obj.t_start = (self.t_start + j.start * self.sampling_period) if j.step: obj.sampling_period *= j.step elif isinstance(j, np.ndarray): raise NotImplementedError( "Arrays not yet supported") # in the general case, would need to return # IrregularlySampledSignal(Array) else: raise TypeError("%s not supported" % type(j)) if isinstance(k, (int, np.integer)): obj = obj.reshape(-1, 1) if self.channel_index: obj.channel_index = self.channel_index.__getitem__(k) obj.array_annotate(**deepcopy(self.array_annotations_at_index(k))) elif isinstance(i, slice): obj = super(AnalogSignal, self).__getitem__(i) if i.start: obj.t_start = self.t_start + i.start * self.sampling_period obj.array_annotations = deepcopy(self.array_annotations) elif isinstance(i, np.ndarray): # Indexing of an AnalogSignal is only consistent if the resulting number of # samples is the same for each trace. The time axis for these samples is not # guaranteed to be continuous, so returning a Quantity instead of an AnalogSignal here. new_time_dims = np.sum(i, axis=0) if len(new_time_dims) and all(new_time_dims == new_time_dims[0]): obj = np.asarray(self).T.__getitem__(i.T) obj = obj.T.reshape(self.shape[1], -1).T obj = pq.Quantity(obj, units=self.units) else: raise IndexError("indexing of an AnalogSignals needs to keep the same number of " "sample for each trace contained") else: raise IndexError("index should be an integer, tuple, slice or boolean numpy array") return obj def __setitem__(self, i, value): """ Set an item or slice defined by :attr:`i` to `value`. """ # because AnalogSignals are always at least two-dimensional, # we need to handle the case where `i` is an integer if isinstance(i, int): i = slice(i, i + 1) elif isinstance(i, tuple): j, k = i if isinstance(k, int): i = (j, slice(k, k + 1)) return super(AnalogSignal, self).__setitem__(i, value) # sampling_rate attribute is handled as a property so type checking can # be done @property def sampling_rate(self): ''' Number of samples per unit time. (1/:attr:`sampling_period`) ''' return self._sampling_rate @sampling_rate.setter def sampling_rate(self, rate): ''' Setter for :attr:`sampling_rate` ''' if rate is None: raise ValueError('sampling_rate cannot be None') elif not hasattr(rate, 'units'): raise ValueError('sampling_rate must have units') self._sampling_rate = rate # sampling_period attribute is handled as a property on underlying rate @property def sampling_period(self): ''' Interval between two samples. (1/:attr:`sampling_rate`) ''' return 1. / self.sampling_rate @sampling_period.setter def sampling_period(self, period): ''' Setter for :attr:`sampling_period` ''' if period is None: raise ValueError('sampling_period cannot be None') elif not hasattr(period, 'units'): raise ValueError('sampling_period must have units') self.sampling_rate = 1. / period # t_start attribute is handled as a property so type checking can be done @property def t_start(self): ''' Time when signal begins. 
''' return self._t_start @t_start.setter def t_start(self, start): ''' Setter for :attr:`t_start` ''' if start is None: raise ValueError('t_start cannot be None') self._t_start = start @property def duration(self): ''' Signal duration (:attr:`size` * :attr:`sampling_period`) ''' return self.shape[0] / self.sampling_rate @property def t_stop(self): ''' Time when signal ends. (:attr:`t_start` + :attr:`duration`) ''' return self.t_start + self.duration @property def times(self): ''' The time points of each sample of the signal (:attr:`t_start` + arange(:attr:`shape`)/:attr:`sampling_rate`) ''' return self.t_start + np.arange(self.shape[0]) / self.sampling_rate def __eq__(self, other): ''' Equality test (==) ''' if (isinstance(other, AnalogSignal) and ( self.t_start != other.t_start or self.sampling_rate != other.sampling_rate)): return False return super(AnalogSignal, self).__eq__(other) def _check_consistency(self, other): ''' Check if the attributes of another :class:`AnalogSignal` are compatible with this one. ''' if isinstance(other, AnalogSignal): for attr in "t_start", "sampling_rate": if getattr(self, attr) != getattr(other, attr): raise ValueError( "Inconsistent values of %s" % attr) # how to handle name and annotations? def _repr_pretty_(self, pp, cycle): ''' Handle pretty-printing the :class:`AnalogSignal`. ''' pp.text("{cls} with {channels} channels of length {length}; " "units {units}; datatype {dtype} ".format(cls=self.__class__.__name__, channels=self.shape[1], length=self.shape[0], units=self.units.dimensionality.string, dtype=self.dtype)) if self._has_repr_pretty_attrs_(): pp.breakable() self._repr_pretty_attrs_(pp, cycle) def _pp(line): pp.breakable() with pp.group(indent=1): pp.text(line) for line in ["sampling rate: {0}".format(self.sampling_rate), "time: {0} to {1}".format(self.t_start, self.t_stop)]: _pp(line) def time_index(self, t): """Return the array index corresponding to the time `t`""" t = t.rescale(self.sampling_period.units) i = (t - self.t_start) / self.sampling_period i = int(np.rint(i.magnitude)) return i def time_slice(self, t_start, t_stop): ''' Creates a new AnalogSignal corresponding to the time slice of the original AnalogSignal between times t_start, t_stop. Note, that for numerical stability reasons if t_start, t_stop do not fall exactly on the time bins defined by the sampling_period they will be rounded to the nearest sampling bins. ''' # checking start time and transforming to start index if t_start is None: i = 0 else: i = self.time_index(t_start) # checking stop time and transforming to stop index if t_stop is None: j = len(self) else: j = self.time_index(t_stop) if (i < 0) or (j > len(self)): raise ValueError('t_start, t_stop have to be withing the analog \ signal duration') # we're going to send the list of indicies so that we get *copy* of the # sliced data obj = super(AnalogSignal, self).__getitem__(np.arange(i, j, 1)) # If there is any data remaining, there will be data for every channel # In this case, array_annotations need to stay available # super.__getitem__ cannot do this, so it needs to be done here if len(obj) > 0: obj.array_annotations = self.array_annotations obj.t_start = self.t_start + i * self.sampling_period return obj def splice(self, signal, copy=False): """ Replace part of the current signal by a new piece of signal. The new piece of signal will overwrite part of the current signal starting at the time given by the new piece's `t_start` attribute. 
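A minimal sketch (illustrative; ``sig`` and ``patch`` are assumed to be
compatible :class:`AnalogSignal` objects, with ``patch`` lying entirely
within the time range of ``sig``):

>>> sig.splice(patch)                    # overwrite sig in place from patch.t_start onwards
>>> new = sig.splice(patch, copy=True)   # leave sig untouched and return a new signal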
The signal to be spliced in must have the same physical dimensions, sampling rate, and number of channels as the current signal and fit within it. If `copy` is False (the default), modify the current signal in place. If `copy` is True, return a new signal and leave the current one untouched. In this case, the new signal will not be linked to any parent objects. """ if signal.t_start < self.t_start: raise ValueError("Cannot splice earlier than the start of the signal") if signal.t_stop > self.t_stop: raise ValueError("Splice extends beyond signal") if signal.sampling_rate != self.sampling_rate: raise ValueError("Sampling rates do not match") i = self.time_index(signal.t_start) j = i + signal.shape[0] if copy: new_signal = deepcopy(self) new_signal.segment = None new_signal.channel_index = None new_signal[i:j, :] = signal return new_signal else: self[i:j, :] = signal return self neo-0.7.2/neo/core/baseneo.py0000600013464101346420000003514613507452453014206 0ustar yohyoh# -*- coding: utf-8 -*- """ This module defines :class:`BaseNeo`, the abstract base class used by all :module:`neo.core` classes. """ # needed for python 3 compatibility from __future__ import absolute_import, division, print_function from datetime import datetime, date, time, timedelta from decimal import Decimal import logging from numbers import Number import numpy as np ALLOWED_ANNOTATION_TYPES = (int, float, complex, str, bytes, type(None), datetime, date, time, timedelta, Number, Decimal, np.number, np.bool_) # handle both Python 2 and Python 3 try: ALLOWED_ANNOTATION_TYPES += (long, unicode) except NameError: pass try: basestring except NameError: basestring = str logger = logging.getLogger("Neo") class MergeError(Exception): pass def _check_annotations(value): """ Recursively check that value is either of a "simple" type (number, string, date/time) or is a (possibly nested) dict, list or numpy array containing only simple types. """ if isinstance(value, np.ndarray): if not issubclass(value.dtype.type, ALLOWED_ANNOTATION_TYPES): raise ValueError("Invalid annotation. NumPy arrays with dtype %s" "are not allowed" % value.dtype.type) elif isinstance(value, dict): for element in value.values(): _check_annotations(element) elif isinstance(value, (list, tuple)): for element in value: _check_annotations(element) elif not isinstance(value, ALLOWED_ANNOTATION_TYPES): raise ValueError("Invalid annotation. Annotations of type %s are not" "allowed" % type(value)) def merge_annotation(a, b): """ First attempt at a policy for merging annotations (intended for use with parallel computations using MPI). This policy needs to be discussed further, or we could allow the user to specify a policy. Current policy: For arrays or lists: concatenate For dicts: merge recursively For strings: concatenate with ';' Otherwise: fail if the annotations are not equal """ assert type(a) == type(b), 'type(%s) %s != type(%s) %s' % (a, type(a), b, type(b)) if isinstance(a, dict): return merge_annotations(a, b) elif isinstance(a, np.ndarray): # concatenate b to a return np.append(a, b) elif isinstance(a, list): # concatenate b to a return a + b elif isinstance(a, basestring): if a == b: return a else: return a + ";" + b else: assert a == b, '%s != %s' % (a, b) return a def merge_annotations(A, B): """ Merge two sets of annotations. Merging follows these rules: All keys that are in A or B, but not both, are kept. 
For keys that are present in both: For arrays or lists: concatenate For dicts: merge recursively For strings: concatenate with ';' Otherwise: warn if the annotations are not equal """ merged = {} for name in A: if name in B: try: merged[name] = merge_annotation(A[name], B[name]) except BaseException as exc: # exc.args += ('key %s' % name,) # raise merged[name] = "MERGE CONFLICT" # temporary hack else: merged[name] = A[name] for name in B: if name not in merged: merged[name] = B[name] logger.debug("Merging annotations: A=%s B=%s merged=%s", A, B, merged) return merged def _reference_name(class_name): """ Given the name of a class, return an attribute name to be used for references to instances of that class. For example, a Segment object has a parent Block object, referenced by `segment.block`. The attribute name `block` is obtained by calling `_container_name("Block")`. """ name_map = { "ChannelIndex": "channel_index" } return name_map.get(class_name, class_name.lower()) def _container_name(class_name): """ Given the name of a class, return an attribute name to be used for lists (or other containers) containing instances of that class. For example, a Block object contains a list of Segment objects, referenced by `block.segments`. The attribute name `segments` is obtained by calling `_container_name_plural("Segment")`. """ name_map = { "ChannelIndex": "channel_indexes" } return name_map.get(class_name, _reference_name(class_name) + 's') class BaseNeo(object): """ This is the base class from which all Neo objects inherit. This class implements support for universally recommended arguments, and also sets up the :attr:`annotations` dict for additional arguments. Each class can define one or more of the following class attributes: :_single_parent_objects: Neo objects that can be parents of this object. This attribute is used in cases where only one parent of this class is allowed. An instance attribute named class.__name__.lower() will be automatically defined to hold this parent and will be initialized to None. :_multi_parent_objects: Neo objects that can be parents of this object. This attribute is used in cases where multiple parents of this class is allowed. An instance attribute named class.__name__.lower()+'s' will be automatically defined to hold this parent and will be initialized to an empty list. :_necessary_attrs: A list of tuples containing the attributes that the class must have. The tuple can have 2-4 elements. The first element is the attribute name. The second element is the attribute type. The third element is the number of dimensions (only for numpy arrays and quantities). The fourth element is the dtype of array (only for numpy arrays and quantities). This does NOT include the attributes holding the parents or children of the object. :_recommended_attrs: A list of tuples containing the attributes that the class may optionally have. It uses the same structure as :_necessary_attrs: :_repr_pretty_attrs_keys_: The names of attributes printed when pretty-printing using iPython. The following helper properties are available: :_parent_objects: All parent objects. :_single_parent_objects: + :_multi_parent_objects: :_single_parent_containers: The names of the container attributes used to store :_single_parent_objects: :_multi_parent_containers: The names of the container attributes used to store :_multi_parent_objects: :_parent_containers: All parent container attributes. 
:_single_parent_containers: + :_multi_parent_containers: :parents: All objects that are parents of the current object. :_all_attrs: All required and optional attributes. :_necessary_attrs: + :_recommended_attrs: The following "universal" methods are available: :__init__: Grabs the universally recommended arguments :attr:`name`, :attr:`file_origin`, and :attr:`description` and stores them as attributes. Also takes every additional argument (that is, every argument that is not handled by :class:`BaseNeo` or the child class), and puts them in the dict :attr:`annotations`. :annotate(**args): Updates :attr:`annotations` with keyword/value pairs. :merge(**args): Merge the contents of another object into this one. The merge method implemented here only merges annotations (see :merge_annotations:). Subclasses should implement their own merge rules. :merge_annotations(**args): Merge the :attr:`annotations` of another object into this one. Each child class should: 0) describe its parents (if any) and attributes in the relevant class attributes. :_recommended_attrs: should append BaseNeo._recommended_attrs to the end. 1) call BaseNeo.__init__(self, name=name, description=description, file_origin=file_origin, **annotations) with the universal recommended arguments, plus optional annotations 2) process its required arguments in its __new__ or __init__ method 3) process its non-universal recommended arguments in its __new__ or __init__ method Non-keyword arguments should only be used for required arguments. The required and recommended arguments for each child class (Neo object) are specified in the _necessary_attrs and _recommended_attrs attributes and documentation for the child object. """ # these attributes control relationships, they need to be # specified in each child class # Parent objects whose children can have a single parent _single_parent_objects = () # Parent objects whose children can have multiple parents _multi_parent_objects = () # Attributes that an instance is required to have defined _necessary_attrs = () # Attributes that an instance may or may not have defined _recommended_attrs = (('name', str), ('description', str), ('file_origin', str)) # Attributes that are used for pretty-printing _repr_pretty_attrs_keys_ = ("name", "description", "annotations") def __init__(self, name=None, description=None, file_origin=None, **annotations): """ This is the base constructor for all Neo objects. Stores universally recommended attributes and creates :attr:`annotations` from additional arguments not processed by :class:`BaseNeo` or the child class. """ # create `annotations` for additional arguments _check_annotations(annotations) self.annotations = annotations # these attributes are recommended for all objects. self.name = name self.description = description self.file_origin = file_origin # initialize parent containers for parent in self._single_parent_containers: setattr(self, parent, None) for parent in self._multi_parent_containers: setattr(self, parent, []) def annotate(self, **annotations): """ Add annotations (non-standardized metadata) to a Neo object.
Example: >>> obj.annotate(key1=value0, key2=value1) >>> obj.key2 value2 """ _check_annotations(annotations) self.annotations.update(annotations) def _has_repr_pretty_attrs_(self): return any(getattr(self, k) for k in self._repr_pretty_attrs_keys_) def _repr_pretty_attrs_(self, pp, cycle): first = True for key in self._repr_pretty_attrs_keys_: value = getattr(self, key) if value: if first: first = False else: pp.breakable() with pp.group(indent=1): pp.text("{0}: ".format(key)) pp.pretty(value) def _repr_pretty_(self, pp, cycle): """ Handle pretty-printing the :class:`BaseNeo`. """ pp.text(self.__class__.__name__) if self._has_repr_pretty_attrs_(): pp.breakable() self._repr_pretty_attrs_(pp, cycle) @property def _single_parent_containers(self): """ Containers for parent objects whose children can have a single parent. """ return tuple([_reference_name(parent) for parent in self._single_parent_objects]) @property def _multi_parent_containers(self): """ Containers for parent objects whose children can have multiple parents. """ return tuple([_container_name(parent) for parent in self._multi_parent_objects]) @property def _parent_objects(self): """ All types for parent objects. """ return self._single_parent_objects + self._multi_parent_objects @property def _parent_containers(self): """ All containers for parent objects. """ return self._single_parent_containers + self._multi_parent_containers @property def parents(self): """ All parent objects storing the current object. """ single = [getattr(self, attr) for attr in self._single_parent_containers] multi = [list(getattr(self, attr)) for attr in self._multi_parent_containers] return tuple(single + sum(multi, [])) @property def _all_attrs(self): """ Returns a combination of all required and recommended attributes. """ return self._necessary_attrs + self._recommended_attrs def merge_annotations(self, other): """ Merge annotations from the other object into this one. Merging follows these rules: All keys that are in the either object, but not both, are kept. For keys that are present in both objects: For arrays or lists: concatenate the two arrays For dicts: merge recursively For strings: concatenate with ';' Otherwise: fail if the annotations are not equal """ merged_annotations = merge_annotations(self.annotations, other.annotations) self.annotations.update(merged_annotations) def merge(self, other): """ Merge the contents of another object into this one. See :meth:`merge_annotations` for details of the merge operation. """ self.merge_annotations(other) neo-0.7.2/neo/core/basesignal.py0000600013464101346420000002527613507452474014710 0ustar yohyoh# -*- coding: utf-8 -*- ''' This module implements :class:`BaseSignal`, an array of signals. This is a parent class from which all signal objects inherit: :class:`AnalogSignal` and :class:`IrregularlySampledSignal` :class:`BaseSignal` inherits from :class:`quantities.Quantity`, which inherits from :class:`numpy.array`. Inheritance from :class:`numpy.array` is explained here: http://docs.scipy.org/doc/numpy/user/basics.subclassing.html In brief: * Constructor :meth:`__new__` for :class:`BaseSignal` doesn't exist. Only child objects :class:`AnalogSignal` and :class:`IrregularlySampledSignal` can be created. 
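* A signal object is therefore obtained by instantiating one of the child
  classes directly, for example (illustrative)::

      >>> import numpy as np
      >>> import quantities as pq
      >>> from neo.core import AnalogSignal
      >>> sig = AnalogSignal(np.random.randn(1000, 4) * pq.mV,
      ...                    sampling_rate=1 * pq.kHz)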
''' # needed for Python 3 compatibility from __future__ import absolute_import, division, print_function import copy import logging import numpy as np import quantities as pq from neo.core.baseneo import BaseNeo, MergeError, merge_annotations from neo.core.dataobject import DataObject, ArrayDict from neo.core.channelindex import ChannelIndex logger = logging.getLogger("Neo") class BaseSignal(DataObject): ''' This is the base class from which all signal objects inherit: :class:`AnalogSignal` and :class:`IrregularlySampledSignal`. This class contains all common methods of both child classes. It uses the following child class attributes: :_necessary_attrs: a list of the attributes that the class must have. :_recommended_attrs: a list of the attributes that the class may optionally have. ''' def _array_finalize_spec(self, obj): ''' Called by :meth:`__array_finalize__`, used to customize behaviour of sub-classes. ''' return obj def __array_finalize__(self, obj): ''' This is called every time a new signal is created. It is the appropriate place to set default values for attributes for a signal constructed by slicing or viewing. User-specified values are only relevant for construction from constructor, and these are set in __new__ in the child object. Then they are just copied over here. Default values for the specific attributes for subclasses (:class:`AnalogSignal` and :class:`IrregularlySampledSignal`) are set in :meth:`_array_finalize_spec` ''' super(BaseSignal, self).__array_finalize__(obj) self._array_finalize_spec(obj) # The additional arguments self.annotations = getattr(obj, 'annotations', {}) # Add empty array annotations, because they cannot always be copied, # but do not overwrite existing ones from slicing etc. # This ensures the attribute exists if not hasattr(self, 'array_annotations'): self.array_annotations = ArrayDict(self._get_arr_ann_length()) # Globally recommended attributes self.name = getattr(obj, 'name', None) self.file_origin = getattr(obj, 'file_origin', None) self.description = getattr(obj, 'description', None) # Parent objects self.segment = getattr(obj, 'segment', None) self.channel_index = getattr(obj, 'channel_index', None) @classmethod def _rescale(self, signal, units=None): ''' Check that units are present, and rescale the signal if necessary. This is called whenever a new signal is created from the constructor. See :meth:`__new__' in :class:`AnalogSignal` and :class:`IrregularlySampledSignal` ''' if units is None: if not hasattr(signal, "units"): raise ValueError("Units must be specified") elif isinstance(signal, pq.Quantity): # This test always returns True, i.e. rescaling is always executed if one of the units # is a pq.CompoundUnit. This is fine because rescaling is correct anyway. if pq.quantity.validate_dimensionality(units) != signal.dimensionality: signal = signal.rescale(units) return signal def rescale(self, units): obj = super(BaseSignal, self).rescale(units) obj.channel_index = self.channel_index return obj def __getslice__(self, i, j): ''' Get a slice from :attr:`i` to :attr:`j`.attr[0] Doesn't get called in Python 3, :meth:`__getitem__` is called instead ''' return self.__getitem__(slice(i, j)) def __ne__(self, other): ''' Non-equality test (!=) ''' return not self.__eq__(other) def _apply_operator(self, other, op, *args): ''' Handle copying metadata to the new signal after a mathematical operation. 
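For example (illustrative; ``sig`` is assumed to be an :class:`AnalogSignal`
in mV, with ``quantities`` imported as ``pq``):

>>> shifted = sig + 10 * pq.mV
>>> # ``shifted`` is a new signal; t_start, sampling_rate, annotations and
>>> # array_annotations have been copied over from ``sig``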
''' self._check_consistency(other) f = getattr(super(BaseSignal, self), op) new_signal = f(other, *args) new_signal._copy_data_complement(self) # _copy_data_complement can't always copy array annotations, # so this needs to be done locally new_signal.array_annotations = copy.deepcopy(self.array_annotations) return new_signal def _get_required_attributes(self, signal, units): ''' Return a list of the required attributes for a signal as a dictionary ''' required_attributes = {} for attr in self._necessary_attrs: if 'signal' == attr[0]: required_attributes[str(attr[0])] = signal else: required_attributes[str(attr[0])] = getattr(self, attr[0], None) required_attributes['units'] = units return required_attributes def duplicate_with_new_data(self, signal, units=None): ''' Create a new signal with the same metadata but different data. Required attributes of the signal are used. Note: Array annotations can not be copied here because length of data can change ''' if units is None: units = self.units # else: # units = pq.quantity.validate_dimensionality(units) # signal is the new signal required_attributes = self._get_required_attributes(signal, units) new = self.__class__(**required_attributes) new._copy_data_complement(self) new.annotations.update(self.annotations) # Note: Array annotations are not copied here, because it is not ensured # that the same number of signals is used and they would possibly make no sense # when combined with another signal return new def _copy_data_complement(self, other): ''' Copy the metadata from another signal. Required and recommended attributes of the signal are used. Note: Array annotations can not be copied here because length of data can change ''' all_attr = {self._recommended_attrs, self._necessary_attrs} for sub_at in all_attr: for attr in sub_at: if attr[0] != 'signal': setattr(self, attr[0], getattr(other, attr[0], None)) setattr(self, 'annotations', getattr(other, 'annotations', None)) # Note: Array annotations cannot be copied because length of data can be changed # here # which would cause inconsistencies def __rsub__(self, other, *args): ''' Backwards subtraction (other-self) ''' return self.__mul__(-1, *args) + other def __add__(self, other, *args): ''' Addition (+) ''' return self._apply_operator(other, "__add__", *args) def __sub__(self, other, *args): ''' Subtraction (-) ''' return self._apply_operator(other, "__sub__", *args) def __mul__(self, other, *args): ''' Multiplication (*) ''' return self._apply_operator(other, "__mul__", *args) def __truediv__(self, other, *args): ''' Float division (/) ''' return self._apply_operator(other, "__truediv__", *args) def __div__(self, other, *args): ''' Integer division (//) ''' return self._apply_operator(other, "__div__", *args) __radd__ = __add__ __rmul__ = __sub__ def merge(self, other): ''' Merge another signal into this one. The signal objects are concatenated horizontally (column-wise, :func:`np.hstack`). If the attributes of the two signal are not compatible, an Exception is raised. Required attributes of the signal are used. ''' for attr in self._necessary_attrs: if 'signal' != attr[0]: if getattr(self, attr[0], None) != getattr(other, attr[0], None): raise MergeError("Cannot merge these two signals as the %s differ." 
% attr[0]) if self.segment != other.segment: raise MergeError( "Cannot merge these two signals as they belong to different segments.") if hasattr(self, "lazy_shape"): if hasattr(other, "lazy_shape"): if self.lazy_shape[0] != other.lazy_shape[0]: raise MergeError("Cannot merge signals of different length.") merged_lazy_shape = (self.lazy_shape[0], self.lazy_shape[1] + other.lazy_shape[1]) else: raise MergeError("Cannot merge a lazy object with a real object.") if other.units != self.units: other = other.rescale(self.units) stack = np.hstack((self.magnitude, other.magnitude)) kwargs = {} for name in ("name", "description", "file_origin"): attr_self = getattr(self, name) attr_other = getattr(other, name) if attr_self == attr_other: kwargs[name] = attr_self else: kwargs[name] = "merge(%s, %s)" % (attr_self, attr_other) merged_annotations = merge_annotations(self.annotations, other.annotations) kwargs.update(merged_annotations) kwargs['array_annotations'] = self._merge_array_annotations(other) signal = self.__class__(stack, units=self.units, dtype=self.dtype, copy=False, t_start=self.t_start, sampling_rate=self.sampling_rate, **kwargs) signal.segment = self.segment if hasattr(self, "lazy_shape"): signal.lazy_shape = merged_lazy_shape # merge channel_index (move to ChannelIndex.merge()?) if self.channel_index and other.channel_index: signal.channel_index = ChannelIndex(index=np.arange(signal.shape[1]), channel_ids=np.hstack( [self.channel_index.channel_ids, other.channel_index.channel_ids]), channel_names=np.hstack( [self.channel_index.channel_names, other.channel_index.channel_names])) else: signal.channel_index = ChannelIndex(index=np.arange(signal.shape[1])) return signal neo-0.7.2/neo/core/block.py0000600013464101346420000001157513420077704013660 0ustar yohyoh# -*- coding: utf-8 -*- ''' This module defines :class:`Block`, the main container gathering all the data, whether discrete or continous, for a given recording session. base class used by all :module:`neo.core` classes. :class:`Block` derives from :class:`Container`, from :module:`neo.core.container`. ''' # needed for python 3 compatibility from __future__ import absolute_import, division, print_function from datetime import datetime from neo.core.container import Container, unique_objs class Block(Container): ''' Main container gathering all the data, whether discrete or continous, for a given recording session. A block is not necessarily temporally homogeneous, in contrast to :class:`Segment`. *Usage*:: >>> from neo.core import (Block, Segment, ChannelIndex, ... AnalogSignal) >>> from quantities import nA, kHz >>> import numpy as np >>> >>> # create a Block with 3 Segment and 2 ChannelIndex objects ,,, blk = Block() >>> for ind in range(3): ... seg = Segment(name='segment %d' % ind, index=ind) ... blk.segments.append(seg) ... >>> for ind in range(2): ... chx = ChannelIndex(name='Array probe %d' % ind, ... index=np.arange(64)) ... blk.channel_indexes.append(chx) ... >>> # Populate the Block with AnalogSignal objects ... for seg in blk.segments: ... for chx in blk.channel_indexes: ... a = AnalogSignal(np.random.randn(10000, 64)*nA, ... sampling_rate=10*kHz) ... chx.analogsignals.append(a) ... seg.analogsignals.append(a) *Required attributes/properties*: None *Recommended attributes/properties*: :name: (str) A label for the dataset. :description: (str) Text description. :file_origin: (str) Filesystem path or URL of the original data file. :file_datetime: (datetime) The creation date and time of the original data file. 
:rec_datetime: (datetime) The date and time of the original recording. *Properties available on this object*: :list_units: descends through hierarchy and returns a list of :class:`Unit` objects existing in the block. This shortcut exists because a common analysis case is analyzing all neurons that you recorded in a session. Note: Any other additional arguments are assumed to be user-specific metadata and stored in :attr:`annotations`. *Container of*: :class:`Segment` :class:`ChannelIndex` ''' _container_child_objects = ('Segment', 'ChannelIndex') _child_properties = ('Unit',) _recommended_attrs = ((('file_datetime', datetime), ('rec_datetime', datetime), ('index', int)) + Container._recommended_attrs) _repr_pretty_attrs_keys_ = (Container._repr_pretty_attrs_keys_ + ('file_origin', 'file_datetime', 'rec_datetime', 'index')) _repr_pretty_containers = ('segments',) def __init__(self, name=None, description=None, file_origin=None, file_datetime=None, rec_datetime=None, index=None, **annotations): ''' Initalize a new :class:`Block` instance. ''' super(Block, self).__init__(name=name, description=description, file_origin=file_origin, **annotations) self.file_datetime = file_datetime self.rec_datetime = rec_datetime self.index = index @property def data_children_recur(self): ''' All data child objects stored in the current object, obtained recursively. ''' # subclassing this to remove duplicate objects such as SpikeTrain # objects in both Segment and Unit # Only Block can have duplicate items right now, so implement # this here for performance reasons. return tuple(unique_objs(super(Block, self).data_children_recur)) def list_children_by_class(self, cls): ''' List all children of a particular class recursively. You can either provide a class object, a class name, or the name of the container storing the class. ''' # subclassing this to remove duplicate objects such as SpikeTrain # objects in both Segment and Unit # Only Block can have duplicate items right now, so implement # this here for performance reasons. return unique_objs(super(Block, self).list_children_by_class(cls)) @property def list_units(self): ''' Return a list of all :class:`Unit` objects in the :class:`Block`. ''' return self.list_children_by_class('unit') neo-0.7.2/neo/core/channelindex.py0000600013464101346420000002154613507452453015231 0ustar yohyoh# -*- coding: utf-8 -*- ''' This module defines :class:`ChannelIndex`, a container for multiple data channels. :class:`ChannelIndex` derives from :class:`Container`, from :module:`neo.core.container`. ''' # needed for Python 3 compatibility from __future__ import absolute_import, division, print_function import numpy as np import quantities as pq from neo.core.container import Container class ChannelIndex(Container): ''' A container for indexing/grouping data channels. This container has several purposes: * Grouping all :class:`AnalogSignal`\s and :class:`IrregularlySampledSignal`\s inside a :class:`Block` across :class:`Segment`\s; * Indexing a subset of the channels within an :class:`AnalogSignal` and :class:`IrregularlySampledSignal`\s; * Container of :class:`Unit`\s. Discharges of multiple neurons (:class:`Unit`\'s) can be seen on the same channel. 
*Usage 1* providing channel IDs across multiple :class:`Segment`:: * Recording with 2 electrode arrays across 3 segments * Each array has 64 channels and its data is represented in a single :class:`AnalogSignal` object per electrode array * channel ids range from 0 to 127 with the first half covering electrode 0 and second half covering electrode 1 >>> from neo.core import (Block, Segment, ChannelIndex, ... AnalogSignal) >>> from quantities import nA, kHz >>> import numpy as np ... >>> # create a Block with 3 Segment and 2 ChannelIndex objects >>> blk = Block() >>> for ind in range(3): ... seg = Segment(name='segment %d' % ind, index=ind) ... blk.segments.append(seg) ... >>> for ind in range(2): ... channel_ids=np.arange(64)+64*ind ... chx = ChannelIndex(name='Array probe %d' % ind, ... index=np.arange(64), ... channel_ids=channel_ids, ... channel_names=['Channel %i' % chid ... for chid in channel_ids]) ... blk.channel_indexes.append(chx) ... >>> # Populate the Block with AnalogSignal objects >>> for seg in blk.segments: ... for chx in blk.channel_indexes: ... a = AnalogSignal(np.random.randn(10000, 64)*nA, ... sampling_rate=10*kHz) ... # link AnalogSignal and ID providing channel_index ... a.channel_index = chx ... chx.analogsignals.append(a) ... seg.analogsignals.append(a) *Usage 2* grouping channels:: * Recording with a single probe with 8 channels, 4 of which belong to a Tetrode * Global channel IDs range from 0 to 7 * An additional ChannelIndex is used to group a subset of the Tetrode channels >>> from neo.core import Block, Segment, ChannelIndex, AnalogSignal >>> import numpy as np >>> from quantities import mV, kHz ... >>> # Create a Block >>> blk = Block() >>> blk.segments.append(Segment()) ... >>> # Create a signal with 8 channels and a ChannelIndex handling the >>> # channel IDs (see usage case 1) >>> sig = AnalogSignal(np.random.randn(1000, 8)*mV, sampling_rate=10*kHz) >>> chx = ChannelIndex(name='Probe 0', index=range(8), ... channel_ids=range(8), ... channel_names=['Channel %i' % chid ... for chid in range(8)]) >>> chx.analogsignals.append(sig) >>> sig.channel_index=chx >>> blk.segments[0].analogsignals.append(sig) ... >>> # Create a new ChannelIndex which groups four channels from the >>> # analogsignal and provides a second ID scheme >>> chx = ChannelIndex(name='Tetrode 0', ... channel_names=np.array(['Tetrode ch1', ... 'Tetrode ch4', ... 'Tetrode ch6', ... 'Tetrode ch7']), ... index=np.array([0, 3, 5, 6])) >>> # Attach the ChannelIndex to the Block, >>> # but not to the AnalogSignal, since sig.channel_index is >>> # already linked to the global ChannelIndex of Probe 0 created above >>> chx.analogsignals.append(sig) >>> blk.channel_indexes.append(chx) *Usage 3* dealing with :class:`Unit` objects:: * Group 5 unit objects in a single :class:`ChannelIndex` object >>> from neo.core import Block, ChannelIndex, Unit ... >>> # Create a Block >>> blk = Block() ... >>> # Create a new ChannelIndex and add it to the Block >>> chx = ChannelIndex(index=None, name='octotrode A') >>> blk.channel_indexes.append(chx) ... >>> # create several Unit objects and add them to the >>> # ChannelIndex >>> for ind in range(5): ... unit = Unit(name = 'unit %d' % ind, ... description='after a long and hard spike sorting') ... chx.units.append(unit) *Required attributes/properties*: :index: (numpy.array 1D dtype='i') Index of each channel in the attached signals (AnalogSignals and IrregularlySampledSignals). The order of the channel IDs needs to be consistent across attached signals.
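For example (illustrative, as in *Usage 2* above): ``index=np.array([0, 3, 5, 6])`` means that this ChannelIndex refers to columns 0, 3, 5 and 6 of each attached signal.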
*Recommended attributes/properties*: :name: (str) A label for the dataset. :description: (str) Text description. :file_origin: (str) Filesystem path or URL of the original data file. :channel_names: (numpy.array 1D dtype='S') Names for each recording channel. :channel_ids: (numpy.array 1D dtype='int') IDs of the corresponding channels referenced by 'index'. :coordinates: (quantity array 2D (x, y, z)) Physical or logical coordinates of all channels. Note: Any other additional arguments are assumed to be user-specific metadata and stored in :attr:`annotations`. *Container of*: :class:`AnalogSignal` :class:`IrregularlySampledSignal` :class:`Unit` ''' _container_child_objects = ('Unit',) _data_child_objects = ('AnalogSignal', 'IrregularlySampledSignal') _single_parent_objects = ('Block',) _necessary_attrs = (('index', np.ndarray, 1, np.dtype('i')),) _recommended_attrs = ((('channel_names', np.ndarray, 1, np.dtype('S')), ('channel_ids', np.ndarray, 1, np.dtype('i')), ('coordinates', pq.Quantity, 2)) + Container._recommended_attrs) def __init__(self, index, channel_names=None, channel_ids=None, name=None, description=None, file_origin=None, coordinates=None, **annotations): ''' Initialize a new :class:`ChannelIndex` instance. ''' # Inherited initialization # Sets universally recommended attributes, and places all others # in annotations super(ChannelIndex, self).__init__(name=name, description=description, file_origin=file_origin, **annotations) # Defaults if channel_names is None: channel_names = np.array([], dtype='S') if channel_ids is None: channel_ids = np.array([], dtype='i') # Store recommended attributes self.channel_names = np.array(channel_names) self.channel_ids = np.array(channel_ids) self.index = np.array(index) self.coordinates = coordinates def __getitem__(self, i): ''' Get the item or slice :attr:`i`. ''' index = self.index.__getitem__(i) if self.channel_names.size > 0: channel_names = self.channel_names[index] if not channel_names.shape: channel_names = [channel_names] else: channel_names = None if self.channel_ids.size > 0: channel_ids = self.channel_ids[index] if not channel_ids.shape: channel_ids = [channel_ids] else: channel_ids = None obj = ChannelIndex(index=np.arange(index.size), channel_names=channel_names, channel_ids=channel_ids) obj.block = self.block obj.analogsignals = self.analogsignals obj.irregularlysampledsignals = self.irregularlysampledsignals # we do not copy the list of units, since these are related to # the entire set of channels in the parent ChannelIndex return obj neo-0.7.2/neo/core/container.py0000600013464101346420000006053513507452453014554 0ustar yohyoh# -*- coding: utf-8 -*- """ This module implements generic container base class that all neo container object inherit from. It provides shared methods for all container types. :class:`Container` is derived from :class:`BaseNeo` """ # needed for python 3 compatibility from __future__ import absolute_import, division, print_function from neo.core.baseneo import BaseNeo, _reference_name, _container_name def unique_objs(objs): """ Return a list of objects in the list objs where all objects are unique using the "is" test. """ seen = set() return [obj for obj in objs if id(obj) not in seen and not seen.add(id(obj))] def filterdata(data, targdict=None, objects=None, **kwargs): """ Return a list of the objects in data matching *any* of the search terms in either their attributes or annotations. 
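For example (illustrative): ``filterdata(objs, name='Vm')`` keeps only those objects whose ``name`` attribute or ``'name'`` annotation equals ``'Vm'``.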
Search terms can be provided as keyword arguments or a dictionary, either as a positional argument after data or to the argument targdict. targdict can also be a list of dictionaries, in which case the filters are applied sequentially. If targdict and kwargs are both supplied, the targdict filters are applied first, followed by the kwarg filters. A targdict of None or {} and objects = None corresponds to no filters applied, therefore returning all child objects. Default targdict and objects is None. objects (optional) should be the name of a Neo object type, a neo object class, or a list of one or both of these. If specified, only these objects will be returned. """ # if objects are specified, get the classes if objects: if hasattr(objects, 'lower') or isinstance(objects, type): objects = [objects] elif objects is not None: return [] # handle cases with targdict if targdict is None: targdict = kwargs elif not kwargs: pass elif hasattr(targdict, 'keys'): targdict = [targdict, kwargs] else: targdict += [kwargs] if not targdict: results = data # if multiple dicts are provided, apply each filter sequentially elif not hasattr(targdict, 'keys'): # for performance reasons, only do the object filtering on the first # iteration results = filterdata(data, targdict=targdict[0], objects=objects) for targ in targdict[1:]: results = filterdata(results, targdict=targ) return results else: # do the actual filtering results = [] for key, value in sorted(targdict.items()): for obj in data: if (hasattr(obj, key) and getattr(obj, key) == value and all([obj is not res for res in results])): results.append(obj) elif (key in obj.annotations and obj.annotations[key] == value and all([obj is not res for res in results])): results.append(obj) # keep only objects of the correct classes if objects: results = [result for result in results if result.__class__ in objects or result.__class__.__name__ in objects] return results class Container(BaseNeo): """ This is the base class from which Neo container objects inherit. It derives from :class:`BaseNeo`. In addition to the setup :class:`BaseNeo` does, this class also automatically sets up the lists to hold the children of the object. Each class can define one or more of the following class attributes (in addition to those of BaseNeo): :_container_child_objects: Neo container objects that can be children of this object. This attribute is used in cases where the child can only have one parent of this type. An instance attribute named class.__name__.lower()+'s' will be automatically defined to hold this child and will be initialized to an empty list. :_data_child_objects: Neo data objects that can be children of this object. An instance attribute named class.__name__.lower()+'s' will be automatically defined to hold this child and will be initialized to an empty list. :_multi_child_objects: Neo container objects that can be children of this object. This attribute is used in cases where the child can have multiple parents of this type. An instance attribute named class.__name__.lower()+'s' will be automatically defined to hold this child and will be initialized to an empty list. :_child_properties: Properties that return sub-children of a particular type. These properties must still be defined. This is mostly used for generate_diagram. :_repr_pretty_containers: The names of containers attributes printed when pretty-printing using iPython. 
The following helper properties are available (in addition to those of BaseNeo): :_single_child_objects: All neo container objects that can be children of this object and where the child can only have one parent of this type. :_container_child_objects: + :_data_child_objects: :_child_objects: All child objects. :_single_child_objects: + :_multi_child_objects: :_container_child_containers: The names of the container attributes used to store :_container_child_objects: :_data_child_containers: The names of the container attributes used to store :_data_child_objects: :_single_child_containers: The names of the container attributes used to store :_single_child_objects: :_multi_child_containers: The names of the container attributes used to store :_multi_child_objects: :_child_containers: All child container attributes. :_single_child_containers: + :_multi_child_containers: :_single_children: All objects that are children of the current object where the child can only have one parent of this type. :_multi_children: All objects that are children of the current object where the child can have multiple parents of this type. :data_children: All data objects that are children of the current object. :container_children: All container objects that are children of the current object. :children: All Neo objects that are children of the current object. :data_children_recur: All data objects that are children of the current object or any of its children, any of its children's children, etc. :container_children_recur: All container objects that are children of the current object or any of its children, any of its children's children, etc. :children_recur: All Neo objects that are children of the current object or any of its children, any of its children's children, etc. The following "universal" methods are available (in addition to those of BaseNeo): :size: A dictionary where each key is an attribute storing child objects and the value is the number of objects stored in that attribute. :filter(**args): Retrieves children of the current object that have particular properties. :list_children_by_class(**args): Retrieves all children of the current object recursively that are of a particular class. :create_many_to_one_relationship(**args): For each child of the current object that can only have a single parent, set its parent to be the current object. :create_many_to_many_relationship(**args): For children of the current object that can have more than one parent of this type, put the current object in the parent list. :create_relationship(**args): Combines :create_many_to_one_relationship: and :create_many_to_many_relationship: :merge(**args): Annotations are merged based on the rules of :merge_annotations:. Child objects with the same name and a :merge: method are merged using that method. Other child objects are appended to the relevant container attribute. Parents attributes are NOT changed in this operation. Unlike :BaseNeo.merge:, this method implements all necessary merge rules for a container class. 
Each child class should: 0) call Container.__init__(self, name=name, description=description, file_origin=file_origin, **annotations) with the universal recommended arguments, plus optional annotations 1) process its required arguments in its __new__ or __init__ method 2) process its non-universal recommended arguments (in its __new__ or __init__ method """ # Child objects that are a container and have a single parent _container_child_objects = () # Child objects that have data and have a single parent _data_child_objects = () # Child objects that can have multiple parents _multi_child_objects = () # Properties returning children of children [of children...] _child_properties = () # Containers that are listed when pretty-printing _repr_pretty_containers = () def __init__(self, name=None, description=None, file_origin=None, **annotations): """ Initalize a new :class:`Container` instance. """ super(Container, self).__init__(name=name, description=description, file_origin=file_origin, **annotations) # initialize containers for container in self._child_containers: setattr(self, container, []) @property def _single_child_objects(self): """ Child objects that have a single parent. """ return self._container_child_objects + self._data_child_objects @property def _container_child_containers(self): """ Containers for child objects that are a container and have a single parent. """ return tuple([_container_name(child) for child in self._container_child_objects]) @property def _data_child_containers(self): """ Containers for child objects that have data and have a single parent. """ return tuple([_container_name(child) for child in self._data_child_objects]) @property def _single_child_containers(self): """ Containers for child objects with a single parent. """ return tuple([_container_name(child) for child in self._single_child_objects]) @property def _multi_child_containers(self): """ Containers for child objects that can have multiple parents. """ return tuple([_container_name(child) for child in self._multi_child_objects]) @property def _child_objects(self): """ All types for child objects. """ return self._single_child_objects + self._multi_child_objects @property def _child_containers(self): """ All containers for child objects. """ return self._single_child_containers + self._multi_child_containers @property def _single_children(self): """ All child objects that can only have single parents. """ childs = [list(getattr(self, attr)) for attr in self._single_child_containers] return tuple(sum(childs, [])) @property def _multi_children(self): """ All child objects that can have multiple parents. """ childs = [list(getattr(self, attr)) for attr in self._multi_child_containers] return tuple(sum(childs, [])) @property def data_children(self): """ All data child objects stored in the current object. Not recursive. """ childs = [list(getattr(self, attr)) for attr in self._data_child_containers] return tuple(sum(childs, [])) @property def container_children(self): """ All container child objects stored in the current object. Not recursive. """ childs = [list(getattr(self, attr)) for attr in self._container_child_containers + self._multi_child_containers] return tuple(sum(childs, [])) @property def children(self): """ All child objects stored in the current object. Not recursive. """ return self.data_children + self.container_children @property def data_children_recur(self): """ All data child objects stored in the current object, obtained recursively. 
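Example (illustrative; ``blk`` is assumed to be a populated :class:`Block`):

>>> blk.data_children_recur  # every AnalogSignal, SpikeTrain, ... contained anywhere below this object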
""" childs = [list(child.data_children_recur) for child in self.container_children] return self.data_children + tuple(sum(childs, [])) @property def container_children_recur(self): """ All container child objects stored in the current object, obtained recursively. """ childs = [list(child.container_children_recur) for child in self.container_children] return self.container_children + tuple(sum(childs, [])) @property def children_recur(self): """ All child objects stored in the current object, obtained recursively. """ return self.data_children_recur + self.container_children_recur @property def size(self): """ Get dictionary containing the names of child containers in the current object as keys and the number of children of that type as values. """ return dict((name, len(getattr(self, name))) for name in self._child_containers) def filter(self, targdict=None, data=True, container=False, recursive=True, objects=None, **kwargs): """ Return a list of child objects matching *any* of the search terms in either their attributes or annotations. Search terms can be provided as keyword arguments or a dictionary, either as a positional argument after data or to the argument targdict. targdict can also be a list of dictionaries, in which case the filters are applied sequentially. If targdict and kwargs are both supplied, the targdict filters are applied first, followed by the kwarg filters. A targdict of None or {} corresponds to no filters applied, therefore returning all child objects. Default targdict is None. If data is True (default), include data objects. If container is True (default False), include container objects. If recursive is True (default), descend into child containers for objects. objects (optional) should be the name of a Neo object type, a neo object class, or a list of one or both of these. If specified, only these objects will be returned. If not specified any type of object is returned. Default is None. Note that if recursive is True, containers not in objects will still be descended into. This overrides data and container. Examples:: >>> obj.filter(name="Vm") >>> obj.filter(objects=neo.SpikeTrain) >>> obj.filter(targdict={'myannotation':3}) """ # if objects are specified, get the classes if objects: data = True container = True children = [] # get the objects we want if data: if recursive: children.extend(self.data_children_recur) else: children.extend(self.data_children) if container: if recursive: children.extend(self.container_children_recur) else: children.extend(self.container_children) return filterdata(children, objects=objects, targdict=targdict, **kwargs) def list_children_by_class(self, cls): """ List all children of a particular class recursively. You can either provide a class object, a class name, or the name of the container storing the class. """ if not hasattr(cls, 'lower'): cls = cls.__name__ container_name = _container_name(cls) objs = list(getattr(self, container_name, [])) for child in self.container_children_recur: objs.extend(getattr(child, container_name, [])) return objs def create_many_to_one_relationship(self, force=False, recursive=True): """ For each child of the current object that can only have a single parent, set its parent to be the current object. Usage: >>> a_block.create_many_to_one_relationship() >>> a_block.create_many_to_one_relationship(force=True) If the current object is a :class:`Block`, you want to run populate_RecordingChannel first, because this will create new objects that this method will link up. 
If force is True overwrite any existing relationships If recursive is True desecend into child objects and create relationships there """ parent_name = _reference_name(self.__class__.__name__) for child in self._single_children: if (hasattr(child, parent_name) and getattr(child, parent_name) is None or force): setattr(child, parent_name, self) if recursive: for child in self.container_children: child.create_many_to_one_relationship(force=force, recursive=True) def create_many_to_many_relationship(self, append=True, recursive=True): """ For children of the current object that can have more than one parent of this type, put the current object in the parent list. If append is True add it to the list, otherwise overwrite the list. If recursive is True desecend into child objects and create relationships there """ parent_name = _container_name(self.__class__.__name__) for child in self._multi_children: if not hasattr(child, parent_name): continue if append: target = getattr(child, parent_name) if self not in target: target.append(self) continue setattr(child, parent_name, [self]) if recursive: for child in self.container_children: child.create_many_to_many_relationship(append=append, recursive=True) def create_relationship(self, force=False, append=True, recursive=True): """ For each child of the current object that can only have a single parent, set its parent to be the current object. For children of the current object that can have more than one parent of this type, put the current object in the parent list. If the current object is a :class:`Block`, you want to run populate_RecordingChannel first, because this will create new objects that this method will link up. If force is True overwrite any existing relationships If append is True add it to the list, otherwise overwrite the list. If recursive is True desecend into child objects and create relationships there """ self.create_many_to_one_relationship(force=force, recursive=False) self.create_many_to_many_relationship(append=append, recursive=False) if recursive: for child in self.container_children: child.create_relationship(force=force, append=append, recursive=True) def merge(self, other): """ Merge the contents of another object into this one. Container children of the current object with the same name will be merged. All other objects will be appended to the list of objects in this one. Duplicate copies of the same object will be skipped. Annotations are merged such that only items not present in the current annotations are added. 
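Example (illustrative; ``blk1`` and ``blk2`` are assumed to be Blocks whose
segments carry matching names):

>>> blk1.merge(blk2)
>>> # segments of blk2 whose name matches a segment of blk1 have been merged
>>> # into that segment; all other children of blk2 were appended to blk1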
""" # merge containers with the same name for container in (self._container_child_containers + self._multi_child_containers): lookup = dict((obj.name, obj) for obj in getattr(self, container)) ids = [id(obj) for obj in getattr(self, container)] for obj in getattr(other, container): if id(obj) in ids: continue if obj.name in lookup: lookup[obj.name].merge(obj) else: lookup[obj.name] = obj ids.append(id(obj)) getattr(self, container).append(obj) # for data objects, ignore the name and just add them for container in self._data_child_containers: objs = getattr(self, container) lookup = dict((obj.name, i) for i, obj in enumerate(objs)) ids = [id(obj) for obj in objs] for obj in getattr(other, container): if id(obj) in ids: continue if hasattr(obj, 'merge') and obj.name is not None and obj.name in lookup: ind = lookup[obj.name] try: newobj = getattr(self, container)[ind].merge(obj) getattr(self, container)[ind] = newobj except NotImplementedError: getattr(self, container).append(obj) ids.append(id(obj)) else: lookup[obj.name] = obj ids.append(id(obj)) getattr(self, container).append(obj) # use the BaseNeo merge as well super(Container, self).merge(other) def _repr_pretty_(self, pp, cycle): """ Handle pretty-printing. """ pp.text(self.__class__.__name__) pp.text(" with ") vals = [] for container in self._child_containers: objs = getattr(self, container) if objs: vals.append('%s %s' % (len(objs), container)) pp.text(', '.join(vals)) if self._has_repr_pretty_attrs_(): pp.breakable() self._repr_pretty_attrs_(pp, cycle) for container in self._repr_pretty_containers: pp.breakable() objs = getattr(self, container) pp.text("# %s (N=%s)" % (container, len(objs))) for (i, obj) in enumerate(objs): pp.breakable() pp.text("%s: " % i) with pp.indent(3): pp.pretty(obj) neo-0.7.2/neo/core/dataobject.py0000600013464101346420000003710713507452453014671 0ustar yohyoh# -*- coding: utf-8 -*- """ This module defines :class:`DataObject`, the abstract base class used by all :module:`neo.core` classes that can contain data (i.e. are not container classes). It contains basic functionality that is shared among all those data objects. """ import copy import warnings import quantities as pq import numpy as np from neo.core.baseneo import BaseNeo, _check_annotations def _normalize_array_annotations(value, length): """Check consistency of array annotations Recursively check that value is either an array or list containing only "simple" types (number, string, date/time) or is a dict of those. 
Args: :value: (np.ndarray, list or dict) value to be checked for consistency :length: (int) required length of the array annotation Returns: np.ndarray The array_annotations from value in correct form Raises: ValueError: In case value is not accepted as array_annotation(s) """ # First stage, resolve dict of annotations into single annotations if isinstance(value, dict): for key in value.keys(): if isinstance(value[key], dict): raise ValueError("Nested dicts are not allowed as array annotations") value[key] = _normalize_array_annotations(value[key], length) elif value is None: raise ValueError("Array annotations must not be None") # If not array annotation, pass on to regular check and make it a list, that is checked again # This covers array annotations with length 1 elif not isinstance(value, (list, np.ndarray)) or ( isinstance(value, pq.Quantity) and value.shape == ()): _check_annotations(value) value = _normalize_array_annotations(np.array([value]), length) # If array annotation, check for correct length, only single dimension and allowed data else: # Get length that is required for array annotations, which is equal to the length # of the object's data own_length = length # Escape check if empty array or list and just annotate an empty array (length 0) # This enables the user to easily create dummy array annotations that will be filled # with data later on if len(value) == 0: if not isinstance(value, np.ndarray): value = np.ndarray((0,)) val_length = own_length else: # Note: len(o) also works for np.ndarray, it then uses the first dimension, # which is exactly the desired behaviour here val_length = len(value) if not own_length == val_length: raise ValueError( "Incorrect length of array annotation: {} != {}".format(val_length, own_length)) # Local function used to check single elements of a list or an array # They must not be lists or arrays and fit the usual annotation data types def _check_single_elem(element): # Nested array annotations not allowed currently # If element is a list or a np.ndarray, it's not conform except if it's a quantity of # length 1 if isinstance(element, list) or (isinstance(element, np.ndarray) and not ( isinstance(element, pq.Quantity) and ( element.shape == () or element.shape == (1,)))): raise ValueError("Array annotations should only be 1-dimensional") if isinstance(element, dict): raise ValueError("Dictionaries are not supported as array annotations") # Perform regular check for elements of array or list _check_annotations(element) # Arrays only need testing of single element to make sure the others are the same if isinstance(value, np.ndarray): # Type of first element is representative for all others # Thus just performing a check on the first element is enough # Even if it's a pq.Quantity, which can be scalar or array, this is still true # Because a np.ndarray cannot contain scalars and sequences simultaneously # If length of data is 0, then nothing needs to be checked if len(value): # Perform check on first element _check_single_elem(value[0]) return value # In case of list, it needs to be ensured that all data are of the same type else: # Conversion to numpy array makes all elements same type # Converts elements to most general type try: value = np.array(value) # Except when scalar and non-scalar values are mixed, this causes conversion to fail except ValueError as e: msg = str(e) if "setting an array element with a sequence." 
in msg: raise ValueError("Scalar values and arrays/lists cannot be " "combined into a single array annotation") else: raise e # If most specialized data type that possibly fits all elements is object, # raise an Error with a telling error message, because this means the elements # are not compatible if value.dtype == object: raise ValueError("Cannot convert list of incompatible types into a single" " array annotation") # Check the first element for correctness # If its type is correct for annotations, all others are correct as well # Note: Emtpy lists cannot reach this point _check_single_elem(value[0]) return value class DataObject(BaseNeo, pq.Quantity): ''' This is the base class from which all objects containing data inherit It contains common functionality for all those objects and handles array_annotations. Common functionality that is not included in BaseNeo includes: - duplicating with new data - rescaling the object - copying the object - returning it as pq.Quantity or np.ndarray - handling of array_annotations Array_annotations are a kind of annotation that contains metadata for every data point, i.e. per timestamp (in SpikeTrain, Event and Epoch) or signal channel (in AnalogSignal and IrregularlySampledSignal). They can contain the same data types as regular annotations, but are always represented as numpy arrays of the same length as the number of data points of the annotated neo object. Args: name (str, optional): Name of the Neo object description (str, optional): Human readable string description of the Neo object file_origin (str, optional): Origin of the data contained in this Neo object array_annotations (dict, optional): Dictionary containing arrays / lists which annotate individual data points of the Neo object. kwargs: regular annotations stored in a separate annotation dictionary ''' def __init__(self, name=None, description=None, file_origin=None, array_annotations=None, **annotations): """ This method is called by each data object and initializes the newly created object by adding array annotations and calling __init__ of the super class, where more annotations and attributes are processed. """ if not hasattr(self, 'array_annotations') or not self.array_annotations: self.array_annotations = ArrayDict(self._get_arr_ann_length()) if array_annotations is not None: self.array_annotate(**array_annotations) BaseNeo.__init__(self, name=name, description=description, file_origin=file_origin, **annotations) def array_annotate(self, **array_annotations): """ Add array annotations (annotations for individual data points) as arrays to a Neo data object. 
Example: >>> obj.array_annotate(code=['a', 'b', 'a'], category=[2, 1, 1]) >>> obj.array_annotations['code'][1] 'b' """ self.array_annotations.update(array_annotations) def array_annotations_at_index(self, index): """ Return dictionary of array annotations at a given index or list of indices :param index: int, list, numpy array: The index (indices) from which the annotations are extracted :return: dictionary of values or numpy arrays containing all array annotations for given index/indices Example: >>> obj.array_annotate(code=['a', 'b', 'a'], category=[2, 1, 1]) >>> obj.array_annotations_at_index(1) {code='b', category=1} """ # Taking only a part of the array annotations # Thus not using ArrayDict here, because checks for length are not needed index_annotations = {} # Use what is given as an index to determine the corresponding annotations, # if not possible, numpy raises an Error for ann in self.array_annotations.keys(): # NO deepcopy, because someone might want to alter the actual object using this try: index_annotations[ann] = self.array_annotations[ann][index] except IndexError as e: # IndexError caused by 'dummy' array annotations should not result in failure # Taking a slice from nothing results in nothing if len(self.array_annotations[ann]) == 0 and not self._get_arr_ann_length() == 0: index_annotations[ann] = self.array_annotations[ann] else: raise e return index_annotations def _merge_array_annotations(self, other): ''' Merges array annotations of 2 different objects. The merge happens in such a way that the result fits the merged data In general this means concatenating the arrays from the 2 objects. If an annotation is only present in one of the objects, it will be omitted :return Merged array_annotations ''' merged_array_annotations = {} omitted_keys_self = [] # Concatenating arrays for each key for key in self.array_annotations: try: value = copy.deepcopy(self.array_annotations[key]) other_value = copy.deepcopy(other.array_annotations[key]) # Quantities need to be rescaled to common unit if isinstance(value, pq.Quantity): try: other_value = other_value.rescale(value.units) except ValueError: raise ValueError("Could not merge array annotations " "due to different units") merged_array_annotations[key] = np.append(value, other_value) * value.units else: merged_array_annotations[key] = np.append(value, other_value) except KeyError: # Save the omitted keys to be able to print them omitted_keys_self.append(key) continue # Also save omitted keys from 'other' omitted_keys_other = [key for key in other.array_annotations if key not in self.array_annotations] # Warn if keys were omitted if omitted_keys_other or omitted_keys_self: warnings.warn("The following array annotations were omitted, because they were only " "present in one of the merged objects: {} from the one that was merged " "into and {} from the one that was merged into the other" "".format(omitted_keys_self, omitted_keys_other), UserWarning) # Return the merged array_annotations return merged_array_annotations def rescale(self, units): ''' Return a copy of the object converted to the specified units :return: Copy of self with specified units ''' # Use simpler functionality, if nothing will be changed dim = pq.quantity.validate_dimensionality(units) if self.dimensionality == dim: return self.copy() # Rescale the object into a new object obj = self.duplicate_with_new_data(signal=self.view(pq.Quantity).rescale(dim), units=units) # Expected behavior is deepcopy, so deepcopying array_annotations obj.array_annotations = 
copy.deepcopy(self.array_annotations) obj.segment = self.segment return obj # Needed to implement this so array annotations are copied as well, ONLY WHEN copying 1:1 def copy(self, **kwargs): ''' Returns a copy of the object :return: Copy of self ''' obj = super(DataObject, self).copy(**kwargs) obj.array_annotations = self.array_annotations return obj def as_array(self, units=None): """ Return the object's data as a plain NumPy array. If `units` is specified, first rescale to those units. """ if units: return self.rescale(units).magnitude else: return self.magnitude def as_quantity(self): """ Return the object's data as a quantities array. """ return self.view(pq.Quantity) def _get_arr_ann_length(self): """ Return the length of the object's data as required for array annotations This is the last dimension of every object. :return Required length of array annotations for this object """ # Number of items is last dimension in of data object # This method should be overridden in case this changes try: length = self.shape[-1] # Note: This is because __getitem__[int] returns a scalar Epoch/Event/SpikeTrain # To be removed if __getitem__[int] is changed except IndexError: length = 1 return length def duplicate_with_new_array(self, signal, units=None): warnings.warn("Use of the `duplicate_with_new_array function is deprecated. " "Please use `duplicate_with_new_data` instead.", DeprecationWarning) return self.duplicate_with_new_data(signal, units=units) class ArrayDict(dict): """Dictionary subclass to handle array annotations When setting `obj.array_annotations[key]=value`, checks for consistency should not be bypassed. This class overrides __setitem__ from dict to perform these checks every time. The method used for these checks is given as an argument for __init__. """ def __init__(self, length, check_function=_normalize_array_annotations, *args, **kwargs): super(ArrayDict, self).__init__(*args, **kwargs) self.check_function = check_function self.length = length def __setitem__(self, key, value): # Directly call the defined function # Need to wrap key and value in a dict in order to make sure # that nested dicts are detected value = self.check_function({key: value}, self.length)[key] super(ArrayDict, self).__setitem__(key, value) # Updating the dict also needs to perform checks, so rerouting this to __setitem__ def update(self, *args, **kwargs): if args: if len(args) > 1: raise TypeError("update expected at most 1 arguments, " "got %d" % len(args)) other = dict(args[0]) for key in other: self[key] = other[key] for key in kwargs: self[key] = kwargs[key] def __reduce__(self): return super(ArrayDict, self).__reduce__() neo-0.7.2/neo/core/epoch.py0000600013464101346420000002656513507452453013675 0ustar yohyoh# -*- coding: utf-8 -*- ''' This module defines :class:`Epoch`, an array of epochs. :class:`Epoch` derives from :class:`BaseNeo`, from :module:`neo.core.baseneo`. ''' # needed for python 3 compatibility from __future__ import absolute_import, division, print_function import sys from copy import deepcopy import numpy as np import quantities as pq from neo.core.baseneo import BaseNeo, merge_annotations from neo.core.dataobject import DataObject, ArrayDict PY_VER = sys.version_info[0] def _new_epoch(cls, times=None, durations=None, labels=None, units=None, name=None, description=None, file_origin=None, array_annotations=None, annotations=None, segment=None): ''' A function to map epoch.__new__ to function that does not do the unit checking. This is needed for pickle to work. 
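# --- Illustrative sketch (not part of the neo 0.7.2 sources). Minimal usage of
# the DataObject helpers defined above (rescale, as_array, as_quantity), shown
# here on an Event; the values are invented and output comments are approximate.
import numpy as np
import quantities as pq
from neo.core import Event

evt = Event(times=np.array([0.5, 1.5]) * pq.s,
            labels=np.array(['go', 'stop'], dtype='S'))
evt_ms = evt.rescale('ms')           # new Event carrying the same data in milliseconds
print(evt_ms.times)                  # expected: [ 500. 1500.] ms
print(evt.as_array(units='ms'))      # plain numpy array: [ 500. 1500.]
print(type(evt.as_quantity()))       # a quantities.Quantity view of the data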
''' e = Epoch(times=times, durations=durations, labels=labels, units=units, name=name, file_origin=file_origin, description=description, array_annotations=array_annotations, **annotations) e.segment = segment return e class Epoch(DataObject): ''' Array of epochs. *Usage*:: >>> from neo.core import Epoch >>> from quantities import s, ms >>> import numpy as np >>> >>> epc = Epoch(times=np.arange(0, 30, 10)*s, ... durations=[10, 5, 7]*ms, ... labels=np.array(['btn0', 'btn1', 'btn2'], dtype='S')) >>> >>> epc.times array([ 0., 10., 20.]) * s >>> epc.durations array([ 10., 5., 7.]) * ms >>> epc.labels array(['btn0', 'btn1', 'btn2'], dtype='|S4') *Required attributes/properties*: :times: (quantity array 1D) The start times of each time period. :durations: (quantity array 1D or quantity scalar) The length(s) of each time period. If a scalar, the same value is used for all time periods. :labels: (numpy.array 1D dtype='S') Names or labels for the time periods. *Recommended attributes/properties*: :name: (str) A label for the dataset, :description: (str) Text description, :file_origin: (str) Filesystem path or URL of the original data file. *Optional attributes/properties*: :array_annotations: (dict) Dict mapping strings to numpy arrays containing annotations \ for all data points Note: Any other additional arguments are assumed to be user-specific metadata and stored in :attr:`annotations`, ''' _single_parent_objects = ('Segment',) _quantity_attr = 'times' _necessary_attrs = (('times', pq.Quantity, 1), ('durations', pq.Quantity, 1), ('labels', np.ndarray, 1, np.dtype('S'))) def __new__(cls, times=None, durations=None, labels=None, units=None, name=None, description=None, file_origin=None, array_annotations=None, **annotations): if times is None: times = np.array([]) * pq.s if durations is None: durations = np.array([]) * pq.s elif durations.size != times.size: if durations.size == 1: durations = durations * np.ones_like(times.magnitude) else: raise ValueError("Durations array has different length to times") if labels is None: labels = np.array([], dtype='S') elif len(labels) != times.size: raise ValueError("Labels array has different length to times") if units is None: # No keyword units, so get from `times` try: units = times.units dim = units.dimensionality except AttributeError: raise ValueError('you must specify units') else: if hasattr(units, 'dimensionality'): dim = units.dimensionality else: dim = pq.quantity.validate_dimensionality(units) # check to make sure the units are time # this approach is much faster than comparing the # reference dimensionality if (len(dim) != 1 or list(dim.values())[0] != 1 or not isinstance(list(dim.keys())[0], pq.UnitTime)): ValueError("Unit %s has dimensions %s, not [time]" % (units, dim.simplified)) obj = pq.Quantity.__new__(cls, times, units=dim) obj.labels = labels obj.durations = durations obj.segment = None return obj def __init__(self, times=None, durations=None, labels=None, units=None, name=None, description=None, file_origin=None, array_annotations=None, **annotations): ''' Initialize a new :class:`Epoch` instance. 
''' DataObject.__init__(self, name=name, file_origin=file_origin, description=description, array_annotations=array_annotations, **annotations) def __reduce__(self): ''' Map the __new__ function onto _new_epoch, so that pickle works ''' return _new_epoch, (self.__class__, self.times, self.durations, self.labels, self.units, self.name, self.file_origin, self.description, self.array_annotations, self.annotations, self.segment) def __array_finalize__(self, obj): super(Epoch, self).__array_finalize__(obj) self.annotations = getattr(obj, 'annotations', None) self.name = getattr(obj, 'name', None) self.file_origin = getattr(obj, 'file_origin', None) self.description = getattr(obj, 'description', None) self.segment = getattr(obj, 'segment', None) # Add empty array annotations, because they cannot always be copied, # but do not overwrite existing ones from slicing etc. # This ensures the attribute exists if not hasattr(self, 'array_annotations'): self.array_annotations = ArrayDict(self._get_arr_ann_length()) def __repr__(self): ''' Returns a string representing the :class:`Epoch`. ''' # need to convert labels to unicode for python 3 or repr is messed up if PY_VER == 3: labels = self.labels.astype('U') else: labels = self.labels objs = ['%s@%s for %s' % (label, time, dur) for label, time, dur in zip(labels, self.times, self.durations)] return '' % ', '.join(objs) def _repr_pretty_(self, pp, cycle): super(Epoch, self)._repr_pretty_(pp, cycle) def rescale(self, units): ''' Return a copy of the :class:`Epoch` converted to the specified units ''' obj = super(Epoch, self).rescale(units) obj.segment = self.segment return obj def __getitem__(self, i): ''' Get the item or slice :attr:`i`. ''' obj = Epoch(times=super(Epoch, self).__getitem__(i)) obj._copy_data_complement(self) try: # Array annotations need to be sliced accordingly obj.array_annotate(**deepcopy(self.array_annotations_at_index(i))) except AttributeError: # If Quantity was returned, not Epoch pass return obj def __getslice__(self, i, j): ''' Get a slice from :attr:`i` to :attr:`j`.attr[0] Doesn't get called in Python 3, :meth:`__getitem__` is called instead ''' return self.__getitem__(slice(i, j)) @property def times(self): return pq.Quantity(self) def merge(self, other): ''' Merge the another :class:`Epoch` into this one. The :class:`Epoch` objects are concatenated horizontally (column-wise), :func:`np.hstack`). If the attributes of the two :class:`Epoch` are not compatible, and Exception is raised. ''' othertimes = other.times.rescale(self.times.units) times = np.hstack([self.times, othertimes]) * self.times.units kwargs = {} for name in ("name", "description", "file_origin"): attr_self = getattr(self, name) attr_other = getattr(other, name) if attr_self == attr_other: kwargs[name] = attr_self else: kwargs[name] = "merge(%s, %s)" % (attr_self, attr_other) merged_annotations = merge_annotations(self.annotations, other.annotations) kwargs.update(merged_annotations) kwargs['array_annotations'] = self._merge_array_annotations(other) labels = kwargs['array_annotations']['labels'] durations = kwargs['array_annotations']['durations'] return Epoch(times=times, durations=durations, labels=labels, **kwargs) def _copy_data_complement(self, other): ''' Copy the metadata from another :class:`Epoch`. Note: Array annotations can not be copied here because length of data can change ''' # Note: Array annotations cannot be copied because length of data could be changed # here which would cause inconsistencies. This is instead done locally. 
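# --- Illustrative sketch (not part of the neo 0.7.2 sources). It demonstrates
# Epoch.time_slice as defined in this class: the times are filtered to the
# requested window and the labels/durations array annotations are sliced
# accordingly. Values are invented; output comments are approximate.
import numpy as np
import quantities as pq
from neo.core import Epoch

epc = Epoch(times=np.array([0.0, 1.0, 2.0]) * pq.s,
            durations=np.array([0.2, 0.2, 0.2]) * pq.s,
            labels=np.array(['a', 'b', 'c'], dtype='S'))
sliced = epc.time_slice(0.5 * pq.s, 2.5 * pq.s)
print(sliced.times)        # expected: [1. 2.] s
print(sliced.durations)    # expected: [0.2 0.2] s
print(sliced.labels)       # the two corresponding labels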
for attr in ("name", "file_origin", "description", "annotations"): setattr(self, attr, getattr(other, attr, None)) def __deepcopy__(self, memo): cls = self.__class__ new_ep = cls(times=self.times, durations=self.durations, labels=self.labels, units=self.units, name=self.name, description=self.description, file_origin=self.file_origin) new_ep.__dict__.update(self.__dict__) memo[id(self)] = new_ep for k, v in self.__dict__.items(): try: setattr(new_ep, k, deepcopy(v, memo)) except TypeError: setattr(new_ep, k, v) return new_ep def duplicate_with_new_data(self, signal, units=None): ''' Create a new :class:`Epoch` with the same metadata but different data (times, durations) Note: Array annotations can not be copied here because length of data can change ''' if units is None: units = self.units else: units = pq.quantity.validate_dimensionality(units) new = self.__class__(times=signal, units=units) new._copy_data_complement(self) # Note: Array annotations can not be copied here because length of data can change return new def time_slice(self, t_start, t_stop): ''' Creates a new :class:`Epoch` corresponding to the time slice of the original :class:`Epoch` between (and including) times :attr:`t_start` and :attr:`t_stop`. Either parameter can also be None to use infinite endpoints for the time interval. ''' _t_start = t_start _t_stop = t_stop if t_start is None: _t_start = -np.inf if t_stop is None: _t_stop = np.inf indices = (self >= _t_start) & (self <= _t_stop) new_epc = self[indices] return new_epc def set_labels(self, labels): self.array_annotate(labels=labels) def get_labels(self): return self.array_annotations['labels'] labels = property(get_labels, set_labels) def set_durations(self, durations): self.array_annotate(durations=durations) def get_durations(self): return self.array_annotations['durations'] durations = property(get_durations, set_durations) neo-0.7.2/neo/core/event.py0000600013464101346420000003040413507452453013703 0ustar yohyoh# -*- coding: utf-8 -*- ''' This module defines :class:`Event`, an array of events. :class:`Event` derives from :class:`BaseNeo`, from :module:`neo.core.baseneo`. ''' # needed for python 3 compatibility from __future__ import absolute_import, division, print_function import sys from copy import deepcopy import numpy as np import quantities as pq from neo.core.baseneo import merge_annotations from neo.core.dataobject import DataObject, ArrayDict from neo.core.epoch import Epoch PY_VER = sys.version_info[0] def _new_event(cls, times=None, labels=None, units=None, name=None, file_origin=None, description=None, array_annotations=None, annotations=None, segment=None): ''' A function to map Event.__new__ to function that does not do the unit checking. This is needed for pickle to work. ''' e = Event(times=times, labels=labels, units=units, name=name, file_origin=file_origin, description=description, array_annotations=array_annotations, **annotations) e.segment = segment return e class Event(DataObject): ''' Array of events. *Usage*:: >>> from neo.core import Event >>> from quantities import s >>> import numpy as np >>> >>> evt = Event(np.arange(0, 30, 10)*s, ... labels=np.array(['trig0', 'trig1', 'trig2'], ... dtype='S')) >>> >>> evt.times array([ 0., 10., 20.]) * s >>> evt.labels array(['trig0', 'trig1', 'trig2'], dtype='|S5') *Required attributes/properties*: :times: (quantity array 1D) The time of the events. :labels: (numpy.array 1D dtype='S') Names or labels for the events. *Recommended attributes/properties*: :name: (str) A label for the dataset. 
:description: (str) Text description. :file_origin: (str) Filesystem path or URL of the original data file. *Optional attributes/properties*: :array_annotations: (dict) Dict mapping strings to numpy arrays containing annotations \ for all data points Note: Any other additional arguments are assumed to be user-specific metadata and stored in :attr:`annotations`. ''' _single_parent_objects = ('Segment',) _quantity_attr = 'times' _necessary_attrs = (('times', pq.Quantity, 1), ('labels', np.ndarray, 1, np.dtype('S'))) def __new__(cls, times=None, labels=None, units=None, name=None, description=None, file_origin=None, array_annotations=None, **annotations): if times is None: times = np.array([]) * pq.s if labels is None: labels = np.array([], dtype='S') if units is None: # No keyword units, so get from `times` try: units = times.units dim = units.dimensionality except AttributeError: raise ValueError('you must specify units') else: if hasattr(units, 'dimensionality'): dim = units.dimensionality else: dim = pq.quantity.validate_dimensionality(units) # check to make sure the units are time # this approach is much faster than comparing the # reference dimensionality if (len(dim) != 1 or list(dim.values())[0] != 1 or not isinstance(list(dim.keys())[0], pq.UnitTime)): ValueError("Unit %s has dimensions %s, not [time]" % (units, dim.simplified)) obj = pq.Quantity(times, units=dim).view(cls) obj.labels = labels obj.segment = None return obj def __init__(self, times=None, labels=None, units=None, name=None, description=None, file_origin=None, array_annotations=None, **annotations): ''' Initialize a new :class:`Event` instance. ''' DataObject.__init__(self, name=name, file_origin=file_origin, description=description, array_annotations=array_annotations, **annotations) def __reduce__(self): ''' Map the __new__ function onto _new_event, so that pickle works ''' return _new_event, (self.__class__, np.array(self), self.labels, self.units, self.name, self.file_origin, self.description, self.array_annotations, self.annotations, self.segment) def __array_finalize__(self, obj): super(Event, self).__array_finalize__(obj) self.annotations = getattr(obj, 'annotations', None) self.name = getattr(obj, 'name', None) self.file_origin = getattr(obj, 'file_origin', None) self.description = getattr(obj, 'description', None) self.segment = getattr(obj, 'segment', None) # Add empty array annotations, because they cannot always be copied, # but do not overwrite existing ones from slicing etc. # This ensures the attribute exists if not hasattr(self, 'array_annotations'): self.array_annotations = ArrayDict(self._get_arr_ann_length()) def __repr__(self): ''' Returns a string representing the :class:`Event`. ''' # need to convert labels to unicode for python 3 or repr is messed up if PY_VER == 3: labels = self.labels.astype('U') else: labels = self.labels objs = ['%s@%s' % (label, time) for label, time in zip(labels, self.times)] return '' % ', '.join(objs) def _repr_pretty_(self, pp, cycle): super(Event, self)._repr_pretty_(pp, cycle) def rescale(self, units): ''' Return a copy of the :class:`Event` converted to the specified units ''' obj = super(Event, self).rescale(units) obj.segment = self.segment return obj @property def times(self): return pq.Quantity(self) def merge(self, other): ''' Merge the another :class:`Event` into this one. The :class:`Event` objects are concatenated horizontally (column-wise), :func:`np.hstack`). If the attributes of the two :class:`Event` are not compatible, and Exception is raised. 
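# --- Illustrative sketch (not part of the neo 0.7.2 sources). Example of the
# Event.merge behaviour documented above: times are concatenated after unit
# conversion and array annotations (including labels) are merged. Values are
# invented; output comments are approximate.
import numpy as np
import quantities as pq
from neo.core import Event

evt_a = Event(times=np.array([1.0, 2.0]) * pq.s,
              labels=np.array(['on', 'off'], dtype='S'))
evt_b = Event(times=np.array([3000.0, 4000.0]) * pq.ms,    # compatible units
              labels=np.array(['on', 'off'], dtype='S'))
merged = evt_a.merge(evt_b)
print(merged.times)    # expected: [1. 2. 3. 4.] s (evt_b rescaled to seconds)
print(merged.labels)   # the two label arrays concatenated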
''' othertimes = other.times.rescale(self.times.units) times = np.hstack([self.times, othertimes]) * self.times.units kwargs = {} for name in ("name", "description", "file_origin"): attr_self = getattr(self, name) attr_other = getattr(other, name) if attr_self == attr_other: kwargs[name] = attr_self else: kwargs[name] = "merge(%s, %s)" % (attr_self, attr_other) print('Event: merge annotations') merged_annotations = merge_annotations(self.annotations, other.annotations) kwargs.update(merged_annotations) kwargs['array_annotations'] = self._merge_array_annotations(other) evt = Event(times=times, labels=kwargs['array_annotations']['labels'], **kwargs) return evt def _copy_data_complement(self, other): ''' Copy the metadata from another :class:`Event`. Note: Array annotations can not be copied here because length of data can change ''' # Note: Array annotations cannot be copied # because they are linked to their respective timestamps for attr in ("name", "file_origin", "description", "annotations"): setattr(self, attr, getattr(other, attr, None)) # Note: Array annotations cannot be copied # because length of data can be changed # here which would cause inconsistencies # # This includes labels and durations!!! def __deepcopy__(self, memo): cls = self.__class__ new_ev = cls(times=self.times, labels=self.labels, units=self.units, name=self.name, description=self.description, file_origin=self.file_origin) new_ev.__dict__.update(self.__dict__) memo[id(self)] = new_ev for k, v in self.__dict__.items(): try: setattr(new_ev, k, deepcopy(v, memo)) except TypeError: setattr(new_ev, k, v) return new_ev def __getitem__(self, i): obj = super(Event, self).__getitem__(i) try: obj.array_annotate(**deepcopy(self.array_annotations_at_index(i))) except AttributeError: # If Quantity was returned, not Event pass return obj def duplicate_with_new_data(self, signal, units=None): ''' Create a new :class:`Event` with the same metadata but different data Note: Array annotations can not be copied here because length of data can change ''' if units is None: units = self.units else: units = pq.quantity.validate_dimensionality(units) new = self.__class__(times=signal, units=units) new._copy_data_complement(self) # Note: Array annotations cannot be copied here, because length of data can be changed return new def time_slice(self, t_start, t_stop): ''' Creates a new :class:`Event` corresponding to the time slice of the original :class:`Event` between (and including) times :attr:`t_start` and :attr:`t_stop`. Either parameter can also be None to use infinite endpoints for the time interval. ''' _t_start = t_start _t_stop = t_stop if t_start is None: _t_start = -np.inf if t_stop is None: _t_stop = np.inf indices = (self >= _t_start) & (self <= _t_stop) new_evt = self[indices] return new_evt def set_labels(self, labels): self.array_annotate(labels=labels) def get_labels(self): return self.array_annotations['labels'] labels = property(get_labels, set_labels) def to_epoch(self, pairwise=False, durations=None): """ Returns a new Epoch object based on the times and labels in the Event object. This method has three modes of action. 1. By default, an array of `n` event times will be transformed into `n-1` epochs, where the end of one epoch is the beginning of the next. This assumes that the events are ordered in time; it is the responsibility of the caller to check this is the case. 2. If `pairwise` is True, then the event times will be taken as pairs representing the start and end time of an epoch. 
The number of events must be even, otherwise a ValueError is raised. 3. If `durations` is given, it should be a scalar Quantity or a Quantity array of the same size as the Event. Each event time is then taken as the start of an epoch of duration given by `durations`. `pairwise=True` and `durations` are mutually exclusive. A ValueError will be raised if both are given. If `durations` is given, epoch labels are set to the corresponding labels of the events that indicate the epoch start If `durations` is not given, then the event labels A and B bounding the epoch are used to set the labels of the epochs in the form 'A-B'. """ if pairwise: # Mode 2 if durations is not None: raise ValueError("Inconsistent arguments. " "Cannot give both `pairwise` and `durations`") if self.size % 2 != 0: raise ValueError("Pairwise conversion of events to epochs" " requires an even number of events") times = self.times[::2] durations = self.times[1::2] - times labels = np.array( ["{}-{}".format(a, b) for a, b in zip(self.labels[::2], self.labels[1::2])]) elif durations is None: # Mode 1 times = self.times[:-1] durations = np.diff(self.times) labels = np.array( ["{}-{}".format(a, b) for a, b in zip(self.labels[:-1], self.labels[1:])]) else: # Mode 3 times = self.times labels = self.labels return Epoch(times=times, durations=durations, labels=labels) neo-0.7.2/neo/core/irregularlysampledsignal.py0000600013464101346420000004556013507452474017703 0ustar yohyoh# -*- coding: utf-8 -*- ''' This module implements :class:`IrregularlySampledSignal`, an array of analog signals with samples taken at arbitrary time points. :class:`IrregularlySampledSignal` inherits from :class:`basesignal.BaseSignal` which derives from :class:`BaseNeo`, from :module:`neo.core.baseneo`, and from :class:`quantities.Quantity`, which in turn inherits from :class:`numpy.ndarray`. Inheritance from :class:`numpy.array` is explained here: http://docs.scipy.org/doc/numpy/user/basics.subclassing.html In brief: * Initialization of a new object from constructor happens in :meth:`__new__`. This is where user-specified attributes are set. * :meth:`__array_finalize__` is called for all new objects, including those created by slicing. This is where attributes are copied over from the old object. ''' # needed for Python 3 compatibility from __future__ import absolute_import, division, print_function from copy import deepcopy import numpy as np import quantities as pq from neo.core.baseneo import BaseNeo, MergeError, merge_annotations from neo.core.basesignal import BaseSignal from neo.core.channelindex import ChannelIndex from neo.core.dataobject import DataObject def _new_IrregularlySampledSignal(cls, times, signal, units=None, time_units=None, dtype=None, copy=True, name=None, file_origin=None, description=None, array_annotations=None, annotations=None, segment=None, channel_index=None): ''' A function to map IrregularlySampledSignal.__new__ to a function that does not do the unit checking. This is needed for pickle to work. ''' iss = cls(times=times, signal=signal, units=units, time_units=time_units, dtype=dtype, copy=copy, name=name, file_origin=file_origin, description=description, array_annotations=array_annotations, **annotations) iss.segment = segment iss.channel_index = channel_index return iss class IrregularlySampledSignal(BaseSignal): ''' An array of one or more analog signals with samples taken at arbitrary time points. A representation of one or more continuous, analog signals acquired at time :attr:`t_start` with a varying sampling interval. 
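# --- Illustrative sketch (not part of the neo 0.7.2 sources). It exercises two
# of the Event.to_epoch modes described above; values are invented and output
# comments are approximate.
import numpy as np
import quantities as pq
from neo.core import Event

evt = Event(times=np.array([0.0, 1.0, 3.0]) * pq.s,
            labels=np.array(['t0', 't1', 't2'], dtype='S'))
epc = evt.to_epoch()                       # mode 1: n events -> n-1 contiguous epochs
print(epc.times)                           # expected: [0. 1.] s
print(epc.durations)                       # expected: [1. 2.] s
epc2 = evt.to_epoch(durations=0.5 * pq.s)  # mode 3: every event starts a 0.5 s epoch
print(epc2.durations)                      # expected: [0.5 0.5 0.5] s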
Each channel is sampled at the same time points. Inherits from :class:`quantities.Quantity`, which in turn inherits from :class:`numpy.ndarray`. *Usage*:: >>> from neo.core import IrregularlySampledSignal >>> from quantities import s, nA >>> >>> irsig0 = IrregularlySampledSignal([0.0, 1.23, 6.78], [1, 2, 3], ... units='mV', time_units='ms') >>> irsig1 = IrregularlySampledSignal([0.01, 0.03, 0.12]*s, ... [[4, 5], [5, 4], [6, 3]]*nA) *Required attributes/properties*: :times: (quantity array 1D, numpy array 1D, or list) The time of each data point. Must have the same size as :attr:`signal`. :signal: (quantity array 2D, numpy array 2D, or list (data, channel)) The data itself. :units: (quantity units) Required if the signal is a list or NumPy array, not if it is a :class:`Quantity`. :time_units: (quantity units) Required if :attr:`times` is a list or NumPy array, not if it is a :class:`Quantity`. *Recommended attributes/properties*:. :name: (str) A label for the dataset :description: (str) Text description. :file_origin: (str) Filesystem path or URL of the original data file. *Optional attributes/properties*: :dtype: (numpy dtype or str) Override the dtype of the signal array. (times are always floats). :copy: (bool) True by default. :array_annotations: (dict) Dict mapping strings to numpy arrays containing annotations \ for all data points Note: Any other additional arguments are assumed to be user-specific metadata and stored in :attr:`annotations`. *Properties available on this object*: :sampling_intervals: (quantity array 1D) Interval between each adjacent pair of samples. (``times[1:] - times[:-1]``) :duration: (quantity scalar) Signal duration, read-only. (``times[-1] - times[0]``) :t_start: (quantity scalar) Time when signal begins, read-only. (``times[0]``) :t_stop: (quantity scalar) Time when signal ends, read-only. (``times[-1]``) *Slicing*: :class:`IrregularlySampledSignal` objects can be sliced. When this occurs, a new :class:`IrregularlySampledSignal` (actually a view) is returned, with the same metadata, except that :attr:`times` is also sliced in the same way. *Operations available on this object*: == != + * / ''' _single_parent_objects = ('Segment', 'ChannelIndex') _quantity_attr = 'signal' _necessary_attrs = (('times', pq.Quantity, 1), ('signal', pq.Quantity, 2)) def __new__(cls, times, signal, units=None, time_units=None, dtype=None, copy=True, name=None, file_origin=None, description=None, array_annotations=None, **annotations): ''' Construct a new :class:`IrregularlySampledSignal` instance. This is called whenever a new :class:`IrregularlySampledSignal` is created from the constructor, but not when slicing. ''' signal = cls._rescale(signal, units=units) if time_units is None: if hasattr(times, "units"): time_units = times.units else: raise ValueError("Time units must be specified") elif isinstance(times, pq.Quantity): # could improve this test, what if units is a string? 
if time_units != times.units: times = times.rescale(time_units) # should check time units have correct dimensions obj = pq.Quantity.__new__(cls, signal, units=units, dtype=dtype, copy=copy) if obj.ndim == 1: obj = obj.reshape(-1, 1) if len(times) != obj.shape[0]: raise ValueError("times array and signal array must " "have same length") obj.times = pq.Quantity(times, units=time_units, dtype=float, copy=copy) obj.segment = None obj.channel_index = None return obj def __init__(self, times, signal, units=None, time_units=None, dtype=None, copy=True, name=None, file_origin=None, description=None, array_annotations=None, **annotations): ''' Initializes a newly constructed :class:`IrregularlySampledSignal` instance. ''' DataObject.__init__(self, name=name, file_origin=file_origin, description=description, array_annotations=array_annotations, **annotations) def __reduce__(self): ''' Map the __new__ function onto _new_IrregularlySampledSignal, so that pickle works ''' return _new_IrregularlySampledSignal, (self.__class__, self.times, np.array(self), self.units, self.times.units, self.dtype, True, self.name, self.file_origin, self.description, self.array_annotations, self.annotations, self.segment, self.channel_index) def _array_finalize_spec(self, obj): ''' Set default values for attributes specific to :class:`IrregularlySampledSignal`. Common attributes are defined in :meth:`__array_finalize__` in :class:`basesignal.BaseSignal`), which is called every time a new signal is created and calls this method. ''' self.times = getattr(obj, 'times', None) return obj def __deepcopy__(self, memo): cls = self.__class__ new_signal = cls(self.times, np.array(self), units=self.units, time_units=self.times.units, dtype=self.dtype, t_start=self.t_start, name=self.name, file_origin=self.file_origin, description=self.description) new_signal.__dict__.update(self.__dict__) memo[id(self)] = new_signal for k, v in self.__dict__.items(): try: setattr(new_signal, k, deepcopy(v, memo)) except TypeError: setattr(new_signal, k, v) return new_signal def __repr__(self): ''' Returns a string representing the :class:`IrregularlySampledSignal`. ''' return '<%s(%s at times %s)>' % ( self.__class__.__name__, super(IrregularlySampledSignal, self).__repr__(), self.times) def __getitem__(self, i): ''' Get the item or slice :attr:`i`. ''' if isinstance(i, (int, np.integer)): # a single point in time across all channels obj = super(IrregularlySampledSignal, self).__getitem__(i) obj = pq.Quantity(obj.magnitude, units=obj.units) elif isinstance(i, tuple): obj = super(IrregularlySampledSignal, self).__getitem__(i) j, k = i if isinstance(j, (int, np.integer)): # a single point in time across some channels obj = pq.Quantity(obj.magnitude, units=obj.units) else: if isinstance(j, slice): obj.times = self.times.__getitem__(j) elif isinstance(j, np.ndarray): raise NotImplementedError("Arrays not yet supported") else: raise TypeError("%s not supported" % type(j)) if isinstance(k, (int, np.integer)): obj = obj.reshape(-1, 1) # add if channel_index obj.array_annotations = deepcopy(self.array_annotations_at_index(k)) elif isinstance(i, slice): obj = super(IrregularlySampledSignal, self).__getitem__(i) obj.times = self.times.__getitem__(i) obj.array_annotations = deepcopy(self.array_annotations) elif isinstance(i, np.ndarray): # Indexing of an IrregularlySampledSignal is only consistent if the resulting # number of samples is the same for each trace. 
The time axis for these samples is not # guaranteed to be continuous, so returning a Quantity instead of an # IrregularlySampledSignal here. new_time_dims = np.sum(i, axis=0) if len(new_time_dims) and all(new_time_dims == new_time_dims[0]): obj = np.asarray(self).T.__getitem__(i.T) obj = obj.T.reshape(self.shape[1], -1).T obj = pq.Quantity(obj, units=self.units) else: raise IndexError("indexing of an IrregularlySampledSignal needs to keep the same " "number of sample for each trace contained") else: raise IndexError("index should be an integer, tuple, slice or boolean numpy array") return obj @property def duration(self): ''' Signal duration. (:attr:`times`[-1] - :attr:`times`[0]) ''' return self.times[-1] - self.times[0] @property def t_start(self): ''' Time when signal begins. (:attr:`times`[0]) ''' return self.times[0] @property def t_stop(self): ''' Time when signal ends. (:attr:`times`[-1]) ''' return self.times[-1] def __eq__(self, other): ''' Equality test (==) ''' if (isinstance(other, IrregularlySampledSignal) and not (self.times == other.times).all()): return False return super(IrregularlySampledSignal, self).__eq__(other) def _check_consistency(self, other): ''' Check if the attributes of another :class:`IrregularlySampledSignal` are compatible with this one. ''' # if not an array, then allow the calculation if not hasattr(other, 'ndim'): return # if a scalar array, then allow the calculation if not other.ndim: return # dimensionality should match if self.ndim != other.ndim: raise ValueError('Dimensionality does not match: %s vs %s' % (self.ndim, other.ndim)) # if if the other array does not have a times property, # then it should be okay to add it directly if not hasattr(other, 'times'): return # if there is a times property, the times need to be the same if not (self.times == other.times).all(): raise ValueError('Times do not match: %s vs %s' % (self.times, other.times)) def __rsub__(self, other, *args): ''' Backwards subtraction (other-self) ''' return self.__mul__(-1) + other def _repr_pretty_(self, pp, cycle): ''' Handle pretty-printing the :class:`IrregularlySampledSignal`. ''' pp.text("{cls} with {channels} channels of length {length}; " "units {units}; datatype {dtype} ".format(cls=self.__class__.__name__, channels=self.shape[1], length=self.shape[0], units=self.units.dimensionality.string, dtype=self.dtype)) if self._has_repr_pretty_attrs_(): pp.breakable() self._repr_pretty_attrs_(pp, cycle) def _pp(line): pp.breakable() with pp.group(indent=1): pp.text(line) for line in ["sample times: {0}".format(self.times)]: _pp(line) @property def sampling_intervals(self): ''' Interval between each adjacent pair of samples. (:attr:`times[1:]` - :attr:`times`[:-1]) ''' return self.times[1:] - self.times[:-1] def mean(self, interpolation=None): ''' Calculates the mean, optionally using interpolation between sampling times. If :attr:`interpolation` is None, we assume that values change stepwise at sampling times. ''' if interpolation is None: return (self[:-1] * self.sampling_intervals.reshape(-1, 1)).sum() / self.duration else: raise NotImplementedError def resample(self, at=None, interpolation=None): ''' Resample the signal, returning either an :class:`AnalogSignal` object or another :class:`IrregularlySampledSignal` object. 
Arguments: :at: either a :class:`Quantity` array containing the times at which samples should be created (times must be within the signal duration, there is no extrapolation), a sampling rate with dimensions (1/Time) or a sampling interval with dimensions (Time). :interpolation: one of: None, 'linear' ''' # further interpolation methods could be added raise NotImplementedError def time_slice(self, t_start, t_stop): ''' Creates a new :class:`IrregularlySampledSignal` corresponding to the time slice of the original :class:`IrregularlySampledSignal` between times `t_start` and `t_stop`. Either parameter can also be None to use infinite endpoints for the time interval. ''' _t_start = t_start _t_stop = t_stop if t_start is None: _t_start = -np.inf if t_stop is None: _t_stop = np.inf indices = (self.times >= _t_start) & (self.times <= _t_stop) count = 0 id_start = None id_stop = None for i in indices: if id_start is None: if i: id_start = count else: if not i: id_stop = count break count += 1 new_st = self[id_start:id_stop] return new_st def merge(self, other): ''' Merge another signal into this one. The signal objects are concatenated horizontally (column-wise, :func:`np.hstack`). If the attributes of the two signals are not compatible, an Exception is raised. Required attributes of the signal are used. ''' if not np.array_equal(self.times, other.times): raise MergeError("Cannot merge these two signals as the sample times differ.") if self.segment != other.segment: raise MergeError( "Cannot merge these two signals as they belong to different segments.") if hasattr(self, "lazy_shape"): if hasattr(other, "lazy_shape"): if self.lazy_shape[0] != other.lazy_shape[0]: raise MergeError("Cannot merge signals of different length.") merged_lazy_shape = (self.lazy_shape[0], self.lazy_shape[1] + other.lazy_shape[1]) else: raise MergeError("Cannot merge a lazy object with a real object.") if other.units != self.units: other = other.rescale(self.units) stack = np.hstack((self.magnitude, other.magnitude)) kwargs = {} for name in ("name", "description", "file_origin"): attr_self = getattr(self, name) attr_other = getattr(other, name) if attr_self == attr_other: kwargs[name] = attr_self else: kwargs[name] = "merge(%s, %s)" % (attr_self, attr_other) merged_annotations = merge_annotations(self.annotations, other.annotations) kwargs.update(merged_annotations) signal = self.__class__(self.times, stack, units=self.units, dtype=self.dtype, copy=False, **kwargs) signal.segment = self.segment signal.array_annotate(**self._merge_array_annotations(other)) if hasattr(self, "lazy_shape"): signal.lazy_shape = merged_lazy_shape # merge channel_index (move to ChannelIndex.merge()?) if self.channel_index and other.channel_index: signal.channel_index = ChannelIndex(index=np.arange(signal.shape[1]), channel_ids=np.hstack( [self.channel_index.channel_ids, other.channel_index.channel_ids]), channel_names=np.hstack( [self.channel_index.channel_names, other.channel_index.channel_names])) else: signal.channel_index = ChannelIndex(index=np.arange(signal.shape[1])) return signal neo-0.7.2/neo/core/segment.py0000600013464101346420000002216613507452453014232 0ustar yohyoh# -*- coding: utf-8 -*- ''' This module defines :class:`Segment`, a container for data sharing a common time basis. :class:`Segment` derives from :class:`Container`, from :module:`neo.core.container`. 
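# --- Illustrative sketch (not part of the neo 0.7.2 sources). Basic usage of
# IrregularlySampledSignal as defined above (sampling_intervals, duration and
# time_slice); values are invented and output comments are approximate.
import quantities as pq
from neo.core import IrregularlySampledSignal

sig = IrregularlySampledSignal(times=[0.0, 1.2, 3.5, 7.0] * pq.s,
                               signal=[[1.0], [2.0], [4.0], [8.0]] * pq.mV)
print(sig.sampling_intervals)    # expected: [1.2 2.3 3.5] s
print(sig.duration)              # expected: 7.0 s
part = sig.time_slice(1.0 * pq.s, 4.0 * pq.s)
print(part.times)                # expected: [1.2 3.5] s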
''' # needed for python 3 compatibility from __future__ import absolute_import, division, print_function from datetime import datetime import numpy as np from neo.core.container import Container class Segment(Container): ''' A container for data sharing a common time basis. A :class:`Segment` is a heterogeneous container for discrete or continous data sharing a common clock (time basis) but not necessary the same sampling rate, start or end time. *Usage*:: >>> from neo.core import Segment, SpikeTrain, AnalogSignal >>> from quantities import Hz, s >>> >>> seg = Segment(index=5) >>> >>> train0 = SpikeTrain(times=[.01, 3.3, 9.3], units='sec', t_stop=10) >>> seg.spiketrains.append(train0) >>> >>> train1 = SpikeTrain(times=[100.01, 103.3, 109.3], units='sec', ... t_stop=110) >>> seg.spiketrains.append(train1) >>> >>> sig0 = AnalogSignal(signal=[.01, 3.3, 9.3], units='uV', ... sampling_rate=1*Hz) >>> seg.analogsignals.append(sig0) >>> >>> sig1 = AnalogSignal(signal=[100.01, 103.3, 109.3], units='nA', ... sampling_period=.1*s) >>> seg.analogsignals.append(sig1) *Required attributes/properties*: None *Recommended attributes/properties*: :name: (str) A label for the dataset. :description: (str) Text description. :file_origin: (str) Filesystem path or URL of the original data file. :file_datetime: (datetime) The creation date and time of the original data file. :rec_datetime: (datetime) The date and time of the original recording :index: (int) You can use this to define a temporal ordering of your Segment. For instance you could use this for trial numbers. Note: Any other additional arguments are assumed to be user-specific metadata and stored in :attr:`annotations`. *Properties available on this object*: :all_data: (list) A list of all child objects in the :class:`Segment`. *Container of*: :class:`Epoch` :class:`Event` :class:`AnalogSignal` :class:`IrregularlySampledSignal` :class:`SpikeTrain` ''' _data_child_objects = ('AnalogSignal', 'Epoch', 'Event', 'IrregularlySampledSignal', 'SpikeTrain') _single_parent_objects = ('Block',) _recommended_attrs = ((('file_datetime', datetime), ('rec_datetime', datetime), ('index', int)) + Container._recommended_attrs) _repr_pretty_containers = ('analogsignals',) def __init__(self, name=None, description=None, file_origin=None, file_datetime=None, rec_datetime=None, index=None, **annotations): ''' Initialize a new :class:`Segment` instance. ''' super(Segment, self).__init__(name=name, description=description, file_origin=file_origin, **annotations) self.file_datetime = file_datetime self.rec_datetime = rec_datetime self.index = index # t_start attribute is handled as a property so type checking can be done @property def t_start(self): ''' Time when first signal begins. ''' t_starts = [sig.t_start for sig in self.analogsignals + self.spiketrains + self.irregularlysampledsignals] t_starts += [e.times[0] for e in self.epochs + self.events if len(e.times) > 0] # t_start is not defined if no children are present if len(t_starts) == 0: return None t_start = min(t_starts) return t_start # t_stop attribute is handled as a property so type checking can be done @property def t_stop(self): ''' Time when last signal ends. 
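# --- Illustrative sketch (not part of the neo 0.7.2 sources). It shows how the
# derived Segment.t_start / Segment.t_stop properties defined here aggregate
# over all child objects; values are invented and output comments approximate.
import quantities as pq
from neo.core import Segment, SpikeTrain, Event

seg = Segment(name='trial 0')
seg.spiketrains.append(SpikeTrain([1.0, 4.0, 9.0] * pq.s,
                                  t_start=0.0 * pq.s, t_stop=10.0 * pq.s))
seg.events.append(Event([2.0, 12.0] * pq.s))
print(seg.t_start)    # expected: 0.0 s  (earliest start over all children)
print(seg.t_stop)     # expected: 12.0 s (latest end over all children)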
''' t_stops = [sig.t_stop for sig in self.analogsignals + self.spiketrains + self.irregularlysampledsignals] t_stops += [e.times[-1] for e in self.epochs + self.events if len(e.times) > 0] # t_stop is not defined if no children are present if len(t_stops) == 0: return None t_stop = max(t_stops) return t_stop def take_spiketrains_by_unit(self, unit_list=None): ''' Return :class:`SpikeTrains` in the :class:`Segment` that are also in a :class:`Unit` in the :attr:`unit_list` provided. ''' if unit_list is None: return [] spiketrain_list = [] for spiketrain in self.spiketrains: if spiketrain.unit in unit_list: spiketrain_list.append(spiketrain) return spiketrain_list # def take_analogsignal_by_unit(self, unit_list=None): # ''' # Return :class:`AnalogSignal` objects in the :class:`Segment` that are # have the same :attr:`channel_index` as any of the :class:`Unit: objects # in the :attr:`unit_list` provided. # ''' # if unit_list is None: # return [] # channel_indexes = [] # for unit in unit_list: # if unit.channel_indexes is not None: # channel_indexes.extend(unit.channel_indexes) # return self.take_analogsignal_by_channelindex(channel_indexes) # # def take_analogsignal_by_channelindex(self, channel_indexes=None): # ''' # Return :class:`AnalogSignal` objects in the :class:`Segment` that have # a :attr:`channel_index` that is in the :attr:`channel_indexes` # provided. # ''' # if channel_indexes is None: # return [] # anasig_list = [] # for anasig in self.analogsignals: # if anasig.channel_index in channel_indexes: # anasig_list.append(anasig) # return anasig_list def take_slice_of_analogsignalarray_by_unit(self, unit_list=None): ''' Return slices of the :class:`AnalogSignal` objects in the :class:`Segment` that correspond to a :attr:`channel_index` of any of the :class:`Unit` objects in the :attr:`unit_list` provided. ''' if unit_list is None: return [] indexes = [] for unit in unit_list: if unit.get_channel_indexes() is not None: indexes.extend(unit.get_channel_indexes()) return self.take_slice_of_analogsignalarray_by_channelindex(indexes) def take_slice_of_analogsignalarray_by_channelindex(self, channel_indexes=None): ''' Return slices of the :class:`AnalogSignalArrays` in the :class:`Segment` that correspond to the :attr:`channel_indexes` provided. ''' if channel_indexes is None: return [] sliced_sigarrays = [] for sigarr in self.analogsignals: if sigarr.get_channel_index() is not None: ind = np.in1d(sigarr.get_channel_index(), channel_indexes) sliced_sigarrays.append(sigarr[:, ind]) return sliced_sigarrays def construct_subsegment_by_unit(self, unit_list=None): ''' Return a new :class:`Segment that contains the :class:`AnalogSignal`, :class:`AnalogSignal`, and :class:`SpikeTrain` objects common to both the current :class:`Segment` and any :class:`Unit` in the :attr:`unit_list` provided. *Example*:: >>> from neo.core import (Segment, Block, Unit, SpikeTrain, ... ChannelIndex) >>> >>> blk = Block() >>> chx = ChannelIndex(name='group0') >>> blk.channel_indexes = [chx] >>> >>> for ind in range(5): ... unit = Unit(name='Unit #%s' % ind, channel_index=ind) ... chx.units.append(unit) ... >>> >>> for ind in range(3): ... seg = Segment(name='Simulation #%s' % ind) ... blk.segments.append(seg) ... for unit in chx.units: ... train = SpikeTrain([1, 2, 3], units='ms', t_start=0., ... t_stop=10) ... train.unit = unit ... unit.spiketrains.append(train) ... seg.spiketrains.append(train) ... 
>>> >>> seg0 = blk.segments[-1] >>> seg1 = seg0.construct_subsegment_by_unit(chx.units[:2]) >>> len(seg0.spiketrains) 5 >>> len(seg1.spiketrains) 2 ''' seg = Segment() seg.spiketrains = self.take_spiketrains_by_unit(unit_list) seg.analogsignals = \ self.take_slice_of_analogsignalarray_by_unit(unit_list) # TODO copy others attributes return seg neo-0.7.2/neo/core/spiketrain.py0000600013464101346420000010053113507452453014732 0ustar yohyoh# -*- coding: utf-8 -*- ''' This module implements :class:`SpikeTrain`, an array of spike times. :class:`SpikeTrain` derives from :class:`BaseNeo`, from :module:`neo.core.baseneo`, and from :class:`quantites.Quantity`, which inherits from :class:`numpy.array`. Inheritance from :class:`numpy.array` is explained here: http://docs.scipy.org/doc/numpy/user/basics.subclassing.html In brief: * Initialization of a new object from constructor happens in :meth:`__new__`. This is where user-specified attributes are set. * :meth:`__array_finalize__` is called for all new objects, including those created by slicing. This is where attributes are copied over from the old object. ''' # needed for python 3 compatibility from __future__ import absolute_import, division, print_function import sys import copy import warnings import numpy as np import quantities as pq from neo.core.baseneo import BaseNeo, MergeError, merge_annotations from neo.core.dataobject import DataObject, ArrayDict def check_has_dimensions_time(*values): ''' Verify that all arguments have a dimensionality that is compatible with time. ''' errmsgs = [] for value in values: dim = value.dimensionality if (len(dim) != 1 or list(dim.values())[0] != 1 or not isinstance(list(dim.keys())[0], pq.UnitTime)): errmsgs.append("value %s has dimensions %s, not [time]" % (value, dim.simplified)) if errmsgs: raise ValueError("\n".join(errmsgs)) def _check_time_in_range(value, t_start, t_stop, view=False): ''' Verify that all times in :attr:`value` are between :attr:`t_start` and :attr:`t_stop` (inclusive. If :attr:`view` is True, vies are used for the test. Using drastically increases the speed, but is only safe if you are certain that the dtype and units are the same ''' if t_start > t_stop: raise ValueError("t_stop (%s) is before t_start (%s)" % (t_stop, t_start)) if not value.size: return if view: value = value.view(np.ndarray) t_start = t_start.view(np.ndarray) t_stop = t_stop.view(np.ndarray) if value.min() < t_start: raise ValueError("The first spike (%s) is before t_start (%s)" % (value, t_start)) if value.max() > t_stop: raise ValueError("The last spike (%s) is after t_stop (%s)" % (value, t_stop)) def _check_waveform_dimensions(spiketrain): ''' Verify that waveform is compliant with the waveform definition as quantity array 3D (spike, channel_index, time) ''' if not spiketrain.size: return waveforms = spiketrain.waveforms if (waveforms is None) or (not waveforms.size): return if waveforms.shape[0] != len(spiketrain): raise ValueError("Spiketrain length (%s) does not match to number of " "waveforms present (%s)" % (len(spiketrain), waveforms.shape[0])) def _new_spiketrain(cls, signal, t_stop, units=None, dtype=None, copy=True, sampling_rate=1.0 * pq.Hz, t_start=0.0 * pq.s, waveforms=None, left_sweep=None, name=None, file_origin=None, description=None, array_annotations=None, annotations=None, segment=None, unit=None): ''' A function to map :meth:`BaseAnalogSignal.__new__` to function that does not do the unit checking. This is needed for :module:`pickle` to work. 
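# --- Illustrative sketch (not part of the neo 0.7.2 sources). It illustrates
# the [t_start, t_stop] containment check performed when a SpikeTrain is
# created (see _check_time_in_range above); values are invented and the error
# message is paraphrased.
import quantities as pq
from neo.core import SpikeTrain

train = SpikeTrain([0.5, 3.2, 9.9] * pq.s, t_start=0.0 * pq.s, t_stop=10.0 * pq.s)
print(train.t_start, train.t_stop)    # expected: 0.0 s 10.0 s
try:
    SpikeTrain([0.5, 11.0] * pq.s, t_stop=10.0 * pq.s)   # spike after t_stop
except ValueError as err:
    print(err)    # "... is after t_stop ..."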
''' if annotations is None: annotations = {} obj = SpikeTrain(signal, t_stop, units, dtype, copy, sampling_rate, t_start, waveforms, left_sweep, name, file_origin, description, array_annotations, **annotations) obj.segment = segment obj.unit = unit return obj class SpikeTrain(DataObject): ''' :class:`SpikeTrain` is a :class:`Quantity` array of spike times. It is an ensemble of action potentials (spikes) emitted by the same unit in a period of time. *Usage*:: >>> from neo.core import SpikeTrain >>> from quantities import s >>> >>> train = SpikeTrain([3, 4, 5]*s, t_stop=10.0) >>> train2 = train[1:3] >>> >>> train.t_start array(0.0) * s >>> train.t_stop array(10.0) * s >>> train >>> train2 *Required attributes/properties*: :times: (quantity array 1D, numpy array 1D, or list) The times of each spike. :units: (quantity units) Required if :attr:`times` is a list or :class:`~numpy.ndarray`, not if it is a :class:`~quantites.Quantity`. :t_stop: (quantity scalar, numpy scalar, or float) Time at which :class:`SpikeTrain` ended. This will be converted to the same units as :attr:`times`. This argument is required because it specifies the period of time over which spikes could have occurred. Note that :attr:`t_start` is highly recommended for the same reason. Note: If :attr:`times` contains values outside of the range [t_start, t_stop], an Exception is raised. *Recommended attributes/properties*: :name: (str) A label for the dataset. :description: (str) Text description. :file_origin: (str) Filesystem path or URL of the original data file. :t_start: (quantity scalar, numpy scalar, or float) Time at which :class:`SpikeTrain` began. This will be converted to the same units as :attr:`times`. Default: 0.0 seconds. :waveforms: (quantity array 3D (spike, channel_index, time)) The waveforms of each spike. :sampling_rate: (quantity scalar) Number of samples per unit time for the waveforms. :left_sweep: (quantity array 1D) Time from the beginning of the waveform to the trigger time of the spike. :sort: (bool) If True, the spike train will be sorted by time. *Optional attributes/properties*: :dtype: (numpy dtype or str) Override the dtype of the signal array. :copy: (bool) Whether to copy the times array. True by default. Must be True when you request a change of units or dtype. :array_annotations: (dict) Dict mapping strings to numpy arrays containing annotations \ for all data points Note: Any other additional arguments are assumed to be user-specific metadata and stored in :attr:`annotations`. *Properties available on this object*: :sampling_period: (quantity scalar) Interval between two samples. (1/:attr:`sampling_rate`) :duration: (quantity scalar) Duration over which spikes can occur, read-only. (:attr:`t_stop` - :attr:`t_start`) :spike_duration: (quantity scalar) Duration of a waveform, read-only. (:attr:`waveform`.shape[2] * :attr:`sampling_period`) :right_sweep: (quantity scalar) Time from the trigger times of the spikes to the end of the waveforms, read-only. (:attr:`left_sweep` + :attr:`spike_duration`) :times: (quantity array 1D) Returns the :class:`SpikeTrain` as a quantity array. *Slicing*: :class:`SpikeTrain` objects can be sliced. When this occurs, a new :class:`SpikeTrain` (actually a view) is returned, with the same metadata, except that :attr:`waveforms` is also sliced in the same way (along dimension 0). Note that t_start and t_stop are not changed automatically, although you can still manually change them. 
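# --- Illustrative sketch (not part of the neo 0.7.2 sources). It mirrors the
# slicing note above: a slice returns a new SpikeTrain with the same metadata,
# and t_start / t_stop are NOT adjusted automatically. Output comments are
# approximate.
import quantities as pq
from neo.core import SpikeTrain

train = SpikeTrain([3.0, 4.0, 5.0] * pq.s, t_stop=10.0 * pq.s)
train2 = train[1:3]
print(train2.times)                     # expected: [4. 5.] s
print(train2.t_start, train2.t_stop)    # still 0.0 s and 10.0 s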
''' _single_parent_objects = ('Segment', 'Unit') _quantity_attr = 'times' _necessary_attrs = (('times', pq.Quantity, 1), ('t_start', pq.Quantity, 0), ('t_stop', pq.Quantity, 0)) _recommended_attrs = ((('waveforms', pq.Quantity, 3), ('left_sweep', pq.Quantity, 0), ('sampling_rate', pq.Quantity, 0)) + BaseNeo._recommended_attrs) def __new__(cls, times, t_stop, units=None, dtype=None, copy=True, sampling_rate=1.0 * pq.Hz, t_start=0.0 * pq.s, waveforms=None, left_sweep=None, name=None, file_origin=None, description=None, array_annotations=None, **annotations): ''' Constructs a new :clas:`Spiketrain` instance from data. This is called whenever a new :class:`SpikeTrain` is created from the constructor, but not when slicing. ''' if len(times) != 0 and waveforms is not None and len(times) != waveforms.shape[0]: # len(times)!=0 has been used to workaround a bug occuring during neo import raise ValueError("the number of waveforms should be equal to the number of spikes") # Make sure units are consistent # also get the dimensionality now since it is much faster to feed # that to Quantity rather than a unit if units is None: # No keyword units, so get from `times` try: dim = times.units.dimensionality except AttributeError: raise ValueError('you must specify units') else: if hasattr(units, 'dimensionality'): dim = units.dimensionality else: dim = pq.quantity.validate_dimensionality(units) if hasattr(times, 'dimensionality'): if times.dimensionality.items() == dim.items(): units = None # units will be taken from times, avoids copying else: if not copy: raise ValueError("cannot rescale and return view") else: # this is needed because of a bug in python-quantities # see issue # 65 in python-quantities github # remove this if it is fixed times = times.rescale(dim) if dtype is None: if not hasattr(times, 'dtype'): dtype = np.float elif hasattr(times, 'dtype') and times.dtype != dtype: if not copy: raise ValueError("cannot change dtype and return view") # if t_start.dtype or t_stop.dtype != times.dtype != dtype, # _check_time_in_range can have problems, so we set the t_start # and t_stop dtypes to be the same as times before converting them # to dtype below # see ticket #38 if hasattr(t_start, 'dtype') and t_start.dtype != times.dtype: t_start = t_start.astype(times.dtype) if hasattr(t_stop, 'dtype') and t_stop.dtype != times.dtype: t_stop = t_stop.astype(times.dtype) # check to make sure the units are time # this approach is orders of magnitude faster than comparing the # reference dimensionality if (len(dim) != 1 or list(dim.values())[0] != 1 or not isinstance(list(dim.keys())[0], pq.UnitTime)): ValueError("Unit has dimensions %s, not [time]" % dim.simplified) # Construct Quantity from data obj = pq.Quantity(times, units=units, dtype=dtype, copy=copy).view(cls) # if the dtype and units match, just copy the values here instead # of doing the much more expensive creation of a new Quantity # using items() is orders of magnitude faster if (hasattr(t_start, 'dtype') and t_start.dtype == obj.dtype and hasattr(t_start, 'dimensionality') and t_start.dimensionality.items() == dim.items()): obj.t_start = t_start.copy() else: obj.t_start = pq.Quantity(t_start, units=dim, dtype=obj.dtype) if (hasattr(t_stop, 'dtype') and t_stop.dtype == obj.dtype and hasattr(t_stop, 'dimensionality') and t_stop.dimensionality.items() == dim.items()): obj.t_stop = t_stop.copy() else: obj.t_stop = pq.Quantity(t_stop, units=dim, dtype=obj.dtype) # Store attributes obj.waveforms = waveforms obj.left_sweep = left_sweep obj.sampling_rate = 
sampling_rate # parents obj.segment = None obj.unit = None # Error checking (do earlier?) _check_time_in_range(obj, obj.t_start, obj.t_stop, view=True) return obj def __init__(self, times, t_stop, units=None, dtype=np.float, copy=True, sampling_rate=1.0 * pq.Hz, t_start=0.0 * pq.s, waveforms=None, left_sweep=None, name=None, file_origin=None, description=None, array_annotations=None, **annotations): ''' Initializes a newly constructed :class:`SpikeTrain` instance. ''' # This method is only called when constructing a new SpikeTrain, # not when slicing or viewing. We use the same call signature # as __new__ for documentation purposes. Anything not in the call # signature is stored in annotations. # Calls parent __init__, which grabs universally recommended # attributes and sets up self.annotations DataObject.__init__(self, name=name, file_origin=file_origin, description=description, array_annotations=array_annotations, **annotations) def _repr_pretty_(self, pp, cycle): super(SpikeTrain, self)._repr_pretty_(pp, cycle) def rescale(self, units): ''' Return a copy of the :class:`SpikeTrain` converted to the specified units ''' obj = super(SpikeTrain, self).rescale(units) obj.t_start = self.t_start.rescale(units) obj.t_stop = self.t_stop.rescale(units) obj.unit = self.unit return obj def __reduce__(self): ''' Map the __new__ function onto _new_BaseAnalogSignal, so that pickle works ''' import numpy return _new_spiketrain, (self.__class__, numpy.array(self), self.t_stop, self.units, self.dtype, True, self.sampling_rate, self.t_start, self.waveforms, self.left_sweep, self.name, self.file_origin, self.description, self.array_annotations, self.annotations, self.segment, self.unit) def __array_finalize__(self, obj): ''' This is called every time a new :class:`SpikeTrain` is created. It is the appropriate place to set default values for attributes for :class:`SpikeTrain` constructed by slicing or viewing. User-specified values are only relevant for construction from constructor, and these are set in __new__. Then they are just copied over here. Note that the :attr:`waveforms` attibute is not sliced here. Nor is :attr:`t_start` or :attr:`t_stop` modified. ''' # This calls Quantity.__array_finalize__ which deals with # dimensionality super(SpikeTrain, self).__array_finalize__(obj) # Supposedly, during initialization from constructor, obj is supposed # to be None, but this never happens. It must be something to do # with inheritance from Quantity. if obj is None: return # Set all attributes of the new object `self` from the attributes # of `obj`. For instance, when slicing, we want to copy over the # attributes of the original object. self.t_start = getattr(obj, 't_start', None) self.t_stop = getattr(obj, 't_stop', None) self.waveforms = getattr(obj, 'waveforms', None) self.left_sweep = getattr(obj, 'left_sweep', None) self.sampling_rate = getattr(obj, 'sampling_rate', None) self.segment = getattr(obj, 'segment', None) self.unit = getattr(obj, 'unit', None) # The additional arguments self.annotations = getattr(obj, 'annotations', {}) # Add empty array annotations, because they cannot always be copied, # but do not overwrite existing ones from slicing etc. 
# This ensures the attribute exists if not hasattr(self, 'array_annotations'): self.array_annotations = ArrayDict(self._get_arr_ann_length()) # Note: Array annotations have to be changed when slicing or initializing an object, # copying them over in spite of changed data would result in unexpected behaviour # Globally recommended attributes self.name = getattr(obj, 'name', None) self.file_origin = getattr(obj, 'file_origin', None) self.description = getattr(obj, 'description', None) if hasattr(obj, 'lazy_shape'): self.lazy_shape = obj.lazy_shape def __deepcopy__(self, memo): cls = self.__class__ new_st = cls(np.array(self), self.t_stop, units=self.units, dtype=self.dtype, copy=True, sampling_rate=self.sampling_rate, t_start=self.t_start, waveforms=self.waveforms, left_sweep=self.left_sweep, name=self.name, file_origin=self.file_origin, description=self.description) new_st.__dict__.update(self.__dict__) memo[id(self)] = new_st for k, v in self.__dict__.items(): try: setattr(new_st, k, copy.deepcopy(v, memo)) except TypeError: setattr(new_st, k, v) return new_st def __repr__(self): ''' Returns a string representing the :class:`SpikeTrain`. ''' return '' % ( super(SpikeTrain, self).__repr__(), self.t_start, self.t_stop) def sort(self): ''' Sorts the :class:`SpikeTrain` and its :attr:`waveforms`, if any, by time. ''' # sort the waveforms by the times sort_indices = np.argsort(self) if self.waveforms is not None and self.waveforms.any(): self.waveforms = self.waveforms[sort_indices] self.array_annotate(**copy.deepcopy(self.array_annotations_at_index(sort_indices))) # now sort the times # We have sorted twice, but `self = self[sort_indices]` introduces # a dependency on the slicing functionality of SpikeTrain. super(SpikeTrain, self).sort() def __getslice__(self, i, j): ''' Get a slice from :attr:`i` to :attr:`j`. Doesn't get called in Python 3, :meth:`__getitem__` is called instead ''' return self.__getitem__(slice(i, j)) def __add__(self, time): ''' Shifts the time point of all spikes by adding the amount in :attr:`time` (:class:`Quantity`) If `time` is a scalar, this also shifts :attr:`t_start` and :attr:`t_stop`. If `time` is an array, :attr:`t_start` and :attr:`t_stop` are not changed unless some of the new spikes would be outside this range. In this case :attr:`t_start` and :attr:`t_stop` are modified if necessary to ensure they encompass all spikes. It is not possible to add two SpikeTrains (raises ValueError). ''' spikes = self.view(pq.Quantity) check_has_dimensions_time(time) if isinstance(time, SpikeTrain): raise TypeError("Can't add two spike trains") new_times = spikes + time if time.size > 1: t_start = min(self.t_start, np.min(new_times)) t_stop = max(self.t_stop, np.max(new_times)) else: t_start = self.t_start + time t_stop = self.t_stop + time return SpikeTrain(times=new_times, t_stop=t_stop, units=self.units, sampling_rate=self.sampling_rate, t_start=t_start, waveforms=self.waveforms, left_sweep=self.left_sweep, name=self.name, file_origin=self.file_origin, description=self.description, array_annotations=copy.deepcopy(self.array_annotations), **self.annotations) def __sub__(self, time): ''' Shifts the time point of all spikes by subtracting the amount in :attr:`time` (:class:`Quantity`) If `time` is a scalar, this also shifts :attr:`t_start` and :attr:`t_stop`. If `time` is an array, :attr:`t_start` and :attr:`t_stop` are not changed unless some of the new spikes would be outside this range. 
In this case :attr:`t_start` and :attr:`t_stop` are modified if necessary to ensure they encompass all spikes. In general, it is not possible to subtract two SpikeTrain objects (raises ValueError). However, if `time` is itself a SpikeTrain of the same size as the SpikeTrain, returns a Quantities array (since this is often used in checking whether two spike trains are the same or in calculating the inter-spike interval. ''' spikes = self.view(pq.Quantity) check_has_dimensions_time(time) if isinstance(time, SpikeTrain): if self.size == time.size: return spikes - time else: raise TypeError("Can't subtract spike trains with different sizes") else: new_times = spikes - time if time.size > 1: t_start = min(self.t_start, np.min(new_times)) t_stop = max(self.t_stop, np.max(new_times)) else: t_start = self.t_start - time t_stop = self.t_stop - time return SpikeTrain(times=spikes - time, t_stop=t_stop, units=self.units, sampling_rate=self.sampling_rate, t_start=t_start, waveforms=self.waveforms, left_sweep=self.left_sweep, name=self.name, file_origin=self.file_origin, description=self.description, array_annotations=copy.deepcopy(self.array_annotations), **self.annotations) def __getitem__(self, i): ''' Get the item or slice :attr:`i`. ''' obj = super(SpikeTrain, self).__getitem__(i) if hasattr(obj, 'waveforms') and obj.waveforms is not None: obj.waveforms = obj.waveforms.__getitem__(i) try: obj.array_annotate(**copy.deepcopy(self.array_annotations_at_index(i))) except AttributeError: # If Quantity was returned, not SpikeTrain pass return obj def __setitem__(self, i, value): ''' Set the value the item or slice :attr:`i`. ''' if not hasattr(value, "units"): value = pq.Quantity(value, units=self.units) # or should we be strict: raise ValueError( # "Setting a value # requires a quantity")? # check for values outside t_start, t_stop _check_time_in_range(value, self.t_start, self.t_stop) super(SpikeTrain, self).__setitem__(i, value) def __setslice__(self, i, j, value): if not hasattr(value, "units"): value = pq.Quantity(value, units=self.units) _check_time_in_range(value, self.t_start, self.t_stop) super(SpikeTrain, self).__setslice__(i, j, value) def _copy_data_complement(self, other, deep_copy=False): ''' Copy the metadata from another :class:`SpikeTrain`. 
Note: Array annotations can not be copied here because length of data can change ''' # Note: Array annotations cannot be copied because length of data can be changed # here which would cause inconsistencies for attr in ("left_sweep", "sampling_rate", "name", "file_origin", "description", "annotations"): attr_value = getattr(other, attr, None) if deep_copy: attr_value = copy.deepcopy(attr_value) setattr(self, attr, attr_value) def duplicate_with_new_data(self, signal, t_start=None, t_stop=None, waveforms=None, deep_copy=True, units=None): ''' Create a new :class:`SpikeTrain` with the same metadata but different data (times, t_start, t_stop) Note: Array annotations can not be copied here because length of data can change ''' # using previous t_start and t_stop if no values are provided if t_start is None: t_start = self.t_start if t_stop is None: t_stop = self.t_stop if waveforms is None: waveforms = self.waveforms if units is None: units = self.units else: units = pq.quantity.validate_dimensionality(units) new_st = self.__class__(signal, t_start=t_start, t_stop=t_stop, waveforms=waveforms, units=units) new_st._copy_data_complement(self, deep_copy=deep_copy) # Note: Array annotations are not copied here, because length of data could change # overwriting t_start and t_stop with new values new_st.t_start = t_start new_st.t_stop = t_stop # consistency check _check_time_in_range(new_st, new_st.t_start, new_st.t_stop, view=False) _check_waveform_dimensions(new_st) return new_st def time_slice(self, t_start, t_stop): ''' Creates a new :class:`SpikeTrain` corresponding to the time slice of the original :class:`SpikeTrain` between (and including) times :attr:`t_start` and :attr:`t_stop`. Either parameter can also be None to use infinite endpoints for the time interval. ''' _t_start = t_start _t_stop = t_stop if t_start is None: _t_start = -np.inf if t_stop is None: _t_stop = np.inf indices = (self >= _t_start) & (self <= _t_stop) new_st = self[indices] new_st.t_start = max(_t_start, self.t_start) new_st.t_stop = min(_t_stop, self.t_stop) if self.waveforms is not None: new_st.waveforms = self.waveforms[indices] return new_st def merge(self, other): ''' Merge another :class:`SpikeTrain` into this one. The times of the :class:`SpikeTrain` objects combined in one array and sorted. If the attributes of the two :class:`SpikeTrain` are not compatible, an Exception is raised. 
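A hedged usage sketch (``st_a`` and ``st_b`` are illustrative names; both trains must share ``t_start``, ``t_stop``, ``sampling_rate`` and ``left_sweep`` for the merge to succeed)::

    >>> import quantities as pq
    >>> from neo.core import SpikeTrain
    >>> st_a = SpikeTrain([0.5, 3.0] * pq.s, t_start=0.0 * pq.s, t_stop=10.0 * pq.s)
    >>> st_b = SpikeTrain([1.0, 4.0] * pq.s, t_start=0.0 * pq.s, t_stop=10.0 * pq.s)
    >>> merged = st_a.merge(st_b)    # times are combined into one sorted array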
''' if self.sampling_rate != other.sampling_rate: raise MergeError("Cannot merge, different sampling rates") if self.t_start != other.t_start: raise MergeError("Cannot merge, different t_start") if self.t_stop != other.t_stop: raise MergeError("Cannot merge, different t_stop") if self.left_sweep != other.left_sweep: raise MergeError("Cannot merge, different left_sweep") if self.segment != other.segment: raise MergeError("Cannot merge these two signals as they belong to" " different segments.") if hasattr(self, "lazy_shape"): if hasattr(other, "lazy_shape"): merged_lazy_shape = (self.lazy_shape[0] + other.lazy_shape[0]) else: raise MergeError("Cannot merge a lazy object with a real" " object.") if other.units != self.units: other = other.rescale(self.units) wfs = [self.waveforms is not None, other.waveforms is not None] if any(wfs) and not all(wfs): raise MergeError("Cannot merge signal with waveform and signal " "without waveform.") stack = np.concatenate((np.asarray(self), np.asarray(other))) sorting = np.argsort(stack) stack = stack[sorting] kwargs = {} kwargs['array_annotations'] = self._merge_array_annotations(other, sorting=sorting) for name in ("name", "description", "file_origin"): attr_self = getattr(self, name) attr_other = getattr(other, name) if attr_self == attr_other: kwargs[name] = attr_self else: kwargs[name] = "merge(%s, %s)" % (attr_self, attr_other) merged_annotations = merge_annotations(self.annotations, other.annotations) kwargs.update(merged_annotations) train = SpikeTrain(stack, units=self.units, dtype=self.dtype, copy=False, t_start=self.t_start, t_stop=self.t_stop, sampling_rate=self.sampling_rate, left_sweep=self.left_sweep, **kwargs) if all(wfs): wfs_stack = np.vstack((self.waveforms, other.waveforms)) wfs_stack = wfs_stack[sorting] train.waveforms = wfs_stack train.segment = self.segment if train.segment is not None: self.segment.spiketrains.append(train) if hasattr(self, "lazy_shape"): train.lazy_shape = merged_lazy_shape return train def _merge_array_annotations(self, other, sorting=None): ''' Merges array annotations of 2 different objects. The merge happens in such a way that the result fits the merged data. In general this means concatenating the arrays from the 2 objects. If an annotation is only present in one of the objects, it will be omitted. Apart from that the array_annotations need to be sorted according to the sorting of the spikes.
:return Merged array_annotations ''' assert sorting is not None, "The order of the merged spikes must be known" merged_array_annotations = {} omitted_keys_self = [] keys = self.array_annotations.keys() for key in keys: try: self_ann = copy.deepcopy(self.array_annotations[key]) other_ann = copy.deepcopy(other.array_annotations[key]) if isinstance(self_ann, pq.Quantity): other_ann.rescale(self_ann.units) arr_ann = np.concatenate([self_ann, other_ann]) * self_ann.units else: arr_ann = np.concatenate([self_ann, other_ann]) merged_array_annotations[key] = arr_ann[sorting] # Annotation only available in 'self', must be skipped # Ignore annotations present only in one of the SpikeTrains except KeyError: omitted_keys_self.append(key) continue omitted_keys_other = [key for key in other.array_annotations if key not in self.array_annotations] if omitted_keys_self or omitted_keys_other: warnings.warn("The following array annotations were omitted, because they were only " "present in one of the merged objects: {} from the one that was merged " "into and {} from the one that was merged into the other" "".format(omitted_keys_self, omitted_keys_other), UserWarning) return merged_array_annotations @property def times(self): ''' Returns the :class:`SpikeTrain` as a quantity array. ''' return pq.Quantity(self) @property def duration(self): ''' Duration over which spikes can occur, (:attr:`t_stop` - :attr:`t_start`) ''' if self.t_stop is None or self.t_start is None: return None return self.t_stop - self.t_start @property def spike_duration(self): ''' Duration of a waveform. (:attr:`waveform`.shape[2] * :attr:`sampling_period`) ''' if self.waveforms is None or self.sampling_rate is None: return None return self.waveforms.shape[2] / self.sampling_rate @property def sampling_period(self): ''' Interval between two samples. (1/:attr:`sampling_rate`) ''' if self.sampling_rate is None: return None return 1.0 / self.sampling_rate @sampling_period.setter def sampling_period(self, period): ''' Setter for :attr:`sampling_period` ''' if period is None: self.sampling_rate = None else: self.sampling_rate = 1.0 / period @property def right_sweep(self): ''' Time from the trigger times of the spikes to the end of the waveforms. (:attr:`left_sweep` + :attr:`spike_duration`) ''' dur = self.spike_duration if self.left_sweep is None or dur is None: return None return self.left_sweep + dur neo-0.7.2/neo/core/unit.py0000600013464101346420000000511113477534746013552 0ustar yohyoh# -*- coding: utf-8 -*- ''' This module defines :class:`Unit`, a container of :class:`SpikeTrain` objects from a unit. :class:`Unit` derives from :class:`Container`, from :module:`neo.core.container`. ''' # needed for python 3 compatibility from __future__ import absolute_import, division, print_function import numpy as np from neo.core.container import Container class Unit(Container): ''' A container of :class:`SpikeTrain` objects from a unit. A :class:`Unit` regroups all the :class:`SpikeTrain` objects that were emitted by a single spike source during a :class:`Block`. A spike source is often a single neuron but doesn't have to be. The spikes may come from different :class:`Segment` objects within the :class:`Block`, so this object is not contained in the usual :class:`Block`/ :class:`Segment`/:class:`SpikeTrain` hierarchy. A :class:`Unit` is linked to :class:`ChannelIndex` objects from which it was detected. With tetrodes, for instance, multiple channels may record the same :class:`Unit`. 
*Usage*:: >>> from neo.core import Unit, SpikeTrain >>> >>> unit = Unit(name='pyramidal neuron') >>> >>> train0 = SpikeTrain(times=[.01, 3.3, 9.3], units='sec', t_stop=10) >>> unit.spiketrains.append(train0) >>> >>> train1 = SpikeTrain(times=[100.01, 103.3, 109.3], units='sec', ... t_stop=110) >>> unit.spiketrains.append(train1) *Required attributes/properties*: None *Recommended attributes/properties*: :name: (str) A label for the dataset. :description: (str) Text description. :file_origin: (str) Filesystem path or URL of the original data file. Note: Any other additional arguments are assumed to be user-specific metadata and stored in :attr:`annotations`. *Container of*: :class:`SpikeTrain` ''' _data_child_objects = ('SpikeTrain',) _single_parent_objects = ('ChannelIndex',) _recommended_attrs = Container._recommended_attrs def __init__(self, name=None, description=None, file_origin=None, **annotations): ''' Initialize a new :clas:`Unit` instance (spike source) ''' super(Unit, self).__init__(name=name, description=description, file_origin=file_origin, **annotations) self.channel_index = None def get_channel_indexes(self): """ """ if self.channel_index: return self.channel_index.index else: return None neo-0.7.2/neo/io/0002700013464101346420000000000013511307751011661 5ustar yohyohneo-0.7.2/neo/io/__init__.py0000600013464101346420000001162413507452474014006 0ustar yohyoh# -*- coding: utf-8 -*- """ :mod:`neo.io` provides classes for reading and/or writing electrophysiological data files. Note that if the package dependency is not satisfied for one io, it does not raise an error but a warning. :attr:`neo.io.iolist` provides a list of successfully imported io classes. Functions: .. autofunction:: neo.io.get_io Classes: .. autoclass:: neo.io.AlphaOmegaIO .. autoclass:: neo.io.AsciiSignalIO .. autoclass:: neo.io.AsciiSpikeTrainIO .. autoclass:: neo.io.AxographIO .. autoclass:: neo.io.AxonIO .. autoclass:: neo.io.BCI2000IO .. autoclass:: neo.io.BlackrockIO .. autoclass:: neo.io.BrainVisionIO .. autoclass:: neo.io.BrainwareDamIO .. autoclass:: neo.io.BrainwareF32IO .. autoclass:: neo.io.BrainwareSrcIO .. autoclass:: neo.io.ElanIO .. autoclass:: neo.io.ElphyIO .. autoclass:: neo.io.IgorIO .. autoclass:: neo.io.IntanIO .. autoclass:: neo.io.KlustaKwikIO .. autoclass:: neo.io.KwikIO .. autoclass:: neo.io.MicromedIO .. autoclass:: neo.io.NeoHdf5IO .. autoclass:: neo.io.NeoMatlabIO .. autoclass:: neo.io.NestIO .. autoclass:: neo.io.NeuralynxIO .. autoclass:: neo.io.NeuroExplorerIO .. autoclass:: neo.io.NeuroScopeIO .. autoclass:: neo.io.NeuroshareIO .. autoclass:: neo.io.NixIO .. autoclass:: neo.io.NSDFIO .. autoclass:: neo.io.OpenEphysIO .. autoclass:: neo.io.PickleIO .. autoclass:: neo.io.PlexonIO .. autoclass:: neo.io.RawBinarySignalIO .. autoclass:: neo.io.RawMCSIO .. autoclass:: neo.io.StimfitIO .. autoclass:: neo.io.TdtIO .. autoclass:: neo.io.WinEdrIO .. autoclass:: neo.io.WinWcpIO """ import os.path # try to import the neuroshare library. 
# if it is present, use the neuroshareapiio to load neuroshare files # if it is not present, use the neurosharectypesio to load files try: import neuroshare as ns except ImportError as err: from neo.io.neurosharectypesio import NeurosharectypesIO as NeuroshareIO # print("\n neuroshare library not found, loading data with ctypes" ) # print("\n to use the API be sure to install the library found at:") # print("\n www.http://pythonhosted.org/neuroshare/") else: from neo.io.neuroshareapiio import NeuroshareapiIO as NeuroshareIO # print("neuroshare library successfully imported") # print("\n loading with API...") from neo.io.alphaomegaio import AlphaOmegaIO from neo.io.asciisignalio import AsciiSignalIO from neo.io.asciispiketrainio import AsciiSpikeTrainIO from neo.io.axographio import AxographIO from neo.io.axonio import AxonIO from neo.io.blackrockio import BlackrockIO from neo.io.blackrockio_v4 import BlackrockIO as OldBlackrockIO from neo.io.bci2000io import BCI2000IO from neo.io.brainvisionio import BrainVisionIO from neo.io.brainwaredamio import BrainwareDamIO from neo.io.brainwaref32io import BrainwareF32IO from neo.io.brainwaresrcio import BrainwareSrcIO from neo.io.elanio import ElanIO # from neo.io.elphyio import ElphyIO from neo.io.exampleio import ExampleIO from neo.io.igorproio import IgorIO from neo.io.intanio import IntanIO from neo.io.klustakwikio import KlustaKwikIO from neo.io.kwikio import KwikIO from neo.io.micromedio import MicromedIO from neo.io.hdf5io import NeoHdf5IO from neo.io.neomatlabio import NeoMatlabIO from neo.io.nestio import NestIO from neo.io.neuralynxio import NeuralynxIO from neo.io.neuralynxio_v1 import NeuralynxIO as OldNeuralynxIO from neo.io.neuroexplorerio import NeuroExplorerIO from neo.io.neuroscopeio import NeuroScopeIO from neo.io.nixio import NixIO from neo.io.nixio_fr import NixIO as NixIOFr from neo.io.nsdfio import NSDFIO from neo.io.openephysio import OpenEphysIO from neo.io.pickleio import PickleIO from neo.io.plexonio import PlexonIO from neo.io.rawbinarysignalio import RawBinarySignalIO from neo.io.rawmcsio import RawMCSIO from neo.io.spike2io import Spike2IO from neo.io.stimfitio import StimfitIO from neo.io.tdtio import TdtIO from neo.io.winedrio import WinEdrIO from neo.io.winwcpio import WinWcpIO iolist = [ AlphaOmegaIO, AsciiSignalIO, AsciiSpikeTrainIO, AxographIO, AxonIO, BCI2000IO, BlackrockIO, BrainVisionIO, BrainwareDamIO, BrainwareF32IO, BrainwareSrcIO, ElanIO, # ElphyIO, ExampleIO, IgorIO, IntanIO, KlustaKwikIO, KwikIO, MicromedIO, NixIO, # place NixIO before NeoHdf5IO to make it the default for .h5 files NeoHdf5IO, NeoMatlabIO, NestIO, NeuralynxIO, NeuroExplorerIO, NeuroScopeIO, NeuroshareIO, NSDFIO, OpenEphysIO, PickleIO, PlexonIO, RawBinarySignalIO, RawMCSIO, Spike2IO, StimfitIO, TdtIO, WinEdrIO, WinWcpIO ] def get_io(filename, *args, **kwargs): """ Return a Neo IO instance, guessing the type based on the filename suffix. """ extension = os.path.splitext(filename)[1][1:] for io in iolist: if extension in io.extensions: return io(filename, *args, **kwargs) raise IOError("File extension %s not registered" % extension) neo-0.7.2/neo/io/alphaomegaio.py0000600013464101346420000006004313507452453014671 0ustar yohyoh# -*- coding: utf-8 -*- """ Class for reading data from Alpha Omega .map files. This class is an experimental reader with important limitations. See the source code for details of the limitations. The code of this reader is of alpha quality and received very limited testing. 
This code is written from the incomplete file specifications available in: [1] AlphaMap Data Acquisition System User's Manual Version 10.1.1 Section 5 APPENDIX B: ALPHAMAP FILE STRUCTURE, pages 120-140 Edited by ALPHA OMEGA Home Office: P.O. Box 810, Nazareth Illit 17105, Israel http://www.alphaomega-eng.com/ and from the source code of a C software for conversion of .map files to .eeg elan software files : [2] alphamap2eeg 1.0, 12/03/03, Anne CHEYLUS - CNRS ISC UMR 5015 Supported : Read @author : sgarcia, Florent Jaillet """ # NOTE: For some specific types of comments, the following convention is used: # "TODO:" Desirable future evolution # "WARNING:" Information about code that is based on broken or missing # specifications and that might be wrong # Main limitations of this reader: # - The reader is only able to load data stored in data blocks of type 5 # (data block for one channel). In particular it means that it doesn't # support signals stored in blocks of type 7 (data block for multiple # channels). # For more details on these data blocks types, see 5.4.1 and 5.4.2 p 127 in # [1]. # - Rather than supporting all the neo objects types that could be extracted # from the file, all read data are returned in AnalogSignal objects, even for # digital channels or channels containing spiking informations. # - Digital channels are not converted to events or events array as they # should. # - Loading multichannel signals as AnalogSignalArrays is not supported. # - Many data or metadata that are avalaible in the file and that could be # represented in some way in the neo model are not extracted. In particular # scaling of the data and extraction of the units of the signals are not # supported. # - It received very limited testing, exlusively using python 2.6.6. In # particular it has not been tested using Python 3.x. # # These limitations are mainly due to the following reasons: # - Incomplete, unclear and in some places innacurate specifications of the # format in [1]. # - Lack of test files containing all the types of data blocks of interest # (in particular no file with type 7 data block for multiple channels where # available when writing this code). # - Lack of knowledge of the Alphamap software and the associated data models. # - Lack of time (especially as the specifications are incomplete, a lot of # reverse engineering and testing is required, which makes the development of # this IO very painful and long). # needed for python 3 compatibility from __future__ import absolute_import, division # specific imports import datetime import os import struct # file no longer exists in Python3 try: file except NameError: import io file = io.BufferedReader # note neo.core need only numpy and quantities import numpy as np import quantities as pq from neo.io.baseio import BaseIO from neo.core import Block, Segment, AnalogSignal class AlphaOmegaIO(BaseIO): """ Class for reading data from Alpha Omega .map files (experimental) This class is an experimental reader with important limitations. See the source code for details of the limitations. The code of this reader is of alpha quality and received very limited testing. 
Usage: >>> from neo import io >>> r = io.AlphaOmegaIO( filename = 'File_AlphaOmega_1.map') >>> blck = r.read_block() >>> print blck.segments[0].analogsignals """ is_readable = True # This is a reading only class is_writable = False # writing is not supported # This class is able to directly or indirectly read the following kind of # objects supported_objects = [Block, Segment, AnalogSignal] # TODO: Add support for other objects that should be extractable from .map # files (Event, Epoch?, Epoch Array?, SpikeTrain?) # This class can only return a Block readable_objects = [Block] # TODO : create readers for different type of objects (Segment, # AnalogSignal,...) # This class is not able to write objects writeable_objects = [] # This is for GUI stuff : a definition for parameters when reading. read_params = {Block: []} # Writing is not supported, so no GUI stuff write_params = None name = 'AlphaOmega' extensions = ['map'] mode = 'file' def __init__(self, filename=None): """ Arguments: filename : the .map Alpha Omega file name """ BaseIO.__init__(self) self.filename = filename # write is not supported so I do not overload write method from BaseIO def read_block(self, lazy=False): """ Return a Block. """ assert not lazy, 'Do not support lazy' def count_samples(m_length): """ Count the number of signal samples available in a type 5 data block of length m_length """ # for information about type 5 data block, see [1] count = int((m_length - 6) / 2 - 2) # -6 corresponds to the header of block 5, and the -2 take into # account the fact that last 2 values are not available as the 4 # corresponding bytes are coding the time stamp of the beginning # of the block return count # create the neo Block that will be returned at the end blck = Block(file_origin=os.path.basename(self.filename)) blck.file_origin = os.path.basename(self.filename) fid = open(self.filename, 'rb') # NOTE: in the following, the word "block" is used in the sense used in # the alpha-omega specifications (ie a data chunk in the file), rather # than in the sense of the usual Block object in neo # step 1: read the headers of all the data blocks to load the file # structure pos_block = 0 # position of the current block in the file file_blocks = [] # list of data blocks available in the file seg = Segment(file_origin=os.path.basename(self.filename)) seg.file_origin = os.path.basename(self.filename) blck.segments.append(seg) while True: first_4_bytes = fid.read(4) if len(first_4_bytes) < 4: # we have reached the end of the file break else: m_length, m_TypeBlock = struct.unpack('Hcx', first_4_bytes) block = HeaderReader(fid, dict_header_type.get(m_TypeBlock, Type_Unknown)).read_f() block.update({'m_length': m_length, 'm_TypeBlock': m_TypeBlock, 'pos': pos_block}) if m_TypeBlock == '2': # The beginning of the block of type '2' is identical for # all types of channels, but the following part depends on # the type of channel. So we need a special case here. # WARNING: How to check the type of channel is not # described in the documentation. So here I use what is # proposed in the C code [2]. # According to this C code, it seems that the 'm_isAnalog' # is used to distinguished analog and digital channels, and # 'm_Mode' encodes the type of analog channel: # 0 for continuous, 1 for level, 2 for external trigger. # But in some files, I found channels that seemed to be # continuous channels with 'm_Modes' = 128 or 192. So I # decided to consider every channel with 'm_Modes' # different from 1 or 2 as continuous. 
I also couldn't # check that values of 1 and 2 are really for level and # external trigger as I had no test files containing data # of this types. type_subblock = 'unknown_channel_type(m_Mode=' \ + str(block['m_Mode']) + ')' description = Type2_SubBlockUnknownChannels block.update({'m_Name': 'unknown_name'}) if block['m_isAnalog'] == 0: # digital channel type_subblock = 'digital' description = Type2_SubBlockDigitalChannels elif block['m_isAnalog'] == 1: # analog channel if block['m_Mode'] == 1: # level channel type_subblock = 'level' description = Type2_SubBlockLevelChannels elif block['m_Mode'] == 2: # external trigger channel type_subblock = 'external_trigger' description = Type2_SubBlockExtTriggerChannels else: # continuous channel type_subblock = 'continuous(Mode' \ + str(block['m_Mode']) + ')' description = Type2_SubBlockContinuousChannels subblock = HeaderReader(fid, description).read_f() block.update(subblock) block.update({'type_subblock': type_subblock}) file_blocks.append(block) pos_block += m_length fid.seek(pos_block) # step 2: find the available channels list_chan = [] # list containing indexes of channel blocks for ind_block, block in enumerate(file_blocks): if block['m_TypeBlock'] == '2': list_chan.append(ind_block) # step 3: find blocks containing data for the available channels list_data = [] # list of lists of indexes of data blocks # corresponding to each channel for ind_chan, chan in enumerate(list_chan): list_data.append([]) num_chan = file_blocks[chan]['m_numChannel'] for ind_block, block in enumerate(file_blocks): if block['m_TypeBlock'] == '5': if block['m_numChannel'] == num_chan: list_data[ind_chan].append(ind_block) # step 4: compute the length (number of samples) of the channels chan_len = np.zeros(len(list_data), dtype=np.int) for ind_chan, list_blocks in enumerate(list_data): for ind_block in list_blocks: chan_len[ind_chan] += count_samples( file_blocks[ind_block]['m_length']) # step 5: find channels for which data are available ind_valid_chan = np.nonzero(chan_len)[0] # step 6: load the data # TODO give the possibility to load data as AnalogSignalArrays for ind_chan in ind_valid_chan: list_blocks = list_data[ind_chan] ind = 0 # index in the data vector # read time stamp for the beginning of the signal form = '>> from neo import io >>> r = io.AsciiSignalIO(filename='File_asciisignal_2.txt') >>> seg = r.read_segment() >>> print seg.analogsignals [>> from neo import io >>> r = io.AsciiSpikeTrainIO( filename = 'File_ascii_spiketrain_1.txt') >>> seg = r.read_segment() >>> print seg.spiketrains # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE [>> import neo >>> r = neo.io.AxographIO(filename=filename) >>> blk = r.read_block() >>> display(blk) >>> # get signals >>> seg_index = 0 # episode number >>> sigs = [sig for sig in blk.segments[seg_index].analogsignals ... 
if sig.name in channel_names] >>> display(sigs) >>> # get event markers (same for all segments/episodes) >>> ev = blk.segments[0].events[0] >>> print([ev for ev in zip(ev.times, ev.labels)]) >>> # get interval bars (same for all segments/episodes) >>> ep = blk.segments[0].epochs[0] >>> print([ep for ep in zip(ep.times, ep.durations, ep.labels)]) >>> # get notes >>> print(blk.annotations['notes']) """ name = 'AxographIO' description = 'This IO reads .axgd/.axgx files created with AxoGraph' _prefered_signal_group_mode = 'split-all' def __init__(self, filename='', force_single_segment=False): AxographRawIO.__init__(self, filename, force_single_segment) BaseFromRaw.__init__(self, filename) neo-0.7.2/neo/io/axonio.py0000600013464101346420000000745313507452453013546 0ustar yohyoh# -*- coding: utf-8 -*- from neo.io.basefromrawio import BaseFromRaw from neo.rawio.axonrawio import AxonRawIO from neo.core import Block, Segment, AnalogSignal, Event import quantities as pq class AxonIO(AxonRawIO, BaseFromRaw): """ Class for reading data from pCLAMP and AxoScope files (.abf version 1 and 2), developed by Molecular device/Axon technologies. - abf = Axon binary file - atf is a text file based format from axon that could be read by AsciiIO (but this file is less efficient.) Here an important note from erikli@github for user who want to get the : With Axon ABF2 files, the information that you need to recapitulate the original stimulus waveform (both digital and analog) is contained in multiple places. - `AxonIO._axon_info['protocol']` -- things like number of samples in episode - `AxonIO.axon_info['section']['ADCSection']` | `AxonIO.axon_info['section']['DACSection']` -- things about the number of channels and channel properties - `AxonIO._axon_info['protocol']['nActiveDACChannel']` -- bitmask specifying which DACs are actually active - `AxonIO._axon_info['protocol']['nDigitalEnable']` -- bitmask specifying which set of Epoch timings should be used to specify the duration of digital outputs - `AxonIO._axon_info['dictEpochInfoPerDAC']` -- dict of dict. First index is DAC channel and second index is Epoch number (i.e. information about Epoch A in Channel 2 would be in `AxonIO._axon_info['dictEpochInfoPerDAC'][2][0]`) - `AxonIO._axon_info['EpochInfo']` -- list of dicts containing information about each Epoch's digital out pattern. Digital out is a bitmask with least significant bit corresponding to Digital Out 0 - `AxonIO._axon_info['listDACInfo']` -- information about DAC name, scale factor, holding level, etc - `AxonIO._t_starts` -- start time of each sweep in a unified time basis - `AxonIO._sampling_rate` The current AxonIO.read_protocol() method utilizes a subset of these. In particular I know it doesn't consider `nDigitalEnable`, `EpochInfo`, or `nActiveDACChannel` and it doesn't account for different types of Epochs offered by Clampex/pClamp other than discrete steps (such as ramp, pulse train, etc and encoded by `nEpochType` in the EpochInfoPerDAC section). I'm currently parsing a superset of the properties used by read_protocol() in my analysis scripts, but that code still doesn't parse the full information and isn't in a state where it could be committed and I can't currently prioritize putting together all the code that would parse the full set of data. The `AxonIO._axon_info['EpochInfo']` section doesn't currently exist. 
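A hedged usage sketch (``'File_axon_1.abf'`` is a placeholder filename, not a file distributed with this package)::

    >>> from neo.io import AxonIO
    >>> reader = AxonIO(filename='File_axon_1.abf')
    >>> block = reader.read_block()          # recorded signals
    >>> protocol = reader.read_protocol()    # ABF2 only: one Segment per episode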
""" _prefered_signal_group_mode = 'split-all' def __init__(self, filename): AxonRawIO.__init__(self, filename=filename) BaseFromRaw.__init__(self, filename) def read_protocol(self): """ Read the protocol waveform of the file, if present; function works with ABF2 only. Protocols can be reconstructed from the ABF1 header. Returns: list of segments (one for every episode) with list of analog signls (one for every DAC). """ sigs_by_segments, sig_names, sig_units = self.read_raw_protocol() segments = [] for seg_index, sigs in enumerate(sigs_by_segments): seg = Segment(index=seg_index) t_start = self._t_starts[seg_index] * pq.s for c, sig in enumerate(sigs): ana_sig = AnalogSignal(sig, sampling_rate=self._sampling_rate * pq.Hz, t_start=t_start, name=sig_names[c], units=sig_units[c]) seg.analogsignals.append(ana_sig) segments.append(seg) return segments neo-0.7.2/neo/io/basefromrawio.py0000600013464101346420000005354413507452453015113 0ustar yohyoh# -*- coding: utf-8 -*- """ BaseFromRaw ====== BaseFromRaw implement a bridge between the new neo.rawio API and the neo.io legacy that give neo.core object. The neo.rawio API is more restricted and limited and do not cover tricky cases with asymetrical tree of neo object. But if a format is done in neo.rawio the neo.io is done for free by inheritance of this class. """ # needed for python 3 compatibility from __future__ import print_function, division, absolute_import # from __future__ import unicode_literals is not compatible with numpy.dtype both py2 py3 import warnings import collections import logging import numpy as np from neo import logging_handler from neo.core import (AnalogSignal, Block, Epoch, Event, IrregularlySampledSignal, ChannelIndex, Segment, SpikeTrain, Unit) from neo.io.baseio import BaseIO import quantities as pq class BaseFromRaw(BaseIO): """ This implement generic reader on top of RawIO reader. Arguments depend on `mode` (dir or file) File case:: reader = BlackRockIO(filename='FileSpec2.3001.nev') Dir case:: reader = NeuralynxIO(dirname='Cheetah_v5.7.4/original_data') Other arguments are IO specific. """ is_readable = True is_writable = False supported_objects = [Block, Segment, AnalogSignal, SpikeTrain, Unit, ChannelIndex, Event, Epoch] readable_objects = [Block, Segment] writeable_objects = [] support_lazy = True name = 'BaseIO' description = '' extentions = [] mode = 'file' _prefered_signal_group_mode = 'split-all' # 'group-by-same-units' _prefered_units_group_mode = 'split-all' # 'all-in-one' def __init__(self, *args, **kargs): BaseIO.__init__(self, *args, **kargs) self.parse_header() def read_block(self, block_index=0, lazy=False, signal_group_mode=None, units_group_mode=None, load_waveforms=False): """ :param block_index: int default 0. In case of several block block_index can be specified. :param lazy: False by default. :param signal_group_mode: 'split-all' or 'group-by-same-units' (default depend IO): This control behavior for grouping channels in AnalogSignal. * 'split-all': each channel will give an AnalogSignal * 'group-by-same-units' all channel sharing the same quantity units ar grouped in a 2D AnalogSignal :param units_group_mode: 'split-all' or 'all-in-one'(default depend IO) This control behavior for grouping Unit in ChannelIndex: * 'split-all': each neo.Unit is assigned to a new neo.ChannelIndex * 'all-in-one': all neo.Unit are grouped in the same neo.ChannelIndex (global spike sorting for instance) :param load_waveforms: False by default. Control SpikeTrains.waveforms is None or not. 
""" if lazy: warnings.warn( "Lazy is deprecated and will be replaced by ProxyObject functionality.", DeprecationWarning) if signal_group_mode is None: signal_group_mode = self._prefered_signal_group_mode if units_group_mode is None: units_group_mode = self._prefered_units_group_mode # annotations bl_annotations = dict(self.raw_annotations['blocks'][block_index]) bl_annotations.pop('segments') bl_annotations = check_annotations(bl_annotations) bl = Block(**bl_annotations) # ChannelIndex are plit in 2 parts: # * some for AnalogSignals # * some for Units # ChannelIndex for AnalogSignals all_channels = self.header['signal_channels'] channel_indexes_list = self.get_group_channel_indexes() for channel_index in channel_indexes_list: for i, (ind_within, ind_abs) in self._make_signal_channel_subgroups( channel_index, signal_group_mode=signal_group_mode).items(): chidx_annotations = {} if signal_group_mode == "split-all": chidx_annotations = self.raw_annotations['signal_channels'][i] elif signal_group_mode == "group-by-same-units": for key in list(self.raw_annotations['signal_channels'][i].keys()): chidx_annotations[key] = [] for j in ind_abs: for key in list(self.raw_annotations['signal_channels'][i].keys()): chidx_annotations[key].append(self.raw_annotations[ 'signal_channels'][j][key]) if 'name' in list(chidx_annotations.keys()): chidx_annotations.pop('name') chidx_annotations = check_annotations(chidx_annotations) ch_names = all_channels[ind_abs]['name'].astype('S') neo_channel_index = ChannelIndex(index=ind_within, channel_names=ch_names, channel_ids=all_channels[ind_abs]['id'], name='Channel group {}'.format(i), **chidx_annotations) bl.channel_indexes.append(neo_channel_index) # ChannelIndex and Unit # 2 case are possible in neo defifferent IO have choosen one or other: # * All units are grouped in the same ChannelIndex and indexes are all channels: # 'all-in-one' # * Each units is assigned to one ChannelIndex: 'split-all' # This is kept for compatibility unit_channels = self.header['unit_channels'] if units_group_mode == 'all-in-one': if unit_channels.size > 0: channel_index = ChannelIndex(index=np.array([], dtype='i'), name='ChannelIndex for all Unit') bl.channel_indexes.append(channel_index) for c in range(unit_channels.size): unit_annotations = self.raw_annotations['unit_channels'][c] unit_annotations = check_annotations(unit_annotations) unit = Unit(**unit_annotations) channel_index.units.append(unit) elif units_group_mode == 'split-all': for c in range(len(unit_channels)): unit_annotations = self.raw_annotations['unit_channels'][c] unit_annotations = check_annotations(unit_annotations) unit = Unit(**unit_annotations) channel_index = ChannelIndex(index=np.array([], dtype='i'), name='ChannelIndex for Unit') channel_index.units.append(unit) bl.channel_indexes.append(channel_index) # Read all segments for seg_index in range(self.segment_count(block_index)): seg = self.read_segment(block_index=block_index, seg_index=seg_index, lazy=lazy, signal_group_mode=signal_group_mode, load_waveforms=load_waveforms) bl.segments.append(seg) # create link to other containers ChannelIndex and Units for seg in bl.segments: for c, anasig in enumerate(seg.analogsignals): bl.channel_indexes[c].analogsignals.append(anasig) nsig = len(seg.analogsignals) for c, sptr in enumerate(seg.spiketrains): if units_group_mode == 'all-in-one': bl.channel_indexes[nsig].units[c].spiketrains.append(sptr) elif units_group_mode == 'split-all': bl.channel_indexes[nsig + c].units[0].spiketrains.append(sptr) 
bl.create_many_to_one_relationship() return bl def read_segment(self, block_index=0, seg_index=0, lazy=False, signal_group_mode=None, load_waveforms=False, time_slice=None): """ :param block_index: int default 0. In case of several block block_index can be specified. :param seg_index: int default 0. Index of segment. :param lazy: False by default. :param signal_group_mode: 'split-all' or 'group-by-same-units' (default depend IO): This control behavior for grouping channels in AnalogSignal. * 'split-all': each channel will give an AnalogSignal * 'group-by-same-units' all channel sharing the same quantity units ar grouped in a 2D AnalogSignal :param load_waveforms: False by default. Control SpikeTrains.waveforms is None or not. :param time_slice: None by default means no limit. A time slice is (t_start, t_stop) both are quantities. All object AnalogSignal, SpikeTrain, Event, Epoch will load only in the slice. """ if lazy: warnings.warn( "Lazy is deprecated and will be replaced by ProxyObject functionality.", DeprecationWarning) if signal_group_mode is None: signal_group_mode = self._prefered_signal_group_mode # annotations seg_annotations = dict(self.raw_annotations['blocks'][block_index]['segments'][seg_index]) for k in ('signals', 'units', 'events'): seg_annotations.pop(k) seg_annotations = check_annotations(seg_annotations) seg = Segment(index=seg_index, **seg_annotations) seg_t_start = self.segment_t_start(block_index, seg_index) * pq.s seg_t_stop = self.segment_t_stop(block_index, seg_index) * pq.s # get only a slice of objects limited by t_start and t_stop time_slice = (t_start, t_stop) if time_slice is None: t_start, t_stop = None, None t_start_, t_stop_ = None, None else: assert not lazy, 'time slice only work when not lazy' t_start, t_stop = time_slice t_start = ensure_second(t_start) t_stop = ensure_second(t_stop) # checks limits if t_start < seg_t_start: t_start = seg_t_start if t_stop > seg_t_stop: t_stop = seg_t_stop # in float format in second (for rawio clip) t_start_, t_stop_ = float(t_start.magnitude), float(t_stop.magnitude) # new spiketrain limits seg_t_start = t_start seg_t_stop = t_stop # AnalogSignal signal_channels = self.header['signal_channels'] if signal_channels.size > 0: channel_indexes_list = self.get_group_channel_indexes() for channel_indexes in channel_indexes_list: sr = self.get_signal_sampling_rate(channel_indexes) * pq.Hz sig_t_start = self.get_signal_t_start( block_index, seg_index, channel_indexes) * pq.s sig_size = self.get_signal_size(block_index=block_index, seg_index=seg_index, channel_indexes=channel_indexes) if not lazy: # in case of time_slice get: get i_start, i_stop, new sig_t_start if t_stop is not None: i_stop = int((t_stop - sig_t_start).magnitude * sr.magnitude) if i_stop > sig_size: i_stop = sig_size else: i_stop = None if t_start is not None: i_start = int((t_start - sig_t_start).magnitude * sr.magnitude) if i_start < 0: i_start = 0 sig_t_start += (i_start / sr).rescale('s') else: i_start = None raw_signal = self.get_analogsignal_chunk(block_index=block_index, seg_index=seg_index, i_start=i_start, i_stop=i_stop, channel_indexes=channel_indexes) float_signal = self.rescale_signal_raw_to_float( raw_signal, dtype='float32', channel_indexes=channel_indexes) for i, (ind_within, ind_abs) in self._make_signal_channel_subgroups( channel_indexes, signal_group_mode=signal_group_mode).items(): units = np.unique(signal_channels[ind_abs]['units']) assert len(units) == 1 units = ensure_signal_units(units[0]) if signal_group_mode == 'split-all': # in that 
case annotations by channel is OK chan_index = ind_abs[0] d = self.raw_annotations['blocks'][block_index]['segments'][seg_index][ 'signals'][chan_index] annotations = dict(d) if 'name' not in annotations: annotations['name'] = signal_channels['name'][chan_index] else: # when channel are grouped by same unit # annotations have channel_names and channel_ids array # this will be moved in array annotations soon annotations = {} annotations['name'] = 'Channel bundle ({}) '.format( ','.join(signal_channels[ind_abs]['name'])) annotations['channel_names'] = signal_channels[ind_abs]['name'] annotations['channel_ids'] = signal_channels[ind_abs]['id'] annotations = check_annotations(annotations) if lazy: anasig = AnalogSignal(np.array([]), units=units, copy=False, sampling_rate=sr, t_start=sig_t_start, **annotations) anasig.lazy_shape = (sig_size, len(ind_within)) else: anasig = AnalogSignal(float_signal[:, ind_within], units=units, copy=False, sampling_rate=sr, t_start=sig_t_start, **annotations) seg.analogsignals.append(anasig) # SpikeTrain and waveforms (optional) unit_channels = self.header['unit_channels'] for unit_index in range(len(unit_channels)): if not lazy and load_waveforms: raw_waveforms = self.get_spike_raw_waveforms(block_index=block_index, seg_index=seg_index, unit_index=unit_index, t_start=t_start_, t_stop=t_stop_) float_waveforms = self.rescale_waveforms_to_float(raw_waveforms, dtype='float32', unit_index=unit_index) wf_units = ensure_signal_units(unit_channels['wf_units'][unit_index]) waveforms = pq.Quantity(float_waveforms, units=wf_units, dtype='float32', copy=False) wf_sampling_rate = unit_channels['wf_sampling_rate'][unit_index] wf_left_sweep = unit_channels['wf_left_sweep'][unit_index] if wf_left_sweep > 0: wf_left_sweep = float(wf_left_sweep) / wf_sampling_rate * pq.s else: wf_left_sweep = None wf_sampling_rate = wf_sampling_rate * pq.Hz else: waveforms = None wf_left_sweep = None wf_sampling_rate = None d = self.raw_annotations['blocks'][block_index]['segments'][seg_index]['units'][ unit_index] annotations = dict(d) if 'name' not in annotations: annotations['name'] = unit_channels['name'][c] annotations = check_annotations(annotations) if not lazy: spike_timestamp = self.get_spike_timestamps(block_index=block_index, seg_index=seg_index, unit_index=unit_index, t_start=t_start_, t_stop=t_stop_) spike_times = self.rescale_spike_timestamp(spike_timestamp, 'float64') sptr = SpikeTrain(spike_times, units='s', copy=False, t_start=seg_t_start, t_stop=seg_t_stop, waveforms=waveforms, left_sweep=wf_left_sweep, sampling_rate=wf_sampling_rate, **annotations) else: nb = self.spike_count(block_index=block_index, seg_index=seg_index, unit_index=unit_index) sptr = SpikeTrain(np.array([]), units='s', copy=False, t_start=seg_t_start, t_stop=seg_t_stop, **annotations) sptr.lazy_shape = (nb,) seg.spiketrains.append(sptr) # Events/Epoch event_channels = self.header['event_channels'] for chan_ind in range(len(event_channels)): if not lazy: ev_timestamp, ev_raw_durations, ev_labels = self.get_event_timestamps( block_index=block_index, seg_index=seg_index, event_channel_index=chan_ind, t_start=t_start_, t_stop=t_stop_) ev_times = self.rescale_event_timestamp(ev_timestamp, 'float64') * pq.s if ev_raw_durations is None: ev_durations = None else: ev_durations = self.rescale_epoch_duration(ev_raw_durations, 'float64') * pq.s ev_labels = ev_labels.astype('S') else: nb = self.event_count(block_index=block_index, seg_index=seg_index, event_channel_index=chan_ind) lazy_shape = (nb,) ev_times = np.array([]) 
* pq.s ev_labels = np.array([], dtype='S') ev_durations = np.array([]) * pq.s d = self.raw_annotations['blocks'][block_index]['segments'][seg_index]['events'][ chan_ind] annotations = dict(d) if 'name' not in annotations: annotations['name'] = event_channels['name'][chan_ind] annotations = check_annotations(annotations) if event_channels['type'][chan_ind] == b'event': e = Event(times=ev_times, labels=ev_labels, units='s', copy=False, **annotations) e.segment = seg seg.events.append(e) elif event_channels['type'][chan_ind] == b'epoch': e = Epoch(times=ev_times, durations=ev_durations, labels=ev_labels, units='s', copy=False, **annotations) e.segment = seg seg.epochs.append(e) if lazy: e.lazy_shape = lazy_shape seg.create_many_to_one_relationship() return seg def _make_signal_channel_subgroups(self, channel_indexes, signal_group_mode='group-by-same-units'): """ For some RawIO channel are already splitted in groups. But in any cases, channel need to be splitted again in sub groups because they do not have the same units. They can also be splitted one by one to match previous behavior for some IOs in older version of neo (<=0.5). This method aggregate signal channels with same units or split them all. """ all_channels = self.header['signal_channels'] if channel_indexes is None: channel_indexes = np.arange(all_channels.size, dtype=int) channels = all_channels[channel_indexes] groups = collections.OrderedDict() if signal_group_mode == 'group-by-same-units': all_units = np.unique(channels['units']) for i, unit in enumerate(all_units): ind_within, = np.nonzero(channels['units'] == unit) ind_abs = channel_indexes[ind_within] groups[i] = (ind_within, ind_abs) elif signal_group_mode == 'split-all': for i, chan_index in enumerate(channel_indexes): ind_within = [i] ind_abs = channel_indexes[ind_within] groups[i] = (ind_within, ind_abs) else: raise (NotImplementedError) return groups unit_convert = {'Volts': 'V', 'volts': 'V', 'Volt': 'V', 'volt': 'V', ' Volt': 'V', 'microV': 'V'} def ensure_signal_units(units): # test units units = units.replace(' ', '') if units in unit_convert: units = unit_convert[units] try: units = pq.Quantity(1, units) except: logging.warning('Units "{}" can not be converted to a quantity. Using dimensionless ' 'instead'.format(units)) units = '' return units def check_annotations(annotations): # force type to str for some keys # imposed for tests for k in ('name', 'description', 'file_origin'): if k in annotations: annotations[k] = str(annotations[k]) if 'coordinates' in annotations: # some rawio expose some coordinates in annotations but is not standardized # (x, y, z) or polar, at the moment it is more resonable to remove them annotations.pop('coordinates') return annotations def ensure_second(v): if isinstance(v, float): return v * pq.s elif isinstance(v, pq.Quantity): return v.rescale('s') elif isinstance(v, int): return float(v) * pq.s neo-0.7.2/neo/io/baseio.py0000600013464101346420000001646413507452453013515 0ustar yohyoh# -*- coding: utf-8 -*- """ baseio ====== Classes ------- BaseIO - abstract class which should be overridden, managing how a file will load/write its data If you want a model for developing a new IO start from exampleIO. 
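A minimal, hypothetical skeleton of such a subclass (``MyFormatIO``, its extension and the body are invented for illustration only; exampleio.py remains the maintained template)::

    from neo.core import Block, Segment
    from neo.io.baseio import BaseIO

    class MyFormatIO(BaseIO):
        is_readable = True
        is_writable = False
        supported_objects = [Block, Segment]
        readable_objects = [Block]
        name = 'MyFormat'
        extensions = ['myf']
        mode = 'file'

        def read_block(self, lazy=False, **kargs):
            # parse self.filename here and fill the Block with Segments
            block = Block(file_origin=self.filename)
            block.create_many_to_one_relationship()
            return block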
""" import collections import logging from neo import logging_handler from neo.core import (AnalogSignal, Block, Epoch, Event, IrregularlySampledSignal, ChannelIndex, Segment, SpikeTrain, Unit) read_error = "This type is not supported by this file format for reading" write_error = "This type is not supported by this file format for writing" class BaseIO(object): """ Generic class to handle all the file read/write methods for the key objects of the core class. This template is file-reading/writing oriented but it can also handle data read from/written to a database such as TDT sytem tanks or SQLite files. This is an abstract class that will be subclassed for each format The key methods of the class are: - ``read()`` - Read the whole object structure, return a list of Block objects - ``read_block(lazy=True, **params)`` - Read Block object from file with some parameters - ``read_segment(lazy=True, **params)`` - Read Segment object from file with some parameters - ``read_spiketrainlist(lazy=True, **params)`` - Read SpikeTrainList object from file with some parameters - ``write()`` - Write the whole object structure - ``write_block(**params)`` - Write Block object to file with some parameters - ``write_segment(**params)`` - Write Segment object to file with some parameters - ``write_spiketrainlist(**params)`` - Write SpikeTrainList object to file with some parameters The class can also implement these methods: - ``read_XXX(lazy=True, **params)`` - ``write_XXX(**params)`` where XXX could be one of the objects supported by the IO Each class is able to declare what can be accessed or written directly discribed by **readable_objects** and **readable_objects**. The object types can be one of the classes defined in neo.core (Block, Segment, AnalogSignal, ...) Each class does not necessary support all the whole neo hierarchy but part of it. This is described with **supported_objects**. All IOs must support at least Block with a read_block() ** start a new IO ** If you want to implement your own file format, you should create a class that will inherit from this BaseFile class and implement the previous methods. See ExampleIO in exampleio.py """ is_readable = False is_writable = False supported_objects = [] readable_objects = [] writeable_objects = [] support_lazy = False read_params = {} write_params = {} name = 'BaseIO' description = '' extensions = [] mode = 'file' # or 'fake' or 'dir' or 'database' def __init__(self, filename=None, **kargs): self.filename = filename # create a logger for the IO class fullname = self.__class__.__module__ + '.' + self.__class__.__name__ self.logger = logging.getLogger(fullname) # create a logger for 'neo' and add a handler to it if it doesn't # have one already. 
# (it will also not add one if the root logger has a handler) corename = self.__class__.__module__.split('.')[0] corelogger = logging.getLogger(corename) rootlogger = logging.getLogger() if not corelogger.handlers and not rootlogger.handlers: corelogger.addHandler(logging_handler) ######## General read/write methods ####################### def read(self, lazy=False, **kargs): if lazy: assert self.support_lazy, 'This IO do not support lazy loading' if Block in self.readable_objects: if (hasattr(self, 'read_all_blocks') and callable(getattr(self, 'read_all_blocks'))): return self.read_all_blocks(lazy=lazy, **kargs) return [self.read_block(lazy=lazy, **kargs)] elif Segment in self.readable_objects: bl = Block(name='One segment only') seg = self.read_segment(lazy=lazy, **kargs) bl.segments.append(seg) bl.create_many_to_one_relationship() return [bl] else: raise NotImplementedError def write(self, bl, **kargs): if Block in self.writeable_objects: if isinstance(bl, collections.Sequence): assert hasattr(self, 'write_all_blocks'), \ '%s does not offer to store a sequence of blocks' % \ self.__class__.__name__ self.write_all_blocks(bl, **kargs) else: self.write_block(bl, **kargs) elif Segment in self.writeable_objects: assert len(bl.segments) == 1, \ '%s is based on segment so if you try to write a block it ' + \ 'must contain only one Segment' % self.__class__.__name__ self.write_segment(bl.segments[0], **kargs) else: raise NotImplementedError ######## All individual read methods ####################### def read_block(self, **kargs): assert (Block in self.readable_objects), read_error def read_segment(self, **kargs): assert (Segment in self.readable_objects), read_error def read_unit(self, **kargs): assert (Unit in self.readable_objects), read_error def read_spiketrain(self, **kargs): assert (SpikeTrain in self.readable_objects), read_error def read_analogsignal(self, **kargs): assert (AnalogSignal in self.readable_objects), read_error def read_irregularlysampledsignal(self, **kargs): assert (IrregularlySampledSignal in self.readable_objects), read_error def read_channelindex(self, **kargs): assert (ChannelIndex in self.readable_objects), read_error def read_event(self, **kargs): assert (Event in self.readable_objects), read_error def read_epoch(self, **kargs): assert (Epoch in self.readable_objects), read_error ######## All individual write methods ####################### def write_block(self, bl, **kargs): assert (Block in self.writeable_objects), write_error def write_segment(self, seg, **kargs): assert (Segment in self.writeable_objects), write_error def write_unit(self, ut, **kargs): assert (Unit in self.writeable_objects), write_error def write_spiketrain(self, sptr, **kargs): assert (SpikeTrain in self.writeable_objects), write_error def write_analogsignal(self, anasig, **kargs): assert (AnalogSignal in self.writeable_objects), write_error def write_irregularlysampledsignal(self, irsig, **kargs): assert (IrregularlySampledSignal in self.writeable_objects), write_error def write_channelindex(self, chx, **kargs): assert (ChannelIndex in self.writeable_objects), write_error def write_event(self, ev, **kargs): assert (Event in self.writeable_objects), write_error def write_epoch(self, ep, **kargs): assert (Epoch in self.writeable_objects), write_error neo-0.7.2/neo/io/bci2000io.py0000600013464101346420000000065613507452453013636 0ustar yohyoh# -*- coding: utf-8 -*- from neo.io.basefromrawio import BaseFromRaw from neo.rawio.bci2000rawio import BCI2000RawIO class BCI2000IO(BCI2000RawIO, 
BaseFromRaw): """Class for reading data from a BCI2000 .dat file, either version 1.0 or 1.1""" _prefered_signal_group_mode = 'split-all' def __init__(self, filename): BCI2000RawIO.__init__(self, filename=filename) BaseFromRaw.__init__(self, filename) neo-0.7.2/neo/io/blackrockio.py0000600013464101346420000000576313507452453014536 0ustar yohyoh# -*- coding: utf-8 -*- from neo.io.basefromrawio import BaseFromRaw from neo.rawio.blackrockrawio import BlackrockRawIO def _move_channel_indexes_and_analogsignals(from_block, to_block): if len(from_block.segments) != len(to_block.segments): raise ValueError('Can not assign segments between block 1 and 2. Different number of ' 'segments present.') for seg_id in range(len(from_block.segments)): for ana in from_block.segments[seg_id].analogsignals: # redirect links from data object to container objects ana.segment = to_block.segments[seg_id] ana.channel_index.block = to_block # add links from container objects to analogsignal ana.segment.analogsignals.append(ana) # channel index was already relinked for another segment if ana.channel_index not in to_block.channel_indexes: to_block.channel_indexes.append(ana.channel_index) # remove (now) duplicated units from channel_index, remove irregular signals ana.channel_index.units = [] ana.channel_index.irregularlysampledsignals = [] class BlackrockIO_single_nsx(BlackrockRawIO, BaseFromRaw): """ Supplementary class for reading BlackRock data using only a single nsx file. """ name = 'Blackrock IO for single nsx' description = "This IO reads a pair of corresponding nev and nsX files of the Blackrock " \ "" + "(Cerebus) recording system." _prefered_signal_group_mode = 'split-all' def __init__(self, filename, nsx_to_load=None, **kargs): BlackrockRawIO.__init__(self, filename=filename, nsx_to_load=nsx_to_load, **kargs) BaseFromRaw.__init__(self, filename) class BlackrockIO(BlackrockIO_single_nsx): name = 'Blackrock IO' description = "This IO reads .nev/.nsX files of the Blackrock (Cerebus) recording system." def __init__(self, filename, nsx_to_load='all', **kargs): BlackrockIO_single_nsx.__init__(self, filename) if nsx_to_load == 'all': self._selected_nsx = self._avail_nsx else: self._selected_nsx = [nsx_to_load] self._nsx_ios = [] for nsx in self._selected_nsx: self._nsx_ios.append(BlackrockIO_single_nsx(filename, nsx_to_load=nsx, **kargs)) def read_block(self, **kargs): bl = self._nsx_ios[0].read_block(**kargs) for nsx_ios in self._nsx_ios[1:]: nsx_block = nsx_ios.read_block(**kargs) _move_channel_indexes_and_analogsignals(nsx_block, bl) del nsx_block return bl def read_segment(self, **kargs): seg = self._nsx_ios[0].read_segment(**kargs) for nsx_ios in self._nsx_ios[1:]: nsx_seg = nsx_ios.read_segment(**kargs) seg.analogsignals.extend(nsx_seg.analogsignals) for ana in nsx_seg.analogsignals: ana.segment = seg ana.channel_index = None del nsx_seg return seg neo-0.7.2/neo/io/blackrockio_v4.py0000600013464101346420000031500613507452453015141 0ustar yohyoh# -*- coding: utf-8 -*- """ Module for reading data from files in the Blackrock format. This module is an older implementation with old neo.io API. A new class Blackrock compunded by BlackrockRawIO and BaseFromIO superseed this one. This work is based on: * Chris Rodgers - first version * Michael Denker, Lyuba Zehl - second version * Samuel Garcia - third version * Lyuba Zehl, Michael Denker - fourth version This IO supports reading only. 
This IO is able to read: * the nev file which contains spikes * ns1, ns2, .., ns6 files that contain signals at different sampling rates This IO can handle the following Blackrock file specifications: * 2.1 * 2.2 * 2.3 The neural data channels are 1 - 128. The analog inputs are 129 - 144. (129 - 137 AC coupled, 138 - 144 DC coupled) spike- and event-data; 30000 Hz "ns1": "analog data: 500 Hz", "ns2": "analog data: 1000 Hz", "ns3": "analog data: 2000 Hz", "ns4": "analog data: 10000 Hz", "ns5": "analog data: 30000 Hz", "ns6": "analog data: 30000 Hz (no digital filter)" TODO: * videosync events (file spec 2.3) * tracking events (file spec 2.3) * buttontrigger events (file spec 2.3) * config events (file spec 2.3) * check left sweep settings of Blackrock * check nsx offsets (file spec 2.1) * add info of nev ext header (NSASEXEX) to non-neural events (file spec 2.1 and 2.2) * read sif file information * read ccf file information * fix reading of periodic sampling events (non-neural event type) (file spec 2.1 and 2.2) """ from __future__ import division import datetime import os import re import numpy as np import quantities as pq import neo from neo.io.baseio import BaseIO from neo.core import (Block, Segment, SpikeTrain, Unit, Event, ChannelIndex, AnalogSignal) if __name__ == '__main__': pass class BlackrockIO(BaseIO): """ Class for reading data in from a file set recorded by the Blackrock (Cerebus) recording system. Upon initialization, the class is linked to the available set of Blackrock files. Data can be read as a neo Block or neo Segment object using the read_block or read_segment function, respectively. Note: This routine will handle files according to specification 2.1, 2.2, and 2.3. Recording pauses that may occur in file specifications 2.2 and 2.3 are automatically extracted and the data set is split into different segments. Inherits from: neo.io.BaseIO The Blackrock data format consists not of a single file, but a set of different files. This constructor associates itself with a set of files that constitute a common data set. By default, all files belonging to the file set have the same base name, but different extensions. However, by using the override parameters, individual filenames can be set. Args: filename (string): File name (without extension) of the set of Blackrock files to associate with. Any .nsX or .nev, .sif, or .ccf extensions are ignored when parsing this parameter. nsx_override (string): File name of the .nsX files (without extension). If None, filename is used. Default: None. nev_override (string): File name of the .nev file (without extension). If None, filename is used. Default: None. sif_override (string): File name of the .sif file (without extension). If None, filename is used. Default: None. ccf_override (string): File name of the .ccf file (without extension). If None, filename is used. Default: None. verbose (boolean): If True, the class will output additional diagnostic information on stdout. 
Default: False Returns: - Examples: >>> a = BlackrockIO('myfile') Loads a set of file consisting of files myfile.ns1, ..., myfile.ns6, and myfile.nev >>> b = BlackrockIO('myfile', nev_override='sorted') Loads the analog data from the set of files myfile.ns1, ..., myfile.ns6, but reads spike/event data from sorted.nev """ # Class variables demonstrating capabilities of this IO is_readable = True is_writable = False # This IO can only manipulate continuous data, spikes, and events supported_objects = [ Block, Segment, Event, AnalogSignal, SpikeTrain, Unit, ChannelIndex] readable_objects = [Block, Segment] writeable_objects = [] has_header = False is_streameable = False read_params = { neo.Block: [ ('nsx_to_load', { 'value': 'none', 'label': "List of nsx files (ids, int) to read."}), ('n_starts', { 'value': None, 'label': "List of n_start points (Quantity) to create " "segments from."}), ('n_stops', { 'value': None, 'label': "List of n_stop points (Quantity) to create " "segments from."}), ('channels', { 'value': 'none', 'label': "List of channels (ids, int) to load data from."}), ('units', { 'value': 'none', 'label': "Dictionary for units (values, list of int) to load " "for each channel (key, int)."}), ('load_waveforms', { 'value': False, 'label': "States if waveforms should be loaded and attached " "to spiketrain"}), ('load_events', { 'value': False, 'label': "States if events should be loaded."})], neo.Segment: [ ('n_start', { 'label': "Start time point (Quantity) for segment"}), ('n_stop', { 'label': "Stop time point (Quantity) for segment"}), ('nsx_to_load', { 'value': 'none', 'label': "List of nsx files (ids, int) to read."}), ('channels', { 'value': 'none', 'label': "List of channels (ids, int) to load data from."}), ('units', { 'value': 'none', 'label': "Dictionary for units (values, list of int) to load " "for each channel (key, int)."}), ('load_waveforms', { 'value': False, 'label': "States if waveforms should be loaded and attached " "to spiketrain"}), ('load_events', { 'value': False, 'label': "States if events should be loaded."})]} write_params = {} name = 'Blackrock IO' description = "This IO reads .nev/.nsX file of the Blackrock " + \ "(Cerebus) recordings system." # The possible file extensions of the Cerebus system and their content: # ns1: contains analog data; sampled at 500 Hz (+ digital filters) # ns2: contains analog data; sampled at 1000 Hz (+ digital filters) # ns3: contains analog data; sampled at 2000 Hz (+ digital filters) # ns4: contains analog data; sampled at 10000 Hz (+ digital filters) # ns5: contains analog data; sampled at 30000 Hz (+ digital filters) # ns6: contains analog data; sampled at 30000 Hz (no digital filters) # nev: contains spike- and event-data; sampled at 30000 Hz # sif: contains institution and patient info (XML) # ccf: contains Cerebus configurations extensions = ['ns' + str(_) for _ in range(1, 7)] extensions.extend(['nev', 'sif', 'ccf']) mode = 'file' def __init__(self, filename, nsx_override=None, nev_override=None, sif_override=None, ccf_override=None, verbose=False): """ Initialize the BlackrockIO class. 
""" BaseIO.__init__(self) # Used to avoid unnecessary repetition of verbose messages self.__verbose_messages = [] # remove extension from base _filenames for ext in self.extensions: self.filename = re.sub( os.path.extsep + ext + '$', '', filename) # remove extensions from overrides self._filenames = {} if nsx_override: self._filenames['nsx'] = re.sub( os.path.extsep + r'ns[1,2,3,4,5,6]$', '', nsx_override) else: self._filenames['nsx'] = self.filename if nev_override: self._filenames['nev'] = re.sub( os.path.extsep + r'nev$', '', nev_override) else: self._filenames['nev'] = self.filename if sif_override: self._filenames['sif'] = re.sub( os.path.extsep + r'sif$', '', sif_override) else: self._filenames['sif'] = self.filename if ccf_override: self._filenames['ccf'] = re.sub( os.path.extsep + r'ccf$', '', ccf_override) else: self._filenames['ccf'] = self.filename # check which files are available self._avail_files = dict.fromkeys(self.extensions, False) self._avail_nsx = [] for ext in self.extensions: if ext.startswith('ns'): file2check = ''.join( [self._filenames['nsx'], os.path.extsep, ext]) else: file2check = ''.join( [self._filenames[ext], os.path.extsep, ext]) if os.path.exists(file2check): self._print_verbose("Found " + file2check + ".") self._avail_files[ext] = True if ext.startswith('ns'): self._avail_nsx.append(int(ext[-1])) # check if there are any files present if not any(list(self._avail_files.values())): raise IOError( 'No Blackrock files present at {}'.format(filename)) # check if manually specified files were found exts = ['nsx', 'nev', 'sif', 'ccf'] ext_overrides = [nsx_override, nev_override, sif_override, ccf_override] for ext, ext_override in zip(exts, ext_overrides): if ext_override is not None and self._avail_files[ext] is False: raise ValueError('Specified {} file {} could not be ' 'found.'.format(ext, ext_override)) # These dictionaries are used internally to map the file specification # revision of the nsx and nev files to one of the reading routines self.__nsx_header_reader = { '2.1': self.__read_nsx_header_variant_a, '2.2': self.__read_nsx_header_variant_b, '2.3': self.__read_nsx_header_variant_b} self.__nsx_dataheader_reader = { '2.1': self.__read_nsx_dataheader_variant_a, '2.2': self.__read_nsx_dataheader_variant_b, '2.3': self.__read_nsx_dataheader_variant_b} self.__nsx_data_reader = { '2.1': self.__read_nsx_data_variant_a, '2.2': self.__read_nsx_data_variant_b, '2.3': self.__read_nsx_data_variant_b} self.__nev_header_reader = { '2.1': self.__read_nev_header_variant_a, '2.2': self.__read_nev_header_variant_b, '2.3': self.__read_nev_header_variant_c} self.__nev_data_reader = { '2.1': self.__read_nev_data_variant_a, '2.2': self.__read_nev_data_variant_a, '2.3': self.__read_nev_data_variant_b} self.__nsx_params = { '2.1': self.__get_nsx_param_variant_a, '2.2': self.__get_nsx_param_variant_b, '2.3': self.__get_nsx_param_variant_b} self.__nsx_databl_param = { '2.1': self.__get_nsx_databl_param_variant_a, '2.2': self.__get_nsx_databl_param_variant_b, '2.3': self.__get_nsx_databl_param_variant_b} self.__waveform_size = { '2.1': self.__get_waveform_size_variant_a, '2.2': self.__get_waveform_size_variant_a, '2.3': self.__get_waveform_size_variant_b} self.__channel_labels = { '2.1': self.__get_channel_labels_variant_a, '2.2': self.__get_channel_labels_variant_b, '2.3': self.__get_channel_labels_variant_b} self.__nsx_rec_times = { '2.1': self.__get_nsx_rec_times_variant_a, '2.2': self.__get_nsx_rec_times_variant_b, '2.3': self.__get_nsx_rec_times_variant_b} 
self.__nonneural_evtypes = { '2.1': self.__get_nonneural_evtypes_variant_a, '2.2': self.__get_nonneural_evtypes_variant_a, '2.3': self.__get_nonneural_evtypes_variant_b} # Load file spec and headers of available nev file if self._avail_files['nev']: # read nev file specification self.__nev_spec = self.__extract_nev_file_spec() self._print_verbose('Specification Version ' + self.__nev_spec) # read nev headers self.__nev_basic_header, self.__nev_ext_header = \ self.__nev_header_reader[self.__nev_spec]() # Load file spec and headers of available nsx files self.__nsx_spec = {} self.__nsx_basic_header = {} self.__nsx_ext_header = {} self.__nsx_data_header = {} for nsx_nb in self._avail_nsx: # read nsx file specification self.__nsx_spec[nsx_nb] = self.__extract_nsx_file_spec(nsx_nb) # read nsx headers self.__nsx_basic_header[nsx_nb], self.__nsx_ext_header[nsx_nb] = \ self.__nsx_header_reader[self.__nsx_spec[nsx_nb]](nsx_nb) # Read nsx data header(s) for nsx self.__nsx_data_header[nsx_nb] = self.__nsx_dataheader_reader[ self.__nsx_spec[nsx_nb]](nsx_nb) def _print_verbose(self, text): """ Print a verbose diagnostic message (string). """ if self.__verbose_messages: if text not in self.__verbose_messages: self.__verbose_messages.append(text) print(str(self.__class__.__name__) + ': ' + text) def __extract_nsx_file_spec(self, nsx_nb): """ Extract file specification from an .nsx file. """ filename = '.'.join([self._filenames['nsx'], 'ns%i' % nsx_nb]) # Header structure of files specification 2.2 and higher. For files 2.1 # and lower, the entries ver_major and ver_minor are not supported. dt0 = [ ('file_id', 'S8'), ('ver_major', 'uint8'), ('ver_minor', 'uint8')] nsx_file_id = np.fromfile(filename, count=1, dtype=dt0)[0] if nsx_file_id['file_id'].decode() == 'NEURALSG': spec = '2.1' elif nsx_file_id['file_id'].decode() == 'NEURALCD': spec = '{0}.{1}'.format( nsx_file_id['ver_major'], nsx_file_id['ver_minor']) else: raise IOError('Unsupported NSX file type.') return spec def __extract_nev_file_spec(self): """ Extract file specification from an .nev file """ filename = '.'.join([self._filenames['nev'], 'nev']) # Header structure of files specification 2.2 and higher. For files 2.1 # and lower, the entries ver_major and ver_minor are not supported. dt0 = [ ('file_id', 'S8'), ('ver_major', 'uint8'), ('ver_minor', 'uint8')] nev_file_id = np.fromfile(filename, count=1, dtype=dt0)[0] if nev_file_id['file_id'].decode() == 'NEURALEV': spec = '{0}.{1}'.format( nev_file_id['ver_major'], nev_file_id['ver_minor']) else: raise IOError('NEV file type {0} is not supported'.format( nev_file_id['file_id'])) return spec def __read_nsx_header_variant_a(self, nsx_nb): """ Extract nsx header information from a 2.1 .nsx file """ filename = '.'.join([self._filenames['nsx'], 'ns%i' % nsx_nb]) # basic header (file_id: NEURALCD) dt0 = [ ('file_id', 'S8'), # label of sampling groun (e.g. 
"1kS/s" or "LFP Low") ('label', 'S16'), # number of 1/30000 seconds between data points # (e.g., if sampling rate "1 kS/s", period equals "30") ('period', 'uint32'), ('channel_count', 'uint32')] nsx_basic_header = np.fromfile(filename, count=1, dtype=dt0)[0] # "extended" header (last field of file_id: NEURALCD) # (to facilitate compatibility with higher file specs) offset_dt0 = np.dtype(dt0).itemsize shape = nsx_basic_header['channel_count'] # originally called channel_id in Blackrock user manual # (to facilitate compatibility with higher file specs) dt1 = [('electrode_id', 'uint32')] nsx_ext_header = np.memmap( filename, mode='r', shape=shape, offset=offset_dt0, dtype=dt1) return nsx_basic_header, nsx_ext_header def __read_nsx_header_variant_b(self, nsx_nb): """ Extract nsx header information from a 2.2 or 2.3 .nsx file """ filename = '.'.join([self._filenames['nsx'], 'ns%i' % nsx_nb]) # basic header (file_id: NEURALCD) dt0 = [ ('file_id', 'S8'), # file specification split into major and minor version number ('ver_major', 'uint8'), ('ver_minor', 'uint8'), # bytes of basic & extended header ('bytes_in_headers', 'uint32'), # label of the sampling group (e.g., "1 kS/s" or "LFP low") ('label', 'S16'), ('comment', 'S256'), ('period', 'uint32'), ('timestamp_resolution', 'uint32'), # time origin: 2byte uint16 values for ... ('year', 'uint16'), ('month', 'uint16'), ('weekday', 'uint16'), ('day', 'uint16'), ('hour', 'uint16'), ('minute', 'uint16'), ('second', 'uint16'), ('millisecond', 'uint16'), # number of channel_count match number of extended headers ('channel_count', 'uint32')] nsx_basic_header = np.fromfile(filename, count=1, dtype=dt0)[0] # extended header (type: CC) offset_dt0 = np.dtype(dt0).itemsize shape = nsx_basic_header['channel_count'] dt1 = [ ('type', 'S2'), ('electrode_id', 'uint16'), ('electrode_label', 'S16'), # used front-end amplifier bank (e.g., A, B, C, D) ('physical_connector', 'uint8'), # used connector pin (e.g., 1-37 on bank A, B, C or D) ('connector_pin', 'uint8'), # digital and analog value ranges of the signal ('min_digital_val', 'int16'), ('max_digital_val', 'int16'), ('min_analog_val', 'int16'), ('max_analog_val', 'int16'), # units of the analog range values ("mV" or "uV") ('units', 'S16'), # filter settings used to create nsx from source signal ('hi_freq_corner', 'uint32'), ('hi_freq_order', 'uint32'), ('hi_freq_type', 'uint16'), # 0=None, 1=Butterworth ('lo_freq_corner', 'uint32'), ('lo_freq_order', 'uint32'), ('lo_freq_type', 'uint16')] # 0=None, 1=Butterworth nsx_ext_header = np.memmap( filename, mode='r', shape=shape, offset=offset_dt0, dtype=dt1) return nsx_basic_header, nsx_ext_header def __read_nsx_dataheader(self, nsx_nb, offset): """ Reads data header following the given offset of an nsx file. """ filename = '.'.join([self._filenames['nsx'], 'ns%i' % nsx_nb]) # dtypes data header dt2 = [ ('header', 'uint8'), ('timestamp', 'uint32'), ('nb_data_points', 'uint32')] return np.memmap( filename, mode='r', dtype=dt2, shape=1, offset=offset)[0] def __read_nsx_dataheader_variant_a( self, nsx_nb, filesize=None, offset=None): """ Reads None for the nsx data header of file spec 2.1. Introduced to facilitate compatibility with higher file spec. """ return None def __read_nsx_dataheader_variant_b( self, nsx_nb, filesize=None, offset=None, ): """ Reads the nsx data header for each data block following the offset of file spec 2.2 and 2.3. 
""" filename = '.'.join([self._filenames['nsx'], 'ns%i' % nsx_nb]) filesize = self.__get_file_size(filename) data_header = {} index = 0 if offset is None: offset = self.__nsx_basic_header[nsx_nb]['bytes_in_headers'] while offset < filesize: index += 1 dh = self.__read_nsx_dataheader(nsx_nb, offset) data_header[index] = { 'header': dh['header'], 'timestamp': dh['timestamp'], 'nb_data_points': dh['nb_data_points'], 'offset_to_data_block': offset + dh.dtype.itemsize} # data size = number of data points * (2bytes * number of channels) # use of `int` avoids overflow problem data_size = int(dh['nb_data_points']) * \ int(self.__nsx_basic_header[nsx_nb]['channel_count']) * 2 # define new offset (to possible next data block) offset = data_header[index]['offset_to_data_block'] + data_size return data_header def __read_nsx_data_variant_a(self, nsx_nb): """ Extract nsx data from a 2.1 .nsx file """ filename = '.'.join([self._filenames['nsx'], 'ns%i' % nsx_nb]) # get shape of data shape = ( self.__nsx_databl_param['2.1']('nb_data_points', nsx_nb), self.__nsx_basic_header[nsx_nb]['channel_count']) offset = self.__nsx_params['2.1']('bytes_in_headers', nsx_nb) # read nsx data # store as dict for compatibility with higher file specs data = {1: np.memmap( filename, mode='r', dtype='int16', shape=shape, offset=offset)} return data def __read_nsx_data_variant_b(self, nsx_nb): """ Extract nsx data (blocks) from a 2.2 or 2.3 .nsx file. Blocks can arise if the recording was paused by the user. """ filename = '.'.join([self._filenames['nsx'], 'ns%i' % nsx_nb]) data = {} for data_bl in self.__nsx_data_header[nsx_nb].keys(): # get shape and offset of data shape = ( self.__nsx_data_header[nsx_nb][data_bl]['nb_data_points'], self.__nsx_basic_header[nsx_nb]['channel_count']) offset = \ self.__nsx_data_header[nsx_nb][data_bl]['offset_to_data_block'] # read data data[data_bl] = np.memmap( filename, mode='r', dtype='int16', shape=shape, offset=offset) return data def __read_nev_header(self, ext_header_variants): """ Extract nev header information from a 2.1 .nsx file """ filename = '.'.join([self._filenames['nev'], 'nev']) # basic header dt0 = [ # Set to "NEURALEV" ('file_type_id', 'S8'), ('ver_major', 'uint8'), ('ver_minor', 'uint8'), # Flags ('additionnal_flags', 'uint16'), # File index of first data sample ('bytes_in_headers', 'uint32'), # Number of bytes per data packet (sample) ('bytes_in_data_packets', 'uint32'), # Time resolution of time stamps in Hz ('timestamp_resolution', 'uint32'), # Sampling frequency of waveforms in Hz ('sample_resolution', 'uint32'), ('year', 'uint16'), ('month', 'uint16'), ('weekday', 'uint16'), ('day', 'uint16'), ('hour', 'uint16'), ('minute', 'uint16'), ('second', 'uint16'), ('millisecond', 'uint16'), ('application_to_create_file', 'S32'), ('comment_field', 'S256'), # Number of extended headers ('nb_ext_headers', 'uint32')] nev_basic_header = np.fromfile(filename, count=1, dtype=dt0)[0] # extended header # this consist in N block with code 8bytes + 24 data bytes # the data bytes depend on the code and need to be converted # cafilename_nsx, segse by case shape = nev_basic_header['nb_ext_headers'] offset_dt0 = np.dtype(dt0).itemsize # This is the common structure of the beginning of extended headers dt1 = [ ('packet_id', 'S8'), ('info_field', 'S24')] raw_ext_header = np.memmap( filename, mode='r', offset=offset_dt0, dtype=dt1, shape=shape) nev_ext_header = {} for packet_id in ext_header_variants.keys(): mask = (raw_ext_header['packet_id'] == packet_id) dt2 = 
self.__nev_ext_header_types()[packet_id][ ext_header_variants[packet_id]] nev_ext_header[packet_id] = raw_ext_header.view(dt2)[mask] return nev_basic_header, nev_ext_header def __read_nev_header_variant_a(self): """ Extract nev header information from a 2.1 .nev file """ ext_header_variants = { b'NEUEVWAV': 'a', b'ARRAYNME': 'a', b'ECOMMENT': 'a', b'CCOMMENT': 'a', b'MAPFILE': 'a', b'NSASEXEV': 'a'} return self.__read_nev_header(ext_header_variants) def __read_nev_header_variant_b(self): """ Extract nev header information from a 2.2 .nev file """ ext_header_variants = { b'NEUEVWAV': 'b', b'ARRAYNME': 'a', b'ECOMMENT': 'a', b'CCOMMENT': 'a', b'MAPFILE': 'a', b'NEUEVLBL': 'a', b'NEUEVFLT': 'a', b'DIGLABEL': 'a', b'NSASEXEV': 'a'} return self.__read_nev_header(ext_header_variants) def __read_nev_header_variant_c(self): """ Extract nev header information from a 2.3 .nev file """ ext_header_variants = { b'NEUEVWAV': 'b', b'ARRAYNME': 'a', b'ECOMMENT': 'a', b'CCOMMENT': 'a', b'MAPFILE': 'a', b'NEUEVLBL': 'a', b'NEUEVFLT': 'a', b'DIGLABEL': 'a', b'VIDEOSYN': 'a', b'TRACKOBJ': 'a'} return self.__read_nev_header(ext_header_variants) def __read_nev_data(self, nev_data_masks, nev_data_types): """ Extract nev data from a 2.1 or 2.2 .nev file """ filename = '.'.join([self._filenames['nev'], 'nev']) data_size = self.__nev_basic_header['bytes_in_data_packets'] header_size = self.__nev_basic_header['bytes_in_headers'] # read all raw data packets and markers dt0 = [ ('timestamp', 'uint32'), ('packet_id', 'uint16'), ('value', 'S{0}'.format(data_size - 6))] raw_data = np.memmap(filename, mode='r', offset=header_size, dtype=dt0) masks = self.__nev_data_masks(raw_data['packet_id']) types = self.__nev_data_types(data_size) data = {} for k, v in nev_data_masks.items(): data[k] = raw_data.view(types[k][nev_data_types[k]])[masks[k][v]] return data def __read_nev_data_variant_a(self): """ Extract nev data from a 2.1 & 2.2 .nev file """ nev_data_masks = { 'NonNeural': 'a', 'Spikes': 'a'} nev_data_types = { 'NonNeural': 'a', 'Spikes': 'a'} return self.__read_nev_data(nev_data_masks, nev_data_types) def __read_nev_data_variant_b(self): """ Extract nev data from a 2.3 .nev file """ nev_data_masks = { 'NonNeural': 'a', 'Spikes': 'b', 'Comments': 'a', 'VideoSync': 'a', 'TrackingEvents': 'a', 'ButtonTrigger': 'a', 'ConfigEvent': 'a'} nev_data_types = { 'NonNeural': 'b', 'Spikes': 'a', 'Comments': 'a', 'VideoSync': 'a', 'TrackingEvents': 'a', 'ButtonTrigger': 'a', 'ConfigEvent': 'a'} return self.__read_nev_data(nev_data_masks, nev_data_types) def __nev_ext_header_types(self): """ Defines extended header types for different .nev file specifications. 
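        The returned dict maps each packet id (e.g. b'NEUEVWAV') to its known
        layout variants, each described as a numpy structured dtype.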
""" nev_ext_header_types = { b'NEUEVWAV': { # Version>=2.1 'a': [ ('packet_id', 'S8'), ('electrode_id', 'uint16'), ('physical_connector', 'uint8'), ('connector_pin', 'uint8'), ('digitization_factor', 'uint16'), ('energy_threshold', 'uint16'), ('hi_threshold', 'int16'), ('lo_threshold', 'int16'), ('nb_sorted_units', 'uint8'), # number of bytes per waveform sample ('bytes_per_waveform', 'uint8'), ('unused', 'S10')], # Version>=2.3 'b': [ ('packet_id', 'S8'), ('electrode_id', 'uint16'), ('physical_connector', 'uint8'), ('connector_pin', 'uint8'), ('digitization_factor', 'uint16'), ('energy_threshold', 'uint16'), ('hi_threshold', 'int16'), ('lo_threshold', 'int16'), ('nb_sorted_units', 'uint8'), # number of bytes per waveform sample ('bytes_per_waveform', 'uint8'), # number of samples for each waveform ('spike_width', 'uint16'), ('unused', 'S8')]}, b'ARRAYNME': { 'a': [ ('packet_id', 'S8'), ('electrode_array_name', 'S24')]}, b'ECOMMENT': { 'a': [ ('packet_id', 'S8'), ('extra_comment', 'S24')]}, b'CCOMMENT': { 'a': [ ('packet_id', 'S8'), ('continued_comment', 'S24')]}, b'MAPFILE': { 'a': [ ('packet_id', 'S8'), ('mapFile', 'S24')]}, b'NEUEVLBL': { 'a': [ ('packet_id', 'S8'), ('electrode_id', 'uint16'), # label of this electrode ('label', 'S16'), ('unused', 'S6')]}, b'NEUEVFLT': { 'a': [ ('packet_id', 'S8'), ('electrode_id', 'uint16'), ('hi_freq_corner', 'uint32'), ('hi_freq_order', 'uint32'), # 0=None 1=Butterworth ('hi_freq_type', 'uint16'), ('lo_freq_corner', 'uint32'), ('lo_freq_order', 'uint32'), # 0=None 1=Butterworth ('lo_freq_type', 'uint16'), ('unused', 'S2')]}, b'DIGLABEL': { 'a': [ ('packet_id', 'S8'), # Read name of digital ('label', 'S16'), # 0=serial, 1=parallel ('mode', 'uint8'), ('unused', 'S7')]}, b'NSASEXEV': { 'a': [ ('packet_id', 'S8'), # Read frequency of periodic packet generation ('frequency', 'uint16'), # Read if digital input triggers events ('digital_input_config', 'uint8'), # Read if analog input triggers events ('analog_channel_1_config', 'uint8'), ('analog_channel_1_edge_detec_val', 'uint16'), ('analog_channel_2_config', 'uint8'), ('analog_channel_2_edge_detec_val', 'uint16'), ('analog_channel_3_config', 'uint8'), ('analog_channel_3_edge_detec_val', 'uint16'), ('analog_channel_4_config', 'uint8'), ('analog_channel_4_edge_detec_val', 'uint16'), ('analog_channel_5_config', 'uint8'), ('analog_channel_5_edge_detec_val', 'uint16'), ('unused', 'S6')]}, b'VIDEOSYN': { 'a': [ ('packet_id', 'S8'), ('video_source_id', 'uint16'), ('video_source', 'S16'), ('frame_rate', 'float32'), ('unused', 'S2')]}, b'TRACKOBJ': { 'a': [ ('packet_id', 'S8'), ('trackable_type', 'uint16'), ('trackable_id', 'uint16'), ('point_count', 'uint16'), ('video_source', 'S16'), ('unused', 'S2')]}} return nev_ext_header_types def __nev_data_masks(self, packet_ids): """ Defines data masks for different .nev file specifications depending on the given packet identifiers. """ __nev_data_masks = { 'NonNeural': { 'a': (packet_ids == 0)}, 'Spikes': { # Version 2.1 & 2.2 'a': (0 < packet_ids) & (packet_ids <= 255), # Version>=2.3 'b': (0 < packet_ids) & (packet_ids <= 2048)}, 'Comments': { 'a': (packet_ids == 0xFFFF)}, 'VideoSync': { 'a': (packet_ids == 0xFFFE)}, 'TrackingEvents': { 'a': (packet_ids == 0xFFFD)}, 'ButtonTrigger': { 'a': (packet_ids == 0xFFFC)}, 'ConfigEvent': { 'a': (packet_ids == 0xFFFB)}} return __nev_data_masks def __nev_data_types(self, data_size): """ Defines data types for different .nev file specifications depending on the given packet identifiers. 
""" __nev_data_types = { 'NonNeural': { # Version 2.1 & 2.2 'a': [ ('timestamp', 'uint32'), ('packet_id', 'uint16'), ('packet_insertion_reason', 'uint8'), ('reserved', 'uint8'), ('digital_input', 'uint16'), ('analog_input_channel_1', 'int16'), ('analog_input_channel_2', 'int16'), ('analog_input_channel_3', 'int16'), ('analog_input_channel_4', 'int16'), ('analog_input_channel_5', 'int16'), ('unused', 'S{0}'.format(data_size - 20))], # Version>=2.3 'b': [ ('timestamp', 'uint32'), ('packet_id', 'uint16'), ('packet_insertion_reason', 'uint8'), ('reserved', 'uint8'), ('digital_input', 'uint16'), ('unused', 'S{0}'.format(data_size - 10))]}, 'Spikes': { 'a': [ ('timestamp', 'uint32'), ('packet_id', 'uint16'), ('unit_class_nb', 'uint8'), ('reserved', 'uint8'), ('waveform', 'S{0}'.format(data_size - 8))]}, 'Comments': { 'a': [ ('timestamp', 'uint32'), ('packet_id', 'uint16'), ('char_set', 'uint8'), ('flag', 'uint8'), ('data', 'uint32'), ('comment', 'S{0}'.format(data_size - 12))]}, 'VideoSync': { 'a': [ ('timestamp', 'uint32'), ('packet_id', 'uint16'), ('video_file_nb', 'uint16'), ('video_frame_nb', 'uint32'), ('video_elapsed_time', 'uint32'), ('video_source_id', 'uint32'), ('unused', 'int8', (data_size - 20,))]}, 'TrackingEvents': { 'a': [ ('timestamp', 'uint32'), ('packet_id', 'uint16'), ('parent_id', 'uint16'), ('node_id', 'uint16'), ('node_count', 'uint16'), ('point_count', 'uint16'), ('tracking_points', 'uint16', ((data_size - 14) // 2,))]}, 'ButtonTrigger': { 'a': [ ('timestamp', 'uint32'), ('packet_id', 'uint16'), ('trigger_type', 'uint16'), ('unused', 'int8', (data_size - 8,))]}, 'ConfigEvent': { 'a': [ ('timestamp', 'uint32'), ('packet_id', 'uint16'), ('config_change_type', 'uint16'), ('config_changed', 'S{0}'.format(data_size - 8))]}} return __nev_data_types def __nev_params(self, param_name): """ Returns wanted nev parameter. """ nev_parameters = { 'bytes_in_data_packets': self.__nev_basic_header['bytes_in_data_packets'], 'rec_datetime': datetime.datetime( year=self.__nev_basic_header['year'], month=self.__nev_basic_header['month'], day=self.__nev_basic_header['day'], hour=self.__nev_basic_header['hour'], minute=self.__nev_basic_header['minute'], second=self.__nev_basic_header['second'], microsecond=self.__nev_basic_header['millisecond']), 'max_res': self.__nev_basic_header['timestamp_resolution'], 'channel_ids': self.__nev_ext_header[b'NEUEVWAV']['electrode_id'], 'channel_labels': self.__channel_labels[self.__nev_spec](), 'event_unit': pq.CompoundUnit("1.0/{0} * s".format( self.__nev_basic_header['timestamp_resolution'])), 'nb_units': dict(zip( self.__nev_ext_header[b'NEUEVWAV']['electrode_id'], self.__nev_ext_header[b'NEUEVWAV']['nb_sorted_units'])), 'digitization_factor': dict(zip( self.__nev_ext_header[b'NEUEVWAV']['electrode_id'], self.__nev_ext_header[b'NEUEVWAV']['digitization_factor'])), 'data_size': self.__nev_basic_header['bytes_in_data_packets'], 'waveform_size': self.__waveform_size[self.__nev_spec](), 'waveform_dtypes': self.__get_waveforms_dtype(), 'waveform_sampling_rate': self.__nev_basic_header['sample_resolution'] * pq.Hz, 'waveform_time_unit': pq.CompoundUnit("1.0/{0} * s".format( self.__nev_basic_header['sample_resolution'])), 'waveform_unit': pq.uV} return nev_parameters[param_name] def __get_file_size(self, filename): """ Returns the file size in bytes for the given file. 
""" filebuf = open(filename, 'rb') filebuf.seek(0, os.SEEK_END) file_size = filebuf.tell() filebuf.close() return file_size def __get_min_time(self): """ Returns the smallest time that can be determined from the recording for use as the lower bound n in an interval [n,m). """ tp = [] if self._avail_files['nev']: tp.extend(self.__get_nev_rec_times()[0]) for nsx_i in self._avail_nsx: tp.extend(self.__nsx_rec_times[self.__nsx_spec[nsx_i]](nsx_i)[0]) return min(tp) def __get_max_time(self): """ Returns the largest time that can be determined from the recording for use as the upper bound m in an interval [n,m). """ tp = [] if self._avail_files['nev']: tp.extend(self.__get_nev_rec_times()[1]) for nsx_i in self._avail_nsx: tp.extend(self.__nsx_rec_times[self.__nsx_spec[nsx_i]](nsx_i)[1]) return max(tp) def __get_nev_rec_times(self): """ Extracts minimum and maximum time points from a nev file. """ filename = '.'.join([self._filenames['nev'], 'nev']) dt = [('timestamp', 'uint32')] offset = \ self.__get_file_size(filename) - \ self.__nev_params('bytes_in_data_packets') last_data_packet = np.memmap( filename, mode='r', offset=offset, dtype=dt)[0] n_starts = [0 * self.__nev_params('event_unit')] n_stops = [ last_data_packet['timestamp'] * self.__nev_params('event_unit')] return n_starts, n_stops def __get_nsx_rec_times_variant_a(self, nsx_nb): """ Extracts minimum and maximum time points from a 2.1 nsx file. """ filename = '.'.join([self._filenames['nsx'], 'ns%i' % nsx_nb]) t_unit = self.__nsx_params[self.__nsx_spec[nsx_nb]]( 'time_unit', nsx_nb) highest_res = self.__nev_params('event_unit') bytes_in_headers = self.__nsx_params[self.__nsx_spec[nsx_nb]]( 'bytes_in_headers', nsx_nb) nb_data_points = int( (self.__get_file_size(filename) - bytes_in_headers) / (2 * self.__nsx_basic_header[nsx_nb]['channel_count']) - 1) # add n_start n_starts = [(0 * t_unit).rescale(highest_res)] # add n_stop n_stops = [(nb_data_points * t_unit).rescale(highest_res)] return n_starts, n_stops def __get_nsx_rec_times_variant_b(self, nsx_nb): """ Extracts minimum and maximum time points from a 2.2 or 2.3 nsx file. """ t_unit = self.__nsx_params[self.__nsx_spec[nsx_nb]]( 'time_unit', nsx_nb) highest_res = self.__nev_params('event_unit') n_starts = [] n_stops = [] # add n-start and n_stop for all data blocks for data_bl in self.__nsx_data_header[nsx_nb].keys(): ts0 = self.__nsx_data_header[nsx_nb][data_bl]['timestamp'] nbdp = self.__nsx_data_header[nsx_nb][data_bl]['nb_data_points'] # add n_start start = ts0 * t_unit n_starts.append(start.rescale(highest_res)) # add n_stop stop = start + nbdp * t_unit n_stops.append(stop.rescale(highest_res)) return sorted(n_starts), sorted(n_stops) def __get_waveforms_dtype(self): """ Extracts the actual waveform dtype set for each channel. 
""" # Blackrock code giving the approiate dtype conv = {0: 'int8', 1: 'int8', 2: 'int16', 4: 'int32'} # get all electrode ids from nev ext header all_el_ids = self.__nev_ext_header[b'NEUEVWAV']['electrode_id'] # get the dtype of waveform (this is stupidly complicated) if self.__is_set( np.array(self.__nev_basic_header['additionnal_flags']), 0): dtype_waveforms = dict((k, 'int16') for k in all_el_ids) else: # extract bytes per waveform waveform_bytes = \ self.__nev_ext_header[b'NEUEVWAV']['bytes_per_waveform'] # extract dtype for waveforms fro each electrode dtype_waveforms = dict(zip(all_el_ids, conv[waveform_bytes])) return dtype_waveforms def __get_channel_labels_variant_a(self): """ Returns labels for all channels for file spec 2.1 """ elids = self.__nev_ext_header[b'NEUEVWAV']['electrode_id'] labels = [] for elid in elids: if elid < 129: labels.append('chan%i' % elid) else: labels.append('ainp%i' % (elid - 129 + 1)) return dict(zip(elids, labels)) def __get_channel_labels_variant_b(self): """ Returns labels for all channels for file spec 2.2 and 2.3 """ elids = self.__nev_ext_header[b'NEUEVWAV']['electrode_id'] labels = self.__nev_ext_header[b'NEUEVLBL']['label'] return dict(zip(elids, labels)) if len(labels) > 0 else None def __get_waveform_size_variant_a(self): """ Returns wavform sizes for all channels for file spec 2.1 and 2.2 """ wf_dtypes = self.__get_waveforms_dtype() nb_bytes_wf = self.__nev_basic_header['bytes_in_data_packets'] - 8 wf_sizes = dict([ (ch, int(nb_bytes_wf / np.dtype(dt).itemsize)) for ch, dt in wf_dtypes.items()]) return wf_sizes def __get_waveform_size_variant_b(self): """ Returns wavform sizes for all channels for file spec 2.3 """ elids = self.__nev_ext_header[b'NEUEVWAV']['electrode_id'] spike_widths = self.__nev_ext_header[b'NEUEVWAV']['spike_width'] return dict(zip(elids, spike_widths)) def __get_left_sweep_waveforms(self): """ Returns left sweep of waveforms for each channel. Left sweep is defined as the time from the beginning of the waveform to the trigger time of the corresponding spike. """ # TODO: Double check if this is the actual setting for Blackrock wf_t_unit = self.__nev_params('waveform_time_unit') all_ch = self.__nev_params('channel_ids') # TODO: Double check if this is the correct assumption (10 samples) # default value: threshold crossing after 10 samples of waveform wf_left_sweep = dict([(ch, 10 * wf_t_unit) for ch in all_ch]) # non-default: threshold crossing at center of waveform # wf_size = self.__nev_params('waveform_size') # wf_left_sweep = dict( # [(ch, (wf_size[ch] / 2) * wf_t_unit) for ch in all_ch]) return wf_left_sweep def __get_nsx_param_variant_a(self, param_name, nsx_nb): """ Returns parameter (param_name) for a given nsx (nsx_nb) for file spec 2.1. """ # Here, min/max_analog_val and min/max_digital_val are not available in # the nsx, so that we must estimate these parameters from the # digitization factor of the nev (information by Kian Torab, Blackrock # Microsystems). Here dig_factor=max_analog_val/max_digital_val. We set # max_digital_val to 1000, and max_analog_val=dig_factor. dig_factor is # given in nV by definition, so the units turn out to be uV. labels = [] dig_factor = [] for elid in self.__nsx_ext_header[nsx_nb]['electrode_id']: if self._avail_files['nev']: # This is a workaround for the DigitalFactor overflow in NEV # files recorded with buggy Cerebus system. # Fix taken from: NMPK toolbox by Blackrock, # file openNEV, line 464, # git rev. 
d0a25eac902704a3a29fa5dfd3aed0744f4733ed df = self.__nev_params('digitization_factor')[elid] if df == 21516: df = 152592.547 dig_factor.append(df) else: dig_factor.append(None) if elid < 129: labels.append('chan%i' % elid) else: labels.append('ainp%i' % (elid - 129 + 1)) nsx_parameters = { 'labels': labels, 'units': np.array( [b'uV'] * self.__nsx_basic_header[nsx_nb]['channel_count']), 'min_analog_val': -1 * np.array(dig_factor), 'max_analog_val': np.array(dig_factor), 'min_digital_val': np.array( [-1000] * self.__nsx_basic_header[nsx_nb]['channel_count']), 'max_digital_val': np.array( [1000] * self.__nsx_basic_header[nsx_nb]['channel_count']), 'timestamp_resolution': 30000, 'bytes_in_headers': self.__nsx_basic_header[nsx_nb].dtype.itemsize + self.__nsx_ext_header[nsx_nb].dtype.itemsize * self.__nsx_basic_header[nsx_nb]['channel_count'], 'sampling_rate': 30000 / self.__nsx_basic_header[nsx_nb]['period'] * pq.Hz, 'time_unit': pq.CompoundUnit("1.0/{0}*s".format( 30000 / self.__nsx_basic_header[nsx_nb]['period']))} return nsx_parameters[param_name] def __get_nsx_param_variant_b(self, param_name, nsx_nb): """ Returns parameter (param_name) for a given nsx (nsx_nb) for file spec 2.2 and 2.3. """ nsx_parameters = { 'labels': self.__nsx_ext_header[nsx_nb]['electrode_label'], 'units': self.__nsx_ext_header[nsx_nb]['units'], 'min_analog_val': self.__nsx_ext_header[nsx_nb]['min_analog_val'], 'max_analog_val': self.__nsx_ext_header[nsx_nb]['max_analog_val'], 'min_digital_val': self.__nsx_ext_header[nsx_nb]['min_digital_val'], 'max_digital_val': self.__nsx_ext_header[nsx_nb]['max_digital_val'], 'timestamp_resolution': self.__nsx_basic_header[nsx_nb]['timestamp_resolution'], 'bytes_in_headers': self.__nsx_basic_header[nsx_nb]['bytes_in_headers'], 'sampling_rate': self.__nsx_basic_header[nsx_nb]['timestamp_resolution'] / self.__nsx_basic_header[nsx_nb]['period'] * pq.Hz, 'time_unit': pq.CompoundUnit("1.0/{0}*s".format( self.__nsx_basic_header[nsx_nb]['timestamp_resolution'] / self.__nsx_basic_header[nsx_nb]['period']))} return nsx_parameters[param_name] def __get_nsx_databl_param_variant_a( self, param_name, nsx_nb, n_start=None, n_stop=None): """ Returns data block parameter (param_name) for a given nsx (nsx_nb) for file spec 2.1. Arg 'n_start' should not be specified! It is only set for compatibility reasons with higher file spec. """ filename = '.'.join([self._filenames['nsx'], 'ns%i' % nsx_nb]) t_starts, t_stops = \ self.__nsx_rec_times[self.__nsx_spec[nsx_nb]](nsx_nb) bytes_in_headers = self.__nsx_params[self.__nsx_spec[nsx_nb]]( 'bytes_in_headers', nsx_nb) # extract parameters from nsx basic extended and data header data_parameters = { 'nb_data_points': int( (self.__get_file_size(filename) - bytes_in_headers) / (2 * self.__nsx_basic_header[nsx_nb]['channel_count']) - 1), 'databl_idx': 1, 'databl_t_start': t_starts[0], 'databl_t_stop': t_stops[0]} return data_parameters[param_name] def __get_nsx_databl_param_variant_b( self, param_name, nsx_nb, n_start, n_stop): """ Returns data block parameter (param_name) for a given nsx (nsx_nb) with a wanted n_start for file spec 2.2 and 2.3. 
""" t_starts, t_stops = \ self.__nsx_rec_times[self.__nsx_spec[nsx_nb]](nsx_nb) # data header for d_bl in self.__nsx_data_header[nsx_nb].keys(): # from "data header" with corresponding t_start and t_stop data_parameters = { 'nb_data_points': self.__nsx_data_header[nsx_nb][d_bl]['nb_data_points'], 'databl_idx': d_bl, 'databl_t_start': t_starts[d_bl - 1], 'databl_t_stop': t_stops[d_bl - 1]} if t_starts[d_bl - 1] <= n_start < n_stop <= t_stops[d_bl - 1]: return data_parameters[param_name] elif n_start < t_starts[d_bl - 1] < n_stop <= t_stops[d_bl - 1]: self._print_verbose( "User n_start ({0}) is smaller than the corresponding " "t_start of the available ns{1} datablock " "({2}).".format(n_start, nsx_nb, t_starts[d_bl - 1])) return data_parameters[param_name] elif t_starts[d_bl - 1] <= n_start < t_stops[d_bl - 1] < n_stop: self._print_verbose( "User n_stop ({0}) is larger than the corresponding " "t_stop of the available ns{1} datablock " "({2}).".format(n_stop, nsx_nb, t_stops[d_bl - 1])) return data_parameters[param_name] elif n_start < t_starts[d_bl - 1] < t_stops[d_bl - 1] < n_stop: self._print_verbose( "User n_start ({0}) is smaller than the corresponding " "t_start and user n_stop ({1}) is larger than the " "corresponding t_stop of the available ns{2} datablock " "({3}).".format( n_start, n_stop, nsx_nb, (t_starts[d_bl - 1], t_stops[d_bl - 1]))) return data_parameters[param_name] else: continue raise ValueError( "User n_start and n_stop are all smaller or larger than the " "t_start and t_stops of all available ns%i datablocks" % nsx_nb) def __get_nonneural_evtypes_variant_a(self, data): """ Defines event types and the necessary parameters to extract them from a 2.1 and 2.2 nev file. """ # TODO: add annotations of nev ext header (NSASEXEX) to event types # digital events event_types = { 'digital_input_port': { 'name': 'digital_input_port', 'field': 'digital_input', 'mask': self.__is_set(data['packet_insertion_reason'], 0), 'desc': "Events of the digital input port"}, 'serial_input_port': { 'name': 'serial_input_port', 'field': 'digital_input', 'mask': self.__is_set(data['packet_insertion_reason'], 0) & self.__is_set(data['packet_insertion_reason'], 7), 'desc': "Events of the serial input port"}} # analog input events via threshold crossings for ch in range(5): event_types.update({ 'analog_input_channel_{0}'.format(ch + 1): { 'name': 'analog_input_channel_{0}'.format(ch + 1), 'field': 'analog_input_channel_{0}'.format(ch + 1), 'mask': self.__is_set( data['packet_insertion_reason'], ch + 1), 'desc': "Values of analog input channel {0} in mV " "(+/- 5000)".format(ch + 1)}}) # TODO: define field and desc event_types.update({ 'periodic_sampling_events': { 'name': 'periodic_sampling_events', 'field': 'digital_input', 'mask': self.__is_set(data['packet_insertion_reason'], 6), 'desc': 'Periodic sampling event of a certain frequency'}}) return event_types def __get_nonneural_evtypes_variant_b(self, data): """ Defines event types and the necessary parameters to extract them from a 2.3 nev file. 
""" # digital events event_types = { 'digital_input_port': { 'name': 'digital_input_port', 'field': 'digital_input', 'mask': self.__is_set(data['packet_insertion_reason'], 0), 'desc': "Events of the digital input port"}, 'serial_input_port': { 'name': 'serial_input_port', 'field': 'digital_input', 'mask': self.__is_set(data['packet_insertion_reason'], 0) & self.__is_set(data['packet_insertion_reason'], 7), 'desc': "Events of the serial input port"}} return event_types def __get_unit_classification(self, un_id): """ Returns the Blackrock unit classification of an online spike sorting for the given unit id (un_id). """ # Blackrock unit classification if un_id == 0: return 'unclassified' elif 1 <= un_id <= 16: return '{0}'.format(un_id) elif 17 <= un_id <= 244: raise ValueError( "Unit id {0} is not used by daq system".format(un_id)) elif un_id == 255: return 'noise' else: raise ValueError("Unit id {0} cannot be classified".format(un_id)) def __is_set(self, flag, pos): """ Checks if bit is set at the given position for flag. If flag is an array, an array will be returned. """ return flag & (1 << pos) > 0 def __transform_nsx_to_load(self, nsx_to_load): """ Transforms the input argument nsx_to_load to a list of integers. """ if hasattr(nsx_to_load, "__len__") and len(nsx_to_load) == 0: nsx_to_load = None if isinstance(nsx_to_load, int): nsx_to_load = [nsx_to_load] if isinstance(nsx_to_load, str): if nsx_to_load.lower() == 'none': nsx_to_load = None elif nsx_to_load.lower() == 'all': nsx_to_load = self._avail_nsx else: raise ValueError("Invalid specification of nsx_to_load.") if nsx_to_load: for nsx_nb in nsx_to_load: if not self._avail_files['ns' + str(nsx_nb)]: raise ValueError("ns%i is not available" % nsx_nb) return nsx_to_load def __transform_channels(self, channels, nsx_to_load): """ Transforms the input argument channels to a list of integers. """ all_channels = [] nsx_to_load = self.__transform_nsx_to_load(nsx_to_load) if nsx_to_load is not None: for nsx_nb in nsx_to_load: all_channels.extend( self.__nsx_ext_header[nsx_nb]['electrode_id'].astype(int)) elec_id = self.__nev_ext_header[b'NEUEVWAV']['electrode_id'] all_channels.extend(elec_id.astype(int)) all_channels = np.unique(all_channels).tolist() if hasattr(channels, "__len__") and len(channels) == 0: channels = None if isinstance(channels, int): channels = [channels] if isinstance(channels, str): if channels.lower() == 'none': channels = None elif channels.lower() == 'all': channels = all_channels else: raise ValueError("Invalid channel specification.") if channels: if len(set(all_channels) & set(channels)) < len(channels): self._print_verbose( "Ignoring unknown channel ID(s) specified in in channels.") # Make sure, all channels are valid and contain no duplicates channels = list(set(all_channels).intersection(set(channels))) else: self._print_verbose("No channel is specified, therefore no " "time series and unit data is loaded.") return channels def __transform_units(self, units, channels): """ Transforms the input argument nsx_to_load to a dictionary, where keys (channels) are int, and values (units) are lists of integers. 
""" if isinstance(units, dict): for ch, u in units.items(): if ch not in channels: self._print_verbose( "Units contain a channel id which is not listed in " "channels") if isinstance(u, int): units[ch] = [u] if hasattr(u, '__len__') and len(u) == 0: units[ch] = None if isinstance(u, str): if u.lower() == 'none': units[ch] = None elif u.lower() == 'all': units[ch] = list(range(17)) units[ch].append(255) else: raise ValueError("Invalid unit specification.") else: if hasattr(units, "__len__") and len(units) == 0: units = None if isinstance(units, str): if units.lower() == 'none': units = None elif units.lower() == 'all': units = list(range(17)) units.append(255) else: raise ValueError("Invalid unit specification.") if isinstance(units, int): units = [units] if (channels is None) and (units is not None): raise ValueError( 'At least one channel needs to be loaded to load units') if units: units = dict(zip(channels, [units] * len(channels))) if units is None: self._print_verbose("No units are specified, therefore no " "unit or spiketrain is loaded.") return units def __transform_times(self, n, default_n): """ Transforms the input argument n_start or n_stop (n) to a list of quantities. In case n is None, it is set to a default value provided by the given function (default_n). """ highest_res = self.__nev_params('event_unit') if isinstance(n, pq.Quantity): n = [n.rescale(highest_res)] elif hasattr(n, "__len__"): n = [tp.rescale(highest_res) if tp is not None else default_n for tp in n] elif n is None: n = [default_n] else: raise ValueError('Invalid specification of n_start/n_stop.') return n def __merge_time_ranges( self, user_n_starts, user_n_stops, nsx_to_load): """ Merges after a validation the user specified n_starts and n_stops with the intrinsically given n_starts and n_stops (from e.g, recording pauses) of the file set. Final n_starts and n_stops are chosen, so that the time range of each resulting segment is set to the best meaningful maximum. This means that the duration of the signals stored in the segments might be smaller than the actually set duration of the segment. 
""" # define the higest time resolution # (for accurate manipulations of the time settings) max_time = self.__get_max_time() min_time = self.__get_min_time() highest_res = self.__nev_params('event_unit') user_n_starts = self.__transform_times( user_n_starts, min_time) user_n_stops = self.__transform_times( user_n_stops, max_time) # check if user provided as many n_starts as n_stops if len(user_n_starts) != len(user_n_stops): raise ValueError("n_starts and n_stops must be of equal length") # if necessary reset max n_stop to max time of file set start_stop_id = 0 while start_stop_id < len(user_n_starts): if user_n_starts[start_stop_id] < min_time: user_n_starts[start_stop_id] = min_time self._print_verbose( "Entry of n_start '{}' is smaller than min time of the file " "set: n_start set to min time of file set" "".format(user_n_starts[start_stop_id])) if user_n_stops[start_stop_id] > max_time: user_n_stops[start_stop_id] = max_time self._print_verbose( "Entry of n_stop '{}' is larger than max time of the file " "set: n_stop set to max time of file set" "".format(user_n_stops[start_stop_id])) if (user_n_stops[start_stop_id] < min_time or user_n_starts[start_stop_id] > max_time): user_n_stops.pop(start_stop_id) user_n_starts.pop(start_stop_id) self._print_verbose( "Entry of n_start is larger than max time or entry of " "n_stop is smaller than min time of the " "file set: n_start and n_stop are ignored") continue start_stop_id += 1 # get intrinsic time settings of nsx files (incl. rec pauses) n_starts_files = [] n_stops_files = [] if nsx_to_load is not None: for nsx_nb in nsx_to_load: start_stop = \ self.__nsx_rec_times[self.__nsx_spec[nsx_nb]](nsx_nb) n_starts_files.append(start_stop[0]) n_stops_files.append(start_stop[1]) # reducing n_starts from wanted nsx files to minima # (keep recording pause if it occurs) if len(n_starts_files) > 0: if np.shape(n_starts_files)[1] > 1: n_starts_files = [ tp * highest_res for tp in np.min(n_starts_files, axis=1)] else: n_starts_files = [ tp * highest_res for tp in np.min(n_starts_files, axis=0)] # reducing n_starts from wanted nsx files to maxima # (keep recording pause if it occurs) if len(n_stops_files) > 0: if np.shape(n_stops_files)[1] > 1: n_stops_files = [ tp * highest_res for tp in np.max(n_stops_files, axis=1)] else: n_stops_files = [ tp * highest_res for tp in np.max(n_stops_files, axis=0)] # merge user time settings with intrinsic nsx time settings n_starts = [] n_stops = [] for start, stop in zip(user_n_starts, user_n_stops): # check if start and stop of user create a positive time interval if not start < stop: raise ValueError( "t(i) in n_starts has to be smaller than t(i) in n_stops") # Reduce n_starts_files to given intervals of user & add start if len(n_starts_files) > 0: mask = (n_starts_files > start) & (n_starts_files < stop) red_n_starts_files = np.array(n_starts_files)[mask] merged_n_starts = [start] + [ tp * highest_res for tp in red_n_starts_files] else: merged_n_starts = [start] # Reduce n_stops_files to given intervals of user & add stop if len(n_stops_files) > 0: mask = (n_stops_files > start) & (n_stops_files < stop) red_n_stops_files = np.array(n_stops_files)[mask] merged_n_stops = [ tp * highest_res for tp in red_n_stops_files] + [stop] else: merged_n_stops = [stop] # Define combined user and file n_starts and n_stops # case one: if len(merged_n_starts) == len(merged_n_stops): if len(merged_n_starts) + len(merged_n_stops) == 2: n_starts.extend(merged_n_starts) n_stops.extend(merged_n_stops) if len(merged_n_starts) + 
len(merged_n_stops) > 2: merged_n_starts.remove(merged_n_starts[1]) n_starts.extend([merged_n_starts]) merged_n_stops.remove(merged_n_stops[-2]) n_stops.extend(merged_n_stops) # case two: elif len(merged_n_starts) < len(merged_n_stops): n_starts.extend(merged_n_starts) merged_n_stops.remove(merged_n_stops[-2]) n_stops.extend(merged_n_stops) # case three: elif len(merged_n_starts) > len(merged_n_stops): merged_n_starts.remove(merged_n_starts[1]) n_starts.extend(merged_n_starts) n_stops.extend(merged_n_stops) if len(n_starts) > len(user_n_starts) and \ len(n_stops) > len(user_n_stops): self._print_verbose( "Additional recording pauses were detected. There will be " "more segments than the user expects.") return n_starts, n_stops def __read_event(self, n_start, n_stop, data, ev_dict, lazy=False): """ Creates an event for non-neural experimental events in nev data. """ event_unit = self.__nev_params('event_unit') if lazy: times = [] labels = np.array([], dtype='S') else: times = data['timestamp'][ev_dict['mask']] * event_unit labels = data[ev_dict['field']][ev_dict['mask']].astype(str) # mask for given time interval mask = (times >= n_start) & (times < n_stop) if np.sum(mask) > 0: ev = Event( times=times[mask].astype(float), labels=labels[mask], name=ev_dict['name'], description=ev_dict['desc']) if lazy: ev.lazy_shape = np.sum(mask) else: ev = None return ev def __read_spiketrain( self, n_start, n_stop, spikes, channel_id, unit_id, load_waveforms=False, scaling='raw', lazy=False): """ Creates spiketrains for Spikes in nev data. """ event_unit = self.__nev_params('event_unit') # define a name for spiketrain # (unique identifier: 1000 * elid + unit_nb) name = "Unit {0}".format(1000 * channel_id + unit_id) # define description for spiketrain desc = 'SpikeTrain from channel: {0}, unit: {1}'.format( channel_id, self.__get_unit_classification(unit_id)) # get spike times for given time interval if not lazy: times = spikes['timestamp'] * event_unit mask = (times >= n_start) & (times <= n_stop) times = times[mask].astype(float) else: times = np.array([]) * event_unit st = SpikeTrain( times=times, name=name, description=desc, file_origin='.'.join([self._filenames['nev'], 'nev']), t_start=n_start, t_stop=n_stop) if lazy: st.lazy_shape = np.shape(times) # load waveforms if requested if load_waveforms and not lazy: wf_dtype = self.__nev_params('waveform_dtypes')[channel_id] wf_size = self.__nev_params('waveform_size')[channel_id] waveforms = spikes['waveform'].flatten().view(wf_dtype) waveforms = waveforms.reshape(int(spikes.size), 1, int(wf_size)) if scaling == 'voltage': st.waveforms = ( waveforms[mask] * self.__nev_params('waveform_unit') * self.__nev_params('digitization_factor')[channel_id] / 1000.) elif scaling == 'raw': st.waveforms = waveforms[mask] * pq.dimensionless else: raise ValueError( 'Unkown option {1} for parameter scaling.'.format(scaling)) st.sampling_rate = self.__nev_params('waveform_sampling_rate') st.left_sweep = self.__get_left_sweep_waveforms()[channel_id] # add additional annotations st.annotate( unit_id=int(unit_id), channel_id=int(channel_id)) return st def __read_analogsignal( self, n_start, n_stop, signal, channel_id, nsx_nb, scaling='raw', lazy=False): """ Creates analogsignal for signal of channel in nsx data. """ # TODO: The following part is extremely slow, since the memmaps for the # headers are created again and again. In particular, this makes lazy # loading slow as well. Solution would be to create header memmaps up # front. 
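        # Hedged worked example for the time-range validation performed in
        # __merge_time_ranges above (numbers made up; the file set is assumed
        # to span 2 s .. 90 s):
        #
        #     user_n_starts = [0 s,  50 s,  95 s]
        #     user_n_stops  = [10 s, 100 s, 99 s]
        #
        # becomes, after clipping to the file set and dropping windows that
        # lie completely outside of it:
        #
        #     n_starts -> [2 s,  50 s]   (0 s raised to the minimum time)
        #     n_stops  -> [10 s, 90 s]   (100 s lowered to the maximum time;
        #                                 the 95-99 s window is ignored)
        #
        # These validated windows are then merged with the intrinsic data
        # block boundaries (recording pauses) of the nsx files.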
        # get parameters
        sampling_rate = self.__nsx_params[self.__nsx_spec[nsx_nb]](
            'sampling_rate', nsx_nb)
        nsx_time_unit = self.__nsx_params[self.__nsx_spec[nsx_nb]](
            'time_unit', nsx_nb)
        max_ana = self.__nsx_params[self.__nsx_spec[nsx_nb]](
            'max_analog_val', nsx_nb)
        min_ana = self.__nsx_params[self.__nsx_spec[nsx_nb]](
            'min_analog_val', nsx_nb)
        max_dig = self.__nsx_params[self.__nsx_spec[nsx_nb]](
            'max_digital_val', nsx_nb)
        min_dig = self.__nsx_params[self.__nsx_spec[nsx_nb]](
            'min_digital_val', nsx_nb)
        units = self.__nsx_params[self.__nsx_spec[nsx_nb]](
            'units', nsx_nb)
        labels = self.__nsx_params[self.__nsx_spec[nsx_nb]](
            'labels', nsx_nb)
        dbl_idx = self.__nsx_databl_param[self.__nsx_spec[nsx_nb]](
            'databl_idx', nsx_nb, n_start, n_stop)
        t_start = self.__nsx_databl_param[self.__nsx_spec[nsx_nb]](
            'databl_t_start', nsx_nb, n_start, n_stop)
        t_stop = self.__nsx_databl_param[self.__nsx_spec[nsx_nb]](
            'databl_t_stop', nsx_nb, n_start, n_stop)

        elids_nsx = list(self.__nsx_ext_header[nsx_nb]['electrode_id'])
        if channel_id in elids_nsx:
            idx_ch = elids_nsx.index(channel_id)
        else:
            return None

        description = \
            "AnalogSignal from channel: {0}, label: {1}, nsx: {2}".format(
                channel_id, labels[idx_ch], nsx_nb)

        # TODO: Find a more time/memory efficient way to handle lazy loading
        data_times = np.arange(
            t_start.item(), t_stop.item(),
            self.__nsx_basic_header[nsx_nb]['period']) * t_start.units
        mask = (data_times >= n_start) & (data_times < n_stop)

        if lazy:
            lazy_shape = (np.sum(mask),)
            sig_ch = np.array([], dtype='float32')
            sig_unit = pq.dimensionless
            t_start = n_start.rescale('s')
        else:
            data_times = data_times[mask].astype(float)

            if scaling == 'voltage':
                if not self._avail_files['nev']:
                    raise ValueError(
                        'Cannot convert signals in filespec 2.1 nsX '
                        'files to voltage without nev file.')
                sig_ch = signal[dbl_idx][:, idx_ch][mask].astype('float32')

                # transform dig value to physical value
                sym_ana = (max_ana[idx_ch] == -min_ana[idx_ch])
                sym_dig = (max_dig[idx_ch] == -min_dig[idx_ch])
                if sym_ana and sym_dig:
                    sig_ch *= float(max_ana[idx_ch]) / float(max_dig[idx_ch])
                else:
                    # general case (same result as above for symmetric input)
                    sig_ch -= min_dig[idx_ch]
                    sig_ch *= float(max_ana[idx_ch] - min_ana[idx_ch]) / \
                        float(max_dig[idx_ch] - min_dig[idx_ch])
                    sig_ch += float(min_ana[idx_ch])
                sig_unit = units[idx_ch].decode()
            elif scaling == 'raw':
                sig_ch = signal[dbl_idx][:, idx_ch][mask].astype(int)
                sig_unit = pq.dimensionless
            else:
                raise ValueError(
                    'Unknown option {0} for parameter '
                    'scaling.'.format(scaling))
            t_start = data_times[0].rescale(nsx_time_unit)

        anasig = AnalogSignal(
            signal=pq.Quantity(sig_ch, sig_unit, copy=False),
            sampling_rate=sampling_rate,
            t_start=t_start,
            name=labels[idx_ch],
            description=description,
            file_origin='.'.join([self._filenames['nsx'], 'ns%i' % nsx_nb]))

        if lazy:
            anasig.lazy_shape = lazy_shape

        anasig.annotate(
            nsx=nsx_nb,
            channel_id=int(channel_id))

        return anasig

    def __read_unit(self, unit_id, channel_id):
        """
        Creates unit with unit id for given channel id.
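        Hedged sketch (standalone illustration, values made up): the linear
        digital-to-analog mapping applied in __read_analogsignal above for
        scaling='voltage' in the general, non-symmetric case:

        >>> def dig_to_phys(raw, min_dig, max_dig, min_ana, max_ana):
        ...     scale = float(max_ana - min_ana) / float(max_dig - min_dig)
        ...     return (raw - float(min_dig)) * scale + float(min_ana)
        >>> round(dig_to_phys(32767, -32768, 32767, -8192, 8191), 3)
        8191.0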
""" # define a name for spiketrain # (unique identifier: 1000 * elid + unit_nb) name = "Unit {0}".format(1000 * channel_id + unit_id) # define description for spiketrain desc = 'Unit from channel: {0}, id: {1}'.format( channel_id, self.__get_unit_classification(unit_id)) un = Unit( name=name, description=desc, file_origin='.'.join([self._filenames['nev'], 'nev'])) # add additional annotations un.annotate( unit_id=int(unit_id), channel_id=int(channel_id)) return un def __read_channelindex( self, channel_id, index=None, channel_units=None, cascade=True): """ Returns a ChannelIndex with the given index for the given channels containing a neo.core.unit.Unit object list of the given units. """ flt_type = {0: 'None', 1: 'Butterworth'} chidx = ChannelIndex( np.array([channel_id]), file_origin=self.filename) if index is not None: chidx.index = index chidx.name = "ChannelIndex {0}".format(chidx.index) else: chidx.name = "ChannelIndex" if self._avail_files['nev']: channel_labels = self.__nev_params('channel_labels') if channel_labels is not None: chidx.channel_names = np.array([channel_labels[channel_id]]) chidx.channel_ids = np.array([channel_id]) # additional annotations from nev if channel_id in self.__nev_ext_header[b'NEUEVWAV']['electrode_id']: get_idx = list( self.__nev_ext_header[b'NEUEVWAV']['electrode_id']).index( channel_id) chidx.annotate( connector_ID=self.__nev_ext_header[ b'NEUEVWAV']['physical_connector'][get_idx], connector_pinID=self.__nev_ext_header[ b'NEUEVWAV']['connector_pin'][get_idx], nev_dig_factor=self.__nev_ext_header[ b'NEUEVWAV']['digitization_factor'][get_idx], nev_energy_threshold=self.__nev_ext_header[ b'NEUEVWAV']['energy_threshold'][get_idx] * pq.uV, nev_hi_threshold=self.__nev_ext_header[ b'NEUEVWAV']['hi_threshold'][get_idx] * pq.uV, nev_lo_threshold=self.__nev_ext_header[ b'NEUEVWAV']['lo_threshold'][get_idx] * pq.uV, nb_sorted_units=self.__nev_ext_header[ b'NEUEVWAV']['nb_sorted_units'][get_idx], waveform_size=self.__waveform_size[self.__nev_spec]( )[channel_id] * self.__nev_params('waveform_time_unit')) # additional annotations from nev (only for file_spec > 2.1) if self.__nev_spec in ['2.2', '2.3']: get_idx = list( self.__nev_ext_header[ b'NEUEVFLT']['electrode_id']).index( channel_id) # filter type codes (extracted from blackrock manual) chidx.annotate( nev_hi_freq_corner=self.__nev_ext_header[b'NEUEVFLT'][ 'hi_freq_corner'][get_idx] / 1000. * pq.Hz, nev_hi_freq_order=self.__nev_ext_header[b'NEUEVFLT'][ 'hi_freq_order'][get_idx], nev_hi_freq_type=flt_type[self.__nev_ext_header[ b'NEUEVFLT']['hi_freq_type'][get_idx]], nev_lo_freq_corner=self.__nev_ext_header[ b'NEUEVFLT']['lo_freq_corner'][get_idx] / 1000. * pq.Hz, nev_lo_freq_order=self.__nev_ext_header[ b'NEUEVFLT']['lo_freq_order'][get_idx], nev_lo_freq_type=flt_type[self.__nev_ext_header[ b'NEUEVFLT']['lo_freq_type'][get_idx]]) # additional information about the LFP signal if self.__nev_spec in ['2.2', '2.3'] and self.__nsx_ext_header: # It does not matter which nsX file to ask for this info k = list(self.__nsx_ext_header.keys())[0] if channel_id in self.__nsx_ext_header[k]['electrode_id']: get_idx = list( self.__nsx_ext_header[k]['electrode_id']).index( channel_id) chidx.annotate( nsx_hi_freq_corner=self.__nsx_ext_header[k][ 'hi_freq_corner'][get_idx] / 1000. * pq.Hz, nsx_lo_freq_corner=self.__nsx_ext_header[k][ 'lo_freq_corner'][get_idx] / 1000. 
* pq.Hz, nsx_hi_freq_order=self.__nsx_ext_header[k][ 'hi_freq_order'][get_idx], nsx_lo_freq_order=self.__nsx_ext_header[k][ 'lo_freq_order'][get_idx], nsx_hi_freq_type=flt_type[ self.__nsx_ext_header[k]['hi_freq_type'][get_idx]], nsx_lo_freq_type=flt_type[ self.__nsx_ext_header[k]['hi_freq_type'][get_idx]]) chidx.description = \ "Container for units and groups analogsignals of one recording " \ "channel across segments." if not cascade: return chidx if self._avail_files['nev']: # read nev data nev_data = self.__nev_data_reader[self.__nev_spec]() if channel_units is not None: # extract first data for channel ch_mask = (nev_data['Spikes']['packet_id'] == channel_id) data_ch = nev_data['Spikes'][ch_mask] for un_id in channel_units: if un_id in np.unique(data_ch['unit_class_nb']): un = self.__read_unit( unit_id=un_id, channel_id=channel_id) chidx.units.append(un) chidx.create_many_to_one_relationship() return chidx def read_segment( self, n_start, n_stop, name=None, description=None, index=None, nsx_to_load='none', channels='none', units='none', load_waveforms=False, load_events=False, scaling='raw', lazy=False, cascade=True): """ Returns an annotated neo.core.segment.Segment. Args: n_start (Quantity): Start time of maximum time range of signals contained in this segment. n_stop (Quantity): Stop time of maximum time range of signals contained in this segment. name (None, string): If None, name is set to default, otherwise it is set to user input. description (None, string): If None, description is set to default, otherwise it is set to user input. index (None, int): If not None, index of segment is set to user index. nsx_to_load (int, list, str): ID(s) of nsx file(s) from which to load data, e.g., if set to 5 only data from the ns5 file are loaded. If 'none' or empty list, no nsx files and therefore no analog signals are loaded. If 'all', data from all available nsx are loaded. channels (int, list, str): Channel id(s) from which to load data. If 'none' or empty list, no channels and therefore no analog signal or spiketrains are loaded. If 'all', all available channels are loaded. units (int, list, str, dict): ID(s) of unit(s) to load. If 'none' or empty list, no units and therefore no spiketrains are loaded. If 'all', all available units are loaded. If dict, the above can be specified individually for each channel (keys), e.g. {1: 5, 2: 'all'} loads unit 5 from channel 1 and all units from channel 2. load_waveforms (boolean): If True, waveforms are attached to all loaded spiketrains. load_events (boolean): If True, all recorded events are loaded. scaling (str): Determines whether time series of individual electrodes/channels are returned as AnalogSignals containing raw integer samples ('raw'), or scaled to arrays of floats representing voltage ('voltage'). Note that for file specification 2.1 and lower, the option 'voltage' requires a nev file to be present. lazy (boolean): If True, only the shape of the data is loaded. cascade (boolean): If True, only the segment without children is returned. Returns: Segment (neo.Segment): Returns the specified segment. See documentation of `read_block()` for a full list of annotations of all child objects. 
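        Usage (hedged sketch; the class name, import path and file name below
        are assumptions for illustration, not taken from this file):

        >>> import quantities as pq
        >>> from neo.io import BlackrockIO  # assumed import path
        >>> reader = BlackrockIO(filename='myfile')  # myfile.nev, myfile.ns5, ...
        >>> seg = reader.read_segment(
        ...     n_start=0 * pq.s, n_stop=10 * pq.s,
        ...     nsx_to_load=5, channels='all', units='all',
        ...     load_waveforms=True, load_events=True)
        >>> seg.analogsignals, seg.spiketrains, seg.events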
""" # Make sure that input args are transformed into correct instances nsx_to_load = self.__transform_nsx_to_load(nsx_to_load) channels = self.__transform_channels(channels, nsx_to_load) units = self.__transform_units(units, channels) seg = Segment(file_origin=self.filename) # set user defined annotations if they were provided if index is None: seg.index = 0 else: seg.index = index if name is None: seg.name = "Segment {0}".format(seg.index) else: seg.name = name if description is None: seg.description = "Segment containing data from t_min to t_max." else: seg.description = description if not cascade: return seg if self._avail_files['nev']: # filename = self._filenames['nev'] + '.nev' # annotate segment according to file headers seg.rec_datetime = datetime.datetime( year=self.__nev_basic_header['year'], month=self.__nev_basic_header['month'], day=self.__nev_basic_header['day'], hour=self.__nev_basic_header['hour'], minute=self.__nev_basic_header['minute'], second=self.__nev_basic_header['second'], microsecond=self.__nev_basic_header['millisecond']) # read nev data nev_data = self.__nev_data_reader[self.__nev_spec]() # read non-neural experimental events if load_events: ev_dict = self.__nonneural_evtypes[self.__nev_spec]( nev_data['NonNeural']) for ev_type in ev_dict.keys(): ev = self.__read_event( n_start=n_start, n_stop=n_stop, data=nev_data['NonNeural'], ev_dict=ev_dict[ev_type], lazy=lazy) if ev is not None: seg.events.append(ev) # TODO: not yet implemented (only avail in nev_spec 2.3) # videosync events # trackingevents events # buttontrigger events # configevent events # get spiketrain if units is not None: not_existing_units = [] for ch_id in units.keys(): # extract first data for channel ch_mask = (nev_data['Spikes']['packet_id'] == ch_id) data_ch = nev_data['Spikes'][ch_mask] if units[ch_id] is not None: for un_id in units[ch_id]: if un_id in np.unique(data_ch['unit_class_nb']): # extract then data for unit if unit exists un_mask = (data_ch['unit_class_nb'] == un_id) data_un = data_ch[un_mask] st = self.__read_spiketrain( n_start=n_start, n_stop=n_stop, spikes=data_un, channel_id=ch_id, unit_id=un_id, load_waveforms=load_waveforms, scaling=scaling, lazy=lazy) seg.spiketrains.append(st) else: not_existing_units.append(un_id) if not_existing_units: self._print_verbose( "Units {0} on channel {1} do not " "exist".format(not_existing_units, ch_id)) else: self._print_verbose( "There are no units specified for channel " "{0}".format(ch_id)) if nsx_to_load is not None: for nsx_nb in nsx_to_load: # read nsx data nsx_data = \ self.__nsx_data_reader[self.__nsx_spec[nsx_nb]](nsx_nb) # read Analogsignals for ch_id in channels: anasig = self.__read_analogsignal( n_start=n_start, n_stop=n_stop, signal=nsx_data, channel_id=ch_id, nsx_nb=nsx_nb, scaling=scaling, lazy=lazy) if anasig is not None: seg.analogsignals.append(anasig) # TODO: not yet implemented # if self._avail_files['sif']: # sif_header = self._read_sif(self._filenames['sif'] + '.sif') # TODO: not yet implemented # if self._avail_files['ccf']: # ccf_header = self._read_sif(self._filenames['ccf'] + '.ccf') seg.create_many_to_one_relationship() return seg def read_block( self, index=None, name=None, description=None, nsx_to_load='none', n_starts=None, n_stops=None, channels='none', units='none', load_waveforms=False, load_events=False, scaling='raw', lazy=False, cascade=True): """ Args: index (None, int): If not None, index of block is set to user input. name (None, str): If None, name is set to default, otherwise it is set to user input. 
description (None, str): If None, description is set to default, otherwise it is set to user input. nsx_to_load (int, list, str): ID(s) of nsx file(s) from which to load data, e.g., if set to 5 only data from the ns5 file are loaded. If 'none' or empty list, no nsx files and therefore no analog signals are loaded. If 'all', data from all available nsx are loaded. n_starts (None, Quantity, list): Start times for data in each segment. Number of entries must be equal to length of n_stops. If None, intrinsic recording start times of files set are used. n_stops (None, Quantity, list): Stop times for data in each segment. Number of entries must be equal to length of n_starts. If None, intrinsic recording stop times of files set are used. channels (int, list, str): Channel id(s) from which to load data. If 'none' or empty list, no channels and therefore no analog signal or spiketrains are loaded. If 'all', all available channels are loaded. units (int, list, str, dict): ID(s) of unit(s) to load. If 'none' or empty list, no units and therefore no spiketrains are loaded. If 'all', all available units are loaded. If dict, the above can be specified individually for each channel (keys), e.g. {1: 5, 2: 'all'} loads unit 5 from channel 1 and all units from channel 2. load_waveforms (boolean): If True, waveforms are attached to all loaded spiketrains. load_events (boolean): If True, all recorded events are loaded. scaling (str): Determines whether time series of individual electrodes/channels are returned as AnalogSignals containing raw integer samples ('raw'), or scaled to arrays of floats representing voltage ('voltage'). Note that for file specification 2.1 and lower, the option 'voltage' requires a nev file to be present. lazy (bool): If True, only the shape of the data is loaded. cascade (bool or "lazy"): If True, only the block without children is returned. Returns: Block (neo.segment.Block): Block linking all loaded Neo objects. Block annotations: avail_file_set (list): List of extensions of all available files for the given recording. avail_nsx (list of int): List of integers specifying the .nsX files available, e.g., [2, 5] indicates that an ns2 and and ns5 file are available. avail_nev (bool): True if a .nev file is available. avail_ccf (bool): True if a .ccf file is available. avail_sif (bool): True if a .sif file is available. rec_pauses (bool): True if the session contains a recording pause (i.e., multiple segments). nb_segments (int): Number of segments created after merging recording times specified by user with the intrinsic ones of the file set. Segment annotations: None. ChannelIndex annotations: waveform_size (Quantitiy): Length of time used to save spike waveforms (in units of 1/30000 s). nev_hi_freq_corner (Quantitiy), nev_lo_freq_corner (Quantitiy), nev_hi_freq_order (int), nev_lo_freq_order (int), nev_hi_freq_type (str), nev_lo_freq_type (str), nev_hi_threshold, nev_lo_threshold, nev_energy_threshold (quantity): Indicates parameters of spike detection. nsx_hi_freq_corner (Quantity), nsx_lo_freq_corner (Quantity) nsx_hi_freq_order (int), nsx_lo_freq_order (int), nsx_hi_freq_type (str), nsx_lo_freq_type (str) Indicates parameters of the filtered signal in one of the files ns1-ns5 (ns6, if available, is not filtered). nev_dig_factor (int): Digitization factor in microvolts of the nev file, used to convert raw samples to volt. connector_ID, connector_pinID (int): ID of connector and pin on the connector where the channel was recorded from. 
nb_sorted_units (int): Number of sorted units on this channel (noise, mua and sua). Unit annotations: unit_id (int): ID of the unit. channel_id (int): Channel ID (Blackrock ID) from which the unit was loaded (equiv. to the single list entry in the attribute channel_ids of ChannelIndex parent). AnalogSignal annotations: nsx (int): nsX file the signal was loaded from, e.g., 5 indicates the .ns5 file. channel_id (int): Channel ID (Blackrock ID) from which the signal was loaded. Spiketrain annotations: unit_id (int): ID of the unit from which the spikes were recorded. channel_id (int): Channel ID (Blackrock ID) from which the spikes were loaded. Event annotations: The resulting Block contains one Event object with the name `digital_input_port`. It contains all digitally recorded events, with the event code coded in the labels of the Event. The Event object contains no further annotation. """ # Make sure that input args are transformed into correct instances nsx_to_load = self.__transform_nsx_to_load(nsx_to_load) channels = self.__transform_channels(channels, nsx_to_load) units = self.__transform_units(units, channels) # Create block bl = Block(file_origin=self.filename) # set user defined annotations if they were provided if index is not None: bl.index = index if name is None: bl.name = "Blackrock Data Block" else: bl.name = name if description is None: bl.description = "Block of data from Blackrock file set." else: bl.description = description if self._avail_files['nev']: bl.rec_datetime = self.__nev_params('rec_datetime') bl.annotate( avail_file_set=[k for k, v in self._avail_files.items() if v]) bl.annotate(avail_nsx=self._avail_nsx) bl.annotate(avail_nev=self._avail_files['nev']) bl.annotate(avail_sif=self._avail_files['sif']) bl.annotate(avail_ccf=self._avail_files['ccf']) bl.annotate(rec_pauses=False) # Test n_starts and n_stops user requirements and combine them if # possible with file internal n_starts and n_stops from rec pauses. n_starts, n_stops = \ self.__merge_time_ranges(n_starts, n_stops, nsx_to_load) bl.annotate(nb_segments=len(n_starts)) if not cascade: return bl # read segment for seg_idx, (n_start, n_stop) in enumerate(zip(n_starts, n_stops)): seg = self.read_segment( n_start=n_start, n_stop=n_stop, index=seg_idx, nsx_to_load=nsx_to_load, channels=channels, units=units, load_waveforms=load_waveforms, load_events=load_events, scaling=scaling, lazy=lazy, cascade=cascade) bl.segments.append(seg) # read channelindexes if channels: for ch_id in channels: if units and ch_id in units.keys(): ch_units = units[ch_id] else: ch_units = None chidx = self.__read_channelindex( channel_id=ch_id, index=0, channel_units=ch_units, cascade=cascade) for seg in bl.segments: if ch_units: for un in chidx.units: sts = seg.filter( targdict={'name': un.name}, objects='SpikeTrain') for st in sts: un.spiketrains.append(st) anasigs = seg.filter( targdict={'channel_id': ch_id}, objects='AnalogSignal') for anasig in anasigs: chidx.analogsignals.append(anasig) bl.channel_indexes.append(chidx) bl.create_many_to_one_relationship() return bl def __str__(self): """ Prints summary of the Blackrock data file set. 
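        Hedged usage sketch for ``read_block()`` above (reader construction
        and file name as assumed in the read_segment example; annotation keys
        taken from the documentation above):

        >>> blk = reader.read_block(
        ...     nsx_to_load=5, channels=[1, 2], units={1: 'all', 2: 0},
        ...     load_events=True)
        >>> blk.annotations['nb_segments'], blk.annotations['rec_pauses']
        >>> for chidx in blk.channel_indexes:
        ...     print(chidx.annotations.get('nev_dig_factor'))
        >>> blk.segments[0].spiketrains[0].annotations['channel_id']
        >>> print(reader)   # prints the file-set summary produced by __str__ below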
""" output = "\nFile Origins for Blackrock File Set\n" \ "====================================\n" for ftype in self._filenames.keys(): output += ftype + ':' + self._filenames[ftype] + '\n' if self._avail_files['nev']: output += "\nEvent Parameters (NEV)\n" \ "====================================\n" \ "Timestamp resolution (Hz): " + \ str(self.__nev_basic_header['timestamp_resolution']) + \ "\nWaveform resolution (Hz): " + \ str(self.__nev_basic_header['sample_resolution']) if b'NEUEVWAV' in self.__nev_ext_header.keys(): avail_el = \ self.__nev_ext_header[b'NEUEVWAV']['electrode_id'] con = \ self.__nev_ext_header[b'NEUEVWAV']['physical_connector'] pin = \ self.__nev_ext_header[b'NEUEVWAV']['connector_pin'] nb_units = \ self.__nev_ext_header[b'NEUEVWAV']['nb_sorted_units'] output += "\n\nAvailable electrode IDs:\n" \ "====================================\n" for i, el in enumerate(avail_el): output += "Electrode ID %i: " % el channel_labels = self.__nev_params('channel_labels') if channel_labels is not None: output += "label %s: " % channel_labels[el] output += "connector: %i, " % con[i] output += "pin: %i, " % pin[i] output += 'nb_units: %i\n' % nb_units[i] for nsx_nb in self._avail_nsx: analog_res = self.__nsx_params[self.__nsx_spec[nsx_nb]]( 'sampling_rate', nsx_nb) avail_el = [ el for el in self.__nsx_ext_header[nsx_nb]['electrode_id']] output += "\nAnalog Parameters (NS" \ + str(nsx_nb) + ")\n====================================" output += "\nResolution (Hz): %i" % analog_res output += "\nAvailable channel IDs: " + \ ", ".join(["%i" % a for a in avail_el]) + "\n" return output neo-0.7.2/neo/io/brainvisionio.py0000600013464101346420000000065413507452453015120 0ustar yohyoh# -*- coding: utf-8 -*- from neo.io.basefromrawio import BaseFromRaw from neo.rawio.brainvisionrawio import BrainVisionRawIO class BrainVisionIO(BrainVisionRawIO, BaseFromRaw): """Class for reading data from the BrainVision product.""" _prefered_signal_group_mode = 'split-all' def __init__(self, filename): BrainVisionRawIO.__init__(self, filename=filename) BaseFromRaw.__init__(self, filename) neo-0.7.2/neo/io/brainwaredamio.py0000600013464101346420000002063713507452453015234 0ustar yohyoh# -*- coding: utf-8 -*- ''' Class for reading from Brainware DAM files DAM files are binary files for holding raw data. They are broken up into sequence of Segments, each containing a single raw trace and parameters. The DAM file does NOT contain a sampling rate, nor can it be reliably calculated from any of the parameters. You can calculate it from the "sweep length" attribute if it is present, but it isn't always present. It is more reliable to get it from the corresponding SRC file or F32 file if you have one. The DAM file also does not divide up data into Blocks, so only a single Block is returned.. Brainware was developed by Dr. Jan Schnupp and is availabe from Tucker Davis Technologies, Inc. http://www.tdt.com/downloads.htm Neither Dr. Jan Schnupp nor Tucker Davis Technologies, Inc. had any part in the development of this code The code is implemented with the permission of Dr. 
Jan Schnupp Author: Todd Jennings ''' # needed for python 3 compatibility from __future__ import absolute_import, division, print_function # import needed core python modules import os import os.path # numpy and quantities are already required by neo import numpy as np import quantities as pq # needed core neo modules from neo.core import (AnalogSignal, Block, ChannelIndex, Segment) # need to subclass BaseIO from neo.io.baseio import BaseIO class BrainwareDamIO(BaseIO): """ Class for reading Brainware raw data files with the extension '.dam'. The read_block method returns the first Block of the file. It will automatically close the file after reading. The read method is the same as read_block. Note: The file format does not contain a sampling rate. The sampling rate is set to 1 Hz, but this is arbitrary. If you have a corresponding .src or .f32 file, you can get the sampling rate from that. It may also be possible to infer it from the attributes, such as "sweep length", if present. Usage: >>> from neo.io.brainwaredamio import BrainwareDamIO >>> damfile = BrainwareDamIO(filename='multi_500ms_mulitrep_ch1.dam') >>> blk1 = damfile.read() >>> blk2 = damfile.read_block() >>> print blk1.segments >>> print blk1.segments[0].analogsignals >>> print blk1.units >>> print blk1.units[0].name >>> print blk2 >>> print blk2[0].segments """ is_readable = True # This class can only read data is_writable = False # write is not supported # This class is able to directly or indirectly handle the following objects # You can notice that this greatly simplifies the full Neo object hierarchy supported_objects = [Block, ChannelIndex, Segment, AnalogSignal] readable_objects = [Block] writeable_objects = [] has_header = False is_streameable = False # This is for GUI stuff: a definition for parameters when reading. # This dict should be keyed by object (`Block`). Each entry is a list # of tuple. The first entry in each tuple is the parameter name. The # second entry is a dict with keys 'value' (for default value), # and 'label' (for a descriptive name). # Note that if the highest-level object requires parameters, # common_io_test will be skipped. read_params = {Block: []} # do not support write so no GUI stuff write_params = None name = 'Brainware DAM File' extensions = ['dam'] mode = 'file' def __init__(self, filename=None): ''' Arguments: filename: the filename ''' BaseIO.__init__(self) self._path = filename self._filename = os.path.basename(filename) self._fsrc = None def read(self, lazy=False, **kargs): ''' Reads raw data file "fname" generated with BrainWare ''' assert not lazy, 'Do not support lazy' return self.read_block(lazy=lazy) def read_block(self, lazy=False, **kargs): ''' Reads a block from the raw data file "fname" generated with BrainWare ''' assert not lazy, 'Do not support lazy' # there are no keyargs implemented to so far. 
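        # Hedged usage sketch (the 25 kHz rate is an assumption; the file name
        # is taken from the class docstring): DAM files carry no sampling
        # rate, so the signals read here use the arbitrary 1 Hz / 1 s sampling
        # period noted above. If the true rate is known (e.g. from a matching
        # .src or .f32 file), it can be corrected after reading:
        #
        # >>> import quantities as pq
        # >>> from neo.io.brainwaredamio import BrainwareDamIO
        # >>> blk = BrainwareDamIO(filename='multi_500ms_mulitrep_ch1.dam').read_block()
        # >>> for seg in blk.segments:
        # ...     seg.analogsignals[0].sampling_rate = 25000 * pq.Hz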
If someone tries to pass # them they are expecting them to do something or making a mistake, # neither of which should pass silently if kargs: raise NotImplementedError('This method does not have any ' 'arguments implemented yet') self._fsrc = None block = Block(file_origin=self._filename) # create the objects to store other objects chx = ChannelIndex(file_origin=self._filename, channel_ids=np.array([1]), index=np.array([0]), channel_names=np.array(['Chan1'], dtype='S')) # load objects into their containers block.channel_indexes.append(chx) # open the file with open(self._path, 'rb') as fobject: # while the file is not done keep reading segments while True: seg = self._read_segment(fobject) # if there are no more Segments, stop if not seg: break # store the segment and signals seg.analogsignals[0].channel_index = chx block.segments.append(seg) # remove the file object self._fsrc = None block.create_many_to_one_relationship() return block # ------------------------------------------------------------------------- # ------------------------------------------------------------------------- # IMPORTANT!!! # These are private methods implementing the internal reading mechanism. # Due to the way BrainWare DAM files are structured, they CANNOT be used # on their own. Calling these manually will almost certainly alter your # position in the file in an unrecoverable manner, whether they throw # an exception or not. # ------------------------------------------------------------------------- # ------------------------------------------------------------------------- def _read_segment(self, fobject): ''' Read a single segment with a single analogsignal Returns the segment or None if there are no more segments ''' try: # float64 -- start time of the AnalogSignal t_start = np.fromfile(fobject, dtype=np.float64, count=1)[0] except IndexError: # if there are no more Segments, return return False # int16 -- index of the stimulus parameters seg_index = np.fromfile(fobject, dtype=np.int16, count=1)[0].tolist() # int16 -- number of stimulus parameters numelements = np.fromfile(fobject, dtype=np.int16, count=1)[0] # read the name strings for the stimulus parameters paramnames = [] for _ in range(numelements): # unit8 -- the number of characters in the string numchars = np.fromfile(fobject, dtype=np.uint8, count=1)[0] # char * numchars -- a single name string name = np.fromfile(fobject, dtype=np.uint8, count=numchars) # exclude invalid characters name = str(name[name >= 32].view('c').tostring()) # add the name to the list of names paramnames.append(name) # float32 * numelements -- the values for the stimulus parameters paramvalues = np.fromfile(fobject, dtype=np.float32, count=numelements) # combine parameter names and the parameters as a dict params = dict(zip(paramnames, paramvalues)) # int32 -- the number elements in the AnalogSignal numpts = np.fromfile(fobject, dtype=np.int32, count=1)[0] # int16 * numpts -- the AnalogSignal itself signal = np.fromfile(fobject, dtype=np.int16, count=numpts) sig = AnalogSignal(signal.astype(np.float) * pq.mV, t_start=t_start * pq.d, file_origin=self._filename, sampling_period=1. 
* pq.s, copy=False) # Note: setting the sampling_period to 1 s is arbitrary # load the AnalogSignal and parameters into a new Segment seg = Segment(file_origin=self._filename, index=seg_index, **params) seg.analogsignals = [sig] return seg neo-0.7.2/neo/io/brainwaref32io.py0000600013464101346420000002345013507452453015061 0ustar yohyoh# -*- coding: utf-8 -*- ''' Class for reading from Brainware F32 files F32 files are simplified binary files for holding spike data. Unlike SRC files, F32 files carry little metadata. This also means, however, that the file format does not change, unlike SRC files whose format changes periodically (although ideally SRC files are backwards-compatible). Each F32 file only holds a single Block. The only metadata stored in the file is the length of a single repetition of the stimulus and the values of the stimulus parameters (but not the names of the parameters). Brainware was developed by Dr. Jan Schnupp and is availabe from Tucker Davis Technologies, Inc. http://www.tdt.com/downloads.htm Neither Dr. Jan Schnupp nor Tucker Davis Technologies, Inc. had any part in the development of this code The code is implemented with the permission of Dr. Jan Schnupp Author: Todd Jennings ''' # needed for python 3 compatibility from __future__ import absolute_import, division, print_function # import needed core python modules from os import path # numpy and quantities are already required by neo import numpy as np import quantities as pq # needed core neo modules from neo.core import Block, ChannelIndex, Segment, SpikeTrain, Unit # need to subclass BaseIO from neo.io.baseio import BaseIO class BrainwareF32IO(BaseIO): ''' Class for reading Brainware Spike ReCord files with the extension '.f32' The read_block method returns the first Block of the file. It will automatically close the file after reading. The read method is the same as read_block. The read_all_blocks method automatically reads all Blocks. It will automatically close the file after reading. The read_next_block method will return one Block each time it is called. It will automatically close the file and reset to the first Block after reading the last block. Call the close method to close the file and reset this method back to the first Block. The isopen property tells whether the file is currently open and reading or closed. Note 1: There is always only one ChannelIndex. BrainWare stores the equivalent of ChannelIndexes in separate files. Usage: >>> from neo.io.brainwaref32io import BrainwareF32IO >>> f32file = BrainwareF32IO(filename='multi_500ms_mulitrep_ch1.f32') >>> blk1 = f32file.read() >>> blk2 = f32file.read_block() >>> print blk1.segments >>> print blk1.segments[0].spiketrains >>> print blk1.units >>> print blk1.units[0].name >>> print blk2 >>> print blk2[0].segments ''' is_readable = True # This class can only read data is_writable = False # write is not supported # This class is able to directly or indirectly handle the following objects # You can notice that this greatly simplifies the full Neo object hierarchy supported_objects = [Block, ChannelIndex, Segment, SpikeTrain, Unit] readable_objects = [Block] writeable_objects = [] has_header = False is_streameable = False # This is for GUI stuff: a definition for parameters when reading. # This dict should be keyed by object (`Block`). Each entry is a list # of tuple. The first entry in each tuple is the parameter name. The # second entry is a dict with keys 'value' (for default value), # and 'label' (for a descriptive name). 
# Note that if the highest-level object requires parameters, # common_io_test will be skipped. read_params = {Block: []} # does not support write so no GUI stuff write_params = None name = 'Brainware F32 File' extensions = ['f32'] mode = 'file' def __init__(self, filename=None): ''' Arguments: filename: the filename ''' BaseIO.__init__(self) self._path = filename self._filename = path.basename(filename) self._fsrc = None self._blk = None self.__unit = None self.__t_stop = None self.__params = None self.__seg = None self.__spiketimes = None def read(self, lazy=False, **kargs): ''' Reads simple spike data file "fname" generated with BrainWare ''' return self.read_block(lazy=lazy, ) def read_block(self, lazy=False, **kargs): ''' Reads a block from the simple spike data file "fname" generated with BrainWare ''' assert not lazy, 'Do not support lazy' # there are no keyargs implemented to so far. If someone tries to pass # them they are expecting them to do something or making a mistake, # neither of which should pass silently if kargs: raise NotImplementedError('This method does not have any ' 'argument implemented yet') self._fsrc = None self._blk = Block(file_origin=self._filename) block = self._blk # create the objects to store other objects chx = ChannelIndex(file_origin=self._filename, index=np.array([], dtype=np.int)) self.__unit = Unit(file_origin=self._filename) # load objects into their containers block.channel_indexes.append(chx) chx.units.append(self.__unit) # initialize values self.__t_stop = None self.__params = None self.__seg = None self.__spiketimes = None # open the file with open(self._path, 'rb') as self._fsrc: res = True # while the file is not done keep reading segments while res: res = self.__read_id() block.create_many_to_one_relationship() # cleanup attributes self._fsrc = None self._blk = None self.__t_stop = None self.__params = None self.__seg = None self.__spiketimes = None return block # ------------------------------------------------------------------------- # ------------------------------------------------------------------------- # IMPORTANT!!! # These are private methods implementing the internal reading mechanism. # Due to the way BrainWare DAM files are structured, they CANNOT be used # on their own. Calling these manually will almost certainly alter your # position in the file in an unrecoverable manner, whether they throw # an exception or not. # ------------------------------------------------------------------------- # ------------------------------------------------------------------------- def __read_id(self): ''' Read the next ID number and do the appropriate task with it. Returns nothing. ''' try: # float32 -- ID of the first data sequence objid = np.fromfile(self._fsrc, dtype=np.float32, count=1)[0] except IndexError: # if we have a previous segment, save it self.__save_segment() # if there are no more Segments, return return False if objid == -2: self.__read_condition() elif objid == -1: self.__read_segment() else: self.__spiketimes.append(objid) return True def __read_condition(self): ''' Read the parameter values for a single stimulus condition. Returns nothing. 
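        Hedged illustration (made-up values) of the float32 record stream
        handled by __read_id above, with one condition record followed by one
        Segment per repetition::

            -2.0, 500.0, 2.0, 10.0, 3.0,   # condition: t_stop = 500 ms,
                                           #   2 parameter values (10.0, 3.0)
            -1.0,                          # start of a Segment (repetition 1)
            12.5, 80.2, 340.0,             # spike times in ms
            -1.0,                          # repetition 2 of the same condition
            ...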
''' # float32 -- SpikeTrain length in ms self.__t_stop = np.fromfile(self._fsrc, dtype=np.float32, count=1)[0] # float32 -- number of stimulus parameters numelements = int(np.fromfile(self._fsrc, dtype=np.float32, count=1)[0]) # [float32] * numelements -- stimulus parameter values paramvals = np.fromfile(self._fsrc, dtype=np.float32, count=numelements).tolist() # organize the parameers into a dictionary with arbitrary names paramnames = ['Param%s' % i for i in range(len(paramvals))] self.__params = dict(zip(paramnames, paramvals)) def __read_segment(self): ''' Setup the next Segment. Returns nothing. ''' # if we have a previous segment, save it self.__save_segment() # create the segment self.__seg = Segment(file_origin=self._filename, **self.__params) # create an empy array to save the spike times # this needs to be converted to a SpikeTrain before it can be used self.__spiketimes = [] def __save_segment(self): ''' Write the segment to the Block if it exists ''' # if this is the beginning of the first condition, then we don't want # to save, so exit # but set __seg from None to False so we know next time to create a # segment even if there are no spike in the condition if self.__seg is None: self.__seg = False return if not self.__seg: # create dummy values if there are no SpikeTrains in this condition self.__seg = Segment(file_origin=self._filename, **self.__params) self.__spiketimes = [] times = pq.Quantity(self.__spiketimes, dtype=np.float32, units=pq.ms) train = SpikeTrain(times, t_start=0 * pq.ms, t_stop=self.__t_stop * pq.ms, file_origin=self._filename) self.__seg.spiketrains = [train] self.__unit.spiketrains.append(train) self._blk.segments.append(self.__seg) # set an empty segment # from now on, we need to set __seg to False rather than None so # that if there is a condition with no SpikeTrains we know # to create an empty Segment self.__seg = False neo-0.7.2/neo/io/brainwaresrcio.py0000700013464101346420000016416013507452453015263 0ustar yohyoh# -*- coding: utf-8 -*- """ Class for reading from Brainware SRC files SRC files are binary files for holding spike data. They are broken up into nested data sequences of different types, with each type of sequence identified by a unique ID number. This allows new versions of sequences to be included without breaking backwards compatibility, since new versions can just be given a new ID number. The ID numbers and the format of the data they contain were taken from the Matlab-based reader function supplied with BrainWare. The python code, however, was implemented from scratch in Python using Python idioms. There are some situations where BrainWare data can overflow the SRC file, resulting in a corrupt file. Neither BrainWare nor the Matlab-based reader can read such files. This software, however, will try to recover the data, and in most cases can do so successfully. Each SRC file can hold the equivalent of multiple Neo Blocks. Brainware was developed by Dr. Jan Schnupp and is availabe from Tucker Davis Technologies, Inc. http://www.tdt.com/downloads.htm Neither Dr. Jan Schnupp nor Tucker Davis Technologies, Inc. had any part in the development of this code The code is implemented with the permission of Dr. 
Jan Schnupp Author: Todd Jennings """ # needed for python 3 compatibility from __future__ import absolute_import, division, print_function # import needed core python modules from datetime import datetime, timedelta from itertools import chain import logging import os.path import sys # numpy and quantities are already required by neo import numpy as np import quantities as pq # needed core neo modules from neo.core import (Block, Event, ChannelIndex, Segment, SpikeTrain, Unit) # need to subclass BaseIO from neo.io.baseio import BaseIO LOGHANDLER = logging.StreamHandler() PY_VER = sys.version_info[0] class BrainwareSrcIO(BaseIO): """ Class for reading Brainware Spike ReCord files with the extension '.src' The read_block method returns the first Block of the file. It will automatically close the file after reading. The read method is the same as read_block. The read_all_blocks method automatically reads all Blocks. It will automatically close the file after reading. The read_next_block method will return one Block each time it is called. It will automatically close the file and reset to the first Block after reading the last block. Call the close method to close the file and reset this method back to the first Block. The _isopen property tells whether the file is currently open and reading or closed. Note 1: The first Unit in each ChannelIndex is always UnassignedSpikes, which has a SpikeTrain for each Segment containing all the spikes not assigned to any Unit in that Segment. Note 2: The first Segment in each Block is always Comments, which stores all comments as an Event object. Note 3: The parameters from the BrainWare table for each condition are stored in the Segment annotations. If there are multiple repetitions of a condition, each repetition is stored as a separate Segment. Note 4: There is always only one ChannelIndex. BrainWare stores the equivalent of ChannelIndexes in separate files. Usage: >>> from neo.io.brainwaresrcio import BrainwareSrcIO >>> srcfile = BrainwareSrcIO(filename='multi_500ms_mulitrep_ch1.src') >>> blk1 = srcfile.read() >>> blk2 = srcfile.read_block() >>> blks = srcfile.read_all_blocks() >>> print blk1.segments >>> print blk1.segments[0].spiketrains >>> print blk1.units >>> print blk1.units[0].name >>> print blk2 >>> print blk2[0].segments >>> print blks >>> print blks[0].segments """ is_readable = True # This class can only read data is_writable = False # write is not supported # This class is able to directly or indirectly handle the following objects supported_objects = [Block, ChannelIndex, Segment, SpikeTrain, Event, Unit] readable_objects = [Block] writeable_objects = [] has_header = False is_streameable = False # This is for GUI stuff: a definition for parameters when reading. # This dict should be keyed by object (`Block`). Each entry is a list # of tuple. The first entry in each tuple is the parameter name. The # second entry is a dict with keys 'value' (for default value), # and 'label' (for a descriptive name). # Note that if the highest-level object requires parameters, # common_io_test will be skipped. read_params = {Block: []} # does not support write so no GUI stuff write_params = None name = 'Brainware SRC File' extensions = ['src'] mode = 'file' def __init__(self, filename=None): """ Arguments: filename: the filename """ BaseIO.__init__(self) # log the __init__ self.logger.info('__init__') # this stores the filename of the current object, exactly as it is # provided when the instance is initialized. 
self._filename = filename # this store the filename without the path self._file_origin = filename # This stores the file object for the current file self._fsrc = None # This stores the current Block self._blk = None # This stores the current ChannelIndex for easy access # It is equivalent to self._blk.channel_indexes[0] self._chx = None # This stores the current Segment for easy access # It is equivalent to self._blk.segments[-1] self._seg0 = None # this stores a dictionary of the Block's Units by name, # making it easier and faster to retrieve Units by name later # UnassignedSpikes and Units accessed by index are not stored here self._unitdict = {} # this stores the current Unit self._unit0 = None # if the file has a list with negative length, the rest of the file's # list lengths are unreliable, so we need to store this value for the # whole file self._damaged = False # this stores an empty SpikeTrain which is used in various places. self._default_spiketrain = None @property def _isopen(self): """ This property tells whether the SRC file associated with the IO object is open. """ return self._fsrc is not None def _opensrc(self): """ Open the file if it isn't already open. """ # if the file isn't already open, open it and clear the Blocks if not self._fsrc or self._fsrc.closed: self._fsrc = open(self._filename, 'rb') # figure out the filename of the current file self._file_origin = os.path.basename(self._filename) def close(self): """ Close the currently-open file and reset the current reading point. """ self.logger.info('close') if self._isopen and not self._fsrc.closed: self._fsrc.close() # we also need to reset all per-file attributes self._damaged = False self._fsrc = None self._seg0 = None self._file_origin = None self._lazy = False self._default_spiketrain = None def read(self, lazy=False, **kargs): """ Reads the first Block from the Spike ReCording file "filename" generated with BrainWare. If you wish to read more than one Block, please use read_all_blocks. """ return self.read_block(lazy=lazy, **kargs) def read_block(self, lazy=False, **kargs): """ Reads the first Block from the Spike ReCording file "filename" generated with BrainWare. If you wish to read more than one Block, please use read_all_blocks. """ assert not lazy, 'Do not support lazy' # there are no keyargs implemented to so far. If someone tries to pass # them they are expecting them to do something or making a mistake, # neither of which should pass silently if kargs: raise NotImplementedError('This method does not have any ' 'arguments implemented yet') blockobj = self.read_next_block() self.close() return blockobj def read_next_block(self, **kargs): """ Reads a single Block from the Spike ReCording file "filename" generated with BrainWare. Each call of read will return the next Block until all Blocks are loaded. After the last Block, the file will be automatically closed and the progress reset. Call the close method manually to reset back to the first Block. """ # there are no keyargs implemented to so far. 
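        # Hedged usage sketch (file name taken from the class docstring):
        # Blocks can also be read one at a time, stopping once the file has
        # closed itself after the last Block (reported by _isopen):
        #
        # >>> srcio = BrainwareSrcIO(filename='multi_500ms_mulitrep_ch1.src')
        # >>> blocks = []
        # >>> while True:
        # ...     blocks.append(srcio.read_next_block())
        # ...     if not srcio._isopen:
        # ...         break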
If someone tries to pass # them they are expecting them to do something or making a mistake, # neither of which should pass silently if kargs: raise NotImplementedError('This method does not have any ' 'arguments implemented yet') self._opensrc() # create _default_spiketrain here for performance reasons self._default_spiketrain = self._init_default_spiketrain.copy() self._default_spiketrain.file_origin = self._file_origin # create the Block and the contents all Blocks of from IO share self._blk = Block(file_origin=self._file_origin) self._chx = ChannelIndex(file_origin=self._file_origin, index=np.array([], dtype=np.int)) self._seg0 = Segment(name='Comments', file_origin=self._file_origin) self._unit0 = Unit(name='UnassignedSpikes', file_origin=self._file_origin, elliptic=[], boundaries=[], timestamp=[], max_valid=[]) self._blk.channel_indexes.append(self._chx) self._chx.units.append(self._unit0) self._blk.segments.append(self._seg0) # this actually reads the contents of the Block result = [] while hasattr(result, '__iter__'): try: result = self._read_by_id() except: self.close() raise # since we read at a Block level we always do this self._blk.create_many_to_one_relationship() # put the Block in a local object so it can be gargabe collected blockobj = self._blk # reset the per-Block attributes self._blk = None self._chx = None self._unitdict = {} # combine the comments into one big event self._combine_segment_events(self._seg0) # result is None iff the end of the file is reached, so we can # close the file # this notification is not helpful if using the read method with # cascading, since the user will know it is done when the method # returns a value if result is None: self.logger.info('Last Block read. Closing file.') self.close() return blockobj def read_all_blocks(self, lazy=False, **kargs): """ Reads all Blocks from the Spike ReCording file "filename" generated with BrainWare. The progress in the file is reset and the file closed then opened again prior to reading. The file is automatically closed after reading completes. """ # there are no keyargs implemented to so far. If someone tries to pass # them they are expecting them to do something or making a mistake, # neither of which should pass silently assert not lazy, 'Do not support lazy' if kargs: raise NotImplementedError('This method does not have any ' 'argument implemented yet') self.close() self._opensrc() # Read each Block. # After the last Block self._isopen is set to False, so this make a # good way to determine when to stop blocks = [] while self._isopen: try: blocks.append(self.read_next_block()) except: self.close() raise return blocks def _convert_timestamp(self, timestamp, start_date=datetime(1899, 12, 30)): """ _convert_timestamp(timestamp, start_date) - convert a timestamp in brainware src file units to a python datetime object. start_date defaults to 1899.12.30 (ISO format), which is the start date used by all BrainWare SRC data Blocks so far. If manually specified it should be a datetime object or any other object that can be added to a timedelta object. """ # datetime + timedelta = datetime again. try: timestamp = convert_brainwaresrc_timestamp(timestamp, start_date) except OverflowError as err: timestamp = start_date self.logger.exception('_convert_timestamp overflow') return timestamp # ------------------------------------------------------------------------- # ------------------------------------------------------------------------- # All methods from here on are private. 
They are not intended to be used # on their own, although methods that could theoretically be called on # their own are marked as such. All private methods could be renamed, # combined, or split at any time. All private methods prefixed by # "__read" or "__skip" will alter the current place in the file. # ------------------------------------------------------------------------- # ------------------------------------------------------------------------- def _read_by_id(self): """ Reader for generic data BrainWare SRC files are broken up into data sequences that are identified by an ID code. This method determines the ID code and calls the method to read the data sequence with that ID code. See the _ID_DICT attribute for a dictionary of code/method pairs. IMPORTANT!!! This is the only private method that can be called directly. The rest of the private methods can only safely be called by this method or by other private methods, since they depend on the current position in the file. """ try: # uint16 -- the ID code of the next sequence seqid = np.asscalar(np.fromfile(self._fsrc, dtype=np.uint16, count=1)) except ValueError: # return a None if at EOF. Other methods use None to recognize # an EOF return None # using the seqid, get the reader function from the reader dict readfunc = self._ID_DICT.get(seqid) if readfunc is None: if seqid <= 0: # return if end-of-sequence ID code. This has to be 0. # just calling "return" will return a None which is used as an # EOF indicator return 0 else: # return a warning if the key is invalid # (this is consistent with the official behavior, # even the official reference files have invalid keys # when using the official reference reader matlab # scripts self.logger.warning('unknown ID: %s', seqid) return [] try: # run the function to get the data return readfunc(self) except (EOFError, UnicodeDecodeError) as err: # return a warning if the EOF is reached in the middle of a method self.logger.exception('Premature end of file') return None # ------------------------------------------------------------------------- # ------------------------------------------------------------------------- # These are helper methods. They don't read from the file, so it # won't harm the reading process to call them, but they are only relevant # when used in other private methods. # # These are tuned to the particular needs of this IO class, they are # unlikely to work properly if used with another file format. # ------------------------------------------------------------------------- # ------------------------------------------------------------------------- def _assign_sequence(self, data_obj): """ _assign_sequence(data_obj) - Try to guess where an unknown sequence should go based on its class. Warning are issued if this method is used since manual reorganization may be needed. 
""" if isinstance(data_obj, Unit): self.logger.warning('Unknown Unit found, adding to Units list') self._chx.units.append(data_obj) if data_obj.name: self._unitdict[data_obj.name] = data_obj elif isinstance(data_obj, Segment): self.logger.warning('Unknown Segment found, ' 'adding to Segments list') self._blk.segments.append(data_obj) elif isinstance(data_obj, Event): self.logger.warning('Unknown Event found, ' 'adding to comment Events list') self._seg0.events.append(data_obj) elif isinstance(data_obj, SpikeTrain): self.logger.warning('Unknown SpikeTrain found, ' 'adding to the UnassignedSpikes Unit') self._unit0.spiketrains.append(data_obj) elif hasattr(data_obj, '__iter__') and not isinstance(data_obj, str): for sub_obj in data_obj: self._assign_sequence(sub_obj) else: if self.logger.isEnabledFor(logging.WARNING): self.logger.warning('Unrecognized sequence of type %s found, ' 'skipping', type(data_obj)) _default_datetime = datetime(1, 1, 1) _default_t_start = pq.Quantity(0., units=pq.ms, dtype=np.float32) _init_default_spiketrain = SpikeTrain(times=pq.Quantity([], units=pq.ms, dtype=np.float32), t_start=pq.Quantity(0, units=pq.ms, dtype=np.float32 ), t_stop=pq.Quantity(1, units=pq.ms, dtype=np.float32), waveforms=pq.Quantity([[[]]], dtype=np.int8, units=pq.mV), dtype=np.float32, copy=False, timestamp=_default_datetime, respwin=np.array([], dtype=np.int32), dama_index=-1, trig2=pq.Quantity([], units=pq.ms, dtype=np.uint8), side='') def _combine_events(self, events): """ _combine_events(events) - combine a list of Events with single events into one long Event """ if not events: event = Event(times=pq.Quantity([], units=pq.s), labels=np.array([], dtype='S'), senders=np.array([], dtype='S'), t_start=0) return event times = [] labels = [] senders = [] for event in events: times.append(event.times.magnitude) # With the introduction of array annotations and the adaptation of labels to use # this infrastructure, even single labels are wrapped into an array to ensure # consistency. # The following lines were 'labels.append(event.labels)' which assumed event.labels # to be a scalar. Thus, I can safely assume the array to have length 1, because # it only wraps this scalar. Now this scalar is accessed as the 0th element of # event.labels if event.labels.shape == (1,): labels.append(event.labels[0]) else: raise AssertionError("This single event has multiple labels in an array with " "shape {} instead of a single label.". format(event.labels.shape)) senders.append(event.annotations['sender']) times = np.array(times, dtype=np.float32) t_start = times.min() times = pq.Quantity(times - t_start, units=pq.d).rescale(pq.s) labels = np.array(labels) senders = np.array(senders) event = Event(times=times, labels=labels, t_start=t_start.tolist(), senders=senders) return event def _combine_segment_events(self, segment): """ _combine_segment_events(segment) Combine all Events in a segment. 
""" event = self._combine_events(segment.events) event_t_start = event.annotations.pop('t_start') segment.rec_datetime = self._convert_timestamp(event_t_start) segment.events = [event] event.segment = segment def _combine_spiketrains(self, spiketrains): """ _combine_spiketrains(spiketrains) - combine a list of SpikeTrains with single spikes into one long SpikeTrain """ if not spiketrains: return self._default_spiketrain.copy() if hasattr(spiketrains[0], 'waveforms') and len(spiketrains) == 1: train = spiketrains[0] return train if hasattr(spiketrains[0], 't_stop'): # workaround for bug in some broken files istrain = [hasattr(utrain, 'waveforms') for utrain in spiketrains] if not all(istrain): goodtrains = [itrain for i, itrain in enumerate(spiketrains) if istrain[i]] badtrains = [itrain for i, itrain in enumerate(spiketrains) if not istrain[i]] spiketrains = (goodtrains + [self._combine_spiketrains(badtrains)]) spiketrains = [itrain for itrain in spiketrains if itrain.size > 0] if not spiketrains: return self._default_spiketrain.copy() # get the times of the spiketrains and combine them waveforms = [itrain.waveforms for itrain in spiketrains] rawtrains = np.array(np.concatenate(spiketrains, axis=1)) times = pq.Quantity(rawtrains, units=pq.ms, copy=False) lens1 = np.array([wave.shape[1] for wave in waveforms]) lens2 = np.array([wave.shape[2] for wave in waveforms]) if lens1.max() != lens1.min() or lens2.max() != lens2.min(): lens1 = lens1.max() - lens1 lens2 = lens2.max() - lens2 waveforms = [np.pad(waveform, ((0, 0), (0, len1), (0, len2)), 'constant') for waveform, len1, len2 in zip(waveforms, lens1, lens2)] waveforms = np.concatenate(waveforms, axis=0) # extract the trig2 annotation trig2 = np.array(np.concatenate([itrain.annotations['trig2'] for itrain in spiketrains], axis=1)) trig2 = pq.Quantity(trig2, units=pq.ms) elif hasattr(spiketrains[0], 'units'): return self._combine_spiketrains([spiketrains]) else: times, waveforms, trig2 = zip(*spiketrains) times = np.concatenate(times, axis=0) # get the times of the SpikeTrains and combine them times = pq.Quantity(times, units=pq.ms, copy=False) # get the waveforms of the SpikeTrains and combine them # these should be a 3D array with the first axis being the spike, # the second axis being the recording channel (there is only one), # and the third axis being the actual waveform waveforms = np.concatenate(waveforms, axis=0) # extract the trig2 annotation trig2 = pq.Quantity(np.hstack(trig2), units=pq.ms, copy=False) if not times.size: return self._default_spiketrain.copy() # get the maximum time t_stop = times[-1] * 2. waveforms = pq.Quantity(waveforms, units=pq.mV, copy=False) train = SpikeTrain(times=times, copy=False, t_start=self._default_t_start.copy(), t_stop=t_stop, file_origin=self._file_origin, waveforms=waveforms, timestamp=self._default_datetime, respwin=np.array([], dtype=np.int32), dama_index=-1, trig2=trig2, side='') return train # ------------------------------------------------------------------------- # ------------------------------------------------------------------------- # IMPORTANT!!! # These are private methods implementing the internal reading mechanism. # Due to the way BrainWare SRC files are structured, they CANNOT be used # on their own. Calling these manually will almost certainly alter your # position in the file in an unrecoverable manner, whether they throw # an exception or not. 
# ------------------------------------------------------------------------- # ------------------------------------------------------------------------- def __read_str(self, numchars=1, utf=None): """ Read a string of a specific length. This is compatible with python 2 and python 3. """ rawstr = np.asscalar(np.fromfile(self._fsrc, dtype='S%s' % numchars, count=1)) if utf or (utf is None and PY_VER == 3): return rawstr.decode('utf-8') return rawstr def __read_annotations(self): """ Read the stimulus grid properties. ------------------------------------------------------------------- Returns a dictionary containing the parameter names as keys and the parameter values as values. The returned object must be added to the Block. ID: 29109 """ # int16 -- number of stimulus parameters numelements = np.fromfile(self._fsrc, dtype=np.int16, count=1)[0] if not numelements: return {} # [data sequence] * numelements -- parameter names names = [] for i in range(numelements): # {skip} = byte (char) -- skip one byte self._fsrc.seek(1, 1) # uint8 -- length of next string numchars = np.asscalar(np.fromfile(self._fsrc, dtype=np.uint8, count=1)) # if there is no name, make one up if not numchars: name = 'param%s' % i else: # char * numchars -- parameter name string name = self.__read_str(numchars) # if the name is already in there, add a unique number to it # so it isn't overwritten if name in names: name = name + str(i) names.append(name) # float32 * numelements -- an array of parameter values values = np.fromfile(self._fsrc, dtype=np.float32, count=numelements) # combine the names and values into a dict # the dict will be added to the annotations annotations = dict(zip(names, values)) return annotations def __read_annotations_old(self): """ Read the stimulus grid properties. Returns a dictionary containing the parameter names as keys and the parameter values as values. ------------------------------------------------ The returned objects must be added to the Block. This reads an old version of the format that does not store paramater names, so placeholder names are created instead. ID: 29099 """ # int16 * 14 -- an array of parameter values values = np.fromfile(self._fsrc, dtype=np.int16, count=14) # create dummy names and combine them with the values in a dict # the dict will be added to the annotations params = ['param%s' % i for i in range(len(values))] annotations = dict(zip(params, values)) return annotations def __read_comment(self): """ Read a single comment. The comment is stored as an Event in Segment 0, which is specifically for comments. ---------------------- Returns an empty list. The returned object is already added to the Block. No ID number: always called from another method """ # float64 -- timestamp (number of days since dec 30th 1899) time = np.fromfile(self._fsrc, dtype=np.double, count=1)[0] # int16 -- length of next string numchars1 = np.asscalar(np.fromfile(self._fsrc, dtype=np.int16, count=1)) # char * numchars -- the one who sent the comment sender = self.__read_str(numchars1) # int16 -- length of next string numchars2 = np.asscalar(np.fromfile(self._fsrc, dtype=np.int16, count=1)) # char * numchars -- comment text text = self.__read_str(numchars2, utf=False) comment = Event(times=pq.Quantity(time, units=pq.d), labels=text, sender=sender, file_origin=self._file_origin) self._seg0.events.append(comment) return [] def __read_list(self): """ Read a list of arbitrary data sequences It only says how many data sequences should be read. These sequences are then read by their ID number. 
Note that lists can be nested. If there are too many sequences (for instance if there are a large number of spikes in a Segment) then a negative number will be returned for the number of data sequences to read. In this case the method tries to guess. This also means that all future list data sequences have unreliable lengths as well. ------------------------------------------- Returns a list of objects. Whether these objects need to be added to the Block depends on the object in question. There are several data sequences that have identical formats but are used in different situations. That means this data sequences has multiple ID numbers. ID: 29082 ID: 29083 ID: 29091 ID: 29093 """ # int16 -- number of sequences to read numelements = np.fromfile(self._fsrc, dtype=np.int16, count=1)[0] # {skip} = bytes * 4 (int16 * 2) -- skip four bytes self._fsrc.seek(4, 1) if numelements == 0: return [] if not self._damaged and numelements < 0: self._damaged = True self.logger.error('Negative sequence count %s, file damaged', numelements) if not self._damaged: # read the sequences into a list seq_list = [self._read_by_id() for _ in range(numelements)] else: # read until we get some indication we should stop seq_list = [] # uint16 -- the ID of the next sequence seqidinit = np.fromfile(self._fsrc, dtype=np.uint16, count=1)[0] # {rewind} = byte * 2 (int16) -- move back 2 bytes, i.e. go back to # before the beginning of the seqid self._fsrc.seek(-2, 1) while 1: # uint16 -- the ID of the next sequence seqid = np.fromfile(self._fsrc, dtype=np.uint16, count=1)[0] # {rewind} = byte * 2 (int16) -- move back 2 bytes, i.e. go # back to before the beginning of the seqid self._fsrc.seek(-2, 1) # if we come across a new sequence, we are at the end of the # list so we should stop if seqidinit != seqid: break # otherwise read the next sequence seq_list.append(self._read_by_id()) return seq_list def __read_segment(self): """ Read an individual Segment. A Segment contains a dictionary of parameters, the length of the recording, a list of Units with their Spikes, and a list of Spikes not assigned to any Unit. The unassigned spikes are always stored in Unit 0, which is exclusively for storing these spikes. ------------------------------------------------- Returns the Segment object created by the method. The returned object is already added to the Block. 
ID: 29106
        """

        # (data_obj) -- the stimulus parameters for this segment
        annotations = self._read_by_id()
        annotations['feature_type'] = -1
        annotations['go_by_closest_unit_center'] = False
        annotations['include_unit_bounds'] = False

        # (data_obj) -- SpikeTrain list of unassigned spikes
        # these go in the first Unit since it is for unassigned spikes
        unassigned_spikes = self._read_by_id()
        self._unit0.spiketrains.extend(unassigned_spikes)

        # read a list of units and grab the second return value, which is the
        # SpikeTrains from this Segment (if we use the Unit we will get all the
        # SpikeTrains from that Unit, resulting in duplicates if we are past
        # the first Segment)
        trains = self._read_by_id()
        if not trains:
            if unassigned_spikes:
                # if there are no assigned spikes,
                # just use the unassigned spikes
                trains = zip(unassigned_spikes)
            else:
                # if there are no spiketrains at all,
                # create an empty spike train
                trains = [[self._default_spiketrain.copy()]]
        elif hasattr(trains[0], 'dtype'):
            # workaround for some broken files
            trains = [unassigned_spikes +
                      [self._combine_spiketrains([trains])]]
        else:
            # get the second element from each returned value,
            # which is the actual SpikeTrains
            trains = [unassigned_spikes] + [train[1] for train in trains]

        # re-organize by sweeps
        trains = zip(*trains)

        # int32 -- SpikeTrain length in ms
        spiketrainlen = pq.Quantity(np.fromfile(self._fsrc, dtype=np.int32,
                                                count=1)[0],
                                    units=pq.ms, copy=False)

        segments = []
        for train in trains:
            # create the Segment and add everything to it
            segment = Segment(file_origin=self._file_origin, **annotations)
            segment.spiketrains = train
            self._blk.segments.append(segment)
            segments.append(segment)
            for itrain in train:
                # use the SpikeTrain length to figure out the stop time
                # t_start is always 0 so we can ignore it
                itrain.t_stop = spiketrainlen

        return segments

    def __read_segment_list(self):
        """
        Read a list of Segments with comments.

        Since comments can occur at any point, whether a recording is
        happening or not, it is impossible to reliably assign them to a
        specific Segment.  For this reason they are always assigned to
        Segment 0, which is exclusively used to store comments.

        --------------------------------------------------------
        Returns a list of the Segments created with this method.

        The returned objects are already added to the Block.

        ID: 29112
        """

        # uint8 -- number of electrode channels in the Segment
        numchannels = np.fromfile(self._fsrc, dtype=np.uint8, count=1)[0]

        # [list of sequences] -- individual Segments
        segments = self.__read_list()
        while not hasattr(segments[0], 'spiketrains'):
            segments = list(chain(*segments))

        # char -- "side of brain" info
        side = self.__read_str(1)

        # int16 -- number of comments
        numelements = np.fromfile(self._fsrc, dtype=np.int16, count=1)[0]

        # comment_obj * numelements -- comments about the Segments
        # we don't know which Segment specifically, though
        for _ in range(numelements):
            self.__read_comment()

        # create a channel_index for the numchannels
        self._chx.index = np.arange(numchannels)
        self._chx.channel_names = np.array(['Chan{}'.format(i)
                                            for i in range(numchannels)],
                                           dtype='S')

        # store what side of the head we are dealing with
        for segment in segments:
            for spiketrain in segment.spiketrains:
                spiketrain.annotations['side'] = side

        return segments

    def __read_segment_list_v8(self):
        """
        Read a list of Segments with comments.

        This is version 8 of the data sequence.

        This is the same as __read_segment_list_var, but can also contain
        one or more arbitrary sequences.
The class makes an attempt to assign the sequences when possible, and warns the user when this happens (see the _assign_sequence method) -------------------------------------------------------- Returns a list of the Segments created with this method. The returned objects are already added to the Block. ID: 29117 """ # segment_collection_var -- this is based off a segment_collection_var segments = self.__read_segment_list_var() # uint16 -- the ID of the next sequence seqid = np.fromfile(self._fsrc, dtype=np.uint16, count=1)[0] # {rewind} = byte * 2 (int16) -- move back 2 bytes, i.e. go back to # before the beginning of the seqid self._fsrc.seek(-2, 1) if seqid in self._ID_DICT: # if it is a valid seqid, read it and try to figure out where # to put it self._assign_sequence(self._read_by_id()) else: # otherwise it is a Unit list self.__read_unit_list() # {skip} = byte * 2 (int16) -- skip 2 bytes self._fsrc.seek(2, 1) return segments def __read_segment_list_v9(self): """ Read a list of Segments with comments. This is version 9 of the data sequence. This is the same as __read_segment_list_v8, but contains some additional annotations. These annotations are added to the Segment. -------------------------------------------------------- Returns a list of the Segments created with this method. The returned objects are already added to the Block. ID: 29120 """ # segment_collection_v8 -- this is based off a segment_collection_v8 segments = self.__read_segment_list_v8() # uint8 feature_type = np.fromfile(self._fsrc, dtype=np.uint8, count=1)[0] # uint8 go_by_closest_unit_center = np.fromfile(self._fsrc, dtype=np.bool8, count=1)[0] # uint8 include_unit_bounds = np.fromfile(self._fsrc, dtype=np.bool8, count=1)[0] # create a dictionary of the annotations annotations = {'feature_type': feature_type, 'go_by_closest_unit_center': go_by_closest_unit_center, 'include_unit_bounds': include_unit_bounds} # add the annotations to each Segment for segment in segments: segment.annotations.update(annotations) return segments def __read_segment_list_var(self): """ Read a list of Segments with comments. This is the same as __read_segment_list, but contains information regarding the sampling period. This information is added to the SpikeTrains in the Segments. -------------------------------------------------------- Returns a list of the Segments created with this method. The returned objects are already added to the Block. ID: 29114 """ # float32 -- DA conversion clock period in microsec sampling_period = pq.Quantity(np.fromfile(self._fsrc, dtype=np.float32, count=1), units=pq.us, copy=False)[0] # segment_collection -- this is based off a segment_collection segments = self.__read_segment_list() # add the sampling period to each SpikeTrain for segment in segments: for spiketrain in segment.spiketrains: spiketrain.sampling_period = sampling_period return segments def __read_spike_fixed(self, numpts=40): """ Read a spike with a fixed waveform length (40 time bins) ------------------------------------------- Returns the time, waveform and trig2 value. The returned objects must be converted to a SpikeTrain then added to the Block. 
ID: 29079 """ # float32 -- spike time stamp in ms since start of SpikeTrain time = np.fromfile(self._fsrc, dtype=np.float32, count=1) # int8 * 40 -- spike shape -- use numpts for spike_var waveform = np.fromfile(self._fsrc, dtype=np.int8, count=numpts).reshape(1, 1, numpts) # uint8 -- point of return to noise trig2 = np.fromfile(self._fsrc, dtype=np.uint8, count=1) return time, waveform, trig2 def __read_spike_fixed_old(self): """ Read a spike with a fixed waveform length (40 time bins) This is an old version of the format. The time is stored as ints representing 1/25 ms time steps. It has no trigger information. ------------------------------------------- Returns the time, waveform and trig2 value. The returned objects must be converted to a SpikeTrain then added to the Block. ID: 29081 """ # int32 -- spike time stamp in ms since start of SpikeTrain time = np.fromfile(self._fsrc, dtype=np.int32, count=1) / 25. time = time.astype(np.float32) # int8 * 40 -- spike shape # This needs to be a 3D array, one for each channel. BrainWare # only ever has a single channel per file. waveform = np.fromfile(self._fsrc, dtype=np.int8, count=40).reshape(1, 1, 40) # create a dummy trig2 value trig2 = np.array([-1], dtype=np.uint8) return time, waveform, trig2 def __read_spike_var(self): """ Read a spike with a variable waveform length ------------------------------------------- Returns the time, waveform and trig2 value. The returned objects must be converted to a SpikeTrain then added to the Block. ID: 29115 """ # uint8 -- number of points in spike shape numpts = np.fromfile(self._fsrc, dtype=np.uint8, count=1)[0] # spike_fixed is the same as spike_var if you don't read the numpts # byte and set numpts = 40 return self.__read_spike_fixed(numpts) def __read_spiketrain_indexed(self): """ Read a SpikeTrain This is the same as __read_spiketrain_timestamped except it also contains the index of the Segment in the dam file. The index is stored as an annotation in the SpikeTrain. ------------------------------------------------- Returns a SpikeTrain object with multiple spikes. The returned object must be added to the Block. ID: 29121 """ # int32 -- index of the analogsignalarray in corresponding .dam file dama_index = np.fromfile(self._fsrc, dtype=np.int32, count=1)[0] # spiketrain_timestamped -- this is based off a spiketrain_timestamped spiketrain = self.__read_spiketrain_timestamped() # add the property to the dict spiketrain.annotations['dama_index'] = dama_index return spiketrain def __read_spiketrain_timestamped(self): """ Read a SpikeTrain This SpikeTrain contains a time stamp for when it was recorded The timestamp is stored as an annotation in the SpikeTrain. ------------------------------------------------- Returns a SpikeTrain object with multiple spikes. The returned object must be added to the Block. ID: 29110 """ # float64 -- timeStamp (number of days since dec 30th 1899) timestamp = np.fromfile(self._fsrc, dtype=np.double, count=1)[0] # convert to datetime object timestamp = self._convert_timestamp(timestamp) # seq_list -- spike list # combine the spikes into a single SpikeTrain spiketrain = self._combine_spiketrains(self.__read_list()) # add the timestamp spiketrain.annotations['timestamp'] = timestamp return spiketrain def __read_unit(self): """ Read all SpikeTrains from a single Segment and Unit This is the same as __read_unit_unsorted except it also contains information on the spike sorting boundaries. 
------------------------------------------------------------------ Returns a single Unit and a list of SpikeTrains from that Unit and current Segment, in that order. The SpikeTrains must be returned since it is not possible to determine from the Unit which SpikeTrains are from the current Segment. The returned objects are already added to the Block. The SpikeTrains must be added to the current Segment. ID: 29116 """ # same as unsorted Unit unit, trains = self.__read_unit_unsorted() # float32 * 18 -- Unit boundaries (IEEE 32-bit floats) unit.annotations['boundaries'] = [np.fromfile(self._fsrc, dtype=np.float32, count=18)] # uint8 * 9 -- boolean values indicating elliptic feature boundary # dimensions unit.annotations['elliptic'] = [np.fromfile(self._fsrc, dtype=np.uint8, count=9)] return unit, trains def __read_unit_list(self): """ A list of a list of Units ----------------------------------------------- Returns a list of Units modified in the method. The returned objects are already added to the Block. No ID number: only called by other methods """ # this is used to figure out which Units to return maxunit = 1 # int16 -- number of time slices numelements = np.fromfile(self._fsrc, dtype=np.int16, count=1)[0] # {sequence} * numelements1 -- the number of lists of Units to read self._chx.annotations['max_valid'] = [] for i in range(numelements): # {skip} = byte * 2 (int16) -- skip 2 bytes self._fsrc.seek(2, 1) # double max_valid = np.fromfile(self._fsrc, dtype=np.double, count=1)[0] # int16 - the number of Units to read numunits = np.fromfile(self._fsrc, dtype=np.int16, count=1)[0] # update tha maximum Unit so far maxunit = max(maxunit, numunits + 1) # if there aren't enough Units, create them # remember we need to skip the UnassignedSpikes Unit if numunits > len(self._chx.units) + 1: for ind1 in range(len(self._chx.units), numunits + 1): unit = Unit(name='unit%s' % ind1, file_origin=self._file_origin, elliptic=[], boundaries=[], timestamp=[], max_valid=[]) self._chx.units.append(unit) # {Block} * numelements -- Units for ind1 in range(numunits): # get the Unit with the given index # remember we need to skip the UnassignedSpikes Unit unit = self._chx.units[ind1 + 1] # {skip} = byte * 2 (int16) -- skip 2 bytes self._fsrc.seek(2, 1) # int16 -- a multiplier for the elliptic and boundaries # properties numelements3 = np.fromfile(self._fsrc, dtype=np.int16, count=1)[0] # uint8 * 10 * numelements3 -- boolean values indicating # elliptic feature boundary dimensions elliptic = np.fromfile(self._fsrc, dtype=np.uint8, count=10 * numelements3) # float32 * 20 * numelements3 -- feature boundaries boundaries = np.fromfile(self._fsrc, dtype=np.float32, count=20 * numelements3) unit.annotations['elliptic'].append(elliptic) unit.annotations['boundaries'].append(boundaries) unit.annotations['max_valid'].append(max_valid) return self._chx.units[1:maxunit] def __read_unit_list_timestamped(self): """ A list of a list of Units. This is the same as __read_unit_list, except that it also has a timestamp. This is added ad an annotation to all Units. ----------------------------------------------- Returns a list of Units modified in the method. The returned objects are already added to the Block. 
ID: 29119
        """

        # double -- time zero (number of days since dec 30th 1899)
        timestamp = np.fromfile(self._fsrc, dtype=np.double, count=1)[0]

        # convert to a datetime object
        timestamp = self._convert_timestamp(timestamp)

        # sorter -- this is based off a sorter
        units = self.__read_unit_list()

        for unit in units:
            unit.annotations['timestamp'].append(timestamp)

        return units

    def __read_unit_old(self):
        """
        Read all SpikeTrains from a single Segment and Unit

        This is the same as __read_unit_unsorted except it also contains
        information on the spike sorting boundaries.

        This is an old version of the format that used 48-bit floating-point
        numbers for the boundaries.  These cannot easily be read and so are
        skipped.

        ------------------------------------------------------------------
        Returns a single Unit and a list of SpikeTrains from that Unit and
        current Segment, in that order.  The SpikeTrains must be returned
        since it is not possible to determine from the Unit which
        SpikeTrains are from the current Segment.

        The returned objects are already added to the Block.  The
        SpikeTrains must be added to the current Segment.

        ID: 29107
        """

        # same as Unit
        unit, trains = self.__read_unit_unsorted()

        # bytes * 108 (float48 * 18) -- Unit boundaries (48-bit floating
        # point numbers are not supported so we skip them)
        self._fsrc.seek(108, 1)

        # uint8 * 9 -- boolean values indicating elliptic feature boundary
        # dimensions
        unit.annotations['elliptic'] = np.fromfile(self._fsrc,
                                                   dtype=np.uint8,
                                                   count=9).tolist()

        return unit, trains

    def __read_unit_unsorted(self):
        """
        Read all SpikeTrains from a single Segment and Unit

        This does not contain Unit boundaries.

        ------------------------------------------------------------------
        Returns a single Unit and a list of SpikeTrains from that Unit and
        current Segment, in that order.  The SpikeTrains must be returned
        since it is not possible to determine from the Unit which
        SpikeTrains are from the current Segment.

        The returned objects are already added to the Block.  The
        SpikeTrains must be added to the current Segment.

        ID: 29084
        """

        # {skip} = bytes * 2 (uint16) -- skip two bytes
        self._fsrc.seek(2, 1)

        # uint16 -- number of characters in next string
        numchars = np.asscalar(np.fromfile(self._fsrc,
                                           dtype=np.uint16, count=1))

        # char * numchars -- ID string of Unit
        name = self.__read_str(numchars)

        # int32 -- SpikeTrain length in ms
        # int32 * 4 -- response and spon period boundaries
        parts = np.fromfile(self._fsrc, dtype=np.int32, count=5)
        t_stop = pq.Quantity(parts[0].astype('float32'),
                             units=pq.ms, copy=False)
        respwin = parts[1:]

        # (data_obj) -- list of SpikeTrains
        spikeslists = self._read_by_id()

        # use the Unit if it already exists, otherwise create it
        if name in self._unitdict:
            unit = self._unitdict[name]
        else:
            unit = Unit(name=name, file_origin=self._file_origin,
                        elliptic=[], boundaries=[],
                        timestamp=[], max_valid=[])
            self._chx.units.append(unit)
            self._unitdict[name] = unit

        # convert the individual spikes to SpikeTrains and add them to the Unit
        trains = [self._combine_spiketrains(spikes) for spikes in spikeslists]
        unit.spiketrains.extend(trains)
        for train in trains:
            train.t_stop = t_stop.copy()
            train.annotations['respwin'] = respwin.copy()

        return unit, trains

    def __skip_information(self):
        """
        Read an information sequence.

        This data sequence is skipped both here and in the Matlab reference
        implementation.

        ----------------------
        Returns an empty list.

        Nothing is created so nothing is added to the Block.
ID: 29113 """ # {skip} char * 34 -- display information self._fsrc.seek(34, 1) return [] def __skip_information_old(self): """ Read an information sequence This is data sequence is skipped both here and in the Matlab reference implementation This is an old version of the format ---------------------- Returns an empty list. Nothing is created so nothing is added to the Block. ID: 29100 """ # {skip} char * 4 -- display information self._fsrc.seek(4, 1) return [] # This dictionary maps the numeric data sequence ID codes to the data # sequence reading functions. # # Since functions are first-class objects in Python, the functions returned # from this dictionary are directly callable. # # If new data sequence ID codes are added in the future please add the code # here in numeric order and the method above in alphabetical order # # The naming of any private method may change at any time _ID_DICT = {29079: __read_spike_fixed, 29081: __read_spike_fixed_old, 29082: __read_list, 29083: __read_list, 29084: __read_unit_unsorted, 29091: __read_list, 29093: __read_list, 29099: __read_annotations_old, 29100: __skip_information_old, 29106: __read_segment, 29107: __read_unit_old, 29109: __read_annotations, 29110: __read_spiketrain_timestamped, 29112: __read_segment_list, 29113: __skip_information, 29114: __read_segment_list_var, 29115: __read_spike_var, 29116: __read_unit, 29117: __read_segment_list_v8, 29119: __read_unit_list_timestamped, 29120: __read_segment_list_v9, 29121: __read_spiketrain_indexed } def convert_brainwaresrc_timestamp(timestamp, start_date=datetime(1899, 12, 30)): """ convert_brainwaresrc_timestamp(timestamp, start_date) - convert a timestamp in brainware src file units to a python datetime object. start_date defaults to 1899.12.30 (ISO format), which is the start date used by all BrainWare SRC data Blocks so far. If manually specified it should be a datetime object or any other object that can be added to a timedelta object. """ # datetime + timedelta = datetime again. return start_date + timedelta(days=timestamp) if __name__ == '__main__': # run this when calling the file directly as a benchmark from neo.test.iotest.test_brainwaresrcio import FILES_TO_TEST from neo.test.iotest.common_io_test import url_for_tests from neo.test.iotest.tools import (create_local_temp_dir, download_test_file, get_test_file_full_path, make_all_directories) shortname = BrainwareSrcIO.__name__.lower().strip('io') local_test_dir = create_local_temp_dir(shortname) url = url_for_tests + shortname FILES_TO_TEST.remove('long_170s_1rep_1clust_ch2.src') make_all_directories(FILES_TO_TEST, local_test_dir) download_test_file(FILES_TO_TEST, local_test_dir, url) for path in get_test_file_full_path(ioclass=BrainwareSrcIO, filename=FILES_TO_TEST, directory=local_test_dir): ioobj = BrainwareSrcIO(path) ioobj.read_all_blocks(lazy=False) neo-0.7.2/neo/io/elanio.py0000600013464101346420000000112613507452453013507 0ustar yohyoh# -*- coding: utf-8 -*- from neo.io.basefromrawio import BaseFromRaw from neo.rawio.elanrawio import ElanRawIO class ElanIO(ElanRawIO, BaseFromRaw): """ Class for reading data from Elan. Elan is software for studying time-frequency maps of EEG data. 
Elan is developed in Lyon, France, at INSERM U821 https://elan.lyon.inserm.fr """ _prefered_signal_group_mode = 'split-all' # _prefered_signal_group_mode = 'group-by-same-units' def __init__(self, filename): ElanRawIO.__init__(self, filename=filename) BaseFromRaw.__init__(self, filename) neo-0.7.2/neo/io/elphyio.py0000600013464101346420000046222713507452453013726 0ustar yohyoh# -*- coding: utf-8 -*- """ README ===================================================================================== This is the implementation of the NEO IO for Elphy files. IO dependencies: - NEO - types - numpy - quantities Quick reference: ===================================================================================== Class ElphyIO() with methods read_block() and write_block() are implemented. This classes represent the way to access and produce Elphy files from NEO objects. As regards reading an existing Elphy file, start by initializing a IO class with it: >>> import neo >>> r = neo.io.ElphyIO( filename="Elphy.DAT" ) >>> r Read the file content into NEO object Block: >>> bl = r.read_block() >>> bl Now you can then read all Elphy data as NEO objects: >>> b1.segments [, , , ] >>> bl.segments[0].analogsignals[0] These functions return NEO objects, completely "detached" from the original Elphy file. Changes to the runtime objects will not cause any changes in the file. Having already existing NEO structures, it is possible to write them as an Elphy file. For example, given a segment: >>> s = neo.Segment() filled with other NEO structures: >>> import numpy as np >>> import quantities as pq >>> a = AnalogSignal( signal=np.random.rand(300), t_start=42*pq.ms) >>> s.analogsignals.append( a ) and added to a newly created NEO Block: >>> bl = neo.Block() >>> bl.segments.append( s ) Then, it's easy to create an Elphy file: >>> r = neo.io.ElphyIO( filename="ElphyNeoTest.DAT" ) >>> r.write_block( bl ) Author: Thierry Brizzi Domenico Guarino """ # needed for python 3 compatibility from __future__ import absolute_import # python commons: from datetime import datetime from fractions import gcd from os import path import re import struct from time import time # note neo.core needs only numpy and quantities import numpy as np import quantities as pq # I need to subclass BaseIO from neo.io.baseio import BaseIO # to import from core from neo.core import (Block, Segment, ChannelIndex, AnalogSignal, Event, SpikeTrain) # -------------------------------------------------------- # OBJECTS class ElphyScaleFactor(object): """ Useful to retrieve real values from integer ones that are stored in an Elphy file : ``scale`` : compute the actual value of a sample with this following formula : ``delta`` * value + ``offset`` """ def __init__(self, delta, offset): self.delta = delta self.offset = offset def scale(self, value): return value * self.delta + self.offset class BaseSignal(object): """ A descriptor storing main signal properties : ``layout`` : the :class:``ElphyLayout` object that extracts data from a file. ``episode`` : the episode in which the signal has been acquired. ``sampling_frequency`` : the sampling frequency of the analog to digital converter. ``sampling_period`` : the sampling period of the analog to digital converter computed from sampling_frequency. ``t_start`` : the start time of the signal acquisition. ``t_stop`` : the end time of the signal acquisition. ``duration`` : the duration of the signal acquisition computed from t_start and t_stop. 
``n_samples`` : the number of sample acquired during the recording computed from the duration and the sampling period. ``name`` : a label to identify the signal. ``data`` : a property triggering data extraction. """ def __init__(self, layout, episode, sampling_frequency, start, stop, name=None): self.layout = layout self.episode = episode self.sampling_frequency = sampling_frequency self.sampling_period = 1 / sampling_frequency self.t_start = start self.t_stop = stop self.duration = self.t_stop - self.t_start self.n_samples = int(self.duration / self.sampling_period) self.name = name @property def data(self): raise NotImplementedError('must be overloaded in subclass') class ElphySignal(BaseSignal): """ Subclass of :class:`BaseSignal` corresponding to Elphy's analog channels : ``channel`` : the identifier of the analog channel providing the signal. ``units`` : an array containing x and y coordinates units. ``x_unit`` : a property to access the x-coordinates unit. ``y_unit`` : a property to access the y-coordinates unit. ``data`` : a property that delegate data extraction to the ``get_signal_data`` function of the ```layout`` object. """ def __init__(self, layout, episode, channel, x_unit, y_unit, sampling_frequency, start, stop, name=None): super(ElphySignal, self).__init__(layout, episode, sampling_frequency, start, stop, name) self.channel = channel self.units = [x_unit, y_unit] def __str__(self): return "%s ep_%s ch_%s [%s, %s]" % ( self.layout.file.name, self.episode, self.channel, self.x_unit, self.y_unit) def __repr__(self): return self.__str__() @property def x_unit(self): """ Return the x-coordinate of the signal. """ return self.units[0] @property def y_unit(self): """ Return the y-coordinate of the signal. """ return self.units[1] @property def data(self): return self.layout.get_signal_data(self.episode, self.channel) class ElphyTag(BaseSignal): """ Subclass of :class:`BaseSignal` corresponding to Elphy's tag channels : ``number`` : the identifier of the tag channel. ``x_unit`` : the unit of the x-coordinate. """ def __init__(self, layout, episode, number, x_unit, sampling_frequency, start, stop, name=None): super(ElphyTag, self).__init__(layout, episode, sampling_frequency, start, stop, name) self.number = number self.units = [x_unit, None] def __str__(self): return "%s : ep_%s tag_ch_%s [%s]" % ( self.layout.file.name, self.episode, self.number, self.x_unit) def __repr__(self): return self.__str__() @property def x_unit(self): """ Return the x-coordinate of the signal. """ return self.units[0] @property def data(self): return self.layout.get_tag_data(self.episode, self.number) @property def channel(self): return self.number class ElphyEvent(object): """ A descriptor that store a set of events properties : ``layout`` : the :class:``ElphyLayout` object that extracts data from a file. ``episode`` : the episode in which the signal has been acquired. ``number`` : the identifier of the channel. ``x_unit`` : the unit of the x-coordinate. ``n_events`` : the number of events. ``name`` : a label to identify the event. ``times`` : a property triggering event times extraction. 
""" def __init__(self, layout, episode, number, x_unit, n_events, ch_number=None, name=None): self.layout = layout self.episode = episode self.number = number self.x_unit = x_unit self.n_events = n_events self.name = name self.ch_number = ch_number def __str__(self): return "%s : ep_%s evt_ch_%s [%s]" % ( self.layout.file.name, self.episode, self.number, self.x_unit) def __repr__(self): return self.__str__() @property def channel(self): return self.number @property def times(self): return self.layout.get_event_data(self.episode, self.number) @property def data(self): return self.times class ElphySpikeTrain(ElphyEvent): """ A descriptor that store spiketrain properties : ``wf_samples`` : number of samples composing waveforms. ``wf_sampling_frequency`` : sampling frequency of waveforms. ``wf_sampling_period`` : sampling period of waveforms. ``wf_units`` : the units of the x and y coordinates of waveforms. ``t_start`` : the time before the arrival of the spike which corresponds to the starting time of a waveform. ``name`` : a label to identify the event. ``times`` : a property triggering event times extraction. ``waveforms`` : a property triggering waveforms extraction. """ def __init__(self, layout, episode, number, x_unit, n_events, wf_sampling_frequency, wf_samples, unit_x_wf, unit_y_wf, t_start, name=None): super(ElphySpikeTrain, self).__init__(layout, episode, number, x_unit, n_events, name) self.wf_samples = wf_samples self.wf_sampling_frequency = wf_sampling_frequency assert wf_sampling_frequency, "bad sampling frequency" self.wf_sampling_period = 1.0 / wf_sampling_frequency self.wf_units = [unit_x_wf, unit_y_wf] self.t_start = t_start @property def x_unit_wf(self): """ Return the x-coordinate of waveforms. """ return self.wf_units[0] @property def y_unit_wf(self): """ Return the y-coordinate of waveforms. """ return self.wf_units[1] @property def times(self): return self.layout.get_spiketrain_data(self.episode, self.number) @property def waveforms(self): return self.layout.get_waveform_data(self.episode, self.number) if self.wf_samples \ else None # -------------------------------------------------------- # BLOCKS class BaseBlock(object): """ Represent a chunk of file storing metadata or raw data. A convenient class to break down the structure of an Elphy file to several building blocks : ``layout`` : the layout containing the block. ``identifier`` : the label that identified the block. ``size`` : the size of the block. ``start`` : the file index corresponding to the starting byte of the block. ``end`` : the file index corresponding to the ending byte of the block NB : Subclassing this class is a convenient way to set the properties using polymorphism rather than a conditional structure. By this way each :class:`BaseBlock` type know how to iterate through the Elphy file and store interesting data. """ def __init__(self, layout, identifier, start, size): self.layout = layout self.identifier = identifier self.size = size self.start = start self.end = self.start + self.size - 1 class ElphyBlock(BaseBlock): """ A subclass of :class:`BaseBlock`. Useful to store the location and size of interesting data within a block : ``parent_block`` : the parent block containing the block. ``header_size`` : the size of the header permitting the identification of the type of the block. ``data_offset`` : the file index located after the block header. ``data_size`` : the size of data located after the header. ``sub_blocks`` : the sub-blocks contained by the block. 
""" def __init__(self, layout, identifier, start, size, fixed_length=None, size_format="i", parent_block=None): super(ElphyBlock, self).__init__(layout, identifier, start, size) # a block may be a sub-block of another block self.parent_block = parent_block # pascal language store strings in 2 different ways # ... first, if in the program the size of the string is # specified (fixed) then the file stores the length # of the string and allocate a number of bytes equal # to the specified size # ... if this size is not specified the length of the # string is also stored but the file allocate dynamically # a number of bytes equal to the actual size of the string l_ident = len(self.identifier) if fixed_length: l_ident += (fixed_length - l_ident) self.header_size = l_ident + 1 + type_dict[size_format] # starting point of data located in the block self.data_offset = self.start + self.header_size self.data_size = self.size - self.header_size # a block may have sub-blocks # it is to subclasses to initialize # this property self.sub_blocks = list() def __repr__(self): return "%s : size = %s, start = %s, end = %s" % ( self.identifier, self.size, self.start, self.end) def add_sub_block(self, block): """ Append a block to the sub-block list. """ self.sub_blocks.append(block) class FileInfoBlock(ElphyBlock): """ Base class of all subclasses whose the purpose is to extract user file info stored into an Elphy file : ``header`` : the header block relative to the block. ``file`` : the file containing the block. NB : User defined metadata are not really practical. An Elphy script must know the order of metadata storage to know exactly how to retrieve these data. That's why it is necessary to subclass and reproduce elphy script commands to extract metadata relative to a protocol. Consequently managing a new protocol implies to refactor the file info extraction. """ def __init__(self, layout, identifier, start, size, fixed_length=None, size_format="i", parent_block=None): super(FileInfoBlock, self).__init__(layout, identifier, start, size, fixed_length, size_format, parent_block=parent_block) self.header = None self.file = self.layout.file def get_protocol_and_version(self): """ Return a tuple useful to identify the kind of protocol that has generated a file during data acquisition. """ raise Exception("must be overloaded in a subclass") def get_user_file_info(self): """ Return a dictionary containing all user file info stored in the file. """ raise Exception("must be overloaded in a subclass") def get_sparsenoise_revcor(self): """ Return 'REVCOR' user file info. This method is common to :class:`ClassicFileInfo` and :class:`MultistimFileInfo` because the last one is able to store this kind of metadata. 
""" header = dict() header['n_div_x'] = read_from_char(self.file, 'h') header['n_div_y'] = read_from_char(self.file, 'h') header['gray_levels'] = read_from_char(self.file, 'h') header['position_x'] = read_from_char(self.file, 'ext') header['position_y'] = read_from_char(self.file, 'ext') header['length'] = read_from_char(self.file, 'ext') header['width'] = read_from_char(self.file, 'ext') header['orientation'] = read_from_char(self.file, 'ext') header['expansion'] = read_from_char(self.file, 'h') header['scotoma'] = read_from_char(self.file, 'h') header['seed'] = read_from_char(self.file, 'h') # dt_on and dt_off may not exist in old revcor formats rollback = self.file.tell() header['dt_on'] = read_from_char(self.file, 'ext') if header['dt_on'] is None: self.file.seek(rollback) rollback = self.file.tell() header['dt_off'] = read_from_char(self.file, 'ext') if header['dt_off'] is None: self.file.seek(rollback) return header class ClassicFileInfo(FileInfoBlock): """ Extract user file info stored into an Elphy file corresponding to sparse noise (revcor), moving bar and flashbar protocols. """ def detect_protocol_from_name(self, path): pattern = "\d{4}(\d+|\D)\D" codes = { 'r': 'sparsenoise', 'o': 'movingbar', 'f': 'flashbar', 'm': 'multistim' # here just for assertion } filename = path.split(path)[1] match = re.search(pattern, path) if hasattr(match, 'end'): code = codes.get(path[match.end() - 1].lower(), None) assert code != 'm', "multistim file detected" return code elif 'spt' in filename.lower(): return 'spontaneousactivity' else: return None def get_protocol_and_version(self): if self.layout and self.layout.info_block: self.file.seek(self.layout.info_block.data_offset) version = self.get_title() if version in ['REVCOR1', 'REVCOR2', 'REVCOR + PAIRING']: name = "sparsenoise" elif version in ['BARFLASH']: name = "flashbar" elif version in ['ORISTIM', 'ORISTM', 'ORISTM1', 'ORITUN']: name = "movingbar" else: name = self.detect_protocol_from_name(self.file.name) self.file.seek(0) return name, version return None, None def get_title(self): title_length, title = struct.unpack('= 2): name = None version = None else: if center == 2: name = "sparsenoise" elif center == 3: name = "densenoise" elif center == 4: name = "densenoise" elif center == 5: name = "grating" else: name = None version = None self.file.seek(0) return name, version return None, None def get_title(self): title_length = read_from_char(self.file, 'B') title, = struct.unpack('<%ss' % title_length, self.file.read(title_length)) self.file.seek(self.file.tell() + 255 - title_length) return unicode(title) def get_user_file_info(self): header = dict() if self.layout and self.layout.info_block: # go to the info_block sub_block = self.layout.info_block self.file.seek(sub_block.data_offset) # get the first four parameters acqLGN = read_from_char(self.file, 'i') center = read_from_char(self.file, 'i') surround = read_from_char(self.file, 'i') # store info in the header header['acqLGN'] = acqLGN header['center'] = center header['surround'] = surround if not (header['surround'] >= 2): header.update(self.get_center_header(center)) self.file.seek(0) return header def get_center_header(self, code): # get file info corresponding # to the executed protocol # for the center first ... 
if code == 0: return self.get_sparsenoise_revcor() elif code == 2: return self.get_sparsenoise_center() elif code == 3: return self.get_densenoise_center(True) elif code == 4: return self.get_densenoise_center(False) elif code == 5: return dict() # return self.get_grating_center() else: return dict() def get_surround_header(self, code): # then the surround if code == 2: return self.get_sparsenoise_surround() elif code == 3: return self.get_densenoise_surround(True) elif code == 4: return self.get_densenoise_surround(False) elif code == 5: raise NotImplementedError() return self.get_grating_center() else: return dict() def get_center_surround(self, center, surround): header = dict() header['stim_center'] = self.get_center_header(center) header['stim_surround'] = self.get_surround_header(surround) return header def get_sparsenoise_center(self): header = dict() header['title'] = self.get_title() header['number_of_sequences'] = read_from_char(self.file, 'i') header['pretrigger_duration'] = read_from_char(self.file, 'ext') header['n_div_x'] = read_from_char(self.file, 'h') header['n_div_y'] = read_from_char(self.file, 'h') header['gray_levels'] = read_from_char(self.file, 'h') header['position_x'] = read_from_char(self.file, 'ext') header['position_y'] = read_from_char(self.file, 'ext') header['length'] = read_from_char(self.file, 'ext') header['width'] = read_from_char(self.file, 'ext') header['orientation'] = read_from_char(self.file, 'ext') header['expansion'] = read_from_char(self.file, 'h') header['scotoma'] = read_from_char(self.file, 'h') header['seed'] = read_from_char(self.file, 'h') header['luminance_1'] = read_from_char(self.file, 'ext') header['luminance_2'] = read_from_char(self.file, 'ext') header['dt_count'] = read_from_char(self.file, 'i') dt_array = list() for _ in range(0, header['dt_count']): dt_array.append(read_from_char(self.file, 'ext')) header['dt_on'] = dt_array if dt_array else None header['dt_off'] = read_from_char(self.file, 'ext') return header def get_sparsenoise_surround(self): header = dict() header['title_surround'] = self.get_title() header['gap'] = read_from_char(self.file, 'ext') header['n_div_x'] = read_from_char(self.file, 'h') header['n_div_y'] = read_from_char(self.file, 'h') header['gray_levels'] = read_from_char(self.file, 'h') header['expansion'] = read_from_char(self.file, 'h') header['scotoma'] = read_from_char(self.file, 'h') header['seed'] = read_from_char(self.file, 'h') header['luminance_1'] = read_from_char(self.file, 'ext') header['luminance_2'] = read_from_char(self.file, 'ext') header['dt_on'] = read_from_char(self.file, 'ext') header['dt_off'] = read_from_char(self.file, 'ext') return header def get_densenoise_center(self, is_binary): header = dict() header['stimulus_type'] = "B" if is_binary else "T" header['title'] = self.get_title() _tmp = read_from_char(self.file, 'i') header['number_of_sequences'] = _tmp if _tmp < 0 else None rollback = self.file.tell() header['stimulus_duration'] = read_from_char(self.file, 'ext') if header['stimulus_duration'] is None: self.file.seek(rollback) header['pretrigger_duration'] = read_from_char(self.file, 'ext') header['n_div_x'] = read_from_char(self.file, 'h') header['n_div_y'] = read_from_char(self.file, 'h') header['position_x'] = read_from_char(self.file, 'ext') header['position_y'] = read_from_char(self.file, 'ext') header['length'] = read_from_char(self.file, 'ext') header['width'] = read_from_char(self.file, 'ext') header['orientation'] = read_from_char(self.file, 'ext') header['expansion'] = 
read_from_char(self.file, 'h') header['seed'] = read_from_char(self.file, 'h') header['luminance_1'] = read_from_char(self.file, 'ext') header['luminance_2'] = read_from_char(self.file, 'ext') header['dt_on'] = read_from_char(self.file, 'ext') header['dt_off'] = read_from_char(self.file, 'ext') return header def get_densenoise_surround(self, is_binary): header = dict() header['title_surround'] = self.get_title() header['gap'] = read_from_char(self.file, 'ext') header['n_div_x'] = read_from_char(self.file, 'h') header['n_div_y'] = read_from_char(self.file, 'h') header['expansion'] = read_from_char(self.file, 'h') header['seed'] = read_from_char(self.file, 'h') header['luminance_1'] = read_from_char(self.file, 'ext') header['luminance_2'] = read_from_char(self.file, 'ext') header['dt_on'] = read_from_char(self.file, 'ext') header['dt_off'] = read_from_char(self.file, 'ext') return header def get_grating_center(self): pass def get_grating_surround(self): pass class Header(ElphyBlock): """ A convenient subclass of :class:`Block` to store Elphy file header properties. NB : Subclassing this class is a convenient way to set the properties of the header using polymorphism rather than a conditional structure. """ def __init__(self, layout, identifier, size, fixed_length=None, size_format="i"): super(Header, self).__init__(layout, identifier, 0, size, fixed_length, size_format) class Acquis1Header(Header): """ A subclass of :class:`Header` used to identify the 'ACQUIS1/GS/1991' format. Whereas more recent format, the header contains all data relative to episodes, channels and traces : ``n_channels`` : the number of acquisition channels. ``nbpt`` and ``nbptEx`` : parameters useful to compute the number of samples by episodes. ``tpData`` : the data format identifier used to compute sample size. ``x_unit`` : the x-coordinate unit for all channels in an episode. ``y_units`` : an array containing y-coordinate units for each channel in the episode. ``dX`` and ``X0`` : the scale factors necessary to retrieve the actual times relative to each sample in a channel. ``dY_ar`` and ``Y0_ar``: arrays of scale factors necessary to retrieve the actual values relative to samples. ``continuous`` : a boolean telling if the file has been acquired in continuous mode. ``preSeqI`` : the size in bytes of the data preceding raw data. ``postSeqI`` : the size in bytes of the data preceding raw data. ``dat_length`` : the length in bytes of the data in the file. ``sample_size`` : the size in bytes of a sample. ``n_samples`` : the number of samples. ``ep_size`` : the size in bytes of an episode. ``n_episodes`` : the number of recording sequences store in the file. NB : The size is read from the file, the identifier is a string containing 15 characters and the size is encoded as small integer. See file 'FicDefAc1.pas' to identify the parsed parameters. 
""" def __init__(self, layout): fileobj = layout.file super(Acquis1Header, self).__init__(layout, "ACQUIS1/GS/1991", 1024, 15, "h") # parse the header to store interesting data about episodes and channels fileobj.seek(18) # extract episode properties n_channels = read_from_char(fileobj, 'B') assert not ((n_channels < 1) or (n_channels > 16)), "bad number of channels" nbpt = read_from_char(fileobj, 'h') l_xu, x_unit = struct.unpack('= self.end: tagShift = 0 else: tagShift = read_from_char(layout.file, 'B') # setup object properties self.n_channels = n_channels self.nbpt = nbpt self.tpData = tpData self.x_unit = xu[0:l_xu] self.dX = dX self.X0 = X0 self.y_units = y_units[0:n_channels] self.dY_ar = dY_ar[0:n_channels] self.Y0_ar = Y0_ar[0:n_channels] self.continuous = continuous if self.continuous: self.preSeqI = 0 self.postSeqI = 0 else: self.preSeqI = preSeqI self.postSeqI = postSeqI self.varEp = varEp self.withTags = withTags if not self.withTags: self.tagShift = 0 else: if tagShift == 0: self.tagShift = 4 else: self.tagShift = tagShift self.sample_size = type_dict[types[self.tpData]] self.dat_length = self.layout.file_size - self.layout.data_offset if self.continuous: if self.n_channels > 0: self.n_samples = self.dat_length / (self.n_channels * self.sample_size) else: self.n_samples = 0 else: self.n_samples = self.nbpt self.ep_size = (self.preSeqI + self.postSeqI + self.n_samples * self.sample_size * self.n_channels) self.n_episodes = self.dat_length / self.ep_size if (self.n_samples != 0) else 0 class DAC2GSEpisodeBlock(ElphyBlock): """ Subclass of :class:`Block` useful to store data corresponding to 'DAC2SEQ' blocks stored in the DAC2/GS/2000 format. ``n_channels`` : the number of acquisition channels. ``nbpt`` : the number of samples by episodes. ``tpData`` : the data format identifier used to compute the sample size. ``x_unit`` : the x-coordinate unit for all channels in an episode. ``y_units`` : an array containing y-coordinate units for each channel in the episode. ``dX`` and ``X0`` : the scale factors necessary to retrieve the actual times relative to each sample in a channel. ``dY_ar`` and ``Y0_ar``: arrays of scale factors necessary to retrieve the actual values relative to samples. ``postSeqI`` : the size in bytes of the data preceding raw data. NB : see file 'FdefDac2.pas' to identify the parsed parameters. """ def __init__(self, layout, identifier, start, size, fixed_length=None, size_format="i"): main = layout.main_block n_channels, nbpt, tpData, postSeqI = struct.unpack(' 0] for data_block in blocks: self.file.seek(data_block.start) raw = self.file.read(data_block.size)[0:expected_size] databytes = np.frombuffer(raw, dtype=dtype) chunks.append(databytes) # concatenate all chunks and return # the specified slice if len(chunks) > 0: databytes = np.concatenate(chunks) return databytes[start:end] else: return np.array([]) def reshape_bytes(self, databytes, reshape, datatypes, order='<'): """ Reshape a numpy array containing a set of databytes. 
""" assert datatypes and len(datatypes) == len(reshape), "datatypes are not well defined" l_bytes = len(databytes) # create the mask for each shape shape_mask = list() for shape in reshape: for _ in xrange(1, shape + 1): shape_mask.append(shape) # create a set of masks to extract data bit_masks = list() for shape in reshape: bit_mask = list() for value in shape_mask: bit = 1 if (value == shape) else 0 bit_mask.append(bit) bit_masks.append(np.array(bit_mask)) # extract data n_samples = l_bytes / np.sum(reshape) data = np.empty([len(reshape), n_samples], dtype=(int, int)) for index, bit_mask in enumerate(bit_masks): tmp = self.filter_bytes(databytes, bit_mask) tp = '%s%s%s' % (order, datatypes[index], reshape[index]) data[index] = np.frombuffer(tmp, dtype=tp) return data.T def filter_bytes(self, databytes, bit_mask): """ Detect from a bit mask which bits to keep to recompose the signal. """ n_bytes = len(databytes) mask = np.ones(n_bytes, dtype=int) np.putmask(mask, mask, bit_mask) to_keep = np.where(mask > 0)[0] return databytes.take(to_keep) def load_channel_data(self, ep, ch): """ Return a numpy array containing the list of bytes corresponding to the specified episode and channel. """ # memorise the sample size and symbol sample_size = self.sample_size(ep, ch) sample_symbol = self.sample_symbol(ep, ch) # create a bit mask to define which # sample to keep from the file bit_mask = self.create_bit_mask(ep, ch) # load all bytes contained in an episode data_blocks = self.get_data_blocks(ep) databytes = self.load_bytes(data_blocks) raw = self.filter_bytes(databytes, bit_mask) # reshape bytes from the sample size dt = np.dtype(numpy_map[sample_symbol]) dt.newbyteorder('<') return np.frombuffer(raw.reshape([len(raw) / sample_size, sample_size]), dt) def apply_op(self, np_array, value, op_type): """ A convenient function to apply an operator over all elements of a numpy array. """ if op_type == "shift_right": return np_array >> value elif op_type == "shift_left": return np_array << value elif op_type == "mask": return np_array & value else: return np_array def get_tag_mask(self, tag_ch, tag_mode): """ Return a mask useful to retrieve bits that encode a tag channel. """ if tag_mode == 1: tag_mask = 0b01 if (tag_ch == 1) else 0b10 elif tag_mode in [2, 3]: ar_mask = np.zeros(16, dtype=int) ar_mask[tag_ch - 1] = 1 st = "0b" + ''.join(np.array(np.flipud(ar_mask), dtype=str)) tag_mask = eval(st) return tag_mask def load_encoded_tags(self, ep, tag_ch): """ Return a numpy array containing bytes corresponding to the specified episode and channel. """ tag_mode = self.tag_mode(ep) tag_mask = self.get_tag_mask(tag_ch, tag_mode) if tag_mode in [1, 2]: # digidata or itc mode # available for all formats ch = self.get_channel_for_tags(ep) raw = self.load_channel_data(ep, ch) return self.apply_op(raw, tag_mask, "mask") elif tag_mode == 3: # cyber k mode # only available for DAC2 objects format # store bytes corresponding to the blocks # containing tags in a numpy array and reshape # it to have a set of tuples (time, value) ck_blocks = self.get_blocks_of_type(ep, 'RCyberTag') databytes = self.load_bytes(ck_blocks) raw = self.reshape_bytes(databytes, reshape=(4, 2), datatypes=('u', 'u'), order='<') # keep only items that are compatible # with the specified tag channel raw[:, 1] = self.apply_op(raw[:, 1], tag_mask, "mask") # computing numpy.diff is useful to know # how many times a value is maintained # and necessary to reconstruct the # compressed signal ... 
repeats = np.array(np.diff(raw[:, 0]), dtype=int) data = np.repeat(raw[:-1, 1], repeats, axis=0) # ... note that there is always # a transition at t=0 for synchronisation # purpose, consequently it is not necessary # to complete with zeros when the first # transition arrive ... return data def load_encoded_data(self, ep, ch): """ Get encoded value of raw data from the elphy file. """ tag_shift = self.tag_shift(ep) data = self.load_channel_data(ep, ch) if tag_shift: return self.apply_op(data, tag_shift, "shift_right") else: return data def get_signal_data(self, ep, ch): """ Return a numpy array containing all samples of a signal, acquired on an Elphy analog channel, formatted as a list of (time, value) tuples. """ # get data from the file y_data = self.load_encoded_data(ep, ch) x_data = np.arange(0, len(y_data)) # create a recarray data = np.recarray(len(y_data), dtype=[('x', b_float), ('y', b_float)]) # put in the recarray the scaled data x_factors = self.x_scale_factors(ep, ch) y_factors = self.y_scale_factors(ep, ch) data['x'] = x_factors.scale(x_data) data['y'] = y_factors.scale(y_data) return data def get_tag_data(self, ep, tag_ch): """ Return a numpy array containing all samples of a signal, acquired on an Elphy tag channel, formatted as a list of (time, value) tuples. """ # get data from the file y_data = self.load_encoded_tags(ep, tag_ch) x_data = np.arange(0, len(y_data)) # create a recarray data = np.recarray(len(y_data), dtype=[('x', b_float), ('y', b_int)]) # put in the recarray the scaled data factors = self.x_tag_scale_factors(ep) data['x'] = factors.scale(x_data) data['y'] = y_data return data class Acquis1Layout(ElphyLayout): """ A subclass of :class:`ElphyLayout` to know how the 'ACQUIS1/GS/1991' format is organised. Extends :class:`ElphyLayout` to store the offset used to retrieve directly raw data : ``data_offset`` : an offset to jump directly to the raw data. 
""" def __init__(self, fileobj, data_offset): super(Acquis1Layout, self).__init__(fileobj) self.data_offset = data_offset self.data_blocks = None def get_blocks_end(self): return self.data_offset def is_continuous(self): return self.header.continuous def get_episode_blocks(self): raise NotImplementedError() def set_info_block(self): i_blks = self.get_blocks_of_type('USER INFO') assert len(i_blks) < 2, 'too many info blocks' if len(i_blks): self.info_block = i_blks[0] def set_data_blocks(self): data_blocks = list() size = self.header.n_samples * self.header.sample_size * self.header.n_channels for ep in range(0, self.header.n_episodes): start = self.data_offset + ep * self.header.ep_size + self.header.preSeqI data_blocks.append(DummyDataBlock(self, 'Acquis1Data', start, size)) self.data_blocks = data_blocks def get_data_blocks(self, ep): return [self.data_blocks[ep - 1]] @property def n_episodes(self): return self.header.n_episodes def n_channels(self, episode): return self.header.n_channels def n_tags(self, episode): return 0 def tag_mode(self, ep): return 0 def tag_shift(self, ep): return 0 def get_channel_for_tags(self, ep): return None @property def no_analog_data(self): return True if (self.n_episodes == 0) else self.header.no_analog_data def sample_type(self, ep, ch): return self.header.tpData def sampling_period(self, ep, ch): return self.header.dX def n_samples(self, ep, ch): return self.header.n_samples def x_tag_scale_factors(self, ep): return ElphyScaleFactor( self.header.dX, self.header.X0 ) def x_scale_factors(self, ep, ch): return ElphyScaleFactor( self.header.dX, self.header.X0 ) def y_scale_factors(self, ep, ch): dY = self.header.dY_ar[ch - 1] Y0 = self.header.Y0_ar[ch - 1] # TODO: see why this kind of exception exists if dY is None or Y0 is None: raise Exception('bad Y-scale factors for episode %s channel %s' % (ep, ch)) return ElphyScaleFactor(dY, Y0) def x_unit(self, ep, ch): return self.header.x_unit def y_unit(self, ep, ch): return self.header.y_units[ch - 1] @property def ep_size(self): return self.header.ep_size @property def file_duration(self): return self.header.dX * self.n_samples def get_tag(self, episode, tag_channel): return None def create_channel_mask(self, ep): return np.arange(1, self.header.n_channels + 1) class DAC2GSLayout(ElphyLayout): """ A subclass of :class:`ElphyLayout` to know how the 'DAC2 / GS / 2000' format is organised. Extends :class:`ElphyLayout` to store the offset used to retrieve directly raw data : ``data_offset`` : an offset to jump directly after the 'MAIN' block where 'DAC2SEQ' blocks start. ``main_block```: a shortcut to access 'MAIN' block. ``episode_blocks`` : a shortcut to access blocks corresponding to episodes. 
""" def __init__(self, fileobj, data_offset): super(DAC2GSLayout, self).__init__(fileobj) self.data_offset = data_offset self.main_block = None self.episode_blocks = None def get_blocks_end(self): return self.file_size # data_offset def is_continuous(self): main_block = self.main_block return main_block.continuous if main_block else False def get_episode_blocks(self): raise NotImplementedError() def set_main_block(self): main_block = self.get_blocks_of_type('MAIN') self.main_block = main_block[0] if main_block else None def set_episode_blocks(self): ep_blocks = self.get_blocks_of_type('DAC2SEQ') self.episode_blocks = ep_blocks if ep_blocks else None def set_info_block(self): i_blks = self.get_blocks_of_type('USER INFO') assert len(i_blks) < 2, "too many info blocks" if len(i_blks): self.info_block = i_blks[0] def set_data_blocks(self): data_blocks = list() identifier = 'DAC2GSData' size = self.main_block.n_samples * self.main_block.sample_size * self.main_block.n_channels if not self.is_continuous(): blocks = self.get_blocks_of_type('DAC2SEQ') for block in blocks: start = block.start + self.main_block.preSeqI data_blocks.append(DummyDataBlock(self, identifier, start, size)) else: start = self.blocks[-1].end + 1 + self.main_block.preSeqI data_blocks.append(DummyDataBlock(self, identifier, start, size)) self.data_blocks = data_blocks def get_data_blocks(self, ep): return [self.data_blocks[ep - 1]] def episode_block(self, ep): return self.main_block if self.is_continuous() else self.episode_blocks[ep - 1] def tag_mode(self, ep): return 1 if self.main_block.withTags else 0 def tag_shift(self, ep): return self.main_block.tagShift def get_channel_for_tags(self, ep): return 1 def sample_type(self, ep, ch): return self.main_block.tpData def sample_size(self, ep, ch): size = super(DAC2GSLayout, self).sample_size(ep, ch) assert size == 2, "sample size is always 2 bytes for DAC2/GS/2000 format" return size def sampling_period(self, ep, ch): block = self.episode_block(ep) return block.dX def x_tag_scale_factors(self, ep): block = self.episode_block(ep) return ElphyScaleFactor( block.dX, block.X0, ) def x_scale_factors(self, ep, ch): block = self.episode_block(ep) return ElphyScaleFactor( block.dX, block.X0, ) def y_scale_factors(self, ep, ch): block = self.episode_block(ep) return ElphyScaleFactor( block.dY_ar[ch - 1], block.Y0_ar[ch - 1] ) def x_unit(self, ep, ch): block = self.episode_block(ep) return block.x_unit def y_unit(self, ep, ch): block = self.episode_block(ep) return block.y_units[ch - 1] def n_samples(self, ep, ch): return self.main_block.n_samples def ep_size(self, ep): return self.main_block.ep_size @property def n_episodes(self): return self.main_block.n_episodes def n_channels(self, episode): return self.main_block.n_channels def n_tags(self, episode): return 2 if self.main_block.withTags else 0 @property def file_duration(self): return self.main_block.dX * self.n_samples def get_tag(self, episode, tag_channel): assert episode in range(1, self.n_episodes + 1) # there are none or 2 tag channels if self.tag_mode(episode) == 1: assert tag_channel in range(1, 3), "DAC2/GS/2000 format support only 2 tag channels" block = self.episode_block(episode) t_stop = self.main_block.n_samples * block.dX return ElphyTag(self, episode, tag_channel, block.x_unit, 1.0 / block.dX, 0, t_stop) else: return None def n_tag_samples(self, ep, tag_channel): return self.main_block.n_samples def get_tag_data(self, episode, tag_channel): # memorise some useful properties block = self.episode_block(episode) 
sample_size = self.sample_size(episode, tag_channel) sample_symbol = self.sample_symbol(episode, tag_channel) # create a bit mask to define which # sample to keep from the file channel_mask = self.create_channel_mask(episode) bit_mask = self.create_bit_mask(channel_mask, 1) # get bytes from the file data_block = self.data_blocks[episode - 1] n_bytes = data_block.size self.file.seek(data_block.start) databytes = np.frombuffer(self.file.read(n_bytes), ' 0)[0] raw = databytes.take(to_keep) raw = raw.reshape([len(raw) / sample_size, sample_size]) # create a recarray containing data dt = np.dtype(numpy_map[sample_symbol]) dt.newbyteorder('<') tag_mask = 0b01 if (tag_channel == 1) else 0b10 y_data = np.frombuffer(raw, dt) & tag_mask x_data = np.arange(0, len(y_data)) * block.dX + block.X0 data = np.recarray(len(y_data), dtype=[('x', b_float), ('y', b_int)]) data['x'] = x_data data['y'] = y_data return data def create_channel_mask(self, ep): return np.arange(1, self.main_block.n_channels + 1) class DAC2Layout(ElphyLayout): """ A subclass of :class:`ElphyLayout` to know how the Elphy format is organised. Whereas other formats storing raw data at the end of the file, 'DAC2 objects' format spreads them over multiple blocks : ``episode_blocks`` : a shortcut to access blocks corresponding to episodes. """ def __init__(self, fileobj): super(DAC2Layout, self).__init__(fileobj) self.episode_blocks = None def get_blocks_end(self): return self.file_size def is_continuous(self): ep_blocks = [k for k in self.blocks if k.identifier.startswith('B_Ep')] if ep_blocks: ep_block = ep_blocks[0] ep_sub_block = ep_block.sub_blocks[0] return ep_sub_block.continuous else: return False def set_episode_blocks(self): self.episode_blocks = [k for k in self.blocks if str(k.identifier).startswith('B_Ep')] def set_info_block(self): # in fact the file info are contained into a single sub-block with an USR identifier i_blks = self.get_blocks_of_type('B_Finfo') assert len(i_blks) < 2, "too many info blocks" if len(i_blks): i_blk = i_blks[0] sub_blocks = i_blk.sub_blocks if len(sub_blocks): self.info_block = sub_blocks[0] def set_data_blocks(self): data_blocks = list() blocks = self.get_blocks_of_type('RDATA') for block in blocks: start = block.data_start size = block.end + 1 - start data_blocks.append(DummyDataBlock(self, 'RDATA', start, size)) self.data_blocks = data_blocks def get_data_blocks(self, ep): return self.group_blocks_of_type(ep, 'RDATA') def group_blocks_of_type(self, ep, identifier): ep_blocks = list() blocks = [k for k in self.get_blocks_stored_in_episode(ep) if k.identifier == identifier] for block in blocks: start = block.data_start size = block.end + 1 - start ep_blocks.append(DummyDataBlock(self, identifier, start, size)) return ep_blocks def get_blocks_stored_in_episode(self, ep): data_blocks = [k for k in self.blocks if k.identifier == 'RDATA'] n_ep = self.n_episodes blk_1 = self.episode_block(ep) blk_2 = self.episode_block((ep + 1) % n_ep) i_1 = self.blocks.index(blk_1) i_2 = self.blocks.index(blk_2) if (blk_1 == blk_2) or (i_2 < i_1): return [k for k in data_blocks if self.blocks.index(k) > i_1] else: return [k for k in data_blocks if self.blocks.index(k) in xrange(i_1, i_2)] def set_cyberk_blocks(self): ck_blocks = list() blocks = self.get_blocks_of_type('RCyberTag') for block in blocks: start = block.data_start size = block.end + 1 - start ck_blocks.append(DummyDataBlock(self, 'RCyberTag', start, size)) self.ck_blocks = ck_blocks def episode_block(self, ep): return self.episode_blocks[ep - 1] @property 
def n_episodes(self): return len(self.episode_blocks) def analog_index(self, episode): """ Return indices relative to channels used for analog signals. """ block = self.episode_block(episode) tag_mode = block.ep_block.tag_mode an_index = np.where(np.array(block.ks_block.k_sampling) > 0) if tag_mode == 2: an_index = an_index[:-1] return an_index def n_channels(self, episode): """ Return the number of channels used for analog signals but also events. NB : in Elphy this 2 kinds of channels are not differenciated. """ block = self.episode_block(episode) tag_mode = block.ep_block.tag_mode n_channels = len(block.ks_block.k_sampling) return n_channels if tag_mode != 2 else n_channels - 1 def n_tags(self, episode): block = self.episode_block(episode) tag_mode = block.ep_block.tag_mode tag_map = {0: 0, 1: 2, 2: 16, 3: 16} return tag_map.get(tag_mode, 0) def n_events(self, episode): """ Return the number of channels dedicated to events. """ block = self.episode_block(episode) return block.ks_block.k_sampling.count(0) def n_spiketrains(self, episode): spk_blocks = [k for k in self.blocks if k.identifier == 'RSPK'] return spk_blocks[0].n_evt_channels if spk_blocks else 0 def sub_sampling(self, ep, ch): """ Return the sub-sampling factor for the specified episode and channel. """ block = self.episode_block(ep) return block.ks_block.k_sampling[ch - 1] if block.ks_block else 1 def aggregate_size(self, block, ep): ag_count = self.aggregate_sample_count(block) ag_size = 0 for ch in range(1, ag_count + 1): if (block.ks_block.k_sampling[ch - 1] != 0): ag_size += self.sample_size(ep, ch) return ag_size def n_samples(self, ep, ch): block = self.episode_block(ep) if not block.ep_block.continuous: return block.ep_block.nbpt / self.sub_sampling(ep, ch) else: # for continuous case there isn't any place # in the file that contains the number of # samples unlike the episode case ... data_blocks = self.get_data_blocks(ep) total_size = np.sum([k.size for k in data_blocks]) # count the number of samples in an # aggregate and compute its size in order # to determine the size of an aggregate ag_count = self.aggregate_sample_count(block) ag_size = self.aggregate_size(block, ep) n_ag = total_size / ag_size # the number of samples is equal # to the number of aggregates ... n_samples = n_ag n_chunks = total_size % ag_size # ... but not when there exists # a incomplete aggregate at the # end of the file, consequently # the preeceeding computed number # of samples must be incremented # by one only if the channel map # to a sample in the last aggregate # ... 
maybe this last part should be # deleted because the n_chunks is always # null in continuous mode if n_chunks: last_ag_size = total_size - n_ag * ag_count size = 0 for i in range(0, ch): size += self.sample_size(ep, i + 1) if size <= last_ag_size: n_samples += 1 return n_samples def sample_type(self, ep, ch): block = self.episode_block(ep) return block.kt_block.k_types[ch - 1] if block.kt_block else block.ep_block.tpData def sampling_period(self, ep, ch): block = self.episode_block(ep) return block.ep_block.dX * self.sub_sampling(ep, ch) def x_tag_scale_factors(self, ep): block = self.episode_block(ep) return ElphyScaleFactor( block.ep_block.dX, block.ep_block.X0 ) def x_scale_factors(self, ep, ch): block = self.episode_block(ep) return ElphyScaleFactor( block.ep_block.dX * block.ks_block.k_sampling[ch - 1], block.ep_block.X0, ) def y_scale_factors(self, ep, ch): block = self.episode_block(ep) return ElphyScaleFactor( block.ch_block.dY_ar[ch - 1], block.ch_block.Y0_ar[ch - 1] ) def x_unit(self, ep, ch): block = self.episode_block(ep) return block.ep_block.x_unit def y_unit(self, ep, ch): block = self.episode_block(ep) return block.ch_block.y_units[ch - 1] def tag_mode(self, ep): block = self.episode_block(ep) return block.ep_block.tag_mode def tag_shift(self, ep): block = self.episode_block(ep) return block.ep_block.tag_shift def get_channel_for_tags(self, ep): block = self.episode_block(ep) tag_mode = self.tag_mode(ep) if tag_mode == 1: ks = np.array(block.ks_block.k_sampling) mins = np.where(ks == ks.min())[0] + 1 return mins[0] elif tag_mode == 2: return block.ep_block.n_channels else: return None def aggregate_sample_count(self, block): """ Return the number of sample in an aggregate. """ # compute the least common multiple # for channels having block.ks_block.k_sampling[ch] > 0 lcm0 = 1 for i in range(0, block.ep_block.n_channels): if block.ks_block.k_sampling[i] > 0: lcm0 = least_common_multiple(lcm0, block.ks_block.k_sampling[i]) # sum quotients lcm / KSampling count = 0 for i in range(0, block.ep_block.n_channels): if block.ks_block.k_sampling[i] > 0: count += lcm0 / block.ks_block.k_sampling[i] return count def create_channel_mask(self, ep): """ Return the minimal pattern of channel numbers representing the succession of channels in the multiplexed data. It is useful to do the mapping between a sample stored in the file and its relative channel. NB : This function has been converted from the 'TseqBlock.BuildMask' method of the file 'ElphyFormat.pas' stored in Elphy source code. """ block = self.episode_block(ep) ag_count = self.aggregate_sample_count(block) mask_ar = np.zeros(ag_count, dtype='i') ag_size = 0 i = 0 k = 0 while k < ag_count: for j in range(0, block.ep_block.n_channels): if (block.ks_block.k_sampling[j] != 0) and (i % block.ks_block.k_sampling[j] == 0): mask_ar[k] = j + 1 ag_size += self.sample_size(ep, j + 1) k += 1 if k >= ag_count: break i += 1 return mask_ar def get_signal(self, episode, channel): block = self.episode_block(episode) k_sampling = np.array(block.ks_block.k_sampling) evt_channels = np.where(k_sampling == 0)[0] if channel not in evt_channels: return super(DAC2Layout, self).get_signal(episode, channel) else: k_sampling[channel - 1] = -1 return self.get_event(episode, channel, k_sampling) def get_tag(self, episode, tag_channel): """ Return a :class:`ElphyTag` which is a descriptor of the specified event channel. 
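Depending on the episode ``tag_mode`` there are either no tag channels at all, 2 of them (``tag_mode == 1``) or 16 of them (``tag_mode == 2`` or ``3``), and ``tag_channel`` must lie in the corresponding range.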
""" assert episode in range(1, self.n_episodes + 1) # there are none, 2 or 16 tag # channels depending on tag_mode tag_mode = self.tag_mode(episode) if tag_mode: block = self.episode_block(episode) x_unit = block.ep_block.x_unit # verify the validity of the tag channel if tag_mode == 1: assert tag_channel in range( 1, 3), "Elphy format support only 2 tag channels for tag_mode == 1" elif tag_mode == 2: assert tag_channel in range( 1, 17), "Elphy format support only 16 tag channels for tag_mode == 2" elif tag_mode == 3: assert tag_channel in range( 1, 17), "Elphy format support only 16 tag channels for tag_mode == 3" smp_period = block.ep_block.dX smp_freq = 1.0 / smp_period if tag_mode != 3: ch = self.get_channel_for_tags(episode) n_samples = self.n_samples(episode, ch) t_stop = (n_samples - 1) * smp_freq else: # get the max of n_samples multiplied by the sampling # period done on every analog channels in order to avoid # the selection of a channel without concrete signals t_max = list() for ch in self.analog_index(episode): n_samples = self.n_samples(episode, ch) factors = self.x_scale_factors(episode, ch) chtime = n_samples * factors.delta t_max.append(chtime) time_max = max(t_max) # as (n_samples_tag - 1) * dX_tag # and time_max = n_sample_tag * dX_tag # it comes the following duration t_stop = time_max - smp_period return ElphyTag(self, episode, tag_channel, x_unit, smp_freq, 0, t_stop) else: return None def get_event(self, ep, ch, marked_ks): """ Return a :class:`ElphyEvent` which is a descriptor of the specified event channel. """ assert ep in range(1, self.n_episodes + 1) assert ch in range(1, self.n_channels + 1) # find the event channel number evt_channel = np.where(marked_ks == -1)[0][0] assert evt_channel in range(1, self.n_events(ep) + 1) block = self.episode_block(ep) ep_blocks = self.get_blocks_stored_in_episode(ep) evt_blocks = [k for k in ep_blocks if k.identifier == 'REVT'] n_events = np.sum([k.n_events[evt_channel - 1] for k in evt_blocks], dtype=int) x_unit = block.ep_block.x_unit return ElphyEvent(self, ep, evt_channel, x_unit, n_events, ch_number=ch) def load_encoded_events(self, episode, evt_channel, identifier): """ Return times stored as a 4-bytes integer in the specified event channel. """ data_blocks = self.group_blocks_of_type(episode, identifier) ep_blocks = self.get_blocks_stored_in_episode(episode) evt_blocks = [k for k in ep_blocks if k.identifier == identifier] # compute events on each channel n_events = np.sum([k.n_events for k in evt_blocks], dtype=int, axis=0) pre_events = np.sum(n_events[0:evt_channel - 1], dtype=int) start = pre_events end = start + n_events[evt_channel - 1] expected_size = 4 * np.sum(n_events, dtype=int) return self.load_bytes(data_blocks, dtype=' 0: name = names[episode - 1] start = name.size + 1 - name.data_size + 1 end = name.end - name.start + 1 chars = self.load_bytes([name], dtype='uint8', start=start, end=end, expected_size=name.size).tolist() # print "chars[%s:%s]: %s" % (start,end,chars) episode_name = ''.join([chr(k) for k in chars]) return episode_name def get_event_data(self, episode, evt_channel): """ Return times contained in the specified event channel. This function is triggered when the 'times' property of an :class:`ElphyEvent` descriptor instance is accessed. 
""" times = self.load_encoded_events(episode, evt_channel, "REVT") block = self.episode_block(episode) return times * block.ep_block.dX / len(block.ks_block.k_sampling) def get_spiketrain(self, episode, electrode_id): """ Return a :class:`Spike` which is a descriptor of the specified spike channel. """ assert episode in range(1, self.n_episodes + 1) assert electrode_id in range(1, self.n_spiketrains(episode) + 1) # get some properties stored in the episode sub-block block = self.episode_block(episode) x_unit = block.ep_block.x_unit x_unit_wf = getattr(block.ep_block, 'x_unit_wf', None) y_unit_wf = getattr(block.ep_block, 'y_unit_wf', None) # number of spikes in the entire episode spk_blocks = [k for k in self.blocks if k.identifier == 'RSPK'] n_events = np.sum([k.n_events[electrode_id - 1] for k in spk_blocks], dtype=int) # number of samples in a waveform wf_sampling_frequency = 1.0 / block.ep_block.dX wf_blocks = [k for k in self.blocks if k.identifier == 'RspkWave'] if wf_blocks: wf_samples = wf_blocks[0].wavelength t_start = wf_blocks[0].pre_trigger * block.ep_block.dX else: wf_samples = 0 t_start = 0 return ElphySpikeTrain(self, episode, electrode_id, x_unit, n_events, wf_sampling_frequency, wf_samples, x_unit_wf, y_unit_wf, t_start) def get_spiketrain_data(self, episode, electrode_id): """ Return times contained in the specified spike channel. This function is triggered when the 'times' property of an :class:`Spike` descriptor instance is accessed. NB : The 'RSPK' block is not actually identical to the 'EVT' one, because all units relative to a time are stored directly after all event times, 1 byte for each. This function doesn't return these units. But, they could be retrieved from the 'RspkWave' block with the 'get_waveform_data function' """ block = self.episode_block(episode) times = self.load_encoded_spikes(episode, electrode_id, "RSPK") return times * block.ep_block.dX def load_encoded_waveforms(self, episode, electrode_id): """ Return times on which waveforms are defined and a numpy recarray containing all the data stored in the RspkWave block. """ # load data corresponding to the RspkWave block identifier = "RspkWave" data_blocks = self.group_blocks_of_type(episode, identifier) databytes = self.load_bytes(data_blocks) # select only data corresponding # to the specified spk_channel ep_blocks = self.get_blocks_stored_in_episode(episode) wf_blocks = [k for k in ep_blocks if k.identifier == identifier] wf_samples = wf_blocks[0].wavelength events = np.sum([k.n_spikes for k in wf_blocks], dtype=int, axis=0) n_events = events[electrode_id - 1] pre_events = np.sum(events[0:electrode_id - 1], dtype=int) start = pre_events end = start + n_events # data must be reshaped before dtype = [ # the time of the spike arrival ('elphy_time', 'u4', (1,)), ('device_time', 'u4', (1,)), # the identifier of the electrode # would also be the 'trodalness' # but this tetrode devices are not # implemented in Elphy ('channel_id', 'u2', (1,)), # the 'category' of the waveform ('unit_id', 'u1', (1,)), # do not used ('dummy', 'u1', (13,)), # samples of the waveform ('waveform', 'i2', (wf_samples,)) ] x_start = wf_blocks[0].pre_trigger x_stop = wf_samples - x_start return np.arange(-x_start, x_stop), np.frombuffer(databytes, dtype=dtype)[start:end] def get_waveform_data(self, episode, electrode_id): """ Return waveforms corresponding to the specified spike channel. This function is triggered when the ``waveforms`` property of an :class:`Spike` descriptor instance is accessed. 
""" block = self.episode_block(episode) times, databytes = self.load_encoded_waveforms(episode, electrode_id) n_events, = databytes.shape wf_samples = databytes['waveform'].shape[1] dtype = [ ('time', float), ('electrode_id', int), ('unit_id', int), ('waveform', float, (wf_samples, 2)) ] data = np.empty(n_events, dtype=dtype) data['electrode_id'] = databytes['channel_id'][:, 0] data['unit_id'] = databytes['unit_id'][:, 0] data['time'] = databytes['elphy_time'][:, 0] * block.ep_block.dX data['waveform'][:, :, 0] = times * block.ep_block.dX data['waveform'][:, :, 1] = databytes['waveform'] * \ block.ep_block.dY_wf + block.ep_block.Y0_wf return data def get_rspk_data(self, spk_channel): """ Return times stored as a 4-bytes integer in the specified event channel. """ evt_blocks = self.get_blocks_of_type('RSPK') # compute events on each channel n_events = np.sum([k.n_events for k in evt_blocks], dtype=int, axis=0) # sum of array values up to spk_channel-1!!!! pre_events = np.sum(n_events[0:spk_channel], dtype=int) start = pre_events + (7 + len(n_events)) # rspk header end = start + n_events[spk_channel] expected_size = 4 * np.sum(n_events, dtype=int) # constant return self.load_bytes(evt_blocks, dtype='= layout.data_offset)): block = self.factory.create_block(layout, offset) # create the sub blocks if it is DAC2 objects format # this is only done for B_Ep and B_Finfo blocks for # DAC2 objects format, maybe it could be useful to # spread this to other block types. # if isinstance(header, DAC2Header) and (block.identifier in ['B_Ep']) : if isinstance(header, DAC2Header) and (block.identifier in ['B_Ep', 'B_Finfo']): sub_offset = block.data_offset while sub_offset < block.start + block.size: sub_block = self.factory.create_sub_block(block, sub_offset) block.add_sub_block(sub_block) sub_offset += sub_block.size # set up some properties of some DAC2Layout sub-blocks if isinstance(sub_block, ( DAC2EpSubBlock, DAC2AdcSubBlock, DAC2KSampSubBlock, DAC2KTypeSubBlock)): block.set_episode_block() block.set_channel_block() block.set_sub_sampling_block() block.set_sample_size_block() # SpikeTrain # if isinstance(header, DAC2Header) and (block.identifier in ['RSPK']) : # print "\nElphyFile.create_layout() - RSPK" # print "ElphyFile.create_layout() - n_events",block.n_events # print "ElphyFile.create_layout() - n_evt_channels",block.n_evt_channels layout.add_block(block) offset += block.size # set up as soon as possible the shortcut # to the main block of a DAC2GSLayout if (not detect_main and isinstance(layout, DAC2GSLayout) and isinstance(block, DAC2GSMainBlock)): layout.set_main_block() detect_main = True # detect if the file is continuous when # the 'MAIN' block has been parsed if not detect_continuous: is_continuous = isinstance(header, DAC2GSHeader) and layout.is_continuous() # set up the shortcut to blocks corresponding # to episodes, only available for DAC2Layout # and also DAC2GSLayout if not continuous if isinstance(layout, DAC2Layout) or ( isinstance(layout, DAC2GSLayout) and not layout.is_continuous()): layout.set_episode_blocks() layout.set_data_blocks() # finally set up the user info block of the layout layout.set_info_block() self.file.seek(0) return layout def is_continuous(self): return self.layout.is_continuous() @property def n_episodes(self): """ Return the number of recording sequences. 
""" return self.layout.n_episodes def n_channels(self, episode): """ Return the number of recording channels involved in data acquisition and relative to the specified episode : ``episode`` : the recording sequence identifier. """ return self.layout.n_channels(episode) def n_tags(self, episode): """ Return the number of tag channels relative to the specified episode : ``episode`` : the recording sequence identifier. """ return self.layout.n_tags(episode) def n_events(self, episode): """ Return the number of event channels relative to the specified episode : ``episode`` : the recording sequence identifier. """ return self.layout.n_events(episode) def n_spiketrains(self, episode): """ Return the number of event channels relative to the specified episode : ``episode`` : the recording sequence identifier. """ return self.layout.n_spiketrains(episode) def n_waveforms(self, episode): """ Return the number of waveform channels : """ return self.layout.n_waveforms(episode) def get_signal(self, episode, channel): """ Return the signal or event descriptor relative to the specified episode and channel : ``episode`` : the recording sequence identifier. ``channel`` : the analog channel identifier. NB : For 'DAC2 objects' format, it could be also used to retrieve events. """ return self.layout.get_signal(episode, channel) def get_tag(self, episode, tag_channel): """ Return the tag descriptor relative to the specified episode and tag channel : ``episode`` : the recording sequence identifier. ``tag_channel`` : the tag channel identifier. NB : There isn't any tag channels for 'Acquis1' format. ElphyTag channels appeared after 'DAC2/GS/2000' release. They are also present in 'DAC2 objects' format. """ return self.layout.get_tag(episode, tag_channel) def get_event(self, episode, evt_channel): """ Return the event relative the specified episode and event channel. `episode`` : the recording sequence identifier. ``tag_channel`` : the tag channel identifier. """ return self.layout.get_event(episode, evt_channel) def get_spiketrain(self, episode, electrode_id): """ Return the spiketrain relative to the specified episode and electrode_id. ``episode`` : the recording sequence identifier. ``electrode_id`` : the identifier of the electrode providing the spiketrain. NB : Available only for 'DAC2 objects' format. This descriptor can return the times of a spiketrain and waveforms relative to each of these times. """ return self.layout.get_spiketrain(episode, electrode_id) @property def comments(self): raise NotImplementedError() def get_user_file_info(self): """ Return user defined file metadata. """ if not self.layout.info_block: return dict() else: return self.layout.info_block.get_user_file_info() @property def episode_info(self, ep_number): raise NotImplementedError() def get_signals(self): """ Get all available analog or event channels stored into an Elphy file. """ signals = list() for ep in range(1, self.n_episodes + 1): for ch in range(1, self.n_channels(ep) + 1): signal = self.get_signal(ep, ch) signals.append(signal) return signals def get_tags(self): """ Get all available tag channels stored into an Elphy file. """ tags = list() for ep in range(1, self.n_episodes + 1): for tg in range(1, self.n_tags(ep) + 1): tag = self.get_tag(ep, tg) tags.append(tag) return tags def get_spiketrains(self): """ Get all available spiketrains stored into an Elphy file. 
""" spiketrains = list() for ep in range(1, self.n_episodes + 1): for ch in range(1, self.n_spiketrains(ep) + 1): spiketrain = self.get_spiketrain(ep, ch) spiketrains.append(spiketrain) return spiketrains def get_rspk_spiketrains(self): """ Get all available spiketrains stored into an Elphy file. """ spiketrains = list() spk_blocks = self.layout.get_blocks_of_type('RSPK') for bl in spk_blocks: # print "ElphyFile.get_spiketrains() - identifier:",bl.identifier for ch in range(0, bl.n_evt_channels): spiketrain = self.layout.get_rspk_data(ch) spiketrains.append(spiketrain) return spiketrains def get_names(self): com_blocks = list() com_blocks = self.layout.get_blocks_of_type('COM') return com_blocks # -------------------------------------------------------- class ElphyIO(BaseIO): """ Class for reading from and writing to an Elphy file. It enables reading: - :class:`Block` - :class:`Segment` - :class:`ChannelIndex` - :class:`Event` - :class:`SpikeTrain` Usage: >>> from neo import io >>> r = io.ElphyIO(filename='ElphyExample.DAT') >>> seg = r.read_block() >>> print(seg.analogsignals) # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE >>> print(seg.spiketrains) # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE >>> print(seg.events) # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE >>> print(anasig._data_description) >>> anasig = r.read_analogsignal() >>> bl = Block() >>> # creating segments, their contents and append to bl >>> r.write_block( bl ) """ is_readable = True # This class can read data is_writable = False # This class can write data # This class is able to directly or indirectly handle the following objects supported_objects = [Block, Segment, AnalogSignal, SpikeTrain] # This class can return a Block readable_objects = [Block] # This class is not able to write objects writeable_objects = [] has_header = False is_streameable = False # This is for GUI stuff : a definition for parameters when reading. # This dict should be keyed by object (`Block`). Each entry is a list # of tuple. The first entry in each tuple is the parameter name. The # second entry is a dict with keys 'value' (for default value), # and 'label' (for a descriptive name). # Note that if the highest-level object requires parameters, # common_io_test will be skipped. read_params = { } # do not supported write so no GUI stuff write_params = { } name = 'Elphy IO' extensions = ['DAT'] # mode can be 'file' or 'dir' or 'fake' or 'database' mode = 'file' # internal serialized representation of neo data serialized = None def __init__(self, filename=None): """ Arguments: filename : the filename to read """ BaseIO.__init__(self) self.filename = filename self.elphy_file = ElphyFile(self.filename) def read_block(self, lazy=False, ): """ Return :class:`Block`. Parameters: lazy : postpone actual reading of the file. 
""" assert not lazy, 'Do not support lazy' # basic block = Block(name=None) # get analog and tag channels try: self.elphy_file.open() except Exception as e: self.elphy_file.close() raise Exception("cannot open file %s : %s" % (self.filename, e)) # create a segment containing all analog, # tag and event channels for the episode if self.elphy_file.n_episodes is None: print("File '%s' appears to have no episodes" % (self.filename)) return block for episode in range(1, self.elphy_file.n_episodes + 1): segment = self.read_segment(episode) segment.block = block block.segments.append(segment) # close file self.elphy_file.close() # result return block def write_block(self, block): """ Write a given Neo Block to an Elphy file, its structure being, for example: Neo -> Elphy -------------------------------------------------------------- Block File Segment Episode Block (B_Ep) AnalogSignalArray Episode Descriptor (Ep + Adc + Ksamp + Ktype) multichannel RDATA (with a ChannelMask multiplexing channels) 2D NumPy Array ... AnalogSignalArray AnalogSignal AnalogSignal ... ... SpikeTrain Event Block (RSPK) SpikeTrain ... Arguments:: block: the block to be saved """ # Serialize Neo structure into Elphy file # each analog signal will be serialized as elphy Episode Block (with its subblocks) # then all spiketrains will be serialized into an Rspk Block (an Event Block with addons). # Serialize (and size) all Neo structures before writing them to file # Since to write each Elphy Block is required to know in advance its size, # which includes that of its subblocks, it is necessary to # serialize first the lowest structures. # Iterate over block structures elphy_limit = 256 All = '' # print "\n\n--------------------------------------------\n" # print "write_block() - n_segments:",len(block.segments) for seg in block.segments: analogsignals = 0 # init nbchan = 0 nbpt = 0 chls = 0 Dxu = 1e-8 # 0.0000001 Rxu = 1e+8 # 10000000.0 X0uSpk = 0.0 CyberTime = 0.0 aa_units = [] NbEv = [] serialized_analog_data = '' serialized_spike_data = '' # AnalogSignals # Neo signalarrays are 2D numpy array where each row is an array of samples for a # channel: # signalarray A = [[ 1, 2, 3, 4 ], # [ 5, 6, 7, 8 ]] # signalarray B = [[ 9, 10, 11, 12 ], # [ 13, 14, 15, 16 ]] # Neo Segments can have more than one signalarray. # To be converted in Elphy analog channels they need to be all in a 2D array, not in # several 2D arrays. # Concatenate all analogsignalarrays into one and then flatten it. 
# Elphy RDATA blocks contain Fortran styled samples: # 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15, 4, 8, 12, 16 # AnalogSignalArrays -> analogsignals # get the first to have analogsignals with the right shape # Annotations for analogsignals array come as a list of int being source ids # here, put each source id on a separate dict entry in order to have a matching # afterwards idx = 0 annotations = dict() # get all the others # print "write_block() - n_analogsignals:",len(seg.analogsignals) # print "write_block() - n_analogsignalarrays:",len(seg.analogsignalarrays) for asigar in seg.analogsignalarrays: idx, annotations = self.get_annotations_dict( annotations, "analogsignal", asigar.annotations.items(), asigar.name, idx) # array structure _, chls = asigar.shape # units for _ in range(chls): aa_units.append(asigar.units) Dxu = asigar.sampling_period Rxu = asigar.sampling_rate if isinstance(analogsignals, np.ndarray): analogsignals = np.hstack((analogsignals, asigar)) else: analogsignals = asigar # first time # collect and reshape all analogsignals if isinstance(analogsignals, np.ndarray): # transpose matrix since in Neo channels are column-wise while in Elphy are # row-wise analogsignals = analogsignals.T # get dimensions nbchan, nbpt = analogsignals.shape # serialize AnalogSignal analog_data_fmt = '<' + str(analogsignals.size) + 'f' # serialized flattened numpy channels in 'F'ortran style analog_data_64 = analogsignals.flatten('F') # elphy normally uses float32 values (for performance reasons) analog_data = np.array(analog_data_64, dtype=np.float32) serialized_analog_data += struct.pack(analog_data_fmt, *analog_data) # SpikeTrains # Neo spiketrains are stored as a one-dimensional array of times # [ 0.11, 1.23, 2.34, 3.45, 4.56, 5.67, 6.78, 7.89 ... ] # These are converted into Elphy Rspk Block which will contain all of them # RDATA + NbVeV:integer for the number of channels (spiketrains) # + NbEv:integer[] for the number of event per channel # followed by the actual arrays of integer containing spike times # spiketrains = seg.spiketrains # ... 
but consider elphy loading limitation: NbVeV = len(seg.spiketrains) # print "write_block() - n_spiketrains:",NbVeV if len(seg.spiketrains) > elphy_limit: NbVeV = elphy_limit # serialize format spiketrain_data_fmt = '<' spiketrains = [] for idx, train in enumerate(seg.spiketrains[:NbVeV]): # print "write_block() - train.size:", train.size,idx # print "write_block() - train:", train fake, annotations = self.get_annotations_dict( annotations, "spiketrain", train.annotations.items(), '', idx) # annotations.update( dict( [("spiketrain-"+str(idx), # train.annotations['source_id'])] ) ) # print "write_block() - train[%s].annotation['source_id']:%s" # "" % (idx,train.annotations['source_id']) # total number of events format + blackrock sorting mark (0 for neo) spiketrain_data_fmt += str(train.size) + "i" + str(train.size) + "B" # get starting time X0uSpk = train.t_start.item() CyberTime = train.t_stop.item() # count number of events per train NbEv.append(train.size) # multiply by sampling period train = train * Rxu # all flattened spike train # blackrock acquisition card also adds a byte for each event to sort it spiketrains.extend([spike.item() for spike in train] + [0 for _ in range(train.size)]) # Annotations # print annotations # using DBrecord elphy block, they will be available as values in elphy environment # separate keys and values in two separate serialized strings ST_sub = '' st_fmt = '' st_data = [] BUF_sub = '' serialized_ST_data = '' serialized_BUF_data = '' for key in sorted(annotations.iterkeys()): # take all values, get their type and concatenate fmt = '' data = [] value = annotations[key] if isinstance(value, (int, np.int32, np.int64)): # elphy type 2 fmt = ' 0 else "episode %s" % str(episode + 1) segment = Segment(name=name) # create an analog signal for # each channel in the episode for channel in range(1, self.elphy_file.n_channels(episode) + 1): signal = self.elphy_file.get_signal(episode, channel) analog_signal = AnalogSignal( signal.data['y'], units=signal.y_unit, t_start=signal.t_start * getattr(pq, signal.x_unit.strip()), t_stop=signal.t_stop * getattr(pq, signal.x_unit.strip()), # sampling_rate = signal.sampling_frequency * pq.kHz, sampling_period=signal.sampling_period * getattr(pq, signal.x_unit.strip()), channel_name="episode %s, channel %s" % (int(episode + 1), int(channel + 1)) ) analog_signal.segment = segment segment.analogsignals.append(analog_signal) # create a spiketrain for each # spike channel in the episode # in case of multi-electrode # acquisition context n_spikes = self.elphy_file.n_spiketrains(episode) # print "read_segment() - n_spikes:",n_spikes if n_spikes > 0: for spk in range(1, n_spikes + 1): spiketrain = self.read_spiketrain(episode, spk) spiketrain.segment = segment segment.spiketrains.append(spiketrain) # segment return segment def read_channelindex(self, episode): """ Internal method used to return :class:`ChannelIndex` info. Parameters: elphy_file : is the elphy object. episode : number of elphy episode, roughly corresponding to a segment """ n_spikes = self.elphy_file.n_spikes group = ChannelIndex( name="episode %s, group of %s electrodes" % (episode, n_spikes) ) for spk in range(0, n_spikes): channel = self.read_channelindex(episode, spk) group.channel_indexes.append(channel) return group def read_recordingchannel(self, episode, chl): """ Internal method used to return a :class:`ChannelIndex` label. Parameters: elphy_file : is the elphy object. episode : number of elphy episode, roughly corresponding to a segment. chl : electrode number. 
""" channel = ChannelIndex(name="episode %s, electrodes %s" % (episode, chl), index=[0]) return channel def read_event(self, episode, evt): """ Internal method used to return a list of elphy :class:`EventArray` acquired from event channels. Parameters: elphy_file : is the elphy object. episode : number of elphy episode, roughly corresponding to a segment. evt : index of the event. """ event = self.elphy_file.get_event(episode, evt) neo_event = Event( times=event.times * pq.s, channel_name="episode %s, event channel %s" % (episode + 1, evt + 1) ) return neo_event def read_spiketrain(self, episode, spk): """ Internal method used to return an elphy object :class:`SpikeTrain`. Parameters: elphy_file : is the elphy object. episode : number of elphy episode, roughly corresponding to a segment. spk : index of the spike array. """ block = self.elphy_file.layout.episode_block(episode) spike = self.elphy_file.get_spiketrain(episode, spk) spikes = spike.times * pq.s # print "read_spiketrain() - spikes: %s" % (len(spikes)) # print "read_spiketrain() - spikes:",spikes dct = { 'times': spikes, # check 't_start': block.ep_block.X0_wf if block.ep_block.X0_wf < spikes[0] else spikes[0], 't_stop': block.ep_block.cyber_time if block.ep_block.cyber_time > spikes[-1] else spikes[-1], 'units': 's', # special keywords to identify the # electrode providing the spiketrain # event though it is redundant with # waveforms 'label': "episode %s, electrode %s" % (episode, spk), 'electrode_id': spk } # new spiketrain return SpikeTrain(**dct) neo-0.7.2/neo/io/exampleio.py0000600013464101346420000000162513507452453014227 0ustar yohyoh# -*- coding: utf-8 -*- """ neo.io have been split in 2 level API: * neo.io: this API give neo object * neo.rawio: this API give raw data as they are in files. Developper are encourage to use neo.rawio. When this is done the neo.io is done automagically with this king of following code. Author: sgarcia """ from neo.io.basefromrawio import BaseFromRaw from neo.rawio.examplerawio import ExampleRawIO class ExampleIO(ExampleRawIO, BaseFromRaw): name = 'example IO' description = "Fake IO" # This is an inportant choice when there are several channels. # 'split-all' : 1 AnalogSignal each 1 channel # 'group-by-same-units' : one 2D AnalogSignal for each group of channel with same units _prefered_signal_group_mode = 'group-by-same-units' def __init__(self, filename=''): ExampleRawIO.__init__(self, filename=filename) BaseFromRaw.__init__(self, filename) neo-0.7.2/neo/io/hdf5io.py0000600013464101346420000004422313507452453013423 0ustar yohyoh# -*- coding: utf-8 -*- """ """ from __future__ import absolute_import import sys import logging import pickle import numpy as np import quantities as pq try: import h5py except ImportError as err: HAVE_H5PY = False else: HAVE_H5PY = True from neo.core import (objectlist, Block, Segment, AnalogSignal, SpikeTrain, Epoch, Event, IrregularlySampledSignal, ChannelIndex, Unit) from neo.io.baseio import BaseIO from neo.core.baseneo import MergeError logger = logging.getLogger('Neo') def disjoint_groups(groups): """`groups` should be a list of sets""" groups = groups[:] # copy, so as not to change original for group1 in groups: for group2 in groups: if group1 != group2: if group2.issubset(group1): groups.remove(group2) elif group1.issubset(group2): groups.remove(group1) return groups class NeoHdf5IO(BaseIO): """ Class for reading HDF5 format files created by Neo version 0.4 or earlier. Writing to HDF5 is not supported by this IO; we recommend using NixIO for this. 
""" supported_objects = objectlist readable_objects = objectlist name = 'NeoHdf5 IO' extensions = ['h5'] mode = 'file' is_readable = True is_writable = False def __init__(self, filename): if not HAVE_H5PY: raise ImportError("h5py is not available") BaseIO.__init__(self, filename=filename) self._data = h5py.File(filename, 'r') self.object_refs = {} def read_all_blocks(self, lazy=False, merge_singles=True, **kargs): """ Loads all blocks in the file that are attached to the root (which happens when they are saved with save() or write_block()). If `merge_singles` is True, then the IO will attempt to merge single channel `AnalogSignal` objects into multichannel objects, and similarly for single `Epoch`, `Event` and `IrregularlySampledSignal` objects. """ assert not lazy, 'Do not support lazy' self.merge_singles = merge_singles blocks = [] for name, node in self._data.items(): if "Block" in name: blocks.append(self._read_block(node)) return blocks def read_block(self, lazy=False, **kargs): """ Load the first block in the file. """ assert not lazy, 'Do not support lazy' return self.read_all_blocks(lazy=lazy)[0] def _read_block(self, node): attributes = self._get_standard_attributes(node) if "index" in attributes: attributes["index"] = int(attributes["index"]) block = Block(**attributes) for name, child_node in node['segments'].items(): if "Segment" in name: block.segments.append(self._read_segment(child_node, parent=block)) if len(node['recordingchannelgroups']) > 0: for name, child_node in node['recordingchannelgroups'].items(): if "RecordingChannelGroup" in name: block.channel_indexes.append( self._read_recordingchannelgroup(child_node, parent=block)) self._resolve_channel_indexes(block) elif self.merge_singles: # if no RecordingChannelGroups are defined, merging # takes place here. 
for segment in block.segments: if hasattr(segment, 'unmerged_analogsignals'): segment.analogsignals.extend( self._merge_data_objects(segment.unmerged_analogsignals)) del segment.unmerged_analogsignals if hasattr(segment, 'unmerged_irregularlysampledsignals'): segment.irregularlysampledsignals.extend( self._merge_data_objects(segment.unmerged_irregularlysampledsignals)) del segment.unmerged_irregularlysampledsignals return block def _read_segment(self, node, parent): attributes = self._get_standard_attributes(node) segment = Segment(**attributes) signals = [] for name, child_node in node['analogsignals'].items(): if "AnalogSignal" in name: signals.append(self._read_analogsignal(child_node, parent=segment)) if signals and self.merge_singles: segment.unmerged_analogsignals = signals # signals will be merged later signals = [] for name, child_node in node['analogsignalarrays'].items(): if "AnalogSignalArray" in name: signals.append(self._read_analogsignalarray(child_node, parent=segment)) segment.analogsignals = signals irr_signals = [] for name, child_node in node['irregularlysampledsignals'].items(): if "IrregularlySampledSignal" in name: irr_signals.append(self._read_irregularlysampledsignal(child_node, parent=segment)) if irr_signals and self.merge_singles: segment.unmerged_irregularlysampledsignals = irr_signals irr_signals = [] segment.irregularlysampledsignals = irr_signals epochs = [] for name, child_node in node['epochs'].items(): if "Epoch" in name: epochs.append(self._read_epoch(child_node, parent=segment)) if self.merge_singles: epochs = self._merge_data_objects(epochs) for name, child_node in node['epocharrays'].items(): if "EpochArray" in name: epochs.append(self._read_epocharray(child_node, parent=segment)) segment.epochs = epochs events = [] for name, child_node in node['events'].items(): if "Event" in name: events.append(self._read_event(child_node, parent=segment)) if self.merge_singles: events = self._merge_data_objects(events) for name, child_node in node['eventarrays'].items(): if "EventArray" in name: events.append(self._read_eventarray(child_node, parent=segment)) segment.events = events spiketrains = [] for name, child_node in node['spikes'].items(): raise NotImplementedError('Spike objects not yet handled.') for name, child_node in node['spiketrains'].items(): if "SpikeTrain" in name: spiketrains.append(self._read_spiketrain(child_node, parent=segment)) segment.spiketrains = spiketrains segment.block = parent return segment def _read_analogsignalarray(self, node, parent): attributes = self._get_standard_attributes(node) # todo: handle channel_index sampling_rate = self._get_quantity(node["sampling_rate"]) t_start = self._get_quantity(node["t_start"]) signal = AnalogSignal(self._get_quantity(node["signal"]), sampling_rate=sampling_rate, t_start=t_start, **attributes) signal.segment = parent self.object_refs[node.attrs["object_ref"]] = signal return signal def _read_analogsignal(self, node, parent): return self._read_analogsignalarray(node, parent) def _read_irregularlysampledsignal(self, node, parent): attributes = self._get_standard_attributes(node) signal = IrregularlySampledSignal(times=self._get_quantity(node["times"]), signal=self._get_quantity(node["signal"]), **attributes) signal.segment = parent return signal def _read_spiketrain(self, node, parent): attributes = self._get_standard_attributes(node) t_start = self._get_quantity(node["t_start"]) t_stop = self._get_quantity(node["t_stop"]) # todo: handle sampling_rate, waveforms, left_sweep spiketrain = 
SpikeTrain(self._get_quantity(node["times"]), t_start=t_start, t_stop=t_stop, **attributes) spiketrain.segment = parent self.object_refs[node.attrs["object_ref"]] = spiketrain return spiketrain def _read_epocharray(self, node, parent): attributes = self._get_standard_attributes(node) times = self._get_quantity(node["times"]) durations = self._get_quantity(node["durations"]) labels = node["labels"].value epoch = Epoch(times=times, durations=durations, labels=labels, **attributes) epoch.segment = parent return epoch def _read_epoch(self, node, parent): return self._read_epocharray(node, parent) def _read_eventarray(self, node, parent): attributes = self._get_standard_attributes(node) times = self._get_quantity(node["times"]) labels = node["labels"].value event = Event(times=times, labels=labels, **attributes) event.segment = parent return event def _read_event(self, node, parent): return self._read_eventarray(node, parent) def _read_recordingchannelgroup(self, node, parent): # todo: handle Units attributes = self._get_standard_attributes(node) channel_indexes = node["channel_indexes"].value channel_names = node["channel_names"].value if channel_indexes.size: if len(node['recordingchannels']): raise MergeError("Cannot handle a RecordingChannelGroup which both has a " "'channel_indexes' attribute and contains " "RecordingChannel objects") raise NotImplementedError("todo") # need to handle node['analogsignalarrays'] else: channels = [] for name, child_node in node['recordingchannels'].items(): if "RecordingChannel" in name: channels.append(self._read_recordingchannel(child_node)) channel_index = ChannelIndex(None, **attributes) channel_index._channels = channels # construction of the index is deferred until we have processed # all RecordingChannelGroup nodes units = [] for name, child_node in node['units'].items(): if "Unit" in name: units.append(self._read_unit(child_node, parent=channel_index)) channel_index.units = units channel_index.block = parent return channel_index def _read_recordingchannel(self, node): attributes = self._get_standard_attributes(node) analogsignals = [] irregsignals = [] for name, child_node in node["analogsignals"].items(): if "AnalogSignal" in name: obj_ref = child_node.attrs["object_ref"] analogsignals.append(obj_ref) for name, child_node in node["irregularlysampledsignals"].items(): if "IrregularlySampledSignal" in name: obj_ref = child_node.attrs["object_ref"] irregsignals.append(obj_ref) return attributes['index'], analogsignals, irregsignals def _read_unit(self, node, parent): attributes = self._get_standard_attributes(node) spiketrains = [] for name, child_node in node["spiketrains"].items(): if "SpikeTrain" in name: obj_ref = child_node.attrs["object_ref"] spiketrains.append(self.object_refs[obj_ref]) unit = Unit(**attributes) unit.channel_index = parent unit.spiketrains = spiketrains return unit def _merge_data_objects(self, objects): if len(objects) > 1: merged_objects = [objects.pop(0)] while objects: obj = objects.pop(0) try: combined_obj_ref = merged_objects[-1].annotations['object_ref'] merged_objects[-1] = merged_objects[-1].merge(obj) merged_objects[-1].annotations['object_ref'] = combined_obj_ref + \ "-" + obj.annotations[ 'object_ref'] except MergeError: merged_objects.append(obj) for obj in merged_objects: self.object_refs[obj.annotations['object_ref']] = obj return merged_objects else: return objects def _get_quantity(self, node): value = node.value unit_str = [x for x in node.attrs.keys() if "unit" in x][0].split("__")[1] units = getattr(pq, 
unit_str) return value * units def _get_standard_attributes(self, node): """Retrieve attributes""" attributes = {} for name in ('name', 'description', 'index', 'file_origin', 'object_ref'): if name in node.attrs: attributes[name] = node.attrs[name] for name in ('rec_datetime', 'file_datetime'): if name in node.attrs: if sys.version_info.major > 2: attributes[name] = pickle.loads(node.attrs[name], encoding='bytes') else: # Python 2 doesn't have the encoding argument attributes[name] = pickle.loads(node.attrs[name]) if sys.version_info.major > 2: annotations = pickle.loads(node.attrs['annotations'], encoding='bytes') else: annotations = pickle.loads(node.attrs['annotations']) attributes.update(annotations) # avoid "dictionary changed size during iteration" error attribute_names = list(attributes.keys()) if sys.version_info.major > 2: for name in attribute_names: if isinstance(attributes[name], (bytes, np.bytes_)): attributes[name] = attributes[name].decode('utf-8') if isinstance(name, bytes): attributes[name.decode('utf-8')] = attributes[name] attributes.pop(name) return attributes def _resolve_channel_indexes(self, block): def disjoint_channel_indexes(channel_indexes): channel_indexes = channel_indexes[:] for ci1 in channel_indexes: # this works only on analogsignals signal_group1 = set(tuple(x[1]) for x in ci1._channels) for ci2 in channel_indexes: # need to take irregularly sampled signals signal_group2 = set(tuple(x[1]) for x in ci2._channels) # into account too if signal_group1 != signal_group2: if signal_group2.issubset(signal_group1): channel_indexes.remove(ci2) elif signal_group1.issubset(signal_group2): channel_indexes.remove(ci1) return channel_indexes principal_indexes = disjoint_channel_indexes(block.channel_indexes) for ci in principal_indexes: ids = [] by_segment = {} for (index, analogsignals, irregsignals) in ci._channels: # note that what was called "index" in Neo 0.3/0.4 is "id" in Neo 0.5 ids.append(index) for signal_ref in analogsignals: signal = self.object_refs[signal_ref] segment_id = id(signal.segment) if segment_id in by_segment: by_segment[segment_id]['analogsignals'].append(signal) else: by_segment[segment_id] = {'analogsignals': [signal], 'irregsignals': []} for signal_ref in irregsignals: signal = self.object_refs[signal_ref] segment_id = id(signal.segment) if segment_id in by_segment: by_segment[segment_id]['irregsignals'].append(signal) else: by_segment[segment_id] = {'analogsignals': [], 'irregsignals': [signal]} assert len(ids) > 0 if self.merge_singles: ci.channel_ids = np.array(ids) ci.index = np.arange(len(ids)) for seg_id, segment_data in by_segment.items(): # get the segment object segment = None for seg in ci.block.segments: if id(seg) == seg_id: segment = seg break assert segment is not None if segment_data['analogsignals']: merged_signals = self._merge_data_objects(segment_data['analogsignals']) assert len(merged_signals) == 1 merged_signals[0].channel_index = ci merged_signals[0].annotations['object_ref'] = "-".join( obj.annotations['object_ref'] for obj in segment_data['analogsignals']) segment.analogsignals.extend(merged_signals) ci.analogsignals = merged_signals if segment_data['irregsignals']: merged_signals = self._merge_data_objects(segment_data['irregsignals']) assert len(merged_signals) == 1 merged_signals[0].channel_index = ci merged_signals[0].annotations['object_ref'] = "-".join( obj.annotations['object_ref'] for obj in segment_data['irregsignals']) segment.irregularlysampledsignals.extend(merged_signals) ci.irregularlysampledsignals = 
merged_signals else: raise NotImplementedError() # will need to return multiple ChannelIndexes # handle non-principal channel indexes for ci in block.channel_indexes: if ci not in principal_indexes: ids = [c[0] for c in ci._channels] for cipr in principal_indexes: if ids[0] in cipr.channel_ids: break ci.analogsignals = cipr.analogsignals ci.channel_ids = np.array(ids) ci.index = np.where(np.in1d(cipr.channel_ids, ci.channel_ids))[0] neo-0.7.2/neo/io/igorproio.py0000600013464101346420000001276713507452453014266 0ustar yohyoh# -*- coding: utf-8 -*- """ Class for reading data created by IGOR Pro (WaveMetrics, Inc., Portland, OR, USA) Depends on: igor (https://pypi.python.org/pypi/igor/) Supported: Read Author: Andrew Davison Also contributing: Rick Gerkin """ from __future__ import absolute_import from warnings import warn import numpy as np import quantities as pq from neo.io.baseio import BaseIO from neo.core import Block, Segment, AnalogSignal try: import igor.binarywave as bw import igor.packed as pxp HAVE_IGOR = True except ImportError: HAVE_IGOR = False class IgorIO(BaseIO): """ Class for reading Igor Binary Waves (.ibw) or Packed Experiment (.pxp) files written by WaveMetrics’ IGOR Pro software. It requires the `igor` Python package by W. Trevor King. Usage: >>> from neo import io >>> r = io.IgorIO(filename='...ibw') """ is_readable = True # This class can only read data is_writable = False # write is not supported supported_objects = [Block, Segment, AnalogSignal] readable_objects = [Block, Segment, AnalogSignal] writeable_objects = [] has_header = False is_streameable = False name = 'igorpro' extensions = ['ibw', 'pxp'] mode = 'file' def __init__(self, filename=None, parse_notes=None): """ Arguments: filename: the filename parse_notes: (optional) A function which will parse the 'notes' field in the file header and return a dictionary which will be added to the object annotations. """ BaseIO.__init__(self) assert any([filename.endswith('.%s' % x) for x in self.extensions]), \ "Only the following extensions are supported: %s" % self.extensions self.filename = filename self.extension = filename.split('.')[-1] self.parse_notes = parse_notes def read_block(self, lazy=False): assert not lazy, 'Do not support lazy' block = Block(file_origin=self.filename) block.segments.append(self.read_segment(lazy=lazy)) block.segments[-1].block = block return block def read_segment(self, lazy=False): assert not lazy, 'Do not support lazy' segment = Segment(file_origin=self.filename) segment.analogsignals.append( self.read_analogsignal(lazy=lazy)) segment.analogsignals[-1].segment = segment return segment def read_analogsignal(self, path=None, lazy=False): assert not lazy, 'Do not support lazy' if not HAVE_IGOR: raise Exception(("`igor` package not installed. " "Try `pip install igor`")) if self.extension == 'ibw': data = bw.load(self.filename) version = data['version'] if version > 5: raise IOError(("Igor binary wave file format version {0} " "is not supported.".format(version))) elif self.extension == 'pxp': assert type(path) is str, \ "A colon-separated Igor-style path must be provided." 
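            # pxp.load returns the packed experiment as a nested mapping under
            # 'root'. The colon-separated Igor path (for example
            # 'root:folder:wave_name') is split and walked element by element,
            # each key looked up as a UTF-8 encoded byte string, until the
            # requested wave record is reached.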
_, filesystem = pxp.load(self.filename) path = path.split(':') location = filesystem['root'] for element in path: if element != 'root': location = location[element.encode('utf8')] data = location.wave content = data['wave'] if "padding" in content: assert content['padding'].size == 0, \ "Cannot handle non-empty padding" signal = content['wData'] note = content['note'] header = content['wave_header'] name = str(header['bname'].decode('utf-8')) units = "".join([x.decode() for x in header['dataUnits']]) try: time_units = "".join([x.decode() for x in header['xUnits']]) assert len(time_units) except: time_units = "s" try: t_start = pq.Quantity(header['hsB'], time_units) except KeyError: t_start = pq.Quantity(header['sfB'][0], time_units) try: sampling_period = pq.Quantity(header['hsA'], time_units) except: sampling_period = pq.Quantity(header['sfA'][0], time_units) if self.parse_notes: try: annotations = self.parse_notes(note) except ValueError: warn("Couldn't parse notes field.") annotations = {'note': note} else: annotations = {'note': note} signal = AnalogSignal(signal, units=units, copy=False, t_start=t_start, sampling_period=sampling_period, name=name, file_origin=self.filename, **annotations) return signal # the following function is to handle the annotations in the # Igor data files from the Blue Brain Project NMC Portal def key_value_string_parser(itemsep=";", kvsep=":"): """ Parses a string into a dict. Arguments: itemsep - character which separates items kvsep - character which separates the key and value within an item Returns: a function which takes the string to be parsed as the sole argument and returns a dict. Example: >>> parse = key_value_string_parser(itemsep=";", kvsep=":") >>> parse("a:2;b:3") {'a': 2, 'b': 3} """ def parser(s): items = s.split(itemsep) return dict(item.split(kvsep, 1) for item in items if item) return parser neo-0.7.2/neo/io/intanio.py0000600013464101346420000000057213507452453013705 0ustar yohyoh# -*- coding: utf-8 -*- from neo.io.basefromrawio import BaseFromRaw from neo.rawio.intanrawio import IntanRawIO class IntanIO(IntanRawIO, BaseFromRaw): __doc__ = IntanRawIO.__doc__ _prefered_signal_group_mode = 'group-by-same-units' def __init__(self, filename): IntanRawIO.__init__(self, filename=filename) BaseFromRaw.__init__(self, filename) neo-0.7.2/neo/io/klustakwikio.py0000600013464101346420000004161713507452474014775 0ustar yohyoh# -*- coding: utf-8 -*- """ Reading and writing from KlustaKwik-format files. Ref: http://klusters.sourceforge.net/UserManual/data-files.html Supported : Read, Write Author : Chris Rodgers TODO: * When reading, put the Unit into the RCG, RC hierarchy * When writing, figure out how to get group and cluster if those annotations weren't set. Consider removing those annotations if they are redundant. * Load features in addition to spiketimes. """ import glob import logging import os.path import shutil # note neo.core need only numpy and quantitie import numpy as np try: import matplotlib.mlab as mlab except ImportError as err: HAVE_MLAB = False MLAB_ERR = err else: HAVE_MLAB = True MLAB_ERR = None # I need to subclass BaseIO from neo.io.baseio import BaseIO from neo.core import Block, Segment, Unit, SpikeTrain # Pasted version of feature file format spec """ The Feature File Generic file name: base.fet.n Format: ASCII, integer values The feature file lists for each spike the PCA coefficients for each electrode, followed by the timestamp of the spike (more features can be inserted between the PCA coefficients and the timestamp). 
The first line contains the number of dimensions. Assuming N1 spikes (spike1...spikeN1), N2 electrodes (e1...eN2) and N3 coefficients (c1...cN3), this file looks like: nbDimensions c1_e1_spk1 c2_e1_spk1 ... cN3_e1_spk1 c1_e2_spk1 ... cN3_eN2_spk1 timestamp_spk1 c1_e1_spk2 c2_e1_spk2 ... cN3_e1_spk2 c1_e2_spk2 ... cN3_eN2_spk2 timestamp_spk2 ... c1_e1_spkN1 c2_e1_spkN1 ... cN3_e1_spkN1 c1_e2_spkN1 ... cN3_eN2_spkN1 timestamp_spkN1 The timestamp is expressed in multiples of the sampling interval. For instance, for a 20kHz recording (50 microsecond sampling interval), a timestamp of 200 corresponds to 200x0.000050s=0.01s from the beginning of the recording session. Notice that the last line must end with a newline or carriage return. """ class KlustaKwikIO(BaseIO): """Reading and writing from KlustaKwik-format files.""" # Class variables demonstrating capabilities of this IO is_readable = True is_writable = True # This IO can only manipulate objects relating to spike times supported_objects = [Block, SpikeTrain, Unit] # Keep things simple by always returning a block readable_objects = [Block] # And write a block writeable_objects = [Block] # Not sure what these do, if anything has_header = False is_streameable = False # GUI params read_params = {} # GUI params write_params = {} # The IO name and the file extensions it uses name = 'KlustaKwik' extensions = ['fet', 'clu', 'res', 'spk'] # Operates on directories mode = 'file' def __init__(self, filename, sampling_rate=30000.): """Create a new IO to operate on a directory filename : the directory to contain the files basename : string, basename of KlustaKwik format, or None sampling_rate : in Hz, necessary because the KlustaKwik files stores data in samples. """ if not HAVE_MLAB: raise MLAB_ERR BaseIO.__init__(self) # self.filename = os.path.normpath(filename) self.filename, self.basename = os.path.split(os.path.abspath(filename)) self.sampling_rate = float(sampling_rate) # error check if not os.path.isdir(self.filename): raise ValueError("filename must be a directory") # initialize a helper object to parse filenames self._fp = FilenameParser(dirname=self.filename, basename=self.basename) def read_block(self, lazy=False): """Returns a Block containing spike information. There is no obvious way to infer the segment boundaries from raw spike times, so for now all spike times are returned in one big segment. The way around this would be to specify the segment boundaries, and then change this code to put the spikes in the right segments. """ assert not lazy, 'Do not support lazy' # Create block and segment to hold all the data block = Block() # Search data directory for KlustaKwik files. 
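        # (valid filenames have the form basename.fet.N / basename.clu.N,
        #  where the trailing integer N is the electrode group number)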
# If nothing found, return empty block self._fetfiles = self._fp.read_filenames('fet') self._clufiles = self._fp.read_filenames('clu') if len(self._fetfiles) == 0: return block # Create a single segment to hold all of the data seg = Segment(name='seg0', index=0, file_origin=self.filename) block.segments.append(seg) # Load spike times from each group and store in a dict, keyed # by group number self.spiketrains = dict() for group in sorted(self._fetfiles.keys()): # Load spike times fetfile = self._fetfiles[group] spks, features = self._load_spike_times(fetfile) # Load cluster ids or generate if group in self._clufiles: clufile = self._clufiles[group] uids = self._load_unit_id(clufile) else: # unclustered data, assume all zeros uids = np.zeros(spks.shape, dtype=np.int32) # error check if len(spks) != len(uids): raise ValueError("lengths of fet and clu files are different") # Create Unit for each cluster unique_unit_ids = np.unique(uids) for unit_id in sorted(unique_unit_ids): # Initialize the unit u = Unit(name=('unit %d from group %d' % (unit_id, group)), index=unit_id, group=group) # Initialize a new SpikeTrain for the spikes from this unit st = SpikeTrain( times=spks[uids == unit_id] / self.sampling_rate, units='sec', t_start=0.0, t_stop=spks.max() / self.sampling_rate, name=('unit %d from group %d' % (unit_id, group))) st.annotations['cluster'] = unit_id st.annotations['group'] = group # put features in if len(features) != 0: st.annotations['waveform_features'] = features # Link u.spiketrains.append(st) seg.spiketrains.append(st) block.create_many_to_one_relationship() return block # Helper hidden functions for reading def _load_spike_times(self, fetfilename): """Reads and returns the spike times and features""" with open(fetfilename, mode='r') as f: # Number of clustering features is integer on first line nbFeatures = int(f.readline().strip()) # Each subsequent line consists of nbFeatures values, followed by # the spike time in samples. names = ['fet%d' % n for n in range(nbFeatures)] names.append('spike_time') # Load into recarray data = np.recfromtxt(fetfilename, names=names, skip_header=1, delimiter=' ') # get features features = np.array([data['fet%d' % n] for n in range(nbFeatures)]) # Return the spike_time column return data['spike_time'], features.transpose() def _load_unit_id(self, clufilename): """Reads and return the cluster ids as int32""" with open(clufilename, mode='r') as f: # Number of clusters on this tetrode is integer on first line nbClusters = int(f.readline().strip()) # Read each cluster name as a string cluster_names = f.readlines() # Convert names to integers # I think the spec requires cluster names to be integers, but # this code could be modified to support string names which are # auto-numbered. try: cluster_ids = [int(name) for name in cluster_names] except ValueError: raise ValueError( "Could not convert cluster name to integer in %s" % clufilename) # convert to numpy array and error check cluster_ids = np.array(cluster_ids, dtype=np.int32) if len(np.unique(cluster_ids)) != nbClusters: logging.warning("warning: I got %d clusters instead of %d in %s" % ( len(np.unique(cluster_ids)), nbClusters, clufilename)) return cluster_ids # writing functions def write_block(self, block): """Write spike times and unit ids to disk. Currently descends hierarchy from block to segment to spiketrain. Then gets group and cluster information from spiketrain. Then writes the time and cluster info to the file associated with that group. 
The group and cluster information are extracted from annotations, eg `sptr.annotations['group']`. If no cluster information exists, it is assigned to cluster 0. Note that all segments are essentially combined in this process, since the KlustaKwik format does not allow for segment boundaries. As implemented currently, does not use the `Unit` object at all. We first try to use the sampling rate of each SpikeTrain, or if this is not set, we use `self.sampling_rate`. If the files already exist, backup copies are created by appending the filenames with a "~". """ # set basename if self.basename is None: logging.warning("warning: no basename provided, using `basename`") self.basename = 'basename' # First create file handles for each group which will be stored self._make_all_file_handles(block) # We'll detect how many features belong in each group self._group2features = {} # Iterate through segments in this block for seg in block.segments: # Write each spiketrain of the segment for st in seg.spiketrains: # Get file handles for this spiketrain using its group group = self.st2group(st) fetfilehandle = self._fetfilehandles[group] clufilehandle = self._clufilehandles[group] # Get the id to write to clu file for this spike train cluster = self.st2cluster(st) # Choose sampling rate to convert to samples try: sr = st.annotations['sampling_rate'] except KeyError: sr = self.sampling_rate # Convert to samples spike_times_in_samples = np.rint( np.array(st) * sr).astype(np.int) # Try to get features from spiketrain try: all_features = st.annotations['waveform_features'] except KeyError: # Use empty all_features = [ [] for _ in range(len(spike_times_in_samples))] all_features = np.asarray(all_features) if all_features.ndim != 2: raise ValueError("waveform features should be 2d array") # Check number of features we're supposed to have try: n_features = self._group2features[group] except KeyError: # First time through .. set number of features n_features = all_features.shape[1] self._group2features[group] = n_features # and write to first line of file fetfilehandle.write("%d\n" % n_features) if n_features != all_features.shape[1]: raise ValueError("inconsistent number of features: " + "supposed to be %d but I got %d" % (n_features, all_features.shape[1])) # Write features and time for each spike for stt, features in zip(spike_times_in_samples, all_features): # first features for val in features: fetfilehandle.write(str(val)) fetfilehandle.write(" ") # now time fetfilehandle.write("%d\n" % stt) # and cluster id clufilehandle.write("%d\n" % cluster) # We're done, so close the files self._close_all_files() # Helper functions for writing def st2group(self, st): # Not sure this is right so make it a method in case we change it try: return st.annotations['group'] except KeyError: return 0 def st2cluster(self, st): # Not sure this is right so make it a method in case we change it try: return st.annotations['cluster'] except KeyError: return 0 def _make_all_file_handles(self, block): """Get the tetrode (group) of each neuron (cluster) by descending the hierarchy through segment and block. 
Store in a dict {group_id: list_of_clusters_in_that_group} """ group2clusters = {} for seg in block.segments: for st in seg.spiketrains: group = self.st2group(st) cluster = self.st2cluster(st) if group in group2clusters: if cluster not in group2clusters[group]: group2clusters[group].append(cluster) else: group2clusters[group] = [cluster] # Make new file handles for each group self._fetfilehandles, self._clufilehandles = {}, {} for group, clusters in group2clusters.items(): self._new_group(group, nbClusters=len(clusters)) def _new_group(self, id_group, nbClusters): # generate filenames fetfilename = os.path.join(self.filename, self.basename + ('.fet.%d' % id_group)) clufilename = os.path.join(self.filename, self.basename + ('.clu.%d' % id_group)) # back up before overwriting if os.path.exists(fetfilename): shutil.copyfile(fetfilename, fetfilename + '~') if os.path.exists(clufilename): shutil.copyfile(clufilename, clufilename + '~') # create file handles self._fetfilehandles[id_group] = open(fetfilename, mode='w') self._clufilehandles[id_group] = open(clufilename, mode='w') # write out first line # self._fetfilehandles[id_group].write("0\n") # Number of features self._clufilehandles[id_group].write("%d\n" % nbClusters) def _close_all_files(self): for val in self._fetfilehandles.values(): val.close() for val in self._clufilehandles.values(): val.close() class FilenameParser: """Simple class to interpret user's requests into KlustaKwik filenames""" def __init__(self, dirname, basename=None): """Initialize a new parser for a directory containing files dirname: directory containing files basename: basename in KlustaKwik format spec If basename is left None, then files with any basename in the directory will be used. An error is raised if files with multiple basenames exist in the directory. """ self.dirname = os.path.normpath(dirname) self.basename = basename # error check if not os.path.isdir(self.dirname): raise ValueError("filename must be a directory") def read_filenames(self, typestring='fet'): """Returns filenames in the data directory matching the type. Generally, `typestring` is one of the following: 'fet', 'clu', 'spk', 'res' Returns a dict {group_number: filename}, e.g.: { 0: 'basename.fet.0', 1: 'basename.fet.1', 2: 'basename.fet.2'} 'basename' can be any string not containing whitespace. Only filenames that begin with "basename.typestring." and end with a sequence of digits are valid. The digits are converted to an integer and used as the group number. """ all_filenames = glob.glob(os.path.join(self.dirname, '*')) # Fill the dict with valid filenames d = {} for v in all_filenames: # Test whether matches format, ie ends with digits split_fn = os.path.split(v)[1] m = glob.re.search(('^(\w+)\.%s\.(\d+)$' % typestring), split_fn) if m is not None: # get basename from first hit if not specified if self.basename is None: self.basename = m.group(1) # return files with correct basename if self.basename == m.group(1): # Key the group number to the filename # This conversion to int should always work since only # strings of digits will match the regex tetn = int(m.group(2)) d[tetn] = v return d neo-0.7.2/neo/io/kwikio.py0000600013464101346420000001550113507452453013537 0ustar yohyoh# -*- coding: utf-8 -*- """ Class for reading data from a .kwik dataset Depends on: scipy phy Supported: Read Author: Mikkel E. 
Lepperød @CINPLA """ # TODO: writing to file # needed for python 3 compatibility from __future__ import absolute_import from __future__ import division import numpy as np import quantities as pq import os try: from scipy import stats except ImportError as err: HAVE_SCIPY = False SCIPY_ERR = err else: HAVE_SCIPY = True SCIPY_ERR = None try: from klusta import kwik except ImportError as err: HAVE_KWIK = False KWIK_ERR = err else: HAVE_KWIK = True KWIK_ERR = None # I need to subclass BaseIO from neo.io.baseio import BaseIO # to import from core from neo.core import (Segment, SpikeTrain, Unit, Epoch, AnalogSignal, ChannelIndex, Block) import neo.io.tools class KwikIO(BaseIO): """ Class for "reading" experimental data from a .kwik file. Generates a :class:`Segment` with a :class:`AnalogSignal` """ is_readable = True # This class can only read data is_writable = False # write is not supported supported_objects = [Block, Segment, SpikeTrain, AnalogSignal, ChannelIndex] # This class can return either a Block or a Segment # The first one is the default ( self.read ) # These lists should go from highest object to lowest object because # common_io_test assumes it. readable_objects = [Block] # This class is not able to write objects writeable_objects = [] has_header = False is_streameable = False name = 'Kwik' description = 'This IO reads experimental data from a .kwik dataset' extensions = ['kwik'] mode = 'file' def __init__(self, filename): """ Arguments: filename : the filename """ if not HAVE_KWIK: raise KWIK_ERR BaseIO.__init__(self) self.filename = os.path.abspath(filename) model = kwik.KwikModel(self.filename) # TODO this group is loaded twice self.models = [kwik.KwikModel(self.filename, channel_group=grp) for grp in model.channel_groups] def read_block(self, lazy=False, get_waveforms=True, cluster_group=None, raw_data_units='uV', get_raw_data=False, ): """ Reads a block with segments and channel_indexes Parameters: get_waveforms: bool, default = False Wether or not to get the waveforms get_raw_data: bool, default = False Wether or not to get the raw traces raw_data_units: str, default = "uV" SI units of the raw trace according to voltage_gain given to klusta cluster_group: str, default = None Which clusters to load, possibilities are "noise", "unsorted", "good", if None all is loaded. 
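        Example (an illustrative sketch; 'experiment.kwik' is a placeholder
        filename)::

            >>> from neo.io.kwikio import KwikIO
            >>> r = KwikIO(filename='experiment.kwik')
            >>> blk = r.read_block(get_waveforms=False, get_raw_data=True,
            ...                    raw_data_units='uV', cluster_group='good')
            >>> seg = blk.segments[0]     # a single Segment holds all the data
            >>> seg.spiketrains           # one SpikeTrain per "good" cluster
            >>> seg.analogsignals         # raw traces, because get_raw_data=True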
""" assert not lazy, 'Do not support lazy' blk = Block() seg = Segment(file_origin=self.filename) blk.segments += [seg] for model in self.models: group_id = model.channel_group group_meta = {'group_id': group_id} group_meta.update(model.metadata) chx = ChannelIndex(name='channel group #{}'.format(group_id), index=model.channels, **group_meta) blk.channel_indexes.append(chx) clusters = model.spike_clusters for cluster_id in model.cluster_ids: meta = model.cluster_metadata[cluster_id] if cluster_group is None: pass elif cluster_group != meta: continue sptr = self.read_spiketrain(cluster_id=cluster_id, model=model, get_waveforms=get_waveforms, raw_data_units=raw_data_units) sptr.annotations.update({'cluster_group': meta, 'group_id': model.channel_group}) sptr.channel_index = chx unit = Unit(cluster_group=meta, group_id=model.channel_group, name='unit #{}'.format(cluster_id)) unit.spiketrains.append(sptr) chx.units.append(unit) unit.channel_index = chx seg.spiketrains.append(sptr) if get_raw_data: ana = self.read_analogsignal(model, units=raw_data_units) ana.channel_index = chx seg.analogsignals.append(ana) seg.duration = model.duration * pq.s blk.create_many_to_one_relationship() return blk def read_analogsignal(self, model, units='uV', lazy=False): """ Reads analogsignals Parameters: units: str, default = "uV" SI units of the raw trace according to voltage_gain given to klusta """ assert not lazy, 'Do not support lazy' arr = model.traces[:] * model.metadata['voltage_gain'] ana = AnalogSignal(arr, sampling_rate=model.sample_rate * pq.Hz, units=units, file_origin=model.metadata['raw_data_files']) return ana def read_spiketrain(self, cluster_id, model, lazy=False, get_waveforms=True, raw_data_units=None ): """ Reads sorted spiketrains Parameters: get_waveforms: bool, default = False Wether or not to get the waveforms cluster_id: int, Which cluster to load, according to cluster id from klusta model: klusta.kwik.KwikModel A KwikModel object obtained by klusta.kwik.KwikModel(fname) """ try: if ((not (cluster_id in model.cluster_ids))): raise ValueError except ValueError: print("Exception: cluster_id (%d) not found !! " % cluster_id) return clusters = model.spike_clusters idx = np.nonzero(clusters == cluster_id) if get_waveforms: w = model.all_waveforms[idx] # klusta: num_spikes, samples_per_spike, num_chans = w.shape w = w.swapaxes(1, 2) w = pq.Quantity(w, raw_data_units) else: w = None sptr = SpikeTrain(times=model.spike_times[idx], t_stop=model.duration, waveforms=w, units='s', sampling_rate=model.sample_rate * pq.Hz, file_origin=self.filename, **{'cluster_id': cluster_id}) return sptr neo-0.7.2/neo/io/micromedio.py0000600013464101346420000000074613507452453014376 0ustar yohyoh# -*- coding: utf-8 -*- from neo.io.basefromrawio import BaseFromRaw from neo.rawio.micromedrawio import MicromedRawIO from neo.core import Segment, AnalogSignal, Epoch, Event class MicromedIO(MicromedRawIO, BaseFromRaw): """Class for reading/writing data from Micromed files (.trc).""" _prefered_signal_group_mode = 'group-by-same-units' def __init__(self, filename): MicromedRawIO.__init__(self, filename=filename) BaseFromRaw.__init__(self, filename) neo-0.7.2/neo/io/neomatlabio.py0000600013464101346420000003442313507452453014540 0ustar yohyoh# -*- coding: utf-8 -*- """ Module for reading/writing Neo objects in MATLAB format (.mat) versions 5 to 7.2. This module is a bridge for MATLAB users who want to adopt the Neo object representation. The nomenclature is the same but using Matlab structs and cell arrays. 
With this module MATLAB users can use neo.io to read a format and convert it to .mat. Supported : Read/Write Author: sgarcia, Robert Pröpper """ from datetime import datetime from distutils import version import re import numpy as np import quantities as pq # check scipy try: import scipy.io import scipy.version except ImportError as err: HAVE_SCIPY = False SCIPY_ERR = err else: if version.LooseVersion(scipy.version.version) < '0.12.0': HAVE_SCIPY = False SCIPY_ERR = ImportError("your scipy version is too old to support " + "MatlabIO, you need at least 0.12.0. " + "You have %s" % scipy.version.version) else: HAVE_SCIPY = True SCIPY_ERR = None from neo.io.baseio import BaseIO from neo.core import (Block, Segment, AnalogSignal, Event, Epoch, SpikeTrain, objectnames, class_by_name) classname_lower_to_upper = {} for k in objectnames: classname_lower_to_upper[k.lower()] = k class NeoMatlabIO(BaseIO): """ Class for reading/writing Neo objects in MATLAB format (.mat) versions 5 to 7.2. This module is a bridge for MATLAB users who want to adopt the Neo object representation. The nomenclature is the same but using Matlab structs and cell arrays. With this module MATLAB users can use neo.io to read a format and convert it to .mat. Rules of conversion: * Neo classes are converted to MATLAB structs. e.g., a Block is a struct with attributes "name", "file_datetime", ... * Neo one_to_many relationships are cellarrays in MATLAB. e.g., ``seg.analogsignals[2]`` in Python Neo will be ``seg.analogsignals{3}`` in MATLAB. * Quantity attributes are represented by 2 fields in MATLAB. e.g., ``anasig.t_start = 1.5 * s`` in Python will be ``anasig.t_start = 1.5`` and ``anasig.t_start_unit = 's'`` in MATLAB. * classes that inherit from Quantity (AnalogSignal, SpikeTrain, ...) in Python will have 2 fields (array and units) in the MATLAB struct. e.g.: ``AnalogSignal( [1., 2., 3.], 'V')`` in Python will be ``anasig.array = [1. 2. 3]`` and ``anasig.units = 'V'`` in MATLAB. 
1 - **Scenario 1: create data in MATLAB and read them in Python** This MATLAB code generates a block:: block = struct(); block.segments = { }; block.name = 'my block with matlab'; for s = 1:3 seg = struct(); seg.name = strcat('segment ',num2str(s)); seg.analogsignals = { }; for a = 1:5 anasig = struct(); anasig.signal = rand(100,1); anasig.signal_units = 'mV'; anasig.t_start = 0; anasig.t_start_units = 's'; anasig.sampling_rate = 100; anasig.sampling_rate_units = 'Hz'; seg.analogsignals{a} = anasig; end seg.spiketrains = { }; for t = 1:7 sptr = struct(); sptr.times = rand(30,1)*10; sptr.times_units = 'ms'; sptr.t_start = 0; sptr.t_start_units = 'ms'; sptr.t_stop = 10; sptr.t_stop_units = 'ms'; seg.spiketrains{t} = sptr; end event = struct(); event.times = [0, 10, 30]; event.times_units = 'ms'; event.labels = ['trig0'; 'trig1'; 'trig2']; seg.events{1} = event; epoch = struct(); epoch.times = [10, 20]; epoch.times_units = 'ms'; epoch.durations = [4, 10]; epoch.durations_units = 'ms'; epoch.labels = ['a0'; 'a1']; seg.epochs{1} = epoch; block.segments{s} = seg; end save 'myblock.mat' block -V7 This code reads it in Python:: import neo r = neo.io.NeoMatlabIO(filename='myblock.mat') bl = r.read_block() print bl.segments[1].analogsignals[2] print bl.segments[1].spiketrains[4] 2 - **Scenario 2: create data in Python and read them in MATLAB** This Python code generates the same block as in the previous scenario:: import neo import quantities as pq from scipy import rand, array bl = neo.Block(name='my block with neo') for s in range(3): seg = neo.Segment(name='segment' + str(s)) bl.segments.append(seg) for a in range(5): anasig = neo.AnalogSignal(rand(100)*pq.mV, t_start=0*pq.s, sampling_rate=100*pq.Hz) seg.analogsignals.append(anasig) for t in range(7): sptr = neo.SpikeTrain(rand(40)*pq.ms, t_start=0*pq.ms, t_stop=10*pq.ms) seg.spiketrains.append(sptr) ev = neo.Event([0, 10, 30]*pq.ms, labels=array(['trig0', 'trig1', 'trig2'])) ep = neo.Epoch([10, 20]*pq.ms, durations=[4, 10]*pq.ms, labels=array(['a0', 'a1'])) seg.events.append(ev) seg.epochs.append(ep) from neo.io.neomatlabio import NeoMatlabIO w = NeoMatlabIO(filename='myblock.mat') w.write_block(bl) This MATLAB code reads it:: load 'myblock.mat' block.name block.segments{2}.analogsignals{3}.signal block.segments{2}.analogsignals{3}.signal_units block.segments{2}.analogsignals{3}.t_start block.segments{2}.analogsignals{3}.t_start_units 3 - **Scenario 3: conversion** This Python code converts a Spike2 file to MATLAB:: from neo import Block from neo.io import Spike2IO, NeoMatlabIO r = Spike2IO(filename='spike2.smr') w = NeoMatlabIO(filename='convertedfile.mat') blocks = r.read() w.write(blocks[0]) """ is_readable = True is_writable = True supported_objects = [Block, Segment, AnalogSignal, Epoch, Event, SpikeTrain] readable_objects = [Block] writeable_objects = [Block] has_header = False is_streameable = False read_params = {Block: []} write_params = {Block: []} name = 'neomatlab' extensions = ['mat'] mode = 'file' def __init__(self, filename=None): """ This class read/write neo objects in matlab 5 to 7.2 format. 
Arguments: filename : the filename to read """ if not HAVE_SCIPY: raise SCIPY_ERR BaseIO.__init__(self) self.filename = filename def read_block(self, lazy=False): """ Arguments: """ assert not lazy, 'Do not support lazy' d = scipy.io.loadmat(self.filename, struct_as_record=False, squeeze_me=True, mat_dtype=True) if 'block' not in d: self.logger.exception('No block in ' + self.filename) return None bl_struct = d['block'] bl = self.create_ob_from_struct( bl_struct, 'Block') bl.create_many_to_one_relationship() return bl def write_block(self, bl, **kargs): """ Arguments: bl: the block to b saved """ bl_struct = self.create_struct_from_obj(bl) for seg in bl.segments: seg_struct = self.create_struct_from_obj(seg) bl_struct['segments'].append(seg_struct) for anasig in seg.analogsignals: anasig_struct = self.create_struct_from_obj(anasig) seg_struct['analogsignals'].append(anasig_struct) for ea in seg.events: ea_struct = self.create_struct_from_obj(ea) seg_struct['events'].append(ea_struct) for ea in seg.epochs: ea_struct = self.create_struct_from_obj(ea) seg_struct['epochs'].append(ea_struct) for sptr in seg.spiketrains: sptr_struct = self.create_struct_from_obj(sptr) seg_struct['spiketrains'].append(sptr_struct) scipy.io.savemat(self.filename, {'block': bl_struct}, oned_as='row') def create_struct_from_obj(self, ob): struct = {} # relationship for childname in getattr(ob, '_single_child_containers', []): supported_containers = [subob.__name__.lower() + 's' for subob in self.supported_objects] if childname in supported_containers: struct[childname] = [] # attributes for i, attr in enumerate(ob._all_attrs): attrname, attrtype = attr[0], attr[1] # ~ if attrname =='': # ~ struct['array'] = ob.magnitude # ~ struct['units'] = ob.dimensionality.string # ~ continue if (hasattr(ob, '_quantity_attr') and ob._quantity_attr == attrname): struct[attrname] = ob.magnitude struct[attrname + '_units'] = ob.dimensionality.string continue if not (attrname in ob.annotations or hasattr(ob, attrname)): continue if getattr(ob, attrname) is None: continue if attrtype == pq.Quantity: # ndim = attr[2] struct[attrname] = getattr(ob, attrname).magnitude struct[attrname + '_units'] = getattr( ob, attrname).dimensionality.string elif attrtype == datetime: struct[attrname] = str(getattr(ob, attrname)) else: struct[attrname] = getattr(ob, attrname) return struct def create_ob_from_struct(self, struct, classname): cl = class_by_name[classname] # check if hinerits Quantity # ~ is_quantity = False # ~ for attr in cl._necessary_attrs: # ~ if attr[0] == '' and attr[1] == pq.Quantity: # ~ is_quantity = True # ~ break # ~ is_quantiy = hasattr(cl, '_quantity_attr') # ~ if is_quantity: if hasattr(cl, '_quantity_attr'): quantity_attr = cl._quantity_attr arr = getattr(struct, quantity_attr) # ~ data_complement = dict(units=str(struct.units)) data_complement = dict(units=str( getattr(struct, quantity_attr + '_units'))) if "sampling_rate" in (at[0] for at in cl._necessary_attrs): # put fake value for now, put correct value later data_complement["sampling_rate"] = 0 * pq.kHz try: len(arr) except TypeError: # strange scipy.io behavior: if len is 1 we get a float arr = np.array(arr) arr = arr.reshape((-1,)) # new view with one dimension if "t_stop" in (at[0] for at in cl._necessary_attrs): if len(arr) > 0: data_complement["t_stop"] = arr.max() else: data_complement["t_stop"] = 0.0 if "t_start" in (at[0] for at in cl._necessary_attrs): if len(arr) > 0: data_complement["t_start"] = arr.min() else: data_complement["t_start"] = 0.0 ob = cl(arr, 
**data_complement) else: ob = cl() for attrname in struct._fieldnames: # check children if attrname in getattr(ob, '_single_child_containers', []): child_struct = getattr(struct, attrname) try: # try must only surround len() or other errors are captured child_len = len(child_struct) except TypeError: # strange scipy.io behavior: if len is 1 there is no len() child = self.create_ob_from_struct( child_struct, classname_lower_to_upper[attrname[:-1]]) getattr(ob, attrname.lower()).append(child) else: for c in range(child_len): child = self.create_ob_from_struct( child_struct[c], classname_lower_to_upper[attrname[:-1]]) getattr(ob, attrname.lower()).append(child) continue # attributes if attrname.endswith('_units') or attrname == 'units': # linked with another field continue if (hasattr(cl, '_quantity_attr') and cl._quantity_attr == attrname): continue item = getattr(struct, attrname) attributes = cl._necessary_attrs + cl._recommended_attrs dict_attributes = dict([(a[0], a[1:]) for a in attributes]) if attrname in dict_attributes: attrtype = dict_attributes[attrname][0] if attrtype == datetime: m = r'(\d+)-(\d+)-(\d+) (\d+):(\d+):(\d+).(\d+)' r = re.findall(m, str(item)) if len(r) == 1: item = datetime(*[int(e) for e in r[0]]) else: item = None elif attrtype == np.ndarray: dt = dict_attributes[attrname][2] item = item.astype(dt) elif attrtype == pq.Quantity: ndim = dict_attributes[attrname][1] units = str(getattr(struct, attrname + '_units')) if ndim == 0: item = pq.Quantity(item, units) else: item = pq.Quantity(item, units) else: item = attrtype(item) setattr(ob, attrname, item) return ob neo-0.7.2/neo/io/nestio.py0000600013464101346420000007637613507452453013564 0ustar yohyoh# -*- coding: utf-8 -*- """ Class for reading output files from NEST simulations ( http://www.nest-simulator.org/ ). Tested with NEST2.10.0 Depends on: numpy, quantities Supported: Read Authors: Julia Sprenger, Maximilian Schmidt, Johanna Senk """ # needed for Python3 compatibility from __future__ import absolute_import import os.path import warnings from datetime import datetime import numpy as np import quantities as pq from neo.io.baseio import BaseIO from neo.core import Block, Segment, SpikeTrain, AnalogSignal value_type_dict = {'V': pq.mV, 'I': pq.pA, 'g': pq.CompoundUnit("10^-9*S"), 'no type': pq.dimensionless} class NestIO(BaseIO): """ Class for reading NEST output files. GDF files for the spike data and DAT files for analog signals are possible. Usage: >>> from neo.io.nestio import NestIO >>> files = ['membrane_voltages-1261-0.dat', 'spikes-1258-0.gdf'] >>> r = NestIO(filenames=files) >>> seg = r.read_segment(gid_list=[], t_start=400 * pq.ms, t_stop=600 * pq.ms, id_column_gdf=0, time_column_gdf=1, id_column_dat=0, time_column_dat=1, value_columns_dat=2) """ is_readable = True # class supports reading, but not writing is_writable = False supported_objects = [SpikeTrain, AnalogSignal, Segment, Block] readable_objects = [SpikeTrain, AnalogSignal, Segment, Block] has_header = False is_streameable = False write_params = None # writing is not supported name = 'nest' extensions = ['gdf', 'dat'] mode = 'file' def __init__(self, filenames=None): """ Parameters ---------- filenames: string or list of strings, default=None The filename or list of filenames to load. 
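        Example (using the placeholder filenames from the class docstring)::

            >>> io = NestIO(filenames='spikes-1258-0.gdf')             # GDF only
            >>> io = NestIO(filenames=['membrane_voltages-1261-0.dat',
            ...                        'spikes-1258-0.gdf'])           # DAT + GDF

        A single string is wrapped into a one-element list internally.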
""" if isinstance(filenames, str): filenames = [filenames] self.filenames = filenames self.avail_formats = {} self.avail_IOs = {} for filename in filenames: path, ext = os.path.splitext(filename) ext = ext.strip('.') if ext in self.extensions: if ext in self.avail_IOs: raise ValueError('Received multiple files with "%s" ' 'extention. Can only load single file of ' 'this type.' % ext) self.avail_IOs[ext] = ColumnIO(filename) self.avail_formats[ext] = path def __read_analogsignals(self, gid_list, time_unit, t_start=None, t_stop=None, sampling_period=None, id_column=0, time_column=1, value_columns=2, value_types=None, value_units=None): """ Internal function called by read_analogsignal() and read_segment(). """ if 'dat' not in self.avail_formats: raise ValueError('Can not load analogsignals. No DAT file ' 'provided.') # checking gid input parameters gid_list, id_column = self._check_input_gids(gid_list, id_column) # checking time input parameters t_start, t_stop = self._check_input_times(t_start, t_stop, mandatory=False) # checking value input parameters (value_columns, value_types, value_units) = \ self._check_input_values_parameters(value_columns, value_types, value_units) # defining standard column order for internal usage # [id_column, time_column, value_column1, value_column2, ...] column_ids = [id_column, time_column] + value_columns for i, cid in enumerate(column_ids): if cid is None: column_ids[i] = -1 # assert that no single column is assigned twice column_list = [id_column, time_column] + value_columns column_list_no_None = [c for c in column_list if c is not None] if len(np.unique(column_list_no_None)) < len(column_list_no_None): raise ValueError( 'One or more columns have been specified to contain ' 'the same data. Columns were specified to %s.' '' % column_list_no_None) # extracting condition and sorting parameters for raw data loading (condition, condition_column, sorting_column) = self._get_conditions_and_sorting(id_column, time_column, gid_list, t_start, t_stop) # loading raw data columns data = self.avail_IOs['dat'].get_columns( column_ids=column_ids, condition=condition, condition_column=condition_column, sorting_columns=sorting_column) sampling_period = self._check_input_sampling_period(sampling_period, time_column, time_unit, data) analogsignal_list = [] # extracting complete gid list for anasig generation if (gid_list == []) and id_column is not None: gid_list = np.unique(data[:, id_column]) # generate analogsignals for each neuron ID for i in gid_list: selected_ids = self._get_selected_ids( i, id_column, time_column, t_start, t_stop, time_unit, data) # extract starting time of analogsignal if (time_column is not None) and data.size: anasig_start_time = data[selected_ids[0], 1] * time_unit else: # set t_start equal to sampling_period because NEST starts # recording only after 1 sampling_period anasig_start_time = 1. 
* sampling_period # create one analogsignal per value column requested for v_id, value_column in enumerate(value_columns): signal = data[ selected_ids[0]:selected_ids[1], value_column] # create AnalogSignal objects and annotate them with # the neuron ID analogsignal_list.append(AnalogSignal( signal * value_units[v_id], sampling_period=sampling_period, t_start=anasig_start_time, id=i, type=value_types[v_id])) # check for correct length of analogsignal assert (analogsignal_list[-1].t_stop == anasig_start_time + len(signal) * sampling_period) return analogsignal_list def __read_spiketrains(self, gdf_id_list, time_unit, t_start, t_stop, id_column, time_column, **args): """ Internal function for reading multiple spiketrains at once. This function is called by read_spiketrain() and read_segment(). """ if 'gdf' not in self.avail_IOs: raise ValueError('Can not load spiketrains. No GDF file provided.') # assert that the file contains spike times if time_column is None: raise ValueError('Time column is None. No spike times to ' 'be read in.') gdf_id_list, id_column = self._check_input_gids(gdf_id_list, id_column) t_start, t_stop = self._check_input_times(t_start, t_stop, mandatory=True) # assert that no single column is assigned twice if id_column == time_column: raise ValueError('One or more columns have been specified to ' 'contain the same data.') # defining standard column order for internal usage # [id_column, time_column, value_column1, value_column2, ...] column_ids = [id_column, time_column] for i, cid in enumerate(column_ids): if cid is None: column_ids[i] = -1 (condition, condition_column, sorting_column) = \ self._get_conditions_and_sorting(id_column, time_column, gdf_id_list, t_start, t_stop) data = self.avail_IOs['gdf'].get_columns( column_ids=column_ids, condition=condition, condition_column=condition_column, sorting_columns=sorting_column) # create a list of SpikeTrains for all neuron IDs in gdf_id_list # assign spike times to neuron IDs if id_column is given if id_column is not None: if (gdf_id_list == []) and id_column is not None: gdf_id_list = np.unique(data[:, id_column]) spiketrain_list = [] for nid in gdf_id_list: selected_ids = self._get_selected_ids(nid, id_column, time_column, t_start, t_stop, time_unit, data) times = data[selected_ids[0]:selected_ids[1], time_column] spiketrain_list.append(SpikeTrain( times, units=time_unit, t_start=t_start, t_stop=t_stop, id=nid, **args)) # if id_column is not given, all spike times are collected in one # spike train with id=None else: train = data[:, time_column] spiketrain_list = [SpikeTrain(train, units=time_unit, t_start=t_start, t_stop=t_stop, id=None, **args)] return spiketrain_list def _check_input_times(self, t_start, t_stop, mandatory=True): """ Checks input times for existence and setting default values if necessary. t_start: pq.quantity.Quantity, start time of the time range to load. t_stop: pq.quantity.Quantity, stop time of the time range to load. mandatory: bool, if True times can not be None and an error will be raised. if False, time values of None will be replaced by -infinity or infinity, respectively. default: True. """ if t_stop is None: if mandatory: raise ValueError('No t_start specified.') else: t_stop = np.inf * pq.s if t_start is None: if mandatory: raise ValueError('No t_stop specified.') else: t_start = -np.inf * pq.s for time in (t_start, t_stop): if not isinstance(time, pq.quantity.Quantity): raise TypeError('Time value (%s) is not a quantity.' 
% time) return t_start, t_stop def _check_input_values_parameters(self, value_columns, value_types, value_units): """ Checks value parameters for consistency. value_columns: int, column id containing the value to load. value_types: list of strings, type of values. value_units: list of units of the value columns. Returns adjusted list of [value_columns, value_types, value_units] """ if value_columns is None: raise ValueError('No value column provided.') if isinstance(value_columns, int): value_columns = [value_columns] if value_types is None: value_types = ['no type'] * len(value_columns) elif isinstance(value_types, str): value_types = [value_types] # translating value types into units as far as possible if value_units is None: short_value_types = [vtype.split('_')[0] for vtype in value_types] if not all([svt in value_type_dict for svt in short_value_types]): raise ValueError('Can not interpret value types ' '"%s"' % value_types) value_units = [value_type_dict[svt] for svt in short_value_types] # checking for same number of value types, units and columns if not (len(value_types) == len(value_units) == len(value_columns)): raise ValueError('Length of value types, units and columns does ' 'not match (%i,%i,%i)' % (len(value_types), len(value_units), len(value_columns))) if not all([isinstance(vunit, pq.UnitQuantity) for vunit in value_units]): raise ValueError('No value unit or standard value type specified.') return value_columns, value_types, value_units def _check_input_gids(self, gid_list, id_column): """ Checks gid values and column for consistency. gid_list: list of int or None, gid to load. id_column: int, id of the column containing the gids. Returns adjusted list of [gid_list, id_column]. """ if gid_list is None: gid_list = [gid_list] if None in gid_list and id_column is not None: raise ValueError('No neuron IDs specified but file contains ' 'neuron IDs in column %s. Specify empty list to ' 'retrieve spiketrains of all neurons.' '' % str(id_column)) if gid_list != [None] and id_column is None: raise ValueError('Specified neuron IDs to be %s, but no ID column ' 'specified.' % gid_list) return gid_list, id_column def _check_input_sampling_period(self, sampling_period, time_column, time_unit, data): """ Checks sampling period, times and time unit for consistency. sampling_period: pq.quantity.Quantity, sampling period of data to load. time_column: int, column id of times in data to load. time_unit: pq.quantity.Quantity, unit of time used in the data to load. data: numpy array, the data to be loaded / interpreted. Returns pq.quantities.Quantity object, the updated sampling period. """ if sampling_period is None: if time_column is not None: data_sampling = np.unique( np.diff(sorted(np.unique(data[:, 1])))) if len(data_sampling) > 1: raise ValueError('Different sampling distances found in ' 'data set (%s)' % data_sampling) else: dt = data_sampling[0] else: raise ValueError('Can not estimate sampling rate without time ' 'column id provided.') sampling_period = pq.CompoundUnit(str(dt) + '*' + time_unit.units.u_symbol) elif not isinstance(sampling_period, pq.UnitQuantity): raise ValueError("sampling_period is not specified as a unit.") return sampling_period def _get_conditions_and_sorting(self, id_column, time_column, gid_list, t_start, t_stop): """ Calculates the condition, condition_column and sorting_column based on other parameters supplied for loading the data. id_column: int, id of the column containing gids. time_column: int, id of the column containing times. 
gid_list: list of int, gid to be loaded. t_start: pq.quantity.Quantity, start of the time range to be loaded. t_stop: pq.quantity.Quantity, stop of the time range to be loaded. Returns updated [condition, condition_column, sorting_column]. """ condition, condition_column = None, None sorting_column = [] curr_id = 0 if ((gid_list != [None]) and (gid_list is not None)): if gid_list != []: def condition(x): return x in gid_list condition_column = id_column sorting_column.append(curr_id) # Sorting according to gids first curr_id += 1 if time_column is not None: sorting_column.append(curr_id) # Sorting according to time curr_id += 1 elif t_start != -np.inf and t_stop != np.inf: warnings.warn('Ignoring t_start and t_stop parameters, because no ' 'time column id is provided.') if sorting_column == []: sorting_column = None else: sorting_column = sorting_column[::-1] return condition, condition_column, sorting_column def _get_selected_ids(self, gid, id_column, time_column, t_start, t_stop, time_unit, data): """ Calculates the data range to load depending on the selected gid and the provided time range (t_start, t_stop) gid: int, gid to be loaded. id_column: int, id of the column containing gids. time_column: int, id of the column containing times. t_start: pq.quantity.Quantity, start of the time range to load. t_stop: pq.quantity.Quantity, stop of the time range to load. time_unit: pq.quantity.Quantity, time unit of the data to load. data: numpy array, data to load. Returns list of selected gids """ gid_ids = np.array([0, data.shape[0]]) if id_column is not None: gid_ids = np.array([np.searchsorted(data[:, 0], gid, side='left'), np.searchsorted(data[:, 0], gid, side='right')]) gid_data = data[gid_ids[0]:gid_ids[1], :] # select only requested time range id_shifts = np.array([0, 0]) if time_column is not None: id_shifts[0] = np.searchsorted(gid_data[:, 1], t_start.rescale( time_unit).magnitude, side='left') id_shifts[1] = (np.searchsorted(gid_data[:, 1], t_stop.rescale( time_unit).magnitude, side='left') - gid_data.shape[0]) selected_ids = gid_ids + id_shifts return selected_ids def read_block(self, gid_list=None, time_unit=pq.ms, t_start=None, t_stop=None, sampling_period=None, id_column_dat=0, time_column_dat=1, value_columns_dat=2, id_column_gdf=0, time_column_gdf=1, value_types=None, value_units=None, lazy=False): assert not lazy, 'Do not support lazy' seg = self.read_segment(gid_list, time_unit, t_start, t_stop, sampling_period, id_column_dat, time_column_dat, value_columns_dat, id_column_gdf, time_column_gdf, value_types, value_units) blk = Block(file_origin=seg.file_origin, file_datetime=seg.file_datetime) blk.segments.append(seg) seg.block = blk return blk def read_segment(self, gid_list=None, time_unit=pq.ms, t_start=None, t_stop=None, sampling_period=None, id_column_dat=0, time_column_dat=1, value_columns_dat=2, id_column_gdf=0, time_column_gdf=1, value_types=None, value_units=None, lazy=False): """ Reads a Segment which contains SpikeTrain(s) with specified neuron IDs from the GDF data. Arguments ---------- gid_list : list, default: None A list of GDF IDs of which to return SpikeTrain(s). gid_list must be specified if the GDF file contains neuron IDs, the default None then raises an error. Specify an empty list [] to retrieve the spike trains of all neurons. time_unit : Quantity (time), optional, default: quantities.ms The time unit of recorded time stamps in DAT as well as GDF files. t_start : Quantity (time), optional, default: 0 * pq.ms Start time of SpikeTrain. 
t_stop : Quantity (time), default: None Stop time of SpikeTrain. t_stop must be specified, the default None raises an error. sampling_period : Quantity (frequency), optional, default: None Sampling period of the recorded data. id_column_dat : int, optional, default: 0 Column index of neuron IDs in the DAT file. time_column_dat : int, optional, default: 1 Column index of time stamps in the DAT file. value_columns_dat : int, optional, default: 2 Column index of the analog values recorded in the DAT file. id_column_gdf : int, optional, default: 0 Column index of neuron IDs in the GDF file. time_column_gdf : int, optional, default: 1 Column index of time stamps in the GDF file. value_types : str, optional, default: None Nest data type of the analog values recorded, eg.'V_m', 'I', 'g_e' value_units : Quantity (amplitude), default: None The physical unit of the recorded signal values. lazy : bool, optional, default: False Returns ------- seg : Segment The Segment contains one SpikeTrain and one AnalogSignal for each ID in gid_list. """ assert not lazy, 'Do not support lazy' if isinstance(gid_list, tuple): if gid_list[0] > gid_list[1]: raise ValueError('The second entry in gid_list must be ' 'greater or equal to the first entry.') gid_list = range(gid_list[0], gid_list[1] + 1) # __read_xxx() needs a list of IDs if gid_list is None: gid_list = [None] # create an empty Segment seg = Segment(file_origin=",".join(self.filenames)) seg.file_datetime = datetime.fromtimestamp(os.stat(self.filenames[0]).st_mtime) # todo: rather than take the first file for the timestamp, we should take the oldest # in practice, there won't be much difference # Load analogsignals and attach to Segment if 'dat' in self.avail_formats: seg.analogsignals = self.__read_analogsignals( gid_list, time_unit, t_start, t_stop, sampling_period=sampling_period, id_column=id_column_dat, time_column=time_column_dat, value_columns=value_columns_dat, value_types=value_types, value_units=value_units) if 'gdf' in self.avail_formats: seg.spiketrains = self.__read_spiketrains( gid_list, time_unit, t_start, t_stop, id_column=id_column_gdf, time_column=time_column_gdf) return seg def read_analogsignal(self, gid=None, time_unit=pq.ms, t_start=None, t_stop=None, sampling_period=None, id_column=0, time_column=1, value_column=2, value_type=None, value_unit=None, lazy=False): """ Reads an AnalogSignal with specified neuron ID from the DAT data. Arguments ---------- gid : int, default: None The GDF ID of the returned SpikeTrain. gdf_id must be specified if the GDF file contains neuron IDs, the default None then raises an error. Specify an empty list [] to retrieve the spike trains of all neurons. time_unit : Quantity (time), optional, default: quantities.ms The time unit of recorded time stamps. t_start : Quantity (time), optional, default: 0 * pq.ms Start time of SpikeTrain. t_stop : Quantity (time), default: None Stop time of SpikeTrain. t_stop must be specified, the default None raises an error. sampling_period : Quantity (frequency), optional, default: None Sampling period of the recorded data. id_column : int, optional, default: 0 Column index of neuron IDs. time_column : int, optional, default: 1 Column index of time stamps. value_column : int, optional, default: 2 Column index of the analog values recorded. value_type : str, optional, default: None Nest data type of the analog values recorded, eg.'V_m', 'I', 'g_e'. value_unit : Quantity (amplitude), default: None The physical unit of the recorded signal values. 
lazy : bool, optional, default: False Returns ------- spiketrain : SpikeTrain The requested SpikeTrain object with an annotation 'id' corresponding to the gdf_id parameter. """ assert not lazy, 'Do not support lazy' # __read_spiketrains() needs a list of IDs return self.__read_analogsignals([gid], time_unit, t_start, t_stop, sampling_period=sampling_period, id_column=id_column, time_column=time_column, value_columns=value_column, value_types=value_type, value_units=value_unit)[0] def read_spiketrain( self, gdf_id=None, time_unit=pq.ms, t_start=None, t_stop=None, id_column=0, time_column=1, lazy=False, **args): """ Reads a SpikeTrain with specified neuron ID from the GDF data. Arguments ---------- gdf_id : int, default: None The GDF ID of the returned SpikeTrain. gdf_id must be specified if the GDF file contains neuron IDs. Providing [] loads all available IDs. time_unit : Quantity (time), optional, default: quantities.ms The time unit of recorded time stamps. t_start : Quantity (time), default: None Start time of SpikeTrain. t_start must be specified. t_stop : Quantity (time), default: None Stop time of SpikeTrain. t_stop must be specified. id_column : int, optional, default: 0 Column index of neuron IDs. time_column : int, optional, default: 1 Column index of time stamps. lazy : bool, optional, default: False Returns ------- spiketrain : SpikeTrain The requested SpikeTrain object with an annotation 'id' corresponding to the gdf_id parameter. """ assert not lazy, 'Do not support lazy' if (not isinstance(gdf_id, int)) and gdf_id is not None: raise ValueError('gdf_id has to be of type int or None.') if gdf_id is None and id_column is not None: raise ValueError('No neuron ID specified but file contains ' 'neuron IDs in column ' + str(id_column) + '.') return self.__read_spiketrains([gdf_id], time_unit, t_start, t_stop, id_column, time_column, **args)[0] class ColumnIO: ''' Class for reading an ASCII file containing multiple columns of data. ''' def __init__(self, filename): """ filename: string, path to ASCII file to read. """ self.filename = filename # read the first line to check the data type (int or float) of the data f = open(self.filename) line = f.readline() additional_parameters = {} if '.' not in line: additional_parameters['dtype'] = np.int32 self.data = np.loadtxt(self.filename, **additional_parameters) if len(self.data.shape) == 1: self.data = self.data[:, np.newaxis] def get_columns(self, column_ids='all', condition=None, condition_column=None, sorting_columns=None): """ column_ids : 'all' or list of int, the ids of columns to extract. condition : None or function, which is applied to each row to evaluate if it should be included in the result. Needs to return a bool value. condition_column : int, id of the column on which the condition function is applied to sorting_columns : int or list of int, column ids to sort by. List entries have to be ordered by increasing sorting priority! Returns ------- numpy array containing the requested data. """ if column_ids == [] or column_ids == 'all': column_ids = range(self.data.shape[-1]) if isinstance(column_ids, (int, float)): column_ids = [column_ids] column_ids = np.array(column_ids) if column_ids is not None: if max(column_ids) >= len(self.data) - 1: raise ValueError('Can not load column ID %i. 
File contains ' 'only %i columns' % (max(column_ids), len(self.data))) if sorting_columns is not None: if isinstance(sorting_columns, int): sorting_columns = [sorting_columns] if (max(sorting_columns) >= self.data.shape[1]): raise ValueError('Can not sort by column ID %i. File contains ' 'only %i columns' % (max(sorting_columns), self.data.shape[1])) # Starting with whole dataset being selected for return selected_data = self.data # Apply filter condition to rows if condition and (condition_column is None): raise ValueError('Filter condition provided, but no ' 'condition_column ID provided') elif (condition_column is not None) and (condition is None): warnings.warn('Condition column ID provided, but no condition ' 'given. No filtering will be performed.') elif (condition is not None) and (condition_column is not None): condition_function = np.vectorize(condition) mask = condition_function( selected_data[:, condition_column]).astype(bool) selected_data = selected_data[mask, :] # Apply sorting if requested if sorting_columns is not None: values_to_sort = selected_data[:, sorting_columns].T ordered_ids = np.lexsort(tuple(values_to_sort[i] for i in range(len(values_to_sort)))) selected_data = selected_data[ordered_ids, :] # Select only requested columns selected_data = selected_data[:, column_ids] return selected_data neo-0.7.2/neo/io/neuralynxio.py0000600013464101346420000000204513507452453014616 0ustar yohyoh# -*- coding: utf-8 -*- """ Class for reading data from Neuralynx files. This IO supports NCS, NEV and NSE file formats. Depends on: numpy Supported: Read Author: Julia Sprenger, Carlos Canova """ # needed for python 3 compatibility from __future__ import absolute_import, division from neo.io.basefromrawio import BaseFromRaw from neo.rawio.neuralynxrawio import NeuralynxRawIO class NeuralynxIO(NeuralynxRawIO, BaseFromRaw): """ Class for reading data from Neuralynx files. This IO supports NCS, NEV, NSE and NTT file formats. NCS contains signals for one channel NEV contains events NSE contains spikes and waveforms for mono electrodes NTT contains spikes and waveforms for tetrodes """ _prefered_signal_group_mode = 'group-by-same-units' mode = 'dir' def __init__(self, dirname, use_cache=False, cache_path='same_as_resource'): NeuralynxRawIO.__init__(self, dirname=dirname, use_cache=use_cache, cache_path=cache_path) BaseFromRaw.__init__(self, dirname) neo-0.7.2/neo/io/neuralynxio_v1.py0000600013464101346420000031556513507452453015242 0ustar yohyoh# -*- coding: utf-8 -*- """ Class for reading data from Neuralynx files. This IO supports NCS, NEV and NSE file formats. This module is an older implementation with old neo.io API. A new class NeuralynxIO compunded by NeuralynxRawIO and BaseFromIO superseed this one. Depends on: numpy Supported: Read Author: Julia Sprenger, Carlos Canova Adapted from the exampleIO of python-neo """ # needed for python 3 compatibility from __future__ import absolute_import, division import sys import os import warnings import codecs import copy import re import datetime import pkg_resources import numpy as np import quantities as pq from neo.io.baseio import BaseIO from neo.core import (Block, Segment, ChannelIndex, AnalogSignal, SpikeTrain, Event, Unit) from os import listdir, sep from os.path import isfile, getsize import hashlib import pickle if hasattr(pkg_resources, 'pkg_resources'): parse_version = pkg_resources.pkg_resources.parse_version else: parse_version = pkg_resources.parse_version class NeuralynxIO(BaseIO): """ Class for reading Neuralynx files. 
It enables reading: - :class:'Block' - :class:'Segment' - :class:'AnalogSignal' - :class:'SpikeTrain' Usage: from neo import io import quantities as pq import matplotlib.pyplot as plt session_folder = '../Data/2014-07-24_10-31-02' NIO = io.NeuralynxIO(session_folder,print_diagnostic = True) block = NIO.read_block(t_starts = 0.1*pq.s, t_stops = 0.2*pq.s, events=True) seg = block.segments[0] analogsignal = seg.analogsignals[0] plt.plot(analogsignal.times.rescale(pq.ms), analogsignal.magnitude) plt.show() """ is_readable = True # This class can only read data is_writable = False # write is not supported # This class is able to directly or indirectly handle the following objects # You can notice that this greatly simplifies the full Neo object hierarchy supported_objects = [Segment, AnalogSignal, SpikeTrain, Event] # This class can return either a Block or a Segment # The first one is the default ( self.read ) # These lists should go from highest object to lowest object because # common_io_test assumes it. readable_objects = [Segment, AnalogSignal, SpikeTrain] # This class is not able to write objects writeable_objects = [] has_header = False is_streameable = False # This is for GUI stuff : a definition for parameters when reading. # This dict should be keyed by object (`Block`). Each entry is a list # of tuple. The first entry in each tuple is the parameter name. The # second entry is a dict with keys 'value' (for default value), # and 'label' (for a descriptive name). # Note that if the highest-level object requires parameters, # common_io_test will be skipped. read_params = { Segment: [('waveforms', {'value': True})], Block: [('waveforms', {'value': False})] } # do not supported write so no GUI stuff write_params = None name = 'Neuralynx' description = 'This IO reads .nse/.ncs/.nev files of the Neuralynx (' \ 'Cheetah) recordings system (tetrodes).' extensions = ['nse', 'ncs', 'nev', 'ntt'] # mode can be 'file' or 'dir' or 'fake' or 'database' # the main case is 'file' but some reader are base on a directory or # a database this info is for GUI stuff also mode = 'dir' # hardcoded parameters from manual, which are not present in Neuralynx # data files # unit of timestamps in different files nev_time_unit = pq.microsecond ncs_time_unit = pq.microsecond nse_time_unit = pq.microsecond ntt_time_unit = pq.microsecond # unit of sampling rate in different files ncs_sr_unit = pq.Hz nse_sr_unit = pq.Hz ntt_sr_unit = pq.Hz def __init__(self, sessiondir=None, cachedir=None, use_cache='hash', print_diagnostic=False, filename=None): """ Arguments: sessiondir: the directory the files of the recording session are collected. Default 'None'. print_diagnostic: indicates, whether information about the loading of data is printed in terminal or not. Default 'False'. cachedir: the directory where metadata about the recording session is read from and written to. use_cache: method used for cache identification. Possible values: 'hash'/ 'always'/'datesize'/'never'. Default 'hash' filename: this argument is handles the same as sessiondir and is only added for external IO interfaces. The value of sessiondir has priority over filename. 
""" BaseIO.__init__(self) # possiblity to provide filename instead of sessiondir for IO # compatibility if filename is not None and sessiondir is None: sessiondir = filename if sessiondir is None: raise ValueError('Must provide a directory containing data files of' ' of one recording session.') # remove filename if specific file was passed if any([sessiondir.endswith('.%s' % ext) for ext in self.extensions]): sessiondir = sessiondir[:sessiondir.rfind(sep)] # remove / for consistent directory handling if sessiondir.endswith(sep): sessiondir = sessiondir.rstrip(sep) # set general parameters of this IO self.sessiondir = sessiondir self.filename = sessiondir.split(sep)[-1] self._print_diagnostic = print_diagnostic self.associated = False self._associate(cachedir=cachedir, usecache=use_cache) self._diagnostic_print( 'Initialized IO for session %s' % self.sessiondir) def read_block(self, lazy=False, cascade=True, t_starts=None, t_stops=None, electrode_list=None, unit_list=None, analogsignals=True, events=False, waveforms=False): """ Reads data in a requested time window and returns block with as many segments es necessary containing these data. Arguments: lazy : Postpone actual reading of the data files. Default 'False'. cascade : Do not postpone reading subsequent neo types (segments). Default 'True'. t_starts : list of quantities or quantity describing the start of the requested time window to load. If None or [None] the complete session is loaded. Default 'None'. t_stops : list of quantities or quantity describing the end of the requested time window to load. Has to contain the same number of values as t_starts. If None or [None] the complete session is loaded. Default 'None'. electrode_list : list of integers containing the IDs of the requested to load. If [] or None all available channels will be loaded. Default: None. unit_list : list of integers containing the IDs of the requested units to load. If [] or None all available units will be loaded. Default: None. analogsignals : boolean, indication whether analogsignals should be read. Default: True. events : Loading events. If True all available events in the given time window will be read. Default: False. waveforms : Load waveform for spikes in the requested time window. Default: False. Returns: Block object containing the requested data in neo structures. 
Usage: from neo import io import quantities as pq import matplotlib.pyplot as plt session_folder = '../Data/2014-07-24_10-31-02' NIO = io.NeuralynxIO(session_folder,print_diagnostic = True) block = NIO.read_block(lazy = False, cascade = True, t_starts = 0.1*pq.s, t_stops = 0.2*pq.s, electrode_list = [1,5,10], unit_list = [1,2,3], events = True, waveforms = True) plt.plot(block.segments[0].analogsignals[0]) plt.show() """ # Create block bl = Block(file_origin=self.sessiondir) bl.name = self.filename if not cascade: return bl # Checking input of t_start and t_stop # For lazy users that specify x,x instead of [x],[x] for t_starts, # t_stops if t_starts is None: t_starts = [None] elif type(t_starts) == pq.Quantity: t_starts = [t_starts] elif type(t_starts) != list or any( [(type(i) != pq.Quantity and i is not None) for i in t_starts]): raise ValueError('Invalid specification of t_starts.') if t_stops is None: t_stops = [None] elif type(t_stops) == pq.Quantity: t_stops = [t_stops] elif type(t_stops) != list or any( [(type(i) != pq.Quantity and i is not None) for i in t_stops]): raise ValueError('Invalid specification of t_stops.') # adapting t_starts and t_stops to known gap times (extracted in # association process / initialization) for gap in self.parameters_global['gaps']: # gap=gap_list[0] for e in range(len(t_starts)): t1, t2 = t_starts[e], t_stops[e] gap_start = gap[1] * self.ncs_time_unit - \ self.parameters_global['t_start'] gap_stop = gap[2] * self.ncs_time_unit - self.parameters_global[ 't_start'] if ((t1 is None and t2 is None) or (t1 is None and t2 is not None and t2.rescale( self.ncs_time_unit) > gap_stop) or (t2 is None and t1 is not None and t1.rescale( self.ncs_time_unit) < gap_stop) or (t1 is not None and t2 is not None and t1.rescale( self.ncs_time_unit) < gap_start and t2.rescale(self.ncs_time_unit) > gap_stop)): # adapting first time segment t_stops[e] = gap_start # inserting second time segment t_starts.insert(e + 1, gap_stop) t_stops.insert(e + 1, t2) warnings.warn( 'Substituted t_starts and t_stops in order to skip ' 'gap in recording session.') # loading all channels if empty electrode_list if electrode_list == [] or electrode_list is None: electrode_list = self.parameters_ncs.keys() # adding a segment for each t_start, t_stop pair for t_start, t_stop in zip(t_starts, t_stops): seg = self.read_segment(lazy=lazy, cascade=cascade, t_start=t_start, t_stop=t_stop, electrode_list=electrode_list, unit_list=unit_list, analogsignals=analogsignals, events=events, waveforms=waveforms) bl.segments.append(seg) # generate units units = [] channel_unit_collection = {} for st in [s for seg in bl.segments for s in seg.spiketrains]: # collecting spiketrains of same channel and unit id to generate # common unit chuid = (st.annotations['channel_index'], st.annotations['unit_id']) if chuid in channel_unit_collection: channel_unit_collection[chuid].append(st) else: channel_unit_collection[chuid] = [st] for chuid in channel_unit_collection: sts = channel_unit_collection[chuid] unit = Unit(name='Channel %i, Unit %i' % chuid) unit.spiketrains.extend(sts) units.append(unit) # generate one channel indexes for each analogsignal for anasig in [a for seg in bl.segments for a in seg.analogsignals]: channelids = anasig.annotations['channel_index'] channel_names = ['channel %i' % i for i in channelids] channelidx = ChannelIndex(index=range(len(channelids)), channel_names=channel_names, name='channel ids for all analogsignal ' '"%s"' % anasig.name, channel_ids=channelids) 
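# The ChannelIndex created above is linked to the signal here and collected
# on the block below; bl.create_many_to_one_relationship() at the end of
# read_block() fills in the reverse (child-to-parent) references.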
channelidx.analogsignals.append(anasig) bl.channel_indexes.append(channelidx) # generate channel indexes for units channelids = [unit.spiketrains[0].annotations['channel_index'] for unit in units] channel_names = ['channel %i' % i for i in channelids] channelidx = ChannelIndex(index=range(len(channelids)), channel_names=channel_names, name='channel ids for all spiketrains', channel_ids=channelids) channelidx.units.extend(units) bl.channel_indexes.append(channelidx) bl.create_many_to_one_relationship() # Adding global parameters to block annotation bl.annotations.update(self.parameters_global) return bl def read_segment(self, lazy=False, cascade=True, t_start=None, t_stop=None, electrode_list=None, unit_list=None, analogsignals=True, events=False, waveforms=False): """Reads one Segment. The Segment will contain one AnalogSignal for each channel and will go from t_start to t_stop. Arguments: lazy : Postpone actual reading of the data files. Default 'False'. cascade : Do not postpone reading subsequent neo types (SpikeTrains, AnalogSignals, Events). Default 'True'. t_start : time (quantity) that the Segment begins. Default None. t_stop : time (quantity) that the Segment ends. Default None. electrode_list : list of integers containing the IDs of the requested to load. If [] or None all available channels will be loaded. Default: None. unit_list : list of integers containing the IDs of the requested units to load. If [] or None all available units will be loaded. If False, no unit will be loaded. Default: None. analogsignals : boolean, indication whether analogsignals should be read. Default: True. events : Loading events. If True all available events in the given time window will be read. Default: False. waveforms : Load waveform for spikes in the requested time window. Default: False. Returns: Segment object containing neo objects, which contain the data. """ # input check # loading all channels if empty electrode_list if electrode_list == [] or electrode_list is None: electrode_list = self.parameters_ncs.keys() elif electrode_list is None: raise ValueError('Electrode_list can not be None.') elif [v for v in electrode_list if v in self.parameters_ncs.keys()] == []: # warn if non of the requested channels are present in this session warnings.warn('Requested channels %s are not present in session ' '(contains only %s)' % ( electrode_list, self.parameters_ncs.keys())) electrode_list = [] seg = Segment(file_origin=self.filename) if not cascade: return seg # generate empty segment for analogsignal collection empty_seg = Segment(file_origin=self.filename) # Reading NCS Files # # selecting ncs files to load based on electrode_list requested if analogsignals: for chid in electrode_list: if chid in self.parameters_ncs: file_ncs = self.parameters_ncs[chid]['filename'] self.read_ncs(file_ncs, empty_seg, lazy, cascade, t_start=t_start, t_stop=t_stop) else: self._diagnostic_print('Can not load ncs of channel %i. ' 'No corresponding ncs file ' 'present.' 
% (chid)) # supplementory merge function, should be replaced by neo utility # function def merge_analogsignals(anasig_list): for aid, anasig in enumerate(anasig_list): anasig.channel_index = None if aid == 0: full_analogsignal = anasig else: full_analogsignal = full_analogsignal.merge(anasig) for key in anasig_list[0].annotations.keys(): listified_values = [a.annotations[key] for a in anasig_list] full_analogsignal.annotations[key] = listified_values return full_analogsignal analogsignal = merge_analogsignals(empty_seg.analogsignals) seg.analogsignals.append(analogsignal) analogsignal.segment = seg # Reading NEV Files (Events)# # reading all files available if events: for filename_nev in self.nev_asso: self.read_nev(filename_nev, seg, lazy, cascade, t_start=t_start, t_stop=t_stop) # Reading Spike Data only if requested if unit_list is not False: # Reading NSE Files (Spikes)# # selecting nse files to load based on electrode_list requested for chid in electrode_list: if chid in self.parameters_nse: filename_nse = self.parameters_nse[chid]['filename'] self.read_nse(filename_nse, seg, lazy, cascade, t_start=t_start, t_stop=t_stop, waveforms=waveforms) else: self._diagnostic_print('Can not load nse of channel %i. ' 'No corresponding nse file ' 'present.' % (chid)) # Reading ntt Files (Spikes)# # selecting ntt files to load based on electrode_list requested for chid in electrode_list: if chid in self.parameters_ntt: filename_ntt = self.parameters_ntt[chid]['filename'] self.read_ntt(filename_ntt, seg, lazy, cascade, t_start=t_start, t_stop=t_stop, waveforms=waveforms) else: self._diagnostic_print('Can not load ntt of channel %i. ' 'No corresponding ntt file ' 'present.' % (chid)) return seg def read_ncs(self, filename_ncs, seg, lazy=False, cascade=True, t_start=None, t_stop=None): ''' Reading a single .ncs file from the associated Neuralynx recording session. In case of a recording gap between t_start and t_stop, data are only loaded until gap start. For loading data across recording gaps use read_block(...). Arguments: filename_ncs : Name of the .ncs file to be loaded. seg : Neo Segment, to which the AnalogSignal containing the data will be attached. lazy : Postpone actual reading of the data. Instead provide a dummy AnalogSignal. Default 'False'. cascade : Not used in this context. Default: 'True'. t_start : time or sample (quantity or integer) that the AnalogSignal begins. Default None. t_stop : time or sample (quantity or integer) that the AnalogSignal ends. Default None. Returns: None ''' # checking format of filename and correcting if necessary if filename_ncs[-4:] != '.ncs': filename_ncs = filename_ncs + '.ncs' if sep in filename_ncs: filename_ncs = filename_ncs.split(sep)[-1] # Extracting the channel id from prescan (association) of ncs files with # this recording session chid = self.get_channel_id_by_file_name(filename_ncs) if chid is None: raise ValueError('NeuralynxIO is attempting to read a file ' 'not associated to this session (%s).' 
% ( filename_ncs)) if not cascade: return # read data header_time_data = self.__mmap_ncs_packet_timestamps(filename_ncs) data = self.__mmap_ncs_data(filename_ncs) # ensure meaningful values for requested start and stop times # in case time is provided in samples: transform to absolute time units if isinstance(t_start, int): t_start = t_start / self.parameters_ncs[chid]['sampling_rate'] if isinstance(t_stop, int): t_stop = t_stop / self.parameters_ncs[chid]['sampling_rate'] # rescaling to global start time of recording (time of first sample # in any file type) if t_start is None or t_start < ( self.parameters_ncs[chid]['t_start'] - self.parameters_global[ 't_start']): t_start = ( self.parameters_ncs[chid]['t_start'] - self.parameters_global[ 't_start']) if t_start > ( self.parameters_ncs[chid]['t_stop'] - self.parameters_global[ 't_start']): raise ValueError( 'Requested times window (%s to %s) is later than data are ' 'recorded (t_stop = %s) ' 'for file %s.' % (t_start, t_stop, (self.parameters_ncs[chid]['t_stop'] - self.parameters_global['t_start']), filename_ncs)) if t_stop is None or t_stop > ( self.parameters_ncs[chid]['t_stop'] - self.parameters_global[ 't_start']): t_stop = ( self.parameters_ncs[chid]['t_stop'] - self.parameters_global[ 't_start']) if t_stop < ( self.parameters_ncs[chid]['t_start'] - self.parameters_global['t_start']): raise ValueError( 'Requested times window (%s to %s) is earlier than data ' 'are ' 'recorded (t_start = %s) ' 'for file %s.' % (t_start, t_stop, (self.parameters_ncs[chid]['t_start'] - self.parameters_global['t_start']), filename_ncs)) if t_start >= t_stop: raise ValueError( 'Requested start time (%s) is later than / equal to stop ' 'time ' '(%s) ' 'for file %s.' % (t_start, t_stop, filename_ncs)) # Extracting data signal in requested time window unit = pq.dimensionless # default value if lazy: sig = [] p_id_start = 0 else: tstamps = header_time_data * self.ncs_time_unit - \ self.parameters_global['t_start'] # find data packet to start with signal construction starts = np.where(tstamps <= t_start)[0] if len(starts) == 0: self._diagnostic_print( 'Requested AnalogSignal not present in this time ' 'interval.') return else: # first packet to be included into signal p_id_start = starts[-1] # find data packet where signal ends (due to gap or t_stop) stops = np.where(tstamps >= t_stop)[0] if len(stops) != 0: first_stop = [stops[0]] else: first_stop = [] # last packet to be included in signal p_id_stop = min(first_stop + [len(data)]) # search gaps in recording in time range to load gap_packets = [gap_id[0] for gap_id in self.parameters_ncs[chid]['gaps'] if gap_id[0] > p_id_start] if len(gap_packets) > 0 and min(gap_packets) < p_id_stop: p_id_stop = min(gap_packets) warnings.warn( 'Analogsignalarray was shortened due to gap in ' 'recorded ' 'data ' ' of file %s at packet id %i' % ( filename_ncs, min(gap_packets))) # search broken packets in time range to load broken_packets = [] if 'broken_packet' in self.parameters_ncs[chid]: broken_packets = [packet[0] for packet in self.parameters_ncs[chid]['broken_packet'] if packet[0] > p_id_start] if len(broken_packets) > 0 and min(broken_packets) < p_id_stop: p_id_stop = min(broken_packets) warnings.warn( 'Analogsignalarray was shortened due to broken data ' 'packet in recorded data ' ' of file %s at packet id %i' % ( filename_ncs, min(broken_packets))) # construct signal in valid packet range sig = np.array(data[p_id_start:p_id_stop + 1], dtype=float) sig = sig.reshape(len(sig) * len(sig[0])) # ADBitVolts is not 
guaranteed to be present in the header! if 'ADBitVolts' in self.parameters_ncs[chid]: sig *= self.parameters_ncs[chid]['ADBitVolts'] unit = pq.V else: warnings.warn( 'Could not transform data from file %s into physical ' 'signal. ' 'Missing "ADBitVolts" value in text header.') # defining sampling rate for rescaling purposes sampling_rate = self.parameters_ncs[chid]['sampling_unit'][0] # creating neo AnalogSignal containing data anasig = AnalogSignal(signal=pq.Quantity(sig, unit, copy=False), sampling_rate=1 * sampling_rate, # rescaling t_start to sampling time units t_start=(header_time_data[p_id_start] * self.ncs_time_unit - self.parameters_global['t_start']).rescale( 1 / sampling_rate), name='channel_%i' % (chid), channel_index=chid) # removing protruding parts of first and last data packet if anasig.t_start < t_start.rescale(anasig.t_start.units): anasig = anasig.time_slice(t_start.rescale(anasig.t_start.units), None) if anasig.t_stop > t_stop.rescale(anasig.t_start.units): anasig = anasig.time_slice(None, t_stop.rescale(anasig.t_start.units)) annotations = copy.deepcopy(self.parameters_ncs[chid]) for pop_key in ['sampling_rate', 't_start']: if pop_key in annotations: annotations.pop(pop_key) anasig.annotations.update(annotations) anasig.annotations['electrode_id'] = chid # this annotation is necesary for automatic genereation of # recordingchannels anasig.annotations['channel_index'] = chid anasig.segment = seg # needed for merge function of analogsignals seg.analogsignals.append(anasig) def read_nev(self, filename_nev, seg, lazy=False, cascade=True, t_start=None, t_stop=None): ''' Reads associated nev file and attaches its content as eventarray to provided neo segment. In constrast to read_ncs times can not be provided in number of samples as a nev file has no inherent sampling rate. Arguments: filename_nev : Name of the .nev file to be loaded. seg : Neo Segment, to which the Event containing the data will be attached. lazy : Postpone actual reading of the data. Instead provide a dummy Event. Default 'False'. cascade : Not used in this context. Default: 'True'. t_start : time (quantity) that the Events begin. Default None. t_stop : time (quantity) that the Event end. Default None. Returns: None ''' if filename_nev[-4:] != '.nev': filename_nev += '.nev' if sep in filename_nev: filename_nev = filename_nev.split(sep)[-1] if filename_nev not in self.nev_asso: raise ValueError('NeuralynxIO is attempting to read a file ' 'not associated to this session (%s).' % ( filename_nev)) # # ensure meaningful values for requested start and stop times # # providing time is samples for nev file does not make sense as we # don't know the underlying sampling rate if isinstance(t_start, int): raise ValueError( 'Requesting event information from nev file in samples ' 'does ' 'not make sense. ' 'Requested t_start %s' % t_start) if isinstance(t_stop, int): raise ValueError( 'Requesting event information from nev file in samples ' 'does ' 'not make sense. ' 'Requested t_stop %s' % t_stop) # ensure meaningful values for requested start and stop times if t_start is None or t_start < ( self.parameters_nev[filename_nev]['t_start'] - self.parameters_global['t_start']): t_start = (self.parameters_nev[filename_nev]['t_start'] - self.parameters_global['t_start']) if t_start > (self.parameters_nev[filename_nev]['t_stop'] - self.parameters_global['t_start']): raise ValueError( 'Requested times window (%s to %s) is later than data are ' 'recorded (t_stop = %s) ' 'for file %s.' 
% (t_start, t_stop, (self.parameters_nev[filename_nev]['t_stop'] - self.parameters_global['t_start']), filename_nev)) if t_stop is None or t_stop > ( self.parameters_nev[filename_nev]['t_stop'] - self.parameters_global['t_start']): t_stop = (self.parameters_nev[filename_nev]['t_stop'] - self.parameters_global['t_start']) if t_stop < (self.parameters_nev[filename_nev]['t_start'] - self.parameters_global['t_start']): raise ValueError( 'Requested times window (%s to %s) is earlier than data ' 'are ' 'recorded (t_start = %s) ' 'for file %s.' % (t_start, t_stop, ( self.parameters_nev[filename_nev][ 't_start'] - self.parameters_global['t_start']), filename_nev)) if t_start >= t_stop: raise ValueError( 'Requested start time (%s) is later than / equal to stop ' 'time ' '(%s) ' 'for file %s.' % (t_start, t_stop, filename_nev)) data = self.__mmap_nev_file(filename_nev) # Extracting all events for one event type and put it into an event # array # TODO: Check if this is the correct way of event creation. for event_type in self.parameters_nev[filename_nev]['event_types']: # Extract all time stamps of digital markers and rescaling time type_mask = [i for i in range(len(data)) if (data[i][4] == event_type['event_id'] and data[i][5] == event_type['nttl'] and data[i][10].decode('latin-1') == event_type[ 'name'])] marker_times = [t[3] for t in data[type_mask]] * self.nev_time_unit - \ self.parameters_global['t_start'] # only consider Events in the requested time window [t_start, # t_stop] time_mask = [i for i in range(len(marker_times)) if ( marker_times[i] >= t_start and marker_times[i] <= t_stop)] marker_times = marker_times[time_mask] # Do not create an eventarray if there are no events of this type # in the requested time range if len(marker_times) == 0: continue ev = Event(times=pq.Quantity(marker_times, units=self.nev_time_unit, dtype="int"), labels=event_type['name'], name="Digital Marker " + str(event_type), file_origin=filename_nev, marker_id=event_type['event_id'], digital_marker=True, analog_marker=False, nttl=event_type['nttl']) seg.events.append(ev) def read_nse(self, filename_nse, seg, lazy=False, cascade=True, t_start=None, t_stop=None, unit_list=None, waveforms=False): ''' Reads nse file and attaches content as spike train to provided neo segment. Times can be provided in samples (integer values). If the nse file does not contain a sampling rate value, the ncs sampling rate on the same electrode is used. Arguments: filename_nse : Name of the .nse file to be loaded. seg : Neo Segment, to which the Spiketrain containing the data will be attached. lazy : Postpone actual reading of the data. Instead provide a dummy SpikeTrain. Default 'False'. cascade : Not used in this context. Default: 'True'. t_start : time or sample (quantity or integer) that the SpikeTrain begins. Default None. t_stop : time or sample (quantity or integer) that the SpikeTrain ends. Default None. unit_list : unit ids to be loaded. If [], all units are loaded. Default None. waveforms : Load the waveform (up to 32 data points) for each spike time. 
Default: False Returns: None ''' if filename_nse[-4:] != '.nse': filename_nse += '.nse' if sep in filename_nse: filename_nse = filename_nse.split(sep)[-1] # extracting channel id of requested file channel_id = self.get_channel_id_by_file_name(filename_nse) if channel_id is not None: chid = channel_id else: # if nse file is empty it is not listed in self.parameters_nse, but # in self.nse_avail if filename_nse in self.nse_avail: warnings.warn('NeuralynxIO is attempting to read an empty ' '(not associated) nse file (%s). ' 'Not loading nse file.' % (filename_nse)) return else: raise ValueError('NeuralynxIO is attempting to read a file ' 'not associated to this session (%s).' % ( filename_nse)) # ensure meaningful values for requested start and stop times # in case time is provided in samples: transform to absolute time units # ncs sampling rate is best guess if there is no explicit sampling # rate given for nse values. if 'sampling_rate' in self.parameters_nse[chid]: sr = self.parameters_nse[chid]['sampling_rate'] elif chid in self.parameters_ncs and 'sampling_rate' in \ self.parameters_ncs[chid]: sr = self.parameters_ncs[chid]['sampling_rate'] else: raise ValueError( 'No sampling rate present for channel id %i in nse file ' '%s. ' 'Could also not find the sampling rate of the respective ' 'ncs ' 'file.' % ( chid, filename_nse)) if isinstance(t_start, int): t_start = t_start / sr if isinstance(t_stop, int): t_stop = t_stop / sr # + rescaling global recording start (first sample in any file type) # This is not optimal, as there is no way to know how long the # recording lasted after last spike if t_start is None or t_start < ( self.parameters_nse[chid]['t_first'] - self.parameters_global[ 't_start']): t_start = ( self.parameters_nse[chid]['t_first'] - self.parameters_global[ 't_start']) if t_start > ( self.parameters_nse[chid]['t_last'] - self.parameters_global['t_start']): raise ValueError( 'Requested times window (%s to %s) is later than data are ' 'recorded (t_stop = %s) ' 'for file %s.' % (t_start, t_stop, (self.parameters_nse[chid]['t_last'] - self.parameters_global['t_start']), filename_nse)) if t_stop is None: t_stop = (sys.maxsize) * self.nse_time_unit if t_stop is None or t_stop > ( self.parameters_nse[chid]['t_last'] - self.parameters_global[ 't_start']): t_stop = ( self.parameters_nse[chid]['t_last'] - self.parameters_global[ 't_start']) if t_stop < ( self.parameters_nse[chid]['t_first'] - self.parameters_global[ 't_start']): raise ValueError( 'Requested times window (%s to %s) is earlier than data ' 'are recorded (t_start = %s) ' 'for file %s.' % (t_start, t_stop, (self.parameters_nse[chid]['t_first'] - self.parameters_global['t_start']), filename_nse)) if t_start >= t_stop: raise ValueError( 'Requested start time (%s) is later than / equal to stop ' 'time ' '(%s) for file %s.' 
% (t_start, t_stop, filename_nse)) # reading data [timestamps, channel_ids, cell_numbers, features, data_points] = self.__mmap_nse_packets(filename_nse) # load all units available if unit_list==[] or None if unit_list == [] or unit_list is None: unit_list = np.unique(cell_numbers) elif not any([u in cell_numbers for u in unit_list]): self._diagnostic_print( 'None of the requested unit ids (%s) present ' 'in nse file %s (contains unit_list %s)' % ( unit_list, filename_nse, np.unique(cell_numbers))) # extracting spikes unit-wise and generate spiketrains for unit_i in unit_list: if not lazy: # Extract all time stamps of that neuron on that electrode unit_mask = np.where(cell_numbers == unit_i)[0] spike_times = timestamps[unit_mask] * self.nse_time_unit spike_times = spike_times - self.parameters_global['t_start'] time_mask = np.where(np.logical_and(spike_times >= t_start, spike_times < t_stop)) spike_times = spike_times[time_mask] else: spike_times = pq.Quantity([], units=self.nse_time_unit) # Create SpikeTrain object st = SpikeTrain(times=spike_times, t_start=t_start, t_stop=t_stop, sampling_rate=self.parameters_ncs[chid][ 'sampling_rate'], name="Channel %i, Unit %i" % (chid, unit_i), file_origin=filename_nse, unit_id=unit_i, channel_id=chid) if waveforms and not lazy: # Collect all waveforms of the specific unit # For computational reasons: no units, no time axis st.waveforms = data_points[unit_mask][time_mask] # TODO: Add units to waveforms (pq.uV?) and add annotation # left_sweep = x * pq.ms indicating when threshold crossing # occurred in waveform st.annotations.update(self.parameters_nse[chid]) st.annotations['electrode_id'] = chid # This annotations is necessary for automatic generation of # recordingchannels st.annotations['channel_index'] = chid seg.spiketrains.append(st) def read_ntt(self, filename_ntt, seg, lazy=False, cascade=True, t_start=None, t_stop=None, unit_list=None, waveforms=False): ''' Reads ntt file and attaches content as spike train to provided neo segment. Arguments: filename_ntt : Name of the .ntt file to be loaded. seg : Neo Segment, to which the Spiketrain containing the data will be attached. lazy : Postpone actual reading of the data. Instead provide a dummy SpikeTrain. Default 'False'. cascade : Not used in this context. Default: 'True'. t_start : time (quantity) that the SpikeTrain begins. Default None. t_stop : time (quantity) that the SpikeTrain ends. Default None. unit_list : unit ids to be loaded. If [] or None all units are loaded. Default None. waveforms : Load the waveform (up to 32 data points) for each spike time. Default: False Returns: None ''' if filename_ntt[-4:] != '.ntt': filename_ntt += '.ntt' if sep in filename_ntt: filename_ntt = filename_ntt.split(sep)[-1] # extracting channel id of requested file channel_id = self.get_channel_id_by_file_name(filename_ntt) if channel_id is not None: chid = channel_id else: # if ntt file is empty it is not listed in self.parameters_ntt, but # in self.ntt_avail if filename_ntt in self.ntt_avail: warnings.warn('NeuralynxIO is attempting to read an empty ' '(not associated) ntt file (%s). ' 'Not loading ntt file.' % (filename_ntt)) return else: raise ValueError('NeuralynxIO is attempting to read a file ' 'not associated to this session (%s).' % ( filename_ntt)) # ensure meaningful values for requested start and stop times # in case time is provided in samples: transform to absolute time units # ncs sampling rate is best guess if there is no explicit sampling # rate given for ntt values. 
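# Fallback order for the sampling rate used below: explicit value from the
# ntt text header, then the ncs header of the same channel, otherwise a
# ValueError is raised.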
if 'sampling_rate' in self.parameters_ntt[chid]: sr = self.parameters_ntt[chid]['sampling_rate'] elif chid in self.parameters_ncs and 'sampling_rate' in \ self.parameters_ncs[chid]: sr = self.parameters_ncs[chid]['sampling_rate'] else: raise ValueError( 'No sampling rate present for channel id %i in ntt file ' '%s. ' 'Could also not find the sampling rate of the respective ' 'ncs ' 'file.' % ( chid, filename_ntt)) if isinstance(t_start, int): t_start = t_start / sr if isinstance(t_stop, int): t_stop = t_stop / sr # + rescaling to global recording start (first sample in any # recording file) if t_start is None or t_start < ( self.parameters_ntt[chid]['t_first'] - self.parameters_global[ 't_start']): t_start = ( self.parameters_ntt[chid]['t_first'] - self.parameters_global[ 't_start']) if t_start > ( self.parameters_ntt[chid]['t_last'] - self.parameters_global[ 't_start']): raise ValueError( 'Requested times window (%s to %s) is later than data are ' 'recorded (t_stop = %s) ' 'for file %s.' % (t_start, t_stop, (self.parameters_ntt[chid]['t_last'] - self.parameters_global['t_start']), filename_ntt)) if t_stop is None: t_stop = (sys.maxsize) * self.ntt_time_unit if t_stop is None or t_stop > ( self.parameters_ntt[chid]['t_last'] - self.parameters_global[ 't_start']): t_stop = ( self.parameters_ntt[chid]['t_last'] - self.parameters_global[ 't_start']) if t_stop < ( self.parameters_ntt[chid]['t_first'] - self.parameters_global[ 't_start']): raise ValueError( 'Requested times window (%s to %s) is earlier than data ' 'are ' 'recorded (t_start = %s) ' 'for file %s.' % (t_start, t_stop, (self.parameters_ntt[chid]['t_first'] - self.parameters_global['t_start']), filename_ntt)) if t_start >= t_stop: raise ValueError( 'Requested start time (%s) is later than / equal to stop ' 'time ' '(%s) ' 'for file %s.' % (t_start, t_stop, filename_ntt)) # reading data [timestamps, channel_ids, cell_numbers, features, data_points] = self.__mmap_ntt_packets(filename_ntt) # TODO: When ntt available: Implement 1 RecordingChannelGroup per # Tetrode, such that each electrode gets its own recording channel # load all units available if units==[] if unit_list == [] or unit_list is None: unit_list = np.unique(cell_numbers) elif not any([u in cell_numbers for u in unit_list]): self._diagnostic_print( 'None of the requested unit ids (%s) present ' 'in ntt file %s (contains units %s)' % ( unit_list, filename_ntt, np.unique(cell_numbers))) # loading data for each unit and generating spiketrain for unit_i in unit_list: if not lazy: # Extract all time stamps of that neuron on that electrode mask = np.where(cell_numbers == unit_i)[0] spike_times = timestamps[mask] * self.ntt_time_unit spike_times = spike_times - self.parameters_global['t_start'] spike_times = spike_times[np.where( np.logical_and(spike_times >= t_start, spike_times < t_stop))] else: spike_times = pq.Quantity([], units=self.ntt_time_unit) # Create SpikeTrain object st = SpikeTrain(times=spike_times, t_start=t_start, t_stop=t_stop, sampling_rate=self.parameters_ncs[chid][ 'sampling_rate'], name="Channel %i, Unit %i" % (chid, unit_i), file_origin=filename_ntt, unit_id=unit_i, channel_id=chid) # Collect all waveforms of the specific unit if waveforms and not lazy: # For computational reasons: no units, no time axis # transposing to adhere to neo guidline, which states that # time should be in the first axis. # This is stupid and not intuitive. 
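# The list comprehension below selects the waveforms whose cell number
# matches unit_i and transposes the resulting array so that time is the
# first axis.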
st.waveforms = np.array( [data_points[t, :, :] for t in range(len(timestamps)) if cell_numbers[t] == unit_i]).transpose() # TODO: Add units to waveforms (pq.uV?) and add annotation # left_sweep = x * pq.ms indicating when threshold crossing # occurred in waveform st.annotations = self.parameters_ntt[chid] st.annotations['electrode_id'] = chid # This annotations is necessary for automatic generation of # recordingchannels st.annotations['channel_index'] = chid seg.spiketrains.append(st) # private routines # ################################################# def _associate(self, cachedir=None, usecache='hash'): """ Associates the object with a specified Neuralynx session, i.e., a combination of a .nse, .nev and .ncs files. The meta data is read into the object for future reference. Arguments: cachedir : Directory for loading and saving hashes of recording sessions and pickled meta information about files extracted during association process use_cache: method used for cache identification. Possible values: 'hash'/ 'always'/'datesize'/'never'. Default 'hash' Returns: - """ # If already associated, disassociate first if self.associated: raise IOError( "Trying to associate an already associated NeuralynxIO " "object.") # Create parameter containers # Dictionary that holds different parameters read from the .nev file self.parameters_nse = {} # List of parameter dictionaries for all potential file types self.parameters_ncs = {} self.parameters_nev = {} self.parameters_ntt = {} # combined global parameters self.parameters_global = {} # Scanning session directory for recorded files self.sessionfiles = [f for f in listdir(self.sessiondir) if isfile(os.path.join(self.sessiondir, f))] # Listing available files self.ncs_avail = [] self.nse_avail = [] self.nev_avail = [] self.ntt_avail = [] # Listing associated (=non corrupted, non empty files) self.ncs_asso = [] self.nse_asso = [] self.nev_asso = [] self.ntt_asso = [] if usecache not in ['hash', 'always', 'datesize', 'never']: raise ValueError( "Argument value of usecache '%s' is not valid. Accepted " "values are 'hash','always','datesize','never'" % usecache) if cachedir is None and usecache != 'never': raise ValueError('No cache directory provided.') # check if there are any changes of the data files -> new data check run check_files = True if usecache != 'always' else False # never # checking files if usecache=='always' if cachedir is not None and usecache != 'never': self._diagnostic_print( 'Calculating %s of session files to check for cached ' 'parameter files.' 
% usecache) cachefile = cachedir + sep + self.sessiondir.split(sep)[ -1] + '/hashkeys' if not os.path.exists(cachedir + sep + self.sessiondir.split(sep)[-1]): os.makedirs(cachedir + sep + self.sessiondir.split(sep)[-1]) if usecache == 'hash': hashes_calc = {} # calculates hash of all available files for f in self.sessionfiles: file_hash = self.hashfile(open(self.sessiondir + sep + f, 'rb'), hashlib.sha256()) hashes_calc[f] = file_hash elif usecache == 'datesize': hashes_calc = {} for f in self.sessionfiles: hashes_calc[f] = self.datesizefile( self.sessiondir + sep + f) # load hashes saved for this session in an earlier loading run if os.path.exists(cachefile): hashes_read = pickle.load(open(cachefile, 'rb')) else: hashes_read = {} # compare hashes to previously saved meta data und load meta data # if no changes occured if usecache == 'always' or all([f in hashes_calc and f in hashes_read and hashes_calc[f] == hashes_read[f] for f in self.sessionfiles]): check_files = False self._diagnostic_print( 'Using cached metadata from earlier analysis run in ' 'file ' '%s. Skipping file checks.' % cachefile) # loading saved parameters parameterfile = cachedir + sep + self.sessiondir.split(sep)[ -1] + '/parameters.cache' if os.path.exists(parameterfile): parameters_read = pickle.load(open(parameterfile, 'rb')) else: raise IOError('Inconsistent cache files.') for IOdict, dictname in [(self.parameters_global, 'global'), (self.parameters_ncs, 'ncs'), (self.parameters_nse, 'nse'), (self.parameters_nev, 'nev'), (self.parameters_ntt, 'ntt')]: IOdict.update(parameters_read[dictname]) self.nev_asso = self.parameters_nev.keys() self.ncs_asso = [val['filename'] for val in self.parameters_ncs.values()] self.nse_asso = [val['filename'] for val in self.parameters_nse.values()] self.ntt_asso = [val['filename'] for val in self.parameters_ntt.values()] for filename in self.sessionfiles: # Extracting only continuous signal files (.ncs) if filename[-4:] == '.ncs': self.ncs_avail.append(filename) elif filename[-4:] == '.nse': self.nse_avail.append(filename) elif filename[-4:] == '.nev': self.nev_avail.append(filename) elif filename[-4:] == '.ntt': self.ntt_avail.append(filename) else: self._diagnostic_print( 'Ignoring file of unknown data type %s' % filename) if check_files: self._diagnostic_print('Starting individual file checks.') # ======================================================================= # # Scan NCS files # ======================================================================= self._diagnostic_print( '\nDetected %i .ncs file(s).' % (len(self.ncs_avail))) for ncs_file in self.ncs_avail: # Loading individual NCS file and extracting parameters self._diagnostic_print("Scanning " + ncs_file + ".") # Reading file packet headers filehandle = self.__mmap_ncs_packet_headers(ncs_file) if filehandle is None: continue try: # Checking consistency of ncs file self.__ncs_packet_check(filehandle) except AssertionError: warnings.warn( 'Session file %s did not pass data packet check. ' 'This file can not be loaded.' 
% ncs_file) continue # Reading data packet header information and store them in # parameters_ncs self.__read_ncs_data_headers(filehandle, ncs_file) # Reading txt file header channel_id = self.get_channel_id_by_file_name(ncs_file) self.__read_text_header(ncs_file, self.parameters_ncs[channel_id]) # Check for invalid starting times of data packets in ncs file self.__ncs_invalid_first_sample_check(filehandle) # Check ncs file for gaps self.__ncs_gap_check(filehandle) self.ncs_asso.append(ncs_file) # ======================================================================= # # Scan NSE files # ======================================================================= # Loading individual NSE file and extracting parameters self._diagnostic_print( '\nDetected %i .nse file(s).' % (len(self.nse_avail))) for nse_file in self.nse_avail: # Loading individual NSE file and extracting parameters self._diagnostic_print('Scanning ' + nse_file + '.') # Reading file filehandle = self.__mmap_nse_packets(nse_file) if filehandle is None: continue try: # Checking consistency of nse file self.__nse_check(filehandle) except AssertionError: warnings.warn( 'Session file %s did not pass data packet check. ' 'This file can not be loaded.' % nse_file) continue # Reading header information and store them in parameters_nse self.__read_nse_data_header(filehandle, nse_file) # Reading txt file header channel_id = self.get_channel_id_by_file_name(nse_file) self.__read_text_header(nse_file, self.parameters_nse[channel_id]) # using sampling rate from txt header, as this is not saved # in data packets if 'SamplingFrequency' in self.parameters_nse[channel_id]: self.parameters_nse[channel_id]['sampling_rate'] = \ (self.parameters_nse[channel_id][ 'SamplingFrequency'] * self.nse_sr_unit) self.nse_asso.append(nse_file) # ======================================================================= # # Scan NEV files # ======================================================================= self._diagnostic_print( '\nDetected %i .nev file(s).' % (len(self.nev_avail))) for nev_file in self.nev_avail: # Loading individual NEV file and extracting parameters self._diagnostic_print('Scanning ' + nev_file + '.') # Reading file filehandle = self.__mmap_nev_file(nev_file) if filehandle is None: continue try: # Checking consistency of nev file self.__nev_check(filehandle) except AssertionError: warnings.warn( 'Session file %s did not pass data packet check. ' 'This file can not be loaded.' % nev_file) continue # Reading header information and store them in parameters_nev self.__read_nev_data_header(filehandle, nev_file) # Reading txt file header self.__read_text_header(nev_file, self.parameters_nev[nev_file]) self.nev_asso.append(nev_file) # ======================================================================= # # Scan NTT files # ======================================================================= self._diagnostic_print( '\nDetected %i .ntt file(s).' % (len(self.ntt_avail))) for ntt_file in self.ntt_avail: # Loading individual NTT file and extracting parameters self._diagnostic_print('Scanning ' + ntt_file + '.') # Reading file filehandle = self.__mmap_ntt_file(ntt_file) if filehandle is None: continue try: # Checking consistency of nev file self.__ntt_check(filehandle) except AssertionError: warnings.warn( 'Session file %s did not pass data packet check. ' 'This file can not be loaded.' 
% ntt_file) continue # Reading header information and store them in parameters_nev self.__read_ntt_data_header(filehandle, ntt_file) # Reading txt file header self.__read_ntt_text_header(ntt_file) # using sampling rate from txt header, as this is not saved # in data packets if 'SamplingFrequency' in self.parameters_ntt[channel_id]: self.parameters_ntt[channel_id]['sampling_rate'] = \ (self.parameters_ntt[channel_id][ 'SamplingFrequency'] * self.ntt_sr_unit) self.ntt_asso.append(ntt_file) # ======================================================================= # # Check consistency across files # ======================================================================= # check RECORDING_OPENED / CLOSED times (from txt header) for # different files for parameter_collection in [self.parameters_ncs, self.parameters_nse, self.parameters_nev, self.parameters_ntt]: # check recoding_closed times for specific file types if any(np.abs(np.diff([i['recording_opened'] for i in parameter_collection.values()])) > datetime.timedelta(seconds=1)): raise ValueError( 'NCS files were opened for recording with a delay ' 'greater than 0.1 second.') # check recoding_closed times for specific file types if any(np.diff([i['recording_closed'] for i in parameter_collection.values() if i['recording_closed'] is not None]) > datetime.timedelta(seconds=0.1)): raise ValueError( 'NCS files were closed after recording with a ' 'delay ' 'greater than 0.1 second.') # get maximal duration of any file in the recording parameter_collection = list(self.parameters_ncs.values()) + \ list(self.parameters_nse.values()) + \ list(self.parameters_ntt.values()) + \ list(self.parameters_nev.values()) self.parameters_global['recording_opened'] = min( [i['recording_opened'] for i in parameter_collection]) self.parameters_global['recording_closed'] = max( [i['recording_closed'] for i in parameter_collection]) # Set up GLOBAL TIMING SCHEME # ############################# for file_type, parameter_collection in [ ('ncs', self.parameters_ncs), ('nse', self.parameters_nse), ('nev', self.parameters_nev), ('ntt', self.parameters_ntt)]: # check starting times name_t1, name_t2 = ['t_start', 't_stop'] if ( file_type != 'nse' and file_type != 'ntt') \ else ['t_first', 't_last'] # checking if files of same type start at same time point if file_type != 'nse' and file_type != 'ntt' \ and len(np.unique(np.array( [i[name_t1].magnitude for i in parameter_collection.values()]))) > 1: raise ValueError( '%s files do not start at same time point.' 
% file_type) # saving t_start and t_stop for each file type available if len([i[name_t1] for i in parameter_collection.values()]): self.parameters_global['%s_t_start' % file_type] = min( [i[name_t1] for i in parameter_collection.values()]) self.parameters_global['%s_t_stop' % file_type] = min( [i[name_t2] for i in parameter_collection.values()]) # extracting minimial t_start and maximal t_stop value for this # recording session self.parameters_global['t_start'] = min( [self.parameters_global['%s_t_start' % t] for t in ['ncs', 'nev', 'nse', 'ntt'] if '%s_t_start' % t in self.parameters_global]) self.parameters_global['t_stop'] = max( [self.parameters_global['%s_t_stop' % t] for t in ['ncs', 'nev', 'nse', 'ntt'] if '%s_t_start' % t in self.parameters_global]) # checking gap consistency across ncs files # check number of gaps detected if len(np.unique([len(i['gaps']) for i in self.parameters_ncs.values()])) != 1: raise ValueError('NCS files contain different numbers of gaps!') # check consistency of gaps across files and create global gap # collection self.parameters_global['gaps'] = [] for g in range(len(list(self.parameters_ncs.values())[0]['gaps'])): integrated = False gap_stats = np.unique( [i['gaps'][g] for i in self.parameters_ncs.values()], return_counts=True) if len(gap_stats[0]) != 3 or len(np.unique(gap_stats[1])) != 1: raise ValueError( 'Gap number %i is not consistent across NCS ' 'files.' % ( g)) else: # check if this is second part of already existing gap for gg in range(len(self.parameters_global['gaps'])): globalgap = self.parameters_global['gaps'][gg] # check if stop time of first is start time of second # -> continuous gap if globalgap[2] == \ list(self.parameters_ncs.values())[0]['gaps'][ g][1]: self.parameters_global['gaps'][gg] = \ self.parameters_global['gaps'][gg][:2] + ( list(self.parameters_ncs.values())[0][ 'gaps'][g][ 2],) integrated = True break if not integrated: # add as new gap if this is not a continuation of # existing global gap self.parameters_global['gaps'].append( list(self.parameters_ncs.values())[0][ 'gaps'][g]) # save results of association for future analysis together with hash # values for change tracking if cachedir is not None and usecache != 'never': pickle.dump({'global': self.parameters_global, 'ncs': self.parameters_ncs, 'nev': self.parameters_nev, 'nse': self.parameters_nse, 'ntt': self.parameters_ntt}, open(cachedir + sep + self.sessiondir.split(sep)[ -1] + '/parameters.cache', 'wb')) if usecache != 'always': pickle.dump(hashes_calc, open( cachedir + sep + self.sessiondir.split(sep)[ -1] + '/hashkeys', 'wb')) self.associated = True # private routines # #########################################################� # Memory Mapping Methods def __mmap_nse_packets(self, filename): """ Memory map of the Neuralynx .ncs file optimized for extraction of data packet headers Reading standard dtype improves speed, but timestamps need to be reconstructed """ filesize = getsize(self.sessiondir + sep + filename) # in byte if filesize > 16384: data = np.memmap(self.sessiondir + sep + filename, dtype=' timestamp in microsec timestamps = data[:, 0] \ + data[:, 1] * 2 ** 16 \ + data[:, 2] * 2 ** 32 \ + data[:, 3] * 2 ** 48 channel_id = data[:, 4] + data[:, 5] * 2 ** 16 cell_number = data[:, 6] + data[:, 7] * 2 ** 16 features = [data[:, p] + data[:, p + 1] * 2 ** 16 for p in range(8, 23, 2)] features = np.array(features, dtype='i4') data_points = data[:, 24:56].astype('i2') del data return timestamps, channel_id, cell_number, features, data_points else: return None 
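# Note on the packet layout mapped above: the 64-bit timestamp of each
# record is stored as four little-endian 16-bit words and is recombined as
#     timestamp = w0 + w1 * 2**16 + w2 * 2**32 + w3 * 2**48
# where w0..w3 stand for data[:, 0] to data[:, 3] (illustrative names only,
# not identifiers used in this module).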
def __mmap_ncs_data(self, filename): """ Memory map of the Neuralynx .ncs file optimized for data extraction""" if getsize(self.sessiondir + sep + filename) > 16384: data = np.memmap(self.sessiondir + sep + filename, dtype=np.dtype(('i2', (522))), mode='r', offset=16384) # removing data packet headers and flattening data return data[:, 10:] else: return None def __mmap_ncs_packet_headers(self, filename): """ Memory map of the Neuralynx .ncs file optimized for extraction of data packet headers Reading standard dtype improves speed, but timestamps need to be reconstructed """ filesize = getsize(self.sessiondir + sep + filename) # in byte if filesize > 16384: data = np.memmap(self.sessiondir + sep + filename, dtype=' 16384: data = np.memmap(self.sessiondir + sep + filename, dtype=' 16384: return np.memmap(self.sessiondir + sep + filename, dtype=nev_dtype, mode='r', offset=16384) else: return None def __mmap_ntt_file(self, filename): """ Memory map the Neuralynx .nse file """ nse_dtype = np.dtype([ ('timestamp', ' 16384: return np.memmap(self.sessiondir + sep + filename, dtype=nse_dtype, mode='r', offset=16384) else: return None def __mmap_ntt_packets(self, filename): """ Memory map of the Neuralynx .ncs file optimized for extraction of data packet headers Reading standard dtype improves speed, but timestamps need to be reconstructed """ filesize = getsize(self.sessiondir + sep + filename) # in byte if filesize > 16384: data = np.memmap(self.sessiondir + sep + filename, dtype=' timestamp in microsec timestamps = data[:, 0] + data[:, 1] * 2 ** 16 + \ data[:, 2] * 2 ** 32 + data[:, 3] * 2 ** 48 channel_id = data[:, 4] + data[:, 5] * 2 ** 16 cell_number = data[:, 6] + data[:, 7] * 2 ** 16 features = [data[:, p] + data[:, p + 1] * 2 ** 16 for p in range(8, 23, 2)] features = np.array(features, dtype='i4') data_points = data[:, 24:152].astype('i2').reshape((4, 32)) del data return timestamps, channel_id, cell_number, features, data_points else: return None # ___________________________ header extraction __________________________ def __read_text_header(self, filename, parameter_dict): # Reading main file header (plain text, 16kB) text_header = codecs.open(self.sessiondir + sep + filename, 'r', 'latin-1').read(16384) # necessary text encoding depends on Python version if sys.version_info.major < 3: text_header = text_header.encode('latin-1') parameter_dict['cheetah_version'] = \ self.__get_cheetah_version_from_txt_header(text_header, filename) parameter_dict.update(self.__get_filename_and_times_from_txt_header( text_header, parameter_dict['cheetah_version'])) # separating lines of header and ignoring last line (fill), check if # Linux or Windows OS if sep == '/': text_header = text_header.split('\r\n')[:-1] if sep == '\\': text_header = text_header.split('\n')[:-1] # minor parameters possibly saved in header (for any file type) minor_keys = ['AcqEntName', 'FileType', 'FileVersion', 'RecordSize', 'HardwareSubSystemName', 'HardwareSubSystemType', 'SamplingFrequency', 'ADMaxValue', 'ADBitVolts', 'NumADChannels', 'ADChannel', 'InputRange', 'InputInverted', 'DSPLowCutFilterEnabled', 'DspLowCutFrequency', 'DspLowCutNumTaps', 'DspLowCutFilterType', 'DSPHighCutFilterEnabled', 'DspHighCutFrequency', 'DspHighCutNumTaps', 'DspHighCutFilterType', 'DspDelayCompensation', 'DspFilterDelay_\xb5s', 'DisabledSubChannels', 'WaveformLength', 'AlignmentPt', 'ThreshVal', 'MinRetriggerSamples', 'SpikeRetriggerTime', 'DualThresholding', 'Feature Peak 0', 'Feature Valley 1', 'Feature Energy 2', 'Feature Height 3', 'Feature 
NthSample 4', 'Feature NthSample 5', 'Feature NthSample 6', 'Feature NthSample 7', 'SessionUUID', 'FileUUID', 'CheetahRev', 'ProbeName', 'OriginalFileName', 'TimeCreated', 'TimeClosed', 'ApplicationName', 'AcquisitionSystem', 'ReferenceChannel'] # extracting minor key values of header (only taking into account # non-empty lines) for i, minor_entry in enumerate(text_header): if minor_entry == '' or minor_entry[0] == '#': continue matching_key = [key for key in minor_keys if minor_entry.strip('-').startswith(key)] if len(matching_key) == 1: matching_key = matching_key[0] minor_value = minor_entry.split(matching_key)[1].strip( ' ').rstrip(' ') # determine data type of entry if minor_value.isdigit(): # converting to int if possible minor_value = int(minor_value) else: # converting to float if possible try: minor_value = float(minor_value) except: pass if matching_key in parameter_dict: warnings.warn( 'Multiple entries for %s in text header of %s' % ( matching_key, filename)) else: parameter_dict[matching_key] = minor_value elif len(matching_key) > 1: raise ValueError( 'Inconsistent minor key list for text header ' 'interpretation.') else: warnings.warn( 'Skipping text header entry %s, because it is not in ' 'minor key list' % minor_entry) self._diagnostic_print( 'Successfully decoded text header of file (%s).' % filename) def __get_cheetah_version_from_txt_header(self, text_header, filename): version_regex = re.compile(r'((-CheetahRev )|' r'(ApplicationName Cheetah "))' r'(?P\d{1,3}\.\d{1,3}\.\d{1,3})') match = version_regex.search(text_header) if match: return match.groupdict()['version'] else: raise ValueError('Can not extract Cheetah version from file ' 'header of file %s' % filename) def __get_filename_and_times_from_txt_header(self, text_header, version): if parse_version(version) <= parse_version('5.6.4'): datetime1_regex = re.compile(r'## Time Opened \(m/d/y\): ' r'(?P\S+)' r' \(h:m:s\.ms\) ' r'(?P