pax_global_header00006660000000000000000000000064116745177700014531gustar00rootroot0000000000000052 comment=3ae85dd633ad101c7923375ae1e49eab1a7947c1 brian-1.3.1/000077500000000000000000000000001167451777000126265ustar00rootroot00000000000000brian-1.3.1/.svnauthors000066400000000000000000000007361167451777000150510ustar00rootroot00000000000000thesamovar = Dan Goodman romainbrette = Romain Brette crossant = Cyrille Rossant bfontaine = Bertrand Fontaine bgour = Boris Gourevitch emuller = Eilif Muller vbenich = Victor Benichoux jlaudan = Jonathan Laudanski mstimberg = Marcel Stimberg brian-1.3.1/MANIFEST.in000066400000000000000000000000741167451777000143650ustar00rootroot00000000000000recursive-include brian *.py *.c *.h *.i *.cpp *.cxx *.bat brian-1.3.1/README.txt000066400000000000000000000316711167451777000143340ustar00rootroot00000000000000============ B R I A N ============================= A clock-driven simulator for spiking neural networks ==================================================== Version: 1.3.1 Authors: Romain Brette http://audition.ens.fr/brette/ Dan Goodman http://thesamovar.net/neuroscience Team: Cyrille Rossant http://cyrille.rossant.net/ Bertrand Fontaine http://lpp.psycho.univ-paris5.fr/person.php?name=BertrandF Victor Benichoux Marcel Stimberg Jonathan Laudanski Boris Gourevitch http://pi314.net/ ==== Installation ========================================================== Requirements: Python (version 2.5-7), the following modules: * numpy * scipy (preferably 0.7 or later) * pylab Windows: run the installer exe file Others: run 'python setup.py install' from the download folder. ==== Extras ================================================================ Included in the extras download are: docs Documentation for Brian including tutorials. examples Examples of using Brian, these serve as supplementary documentation. tutorials Fully worked through tutorials on using Brian. These can be read through in the documentation too. ==== Usage and Documentation =============================================== See the documentation in the extras download, or online: http://www.briansimulator.org/docs ==== Changes =============================================================== Version 1.3.0 to 1.3.1 ---------------------- Minor features: * New PoissonInput class * New auditory model: TanCarney (brian.hears) * Many more examples from papers * New electrode compensation module (in library.electrophysiology) * New trace analysis module (in library.electrophysiology) * Added new brian.tools.taskfarm.run_tasks function to use multiple CPUs to perform multiple runs of a simulation and save results to a DataManager, with an optional GUI interface. * Added FractionalDelay filterbank to brian.hears, fractional itds to HeadlessDatabase and fractional shifts to Sound.shifted. * Added vowel function to brian.hears for creating artificial vowel sounds * New spike_triggered_average function * Added maxlevel and atmaxlevel to Sound * New IRNS/IRNO noise functions Improvements: * SpikeGeneratorGroup is much faster. * Added RemoteControlClient.set(var, name) to allow sending data to the server from the client (previously you could only receive data from the server but not send it, except in string form). 
* Monitors do not process empty spike arrays when there have not been any spikes, increases speed for monitored networks with sparse firing (#78) * Various speed optimisations Bug fixes: * Fixed bug with frozen equations and time variable in equations * Fixed bug with loading sounds using Sound('filename.wav') * SpikeMonitor now clears spiketimes correctly on reinit (#75) * MultiConnection now propagates reinit (important for monitors) (#76) * Fixed bug in realtime plotting * Fixed various bugs in Sound * Fixed bugs in STDP * Bow propagates spikes only if spikes exist (#78) Version 1.2.1 to 1.3.0 ---------------------- Major features: * Added Brian.hears auditory library Minor features: * Added new brian.tools.datamanager.DataManager, moved from brian.experimental * reinit(states=False) will now not reset NeuronGroup state variables to 0. * modelfitting now has support for refractoriness * New examples in misc: after_potential, non_reliability, reliability, van_rossum_metric, remotecontrolserver, remotecontrolclient * New experimental.neuromorphic package * Van Rossum metric added Improvements: * SpikeGeneratorGroup is faster for large number of events ("gather" option). * Speed improvement for pure Python version of sparse matrix preparation * Speed improvements for spike propagation weave code (50-100% faster). * Clocks have been changed and should now behave more predictably. In addition, you can now specify an order attribute for clocks. * modelfitting is now based on playdoh 0.3 * modelfitting can now use euler/exp.euler or RK2 integration schemes * Loading AER data is much faster * Freezing now uses higher precision (used to only use 12sf now uses 17sf) Bug fixes: * Bug in STDP with small values for wmin/wmax fixed (ticket #63) * Equations/aliases now work correctly in STDP (ticket #56) * Bug in sparse matrices introduced in scipy 0.8.0 fixed * Bug in TimedArray when dt keyword is used now fixed (thanks to Adrien Wohrer for pointing out the bug). * Units now work correctly in STDP (ticket #60) * STDP raises an error if operations are reordered (ticket #57) * linked_var works with static vars (equations) (ticket #68) * Changing clock.t during a run won't end the run * Fixed ticket #66 (unit bug) * Fixed ticket #64 (bug with freeze) * Can now run a network with no group * Exception handling now works properly for C version of circiular spike container * ccircular now builds correctly on linux and 64 bit Internal changes: * brian.connection deprecated and replaced by subpackage brian.connections, making the code structure much more straightforward and setting up for future work on code generation, etc. Version 1.2.0 to 1.2.1 ---------------------- Major features: * New remote controlling of running Brian scripts via RemoteControlServer and RemoteControlClient. Minor features: * New module tools.io * weight and sparseness can now both be functions in connect_random * New StateHistogramMonitor object * clear now has a new keyword all which allows you to destroy all Brian objects regardless of whether or not they would be found by MagicNetwork. In addition, garbage collection is called after a clear. * New method StateMonitor.insert_spikes to have spikes on voltage traces. Improvements * The sparseness keyword in connect_random can be a function * Added 'wmin' to STDP * You can now access STDP internal variables, e.g. stdp.A_pre, and monitor them by doing e.g. 
StateMonitor(stdp.pre_group, 'A_pre') * STDP now supports nonlinear equations and parameters * refractory can now be a vector (see docstring for NeuronGroup) for constant resets. * modelfitting now uses playdoh library * C++ compiled code is now much faster thanks to adding -ffast-math switch to gcc, and there is an option which allows you to set your own compiler switches, for example -march=native on gcc 4.2+. * SpikeGeneratorGroup now has a spiketimes attribute to reset the list of spike times. * StateMonitor now caches values in an array, improving speed for M[i] operation and resolving ticket #53 Bug fixes * Sparse matrices with some versions of scipy * Weave now works on 64 bit platforms with 64 bit Python * Fixed bug introduced in 1.2.0 where dense DelayConnection structures would not propagate any spikes * Fixed bug where connect* functions on DelayConnection didn't work with subgroups but only with the whole group. * Fixed bug with linked_var from subgroups not working * Fixed bug with adding Equations objects together using a shared base equation (ticket #9 on the trac) * unit_checking=False now works (didn't do anything before) * Fixed bug with using Equations object twice (for two different NeuronGroups) * Fixed unit checking bug and ZeroDivisionError (ticket #38) * Fixed rare problems with spikes being lost due to wrong size of SpikeContainer, it now dynamically adapts to the number of spikes. * Fixed ticket #5, ionic_currents did not work with units off * Fixed ticket #6, Current+MembraneEquation now works * Fixed bug in modelfitting : the fitness was not computed right with CPUs. * Fixed bug in modelfitting with random seeds on Unix systems. * brian.hears.filtering now works correctly on 64 bit systems Removed features * Model has now been removed from Brian (it was deprecated in 1.1). Version 1.1.3 to 1.2.0 ---------------------- Major features: * Model fitting toolbox (library.modelfitting) Minor features: * New real-time ``refresh=`` options added to plotting functions * Gamma factor in utils.statistics * New RegularClock object * Added brian_sample_run function to test installation in place of nose tests Improvements: * Speed improvements to monitors and plotting functions * Sparse matrix support improved, should work with scipy versions up to 0.7.1 * Various improvements to brian.hears (still experimental though) * Parameters now picklable * Made Equations picklable Bug fixes: * Fixed major bug with subgroups and connections (announced on webpage) * Fixed major bug with multiple clocks (announced on webpage) * No warnings with Python 2.6 * Minor bugfix to TimedArray caused by floating point comparisons * Bugfix: refractory neurons could fire in very extreme circumstances * Fixed bug with DelayConnection not setting max_delay * Fixed bug with STP * Fixed bug with weight=lambda i,j:rand() New examples: * New multiprocessing examples * Added polychronisation example * Added modelfitting examples * Added examples of TimedArray and linked_var * Added examples of using derived classes with Brian * Realtime plotting example Version 1.1.2 to 1.1.3 ---------------------- * STDP now works with DelayConnection * Added EventClock * Added RecentStateMonitor * Added colormap option to StateMonitor.plot * Added timed array module, see TimedArray class for details. 
* Added optional progress reporting to run() * New recall() function (converse to forget()) * Added progress reporting module (brian.utils.progressreporting) * Added SpikeMonitor.spiketimes * Added developer's guide to docs * Early version of brian.hears subpackage for auditory modelling * Various bug fixes Version 1.1.1 to 1.1.2 ---------------------- * Standard functions rand() and randn() can now be used in string resets. * New forget() function. * Major bugfix for STP Version 1.1.0 to 1.1.1 ---------------------- * New statistical function: vector_strength * Bugfix for one line string thresholds/resets Version 1.0.0 to 1.1.0 ---------------------- * STDP * Short-term plasticity (Tsodyks-Markram model) * New DelayConnection for heterogeneous delays * New code for Connections, including new 'dynamic' connection matrix type * Reset and threshold can be specified with strings (Python expressions) * Much improved documentation * clear() function added for ipython users * Simplified initialisation of Connection objects * Optional unit checking in NeuronGroup * Spike train statistics (utils.statistics) * Miscellaneous optimisations * New MultiStateMonitor class * New Group, MultiGroup objects (for convenience of people writing extensions mostly) * Improved contained_objects protocol with ObjectContainer class in brian.base * UserComputed* classes removed for this version (they will return in another form). Version 1.0.0 RC5 to version 1.0.0 ---------------------------------- * 2nd order Runge-Kutta method (use order=2) * Quantity arrays are disabled (units only for scalars) * brian_global_config added * UserComputedConnectionMatrix and UserComputedSparseConnectionMatrix * SimpleCustomRefractoriness, CustomRefractoriness Version 1.0.0 RC4 to version 1.0.0 RC5 -------------------------------------- * Bugfix of sparse matrix problems * Compiled version of spike propagation (much faster for networks with lots of spikes) * Assorted small improvements Version 1.0.0 RC3 to version 1.0.0 RC4 -------------------------------------- * Added StateSpikeMonitor * Changed QuantityArray behaviour to work better with numpy, scipy and pylab Version 1.0.0 RC2 to version 1.0.0 RC3 -------------------------------------- * Small bugfixes Version 1.0.0 RC1 to version 1.0.0 RC2 -------------------------------------- * Documentation system now much better, using Sphinx, includes cross references, index, etc. * Added VariableReset * Added run_all_tests() * numpywrappers module added, but not in global namespace * Quantity comparison to zero doesn't check units (positivity/negativity) Version 1.0.0 beta to version 1.0.0 RC1 --------------------------------------- * Connection: connect_full allows a functional weight argument (like connect_random) * Short-term plasticity: In Connection: 'modulation' argument allows modulating weights by a state variable from the source group (see examples). * HomogeneousCorrelatedSpikeTrains: input spike trains with exponential correlations. 
* Network.stop(): stops the simulation (can be called by a user script) * PopulationRateMonitor: smooth_rate method * Optimisation of Euler code: use compile=True when initialising NeuronGroup * More examples * Pickling now works (saving and loading states) * dot(a,b) now works correctly with qarray's with homogeneous units * Parallel simulations using Parallel Python (independent simulations only) * Example of html inferfaces to Brian scripts using CherryPy * Time dependence in equations (see phase_locking example) * SpikeCounter and PopulationSpikeCounter brian-1.3.1/brian/000077500000000000000000000000001167451777000137215ustar00rootroot00000000000000brian-1.3.1/brian/__init__.py000066400000000000000000000161521167451777000160370ustar00rootroot00000000000000# ---------------------------------------------------------------------------------- # Copyright ENS, INRIA, CNRS # Contributors: Romain Brette (brette@di.ens.fr) and Dan Goodman (goodman@di.ens.fr) # # Brian is a computer program whose purpose is to simulate models # of biological neural networks. # # This software is governed by the CeCILL license under French law and # abiding by the rules of distribution of free software. You can use, # modify and/ or redistribute the software under the terms of the CeCILL # license as circulated by CEA, CNRS and INRIA at the following URL # "http://www.cecill.info". # # As a counterpart to the access to the source code and rights to copy, # modify and redistribute granted by the license, users are provided only # with a limited warranty and the software's author, the holder of the # economic rights, and the successive licensors have only limited # liability. # # In this respect, the user's attention is drawn to the risks associated # with loading, using, modifying and/or developing or reproducing the # software by the user in light of its specific status of free software, # that may mean that it is complicated to manipulate, and that also # therefore means that it is reserved for developers and experienced # professionals having in-depth computer knowledge. Users are therefore # encouraged to load and test the software's suitability as regards their # requirements in conditions enabling the security of their systems and/or # data to be ensured and, more generally, to use and operate it in the # same conditions as regards security. # # The fact that you are presently reading this means that you have had # knowledge of the CeCILL license and that you accept its terms. # ---------------------------------------------------------------------------------- # """ Brian """ __docformat__ = "restructuredtext en" import warnings as _warnings from scipy import * try: from pylab import * except: _warnings.warn("Couldn't import pylab.") if 'x' in globals(): del x # for some reason x is defined as 'symlog' by pylab! 
if 'f' in globals(): del f from clock import * from connections import * from directcontrol import * from stateupdater import * from monitor import * from network import * from neurongroup import * from plotting import * from reset import * from threshold import * from units import * from tools import * from equations import * from globalprefs import * from unitsafefunctions import * from stdunits import * from membrane_equations import * from compartments import * from log import * from magic import * from stdp import * from stp import * from timedarray import * from tests.simpletest import * __version__ = '1.3.1' __release_date__ = '2011-12-16' ### Define global preferences which are not defined anywhere else define_global_preference( 'useweave', 'False', desc=""" Defines whether or not functions should use inlined compiled C code where defined. Requires a compatible C++ compiler. The ``gcc`` and ``g++`` compilers are probably the easiest option (use Cygwin on Windows machines). See also the ``weavecompiler`` global preference. """) set_global_preferences(useweave=False) define_global_preference( 'weavecompiler', 'gcc', desc=''' Defines the compiler to use for weave compilation. On Windows machines, installing Cygwin is the easiest way to get access to the gcc compiler. ''') set_global_preferences(weavecompiler='gcc') define_global_preference( 'gcc_options', "['-ffast-math']", desc=''' Defines the compiler switches passed to the gcc compiler. For gcc versions 4.2+ we recommend using ``-march=native``. By default, the ``-ffast-math`` optimisations are turned on - if you need IEEE guaranteed results, turn this switch off. ''') set_global_preferences(gcc_options=['-ffast-math']) define_global_preference( 'usecodegen', 'False', desc=''' Whether or not to use experimental code generation support. ''') set_global_preferences(usecodegen=False) define_global_preference( 'usecodegenweave', 'False', desc=''' Whether or not to use C with experimental code generation support. ''') set_global_preferences(usecodegenweave=False) define_global_preference( 'usecodegenstateupdate', 'True', desc=''' Whether or not to use experimental code generation support on state updaters. ''') set_global_preferences(usecodegenstateupdate=True) define_global_preference( 'usecodegenreset', 'False', desc=''' Whether or not to use experimental code generation support on resets. Typically slower due to weave overheads, so usually leave this off. ''') set_global_preferences(usecodegenreset=False) define_global_preference( 'usecodegenthreshold', 'True', desc=''' Whether or not to use experimental code generation support on thresholds. ''') set_global_preferences(usecodegenthreshold=True) define_global_preference( 'usenewpropagate', 'False', desc=''' Whether or not to use experimental new C propagation functions. ''') set_global_preferences(usenewpropagate=False) define_global_preference( 'usecstdp', 'False', desc=''' Whether or not to use experimental new C STDP. ''') set_global_preferences(usecstdp=False) define_global_preference( 'brianhears_usegpu', 'False', desc=''' Whether or not to use the GPU (if available) in Brian.hears. Support is experimental at the moment, and requires the PyCUDA package to be installed. 
''') set_global_preferences(brianhears_usegpu=False) # check if we were run from a file or some other source, and set the default # behaviour for magic functions accordingly import inspect as _inspect import os as _os _of = _inspect.getouterframes(_inspect.currentframe()) if len(_of) > 1 and _os.path.exists(_of[1][1]): _magic_useframes = True else: _magic_useframes = False define_global_preference( 'magic_useframes', str(_magic_useframes), desc=""" Defines whether or not the magic functions should search for objects defined only in the calling frame or if they should find all objects defined in any frame. This should be set to ``False`` if you are using Brian from an interactive shell like IDLE or IPython where each command has its own frame, otherwise set it to ``True``. """) set_global_preferences(magic_useframes=_magic_useframes) ### Update documentation for global preferences import globalprefs as _gp _gp.__doc__ += _gp.globalprefdocs try: import brian_global_config except ImportError: pass try: import nose @nose.tools.nottest def run_all_tests(): import tests tests.go() except ImportError: def run_all_tests(): print "Brian test framework requires 'nose' package." brian-1.3.1/brian/base.py000066400000000000000000000032321167451777000152050ustar00rootroot00000000000000''' Various base classes for Brian ''' __all__ = ['ObjectContainer'] class ObjectContainer(object): ''' Implements the contained_objects protocol The object contains an attribute _contained_objects and a property contained_objects whose getter just returns _contained_objects or an empty list, and whose setter appends _contained_objects with the objects. This makes classes which set the value of contained_objects without thinking about things like inheritance work correctly. You can still directly manipulate _contained_objects or do something like:: co = obj.contained_objects co[:] = [] ''' def get_contained_objects(self): if hasattr(self, '_contained_objects'): return self._contained_objects self._contained_objects = [] return self._contained_objects def set_contained_objects(self, newobjs): self._contained_objects = self.get_contained_objects() self._contained_objects.extend(newobjs) contained_objects = property(fget=get_contained_objects, fset=set_contained_objects) if __name__ == '__main__': from brian import * class A(NetworkOperation): def __init__(self): x = NetworkOperation(lambda:None) print 'A:', id(x) self.contained_objects = [x] class B(A): def __init__(self): super(B, self).__init__() x = NetworkOperation(lambda:None) print 'B:', id(x) self.contained_objects = [x] b = B() print map(id, b.contained_objects) brian-1.3.1/brian/clock.py000066400000000000000000000347641167451777000154040ustar00rootroot00000000000000# ---------------------------------------------------------------------------------- # Copyright ENS, INRIA, CNRS # Contributors: Romain Brette (brette@di.ens.fr) and Dan Goodman (goodman@di.ens.fr) # # Brian is a computer program whose purpose is to simulate models # of biological neural networks. # # This software is governed by the CeCILL license under French law and # abiding by the rules of distribution of free software. You can use, # modify and/ or redistribute the software under the terms of the CeCILL # license as circulated by CEA, CNRS and INRIA at the following URL # "http://www.cecill.info". 
# # As a counterpart to the access to the source code and rights to copy, # modify and redistribute granted by the license, users are provided only # with a limited warranty and the software's author, the holder of the # economic rights, and the successive licensors have only limited # liability. # # In this respect, the user's attention is drawn to the risks associated # with loading, using, modifying and/or developing or reproducing the # software by the user in light of its specific status of free software, # that may mean that it is complicated to manipulate, and that also # therefore means that it is reserved for developers and experienced # professionals having in-depth computer knowledge. Users are therefore # encouraged to load and test the software's suitability as regards their # requirements in conditions enabling the security of their systems and/or # data to be ensured and, more generally, to use and operate it in the # same conditions as regards security. # # The fact that you are presently reading this means that you have had # knowledge of the CeCILL license and that you accept its terms. # ---------------------------------------------------------------------------------- # """ Clocks for the simulator """ __docformat__ = "restructuredtext en" __all__ = ['Clock', 'defaultclock', 'guess_clock', 'define_default_clock', 'reinit_default_clock', 'get_default_clock', 'EventClock', 'RegularClock', 'FloatClock', 'NaiveClock'] from inspect import stack from units import * from globalprefs import * import magic from time import time from numpy import ceil class Clock(magic.InstanceTracker): ''' An object that holds the simulation time and the time step. Initialisation arguments: ``dt`` The time step of the simulation. ``t`` The current time of the clock. ``order`` If two clocks have the same time, the order of the clock is used to resolve which clock is processed first, lower orders first. ``makedefaultclock`` Set to ``True`` to make this clock the default clock. The times returned by this clock are always off the form ``n*dt+offset`` for integer ``n`` and float ``dt`` and ``offset``. For example, for a clock with ``dt=10*ms``, setting ``t=25*ms`` will set ``n=2`` and ``offset=5*ms``. For a clock that uses true float values for ``t`` rather than underlying integers, use :class:`FloatClock` (although see the caveats there). In order to make sure that certain operations happen in the correct sequence, you can use the ``order`` attribute, clocks with a lower order will be processed first if the time is the same. The condition for two clocks to be considered as having the same time is ``abs(t1-t2) 1: # several clocks: ambiguous # What type of error? raise TypeError("Clock is ambiguous. Please specify it explicitly.") if len(clocks) == 1: return clocks[0] # Fall back on default clock if exists_global_preference('defaultclock'): return get_global_preference('defaultclock') # No clock found raise TypeError("No clock found. Please define a clock.") class EventClock(Clock): ''' Clock that is used for events. Works the same as a :class:`Clock` except that it is never guessed as a clock to use by :class:`NeuronGroup`, etc. These clocks can be used to make multiple clock simulations without causing ambiguous clock problems. 
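    A minimal illustrative sketch (the ``dt`` values are arbitrary)::

        simclock = Clock(dt=0.1*ms)      # found by guess_clock/NeuronGroup etc.
        monclock = EventClock(dt=1*ms)   # never guessed, so no ambiguity

    Here only ``simclock`` is considered when a clock has to be guessed, so the
    two clocks can coexist without triggering the "Clock is ambiguous" error.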
''' @staticmethod def _track_instances(): return False # Do not track the default clock class DefaultClock(Clock): @staticmethod def _track_instances(): return False defaultclock = DefaultClock(dt=0.1 * msecond) define_global_preference( 'defaultclock', 'Clock(dt=0.1*msecond)', desc=""" The default clock to use if none is provided or defined in any enclosing scope. """) def define_default_clock(**kwds): ''' Create a new default clock Uses the keywords of the :class:`Clock` initialiser. Sample usage:: define_default_clock(dt=1*ms) ''' kwds['makedefaultclock'] = True newdefaultclock = Clock(**kwds) def reinit_default_clock(t=0 * msecond): ''' Reinitialise the default clock (to zero or a specified time) ''' get_default_clock().reinit(t) def get_default_clock(): ''' Returns the default clock object. ''' return get_global_preference('defaultclock') if __name__ == '__main__': print id(guess_clock()), id(defaultclock) brian-1.3.1/brian/compartments.py000066400000000000000000000061261167451777000170140ustar00rootroot00000000000000# ---------------------------------------------------------------------------------- # Copyright ENS, INRIA, CNRS # Contributors: Romain Brette (brette@di.ens.fr) and Dan Goodman (goodman@di.ens.fr) # # Brian is a computer program whose purpose is to simulate models # of biological neural networks. # # This software is governed by the CeCILL license under French law and # abiding by the rules of distribution of free software. You can use, # modify and/ or redistribute the software under the terms of the CeCILL # license as circulated by CEA, CNRS and INRIA at the following URL # "http://www.cecill.info". # # As a counterpart to the access to the source code and rights to copy, # modify and redistribute granted by the license, users are provided only # with a limited warranty and the software's author, the holder of the # economic rights, and the successive licensors have only limited # liability. # # In this respect, the user's attention is drawn to the risks associated # with loading, using, modifying and/or developing or reproducing the # software by the user in light of its specific status of free software, # that may mean that it is complicated to manipulate, and that also # therefore means that it is reserved for developers and experienced # professionals having in-depth computer knowledge. Users are therefore # encouraged to load and test the software's suitability as regards their # requirements in conditions enabling the security of their systems and/or # data to be ensured and, more generally, to use and operate it in the # same conditions as regards security. # # The fact that you are presently reading this means that you have had # knowledge of the CeCILL license and that you accept its terms. # ---------------------------------------------------------------------------------- # ''' Compartmental models for Brian. ''' from equations import * from membrane_equations import * from units import check_units, ohm #TODO: add MembraneEquation class Compartments(Equations): """ Creates a compartmental model from a dictionary of MembraneEquation objects. Compartments are initially unconnected. Caution: the original objects are modified. 
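    A hedged usage sketch (the compartment names and all numerical values are
    arbitrary, and ``MembraneEquation``/``leak_current`` are assumed to come
    from ``brian.membrane_equations``)::

        soma = MembraneEquation(200*pF) + leak_current(10*nS, -70*mV)
        dend = MembraneEquation(50*pF) + leak_current(2*nS, -70*mV)
        neuron = Compartments({'soma': soma, 'dend': dend})
        neuron.connect('soma', 'dend', Ra=100*Mohm)

    After ``connect``, the axial current flows between the renamed variables
    ``vm_soma`` and ``vm_dend`` and is added to both membrane equations.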
""" def __init__(self, comps): Equations.__init__(self, '') for name, eqs in comps.iteritems(): name = str(name) # Change variable names vars = eqs._units.keys() for var in vars: eqs.substitute(var, var + '_' + name) self += eqs @check_units(Ra=ohm) def connect(self, a, b, Ra): """ Connects compartment a to compartment b with axial resistance Ra. """ a, b = str(a), str(b) # Axial current from a to b Ia_name = 'Ia_' + a + '_' + b self += Equations('Ia=(va-vb)*invRa : amp', Ia=Ia_name, invRa=1. / Ra, va='vm_' + a, vb='vm_' + b) # Add the current to both compartments self._string['__membrane_Im_' + a] += '-' + Ia_name self._string['__membrane_Im_' + b] += '+' + Ia_name brian-1.3.1/brian/connection.py000066400000000000000000000005431167451777000164340ustar00rootroot00000000000000''' This module is here only to maintain backward compatibility with old code that references brian.connection rather than brian.connections. It should be removed at some point in the future ''' from connections import * import log as _log _log.log_warn('brian', 'Module "connection" in Brian is deprecated, use "brian.connections" instead.') brian-1.3.1/brian/connections/000077500000000000000000000000001167451777000162435ustar00rootroot00000000000000brian-1.3.1/brian/connections/__init__.py000066400000000000000000000003271167451777000203560ustar00rootroot00000000000000from sparsematrix import * from connectionvector import * from constructionmatrix import * from connectionmatrix import * from connection import * from delayconnection import * from otherconnections import * brian-1.3.1/brian/connections/base.py000066400000000000000000000016021167451777000175260ustar00rootroot00000000000000import copy from itertools import izip import itertools from random import sample import bisect from ..units import second, msecond, check_units, DimensionMismatchError import types from .. import magic from ..log import log_warn, log_info, log_debug from numpy import * from scipy import sparse, stats, rand, weave, linalg import scipy import scipy.sparse import numpy from numpy.random import binomial, exponential import random as pyrandom from scipy import random as scirandom from ..utils.approximatecomparisons import is_within_absolute_tolerance from ..globalprefs import get_global_preference from ..base import ObjectContainer from ..stdunits import ms from operator import isSequenceType effective_zero = 1e-40 colon_slice = slice(None, None, None) def todense(x): if hasattr(x, 'todense'): return x.todense() return array(x, copy=False) brian-1.3.1/brian/connections/connection.py000066400000000000000000000465551167451777000207730ustar00rootroot00000000000000from base import * from sparsematrix import * from connectionvector import * from constructionmatrix import * from connectionmatrix import * from construction import * from propagation_c_code import * from scipy.sparse import issparse # we do this at the bottom because of order of import issues #from delayconnection import * __all__ = ['Connection'] class Connection(magic.InstanceTracker, ObjectContainer): ''' Mechanism for propagating spikes from one group to another A Connection object declares that when spikes in a source group are generated, certain neurons in the target group should have a value added to specific states. See Tutorial 2: Connections to understand this better. With arguments: ``source`` The group from which spikes will be propagated. ``target`` The group to which spikes will be propagated. 
``state`` The state variable name or number that spikes will be propagated to in the target group. ``delay`` The delay between a spike being generated at the source and received at the target. Depending on the type of ``delay`` it has different effects. If ``delay`` is a scalar value, then the connection will be initialised with all neurons having that delay. For very long delays, this may raise an error. If ``delay=True`` then the connection will be initialised as a :class:`DelayConnection`, allowing heterogeneous delays (a different delay for each synapse). ``delay`` can also be a pair ``(min,max)`` or a function of one or two variables, in both cases it will be initialised as a :class:`DelayConnection`, see the documentation for that class for details. Note that in these cases, initialisation of delays will only have the intended effect if used with the ``weight`` and ``sparseness`` arguments below. ``max_delay`` If you are using a connection with heterogeneous delays, specify this to set the maximum allowed delay (smaller values use less memory). The default is 5ms. ``modulation`` The state variable name from the source group that scales the synaptic weights (for short-term synaptic plasticity). ``structure`` Data structure: ``sparse`` (default), ``dense`` or ``dynamic``. See below for more information on structures. ``weight`` If specified, the connection matrix will be initialised with values specified by ``weight``, which can be any of the values allowed in the methods `connect*`` below. ``sparseness`` If ``weight`` is specified and ``sparseness`` is not, a full connection is assumed, otherwise random connectivity with this level of sparseness is assumed. **Methods** ``connect_random(P,Q,p[,weight=1[,fixed=False[,seed=None]]])`` Connects each neuron in ``P`` to each neuron in ``Q`` with independent probability ``p`` and weight ``weight`` (this is the amount that gets added to the target state variable). If ``fixed`` is True, then the number of presynaptic neurons per neuron is constant. If ``seed`` is given, it is used as the seed to the random number generators, for exactly repeatable results. ``connect_full(P,Q[,weight=1])`` Connect every neuron in ``P`` to every neuron in ``Q`` with the given weight. ``connect_one_to_one(P,Q)`` If ``P`` and ``Q`` have the same number of neurons then neuron ``i`` in ``P`` will be connected to neuron ``i`` in ``Q`` with weight 1. ``connect(P,Q,W)`` You can specify a matrix of weights directly (can be in any format recognised by NumPy). Note that due to internal implementation details, passing a full matrix rather than a sparse one may slow down your code (because zeros will be propagated as well as nonzero values). **WARNING:** No unit checking is done at the moment. Additionally, you can directly access the matrix of weights by writing:: C = Connection(P,Q) print C[i,j] C[i,j] = ... Where here ``i`` is the source neuron and ``j`` is the target neuron. Note: if ``C[i,j]`` should be zero, it is more efficient not to write ``C[i,j]=0``, if you write this then when neuron ``i`` fires all the targets will have the value 0 added to them rather than just the nonzero ones. **WARNING:** No unit checking is currently done if you use this method. Take care to set the right units. **Connection matrix structures** Brian currently features three types of connection matrix structures, each of which is suited for different situations. Brian has two stages of connection matrix. The first is the construction stage, used for building a weight matrix. 
This stage is optimised for the construction of matrices, with lots of features, but would be slow for runtime behaviour. Consequently, the second stage is the connection stage, used when Brian is being run. The connection stage is optimised for run time behaviour, but many features which are useful for construction are absent (e.g. the ability to add or remove synapses). Conversion between construction and connection stages is done by the ``compress()`` method of :class:`Connection` which is called automatically when it is used for the first time. The structures are: ``dense`` A dense matrix. Allows runtime modification of all values. If connectivity is close to being dense this is probably the most efficient, but in most cases it is less efficient. In addition, a dense connection matrix will often do the wrong thing if using STDP. Because a synapse will be considered to exist but with weight 0, STDP will be able to create new synapses where there were previously none. Memory requirements are ``8NM`` bytes where ``(N,M)`` are the dimensions. (A ``double`` float value uses 8 bytes.) ``sparse`` A sparse matrix. See :class:`SparseConnectionMatrix` for details on implementation. This class features very fast row access, and slower column access if the ``column_access=True`` keyword is specified (making it suitable for learning algorithms such as STDP which require this). Memory requirements are 12 bytes per nonzero entry for row access only, or 20 bytes per nonzero entry if column access is specified. Synapses cannot be created or deleted at runtime with this class (although weights can be set to zero). ``dynamic`` A sparse matrix which allows runtime insertion and removal of synapses. See :class:`DynamicConnectionMatrix` for implementation details. This class features row and column access. The row access is slower than for ``sparse`` so this class should only be used when insertion and removal of synapses is crucial. Memory requirements are 24 bytes per nonzero entry. However, note that more memory than this may be required because memory is allocated using a dynamic array which grows by doubling its size when it runs out. If you know the maximum number of nonzero entries you will have in advance, specify the ``nnzmax`` keyword to set the initial size of the array. **Advanced information** The following methods are also defined and used internally, if you are writing your own derived connection class you need to understand what these do. ``propagate(spikes)`` Action to take when source neurons with indices in ``spikes`` fired. ``do_propagate()`` The method called by the :class:`Network` ``update()`` step, typically just propagates the spikes obtained by calling the ``get_spikes`` method of the ``source`` :class:`NeuronGroup`. 
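    **Example**

    An illustrative sketch, assuming ``P`` and ``Q`` are existing
    :class:`NeuronGroup` objects and that the target group has a state
    variable ``v`` (the weight and sparseness values are arbitrary)::

        C = Connection(P, Q, 'v', structure='sparse')
        C.connect_random(P, Q, p=0.1, weight=0.5*mV)

        # or, using the keyword-based initialisation
        C = Connection(P, Q, 'v', weight=0.5*mV, sparseness=0.1)

    Use ``structure='dynamic'`` instead if synapses will be created or removed
    during the run.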
''' #@check_units(delay=second) def __init__(self, source, target, state=0, delay=0 * msecond, modulation=None, structure='sparse', weight=None, sparseness=None, max_delay=5 * ms, **kwds): if not isinstance(delay, float): if delay is True: delay = None # this instructs us to use DelayConnection, but not initialise any delays self.__class__ = DelayConnection self.__init__(source, target, state=state, modulation=modulation, structure=structure, weight=weight, sparseness=sparseness, delay=delay, max_delay=max_delay, **kwds) return self.source = source # pointer to source group self.target = target # pointer to target group if isinstance(state, str): # named state variable self.nstate = target.get_var_index(state) else: self.nstate = state # target state index if isinstance(modulation, str): # named state variable self._nstate_mod = source.get_var_index(modulation) else: self._nstate_mod = modulation # source state index if isinstance(structure, str): structure = construction_matrix_register[structure] self.W = structure((len(source), len(target)), **kwds) self.iscompressed = False # True if compress() has been called source.set_max_delay(delay) if not isinstance(self, DelayConnection): self.delay = int(delay / source.clock.dt) # Synaptic delay in time bins self._useaccel = get_global_preference('useweave') self._cpp_compiler = get_global_preference('weavecompiler') self._extra_compile_args = ['-O3'] if self._cpp_compiler == 'gcc': self._extra_compile_args += get_global_preference('gcc_options') # ['-march=native', '-ffast-math'] self._keyword_based_init(weight=weight, sparseness=sparseness) def _keyword_based_init(self, weight=None, sparseness=None, **kwds): # Initialisation of weights # TODO: check consistency of weight and sparseness # TODO: select dense or sparse according to sparseness if weight is not None or sparseness is not None: if weight is None: weight = 1.0 if sparseness is None: sparseness = 1 if issparse(weight) or isinstance(weight, ndarray): self.connect(W=weight) elif sparseness == 1: self.connect_full(weight=weight) else: self.connect_random(weight=weight, p=sparseness) def propagate(self, spikes): if not self.iscompressed: self.compress() if len(spikes): # Target state variable sv = self.target._S[self.nstate] # If specified, modulation state variable if self._nstate_mod is not None: sv_pre = self.source._S[self._nstate_mod] # Get the rows of the connection matrix, each row will be either a # DenseConnectionVector or a SparseConnectionVector. 
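            # Each row holds the synaptic weights from one spiking source
            # neuron; the rows are accumulated onto the target state variable
            # sv below, optionally scaled by the presynaptic modulation value
            # sv_pre[i], using either the pure Python loops or the weave/C++
            # branch selected by the 'useweave' preference.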
rows = self.W.get_rows(spikes) if not self._useaccel: # Pure Python version is easier to understand, but slower than C++ version below if isinstance(rows[0], SparseConnectionVector): if self._nstate_mod is None: # Rows stored as sparse vectors without modulation for row in rows: sv[row.ind] += row else: # Rows stored as sparse vectors with modulation for i, row in izip(spikes, rows): # note we call the numpy __mul__ directly because row is # a SparseConnectionVector with different mul semantics sv[row.ind] += numpy.ndarray.__mul__(row, sv_pre[i]) else: if self._nstate_mod is None: # Rows stored as dense vectors without modulation for row in rows: sv += row else: # Rows stored as dense vectors with modulation for i, row in izip(spikes, rows): sv += numpy.ndarray.__mul__(row, sv_pre[i]) else: # C++ accelerated code, does the same as the code above but faster and less pretty nspikes = len(spikes) if isinstance(rows[0], SparseConnectionVector): rowinds = [r.ind for r in rows] datas = rows if self._nstate_mod is None: code = propagate_weave_code_sparse codevars = propagate_weave_code_sparse_vars else: code = propagate_weave_code_sparse_modulation codevars = propagate_weave_code_sparse_modulation_vars else: if not isinstance(spikes, numpy.ndarray): spikes = array(spikes, dtype=int) N = len(sv) if self._nstate_mod is None: code = propagate_weave_code_dense codevars = propagate_weave_code_dense_vars else: code = propagate_weave_code_dense_modulation codevars = propagate_weave_code_dense_modulation_vars weave.inline(code, codevars, compiler=self._cpp_compiler, #type_converters=weave.converters.blitz, extra_compile_args=self._extra_compile_args) def compress(self): if not self.iscompressed: self.W = self.W.connection_matrix() self.iscompressed = True def reinit(self): ''' Resets the variables. ''' pass def do_propagate(self): self.propagate(self.source.get_spikes(self.delay)) def origin(self, P, Q): ''' Returns the starting coordinate of the given groups in the connection matrix W. ''' return (P._origin - self.source._origin, Q._origin - self.target._origin) # TODO: rewrite all the connection functions to work row by row for memory and time efficiency # TODO: change this def connect(self, source=None, target=None, W=None): ''' Connects (sub)groups P and Q with the weight matrix W (any type). Internally: inserts W as a submatrix. TODO: checks if the submatrix has already been specified. ''' P = source or self.source Q = target or self.target i0, j0 = self.origin(P, Q) self.W[i0:i0 + len(P), j0:j0 + len(Q)] = W def connect_random(self, source=None, target=None, p=1., weight=1., fixed=False, seed=None, sparseness=None): ''' Connects the neurons in group P to neurons in group Q with probability p, with given weight (default 1). The weight can be a quantity or a function of i (in P) and j (in Q). If ``fixed`` is True, then the number of presynaptic neurons per neuron is constant. 
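        An illustrative sketch, assuming ``C`` is an existing
        :class:`Connection` from ``P`` to ``Q`` (the probability and weight
        values are arbitrary)::

            C.connect_random(P, Q, p=0.05, weight=0.5*mV)
            C.connect_random(P, Q, p=0.05, weight=lambda i, j: 0.5*mV*exp(-abs(i-j)/10.))

        The second form uses a weight function of the source and target
        indices, as described above.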
''' P = source or self.source Q = target or self.target if sparseness is not None: p = sparseness # synonym if seed is not None: numpy.random.seed(seed) # numpy's random number seed pyrandom.seed(seed) # Python's random number seed if fixed: random_matrix_function = random_matrix_fixed_column else: random_matrix_function = random_matrix if callable(weight): # Check units try: if weight.func_code.co_argcount == 2: weight(0, 0) + Q._S0[self.nstate] else: weight() + Q._S0[self.nstate] except DimensionMismatchError, inst: raise DimensionMismatchError("Incorrects unit for the synaptic weights.", *inst._dims) self.connect(P, Q, random_matrix_function(len(P), len(Q), p, value=weight)) else: # Check units try: weight + Q._S0[self.nstate] except DimensionMismatchError, inst: raise DimensionMismatchError("Incorrects unit for the synaptic weights.", *inst._dims) self.connect(P, Q, random_matrix_function(len(P), len(Q), p, value=float(weight))) def connect_full(self, source=None, target=None, weight=1.): ''' Connects the neurons in group P to all neurons in group Q, with given weight (default 1). The weight can be a quantity or a function of i (in P) and j (in Q). ''' P = source or self.source Q = target or self.target # TODO: check units if callable(weight): # Check units try: weight(0, 0) + Q._S0[self.nstate] except DimensionMismatchError, inst: raise DimensionMismatchError("Incorrects unit for the synaptic weights.", *inst._dims) W = zeros((len(P), len(Q))) try: x = weight(0, 1. * arange(0, len(Q))) failed = False if array(x).size != len(Q): failed = True except: failed = True if failed: # vector-based not possible log_debug('connections', 'Cannot build the connection matrix by rows') for i in range(len(P)): for j in range(len(Q)): w = float(weight(i, j)) #if not is_within_absolute_tolerance(w,0.,effective_zero): # for sparse matrices W[i, j] = w else: for i in range(len(P)): # build W row by row #Below: for sparse matrices (?) #w = weight(i,1.*arange(0,len(Q))) #I = (abs(w)>effective_zero).nonzero()[0] #print w, I, w[I] #W[i,I] = w[I] W[i, :] = weight(i, 1. * arange(0, len(Q))) self.connect(P, Q, W) else: try: weight + Q._S0[self.nstate] except DimensionMismatchError, inst: raise DimensionMismatchError("Incorrect unit for the synaptic weights.", *inst._dims) self.connect(P, Q, float(weight) * ones((len(P), len(Q)))) def connect_one_to_one(self, source=None, target=None, weight=1): ''' Connects source[i] to target[i] with weights 1 (or weight). ''' P = source or self.source Q = target or self.target if (len(P) != len(Q)): raise AttributeError, 'The connected (sub)groups must have the same size.' # TODO: unit checking self.connect(P, Q, float(weight) * eye_lil_matrix(len(P))) def __getitem__(self, i): return self.W.__getitem__(i) def __setitem__(self, i, x): # TODO: unit checking self.W.__setitem__(i, x) from delayconnection import * brian-1.3.1/brian/connections/connectionmatrix.py000066400000000000000000000656641167451777000222220ustar00rootroot00000000000000from base import * from sparsematrix import * from connectionvector import * __all__ = [ 'ConnectionMatrix', 'SparseConnectionMatrix', 'DenseConnectionMatrix', 'DynamicConnectionMatrix', ] class ConnectionMatrix(object): ''' Base class for connection matrix objects Connection matrix objects support a subset of the following methods: ``get_row(i)``, ``get_col(i)`` Returns row/col ``i`` as a :class:`DenseConnectionVector` or :class:`SparseConnectionVector` as appropriate for the class. 
``set_row(i, val)``, ``set_col(i, val)`` Sets row/col with an array, :class:`DenseConnectionVector` or :class:`SparseConnectionVector` (if supported). ``get_element(i, j)``, ``set_element(i, j, val)`` Gets or sets a single value. ``get_rows(rows)`` Returns a list of rows, should be implemented without Python function calls for efficiency if possible. ``get_cols(cols)`` Returns a list of cols, should be implemented without Python function calls for efficiency if possible. ``insert(i,j,x)``, ``remove(i,j)`` For sparse connection matrices which support it, insert a new entry or remove an existing one. ``getnnz()`` Return the number of nonzero entries. ``todense()`` Return the matrix as a dense array. The ``__getitem__`` and ``__setitem__`` methods are implemented by default, and automatically select the appropriate methods from the above in the cases where the item to be got or set is of the form ``:``, ``i,:``, ``:,j`` or ``i,j``. ''' # methods to be implemented by subclass def get_row(self, i): return NotImplemented def get_col(self, i): return NotImplemented def set_row(self, i, x): return NotImplemented def set_col(self, i, x): return NotImplemented def set_element(self, i, j, x): return NotImplemented def get_element(self, i, j): return NotImplemented def get_rows(self, rows): return [self.get_row(i) for i in rows] def get_cols(self, cols): return [self.get_col(i) for i in cols] def insert(self, i, j, x): return NotImplemented def remove(self, i, j): return NotImplemented def getnnz(self): return NotImplemented def todense(self): return array([todense(r) for r in self]) # we support the following indexing schemes: # - s[:] # - s[i,:] # - s[:,i] # - s[i,j] def __getitem__(self, item): if isinstance(item, tuple) and isinstance(item[0], int) and item[1] == colon_slice: return self.get_row(item[0]) if isinstance(item, slice): if item == colon_slice: return self else: raise ValueError(str(item) + ' not supported.') if isinstance(item, int): return self.get_row(item) if isinstance(item, tuple): if len(item) != 2: raise TypeError('Only 2D indexing supported.') item_i, item_j = item if isinstance(item_i, int) and isinstance(item_j, slice): if item_j == colon_slice: return self.get_row(item_i) raise ValueError('Only ":" indexing supported.') if isinstance(item_i, slice) and isinstance(item_j, int): if item_i == colon_slice: return self.get_col(item_j) raise ValueError('Only ":" indexing supported.') if isinstance(item_i, int) and isinstance(item_j, int): return self.get_element(item_i, item_j) raise TypeError('Only (i,:), (:,j), (i,j) indexing supported.') raise TypeError('Can only get items of type slice or tuple') def __setitem__(self, item, value): if isinstance(item, tuple) and isinstance(item[0], int) and item[1] == colon_slice: return self.set_row(item[0], value) if isinstance(item, slice): raise ValueError(str(item) + ' not supported.') if isinstance(item, int): return self.set_row(item, value) if isinstance(item, tuple): if len(item) != 2: raise TypeError('Only 2D indexing supported.') item_i, item_j = item if isinstance(item_i, int) and isinstance(item_j, slice): if item_j == colon_slice: return self.set_row(item_i, value) raise ValueError('Only ":" indexing supported.') if isinstance(item_i, slice) and isinstance(item_j, int): if item_i == colon_slice: return self.set_col(item_j, value) raise ValueError('Only ":" indexing supported.') if isinstance(item_i, int) and isinstance(item_j, int): return self.set_element(item_i, item_j, value) raise TypeError('Only (i,:), (:,j), (i,j) indexing 
supported.') raise TypeError('Can only set items of type slice or tuple') class DenseConnectionMatrix(ConnectionMatrix, numpy.ndarray): ''' Dense connection matrix See documentation for :class:`ConnectionMatrix` for details on connection matrix types. This matrix implements a dense connection matrix. It is just a numpy array. The ``get_row`` and ``get_col`` methods return :class:`DenseConnectionVector`` objects. ''' def __new__(subtype, data, **kwds): if 'copy' not in kwds: kwds = dict(kwds.iteritems()) kwds['copy'] = False return numpy.array(data, **kwds).view(subtype) def __init__(self, val, **kwds): # precompute rows and cols for fast returns by get_rows etc. self.rows = [DenseConnectionVector(numpy.ndarray.__getitem__(self, i)) for i in xrange(val.shape[0])] self.cols = [DenseConnectionVector(numpy.ndarray.__getitem__(self, (slice(None), i))) for i in xrange(val.shape[1])] def get_rows(self, rows): return [self.rows[i] for i in rows] def get_cols(self, cols): return [self.cols[i] for i in cols] def get_row(self, i): return self.rows[i] def get_col(self, i): return self.cols[i] def set_row(self, i, x): numpy.ndarray.__setitem__(self, i, todense(x)) def set_col(self, i, x): numpy.ndarray.__setitem__(self, (colon_slice, i), todense(x)) #self[:, i] = todense(x) def get_element(self, i, j): return numpy.ndarray.__getitem__(self, (i, j)) #return self[i,j] def set_element(self, i, j, val): numpy.ndarray.__setitem__(self, (i, j), val) #self[i,j] = val insert = set_element def remove(self, i, j): numpy.ndarray.__setitem__(self, (i, j), 0) #self[i, j] = 0 class SparseConnectionMatrix(ConnectionMatrix): ''' Sparse connection matrix See documentation for :class:`ConnectionMatrix` for details on connection matrix types. This class implements a sparse matrix with a fixed number of nonzero entries. Row access is very fast, and if the ``column_access`` keyword is ``True`` then column access is also supported (but is not as fast as row access). The matrix should be initialised with a scipy sparse matrix. The ``get_row`` and ``get_col`` methods return :class:`SparseConnectionVector` objects. In addition to the usual slicing operations supported, ``M[:]=val`` is supported, where ``val`` must be a scalar or an array of length ``nnz``. Implementation details: The values are stored in an array ``alldata`` of length ``nnz`` (number of nonzero entries). The slice ``alldata[rowind[i]:rowind[i+1]]`` gives the values for row ``i``. These slices are stored in the list ``rowdata`` so that ``rowdata[i]`` is the data for row ``i``. The array ``rowj[i]`` gives the corresponding column ``j`` indices. For row access, the memory requirements are 12 bytes per entry (8 bytes for the float value, and 4 bytes for the column indices). The array ``allj`` of length ``nnz`` gives the column ``j`` coordinates for each element in ``alldata`` (the elements of ``rowj`` are slices of this array so no extra memory is used). If column access is being used, then in addition to the above there are lists ``coli`` and ``coldataindices``. For column ``j``, the array ``coli[j]`` gives the row indices for the data values in column ``j``, while ``coldataindices[j]`` gives the indices in the array ``alldata`` for the values in column ``j``. Column access therefore involves a copy operation rather than a slice operation. Column access increases the memory requirements to 20 bytes per entry (4 extra bytes for the row indices and 4 extra bytes for the data indices). 
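    An illustrative sketch of how the storage maps onto a small matrix (the
    initialiser can be any scipy sparse matrix)::

        from scipy import sparse
        W = sparse.lil_matrix((2, 4))
        W[0, 1] = 2.0
        W[1, 3] = 5.0
        M = SparseConnectionMatrix(W, column_access=True)
        M.get_row(0)    # SparseConnectionVector with ind=[1], values [2.0]
        M.get_col(3)    # SparseConnectionVector with ind=[1], values [5.0]

    For this matrix ``alldata`` is ``[2.0, 5.0]``, ``allj`` is ``[1, 3]`` and
    ``rowind`` is ``[0, 1, 2]``.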
''' def __init__(self, val, column_access=True, **kwds): self._useaccel = get_global_preference('useweave') self._cpp_compiler = get_global_preference('weavecompiler') self._extra_compile_args = ['-O3'] if self._cpp_compiler == 'gcc': self._extra_compile_args += get_global_preference('gcc_options') # ['-march=native', '-ffast-math'] self.nnz = nnz = val.getnnz()# nnz stands for number of nonzero entries alldata = numpy.zeros(nnz) if column_access: colind = numpy.zeros(val.shape[1] + 1, dtype=int) allj = numpy.zeros(nnz, dtype=int) rowind = numpy.zeros(val.shape[0] + 1, dtype=int) rowdata = [] rowj = [] if column_access: coli = [] coldataindices = [] i = 0 # i points to the current index in the alldata array as we go through row by row for c in xrange(val.shape[0]): # extra the row values and column indices of row c of the initialising matrix # this works for any of the scipy sparse matrix formats if isinstance(val, sparse.lil_matrix): r = val.rows[c] d = val.data[c] else: sr = val[c, :] sr = sr.tolil() r = sr.rows[0] d = sr.data[0] # copy the values into the alldata array, the indices into the allj array, and # so forth rowind[c] = i alldata[i:i + len(d)] = d allj[i:i + len(r)] = r rowdata.append(alldata[i:i + len(d)]) rowj.append(allj[i:i + len(r)]) i = i + len(r) rowind[val.shape[0]] = i if column_access: # counts the number of nonzero elements in each column counts = zeros(val.shape[1], dtype=int) if len(allj): bincounts = numpy.bincount(allj) else: bincounts = numpy.array([], dtype=int) counts[:len(bincounts)] = bincounts # ensure that counts is the right length # two algorithms depending on whether weave is available if self._useaccel: # this algorithm just goes through one by one adding each # element to the appropriate bin whose sizes we have # precomputed. alldi will contain all the data indices # in blocks alldi[s[i]:s[i+1]] of length counts[i], and # curcdi[i] is the current offset into each block. s is # therefore just the cumulative sum of counts. 
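                # (Equivalently: the inlined C code below makes a single pass
                # over allj, advancing the row index i whenever k crosses a
                # rowind boundary, and scatters each entry's data index and
                # row index into the per-column block that starts at colind[j],
                # using curcdi[j] as the write offset within that block.)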
curcdi = numpy.zeros(val.shape[1], dtype=int) allcoldataindices = numpy.zeros(nnz, dtype=int) colind[:] = numpy.hstack(([0], cumsum(counts))) colalli = numpy.zeros(nnz, dtype=int) numrows = val.shape[0] code = ''' int i = 0; for(int k=0;k=rowind[i+1]) i++; int j = allj[k]; allcoldataindices[colind[j]+curcdi[j]] = k; colalli[colind[j]+curcdi[j]] = i; curcdi[j]++; } ''' weave.inline(code, ['nnz', 'allj', 'allcoldataindices', 'rowind', 'numrows', 'curcdi', 'colind', 'colalli'], compiler=self._cpp_compiler, extra_compile_args=self._extra_compile_args) # now store the blocks of allcoldataindices in coldataindices and update coli too for i in xrange(len(colind) - 1): D = allcoldataindices[colind[i]:colind[i + 1]] I = colalli[colind[i]:colind[i + 1]] coldataindices.append(D) coli.append(I) else: # now allj[a] will be the columns in order, so that # the first counts[0] elements of allj[a] will be 0, # or in other words the first counts[0] elements of a # will be the data indices of the elements (i,j) with j==0 # mergesort is necessary because we want the relative ordering # of the elements of a within a block to be maintained allcoldataindices = a = argsort(allj, kind='mergesort') # this defines colind so that a[colind[i]:colind[i+1]] are the data # indices where j==i colind[:] = numpy.hstack(([0], cumsum(counts))) # this computes the row index of each entry by first generating # expanded_row_indices which gives the corresponding row index # for each entry enumerated row-by-row, and then using the # array allcoldataindices to index this array to convert into # the corresponding row index for each entry enumerated # col-by-col. if len(a): expanded_row_indices = empty(len(a), dtype=int) for k, (i, j) in enumerate(zip(rowind[:-1], rowind[1:])): expanded_row_indices[i:j] = k colalli = expanded_row_indices[a] else: colalli = numpy.zeros(nnz, dtype=int) # in this loop, I are the data indices where j==i # and alli[I} are the corresponding i coordinates for i in xrange(len(colind) - 1): D = a[colind[i]:colind[i + 1]] I = colalli[colind[i]:colind[i + 1]] coldataindices.append(D) coli.append(I) self.alldata = alldata self.rowdata = rowdata self.allj = allj self.rowj = rowj self.rowind = rowind self.shape = val.shape self.column_access = column_access if column_access: self.colalli = colalli self.coli = coli self.coldataindices = coldataindices self.allcoldataindices = allcoldataindices self.colind = colind self.rows = [SparseConnectionVector(self.shape[1], self.rowj[i], self.rowdata[i]) for i in xrange(self.shape[0])] def getnnz(self): return self.nnz def get_element(self, i, j): n = searchsorted(self.rowj[i], j) if n >= len(self.rowj[i]) or self.rowj[i][n] != j: return 0 return self.rowdata[i][n] def set_element(self, i, j, x): n = searchsorted(self.rowj[i], j) if n >= len(self.rowj[i]) or self.rowj[i][n] != j: raise ValueError('Insertion of new elements not supported for SparseConnectionMatrix.') self.rowdata[i][n] = x def get_row(self, i): return self.rows[i] def get_rows(self, rows): return [self.rows[i] for i in rows] def get_col(self, j): if self.column_access: return SparseConnectionVector(self.shape[0], self.coli[j], self.alldata[self.coldataindices[j]]) else: raise TypeError('No column access.') def get_cols(self, cols): if self.column_access: return [SparseConnectionVector(self.shape[0], self.coli[j], self.alldata[self.coldataindices[j]]) for j in cols] else: raise TypeError('No column access.') def set_row(self, i, val): if isinstance(val, SparseConnectionVector): if val.ind is not self.rowj[i]: 
if not (val.ind == self.rowj[i]).all(): raise ValueError('Sparse row setting must use same indices.') self.rowdata[i][:] = val else: if isinstance(val, numpy.ndarray): val = asarray(val) self.rowdata[i][:] = val[self.rowj[i]] else: self.rowdata[i][:] = val def set_col(self, j, val): if self.column_access: if isinstance(val, SparseConnectionVector): if val.ind is not self.coli[j]: if not (val.ind == self.coli[j]).all(): raise ValueError('Sparse col setting must use same indices.') self.alldata[self.coldataindices[j]] = val else: if isinstance(val, numpy.ndarray): val = asarray(val) self.alldata[self.coldataindices[j]] = val[self.coli[j]] else: self.alldata[self.coldataindices[j]] = val else: raise TypeError('No column access.') def __setitem__(self, item, value): if item == colon_slice: self.alldata[:] = value else: ConnectionMatrix.__setitem__(self, item, value) class DynamicConnectionMatrix(ConnectionMatrix): ''' Dynamic (sparse) connection matrix See documentation for :class:`ConnectionMatrix` for details on connection matrix types. This class implements a sparse matrix with a variable number of nonzero entries. Row access and column access are provided, but are not as fast as for :class:`SparseConnectionMatrix`. The matrix should be initialised with a scipy sparse matrix. The ``get_row`` and ``get_col`` methods return :class:`SparseConnectionVector` objects. In addition to the usual slicing operations supported, ``M[:]=val`` is supported, where ``val`` must be a scalar or an array of length ``nnz``. **Implementation details** The values are stored in an array ``alldata`` of length ``nnzmax`` (maximum number of nonzero entries). This is a dynamic array, see: http://en.wikipedia.org/wiki/Dynamic_array You can set the resizing constant with the argument ``dynamic_array_const``. Normally the default value 2 is fine but if memory is a worry it could be made smaller. Rows and column point in to this data array, and the list ``rowj`` consists of an array of column indices for each row, with ``coli`` containing arrays of row indices for each column. Similarly, ``rowdataind`` and ``coldataind`` consist of arrays of pointers to the indices in the ``alldata`` array. 
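**Example usage** (an illustrative sketch only, using just the methods defined in this class; the module path is assumed from the surrounding package layout, and the matrix is deliberately tiny)::

    from scipy import sparse
    from brian.connections.connectionmatrix import DynamicConnectionMatrix
    W0 = sparse.lil_matrix((3, 3))
    W0[0, 1] = 0.5
    W0[2, 0] = 0.25
    W = DynamicConnectionMatrix(W0)
    W.insert(1, 2, 0.75)   # add a new nonzero entry (may trigger a resize)
    W.remove(2, 0)         # delete an existing entry
    row0 = W.get_row(0)    # a SparseConnectionVector for row 0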
''' def __init__(self, val, nnzmax=None, dynamic_array_const=2, **kwds): self.shape = val.shape self.dynamic_array_const = dynamic_array_const if nnzmax is None or nnzmax < val.getnnz(): nnzmax = val.getnnz() self.nnzmax = nnzmax self.nnz = val.getnnz() self.alldata = numpy.zeros(nnzmax) self.unusedinds = range(self.nnz, self.nnzmax) i = 0 self.rowj = [] self.rowdataind = [] alli = zeros(self.nnz, dtype=int) allj = zeros(self.nnz, dtype=int) for c in xrange(val.shape[0]): # extra the row values and column indices of row c of the initialising matrix # this works for any of the scipy sparse matrix formats if isinstance(val, sparse.lil_matrix): r = val.rows[c] d = val.data[c] else: sr = val[c, :] sr = sr.tolil() r = sr.rows[0] d = sr.data[0] self.alldata[i:i + len(d)] = d self.rowj.append(array(r, dtype=int)) self.rowdataind.append(arange(i, i + len(d))) allj[i:i + len(d)] = r alli[i:i + len(d)] = c i += len(d) # now update the coli and coldataind variables self.coli = [] self.coldataind = [] # counts the number of nonzero elements in each column if numpy.__version__ >= '1.3.0': counts = numpy.histogram(allj, numpy.arange(val.shape[1] + 1, dtype=int))[0] else: counts = numpy.histogram(allj, numpy.arange(val.shape[1] + 1, dtype=int), new=True)[0] # now we have to go through one by one unfortunately, and so we keep curcdi, the # current column data index for each column curcdi = numpy.zeros(val.shape[1], dtype=int) # initialise the memory for the column data indices for j in xrange(val.shape[1]): self.coldataind.append(numpy.zeros(counts[j], dtype=int)) # one by one for every element, update the dataindices and curcdi data pointers for i, j in enumerate(allj): self.coldataind[j][curcdi[j]] = i curcdi[j] += 1 for j in xrange(val.shape[1]): self.coli.append(alli[self.coldataind[j]]) def getnnz(self): return self.nnz def insert(self, i, j, x): n = searchsorted(self.rowj[i], j) if n < len(self.rowj[i]) and self.rowj[i][n] == j: self.alldata[self.rowdataind[i][n]] = x return m = searchsorted(self.coli[j], i) if self.nnz == self.nnzmax: # reallocate memory using a dynamic array structure (amortized O(1) cost for append) newnnzmax = int(self.nnzmax * self.dynamic_array_const) if newnnzmax <= self.nnzmax: newnnzmax += 1 if newnnzmax > self.shape[0] * self.shape[1]: newnnzmax = self.shape[0] * self.shape[1] self.alldata = hstack((self.alldata, numpy.zeros(newnnzmax - self.nnzmax, dtype=self.alldata.dtype))) self.unusedinds.extend(range(self.nnz, newnnzmax)) self.nnzmax = newnnzmax newind = self.unusedinds.pop(-1) self.alldata[newind] = x self.nnz += 1 # update row newrowj = numpy.zeros(len(self.rowj[i]) + 1, dtype=int) newrowj[:n] = self.rowj[i][:n] newrowj[n] = j newrowj[n + 1:] = self.rowj[i][n:] self.rowj[i] = newrowj newrowdataind = numpy.zeros(len(self.rowdataind[i]) + 1, dtype=int) newrowdataind[:n] = self.rowdataind[i][:n] newrowdataind[n] = newind newrowdataind[n + 1:] = self.rowdataind[i][n:] self.rowdataind[i] = newrowdataind # update col newcoli = numpy.zeros(len(self.coli[j]) + 1, dtype=int) newcoli[:m] = self.coli[j][:m] newcoli[m] = i newcoli[m + 1:] = self.coli[j][m:] self.coli[j] = newcoli newcoldataind = numpy.zeros(len(self.coldataind[j]) + 1, dtype=int) newcoldataind[:m] = self.coldataind[j][:m] newcoldataind[m] = newind newcoldataind[m + 1:] = self.coldataind[j][m:] self.coldataind[j] = newcoldataind def remove(self, i, j): n = searchsorted(self.rowj[i], j) if n >= len(self.rowj[i]) or self.rowj[i][n] != j: raise ValueError('No element to remove at position ' + str(i, j)) oldind = 
self.rowdataind[i][n] self.unusedinds.append(oldind) self.nnz -= 1 m = searchsorted(self.coli[j], i) # update row newrowj = numpy.zeros(len(self.rowj[i]) - 1, dtype=int) newrowj[:n] = self.rowj[i][:n] newrowj[n:] = self.rowj[i][n + 1:] self.rowj[i] = newrowj newrowdataind = numpy.zeros(len(self.rowdataind[i]) - 1, dtype=int) newrowdataind[:n] = self.rowdataind[i][:n] newrowdataind[n:] = self.rowdataind[i][n + 1:] self.rowdataind[i] = newrowdataind # update col newcoli = numpy.zeros(len(self.coli[j]) - 1, dtype=int) newcoli[:m] = self.coli[j][:m] newcoli[m:] = self.coli[j][m + 1:] self.coli[j] = newcoli newcoldataind = numpy.zeros(len(self.coldataind[j]) - 1, dtype=int) newcoldataind[:m] = self.coldataind[j][:m] newcoldataind[m:] = self.coldataind[j][m + 1:] self.coldataind[j] = newcoldataind def get_element(self, i, j): n = searchsorted(self.rowj[i], j) if n >= len(self.rowj[i]) or self.rowj[i][n] != j: return 0 return self.alldata[self.rowdataind[i][n]] set_element = insert def get_row(self, i): return SparseConnectionVector(self.shape[1], self.rowj[i], self.alldata[self.rowdataind[i]]) def get_rows(self, rows): return [SparseConnectionVector(self.shape[1], self.rowj[i], self.alldata[self.rowdataind[i]]) for i in rows] def get_col(self, j): return SparseConnectionVector(self.shape[0], self.coli[j], self.alldata[self.coldataind[j]]) def get_cols(self, cols): return [SparseConnectionVector(self.shape[0], self.coli[j], self.alldata[self.coldataind[j]]) for j in cols] def set_row(self, i, val): if isinstance(val, SparseConnectionVector): if val.ind is not self.rowj[i]: if not (val.ind == self.rowj[i]).all(): raise ValueError('Sparse row setting must use same indices.') self.alldata[self.rowdataind[i]] = val else: if isinstance(val, numpy.ndarray): val = asarray(val) self.alldata[self.rowdataind[i]] = val[self.rowj[i]] else: self.alldata[self.rowdataind[i]] = val def set_col(self, j, val): if isinstance(val, SparseConnectionVector): if val.ind is not self.coli[j]: if not (val.ind == self.coli[j]).all(): raise ValueError('Sparse row setting must use same indices.') self.alldata[self.coldataind[j]] = val else: if isinstance(val, numpy.ndarray): val = asarray(val) self.alldata[self.coldataind[j]] = val[self.coli[j]] else: self.alldata[self.coldataind[j]] = val def __setitem__(self, item, value): if item == colon_slice: self.alldata[:self.nnz] = value else: ConnectionMatrix.__setitem__(self, item, value) brian-1.3.1/brian/connections/connectionvector.py000066400000000000000000000123321167451777000222000ustar00rootroot00000000000000from base import * __all__ = [ 'ConnectionVector', 'SparseConnectionVector', 'DenseConnectionVector', ] class ConnectionVector(object): ''' Base class for connection vectors, just used for defining the interface ConnectionVector objects are returned by ConnectionMatrix objects when they retrieve rows or columns. At the moment, there are two choices, sparse or dense. This class has no real function at the moment. ''' def todense(self): return NotImplemented def tosparse(self): return NotImplemented class DenseConnectionVector(ConnectionVector, numpy.ndarray): ''' Just a numpy array. ''' def __new__(subtype, arr): return numpy.array(arr, copy=False).view(subtype) def todense(self): return self def tosparse(self): return SparseConnectionVector(len(self), self.nonzero(), self) class SparseConnectionVector(ConnectionVector, numpy.ndarray): ''' Sparse vector class A sparse vector is typically a row or column of a sparse matrix. 
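For example (an illustrative sketch only; the module path is taken from the surrounding package layout)::

    from numpy import array
    from brian.connections.connectionvector import SparseConnectionVector
    # a length-5 vector whose only nonzero entries are at indices 1 and 3
    v = SparseConnectionVector(5, array([1, 3]), array([0.5, 2.0]))
    print v.todense()   # -> [ 0.   0.5  0.   2.   0. ]
    w = 2 * v           # still sparse, same indices, values doubled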
This class can be treated in many cases as if it were just a vector without worrying about the fact that it is sparse. For example, if you write ``2*v`` it will evaluate to a new sparse vector. There is one aspect of the semantics which is potentially confusing. In a binary operation with a dense vector such as ``sv+dv`` where ``sv`` is sparse and ``dv`` is dense, the result will be a sparse vector with zeros where ``sv`` has zeros, the potentially nonzero elements of ``dv`` where ``sv`` has no entry will be simply ignored. It is for this reason that it is a ``SparseConnectionVector`` and not a general ``SparseVector``, because these semantics make sense for rows and columns of connection matrices but not in general. Implementation details: The underlying numpy array contains the values, the attribute ``n`` is the length of the sparse vector, and ``ind`` is an array of the indices of the nonzero elements. ''' def __new__(subtype, n, ind, data): x = numpy.array(data, copy=False).view(subtype) x.n = n x.ind = ind return x def __array_finalize__(self, orig): # the array is passed through this function after standard numpy operations, # this ensures that the indices are kept from the original array. This makes, # for example, sin(x) do the right thing for x a sparse vector. try: self.ind = orig.ind self.n = orig.n except AttributeError: pass return self def todense(self): x = zeros(self.n) x[self.ind] = self return x def tosparse(self): return self # This is a list of the binary operations that numpy arrays support. modifymeths = ['__add__', '__and__', '__div__', '__divmod__', '__eq__', '__floordiv__', '__ge__', '__gt__', '__iadd__', '__iand__', '__idiv__', '__ifloordiv__', '__ilshift__', '__imod__', '__imul__', '__ior__', '__ipow__', '__irshift__', '__isub__', '__itruediv__', '__ixor__', '__le__', '__lshift__', '__lt__', '__mod__', '__mul__', '__ne__', '__or__', '__pow__', '__radd__', '__rand__', '__rdiv__', '__rdivmod__', '__rfloordiv__', '__rlshift__', '__rmod__', '__rmul__', '__ror__', '__rpow__', '__rrshift__', '__rshift__', '__rsub__', '__rtruediv__', '__rxor__', '__sub__', '__truediv__', '__xor__'] # This template function (where __add__ is replaced by any of the methods above) implements # the semantics described in this class' docstring when operating with a dense vector. template = ''' def __add__(self, other): if isinstance(other, SparseConnectionVector): # Note that removing this check is potentially dangerous, but only in weird circumstances would it cause # any problems, and leaving it in causes problems for STDP with DelayConnection (because the indices are # not the same, but should be presumed to be equal). #if other.ind is not self.ind: # raise TypeError('__add__(SparseConnectionVector, SparseConnectionVector) only defined if indices are the same') return SparseConnectionVector(self.n, self.ind, numpy.ndarray.__add__(asarray(self), asarray(other))) if isinstance(other, numpy.ndarray): return SparseConnectionVector(self.n, self.ind, numpy.ndarray.__add__(asarray(self), other[self.ind])) return SparseConnectionVector(self.n, self.ind, numpy.ndarray.__add__(asarray(self), other)) '''.strip() # this substitutes any of the method names in the modifymeths list for __add__ in the template # above and then executes them, i.e. adding them as methods to the class. 
When the behaviour is # stable, this can be replaced by the explicit definitions but it may as well be left as it is for # the moment (it's slower at import time, but not at run time, and errors are more difficult to # catch when it is done like this). for m in modifymeths: s = template.replace('__add__', m) exec s del modifymeths, template brian-1.3.1/brian/connections/construction.py000066400000000000000000000164221167451777000213540ustar00rootroot00000000000000from base import * from sparsematrix import * __all__ = ['random_row_func', 'random_matrix', 'random_matrix_fixed_column', 'eye_lil_matrix', ] def random_row_func(N, p, weight=1., initseed=None): ''' Returns a random connectivity ``row_func`` for use with :class:`UserComputedConnectionMatrix` Gives equivalent output to the :meth:`Connection.connect_random` method. Arguments: ``N`` The number of target neurons. ``p`` The probability of a synapse. ``weight`` The connection weight (must be a single value). ``initseed`` The initial seed value (for reproducible results). ''' if initseed is None: initseed = pyrandom.randint(100000, 1000000) # replace this cur_row = numpy.zeros(N) myrange = numpy.arange(N, dtype=int) def row_func(i): pyrandom.seed(initseed + int(i)) scirandom.seed(initseed + int(i)) k = scirandom.binomial(N, p, 1)[0] cur_row[:] = 0.0 cur_row[pyrandom.sample(myrange, k)] = weight return cur_row return row_func # Generation of matrices def random_matrix(n, m, p, value=1.): ''' Generates a sparse random matrix with size (n,m). Entries are 1 (or optionnally value) with probability p. If value is a function, then that function is called for each non zero element as value() or value(i,j). ''' # TODO: # Simplify (by using valuef) W = sparse.lil_matrix((n, m)) if callable(value) and callable(p): if value.func_code.co_argcount == 0: valuef = lambda i, j:[value() for _ in j] # value function elif value.func_code.co_argcount == 2: try: failed = (array(value(0, arange(m))).size != m) except: failed = True if failed: # vector-based not possible log_debug('connections', 'Cannot build the connection matrix by rows') valuef = lambda i, j:[value(i, k) for k in j] else: valuef = value else: raise AttributeError, "Bad number of arguments in value function (should be 0 or 2)" if p.func_code.co_argcount == 2: # Check if p(i,j) is vectorisable try: failed = (array(p(0, arange(m))).size != m) except: failed = True if failed: # vector-based not possible log_debug('connections', 'Cannot build the connection matrix by rows') for i in xrange(n): W.rows[i] = [j for j in range(m) if rand() < p(i, j)] W.data[i] = list(valuef(i, array(W.rows[i]))) else: # vector-based possible for i in xrange(n): W.rows[i] = list((rand(m) < p(i, arange(m))).nonzero()[0]) W.data[i] = list(valuef(i, array(W.rows[i]))) elif p.func_code.co_argcount == 0: for i in xrange(n): W.rows[i] = [j for j in range(m) if rand() < p()] W.data[i] = list(valuef(i, array(W.rows[i]))) else: raise AttributeError, "Bad number of arguments in p function (should be 2)" elif callable(value): if value.func_code.co_argcount == 0: # TODO: should work with partial objects for i in xrange(n): k = random.binomial(m, p, 1)[0] W.rows[i] = sample(xrange(m), k) W.rows[i].sort() W.data[i] = [value() for _ in xrange(k)] elif value.func_code.co_argcount == 2: try: failed = (array(value(0, arange(m))).size != m) except: failed = True if failed: # vector-based not possible log_debug('connections', 'Cannot build the connection matrix by rows') for i in xrange(n): k = random.binomial(m, p, 1)[0] W.rows[i] = 
sample(xrange(m), k) W.rows[i].sort() W.data[i] = [value(i, j) for j in W.rows[i]] else: for i in xrange(n): k = random.binomial(m, p, 1)[0] W.rows[i] = sample(xrange(m), k) W.rows[i].sort() W.data[i] = list(value(i, array(W.rows[i]))) else: raise AttributeError, "Bad number of arguments in value function (should be 0 or 2)" elif callable(p): if p.func_code.co_argcount == 2: # Check if p(i,j) is vectorisable try: failed = (array(p(0, arange(m))).size != m) except: failed = True if failed: # vector-based not possible log_debug('connections', 'Cannot build the connection matrix by rows') for i in xrange(n): W.rows[i] = [j for j in range(m) if rand() < p(i, j)] W.data[i] = [value] * len(W.rows[i]) else: # vector-based possible for i in xrange(n): W.rows[i] = list((rand(m) < p(i, arange(m))).nonzero()[0]) W.data[i] = [value] * len(W.rows[i]) elif p.func_code.co_argcount == 0: for i in xrange(n): W.rows[i] = [j for j in range(m) if rand() < p()] W.data[i] = [value] * len(W.rows[i]) else: raise AttributeError, "Bad number of arguments in p function (should be 2)" else: for i in xrange(n): k = random.binomial(m, p, 1)[0] # Not significantly faster to generate all random numbers in one pass # N.B.: the sample method is implemented in Python and it is not in Scipy W.rows[i] = sample(xrange(m), k) W.rows[i].sort() W.data[i] = [value] * k return W def random_matrix_fixed_column(n, m, p, value=1.): ''' Generates a sparse random matrix with size (n,m). Entries are 1 (or optionnally value) with probability p. The number of non-zero entries by per column is fixed: (int)(p*n) If value is a function, then that function is called for each non zero element as value() or value(i,j). ''' W = sparse.lil_matrix((n, m)) k = (int)(p * n) for j in xrange(m): # N.B.: the sample method is implemented in Python and it is not in Scipy for i in sample(xrange(n), k): W.rows[i].append(j) if callable(value): if value.func_code.co_argcount == 0: for i in xrange(n): W.data[i] = [value() for _ in xrange(len(W.rows[i]))] elif value.func_code.co_argcount == 2: for i in xrange(n): W.data[i] = [value(i, j) for j in W.rows[i]] else: raise AttributeError, "Bad number of arguments in value function (should be 0 or 2)" else: for i in xrange(n): W.data[i] = [value] * len(W.rows[i]) return W def eye_lil_matrix(n): ''' Returns the identity matrix of size n as a lil_matrix (sparse matrix). ''' M = sparse.lil_matrix((n, n)) M.setdiag([1.] * n) return M brian-1.3.1/brian/connections/constructionmatrix.py000066400000000000000000000054461167451777000226050ustar00rootroot00000000000000from base import * from sparsematrix import * from connectionmatrix import * __all__ = [ 'ConstructionMatrix', 'SparseConstructionMatrix', 'DenseConstructionMatrix', 'DynamicConstructionMatrix', 'construction_matrix_register', ] class ConstructionMatrix(object): ''' Base class for construction matrices A construction matrix is used to initialise and build connection matrices. A ``ConstructionMatrix`` class has to implement a method ``connection_matrix(*args, **kwds)`` which returns a :class:`ConnectionMatrix` object of the appropriate type. ''' def connection_matrix(self, *args, **kwds): return NotImplemented class DenseConstructionMatrix(ConstructionMatrix, numpy.ndarray): ''' Dense construction matrix. Essentially just numpy.ndarray. The ``connection_matrix`` method returns a :class:`DenseConnectionMatrix` object. The ``__setitem__`` method is overloaded so that you can set values with a sparse matrix. 
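A minimal usage sketch (illustrative only; in practice a construction matrix is normally created for you by a ``Connection`` via the ``structure`` keyword, as in ``structure((len(source), len(target)), **kwds)`` used elsewhere in this package, rather than by hand)::

    from brian.connections.constructionmatrix import DenseConstructionMatrix
    W = DenseConstructionMatrix((3, 3))   # starts as an all-zero 3x3 matrix
    W[0, 1] = 0.5
    C = W.connection_matrix()             # converts to a DenseConnectionMatrix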
''' def __init__(self, val, **kwds): self[:] = 0 self.init_kwds = kwds def connection_matrix(self, **additional_kwds): self.init_kwds.update(additional_kwds) kwds = self.init_kwds return DenseConnectionMatrix(self, **kwds) def __setitem__(self, index, W): # Make it work for sparse matrices #if isinstance(W,sparse.spmatrix): if isinstance(W, sparse.spmatrix): ndarray.__setitem__(self, index, W.todense()) else: ndarray.__setitem__(self, index, W) def todense(self): return asarray(self) class SparseConstructionMatrix(ConstructionMatrix, SparseMatrix): ''' SparseConstructionMatrix is converted to SparseConnectionMatrix. ''' def __init__(self, arg, **kwds): SparseMatrix.__init__(self, arg) self.init_kwds = kwds def connection_matrix(self, **additional_kwds): self.init_kwds.update(additional_kwds) return SparseConnectionMatrix(self, **self.init_kwds) class DynamicConstructionMatrix(ConstructionMatrix, SparseMatrix): ''' DynamicConstructionMatrix is converted to DynamicConnectionMatrix. ''' def __init__(self, arg, **kwds): SparseMatrix.__init__(self, arg) self.init_kwds = kwds def connection_matrix(self, **additional_kwds): self.init_kwds.update(additional_kwds) return DynamicConnectionMatrix(self, **self.init_kwds) # this is used to look up str->class conversions for structure=... keyword construction_matrix_register = { 'dense':DenseConstructionMatrix, 'sparse':SparseConstructionMatrix, 'dynamic':DynamicConstructionMatrix, } brian-1.3.1/brian/connections/delayconnection.py000066400000000000000000000425541167451777000220050ustar00rootroot00000000000000from base import * from sparsematrix import * from connectionvector import * from constructionmatrix import * from connectionmatrix import * from construction import * from connection import * from propagation_c_code import * import warnings network_operation = None # we import this when we need to because of order of import issues __all__ = [ 'DelayConnection', ] class DelayConnection(Connection): ''' Connection which implements heterogeneous postsynaptic delays Initialised as for a :class:`Connection`, but with the additional keyword: ``max_delay`` Specifies the maximum delay time for any neuron. Note, the smaller you make this the less memory will be used. Overrides the following attribute of :class:`Connection`: .. attribute:: delay A matrix of delays. This array can be changed during a run, but at no point should it be greater than ``max_delay``. In addition, the methods ``connect``, ``connect_random``, ``connect_full``, and ``connect_one_to_one`` have a new keyword ``delay=...`` for setting the initial values of the delays, where ``delay`` can be one of: * A float, all delays will be set to this value * A pair (min, max), delays will be uniform between these two values. * A function of no arguments, will be called for each nonzero entry in the weight matrix. * A function of two argument ``(i,j)`` will be called for each nonzero entry in the weight matrix. * A matrix of an appropriate type (e.g. ndarray or lil_matrix). Finally, there is a method: ``set_delays(source, target, delay)`` Where ``delay`` must be of one of the types above. **Notes** This class implements post-synaptic delays. This means that the spike is propagated immediately from the presynaptic neuron with the synaptic weight at the time of the spike, but arrives at the postsynaptic neuron with the given delay. At the moment, Brian only provides support for presynaptic delays if they are homogeneous, using the ``delay`` keyword of a standard ``Connection``. 
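**Example** (an illustrative sketch; the leaky-integrator model below is made up for the example and is not prescribed by this class)::

    from brian import *
    G = NeuronGroup(10, model='dv/dt = -v/(10*ms) : volt',
                    threshold=10*mV, reset=0*mV)
    H = NeuronGroup(10, model='dv/dt = -v/(10*ms) : volt')
    C = DelayConnection(G, H, 'v', max_delay=5*ms)
    # random connectivity, 1 mV weights, delays uniform in [0 ms, 4 ms]
    C.connect_random(G, H, sparseness=0.2, weight=1*mV, delay=(0*ms, 4*ms))
    C.set_delays(G, H, delay=3*ms)   # delays can also be reset afterwards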
**Implementation** :class:`DelayConnection` stores an array of size ``(n,m)`` where ``n`` is ``max_delay/dt`` for ``dt`` of the target :class:`NeuronGroup`'s clock, and ``m`` is the number of neurons in the target. This array can potentially be quite large. Each row in this array represents the array that should be added to the target state variable at some particular future time. Which row corresponds to which time is tracked using a circular indexing scheme. When a spike from neuron ``i`` in the source is encountered, the delay time of neuron ``i`` is looked up, the row corresponding to the current time plus that delay time is found using the circular indexing scheme, and then the spike is propagated to that row as for a standard connection (although this won't be propagated to the target until a later time). **Warning** If you are using a dynamic connection matrix, it is your responsibility to ensure that the nonzero entries of the weight matrix and the delay matrix exactly coincide. This is not an issue for sparse or dense matrices. ''' def __init__(self, source, target, state=0, modulation=None, structure='sparse', weight=None, sparseness=None, delay=None, max_delay=5 * msecond, **kwds): global network_operation if network_operation is None: from ..network import network_operation Connection.__init__(self, source, target, state=state, modulation=modulation, structure=structure, weight=weight, sparseness=sparseness, **kwds) self._max_delay = int(max_delay / target.clock.dt) + 1 source.set_max_delay(max_delay) # Each row of the following array stores the cumulative effect of spikes at some # particular time, defined by a circular indexing scheme. The _cur_delay_ind attribute # stores the row corresponding to the current time, so that _cur_delay_ind+1 corresponds # to that time + target.clock.dt, and so on. When _cur_delay_ind reaches _max_delay it # resets to zero. self._delayedreaction = numpy.zeros((self._max_delay, len(target))) # vector of delay times, can be changed during a run if isinstance(structure, str): structure = construction_matrix_register[structure] self.delayvec = structure((len(source), len(target)), **kwds) self._cur_delay_ind = 0 # this network operation is added to the Network object via the contained_objects # protocol (see the line after the function definition). The standard Connection.propagate # function propagates spikes to _delayedreaction rather than the target, and this # function which is called after the usual propagations propagates that data from # _delayedreaction to the target. It only needs to be called each target.clock update. 
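        # Illustrative note: with the default max_delay of 5 ms and a target
        # clock dt of 0.1 ms, _max_delay = int(5*ms/0.1*ms) + 1 = 51 rows are
        # allocated in _delayedreaction, and a spike with delay d is
        # accumulated into row (_cur_delay_ind + int(d/dt)) % _max_delay,
        # exactly as done in propagate() below.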
@network_operation(clock=target.clock, when='after_connections') def delayed_propagate(): # propagate from _delayedreaction -> target group target._S[self.nstate] += self._delayedreaction[self._cur_delay_ind, :] # reset the current row of _delayedreaction self._delayedreaction[self._cur_delay_ind, :] = 0.0 # increase the index for the circular indexing scheme self._cur_delay_ind = (self._cur_delay_ind + 1) % self._max_delay self.contained_objects = [delayed_propagate] # this is just used to convert delayvec's which are in ms to integers, precalculating it makes it faster self._invtargetdt = 1 / self.target.clock._dt self._useaccel = get_global_preference('useweave') self._cpp_compiler = get_global_preference('weavecompiler') self._extra_compile_args = ['-O3'] if self._cpp_compiler == 'gcc': self._extra_compile_args += get_global_preference('gcc_options') # ['-march=native', '-ffast-math'] if delay is not None: self.set_delays(delay=delay) def propagate(self, spikes): if not self.iscompressed: self.compress() if len(spikes): # Target state variable dr = self._delayedreaction # If specified, modulation state variable if self._nstate_mod is not None: sv_pre = self.source._S[self._nstate_mod] # Get the rows of the connection matrix, each row will be either a # DenseConnectionVector or a SparseConnectionVector. rows = self.W.get_rows(spikes) dvecrows = self.delayvec.get_rows(spikes) if not self._useaccel: # Pure Python version is easier to understand, but slower than C++ version below if isinstance(rows[0], SparseConnectionVector): if self._nstate_mod is None: # Rows stored as sparse vectors without modulation for row, dvecrow in izip(rows, dvecrows): if not len(row.ind) == len(dvecrow.ind): raise RuntimeError('Weight and delay matrices must be kept in synchrony for sparse matrices.') drind = (self._cur_delay_ind + numpy.array(self._invtargetdt * dvecrow, dtype=int)) % self._max_delay dr[drind, dvecrow.ind] += row else: # Rows stored as sparse vectors with modulation for i, row, dvecrow in izip(spikes, rows, dvecrows): if not len(row.ind) == len(dvecrow.ind): raise RuntimeError('Weight and delay matrices must be kept in synchrony for sparse matrices.') drind = (self._cur_delay_ind + numpy.array(self._invtargetdt * dvecrow, dtype=int)) % self._max_delay # note we call the numpy __mul__ directly because row is # a SparseConnectionVector with different mul semantics dr[drind, dvecrow.ind] += numpy.ndarray.__mul__(row, sv_pre[i]) else: if self._nstate_mod is None: # Rows stored as dense vectors without modulation drjind = numpy.arange(len(self.target), dtype=int) for row, dvecrow in izip(rows, dvecrows): drind = (self._cur_delay_ind + numpy.array(self._invtargetdt * dvecrow, dtype=int)) % self._max_delay dr[drind, drjind[:len(drind)]] += row else: # Rows stored as dense vectors with modulation drjind = numpy.arange(len(self.target), dtype=int) for i, row, dvecrow in izip(spikes, rows, dvecrows): drind = (self._cur_delay_ind + numpy.array(self._invtargetdt * dvecrow, dtype=int)) % self._max_delay dr[drind, drjind[:len(drind)]] += numpy.ndarray.__mul__(row, sv_pre[i]) else: # C++ accelerated code, does the same as the code above but faster and less pretty nspikes = len(spikes) cdi = self._cur_delay_ind idt = self._invtargetdt md = self._max_delay if isinstance(rows[0], SparseConnectionVector): rowinds = [r.ind for r in rows] datas = rows if self._nstate_mod is None: code = delay_propagate_weave_code_sparse codevars = delay_propagate_weave_code_sparse_vars else: code = 
delay_propagate_weave_code_sparse_modulation codevars = delay_propagate_weave_code_sparse_modulation_vars else: if not isinstance(spikes, numpy.ndarray): spikes = array(spikes, dtype=int) N = len(self.target) if self._nstate_mod is None: code = delay_propagate_weave_code_dense codevars = delay_propagate_weave_code_dense_vars else: code = delay_propagate_weave_code_dense_modulation codevars = delay_propagate_weave_code_dense_modulation_vars weave.inline(code, codevars, compiler=self._cpp_compiler, type_converters=weave.converters.blitz, extra_compile_args=self._extra_compile_args) def do_propagate(self): self.propagate(self.source.get_spikes(0)) def _set_delay_property(self, val): self.delayvec[:] = val delay = property(fget=lambda self:self.delayvec, fset=_set_delay_property) def compress(self): if not self.iscompressed: # We want delayvec to have nonzero entries at the same places as # W does, so we use W to initialise the compressed version of # delayvec, and then copy the values from the old delayvec to # the new compressed one, allowing delayvec and W to not have # to be perfectly intersected at the initialisation stage. If # the structure is dynamic, it will be the user's # responsibility to update them in sequence delayvec = self.delayvec # Special optimisation for the lil_matrix case if the indices # already match. Note that this special optimisation also allows # you to use multiple synapses via a sort of hack (editing the # rows and data attributes of the lil_matrix explicitly) using_lil_matrix = False if isinstance(delayvec, sparse.lil_matrix): using_lil_matrix = True self.delayvec = self.W.connection_matrix(copy=True) repeated_index_hack = False for i in xrange(self.W.shape[0]): if using_lil_matrix: try: row = SparseConnectionVector(self.W.shape[1], array(delayvec.rows[i], dtype=int), array(delayvec.data[i])) if sum(diff(row.ind)==0)>0: warnings.warn("You are using the repeated index hack, be careful!") repeated_index_hack = True self.delayvec.set_row(i, row) except ValueError: if repeated_index_hack: warnings.warn("You are using the repeated index " "hack and not ensuring that weight " "and delay indices correspond, this " "is almost certainly an error!") using_lil_matrix = False # The delayvec[i,:] operation for sparse.lil_matrix format # is VERY slow, but the CSR format is fine. delayvec = delayvec.tocsr() if not using_lil_matrix: self.delayvec.set_row(i, array(todense(delayvec[i, :]), copy=False).flatten()) Connection.compress(self) def set_delays(self, source=None, target=None, delay=None): ''' Set the delays corresponding to the weight matrix ``delay`` must be one of: * A float, all delays will be set to this value * A pair (min, max), delays will be uniform between these two values. * A function of no arguments, will be called for each nonzero entry in the weight matrix. * A function of two argument ``(i,j)`` will be called for each nonzero entry in the weight matrix. * A matrix of an appropriate type (e.g. ndarray or lil_matrix). 
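        For example (an illustrative sketch; ``C`` is assumed to be an already connected :class:`DelayConnection`)::

            C.set_delays(delay=2*ms)                                   # a single value
            C.set_delays(delay=(0*ms, 4*ms))                           # uniform between 0 and 4 ms
            C.set_delays(delay=lambda i, j: (1 + abs(i - j))*0.1*ms)   # function of (i, j)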
''' if delay is None: return W = self.W P = source or self.source Q = target or self.target i0, j0 = self.origin(P, Q) i1 = i0 + len(P) j1 = j0 + len(Q) if isinstance(W, sparse.lil_matrix): def getrow(i): inds = array(W.rows[i], dtype=int) inds = inds[logical_and(inds >= j0, inds < j1)] return inds, len(inds) else: def getrow(i): inds = (W[i, j0:j1] != 0).nonzero()[0] + j0 return inds, len(inds) #return slice(j0, j1), j1-j0 if isinstance(delay, (float, int)): for i in xrange(i0, i1): inds, L = getrow(i) self.delayvec[i, inds] = delay elif isinstance(delay, (tuple, list)) and len(delay) == 2: delaymin, delaymax = delay for i in xrange(i0, i1): inds, L = getrow(i) rowdelay = rand(L) * (delaymax - delaymin) + delaymin self.delayvec[i, inds] = rowdelay elif callable(delay) and delay.func_code.co_argcount == 0: for i in xrange(i0, i1): inds, L = getrow(i) rowdelay = [delay() for _ in xrange(L)] self.delayvec[i, inds] = rowdelay elif callable(delay) and delay.func_code.co_argcount == 2: for i in xrange(i0, i1): inds, L = getrow(i) if isinstance(inds, slice): inds = numpy.arange(inds.start, inds.stop) self.delayvec[i, inds] = delay(i - i0, inds - j0) else: #raise TypeError('delays must be float, pair or function of 0 or 2 arguments') self.delayvec[i0:i1, j0:j1] = delay # probably won't work, but then it will raise an error def connect(self, source=None, target=None, W=None, delay=None): Connection.connect(self, source=source, target=target, W=W) if delay is not None: self.set_delays(source, target, delay) def connect_random(self, source=None, target=None, p=1.0, weight=1.0, fixed=False, seed=None, sparseness=None, delay=None): Connection.connect_random(self, source=source, target=target, p=p, weight=weight, fixed=fixed, seed=seed, sparseness=sparseness) if delay is not None: self.set_delays(source, target, delay) def connect_full(self, source=None, target=None, weight=1.0, delay=None): Connection.connect_full(self, source=source, target=target, weight=weight) if delay is not None: self.set_delays(source, target, delay) def connect_one_to_one(self, source=None, target=None, weight=1.0, delay=None): Connection.connect_one_to_one(self, source=source, target=target, weight=weight) if delay is not None: self.set_delays(source, target, delay) brian-1.3.1/brian/connections/otherconnections.py000066400000000000000000000047731167451777000222140ustar00rootroot00000000000000from base import * from connection import * __all__ = [ 'IdentityConnection', 'MultiConnection', ] class IdentityConnection(Connection): ''' A :class:`Connection` between two groups of the same size, where neuron ``i`` in the source group is connected to neuron ``i`` in the target group. Initialised with arguments: ``source``, ``target`` The source and target :class:`NeuronGroup` objects. ``state`` The target state variable. ``weight`` The weight of the synapse, must be a scalar. ``delay`` Only homogeneous delays are allowed. The benefit of this class is that it has no storage requirements and is optimised for this special case. ''' @check_units(delay=second) def __init__(self, source, target, state=0, weight=1, delay=0 * msecond): if (len(source) != len(target)): raise AttributeError, 'The connected (sub)groups must have the same size.' 
self.source = source # pointer to source group self.target = target # pointer to target group if type(state) == types.StringType: # named state variable self.nstate = target.get_var_index(state) else: self.nstate = state # target state index self.W = float(weight) # weight source.set_max_delay(delay) self.delay = int(delay / source.clock.dt) # Synaptic delay in time bins def propagate(self, spikes): ''' Propagates the spikes to the target. ''' self.target._S[self.nstate, spikes] += self.W def compress(self): pass class MultiConnection(Connection): ''' A hub for multiple connections with a common source group. ''' def __init__(self, source, connections=[]): self.source = source self.connections = connections self.iscompressed = False self.delay = connections[0].delay def propagate(self, spikes): ''' Propagates the spikes to the targets. ''' for C in self.connections: C.propagate(spikes) def compress(self): if not self.iscompressed: for C in self.connections: C.compress() self.iscompressed = True def reinit(self): for C in self.connections: # this does nothing for normal connections but is important for # SpikeMonitors C.reinit() brian-1.3.1/brian/connections/propagation_c_code.py000066400000000000000000000226731167451777000224460ustar00rootroot00000000000000################################################################################ ################## NO DELAYS ################################################### ################################################################################ ################## SPARSE ###################################################### propagate_weave_code_sparse_vars = ['sv', 'rowinds', 'datas', 'spikes', 'nspikes'] propagate_weave_code_sparse = ''' for(int j=0;j row = convert_to_blitz(_row,"row"); long* row = (long*)_row->data; PyObject* _datasj = datas[j]; PyArrayObject* _data = convert_to_numpy(_datasj, "data"); conversion_numpy_check_type(_data, PyArray_DOUBLE, "data"); conversion_numpy_check_size(_data, 1, "data"); //blitz::Array data = convert_to_blitz(_data,"data"); double* data = (double*)_data->data; //int m = row.numElements(); int m = _row->dimensions[0]; for(int k=0;k row = convert_to_blitz(_row,"row"); long* row = (long*)_row->data; PyObject* _datasj = datas[j]; PyArrayObject* _data = convert_to_numpy(_datasj, "data"); conversion_numpy_check_type(_data, PyArray_DOUBLE, "data"); conversion_numpy_check_size(_data, 1, "data"); //blitz::Array data = convert_to_blitz(_data,"data"); double* data = (double*)_data->data; //int m = row.numElements(); int m = _row->dimensions[0]; //double mod = sv_pre(spikes(j)); double mod = sv_pre[spikes[j]]; for(int k=0;k row = convert_to_blitz(_row,"row"); double *row = (double *)_row->data; for(int k=0;k row = convert_to_blitz(_row,"row"); //double mod = sv_pre(spikes(j)); double *row = (double *)_row->data; double mod = sv_pre[spikes[j]]; for(int k=0;k row = convert_to_blitz(_row,"row"); PyObject* _datasj = datas[j]; PyArrayObject* _data = convert_to_numpy(_datasj, "data"); conversion_numpy_check_type(_data, PyArray_DOUBLE, "data"); conversion_numpy_check_size(_data, 1, "data"); blitz::Array data = convert_to_blitz(_data,"data"); PyObject* _dvecrowsj = dvecrows[j]; PyArrayObject* _dvecrow = convert_to_numpy(_dvecrowsj, "dvecrow"); conversion_numpy_check_type(_dvecrow, PyArray_DOUBLE, "dvecrow"); conversion_numpy_check_size(_dvecrow, 1, "dvecrow"); blitz::Array dvecrow = convert_to_blitz(_dvecrow,"dvecrow"); int m = row.numElements(); for(int k=0;k row = convert_to_blitz(_row,"row"); PyObject* _datasj = 
datas[j]; PyArrayObject* _data = convert_to_numpy(_datasj, "data"); conversion_numpy_check_type(_data, PyArray_DOUBLE, "data"); conversion_numpy_check_size(_data, 1, "data"); blitz::Array data = convert_to_blitz(_data,"data"); PyObject* _dvecrowsj = dvecrows[j]; PyArrayObject* _dvecrow = convert_to_numpy(_dvecrowsj, "dvecrow"); conversion_numpy_check_type(_dvecrow, PyArray_DOUBLE, "dvecrow"); conversion_numpy_check_size(_dvecrow, 1, "dvecrow"); blitz::Array dvecrow = convert_to_blitz(_dvecrow,"dvecrow"); int m = row.numElements(); double mod = sv_pre(spikes(j)); for(int k=0;k row = convert_to_blitz(_row,"row"); PyObject* _dvecrowsj = dvecrows[j]; PyArrayObject* _dvecrow = convert_to_numpy(_dvecrowsj, "dvecrow"); conversion_numpy_check_type(_dvecrow, PyArray_DOUBLE, "dvecrow"); conversion_numpy_check_size(_dvecrow, 1, "dvecrow"); blitz::Array dvecrow = convert_to_blitz(_dvecrow,"dvecrow"); for(int k=0;k row = convert_to_blitz(_row,"row"); PyObject* _dvecrowsj = dvecrows[j]; PyArrayObject* _dvecrow = convert_to_numpy(_dvecrowsj, "dvecrow"); conversion_numpy_check_type(_dvecrow, PyArray_DOUBLE, "dvecrow"); conversion_numpy_check_size(_dvecrow, 1, "dvecrow"); blitz::Array dvecrow = convert_to_blitz(_dvecrow,"dvecrow"); double mod = sv_pre(spikes(j)); for(int k=0;k self.curperiod: self.reinit() self.curperiod = cp t = t - cp * self.period # it is the iterator for neuron i, and nextspiketime is the stored time of the next spike for it, nextspiketime, i in zip(self.spiketimeiter, self.nextspiketime, range(len(self.spiketimes))): # Note we use approximate equality testing because the clock t+=dt mechanism accumulates errors if isinstance(self.spiketimes[i], numpy.ndarray): curt = float(t) else: curt = t if nextspiketime is not None and is_approx_less_than_or_equal(nextspiketime, curt): firing[i] = 1 try: nextspiketime = it.next() if is_approx_less_than_or_equal(nextspiketime, curt): log_warn('brian.MultipleSpikeGeneratorThreshold', 'Stacking multiple firing times') except StopIteration: nextspiketime = None self.nextspiketime[i] = nextspiketime return where(firing)[0] class SpikeGeneratorGroup(NeuronGroup): """Emits spikes at given times Initialised as:: SpikeGeneratorGroup(N,spiketimes[,clock[,period]]) with arguments: ``N`` The number of neurons in the group. ``spiketimes`` An object specifying which neurons should fire and when. It can be a container such as a ``list``, containing tuples ``(i,t)`` meaning neuron ``i`` fires at time ``t``, or a callable object which returns such a container (which allows you to use generator objects even though this is slower, see below). ``i`` can be an integer or an array (list of neurons that spike at the same time). If ``spiketimes`` is not a list or tuple, the pairs ``(i,t)`` need to be sorted in time. You can also pass a numpy array ``spiketimes`` where the first column of the array is the neuron indices, and the second column is the times in seconds. Alternatively you can pass a tuple with two arrays, the first one being the neuron indices and the second one times. WARNING: units are not checked in this case, the time array should be in seconds. ``clock`` An optional clock to update with (omit to use the default clock). ``period`` Optionally makes the spikes recur periodically with the given period. Note that iterator objects cannot be used as the ``spikelist`` with a period as they cannot be reinitialised. ``gather=False`` Set to True if you want to gather spike events that fall in the same timestep. 
(Deprecated since Brian 1.3.1) ``sort=True`` Set to False if your spike events are already sorted. Has an attribute: ``spiketimes`` This can be used to reset the list of spike times, however the values of ``N``, ``clock`` and ``period`` cannot be changed. **Sample usages** The simplest usage would be a list of pairs ``(i,t)``:: spiketimes = [(0,1*ms), (1,2*ms)] SpikeGeneratorGroup(N,spiketimes) A more complicated example would be to pass a generator:: import random def nextspike(): nexttime = random.uniform(0*ms,10*ms) while True: yield (random.randint(0,9),nexttime) nexttime = nexttime + random.uniform(0*ms,10*ms) P = SpikeGeneratorGroup(10,nextspike()) This would give a neuron group ``P`` with 10 neurons, where a random one of the neurons fires at an average rate of one every 5ms. Please note that as of 1.3.1, this behavior is preserved but will run slower than initializing with arrays, or lists. **Notes** Note that if a neuron fires more than one spike in a given interval ``dt``, additional spikes will be discarded. If you want them to stack, consider using the less efficient :class:`MultipleSpikeGeneratorGroup` object instead. A warning will be issued if this is detected. Also, if you want to use a SpikeGeneratorGroup with many spikes and/or neurons, please use an initialization with arrays. Also note that if you pass a generator, then reinitialising the group will not have the expected effect because a generator object cannot be reinitialised. Instead, you should pass a callable object which returns a generator. In the example above, that would be done by calling:: P = SpikeGeneratorGroup(10,nextspike) Whenever P is reinitialised, it will call ``nextspike()`` to create the required spike container. """ def __init__(self, N, spiketimes, clock=None, period=None, sort=True, gather=None): clock = guess_clock(clock) self.N = N self.period = period if gather: log_warn('brian.SpikeGeneratorGroup', 'SpikeGeneratorGroup\'s gather keyword use is deprecated') fallback = False # fall back on old SpikeGeneratorThreshold or not if isinstance(spiketimes, list): # spiketimes is a list of (i,t) if len(spiketimes): idx, times = zip(*spiketimes) else: idx, times = [], [] # the following try ... 
handles the case where spiketimes has index arrays # e.g spiketimes = [([0, 1], 0 * msecond), ([0, 1, 2], 2 * msecond)] # Notes: # - if there is always the same number of indices by array, its simple, it's just a matter of flattening # - if not, then it requires a for loop, and it's done in the except try: idx = array(idx, dtype = float) times = array(times, dtype = float) if idx.ndim > 1: # simple case times = tile(times.reshape((len(times), 1)), (idx.shape[1], 1)).flatten() idx = idx.flatten() except ValueError: new_idx = [] new_times = [] for k, item in enumerate(idx): if isinstance(item, list): new_idx += item # append indices new_times += [times[k]]*len(item) else: new_times += [times[k]] new_idx += [item] idx = array(new_idx, dtype = float) times = new_times times = array(times, dtype = float) elif isinstance(spiketimes, tuple): # spike times is a tuple with idx, times in arrays idx = spiketimes[0] times = spiketimes[1] elif isinstance(spiketimes, ndarray): # spiketimes is a ndarray, with first col is index and second time idx = spiketimes[:,0] times = spiketimes[:,1] else: log_warn('brian.SpikeGeneratorGroup', 'Using (slow) threshold because spiketimes is assumed to be a generator/iterator') # spiketimes is a callable object, so falling back on old SpikeGeneratorThreshold fallback = True if not fallback: thresh = FastSpikeGeneratorThreshold(N, idx, times, dt=clock.dt, period=period) else: thresh = SpikeGeneratorThreshold(N, spiketimes, period=period, sort=sort) if not hasattr(self, '_initialized'): NeuronGroup.__init__(self, N, model=LazyStateUpdater(), threshold=thresh, clock=clock) self._initialized = True else: self._threshold = thresh def reinit(self): super(SpikeGeneratorGroup, self).reinit() self._threshold.reinit() @property def spiketimes(self): return self._threshold.spiketimes @spiketimes.setter def spiketimes(self, values): self.__init__(self.N, values, period = self.period) class FastSpikeGeneratorThreshold(Threshold): ''' A faster version of the SpikeGeneratorThreshold where spikes are processed prior to the run (offline). It replaces the SpikeGeneratorThreshold as of 1.3.1. ''' ## Notes: # - N is ignored (should it not?) def __init__(self, N, addr, timestamps, dt = None, period=None): self.set_offsets(addr, timestamps, dt = dt) self.period = period self.dt = dt self.reinit() def set_offsets(self, I, T, dt = 1000): # Convert times into integers T = array(ceil(T/dt), dtype=int) # Put them into order # We use a field array to sort first by time and then by neuron index spikes = zeros(len(I), dtype=[('t', int), ('i', int)]) spikes['t'] = T spikes['i'] = I spikes.sort(order=('t', 'i')) T = ascontiguousarray(spikes['t']) self.I = ascontiguousarray(spikes['i']) # Now for each timestep, we find the corresponding segment of I with # the spike indices for that timestep. # The idea of offsets is that the segment offsets[t]:offsets[t+1] # should give the spikes with time t, i.e. T[offsets[t]:offsets[t+1]] # should all be equal to t, and so then later we can return # I[offsets[t]:offsets[t+1]] at time t. It might take a bit of thinking # to see why this works. Since T is sorted, and bincount[i] returns the # number of elements of T equal to i, then j=cumsum(bincount(T))[t] # gives the first index in T where T[j]=t. 
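        # Worked illustration: suppose (after sorting) T = [0, 0, 2] and
        # I = [3, 7, 1]. Then bincount(T) = [2, 0, 1] and
        # offsets = [0, 2, 2, 3], so I[offsets[0]:offsets[1]] = [3, 7] fires
        # at timestep 0, I[offsets[1]:offsets[2]] = [] (nothing at timestep 1)
        # and I[offsets[2]:offsets[3]] = [1] fires at timestep 2.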
if len(T): self.offsets = hstack((0, cumsum(bincount(T)))) else: self.offsets = array([]) def __call__(self, P): t = P.clock.t if self.period is not None: cp = int(t / self.period) if cp > self.curperiod: self.reinit() self.curperiod = cp t = t - cp * self.period dt = P.clock.dt t = int(round(t/dt)) if t+1>=len(self.offsets): return array([], dtype=int) return self.I[self.offsets[t]:self.offsets[t+1]] def reinit(self): self.curperiod = -1 @property def spiketimes(self): # this is a pain to do! retrieve spike times from offsets res = [] for k in range(len(self.offsets)-1): idx = self.I[self.offsets[k]:self.offsets[k+1]] ts = [k*self.dt]*len(idx) res += zip(idx, ts) return res class SpikeGeneratorThreshold(Threshold): """ Old threshold object for the SpikeGeneratorGroup **Notes** This version of the SpikeGeneratorThreshold object is deprecated, since version 1.3.1 of Brian it has been replaced in most cases by the FastSpikeGeneratorThreshold. This is kept only as a fallback object for when a SpikeGeneratorGroup object is initialized with a generator or an iterator object (see the doc for SpikeGeneratorGroup for more details). Please note that since this implementation is slower, using a static data structure as an input to a SpikeGeneratorGroup is advised. """ def __init__(self, N, spiketimes, period=None, sort=True): self.set_spike_times(N, spiketimes, period=period, sort=sort) def reinit(self): def makeiter(obj): if callable(obj): return iter(obj()) return iter(obj) self.spiketimeiter = makeiter(self.spiketimes) try: self.nextspikenumber, self.nextspiketime = self.spiketimeiter.next() except StopIteration: self.nextspiketime = None self.nextspikenumber = 0 self.curperiod = -1 def set_spike_times(self, N, spiketimes, period=None, sort=True): # N is the number of neurons, spiketimes is an iterable object of tuples (i,t) where # t is the spike time, and i is the neuron number. If spiketimes is a list or tuple, # then it will be sorted here. if isinstance(spiketimes, (list, tuple)) and sort: spiketimes = sorted(spiketimes, key=itemgetter(1)) self.spiketimes = spiketimes self.N = N self.period = period self.reinit() def __call__(self, P): firing = zeros(self.N) t = P.clock.t if self.period is not None: cp = int(t / self.period) if cp > self.curperiod: self.reinit() self.curperiod = cp t = t - cp * self.period if isinstance(self.spiketimes, numpy.ndarray): t = float(t) while self.nextspiketime is not None and is_approx_less_than_or_equal(self.nextspiketime, t): if type(self.nextspikenumber)==int and firing[self.nextspikenumber]: log_warn('brian.SpikeGeneratorThreshold', 'Discarding multiple overlapping spikes') firing[self.nextspikenumber] = 1 try: self.nextspikenumber, self.nextspiketime = self.spiketimeiter.next() except StopIteration: self.nextspiketime = None return where(firing)[0] # The output of this function is fed into SpikeGeneratorGroup, consisting of # time sorted pairs (t,i) where t is when neuron i fires @check_units(t=second, n=1, sigma=second) def PulsePacketGenerator(t, n, sigma): times = [pyrandom.gauss(t, sigma) for i in range(n)] times.sort() neuron = range(n) pyrandom.shuffle(neuron) return zip(neuron, times) class PulsePacket(SpikeGeneratorGroup): """ Fires a Gaussian distributed packet of n spikes with given spread **Initialised as:** :: PulsePacket(t,n,sigma[,clock]) with arguments: ``t`` The mean firing time ``n`` The number of spikes in the packet ``sigma`` The standard deviation of the firing times. 
``clock`` The clock to use (omit to use default or local clock) **Methods** This class is derived from :class:`SpikeGeneratorGroup` and has all its methods as well as one additional method: .. method:: generate(t,n,sigma) Change the parameters and/or generate a new pulse packet. """ @check_units(t=second, n=1, sigma=second) def __init__(self, t, n, sigma, clock=None): self.clock = guess_clock(clock) self.generate(t, n, sigma) def reinit(self): super(PulsePacket, self).reinit() self._threshold.reinit() @check_units(t=second, n=1, sigma=second) def generate(self, t, n, sigma): SpikeGeneratorGroup.__init__(self, n, PulsePacketGenerator(t, n, sigma), self.clock) def __repr__(self): return "Pulse packet neuron group" class PoissonGroup(NeuronGroup): ''' A group that generates independent Poisson spike trains. **Initialised as:** :: PoissonGroup(N,rates[,clock]) with arguments: ``N`` The number of neurons in the group ``rates`` A scalar, array or function returning a scalar or array. The array should have the same length as the number of neurons in the group. The function should take one argument ``t`` the current simulation time. ``clock`` The clock which the group will update with, do not specify to use the default clock. ''' def __init__(self, N, rates=0 * hertz, clock=None): ''' Initializes the group. P.rates gives the rates. ''' NeuronGroup.__init__(self, N, model=LazyStateUpdater(), threshold=PoissonThreshold(), clock=clock) if callable(rates): # a function is passed self._variable_rate = True self.rates = rates self._S0[0] = self.rates(self.clock.t) else: self._variable_rate = False self._S[0, :] = rates self._S0[0] = rates self.var_index = {'rate':0} def update(self): if self._variable_rate: self._S[0, :] = self.rates(self.clock.t) NeuronGroup.update(self) class OfflinePoissonGroup(object): # This is weird, there is only an init method def __init__(self, N, rates, T): """ Generates a Poisson group with N spike trains and given rates over the time window [0,T]. """ if isscalar(rates): rates = rates * ones(N) totalrate = sum(rates) isi = exponential(1 / totalrate, T * totalrate * 2) spikes = cumsum(isi) spikes = spikes[spikes <= T] neurons = randint(0, N, len(spikes)) self.spiketimes = zip(neurons, spikes) # Used in PoissonInput below class EmptyGroup(object): def __init__(self, clock): self.clock = clock def get_spikes(self, delay): return None class PoissonInput(Connection): """ Adds a Poisson input to a NeuronGroup. Allows to efficiently simulate a large number of independent Poisson inputs to a NeuronGroup variable, without simulating every synapse individually. The synaptic events are generated randomly during the simulation and are not preloaded and stored in memory (unless record=True is used). All the inputs must target the same variable, have the same frequency and same synaptic weight. You can use as many PoissonInput objects as you want, even targetting a same NeuronGroup. There is the possibility to consider time jitter in the presynaptic spikes, and synaptic unreliability. The inputs can also be recorded if needed. Finally, all neurons from the NeuronGroup receive independent realizations of Poisson spike trains, except if the keyword freeze=True is used, in which case all neurons receive the same Poisson input. 
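    A quick usage sketch (illustrative only; the leaky-integrator model is made up for the example, and the full argument list is documented just below)::

        from brian import *
        group = NeuronGroup(100, model='dv/dt = -v/(10*ms) : volt',
                            threshold=10*mV, reset=0*mV)
        # 1000 independent 10 Hz Poisson inputs, each event adding 0.1 mV to v
        input = PoissonInput(group, N=1000, rate=10*Hz, weight=0.1*mV, state='v')
        run(100*ms)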
**Initialised as:** :: PoissonInput(target[, N][, rate][, weight][, state][, jitter][, reliability][, copies][, record][, freeze]) with arguments: ``target`` The target :class:`NeuronGroup` ``N`` The number of independent Poisson inputs ``rate`` The rate of each Poisson process ``weight`` The synaptic weight ``state`` The name or the index of the synaptic variable of the :class:`NeuronGroup` ``jitter`` is ``None`` by default. There is the possibility to consider ``copies`` presynaptic spikes at each Poisson event, randomly shifted according to an exponential law with parameter ``jitter=taujitter`` (in second). ``reliability`` is ``None`` by default. There is the possibility to consider ``copies`` presynaptic spikes at each Poisson event, where each of these spikes is unreliable, i.e. it occurs with probability ``jitter=alpha`` (between 0 and 1). ``copies`` The number of copies of each Poisson event. This is identical to ``weight=copies*w``, except if ``jitter`` or ``reliability`` are specified. ``record`` ``True`` if the input has to be recorded. In this case, the recorded events are stored in the ``recorded_events`` attribute, as a list of pairs ``(i,t)`` where ``i`` is the neuron index and ``t`` is the event time. ``freeze`` ``True`` if the input must be the same for all neurons of the :class:`NeuronGroup` """ _record = [] def __init__(self, target, N=None, rate=None, weight=None, state=None, jitter=None, reliability=None, copies=1, record=False, freeze=False): self.source = EmptyGroup(target.clock) self.target = target self.N = len(self.target) self.clock = target.clock self.delay = None self.iscompressed = True self.delays = None # delay to wait for the j-th synchronous spike to occur after the last sync event, for target neuron i self.lastevent = -inf * ones(self.N) # time of the last event for target neuron i self.events = [] self.recorded_events = [] self.n = N self.rate = rate self.w = weight self.var = state self._jitter = jitter if jitter is not None: self.delays = zeros((copies, self.N)) self.reliability = reliability self.copies = copies self.record = record self.frozen = freeze if (jitter is not None) and (reliability is not None): raise Exception("Specifying both jitter and reliability is currently not supported.") if isinstance(state, str): # named state variable self.index = self.target.get_var_index(state) else: self.index = state @property def jitter(self): return self._jitter @jitter.setter def jitter(self, value): self._jitter = value if value is not None: self.delays = zeros((self.copies, self.N)) def propagate(self, spikes): i = 0 n = self.n f = self.rate w = self.w var = self.var jitter = self.jitter reliability = self.reliability record = self.record frozen = self.frozen state = self.index if (jitter==None) and (reliability==None): if frozen: rnd = binomial(n=n, p=f * self.clock.dt) self.target._S[state, :] += w * rnd if rnd > 0: self.events.append(self.clock.t) else: rnd = binomial(n=n, p=f * self.clock.dt, size=(self.N)) self.target._S[state, :] += w * rnd ind = nonzero(rnd>0)[0] if record and len(ind)>0: self.recorded_events.append((ind[0], self.clock.t)) elif (jitter is not None): p = self.copies taujitter = jitter if (p > 0) & (f > 0): k = binomial(n=n, p=f * self.clock.dt, size=(self.N)) # number of synchronous events here, for every target neuron syncneurons = (k > 0) # neurons with a syncronous event here self.lastevent[syncneurons] = self.clock.t if taujitter == 0.0: self.delays[:, syncneurons] = zeros((p, sum(syncneurons))) else: self.delays[:, 
syncneurons] = exponential(scale=taujitter, size=(p, sum(syncneurons))) # Delayed spikes occur now lastevent = tile(self.lastevent, (p, 1)) b = (abs(self.clock.t - (lastevent + self.delays)) <= (self.clock.dt / 2) * ones((p, self.N))) # delayed spikes occurring now weff = sum(b, axis=0) * w self.target._S[state, :] += weff elif (reliability is not None): p = self.copies alpha = reliability if (p > 0) & (alpha > 0): weff = w * binomial(n=p, p=alpha) self.target._S[state, :] += weff * binomial(n=n, p=f * self.clock.dt, size=(self.N)) def _test(): import doctest doctest.testmod() if __name__ == "__main__": _test() brian-1.3.1/brian/equations.py000066400000000000000000001235241167451777000163120ustar00rootroot00000000000000# ---------------------------------------------------------------------------------- # Copyright ENS, INRIA, CNRS # Contributors: Romain Brette (brette@di.ens.fr) and Dan Goodman (goodman@di.ens.fr) # # Brian is a computer program whose purpose is to simulate models # of biological neural networks. # # This software is governed by the CeCILL license under French law and # abiding by the rules of distribution of free software. You can use, # modify and/ or redistribute the software under the terms of the CeCILL # license as circulated by CEA, CNRS and INRIA at the following URL # "http://www.cecill.info". # # As a counterpart to the access to the source code and rights to copy, # modify and redistribute granted by the license, users are provided only # with a limited warranty and the software's author, the holder of the # economic rights, and the successive licensors have only limited # liability. # # In this respect, the user's attention is drawn to the risks associated # with loading, using, modifying and/or developing or reproducing the # software by the user in light of its specific status of free software, # that may mean that it is complicated to manipulate, and that also # therefore means that it is reserved for developers and experienced # professionals having in-depth computer knowledge. Users are therefore # encouraged to load and test the software's suitability as regards their # requirements in conditions enabling the security of their systems and/or # data to be ensured and, more generally, to use and operate it in the # same conditions as regards security. # # The fact that you are presently reading this means that you have had # knowledge of the CeCILL license and that you accept its terms. # ---------------------------------------------------------------------------------- # ''' Differential equations for Brian models. ''' #from scipy.weave import blitz from operator import isSequenceType import types from units import * from stdunits import * from inspection import * from scipy import exp from scipy import weave from globalprefs import * import re import inspect import optimiser import warnings import uuid import numpy from numpy import zeros, ones import numpy from log import * from optimiser import * from scipy import optimize import unitsafefunctions import copy try: import sympy use_sympy = True except: warnings.warn('sympy not installed') use_sympy = False __all__ = ['Equations', 'unique_id'] # TODO: write interface for equations.py def unique_id(): """ Returns a unique name (e.g. for internal hidden variables). 
""" return '_' + str(uuid.uuid1().int) class Equations(object): """Container that stores equations from which models can be created Initialised as:: Equations(expr[,level=0[,keywords...]]) with arguments: ``expr`` An expression, which can each be a string representing equations, an :class:`Equations` objects, or a list of strings and :class:`Equations` objects. See below for details of the string format. ``level`` Indicates how many levels back in the stack the namespace for string equations is found, so that e.g. ``level=0`` looks in the namespace of the function where the :class:`Equations` object was created, ``level=1`` would look in the namespace of the function that called the function where the :class:`Equations` object was created, etc. Normally you can just leave this out. ``keywords`` Any sequence of keyword pairs ``key=value`` where the string ``key`` in the string equations will be replaced with ``value`` which can be either a string, value or ``None``, in the latter case a unique name will be generated automatically (but it won't be pretty). Systems of equations can be defined by passing lists of :class:`Equations` to a new :class:`Equations` object, or by adding :class:`Equations` objects together (the usage is similar to that of a Python ``list``). **String equations** String equations can be of any of the following forms: (1) ``dx/dt = f : unit`` (differential equation) (2) ``x = f : unit`` (equation) (3) ``x = y`` (alias) (4) ``x : unit`` (parameter) Here each of ``x`` and ``y`` can be any valid Python variable name, ``f`` can be any valid Python expression, and ``unit`` should be the unit of the corresponding ``x``. You can also include multi-line expressions by appending a ``\`` character at the end of each line which is continued on the next line (following the Python standard), or comments by including a ``#`` symbol. These forms mean: *Differential equation* A differential equation with variable ``x`` which has physical units ``unit``. The variable ``x`` will become one of the state variables of the model. *Equation* An equation defining the meaning of ``x`` can be used for building systems of complicated differential equations. *Alias* The variable ``x`` becomes equivalent to the variable ``y``, useful for connecting two separate systems of equations together. *Parameter* The variable ``x`` will have physical units ``unit`` and will be one of the state variables of the model (but will not evolve dynamically, instead it should be set by the user). .. index:: single: xi pair: xi; noise single: white noise single: gaussian noise single: noise single: noise; gaussian single: noise; white **Noise** String equations can also use the reserved term ``xi`` for a Gaussian white noise with mean 0 and variance 1. **Example usage** :: eqs=Equations(''' dv/dt=(u-v)/tau : volt u=3*v : volt w=v ''') **Details** For more details, see :ref:`moreonequations` in the user manual. 
""" def __init__(self, expr='', level=0, **kwds): # Empty object self._Vm = None # name of variable with membrane potential self._eq_names = [] # equations names self._diffeq_names = [] # differential equations names self._diffeq_names_nonzero = [] # differential equations names self._function = {} # dictionary of functions self._string = {} # dictionary of strings (defining the functions) self._namespace = {} # dictionary of namespaces for the strings (globals,locals) self._alias = {} # aliases (mapping name1 -> name2) self._units = {'t':second} # dictionary of units self._dependencies = {} # dictionary of dependencies (on static equations) self._useweave = get_global_preference('useweave') self._cpp_compiler = get_global_preference('weavecompiler') self._extra_compile_args = ['-O3'] if self._cpp_compiler == 'gcc': self._extra_compile_args += get_global_preference('gcc_options') # ['-march=native', '-ffast-math'] self._frozen = False # True if all units and parameters are gone self._prepared = False if not isinstance(expr, str): # assume it is a sequence of Equations objects for eqs in expr: if not isinstance(eqs, Equations): eqs = Equations(eqs, level=level + 1) self += eqs elif expr != '': # Check keyword arguments param_dict = {} for name, value in kwds.iteritems(): if value is None: # name is not important: choose unique name value = unique_id() if isinstance(value, str): # variable name substitution expr = re.sub('\\b' + name + '\\b', value, expr) expr = re.sub('\\bd' + name + '\\b', 'd' + value, expr) # derivative else: param_dict[name] = value if kwds == {}: # weird: changed from param_dict on 18/06/08 self.parse_string_equations(expr, level=level + 1) else: self.parse_string_equations(expr, namespace=param_dict, level=level + 1) """ ----------------------------------------------------------------------- PARSING AND BUILDING NAMESPACES ----------------------------------------------------------------------- """ def parse_string_equations(self, eqns, level=1, namespace=None): """ Parses a string defining equations and builds an Equations object. Uses the namespace in the given level of the stack. 
""" diffeq_pattern = re.compile('\s*d(\w+)\s*/\s*dt\s*=\s*(.+?)\s*:\s*(.*)') eq_pattern = re.compile('\s*(\w+)\s*=\s*(.+?)\s*:\s*(.*)') alias_pattern = re.compile('\s*(\w+)\s*=\s*(\w+)\s*$') param_pattern = re.compile('\s*(\w+)\s*:\s*(.*)') empty_pattern = re.compile('\s*$') patterns = [diffeq_pattern, eq_pattern, alias_pattern, param_pattern, empty_pattern] # Merge multi-line statements eqns = re.sub('\\\s*?\n', ' ', eqns) # Namespace of the functions ns_global, ns_local = namespace, namespace if namespace is None: frame = inspect.stack()[level + 1][0] ns_global, ns_local = frame.f_globals, frame.f_locals #print frame.f_code.co_filename #useful for debugging which file the namespace came from for line in eqns.splitlines(): line = re.sub('#.*', '', line) # remove comments result = None for pattern in patterns: result = pattern.match(line) if result: break if result == None: raise TypeError, "Invalid equation string: " + line if pattern == eq_pattern: name, eq, unit = result.groups() self.add_eq(name, eq, unit, ns_global, ns_local) elif pattern == diffeq_pattern: name, eq, unit = result.groups() self.add_diffeq(name, eq, unit, ns_global, ns_local) elif pattern == alias_pattern: name1, name2 = result.groups() self.add_alias(name1, name2) elif pattern == param_pattern: name, unit = result.groups() self.add_param(name, unit, ns_global, ns_local) def add_eq(self, name, eq, unit, global_namespace={}, local_namespace={}): """ Inserts an equation. name = variable name eq = string definition unit = unit of the variable (possibly a string) *_namespace = namespaces associated to the string """ # Find external objects vars = list(get_identifiers(eq)) if type(unit) == types.StringType: vars.extend(list(get_identifiers(unit))) self._namespace[name] = {} for var in vars: if var in local_namespace: #local self._namespace[name][var] = local_namespace[var] elif var in global_namespace: #global self._namespace[name][var] = global_namespace[var] elif var in globals(): # typically units self._namespace[name][var] = globals()[var] self._eq_names.append(name) if type(unit) == types.StringType: self._units[name] = eval(unit, self._namespace[name].copy()) else: self._units[name] = unit self._string[name] = eq def add_diffeq(self, name, eq, unit, global_namespace={}, local_namespace={}, nonzero=True): """ Inserts a differential equation. name = variable name eq = string definition unit = unit of the variable (possibly a string) *_namespace = namespaces associated to the string nonzero = False if dx/dt=0 (parameter) """ # Find external objects vars = list(get_identifiers(eq)) if type(unit) == types.StringType: vars.extend(list(get_identifiers(unit))) self._namespace[name] = {} for var in vars: if var in local_namespace: #local self._namespace[name][var] = local_namespace[var] elif var in global_namespace: #global self._namespace[name][var] = global_namespace[var] elif var in globals(): # typically units self._namespace[name][var] = globals()[var] self._diffeq_names.append(name) if type(unit) == types.StringType: self._units[name] = eval(unit, self._namespace[name].copy()) else: self._units[name] = unit self._string[name] = eq if nonzero: self._diffeq_names_nonzero.append(name) def add_alias(self, name1, name2): """ Inserts an alias. name1 = new name name2 = old name """ self._alias[name1] = name2 # TODO: what if name2 is not defined yet? self.add_eq(name1, name2, self._units[name2]) def add_param(self, name, unit, global_namespace={}, local_namespace={}): """ Inserts a parameter. 
name = variable name eq = string definition unit = unit of the variable (possibly a string) *_namespace = namespaces associated to the string """ if isinstance(unit, Quantity): unit = scalar_representation(unit) self.add_diffeq(name, '0*' + unit + '/second', unit, global_namespace, local_namespace, nonzero=False) """ ----------------------------------------------------------------------- FINALISATION ----------------------------------------------------------------------- """ def prepare(self, check_units=True): ''' Do a number of checks (units) and preparation of the object. ''' if self._prepared: return # Let Vm be the first differential equation vm_name = self.get_Vm() if vm_name: # TODO: INFO logging i = self._diffeq_names.index(vm_name) self._diffeq_names[0], self._diffeq_names[i] = self._diffeq_names[i], self._diffeq_names[0] else: pass # TODO: WARNING log that a potential problem has occurred here? # Clean namespace (avoids conflicts between variables and external variables) self.clean_namespace() # Compile strings to functions self.compile_functions() # Check units if check_units: self.check_units() # Set the update order of (static) variables self.set_eq_order() # Replace static variables by their value in differential equations self.substitute_eq() self.compile_functions() # Check free variables free_vars = self.free_variables() if free_vars != []: log_info('brian.equations', 'Free variables: ' + str(free_vars)) self._prepared = True def get_Vm(self): ''' Finds the variable that is most likely to be the membrane potential. ''' if self._Vm: return self._Vm vm_names = ['v', 'V', 'vm', 'Vm'] guesses = [var for var in self._diffeq_names if var in vm_names] if len(guesses) == 1: # Unambiguous return guesses[0] else: # Ambiguous or not found return None def clean_namespace(self): ''' Removes all variable names from namespaces ''' all_variables = self._eq_names + self._diffeq_names + self._alias.keys() + ['t'] for name in self._namespace: for var in all_variables: if var in self._namespace[name]: log_warn('brian.equations', 'Equation variable ' + var + ' also exists in the namespace') del self._namespace[name][var] def compile_functions(self, freeze=False): """ Compile all functions defined as strings. If freeze is True, all external parameters and units are replaced by their value. ALL FUNCTIONS MUST HAVE STRINGS. """ all_variables = self._eq_names + self._diffeq_names + self._alias.keys() + ['t'] # Check if freezable freeze = freeze and all([optimiser.freeze(expr, all_variables, self._namespace[name])\ for name, expr in self._string.iteritems()]) self._frozen = freeze # Compile strings to functions for name, expr in self._string.iteritems(): namespace = self._namespace[name] # name space of the function # Find variables vars = [var for var in get_identifiers(expr) if var in all_variables] if freeze: expr = optimiser.freeze(expr, all_variables, namespace) #self._string[name]=expr # should we? #namespace={} s = "lambda " + ','.join(vars) + ":" + expr self._function[name] = eval(s, namespace) def check_units(self): ''' Checks the units of the differential equations, using the units of x. dx_i/dt must have units of x_i / time. 
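    For example (added illustration, not in the original docstring), the
    following raises a ``DimensionMismatchError`` because the right hand side
    has units of volt rather than volt/second::

        eqs = Equations('dv/dt = -v : volt')
        eqs.prepare()       # check_units() is called during preparation

    whereas ``dv/dt = -v/(10*ms) : volt`` passes the check.
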
''' self.set_eq_order() # Better: replace xi in the string, or in the namespace try: for var in self._eq_names: f = self._function[var] old_func_globals = copy.copy(f.func_globals) f.func_globals['xi'] = 0 * second ** -.5 # Noise f.func_globals.update(namespace_replace_quantity_with_pure(f.func_globals)) units = namespace_replace_quantity_with_pure(self._units) self.apply(var, units) + self._units[var] # Check that the two terms have the same dimension f.func_globals.update(old_func_globals) for var in self._diffeq_names: f = self._function[var] old_func_globals = copy.copy(f.func_globals) f.func_globals['xi'] = 0 * second ** -.5 # Noise f.func_globals.update(namespace_replace_quantity_with_pure(f.func_globals)) units = namespace_replace_quantity_with_pure(self._units) self.apply(var, units) + (self._units[var] / second) # Check that the two terms have the same dimension f.func_globals.update(old_func_globals) except DimensionMismatchError, inst: raise DimensionMismatchError("The differential equation of " + var + " is not homogeneous", *inst._dims) except: warnings.warn("Unexpected exception in checking units of " + var) raise def set_eq_order(self): ''' Computes the internal depency graph of static variables and deduces the update order. Sets the list of dependencies of dynamic variables on static variables. This is called by check_units() ''' if len(self._eq_names) > 0: # Internal dependency dictionary dependency = {} for key in self._eq_names: f = self._function[key] dependency[key] = [var for var in f.func_code.co_varnames if var in self._eq_names] # Sets the order staticvars_list = [] no_dep = None while (len(staticvars_list) < len(self._eq_names)) and (no_dep != []): no_dep = [key for key, value in dependency.iteritems() if value == []] staticvars_list += no_dep # Clear dependency list for key in no_dep: del dependency[key] for key, value in dependency.iteritems(): dependency[key] = [var for var in value if not(var in staticvars_list)] if no_dep == []: # The dependency graph has cycles! raise ReferenceError, "The static variables are referring to each other" else: staticvars_list = [] # Calculate dependencies on static variables self._dependencies = {} for key in staticvars_list: self._dependencies[key] = [] for var in self._function[key].func_code.co_varnames: if var in self._eq_names: self._dependencies[key] += [var] + self._dependencies[var] for key in self._diffeq_names: f = self._function[key] self._dependencies[key] = [] for var in f.func_code.co_varnames: if var in self._eq_names: self._dependencies[key] += [var] + self._dependencies[var] # Sort the dependency lists for key in self._dependencies: staticdep = [(staticvars_list.index(var), var) for var in self._dependencies[key]] staticdep.sort() self._dependencies[key] = [x[1] for x in staticdep] # Update _eq self._eq_names = staticvars_list def substitute_eq(self, name=None): """ Replaces the static variable 'name' by its value in differential equations. If None: substitute all static variables. """ if name is None: for var in self._eq_names[-1::-1]: # reverse order self.substitute_eq(var) else: self.add_prefix_namespace(name) #print name for var in self._diffeq_names_nonzero: # String self._string[var] = re.sub("\\b" + name + "\\b", '(' + self._string[name] + ')', self._string[var]) # Namespace self._namespace[var].update(self._namespace[name]) #print self def add_prefix_namespace(self, name): """ Make the variables in the namespace associated to variable name specific to that variable by inserting the prefix name_. 
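    (Added illustration, not in the original docstring: together with
    ``substitute_eq`` above, this is what lets a static equation be folded
    into the differential equations.  Roughly, for::

        dv/dt = -w/tau : volt
        w = 2*v : volt

    the string of ``dv/dt`` becomes ``-(2*v)/tau`` after substitution, and any
    external names from ``w``'s namespace are renamed with the prefix ``w_``
    by this method to avoid collisions.)
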
""" vars = self._namespace[name].keys() untransformed_funcs = set(getattr(unitsafefunctions, v) for v in unitsafefunctions.quantity_versions) untransformed_funcs.update(set([numpy.clip])) for var in vars: v = self._namespace[name][var] addprefix = True if isinstance(v, numpy.ufunc): addprefix = False try: if v in untransformed_funcs: addprefix = False except TypeError: #unhashable types pass if addprefix: # String self._string[name] = re.sub("\\b" + var + "\\b", name + '_' + var, self._string[name]) # Namespace self._namespace[name][name + '_' + var] = self._namespace[name][var] del self._namespace[name][var] def free_variables(self): """ Returns the list of free variables (i.e., which are not defined within the equation string). """ all_variables = self._eq_names + self._diffeq_names + self._alias.keys() + ['t'] free_vars = [] for expr in self._string.itervalues(): free_vars += [name for name in get_identifiers(expr) if name not in all_variables] return list(set(free_vars)) """ ----------------------------------------------------------------------- CALCULATING RHS OF DIFF EQUATIONS ----------------------------------------------------------------------- """ def apply(self, state, vardict): ''' Calculates self._function[state] with arguments in vardict and static variables. The dictionary is filled with the required static variables. ''' f = self._function[state] # Calculate static variables for var in self._dependencies[state]: # could add something like: if var not in vardict: this would allow you to override the dependencies if you wanted to - worth doing? vardict[var] = call_with_dict(self._function[var], vardict) return f(*[vardict[var] for var in f.func_code.co_varnames]) """ ----------------------------------------------------------------------- EQUATION INSPECTION ----------------------------------------------------------------------- """ def is_stochastic(self, var=None): ''' Returns True if the equation for var is stochastic, or if all equations are stochastic (var=None). ''' if var: return 'xi' in get_identifiers(self._string[var]) else: return any([self.is_stochastic(name) for name in self._diffeq_names_nonzero]) def is_time_dependent(self, var=None): ''' Returns True if the equation for var is time dependent, or if all equations are time dependent (var=None). ''' if var: return 't' in get_identifiers(self._string[var]) else: return any([self.is_time_dependent(name) for name in self._diffeq_names_nonzero]) # return any([self.is_time_dependent(name) for name in self._diffeq_names_nonzero+self._eq_names]) def is_linear(self): ''' Returns True if all equations are linear. If the equations are time dependent, then returns False. If the equations depend on external functions, then returns False. ''' if self.is_time_dependent(): return False for f in self._namespace.iterkeys(): if any([type(key) == types.FunctionType for key in self._namespace[f].itervalues()]): return False return all([is_affine(f) for f in self._function.itervalues()]) def is_conditionally_linear(self): ''' Returns True if the differential equations are linear with respect to the state variable. ''' # Equations have to be prepared for it to work. 
for var in self._diffeq_names: S = self._units.copy() S[var] = AffineFunction() try: self.apply(var, S) except: return False return True """ ----------------------------------------------------------------------- NUMERICAL INTEGRATION (to be replaced by code generation) ----------------------------------------------------------------------- """ def forward_euler(self, S, dt): ''' Updates the value of the state variables in dictionary S with the forward Euler algorithm over step dt. ''' # Calculate all static variables (or do that after?) #for var in self._eq_names: # S[var]=call_with_dict(self._function[var],S) # Calculate derivatives buffer = {} for varname in self._diffeq_names_nonzero: f = self._function[varname] buffer[varname] = f(*[S[var] for var in f.func_code.co_varnames]) # Update variables for var in self._diffeq_names_nonzero: S[var] += dt * buffer[var] def forward_euler_code_string(self): ''' Generates Python code for a forward Euler step. ''' # TODO: check if it can really be frozen # TODO: change /a to *(1/a) with precalculation (use parser) all_variables = self._eq_names + self._diffeq_names + self._alias.keys() + ['t'] # nonzero? insert dt? vars_tmp = [name + '__tmp' for name in self._diffeq_names] lines = ','.join(self._diffeq_names) + '=P._S\n' lines += ','.join(vars_tmp) + '=P._dS\n' for name in self._diffeq_names_nonzero: namespace = self._namespace[name] expr = optimiser.freeze(self._string[name], all_variables, namespace) lines += name + '__tmp[:]=' + expr + '\n' lines += 'P._S+=dt*P._dS\n' #print lines return lines # Return a function f(P) or a namespace (exec code in namespace) # 1st option: include directly in neurongroup._state_updater (good?) def forward_euler_code(self): ''' Generates Python code for a forward Euler step. ''' # TODO: check if it can really be frozen # TODO: change /a to *(1/a) with precalculation (use parser) all_variables = self._eq_names + self._diffeq_names + self._alias.keys() + ['t'] # nonzero? insert dt? vars_tmp = [name + '__tmp' for name in self._diffeq_names] lines = ','.join(self._diffeq_names) + '=P._S\n' lines += ','.join(vars_tmp) + '=P._dS\n' for name in self._diffeq_names_nonzero: namespace = self._namespace[name] expr = optimiser.freeze(self._string[name], all_variables, namespace) lines += name + '__tmp[:]=' + expr + '\n' lines += 'P._S+=dt*P._dS\n' #print lines return compile(lines, 'Euler update code', 'exec') # Return a function f(P) or a namespace (exec code in namespace) # 1st option: include directly in neurongroup._state_updater (good?) def Runge_Kutta2(self, S, dt): ''' Updates the value of the state variables in dictionary S with the 2nd order Runge-Kutta algorithm over step dt (midpoint). ''' # Calculate all static variables (or do that after?) #for var in self._eq_names: # S[var]=call_with_dict(self._function[var],S) # Calculate derivatives buffer = {} S_half = S.copy() # Half a step for varname in self._diffeq_names_nonzero: f = self._function[varname] buffer[varname] = f(*[S[var] for var in f.func_code.co_varnames]) # Update variables for var in self._diffeq_names_nonzero: S_half[var] = S[var] + .5 * dt * buffer[var] # Whole step for varname in self._diffeq_names_nonzero: f = self._function[varname] buffer[varname] = f(*[S_half[var] for var in f.func_code.co_varnames]) # Update variables for var in self._diffeq_names_nonzero: S[var] += dt * buffer[var] def exponential_euler(self, S, dt): ''' Updates the value of the state variables in dictionary S with an exponential Euler algorithm over step dt. 
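    (For orientation, not part of the original docstring: on a scalar equation
    dV/dt = -V/tau the two explicit schemes above reduce to::

        # forward_euler()
        V = V + dt * (-V / tau)

        # Runge_Kutta2(), midpoint variant
        V_half = V + 0.5 * dt * (-V / tau)
        V = V + dt * (-V_half / tau)

    while the exponential Euler step described here integrates each variable
    exactly over dt, treating its equation as affine in that variable with the
    other state variables frozen during the step.)
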
Test with is_conditionally_linear first. Same as default integration method in Genesis. Close to the implicit Euler method in Neuron. ''' # Calculate all static variables (BAD: INSERT IT BELOW) #for var in self._eq_names: # S[var]=call_with_dict(self._function[var],S) n = len(S[self._diffeq_names_nonzero[0]]) # Calculate the coefficients of the affine function Z = zeros(n) O = ones(n) A = {} B = {} for varname in self._diffeq_names_nonzero: f = self._function[varname] oldval = S[varname] S[varname] = Z B[varname] = f(*[S[var] for var in f.func_code.co_varnames]).copy() # important if compiled S[varname] = O A[varname] = f(*[S[var] for var in f.func_code.co_varnames]) - B[varname] B[varname] /= A[varname] S[varname] = oldval # Integrate for varname in self._diffeq_names_nonzero: f = self._function[varname] if self._useweave: Bx = B[varname] Ax = A[varname] Sx = S[varname] # Compilation with blitz: we need an approximation because exp is not understood #weave.blitz('Sx[:]=-Bx+(Sx+Bx)*(1.+Ax*dt*(1.+.5*Ax*dt))',check_size=0) code = """ for(int k=0;k<=]?=', line) # lines of the form w = ..., w *= ..., etc. if m: num_persynapse += 1 per_synapse_lines.append(line) else: num_perneuron += 1 if num_persynapse!=0 and not reordering_warning: log_warn('brian.experimental.cstdp', 'STDP operations are being re-ordered, results may be wrong.') reordering_warning = True per_neuron_lines.append(line) return per_neuron_lines, per_synapse_lines per_neuron_pre, per_synapse_pre = splitcode(pre) per_neuron_post, per_synapse_post = splitcode(post) all_vars = vars_pre + vars_post + ['w'] per_neuron_pre = [c_single_statement(freeze(line, all_vars, pre_namespace)) for line in per_neuron_pre] per_neuron_post = [c_single_statement(freeze(line, all_vars, post_namespace)) for line in per_neuron_post] per_synapse_pre = [c_single_statement(freeze(line, all_vars, pre_namespace)) for line in per_synapse_pre] per_synapse_post = [c_single_statement(freeze(line, all_vars, post_namespace)) for line in per_synapse_post] per_neuron_pre = '\n'.join(per_neuron_pre) per_neuron_post = '\n'.join(per_neuron_post) per_synapse_pre = '\n'.join(per_synapse_pre) per_synapse_post = '\n'.join(per_synapse_post) # Neuron groups G_pre = NeuronGroup(len(C.source), model=sep_pre, clock=self.clock) G_post = NeuronGroup(len(C.target), model=sep_post, clock=self.clock) G_pre._S[:] = 0 G_post._S[:] = 0 self.pre_group = G_pre self.post_group = G_post var_group = {} for i, v in enumerate(vars_pre): var_group[v] = G_pre for i, v in enumerate(vars_post): var_group[v] = G_post self.var_group = var_group self.contained_objects += [G_pre, G_post] vars_pre_ind = {} for i, var in enumerate(vars_pre): vars_pre_ind[var] = i vars_post_ind = {} for i, var in enumerate(vars_post): vars_post_ind[var] = i prevars_dict = dict((k, G_pre.state(k)) for k in vars_pre) postvars_dict = dict((k, G_post.state(k)) for k in vars_post) clipcode = '' if isfinite(wmin): clipcode += 'if(w<%wmin%) w = %wmin%;\n'.replace('%wmin%', repr(float(wmin))) if isfinite(wmax): clipcode += 'if(w>%wmax%) w = %wmax%;\n'.replace('%wmax%', repr(float(wmax))) if not isinstance(C, DelayConnection): precode = iterate_over_spikes('_j', '_spikes', (load_required_variables('_j', prevars_dict), transform_code(per_neuron_pre), iterate_over_row('_k', 'w', C.W, '_j', (load_required_variables('_k', postvars_dict), transform_code(per_synapse_pre), ConnectionCode(clipcode))))) postcode = iterate_over_spikes('_j', '_spikes', (load_required_variables('_j', postvars_dict), transform_code(per_neuron_post), 
iterate_over_col('_i', 'w', C.W, '_j', (load_required_variables('_i', prevars_dict), transform_code(per_synapse_post), ConnectionCode(clipcode))))) log_debug('brian.experimental.c_stdp', 'CSTDP Pre code:\n' + str(precode)) log_debug('brian.experimental.c_stdp', 'CSTDP Post code:\n' + str(postcode)) connection_delay = C.delay * C.source.clock.dt if (delay_pre is None) and (delay_post is None): # same delays as the Connnection C delay_pre = connection_delay delay_post = 0 * ms elif delay_pre is None: delay_pre = connection_delay - delay_post if delay_pre < 0 * ms: raise AttributeError, "Postsynaptic delay is too large" elif delay_post is None: delay_post = connection_delay - delay_pre if delay_post < 0 * ms: raise AttributeError, "Postsynaptic delay is too large" # create forward and backward Connection objects or SpikeMonitor objects pre_updater = SpikeMonitor(C.source, function=precode, delay=delay_pre) post_updater = SpikeMonitor(C.target, function=postcode, delay=delay_post) updaters = [pre_updater, post_updater] self.contained_objects += [pre_updater, post_updater] else: if delay_pre is not None or delay_post is not None: raise ValueError("Must use delay_pre=delay_post=None for the moment.") max_delay = C._max_delay * C.target.clock.dt # Ensure that the source and target neuron spikes are kept for at least the # DelayConnection's maximum delay C.source.set_max_delay(max_delay) C.target.set_max_delay(max_delay) self.G_pre_monitors = {} self.G_post_monitors = {} self.G_pre_monitors.update(((var, RecentStateMonitor(G_pre, vars_pre_ind[var], duration=(C._max_delay + 1) * C.target.clock.dt, clock=G_pre.clock)) for var in vars_pre)) self.G_post_monitors.update(((var, RecentStateMonitor(G_post, vars_post_ind[var], duration=(C._max_delay + 1) * C.target.clock.dt, clock=G_post.clock)) for var in vars_post)) self.contained_objects += self.G_pre_monitors.values() self.contained_objects += self.G_post_monitors.values() prevars_dict_delayed = dict((k, self.G_pre_monitors[k]) for k in prevars_dict.keys()) postvars_dict_delayed = dict((k, self.G_post_monitors[k]) for k in postvars_dict.keys()) precode_immediate = iterate_over_spikes('_j', '_spikes', (load_required_variables('_j', prevars_dict), transform_code(per_neuron_pre))) precode_delayed = iterate_over_spikes('_j', '_spikes', iterate_over_row('_k', 'w', C.W, '_j', extravars={'_delay':C.delayvec}, code=( ConnectionCode('double _t_past = _max_delay-_delay;', vars={'_max_delay':float(max_delay)}), load_required_variables_pastvalue('_k', '_t_past', postvars_dict_delayed), transform_code(per_synapse_pre), ConnectionCode(clipcode)))) postcode = iterate_over_spikes('_j', '_spikes', (load_required_variables('_j', postvars_dict), transform_code(per_neuron_post), iterate_over_col('_i', 'w', C.W, '_j', extravars={'_delay':C.delayvec}, code=( load_required_variables_pastvalue('_i', '_delay', prevars_dict_delayed), transform_code(per_synapse_post), ConnectionCode(clipcode))))) log_debug('brian.experimental.c_stdp', 'CSTDP Pre code (immediate):\n' + str(precode_immediate)) log_debug('brian.experimental.c_stdp', 'CSTDP Pre code (delayed):\n' + str(precode_delayed)) log_debug('brian.experimental.c_stdp', 'CSTDP Post code:\n' + str(postcode)) pre_updater_immediate = SpikeMonitor(C.source, function=precode_immediate) pre_updater_delayed = SpikeMonitor(C.source, function=precode_delayed, delay=max_delay) post_updater = SpikeMonitor(C.target, function=postcode) updaters = [pre_updater_immediate, pre_updater_delayed, post_updater] self.contained_objects += updaters 
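        # Added note (comment not in the original source): in the plain
        # Connection branch above, the total connection delay is split between
        # the pre- and postsynaptic sides so that
        #     delay_pre + delay_post == C.delay * C.source.clock.dt
        # For example, with a total connection delay of 3*ms and only
        # delay_post=1*ms supplied, the branch
        #     delay_pre = connection_delay - delay_post
        # yields delay_pre = 2*ms; a negative result raises an error.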
    def __call__(self):
        pass

    def __getattr__(self, name):
        if name == 'var_group':
            # this seems mad - the reason is that getattr is only called if the thing hasn't
            # been found using the standard methods of finding attributes, which for var_index
            # should have worked, this is important because the next line looks for var_index
            # and if we haven't got a var_index we don't want to get stuck in an infinite
            # loop
            raise AttributeError
        if not hasattr(self, 'var_group'):
            # only provide lookup of variable names if we have some variable names, i.e.
            # if the var_index attribute exists
            raise AttributeError
        G = self.var_group[name]
        return G.state_(name)

    def __setattr__(self, name, val):
        if not hasattr(self, 'var_group') or name not in self.var_group:
            object.__setattr__(self, name, val)
        else:
            G = self.var_group[name]
            G.state_(name)[:] = val

if __name__ == '__main__':
    from time import time
    log_level_debug()

    structure = 'dense'
    delay = False
    if not delay:
        delay = 0 * ms
    max_delay = 5 * ms

    N = 1000
    taum = 10 * ms
    tau_pre = 20 * ms
    tau_post = tau_pre
    Ee = 0 * mV
    vt = -54 * mV
    vr = -60 * mV
    El = -74 * mV
    taue = 5 * ms
    F = 15 * Hz
    gmax = .01
    dA_pre = .01
    dA_post = -dA_pre * tau_pre / tau_post * 1.05

    eqs_neurons = '''
    dv/dt=(ge*(Ee-vr)+El-v)/taum : volt  # the synaptic current is linearized
    dge/dt=-ge/taue : 1
    '''

    input = PoissonGroup(N, rates=F)
    neurons = NeuronGroup(1, model=eqs_neurons, threshold=vt, reset=vr)
    synapses = Connection(input, neurons, 'ge',
                          weight=rand(len(input), len(neurons)) * gmax,
                          structure=structure, delay=delay, max_delay=max_delay)
    neurons.v = vr

    #stdp=ExponentialSTDP(synapses,tau_pre,tau_post,dA_pre,dA_post,wmax=gmax)
    ## Explicit STDP rule
    eqs_stdp = '''
    dA_pre/dt=-A_pre/tau_pre : 1
    dA_post/dt=-A_post/tau_post : 1
    '''
    dA_post *= gmax
    dA_pre *= gmax
    stdp = CSTDP(synapses, eqs=eqs_stdp, pre='A_pre+=dA_pre;w+=A_post',
                 post='A_post+=dA_post;w+=A_pre', wmax=gmax)

    rate = PopulationRateMonitor(neurons)

    start_time = time()
    run(100 * second, report='text')
    print "Simulation time:", time() - start_time

    subplot(311)
    plot(rate.times / second, rate.smooth_rate(100 * ms))
    subplot(312)
    plot(synapses.W.todense() / gmax, '.')
    subplot(313)
    hist(synapses.W.todense() / gmax, 20)
    show()
brian-1.3.1/brian/experimental/codegen/000077500000000000000000000000001167451777000200225ustar00rootroot00000000000000brian-1.3.1/brian/experimental/codegen/__init__.py000066400000000000000000000016511167451777000221360ustar00rootroot00000000000000from codegen import *
from integration_schemes import *
from codegen_python import *
from codegen_c import *
from codegen_gpu import *

if __name__ == '__main__':
    if True:
        from brian import *
        eqs = Equations('''
        dV/dt = -W*V/(10*second) : volt
        dW/dt = -V**2/(1*second) : volt
        ''')
        scheme = exp_euler_scheme
        print 'Equations'
        print '========='
        print eqs
        print 'Scheme'
        print '======'
        for block_specifier, block_code in scheme:
            print block_specifier
            print block_code
        print 'Python code'
        print '==========='
        print PythonCodeGenerator().generate(eqs, scheme)
        print 'C code'
        print '======'
        print CCodeGenerator().generate(eqs, scheme)
        print 'GPU code'
        print '======'
        print GPUCodeGenerator().generate(eqs, scheme)
brian-1.3.1/brian/experimental/codegen/c_support_code.py000066400000000000000000000002561167451777000234070ustar00rootroot00000000000000c_support_code = '''
double clip(double x, double low, double high)
{
    if(x<low) return low;
    if(x>high) return high;
    return x;
}
#define inf INFINITY
'''
brian-1.3.1/brian/experimental/codegen/codegen.py000066400000000000000000000224521167451777000220050ustar00rootroot00000000000000from ...optimiser import freeze from string import Template import re from rewriting import * __all__ = ['CodeGenerator'] class CodeGenerator(object): ''' CodeGenerator base class for generating code in a variety of languages from mathematical expressions Base classes can define their own behaviour, but the following scheme is expected to be fairly standard. The initialiser takes at least one argument ``sympy_rewrite``, which defaults to ``True``, which specifies whether or not expressions should be simplified by passing them through sympy, so that e.g. ``V/(10*0.1)`` gets simplified to ``V*1.0`` (but not to ``V`` for some reason). There are also the following methods:: .. method:: generate(eqs, scheme) Returns code in the target language for the set of equations ``eqs`` according to the integration scheme ``scheme``. The following schemes are included by default: * ``euler_scheme`` * ``rk2_scheme`` * ``exp_euler_scheme`` This method may also call the :meth:`initialisation` and :meth:`finalisation` methods for code to include at the start and end. By default, it will call the :meth:`scheme` method to fill in the code in between. .. method:: initialisation(eqs) Returns initialisation code. .. method:: finalisation(eqs) Returns finalisation code. .. method:: scheme(eqs, scheme) Returns code for the given set of equations integrated according to the given scheme. See below for an explanation of integration schemes. In the process of generating code from the schemes, this method may call any of the following methods: :meth:`single_statement` to convert a single statement into code; :meth:`single_expr` to convert a single expression into code; :meth:`substitute` to substitute different values for specific variable names in given expressions; :meth:`vartype` to get the variable type specifier if necessary for that language. .. method:: single_statement(expr) Convert the given statement into code. By default, this looks for statements of the form ``A = B``, ``A += B``, etc. and applies the method :meth:`single_expr` to ``B``. Classes deriving from the base class typically only need to write a :meth:`single_expr` method and not a :meth:`single_statement` one. .. method:: single_expr(expr) Convert the given expression into code. For example, for Python you can do nothing, but for C, for example the expression ``A**B`` should be converted to ``pow(A, B)``, and this is done using sympy replacement. By default, this method will simplify the expression using sympy if the code generator was initialised with ``sympy_rewrite=True``. .. method:: substitute(var_expr, substitutions) Makes the given substitutions into the expression. ``substitutions`` should be a dictionary with keys the names of the variables to substitute, and values the value to substitute for that expression. By default these values are passed to the :meth:`single_substitute` method to check if there are any language specific things that need to be done to it, for example something substituting ``1`` should be replaced by ``ones(num_neurons)`` for Python which is vectorised. Typically this method shouldn't need to be rewritten for derived classes, whereas :meth:`single_substitute` might. .. method:: single_substitute(s) Replaces ``s``, a substitution value from :meth:`substitute`, with a language specific version if necessary. Should return a string. 
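    A rough illustration (added for clarity, not part of the original
    docstring; it assumes sympy is installed and the exact output may differ
    slightly between back ends and versions)::

        gen = CCodeGenerator()
        gen.substitute('V + W', {'V': 'V__half'})   # -> 'V__half + W'
        gen.single_statement('V += -V/(10*0.001)')  # RHS is passed to single_expr
        gen.single_expr('A**B')                     # C back end: roughly 'pow(A, B)'
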
**Schemes** A scheme consists of a sequence of pairs ``(block_specifier, block_code)``. The ``block_specifier`` is currently unused, the schemes available at the moment all use ``('foreachvar', 'all')`` in this space - other specifiers are possible, e.g. ``'all_nonzero'`` instead of ``'all'``. ``block_code`` is a multi-line template string expressing that stage of the scheme (templating explained below). For each specifier, code pair, a block of code is generated in that order - multiple pairs can be used for separating stages in an integration scheme (i.e. do something for all variables, then do something else for all variables, etc.). Templating has two features, the first is standard Python templating replacements, e.g. the template string ``'${var}__buf'`` would be replaced by ``'x__buf'`` where the Python variable ``var='x'``. The available variables are: ``var`` The current variable in the loop over each variable. ``var_expr`` The right hand side of the differential equation, f(x) for the equation dx/dt=f(x). ``vartype`` The data type (or ``''`` if none). Should be used at the start of the statement if the statement is declaring a new variable. The other feature of templating is that expressions of the form ``'@func(args)'`` will call the method ``func`` of the code generator with the given ``args`` and be replaced by that expression. Typically, this is used for the :meth:`substitute` method. Example scheme:: rk2_scheme = [ (('foreachvar', 'all'), """ $vartype ${var}__buf = $var_expr $vartype ${var}__half = $var+dt*${var}__buf """), (('foreachvar', 'all'), """ ${var}__buf = @substitute(var_expr, {var:var+'__buf'}) $var += dt*${var}__buf """) ] ''' def __init__(self, sympy_rewrite=True): self.sympy_rewrite = sympy_rewrite def single_statement(self, expr): m = re.search(r'[^><=]=', expr) if m: return expr[:m.end()] + ' ' + self.single_expr(expr[m.end():]) return expr def single_expr(self, expr): if self.sympy_rewrite is not False: if self.sympy_rewrite is True: #rewriters = [floatify_numbers] rewriters = [] else: rewriters = self.sympy_rewrite return sympy_rewrite(expr.strip(), rewriters) return expr.strip() def vartype(self): return '' def initialisation(self, eqs): return '' def finalisation(self, eqs): return '' def generate(self, eqs, scheme): code = self.initialisation(eqs) code += self.scheme(eqs, scheme) code += self.finalisation(eqs) return code def scheme(self, eqs, scheme): code = '' all_variables = eqs._eq_names + eqs._diffeq_names + eqs._alias.keys() + ['t'] vartype = self.vartype() for block_specifier, block_code in scheme: # for the moment, processing of block_specifier is very crude if block_specifier == ('foreachvar', 'all'): vars_to_use = eqs._diffeq_names elif block_specifier == ('foreachvar', 'nonzero'): vars_to_use = eqs._diffeq_names_nonzero for line in block_code.split('\n'): line = line.strip() if line: origline = line for var in vars_to_use: vars = eqs._diffeq_names line = origline namespace = eqs._namespace[var] var_expr = freeze(eqs._string[var], all_variables, namespace) while 1: m = re.search(r'\@(\w+)\(', line) if not m: break methname = m.group(1) start, end = m.span() numopen = 1 for i in xrange(end, len(line)): if line[i] == '(': numopen += 1 if line[i] == ')': numopen -= 1 if numopen == 0: break if numopen != 0: raise SyntaxError('Parentheses unmatching.') args = line[start + 1:i + 1] #print args exec 'line = line[:start]+self.' 
+ args + '+line[i+1:]' substitutions = {'vartype':vartype, 'var':var, 'var_expr':var_expr} t = Template(line) code += self.single_statement(t.substitute(**substitutions)) + '\n' return code def single_substitute(self, s): return str(s) def substitute(self, var_expr, substitutions): for var, replace_var in substitutions.iteritems(): var_expr = re.sub(r'\b' + var + r'\b', self.single_substitute(replace_var), var_expr) return var_expr brian-1.3.1/brian/experimental/codegen/codegen_c.py000066400000000000000000000025361167451777000223100ustar00rootroot00000000000000from codegen import * from rewriting import * __all__ = ['CCodeGenerator'] class CCodeGenerator(CodeGenerator): def __init__(self, dtype='double', sympy_rewrite=True): CodeGenerator.__init__(self, sympy_rewrite=sympy_rewrite) self._dtype = dtype def vartype(self): return self._dtype def initialisation(self, eqs): vartype = self.vartype() code = '' for j, name in enumerate(eqs._diffeq_names): code += vartype + ' *' + name + '__Sbase = _S+' + str(j) + '*num_neurons;\n' return code def generate(self, eqs, scheme): vartype = self.vartype() code = self.initialisation(eqs) code += 'for(int _i=0;_i<=]=', expr) if m: return expr[:m.end()] + ' ' + single_expr(expr[m.end():]) return expr def c_single_expr(expr): return rewrite_to_c_expression(single_expr(expr.strip())).strip() def c_single_statement(expr): return single_statement(expr, single_expr=c_single_expr) + ';' python_single_expr = single_expr python_single_statement = single_statement gpu_single_expr = c_single_expr gpu_single_statement = c_single_statement brian-1.3.1/brian/experimental/codegen/integration_schemes.py000066400000000000000000000021401167451777000244230ustar00rootroot00000000000000__all__ = ['euler_scheme', 'rk2_scheme', 'exp_euler_scheme'] euler_scheme = [ (('foreachvar', 'nonzero'), ''' $vartype ${var}__tmp = $var_expr '''), (('foreachvar', 'nonzero'), ''' $var += ${var}__tmp*dt ''') ] rk2_scheme = [ (('foreachvar', 'all'), ''' $vartype ${var}__buf = $var_expr $vartype ${var}__half = (.5*dt)*${var}__buf ${var}__half += $var '''), (('foreachvar', 'nonzero'), ''' ${var}__buf = @substitute(var_expr, dict((var, var+'__half') for var in vars)) $var += dt*${var}__buf ''') ] exp_euler_scheme = [ (('foreachvar', 'nonzero'), ''' $vartype ${var}__B = @substitute(var_expr, {var:0}) $vartype ${var}__A = @substitute(var_expr, {var:1}) ${var}__A -= ${var}__B ${var}__B /= ${var}__A ${var}__A *= dt '''), (('foreachvar', 'nonzero'), ''' $var += ${var}__B $var *= exp(${var}__A) $var -= ${var}__B ''') ] brian-1.3.1/brian/experimental/codegen/known_issues.txt000066400000000000000000000002721167451777000233130ustar00rootroot00000000000000KNOWN ISSUES (roughly in order of importance): TODO: Python functions can't be used in Equations (allow for specification of C code somehow? 
workaround with network operation?)brian-1.3.1/brian/experimental/codegen/reset.py000066400000000000000000000102161167451777000215160ustar00rootroot00000000000000import re from rewriting import * from ...inspection import namespace from ...optimiser import freeze from ...reset import Reset from ...equations import Equations from ...globalprefs import get_global_preference from ...log import log_warn from expressions import * from scipy import weave from c_support_code import * __all__ = ['generate_c_reset', 'generate_python_reset', 'CReset', 'PythonReset'] def generate_c_reset(eqs, inputcode, vartype='double', level=0, ns=None): if ns is None: ns, unknowns = namespace(inputcode, level=level + 1, return_unknowns=True) all_variables = eqs._eq_names + eqs._diffeq_names + eqs._alias.keys() + ['t'] code = '' for j, name in enumerate(eqs._diffeq_names): code += vartype + ' *' + name + '__Sbase = _S+' + str(j) + '*_num_neurons;\n' code += 'for(int _i=0;_i<_nspikes;_i++){\n' code += ' long _j = _spikes[_i];\n' for j, name in enumerate(eqs._diffeq_names): code += ' ' + vartype + ' &' + name + ' = ' + name + '__Sbase[_j];\n' for line in inputcode.split('\n'): line = line.strip() if line: line = freeze(line, all_variables, ns) line = c_single_statement(line) code += ' ' + line + '\n' code += '}\n' return code def generate_python_reset(eqs, inputcode, level=0, ns=None): if ns is None: ns, unknowns = namespace(inputcode, level=level + 1, return_unknowns=True) all_variables = eqs._eq_names + eqs._diffeq_names + eqs._alias.keys() + ['t', '_spikes'] code = '' for var in eqs._diffeq_names: inputcode = re.sub("\\b" + var + "\\b", var + '[_spikes]', inputcode) for line in inputcode.split('\n'): line = line.strip() if line: line = freeze(line, all_variables, ns) line = python_single_statement(line) code += line + '\n' return code class PythonReset(Reset): def __init__(self, inputcode, level=0): self._ns, unknowns = namespace(inputcode, level=level + 1, return_unknowns=True) self._inputcode = inputcode self._prepared = False def __call__(self, P): ns = self._ns if not self._prepared: vars = [var for var in P.var_index if isinstance(var, str)] eqs = P._eqs outputcode = generate_python_reset(eqs, self._inputcode, ns=ns) self._compiled_code = compile(outputcode, "PythonReset", "exec") for var in vars: ns[var] = P.state(var) self._prepared = True cc = self._compiled_code spikes = P.LS.lastspikes() ns['_spikes'] = spikes exec cc in ns class CReset(Reset): def __init__(self, inputcode, level=0): self._ns, unknowns = namespace(inputcode, level=level + 1, return_unknowns=True) self._inputcode = inputcode self._prepared = False self._weave_compiler = get_global_preference('weavecompiler') self._extra_compile_args = ['-O3'] if self._weave_compiler == 'gcc': self._extra_compile_args += get_global_preference('gcc_options') # ['-march=native', '-ffast-math'] def __call__(self, P): if not self._prepared: vars = [var for var in P.var_index if isinstance(var, str)] eqs = P._eqs self._outputcode = generate_c_reset(eqs, self._inputcode, ns=self._ns) _spikes = P.LS.lastspikes() dt = P.clock._dt t = P.clock._t _nspikes = len(_spikes) _S = P._S _num_neurons = len(P) try: weave.inline(self._outputcode, ['_S', '_nspikes', 'dt', 't', '_spikes', '_num_neurons'], c_support_code=c_support_code, compiler=self._weave_compiler, extra_compile_args=self._extra_compile_args) except: log_warn('brian.experimental.codegen.reset', 'C compilation failed, falling back on Python.') self.__class__ = PythonReset self._prepared = False return 
self.__call__(P) brian-1.3.1/brian/experimental/codegen/rewriting.py000066400000000000000000000107721167451777000224150ustar00rootroot00000000000000try: from sympy.printing.ccode import CCodePrinter from sympy.printing.precedence import precedence import sympy except ImportError: sympy = None CCodePrinter = object from ...optimiser import symbolic_eval def boolean_printer(origop, newop, start=''): def f(self, expr): PREC = precedence(expr) return start + newop.join(self.parenthesize(a, PREC) for a in expr.args) f.__name__ = '_print_' + origop return f class NewCCodePrinter(CCodePrinter): _print_And = boolean_printer('And', '&&') _print_Or = boolean_printer('Or', '||') _print_Not = boolean_printer('Not', '', '!') __all__ = ['rewrite_to_c_expression', 'sympy_rewrite', 'rewrite_pow', 'floatify_numbers'] if sympy is not None: pow = sympy.Function('pow') else: pow = None def rewrite_pow(e): if not len(e.args): return e newargs = tuple(rewrite_pow(a) for a in e.args) if isinstance(e, sympy.Pow) and e.args[1] != -1: return pow(*newargs) else: return e.new(*newargs) def floatify_numbers(e): if not len(e.args): if e.is_number: return sympy.Number(float(e)) return e newargs = tuple(floatify_numbers(a) for a in e.args) return e.new(*newargs) def make_sympy_expressions(eqs): exprs = {} for name in eqs._diffeq_names + eqs._eq_names: exprs[name] = symbolic_eval(eqs._string[name]) return exprs def sympy_rewrite(s, rewriters=None): if rewriters is not None: if callable(rewriters): rewriters = [rewriters] else: rewriters = [] expr = symbolic_eval(s) if not hasattr(expr, 'args'): return str(expr) for f in rewriters: expr = f(expr) return str(expr) def rewrite_to_c_expression(s): if sympy is None: raise ImportError('sympy package required for code generation.') e = symbolic_eval(s) return NewCCodePrinter().doprint(e) def generate_c_expressions(eqs): exprs = make_sympy_expressions(eqs) cexprs = {} for name, expr in exprs.iteritems(): cexprs[name] = str(rewrite_pow(expr)) return cexprs if __name__ == '__main__': from brian import * if True: s = '-V**2/(10*0.001)' print sympy_rewrite(s) print sympy_rewrite(s, rewrite_pow) print sympy_rewrite(s, floatify_numbers) print sympy_rewrite(s, [rewrite_pow, floatify_numbers]) if False: area = 20000 * umetre ** 2 Cm = (1 * ufarad * cm ** -2) * area gl = (5e-5 * siemens * cm ** -2) * area El = -60 * mV EK = -90 * mV ENa = 50 * mV g_na = (100 * msiemens * cm ** -2) * area g_kd = (30 * msiemens * cm ** -2) * area VT = -63 * mV # Time constants taue = 5 * ms taui = 10 * ms # Reversal potentials Ee = 0 * mV Ei = -80 * mV we = 6 * nS # excitatory synaptic weight (voltage) wi = 67 * nS # inhibitory synaptic weight # The model eqs = Equations(''' dv/dt = (gl*(El-v)+ge*(Ee-v)+gi*(Ei-v)-g_na*(m*m*m)*h*(v-ENa)-g_kd*(n*n*n*n)*(v-EK))/Cm : volt dm/dt = alpham*(1-m)-betam*m : 1 dn/dt = alphan*(1-n)-betan*n : 1 dh/dt = alphah*(1-h)-betah*h : 1 dge/dt = -ge*(1./taue) : siemens dgi/dt = -gi*(1./taui) : siemens alpham = 0.32*(mV**-1)*(13*mV-v+VT)/(exp((13*mV-v+VT)/(4*mV))-1.)/ms : Hz betam = 0.28*(mV**-1)*(v-VT-40*mV)/(exp((v-VT-40*mV)/(5*mV))-1)/ms : Hz alphah = 0.128*exp((17*mV-v+VT)/(18*mV))/ms : Hz betah = 4./(1+exp((40*mV-v+VT)/(5*mV)))/ms : Hz alphan = 0.032*(mV**-1)*(15*mV-v+VT)/(exp((15*mV-v+VT)/(5*mV))-1.)/ms : Hz betan = .5*exp((10*mV-v+VT)/(40*mV))/ms : Hz ''') cexprs = generate_c_expressions(eqs) for k, v in cexprs.iteritems(): print k, ':', v if False: taum = 20 * msecond taue = 5 * msecond taui = 10 * msecond Ee = (0. + 60.) * mvolt Ei = (-80. + 60.) 
* mvolt eqs = Equations(''' dv/dt = (-v+ge*(Ee-v)+gi*(Ei-v))*(1./taum) : volt dge/dt = -ge*(1./taue) : 1 dgi/dt = -gi*(1./taui) : 1 ''') cexprs = generate_c_expressions(eqs) for k, v in cexprs.iteritems(): print k, ':', v if False: x = sympy.Symbol('x') y = x ** 2 + 3 * x print y print rewrite_pow(y) brian-1.3.1/brian/experimental/codegen/stateupdaters.py000066400000000000000000000157551167451777000233010ustar00rootroot00000000000000from ...stateupdater import StateUpdater from ...globalprefs import get_global_preference from ...clock import guess_clock from ...log import log_debug, log_warn from codegen_c import * from codegen_python import * from integration_schemes import * import time from scipy import weave import numpy, scipy import re from c_support_code import * try: import numexpr as numexpr except ImportError: numexpr = None __all__ = ['CStateUpdater', 'PythonStateUpdater'] class CStateUpdater(StateUpdater): def __init__(self, eqs, scheme, clock=None, freeze=False): self.clock = guess_clock(clock) self.eqs = eqs self.scheme = scheme self.freeze = freeze self.code_c = CCodeGenerator().generate(eqs, scheme) log_debug('brian.experimental.codegen.stateupdaters', 'C state updater code:\n' + self.code_c) self._weave_compiler = get_global_preference('weavecompiler') self._extra_compile_args = ['-O3'] if self._weave_compiler == 'gcc': self._extra_compile_args += get_global_preference('gcc_options') # ['-march=native', '-ffast-math'] self.namespace = {} code_vars = re.findall(r'\b\w+\b', self.code_c) self._arrays_to_check = [] for varname in code_vars: if varname not in eqs._eq_names + eqs._diffeq_names + eqs._alias.keys() + ['t', 'dt', '_S', 'num_neurons']: # this is kind of a hack, but since we're going to be writing a new # and more sensible Equations module anyway, it can stand for name in eqs._namespace: if varname in eqs._namespace[name]: varval = eqs._namespace[name][varname] if isinstance(varval, numpy.ndarray): self.namespace[varname] = varval self._arrays_to_check.append((varname, varval)) self.code_c = re.sub(r'\b'+varname+r'\b', varname+'[_i]', self.code_c) break def __call__(self, P): if self._arrays_to_check is not None: N = len(P) for name, X in self._arrays_to_check: if len(X)!=N: raise ValueError('Array '+name+' has wrong size ('+str(len(X))+' instead of '+str(N)+')') self._arrays_to_check = None self.namespace['dt'] = P.clock._dt self.namespace['t'] = P.clock._t self.namespace['num_neurons'] = len(P) self.namespace['_S'] = P._S try: weave.inline(self.code_c, self.namespace.keys(),#['_S', 'num_neurons', 'dt', 't'], local_dict = self.namespace, support_code=c_support_code, compiler=self._weave_compiler, extra_compile_args=self._extra_compile_args) except: log_warn('brian.experimental.codegen.stateupdaters', 'C compilation failed, falling back on Python.') self.__class__ = PythonStateUpdater self.__init__(self.eqs, self.scheme, self.clock, self.freeze) self.__call__(P) class PythonStateUpdater(StateUpdater): def __init__(self, eqs, scheme, clock=None, freeze=False): eqs.prepare() self.clock = guess_clock(clock) self.code_python = PythonCodeGenerator().generate(eqs, scheme) self.compiled_code = compile(self.code_python, 'StateUpdater code', 'exec') if False and numexpr is not None: # This only improves things for large N, in which case Python speed # is close to C speed anyway, so less valuable newcode = '' for line in self.code_python.split('\n'): m = re.search(r'(\b\w*\b\s*[^><=]?=\s*)(.*)', line) # lines of the form w = ..., w *= ..., etc. 
if m: if '*' in m.group(2) or '+' in m.group(2) or '/' in m.group(2) or \ '-' in m.group(2) or '**' in m.group(2) or '(' in m.group(2): if '[' not in m.group(2): line = m.group(1) + "_numexpr.evaluate('" + m.group(2) + "')" newcode += line + '\n' self.code_python = newcode self.code_python = 'def _stateupdate(_S, dt, t, num_neurons):\n' + '\n'.join([' ' + line for line in self.code_python.split('\n')]) + '\n' self.namespace = {'_numexpr':numexpr} for varname in self.compiled_code.co_names: if varname not in eqs._eq_names + eqs._diffeq_names + eqs._alias.keys() + ['t', 'dt', '_S', 'num_neurons']: # this is kind of a hack, but since we're going to be writing a new # and more sensible Equations module anyway, it can stand for name in eqs._namespace: if varname in eqs._namespace[name]: self.namespace[varname] = eqs._namespace[name][varname] break if varname not in self.namespace: if hasattr(numpy, varname): self.namespace[varname] = getattr(numpy, varname) log_debug('brian.experimental.codegen.stateupdaters', 'Python state updater code:\n' + self.code_python) exec self.code_python in self.namespace self.state_update_func = self.namespace['_stateupdate'] def __call__(self, P): self.state_update_func(P._S, P.clock._dt, P.clock._t, len(P)) if __name__ == '__main__': from brian import * duration = 1 * second N = 10000 domonitor = False # duration = 100*ms # N = 10 # domonitor = True eqs = Equations(''' #dV/dt = -V*V/(10*ms) : 1 #dV/dt = cos(2*pi*t/(100*ms))/(10*ms) : 1 dV/dt = -W*V/(100*ms) : 1 dW/dt = -W/(100*ms) : 1 #dV/dt = cos(2*pi*t/(100*ms))/(10*ms) : 1 #dW/dt = cos(2*pi*t/(100*ms))/(10*ms) : 1 #dW2/dt = cos(2*pi*t/(100*ms))/(10*ms) : 1 #dV/dt = h/(10*ms) : 1 #h = -V*V : 1 ''') print eqs G = NeuronGroup(N, eqs, compile=True, freeze=True, implicit=True) print G._state_updater.__class__ # su = PythonStateUpdater(eqs, euler_scheme, clock=G.clock) # print 'Python code:' # print su.code_python su = CStateUpdater(eqs, exp_euler_scheme, clock=G.clock) print 'C++ loop code:' print su.code_c G.V = 1 G.W = 0.1 if domonitor: M = StateMonitor(G, 'V', record=True) S = copy(G._S) run(1 * ms) start = time.time() run(duration) print 'Original code:', (time.time() - start) * second if domonitor: M_V = M[0] reinit() G._S[:] = S G._state_updater = su run(1 * ms) start = time.time() run(duration) print 'New code:', (time.time() - start) * second if domonitor: M_V2 = M[0] if domonitor: print amax(abs(M_V - M_V2)) plot(M.times, M_V) plot(M.times, M_V2) show() brian-1.3.1/brian/experimental/codegen/test_hh.py000066400000000000000000000037031167451777000220350ustar00rootroot00000000000000from brian import * from brian.library.ionic_currents import * from brian.experimental.codegen import * import time from scipy import weave N = 1000 record_and_plot = N == 1 El = 10.6 * mV EK = -12 * mV ENa = 120 * mV eqs = MembraneEquation(1 * uF) + leak_current(.3 * msiemens, El) eqs += K_current_HH(36 * msiemens, EK) + Na_current_HH(120 * msiemens, ENa) eqs += Current('I:amp') eqs.prepare() print eqs print '.............................' pycode = PythonCodeGenerator().generate(eqs, exp_euler_scheme) print pycode print '.............................' 
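# Note added for orientation (not in the original test): exp_euler_scheme,
# used for both generators in this file, updates each variable x with
# dx/dt = f(x) (f treated as affine in x) via
#     B = f(0);  A = f(1) - f(0);  B = B/A;  x += B;  x *= exp(A*dt);  x -= B
# A tiny standalone check for dV/dt = (v0 - V)/tau, for which the step is exact:
#
#     from math import exp
#     tau, v0, dt, V = 0.01, 1.0, 0.0001, 0.0
#     f = lambda V: (v0 - V) / tau
#     B = f(0.0)
#     A = f(1.0) - f(0.0)
#     B = B / A
#     V += B; V *= exp(A * dt); V -= B   # equals v0 + (V - v0)*exp(-dt/tau)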
ccode = CCodeGenerator().generate(eqs, exp_euler_scheme) print ccode neuron = NeuronGroup(N, eqs, implicit=True, freeze=True) if record_and_plot: trace = StateMonitor(neuron, 'vm', record=True) neuron.I = 10 * uA _S_python = array(neuron._S) _S = array(neuron._S) ns = {'_S':_S_python, 'exp':exp, 'dt':defaultclock._dt} pycode_comp = compile(pycode, '', 'exec') start = time.time() run(100 * ms) print 'N:', N print 'Brian:', time.time() - start start = time.time() hand_trace_python = [] for T in xrange(int(100 * ms / defaultclock.dt)): exec pycode_comp in ns if record_and_plot: hand_trace_python.append(copy(_S_python[0])) print 'Codegen Python:', time.time() - start start = time.time() hand_trace_c = [] for T in xrange(int(100 * ms / defaultclock.dt)): dt = defaultclock._dt t = T * defaultclock._dt num_neurons = len(neuron) weave.inline(ccode, ['_S', 'num_neurons', 'dt', 't'], compiler='gcc', #type_converters=weave.converters.blitz, extra_compile_args=['-O3', '-march=native', '-ffast-math'])#O2 seems to be faster than O3 here if record_and_plot: hand_trace_c.append(copy(_S[0])) print 'Codegen C:', time.time() - start if record_and_plot: plot(trace[0]) plot(array(hand_trace_python)) plot(array(hand_trace_c)) show() brian-1.3.1/brian/experimental/codegen/test_reset.py000066400000000000000000000013301167451777000225520ustar00rootroot00000000000000from brian import * from brian.experimental.codegen.reset import * eqs = Equations(''' #dV/dt = -V/(100*second) : volt V : volt Vt : volt ''') dVt = 2 * mV reset = ''' V = 0*volt Vt += dVt ''' print '**** C CODE *****' print generate_c_reset(eqs, reset) print '**** PYTHON CODE ****' print generate_python_reset(eqs, reset) GP = NeuronGroup(10, eqs, threshold=1 * volt, reset=PythonReset(reset)) GC = NeuronGroup(10, eqs, threshold=1 * volt, reset=CReset(reset)) GP.V = 0.5 GP.Vt = 0.1 GP.V[[2, 4, 8]] = 2 GC.V = 0.5 GC.Vt = 0.1 GC.V[[2, 4, 8]] = 2 run(1 * ms) print 'GP.V =', GP.V print 'GP.Vt =', GP.Vt print 'GC.V =', GC.V print 'GC.Vt =', GC.Vt brian-1.3.1/brian/experimental/codegen/test_threshold.py000066400000000000000000000013671167451777000234360ustar00rootroot00000000000000from brian import * from brian.experimental.codegen.threshold import * eqs = Equations(''' V : volt Vt : volt ''') threshold = ''' (V>Vt)&(V>Vt) ''' print '**** C CODE *****' print generate_c_threshold(eqs, threshold) # print '**** PYTHON CODE ****' print generate_python_threshold(eqs, threshold) GP = NeuronGroup(10, eqs, threshold=PythonThreshold(threshold), reset=0 * volt) GC = NeuronGroup(10, eqs, threshold=CThreshold(threshold), reset=0 * volt) MP = SpikeMonitor(GP) MC = SpikeMonitor(GC) GP.V[[1, 2, 4, 8]] = 2 GP.Vt = 1 * volt GP.Vt[[2, 8]] = 3 GC.V[[1, 2, 4, 8]] = 2 GC.Vt = 1 * volt GC.Vt[[2, 8]] = 3 run(1 * ms) print GP.V print GP.Vt print MP.spikes print print GC.V print GC.Vt print MC.spikes brian-1.3.1/brian/experimental/codegen/threshold.py000066400000000000000000000101771167451777000223760ustar00rootroot00000000000000import re from rewriting import * from ...inspection import namespace from ...optimiser import freeze from ...threshold import Threshold from ...equations import Equations from ...globalprefs import get_global_preference from ...log import log_warn from expressions import * from scipy import weave from c_support_code import * __all__ = ['generate_c_threshold', 'generate_python_threshold', 'CThreshold', 'PythonThreshold'] def generate_c_threshold(eqs, inputcode, vartype='double', level=0, ns=None): inputcode = inputcode.strip() if ns is None: ns, unknowns = 
namespace(inputcode, level=level + 1, return_unknowns=True) all_variables = eqs._eq_names + eqs._diffeq_names + eqs._alias.keys() + ['t'] code = 'int _numspikes = 0;\n' for j, name in enumerate(eqs._diffeq_names): code += vartype + ' *' + name + '__Sbase = _S+' + str(j) + '*_num_neurons;\n' code += 'for(int _i=0;_i<_num_neurons;_i++){\n' for j, name in enumerate(eqs._diffeq_names): code += ' ' + vartype + ' &' + name + ' = ' + name + '__Sbase[_i];\n' inputcode = freeze(inputcode.strip(), all_variables, ns) inputcode = c_single_expr(inputcode) code += ' if(' + inputcode + ')\n' code += ' _spikes[_numspikes++] = _i;\n' code += '}\n' code += 'return_val = _numspikes;\n' return code def generate_python_threshold(eqs, inputcode, level=0, ns=None): inputcode = inputcode.strip() if ns is None: ns, unknowns = namespace(inputcode, level=level + 1, return_unknowns=True) all_variables = eqs._eq_names + eqs._diffeq_names + eqs._alias.keys() + ['t'] inputcode = inputcode.strip() inputcode = freeze(inputcode, all_variables, ns) inputcode = python_single_expr(inputcode) return inputcode class PythonThreshold(Threshold): def __init__(self, inputcode, level=0): inputcode = inputcode.strip() self._ns, unknowns = namespace(inputcode, level=level + 1, return_unknowns=True) self._inputcode = inputcode self._prepared = False def __call__(self, P): if not self._prepared: ns = self._ns vars = [var for var in P.var_index if isinstance(var, str)] eqs = P._eqs outputcode = generate_python_threshold(eqs, self._inputcode, ns=ns) for var in vars: ns[var] = P.state(var) self._compiled_code = compile(outputcode, "PythonThreshold", "eval") self._prepared = True return eval(self._compiled_code, self._ns).nonzero()[0] class CThreshold(Threshold): def __init__(self, inputcode, level=0): inputcode = inputcode.strip() self._ns, unknowns = namespace(inputcode, level=level + 1, return_unknowns=True) self._inputcode = inputcode self._prepared = False self._weave_compiler = get_global_preference('weavecompiler') self._extra_compile_args = ['-O3'] if self._weave_compiler == 'gcc': self._extra_compile_args += get_global_preference('gcc_options') # ['-march=native', '-ffast-math'] def __call__(self, P): if not self._prepared: vars = [var for var in P.var_index if isinstance(var, str)] eqs = P._eqs self._outputcode = generate_c_threshold(eqs, self._inputcode, ns=self._ns) _spikes = P._spikesarray t = P.clock._t _S = P._S _num_neurons = len(P) try: _numspikes = weave.inline(self._outputcode, ['_S', 't', '_spikes', '_num_neurons'], support_code=c_support_code, compiler=self._weave_compiler, extra_compile_args=self._extra_compile_args) except: log_warn('brian.experimental.codegen.threshold', 'C compilation failed, falling back on Python.') self.__class__ = PythonThreshold self._prepared = False return self.__call__(P) return _spikes[0:_numspikes] brian-1.3.1/brian/experimental/coincidence_detection.py000066400000000000000000000107131167451777000232730ustar00rootroot00000000000000from brian import * from numpy import * __all__ = ['CoincidenceDetectorGroup'] class CoincidenceDetectorConnection(Connection): def propagate(self, spikes): self.target.LSST[spikes] = self.source.clock._t class CoincidenceDetectorStateUpdater(LazyStateUpdater): def __init__(self, allow_multiple, ordered): self.allow_multiple = allow_multiple self.ordered = ordered LazyStateUpdater.__init__(self) def __call__(self, P): lsst = P.LSST i0 = P.indices[0] i1 = P.indices[1] lssti0 = lsst[i0] lssti1 = lsst[i1] firing = abs(lssti0 - lssti1) < P.delta if self.ordered: 
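            # 'ordered' additionally requires that the spike on input 0 is the
            # more recent one, i.e. lssti0 > lssti1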
firing = logical_and(firing, lssti0 > lssti1) firing = logical_and(firing, lssti0 > P.lastfiring) firing = logical_and(firing, lssti1 > P.lastfiring) P._S[0][firing] = 1 if self.allow_multiple: # this allows each spike to be used for multiple coincidences P.lastfiring[firing] = where(lssti0 > lssti1, lssti0, lssti1) - P.clock._dt * 0.5 else: # this allows each spike to be used for multiple coincidences P.lastfiring[firing] = P.clock._t - P.clock._dt * 0.5 class CoincidenceDetectorGroup(NeuronGroup): ''' Perfect pairwise coincidence detector group Initialisation arguments: ``source`` The source group ``N`` The number of coincidence detector neurons ``delta`` The maximum time interval to count as a coincidence ``allow_multiple`` Whether or not a spike can be used for multiple coincidences for a given coincidence detector. Suppose neuron 0 fires at time 1ms and neuron 1 fires at times 2ms and 3ms with window 5ms say. If ``allow_multiple=True`` then the coincidence detector will fire at times 2ms and 3ms, if ``allow_multiple=False`` then it will only fire at time 2ms. ``ordered`` If ``False`` and s, t are the spike times then ``abs(s-t)t`` must also hold. .. method: set_indices Takes two arguments, ``i0`` and ``i1`` which should be lists or arrays of length the number of neurons in the group. Coincidence detector neuron ``k`` receives inputs from ``i0[k]`` and ``i1[k]`` in the source group. ''' def __init__(self, source, N, delta=1 * ms, allow_multiple=False, ordered=False, clock=None, **kwds): if clock is None: clock = source.clock NeuronGroup.__init__(self, N, model=CoincidenceDetectorStateUpdater(allow_multiple, ordered), reset=0, threshold=0.5, clock=clock, **kwds) self._S[:] = 0 self.source = source self.conn = CoincidenceDetectorConnection(source, self) self.contained_objects = [self.conn] self.LSST = zeros(len(source)) self.lastfiring = ones(N) * clock._dt * 0.5 if isinstance(delta, Quantity): delta = float(delta) self.delta = delta self.indices = zeros((2, N), dtype=int) def set_indices(self, i0, i1): self.indices[0] = i0 self.indices[1] = i1 def reinit(self): self.LSST[:] = 0 self.lastfiring[:] = self.clock._dt * 0.5 if __name__ == '__main__': if 1: G = MultipleSpikeGeneratorGroup( [ [1 * ms, 2 * ms, 2.1 * ms], [1.5 * ms] ]) H = CoincidenceDetectorGroup(G, 2, 1 * ms, ordered=True, allow_multiple=True) H.indices[0] = [0, 1] H.indices[1] = [1, 0] MG = SpikeMonitor(G) MH = SpikeMonitor(H) run(10 * ms) print MG.spikes print MH.spikes if 0: G = MultipleSpikeGeneratorGroup( [ [1 * ms, 2.5 * ms], [1.5 * ms] ]) H = CoincidenceDetectorGroup(G, 2) H.indices[0] = [0, 1] H.indices[1] = [1, 0] MG = SpikeMonitor(G) MH = SpikeMonitor(H) run(10 * ms) print MG.spikes print MH.spikes if 0: G = PoissonGroup(2, 250 * Hz) H = CoincidenceDetectorGroup(G, 1) H.set_indices([0], [1]) MG = SpikeMonitor(G) MH = SpikeMonitor(H) run(1 * second) raster_plot(MG, MH) show() brian-1.3.1/brian/experimental/compensation/000077500000000000000000000000001167451777000211155ustar00rootroot00000000000000brian-1.3.1/brian/experimental/compensation/__init__.py000066400000000000000000000000001167451777000232140ustar00rootroot00000000000000brian-1.3.1/brian/experimental/compensation/compare.py000077500000000000000000000043131167451777000231210ustar00rootroot00000000000000import brian_no_units from brian import * import numpy import brian import brian.experimental.modelfitting as gpu import fast_compensation as cpu #brian.log_level_error() debug_level() def compensate_gpu(input, trace, p=1.0): equations = Equations(''' 
dV0/dt=(R*Iinj-V0+Vr)/tau : 1 Iinj=(V-V0)/Re : 1 dV/dt=Re*(I-Iinj)/taue : 1 Ve=V-V0 : 1 I : 1 R : 1 Re : 1 Vr : 1 tau : second taue : second ''') params = dict( R = [1.0e3, 1.0e6, 1000.0e6, 1.0e12], tau = [.1*ms, 1*ms, 30*ms, 200*ms], Vr = [-100.0e-3, -80e-3, -40e-3, -10.0e-3], Re = [1.0e3, 1.0e6, 1000.0e6, 1.0e12], taue = [.01*ms, .1*ms, 5*ms, 20*ms]) tracecomp, traceV, traceVe, results = gpu.compensate(input, trace, gpu=1, p=p, equations=equations, maxiter=10, popsize=10, slice_duration=1*second, **params) return traceV, traceVe, results if __name__ == '__main__': I = numpy.load("current1.npy")[:10000] * 25 # because current1[2010_12_09_0006] * 25 Vraw = numpy.load("trace1.npy")[:10000] R = 600*Mohm tau = 10*ms Vr = -70*mV Re = 100*Mohm taue = 1*ms # CPU COMPENSATION #Vcomp, params = cpu.compensate(I, Vraw, dt=.1*ms, # p = 1.0, # durslice=10*second, # R=R, tau=tau, Vr=Vr, Re=Re, taue=taue) # GPU COMPENSATION traceV, traceVe, results = compensate_gpu(I, Vraw) Vcomp_gpu = Vraw - traceVe subplot(211) plot(I) subplot(212) plot(Vraw) plot(Vcomp_gpu) show() brian-1.3.1/brian/experimental/compensation/fast_compensation.py000066400000000000000000000106501167451777000252050ustar00rootroot00000000000000import brian_no_units from brian import * from filter import * from scipy.optimize import * class ElectrodeCompensation (object): eqs = """ dV/dt=Re*(-Iinj)/taue : volt dV0/dt=(R*Iinj-V0+Vr)/tau : volt Iinj=(V-V0)/Re : amp """ def __init__(self, I, Vraw, dt=defaultclock.dt, durslice=1*second, p=1.0, *params): self.I = I self.Vraw = Vraw self.p = p self.dt = dt self.x0 = self.params_to_vector(*params) self.duration = len(I) * dt self.durslice = min(durslice, self.duration) self.slicesteps = int(durslice/dt) self.nslices = int(ceil(len(I)*dt/durslice)) self.islice = 0 self.I_list = [I[self.slicesteps*i:self.slicesteps*(i+1)] for i in range(self.nslices)] self.Vraw_list = [Vraw[self.slicesteps*i:self.slicesteps*(i+1)] for i in range(self.nslices)] def vector_to_params(self, *x): R,tau,Vr,Re,taue = x R = R*R tau = tau*tau Re = Re*Re taue = taue*taue return R,tau,Vr,Re,taue def params_to_vector(self, *params): x = params x = [sqrt(params[0]), sqrt(params[1]), params[2], sqrt(params[3]), sqrt(params[4])] return list(x) def get_model_trace(self, row, *x): R, tau, Vr, Re, taue = self.vector_to_params(*x) eqs = Equations(self.eqs) eqs.prepare() self._eqs = eqs y = simulate(eqs, self.I_list[self.islice] * Re/taue, self.dt, row=row) return y def fitness(self, x): R, tau, Vr, Re, taue = self.vector_to_params(*x) y = self.get_model_trace(0, *x) e = self.dt*sum(abs(self.Vraw_list[self.islice]-y)**self.p) return e def compensate_slice(self, x0): fun = lambda x: self.fitness(x) x = fmin(fun, x0, maxiter=10000, maxfun=10000) return x def compensate(self): self.params_list = [] self.xlist = [self.x0] t0 = time.clock() for self.islice in range(self.nslices): newx = self.compensate_slice(self.xlist[self.islice]) self.xlist.append(newx) self.params_list.append(self.vector_to_params(*newx)) print "Slice %d/%d compensated in %.2f seconds" % \ (self.islice+1, self.nslices, time.clock()-t0) t0 = time.clock() self.xlist = self.xlist[1:] return self.xlist def get_compensated_trace(self): Vcomp_list = [] Vmodel_list = [] for self.islice in range(self.nslices): x = self.xlist[self.islice] V = self.get_model_trace(0, *x) V0 = self.get_model_trace(1, *x) Velec = V-V0 Vcomp_list.append(self.Vraw_list[self.islice] - Velec) Vmodel_list.append(V) self.Vcomp = hstack(Vcomp_list) self.Vmodel = hstack(Vmodel_list) return self.Vcomp, 
self.Vmodel def compensate(I, Vraw, dt=defaultclock.dt, durslice=1*second, p=1.0, **initial_params): R = initial_params.get("R", 100*Mohm) tau = initial_params.get("tau", 20*ms) Vr = initial_params.get("Vr", -70*mV) Re = initial_params.get("Re", 50*Mohm) taue = initial_params.get("taue", .5*ms) comp = ElectrodeCompensation(I, Vraw, dt, durslice, p, R, tau, Vr, Re, taue) comp.compensate() Vcomp, Vmodel = comp.get_compensated_trace() params = comp.params_list return Vcomp, Vmodel, params if __name__ == '__main__': I = numpy.load("current1.npy")[:100000] * 80 Vraw = numpy.load("trace1.npy")[:100000] R = 200*Mohm tau = 20*ms Vr = -70*mV Re = 50*Mohm taue = .5*ms Vcomp, Vmodel, params = compensate(I, Vraw, dt=.1*ms, p = 1.0, durslice=10*second, R=R, tau=tau, Vr=Vr, Re=Re, taue=taue) #subplot(211) #plot(I) #subplot(212) plot(Vraw, 'k') plot(Vmodel, 'g') plot(Vcomp, 'r') show() brian-1.3.1/brian/experimental/compensation/filter.py000066400000000000000000000062511167451777000227600ustar00rootroot00000000000000import brian_no_units from brian import * from itertools import count from scipy.signal import lfilter from scipy import linalg from numpy.linalg import inv, matrix_power import numpy import time def get_linear_equations(eqs): ''' Returns the matrices A and C for the linear model dX/dt = M(X-B), where eqs is an Equations object. ''' # Otherwise assumes it is given in functional form n = len(eqs._diffeq_names) # number of state variables dynamicvars = eqs._diffeq_names #print eqs._diffeq_names # Calculate B AB = zeros((n, 1)) d = dict.fromkeys(dynamicvars) #print d for j in range(n): d[dynamicvars[j]] = 0. * eqs._units[dynamicvars[j]] for var, i in zip(dynamicvars, count()): #print i, var AB[i] = -eqs.apply(var, d) # Calculate A M = zeros((n, n)) for i in range(n): for j in range(n): d[dynamicvars[j]] = 0. * eqs._units[dynamicvars[j]] if isinstance(eqs._units[dynamicvars[i]], Quantity): d[dynamicvars[i]] = Quantity.with_dimensions(1., eqs._units[dynamicvars[i]].get_dimensions()) else: d[dynamicvars[i]] = 1. 
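            # With the i-th variable set to one (in its units) and all others set
            # to zero, the right-hand sides give f(e_i) = M e_i - M B, so column i
            # of M is recovered below as f(e_i) + AB (where AB = -f(0) = M B).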
for var, j in zip(dynamicvars, count()): M[j, i] = eqs.apply(var, d) + AB[j] #M-=eye(n)*1e-10 # quick dirty fix for problem of constant derivatives; dimension = Hz #B=linalg.lstsq(M,AB)[0] # We use this instead of solve in case M is degenerate try: B = linalg.solve(M, AB) # We use this instead of solve in case M is degenerate except: B = None return M, B def compute_filter(A, row=0): d = len(A) # compute a a = poly(A) # directly vector a of the filter, a[0]=1 # compute b recursively b = zeros(d+1) T = eye(d) b[0] = T[row, 0] for i in range(1, d+1): T = a[i]*eye(d) + dot(A, T) b[i] = T[row, 0] return b, a def simulate(eqs, I, dt, row=0): """ I must be normalized (I*Re/taue for example) """ M, B = get_linear_equations(eqs) A = linalg.expm(M * dt) b, a = compute_filter(A, row=row) y = lfilter(b, a, I*dt) + B[row] return y if __name__ == '__main__': R = 500*Mohm Re = 400*Mohm tau = 10*ms taue = 1.0*ms Vr = -70*mV dt = .1*ms # +Re*I/taue eqs = Equations(""" dV/dt=Re*(-Iinj)/taue : volt dV0/dt=(R*Iinj-V0+Vr)/tau : volt Iinj=(V-V0)/R : amp """) eqs.prepare() Is = numpy.load("current1.npy")[:50000] eqs2 = Equations(""" dV/dt=Re*(I-Iinj)/taue : volt dV0/dt=(R*Iinj-V0+Vr)/tau : volt Iinj=(V-V0)/R : amp I : amp """) G = NeuronGroup(1, eqs2) G.V = Vr G.I = TimedArray(Is) stm = StateMonitor(G, 'V', record=True) t0 = time.clock() run(len(Is)*dt) t1 = time.clock()-t0 y0 = stm.values[0] t0 = time.clock() y = simulate(eqs, Is * Re/taue, dt, row=0) t2 = time.clock()-t0 print t1, t2, t1/t2 print max(abs(y-y0)) subplot(211) plot(Is) subplot(212) plot(y0) plot(y) show() brian-1.3.1/brian/experimental/connectionmonitor.py000066400000000000000000000403461167451777000225460ustar00rootroot00000000000000''' Ideas for weight monitor: Should have following features: * Record from a subset of weights * Convert to Brian or scipy sparse matrix format * Callback feature * Record to file * Display in GUI ''' from brian import * from random import sample import matplotlib.cm as cm try: import pygame except ImportError: pygame = None __all__ = ['ConnectionMonitor'] class ConnectionMonitor(NetworkOperation): ''' Monitors synaptic connection weights Initialisation arguments: ``C`` The connection from which to monitor weight values. ``matrix='W'`` Which matrix to record from, 'W' is weights but you can also record delays with ``matrix='delay'``. ``synapses=None`` Specify which synapses to record. Can take the following values: An integer Will record this many randomly selected synapses (or less if there are less). A value between 0 and 1 Will record this fraction of the synapses, chosen at random. ``True``, ``None`` Records all the synapses A list The list should consist of pairs ``(i, j)`` to record those synapses only. Note that if an integer or fraction is specified, for a sparse matrix, the set of synapses that will be recorded will be chosen at random from the nonzero values at the beginning of the run. If you are using a :class:`DynamicConnectionMatrix` you might want to specify the synapses to record from explicitly. ``callback=None`` A callback function, each time a matrix ``M`` is recorded, ``callback(M)`` will be called. This could be used for saving matrices to a file, for example, or for runtime analysis. See below for recorded matrix format. ``store=False`` Whether or not to keep a copy of the recorded matrices in memory, specifically in the ``values`` attribute (see below). ``display=False`` Whether or not to display images at runtime. 
Can take any of the following values: ``'pygame'`` Displays the matrix using the pygame module, which must be installed if you want to use this. Only one matrix can be recorded at a time using pygame. Note that if ``display`` is not ``False`` it will replace the image callback provided (if any). ``image_callback=None`` Each time an image ``I`` is recorded, ``image_callback(I)`` is called. See below for image format. ``size=None`` Specifies the dimension of the image. A fairly crude scaling will be used if the dimensions of the image are different to those of the matrix. ``cmap=jet`` The colourmap to use. Various values are available in ``matplotlib.cm``. ``wmin=None, wmax=None`` The minimum and maximum weights for use with the colourmap. If not specified, for each recorded matrix the minimum and maximum values will be used. ``clock=None`` By default, matrices will be recorded every 1s of simulated time. ``when='end'`` When the matrices are recorded (see :class:`NetworkOperation`). **Matrix format** Matrices are recorded in one of two formats, either a numpy ``ndarray`` if the matrix to be recorded is dense and all synapses are being recorded, or a Brian :class:`SparseMatrix` (which is derived from ``scipy.sparse.lil_matrix``) in all other cases. If a subset of synapses to record has been selected, the sparse matrix will consist only of those synapses. **Image format** The images are numpy arrays with shape ``(width, height, 4)`` and ``dtype=uint8``. The last axis is the colour in the format RGBA, with values between 0 and 255. The A component is alpha, and will always be 255. **Attributes** ``values`` If ``store=True`` this consists of a list of pairs ``(t, M)`` giving the recorded matrix ``M`` at time ``t``. ''' def __init__(self, C, matrix='W', synapses=None, callback=None, store=False, display=False, image_callback=None, size=None, cmap=cm.jet, wmin=None, wmax=None, clock=None, when='end', ): self.C = C self.matrix = matrix self.synapses = synapses self.synapse_set = None self.callback = callback self.display = display self.store = store self.image_callback = image_callback Wh, Ww = getattr(C, matrix).shape if size is None: size = Ww, Wh w, h = size self.size = (h, w) self.wmin = wmin self.wmax = wmax self.cmap = cmap if clock is None: clock = EventClock(dt=1*second) if display=='pygame': if pygame is None: warnings.warn('Need pygame module to use pygame display mode.') else: self.screen = pygame.display.set_mode(self.size[::-1]) self.screen_arr = pygame.surfarray.pixels3d(self.screen) self.image_callback = self.pygame_image_callback NetworkOperation.__init__(self, lambda:None, clock=clock, when=when) self.reinit() def reinit(self): self.values = [] def get_matrix(self): W = getattr(self.C, self.matrix) sparseW = isinstance(W.get_row(0), SparseConnectionVector) if not(self.synapses is None or self.synapses is True): if self.synapse_set is None: # The first time we run we generate the synapse set # We don't do this on initialisation because the user # might initialise the matrix values before defining the # monitor. 
if sparseW: nnz = W.getnnz() else: nnz = W.shape[0]*W.shape[1] synapses = self.synapses if isinstance(synapses, float): synapses = int(synapses*nnz) if isinstance(synapses, int): if synapses>=nnz: # recording all synapses, so use the code below self.synapses = None else: # We want to pick a sample of indices at random, so # we pick a subset of [0...nnz-1] and pick out the # corresponding synapses (i,j) if they are arranged # in order indices = sample(xrange(nnz), synapses) indices.sort() indices = array(indices, dtype=int) if not sparseW: # For dense matrices there is a simple conversion # from 1D indices to 2D indices i = indices%W.shape[0] j = indices/W.shape[0] synapses = zip(i, j) else: # For sparse matrices, we need to go through row # by row and pick out the corresponding indices # Note that we don't return a list of pairs # (i,j) in this case, but directly create the # synapse_set as it is more efficient ss = SparseMatrix(W.shape) for i in xrange(W.shape[0]): row = W.get_row(i) I = indicesWw: w = Ww if h>Wh: h = Wh image = zeros((w, h)) # We're going to do some crude scaling by adding all the weights and # dividing by the number of itmes, for each pixel of the image image_numitems = zeros((w, h), dtype=int) ind = arange(W.shape[1]) # We will do histograms for each row, adding up the corresponding # weights for indices in these bins bins = hstack(((arange(w)*Ww)/w, Ww)) for i in xrange(W.shape[0]): u = h-1-((i*h)//Wh) # target coordinate, scaled row = W.get_row(i) if isinstance(row, SparseConnectionVector): ind = row.ind numitems, _ = histogram(ind, bins) vals, _ = histogram(ind, bins, weights=row) image[:, u] += vals image_numitems[:, u] += numitems image_numitems[image_numitems==0] = 1 image = image/image_numitems if wmin is None: wmin = amin(image) if wmax is None: wmax = amax(image) if wmax - wmin < 1e-20: wmax = wmin + 1e-20 image = self.cmap(clip((image - wmin) / (wmax - wmin), 0, 1), bytes=True) # Scale up if the requested image size is larger than the matrix ih, iw = self.size if ih>h or iw>w: yind = (arange(ih)*h)/ih xind = (arange(iw)*w)/iw xind, yind = meshgrid(xind, yind) image = image[xind.T, yind.T, :] return image def pygame_image_callback(self, image): self.screen_arr[:, :, :3] = image[:, :, :3] pygame.display.flip() pygame.event.pump() def __call__(self): W = self.get_matrix() if self.callback is not None: self.callback(W) if self.store: self.values.append((self.clock.t, W)) if self.image_callback is not None: self.image_callback(self.get_image()) if __name__=='__main__': if 0: N = 1000 taum = 10 * ms tau_pre = 20 * ms tau_post = tau_pre Ee = 0 * mV vt = -54 * mV vr = -60 * mV El = -74 * mV taue = 5 * ms F = 15 * Hz gmax = .01 dA_pre = .01 dA_post = -dA_pre * tau_pre / tau_post * 1.05 eqs_neurons = ''' dv/dt=(ge*(Ee-vr)+El-v)/taum : volt # the synaptic current is linearized dge/dt=-ge/taue : 1 ''' input = PoissonGroup(N, rates=F) neurons = NeuronGroup(1, model=eqs_neurons, threshold=vt, reset=vr) synapses = Connection(input, neurons, 'ge', weight=rand(len(input), len(neurons)) * gmax) neurons.v = vr #stdp=ExponentialSTDP(synapses,tau_pre,tau_post,dA_pre,dA_post,wmax=gmax) ## Explicit STDP rule eqs_stdp = ''' dA_pre/dt=-A_pre/tau_pre : 1 dA_post/dt=-A_post/tau_post : 1 ''' dA_post *= gmax dA_pre *= gmax stdp = STDP(synapses, eqs=eqs_stdp, pre='A_pre+=dA_pre;w+=A_post', post='A_post+=dA_post;w+=A_pre', wmax=gmax) rate = PopulationRateMonitor(neurons) M = ConnectionMonitor(synapses, store=True, synapses=200) run(100 * second, report='text') Z = zeros((100, 1000)) for i, 
(_, m) in enumerate(M.values): #print m[:, 0].__class__ Z[i, :] = m.todense().flatten() imshow(Z, aspect='auto') show() else: Nin = 200 Nout = 200 duration = 40 * second Fmax = 50 * Hz tau = 10 * ms taue = 2 * ms taui = 5 * ms sigma = 0.#0.4 # Note that factor (tau/taue) makes integral(v(t)) the same when the connection # acts on ge as if it acted directly on v. eqs = Equations(''' #dv/dt = -v/tau + sigma*xi/(2*tau)**.5 : 1 dv/dt = (-v+(tau/taue)*ge-(tau/taui)*gi)/tau + sigma*xi/(2*tau)**.5 : 1 dge/dt = -ge/taue : 1 dgi/dt = -gi/taui : 1 excitatory = ge inhibitory = gi ''') reset = 0 threshold = 1 refractory = 0 * ms taup = 5 * ms taud = 5 * ms Ap = .1 Ad = -Ap * taup / taud * 1.2 wmax_ff = 0.1 wmax_rec = wmax_ff wmax_inh = wmax_rec width = 0.2 recwidth = 0.2 Gin = PoissonGroup(Nin) Gout = NeuronGroup(Nout, eqs, reset=reset, threshold=threshold, refractory=refractory) ff = Connection(Gin, Gout, 'excitatory', structure='dense') for i in xrange(Nin): ff[i, :] = (rand(Nout) > .5) * wmax_ff rec = Connection(Gout, Gout, 'excitatory') for i in xrange(Nout): d = abs(float(i) / Nout - linspace(0, 1, Nout)) d[d > .5] = 1. - d[d > .5] dsquared = d ** 2 prob = exp(-dsquared / (2 * recwidth ** 2)) prob[i] = -1 inds = (rand(Nout) < prob).nonzero()[0] w = rand(len(inds)) * wmax_rec rec[i, inds] = w inh = Connection(Gout, Gout, 'inhibitory', sparseness=1, weight=wmax_inh) stdp_ff = ExponentialSTDP(ff, taup, taud, Ap, Ad, wmax=wmax_ff) stdp_rec = ExponentialSTDP(rec, taup, taud, Ap, Ad, wmax=wmax_rec) M = ConnectionMonitor(ff, display='pygame', #size=(500,500), #synapses=10000, ) run(0 * ms) @network_operation(clock=EventClock(dt=20 * ms)) def stimulation(): Gin.rate = Fmax * exp(-(linspace(0, 1, Nin) - rand())**2 / (2 * width ** 2)) run(40*second, report='stderr') if 0: for i, (_, m) in enumerate(M.values): if i%16==0: figure() subplot(4, 4, i%16+1) imshow(array(m.todense()), aspect='auto', #interpolation='nearest' ) show() brian-1.3.1/brian/experimental/cuda/000077500000000000000000000000001167451777000173325ustar00rootroot00000000000000brian-1.3.1/brian/experimental/cuda/__init__.py000066400000000000000000000000321167451777000214360ustar00rootroot00000000000000from gpucodegen import * brian-1.3.1/brian/experimental/cuda/buffering.py000066400000000000000000000322431167451777000216570ustar00rootroot00000000000000''' GPU Buffering class See docstring for GPUBufferedArray for details. 
''' from brian import * import pycuda import pycuda.gpuarray import numpy from numpy import intp __all__ = ['SynchronisationError', 'GPUBufferedArray'] if __name__ == '__main__': DEBUG_BUFFER_CACHE = True else: DEBUG_BUFFER_CACHE = False numpy_inplace_methods = set([ '__iadd__', '__iand__', '__idiv__', '__ifloordiv__', '__ilshift__', '__imod__', '__imul__', '__ior__', '__ipow__', '__irshift__', '__isub__', '__itruediv__', '__ixor__', '__setitem__', '__setslice__', 'byteswap', 'fill', 'put', 'sort', ]) numpy_access_methods = set([ '__abs__', '__add__', '__and__', '__array__', '__contains__', '__copy__', '__deepcopy__', '__delattr__', '__delitem__', '__delslice__', '__div__', '__divmod__', '__eq__', '__float__', '__floordiv__', '__ge__', '__getitem__', '__getslice__', '__gt__', '__hash__', '__hex__', '__iadd__', '__iand__', '__idiv__', '__ifloordiv__', '__ilshift__', '__imod__', '__imul__', '__index__', '__int__', '__invert__', '__ior__', '__ipow__', '__irshift__', '__isub__', '__iter__', '__itruediv__', '__ixor__', '__le__', '__long__', '__lshift__', '__lt__', '__mod__', '__mul__', '__ne__', '__neg__', '__nonzero__', '__oct__', '__or__', '__pos__', '__pow__', '__radd__', '__rand__', '__rdiv__', '__rdivmod__', '__reduce__', '__reduce_ex__', '__repr__', '__rfloordiv__', '__rlshift__', '__rmod__', '__rmul__', '__ror__', '__rpow__', '__rrshift__', '__rshift__', '__rsub__', '__rtruediv__', '__rxor__', '__setitem__', '__setslice__', '__setstate__', '__str__', '__sub__', '__truediv__', '__xor__', 'all', 'any', 'argmax', 'argmin', 'argsort', 'astype', 'byteswap', 'choose', 'clip', 'compress', 'conj', 'conjugate', 'copy', 'cumprod', 'cumsum', 'diagonal', 'dump', 'dumps', 'fill', 'flatten', 'getfield', 'item', 'itemset', 'max', 'mean', 'min', 'nonzero', 'prod', 'ptp', 'put', 'ravel', 'repeat', 'reshape', 'resize', 'round', 'searchsorted', 'setfield', 'setflags', 'sort', 'squeeze', 'std', 'sum', 'swapaxes', 'take', 'tofile', 'tolist', 'tostring', 'trace', 'transpose', 'var', 'view']) - numpy_inplace_methods class SynchronisationError(RuntimeError): ''' Indicates that the GPU and CPU data are in conflict (i.e. both have changed) ''' pass # These two functions are decorators to be applied to the numpy # methods. The inplace version sets the cpu data changed flag # after it has run, the access version only makes sure that # the data has been synchronised to the CPU. def gpu_buffered_array_inplace_method(func): def new_func(self, *args, **kwds): self.sync_to_cpu() rval = func(self, *args, **kwds) self._cpu_data_changed = True return rval new_func.__name__ = func.__name__ new_func.__doc__ = func.__doc__ return new_func def gpu_buffered_array_access_method(func): def new_func(self, *args, **kwds): self.sync_to_cpu() rval = func(self, *args, **kwds) return rval new_func.__name__ = func.__name__ new_func.__doc__ = func.__doc__ return new_func class DecoupledGPUBufferedArray(numpy.ndarray): pass class GPUBufferedArray(numpy.ndarray): ''' An object that automatically manages a numpy array that is stored on the CPU and GPU The idea is that whenever data is accessed on either the GPU or CPU, this object performs any synchronisation operations. So for example if you change a value in the array on the CPU and then run a function on the GPU, the data from the CPU side needs to be copied to the GPU before calling the GPU function. 
The best practice is to use the ``cpu_array`` and ``gpu_array`` or ``gpu_dev_alloc`` attributes explicitly, and call the ``changed_cpu_data()`` and ``changed_gpu_data()`` methods after modifying data. However, the class does its best to be intelligent about when changes to data have happened so these arrays can be used in code that is not GPU aware to some extent. See implementation notes below for details on when this will fail. **Initialisation** Initialising with a numpy array creates a ``pycuda.gpuarray.GPUArray`` instance to store the data on the GPU. If you want, you can specify a pre-existing ``GPUArray`` or ``pycuda.DeviceAllocation`` instance. Initialising with a pagelocked numpy array will make copies faster. **Attributes** Each of these attributes returns a reference to data on the CPU or GPU (and like everything in this class, will synchronise if necessary). Obtaining a reference to CPU/GPU data is taken as implying that the data will be changed by subsequent code, and so the CPU/GPU data is marked as changed. To obtain references without marking the data as changed, use the attribute with ``_nomodify`` appended. If you are obtaining and storing a reference, you need to explicitly mark it as modified when you make a change (see the ``changed_gpu_data()`` and ``changed_cpu_data()`` methods below). ``cpu_array``, ``cpu_array_nomodify`` A reference to the array on the CPU. ``gpu_array``, ``gpu_array_nomodify`` If the GPU data is a ``GPUArray``, this will return it, otherwise it will throw an error. ``gpu_dev_alloc``, ``gpu_dev_alloc_nomodify`` Return the ``DeviceAllocation`` for the GPU data. ``gpu_pointer``, ``gpu_pointer_nomodify`` Returns an integer pointer to the GPU data. **Methods** ``changed_gpu_data()``, ``changed_cpu_data()`` Mark the data on the GPU or CPU as modified. You need to do this explicitly if you obtained and stored a reference to the CPU or GPU data. ``sync_to_gpu()``, ``sync_to_cpu()`` Explicitly make sure that the data on the CPU/GPU has been copied to the GPU/CPU respectively if they are marked as changed. ``sync()`` Explicitly synchronise the CPU and GPU. **Implementation** The implementation is not clever about copying data, it either copies all the data or none at all. Internally it stores two flags to indicate whether data on the CPU or GPU has changed, which it tries its best to be accurate but cannot guarantee (see below), and performs a copy when needed (subsequently updating these flags). Results of numpy functions, except for a handful of explicitly inplace operators, do not mark the data as changed. So doing, for example, ``clip(y,0,1,out=y)`` will not mark the data in ``y`` as changed. ''' def __new__(subtype, cpu_arr, gpu_arr=None): return numpy.array(cpu_arr, copy=False).view(subtype) def __init__(self, cpu_arr, gpu_arr=None): if gpu_arr is None: if DEBUG_BUFFER_CACHE: print 'Initialising and copying GPU memory' self._gpu_arr = pycuda.gpuarray.to_gpu(self) self._gpu_data_changed = False self._cpu_data_changed = False else: self._gpu_arr = gpu_arr self._gpu_data_changed = False self._cpu_data_changed = True def _check_synchronisation(self): if not hasattr(self, '_cpu_data_changed'): # This happens if you do, for example, y[0,:][:]=1 because the y[0,:] will # return an uninitialised GPUBufferedArray. At the moment, we don't # handle this case (although perhaps in the case of slicing we could?) so # maybe we should throw an error here? 
The alternative is to try to minimise # the impact on code (although it will have decoupled the array from the # GPU data, which may not be expected). self.__class__ = DecoupledGPUBufferedArray self._cpu_data_changed = False self._gpu_data_changed = False return if self._cpu_data_changed and self._gpu_data_changed: raise SynchronisationError('GPU and CPU data desynchronised.') def sync(self): self.sync_to_gpu() self.sync_to_cpu() def sync_to_gpu(self): self._check_synchronisation() if self._cpu_data_changed: gpu_arr = self._gpu_arr if isinstance(gpu_arr, pycuda.gpuarray.GPUArray): gpu_arr.set(self) else: pycuda.driver.memcpy_htod(gpu_arr, self) self._cpu_data_changed = False if DEBUG_BUFFER_CACHE: print 'Synchronised to GPU' def sync_to_cpu(self): self._check_synchronisation() if self._gpu_data_changed: gpu_arr = self._gpu_arr if isinstance(gpu_arr, pycuda.gpuarray.GPUArray): gpu_arr.get(self) else: pycuda.driver.memcpy_dtoh(self, gpu_arr) self._gpu_data_changed = False if DEBUG_BUFFER_CACHE: print 'Synchronised to CPU' def changed_gpu_data(self): self._gpu_data_changed = True self._check_synchronisation() def changed_cpu_data(self): self._cpu_data_changed = True self._check_synchronisation() def get_cpu_array(self, modify=True): self.sync_to_cpu() if modify: # assume that the user is going to modify the data self._cpu_data_changed = True return self def get_gpu_array(self, modify=True): self.sync_to_gpu() if modify: # assume that the user is going to modify the data self._gpu_data_changed = True if isinstance(self._gpu_arr, pycuda.gpuarray.GPUArray): return self._gpu_arr raise TypeError('GPU buffer is not a GPUArray') def get_gpu_dev_alloc(self, modify=True): self.sync_to_gpu() if modify: # assume that the user is going to modify the data self._gpu_data_changed = True if isinstance(self._gpu_arr, pycuda.gpuarray.GPUArray): return self._gpu_arr.gpudata elif isinstance(self._gpu_arr, pycuda.driver.DeviceAllocation): return self._gpu_arr raise TypeError('gpu_arr should be a DeviceAllocation or GPUArray.') def get_gpu_pointer(self): return intp(self.gpu_dev_alloc) cpu_array = property(fget=get_cpu_array) gpu_array = property(fget=get_gpu_array) gpu_dev_alloc = property(fget=get_gpu_dev_alloc) gpu_pointer = property(fget=get_gpu_pointer) cpu_array_nomodify = property(fget=lambda self:self.get_cpu_array(False)) gpu_array_nomodify = property(fget=lambda self:self.get_gpu_array(False)) gpu_dev_alloc_nomodify = property(fget=lambda self:self.get_gpu_dev_alloc(False)) gpu_pointer_nomodify = property(fget=lambda self:self.get_gpu_pointer(False)) for __name in numpy_inplace_methods: exec __name + ' = gpu_buffered_array_inplace_method(numpy.ndarray.' + __name + ')' for __name in numpy_access_methods: exec __name + ' = gpu_buffered_array_access_method(numpy.ndarray.' + __name + ')' def __array_wrap__(self, obj, context=None): # normally, __array_wrap__ calls __array_finalize__ to convert the result obj into an object of its own type # but here, we don't want that, we want it to be a straight numpy array which doesn't maintain a connection # to the GPU array. 
return obj if __name__ == '__main__': import pycuda.autoinit from pycuda import driver as drv def gpuarrstatus(z): return '(cpu_changed=' + str(z._cpu_data_changed) + ', gpu_changed=' + str(z._gpu_data_changed) + ')' x = array([1, 2, 3, 4], dtype=numpy.float32) print '+ About to initialise GPUBufferedArray' y = GPUBufferedArray(x) print '- Initialised GPUBufferedArray', gpuarrstatus(y) print '+ About to set item on CPU array', gpuarrstatus(y) y[3] = 5 print '- Set item on CPU array', gpuarrstatus(y) mod = drv.SourceModule(''' __global__ void doubleit(float *y) { int i = threadIdx.x; y[i] *= 2.0; } ''') doubleit = mod.get_function("doubleit") print '+ About to call GPU function', gpuarrstatus(y) doubleit(y.gpu_array, block=(4, 1, 1)) print '- Called GPU function', gpuarrstatus(y) print '+ About to print CPU array', gpuarrstatus(y) print y print '- Printed CPU array', gpuarrstatus(y) brian-1.3.1/brian/experimental/cuda/gpucodegen.py000066400000000000000000000330731167451777000220320ustar00rootroot00000000000000#import brian_no_units from brian import * from brian import optimiser from scipy import weave try: import pycuda import pycuda.autoinit as autoinit import pycuda.driver as drv import pycuda.compiler as compiler from pycuda import gpuarray from buffering import * except ImportError: log_warn('brian.experimental.cuda.gpucodegen', 'Cannot import pycuda') import time from brian.experimental.codegen.rewriting import rewrite_to_c_expression __all__ = ['GPUNonlinearStateUpdater', 'UserControlledGPUNonlinearStateUpdater', 'GPUNeuronGroup'] #DEBUG_BUFFER_CACHE = False class GPUNonlinearStateUpdater(NonlinearStateUpdater): def __init__(self, eqs, clock=None, freeze=False, precision='double', maxblocksize=512, forcesync=False): NonlinearStateUpdater.__init__(self, eqs, clock, compile=False, freeze=freeze) self.precision = precision if self.precision == 'double': self.precision_dtype = float64 else: self.precision_dtype = float32 self.clock_dt = float(guess_clock(clock).dt) self.code_gpu = self.generate_forward_euler_code() self.maxblocksize = maxblocksize self.forcesync = forcesync self._prepared = False def generate_forward_euler_code(self): eqs = self.eqs M = len(eqs._diffeq_names) all_variables = eqs._eq_names + eqs._diffeq_names + eqs._alias.keys() + ['t'] clines = '__global__ void stateupdate(int N, SCALAR t, SCALAR *S)\n' clines += '{\n' clines += ' int i = blockIdx.x * blockDim.x + threadIdx.x;\n' clines += ' if(i>=N) return;\n' for j, name in enumerate(eqs._diffeq_names): clines += ' int _index_' + name + ' = i+' + str(j) + '*N;\n' for j, name in enumerate(eqs._diffeq_names): # clines += ' SCALAR &' + name + ' = S[i+'+str(j)+'*N];\n' clines += ' SCALAR ' + name + ' = S[_index_' + name + '];\n' for j, name in enumerate(eqs._diffeq_names): namespace = eqs._namespace[name] expr = optimiser.freeze(eqs._string[name], all_variables, namespace) expr = rewrite_to_c_expression(expr) print expr if name in eqs._diffeq_names_nonzero: clines += ' SCALAR ' + name + '__tmp = ' + expr + ';\n' for name in eqs._diffeq_names_nonzero: # clines += ' '+name+' += '+str(self.clock_dt)+'*'+name+'__tmp;\n' clines += ' S[_index_' + name + '] = ' + name + '+' + str(self.clock_dt) + '*' + name + '__tmp;\n' clines += '}\n' clines = clines.replace('SCALAR', self.precision) self.gpu_mod = compiler.SourceModule(clines) self.gpu_func = self.gpu_mod.get_function("stateupdate") return clines def _prepare(self, P): blocksize = self.maxblocksize if len(P) < blocksize: blocksize = len(P) if len(P) % blocksize == 0: gridsize = 
len(P) / blocksize else: gridsize = len(P) / blocksize + 1 self._prepared = True self.gpu_func.prepare((int32, self.precision_dtype, 'i'), (blocksize, 1, 1)) self._S_gpu_addr = P._S.gpu_pointer self._gpu_N = int32(len(P)) self._gpu_grid = (gridsize, 1) def __call__(self, P): if not self._prepared: self._prepare(P) P._S.sync_to_gpu() self.gpu_func.prepared_call(self._gpu_grid, self._gpu_N, self.precision_dtype(P.clock._t), self._S_gpu_addr) P._S.changed_gpu_data() if self.forcesync: P._S.sync_to_cpu() P._S.changed_cpu_data() class UserControlledGPUNonlinearStateUpdater(GPUNonlinearStateUpdater): def __init__(self, eqs, clock=None, freeze=False, precision='double', maxblocksize=512, gpu_to_cpu_vars=None, cpu_to_gpu_vars=None): GPUNonlinearStateUpdater.__init__(self, eqs, clock=clock, freeze=freeze, precision=precision, maxblocksize=maxblocksize, forcesync=False) self.gpu_to_cpu_vars = gpu_to_cpu_vars self.cpu_to_gpu_vars = cpu_to_gpu_vars def _prepare(self, P): GPUNonlinearStateUpdater._prepare(self, P) if isinstance(P._S._gpu_arr, pycuda.gpuarray.GPUArray): self._gpuoffset = int(P._S._gpu_arr.gpudata) elif isinstance(P._S._gpu_arr, pycuda.driver.DeviceAllocation): self._gpuoffset = int(P._S._gpu_arr) self._cpuflat = array(P._S, copy=False) self._cpuflat.shape = self._cpuflat.size def __call__(self, P): if not self._prepared: self._prepare(P) # copy from CPU to GPU for i, j, k in self.cpu_to_gpu_vars: pycuda.driver.memcpy_htod(self._gpuoffset + i, self._cpuflat[j:k]) self.gpu_func.prepared_call(self._gpu_grid, self._gpu_N, self.precision_dtype(P.clock._t), self._S_gpu_addr) # copy from GPU back to CPU for i, j, k in self.gpu_to_cpu_vars: pycuda.driver.memcpy_dtoh(self._cpuflat[j:k], self._gpuoffset + i) P._S._cpu_data_changed = False # override any buffering P._S._gpu_data_changed = False # override any buffering class GPUNeuronGroup(NeuronGroup): ''' Neuron group which performs numerical integration on the GPU. .. warning:: This class is still experimental, not supported and subject to change in future versions of Brian. Initialised with arguments as for :class:`NeuronGroup` and additionally: ``precision='double'`` The GPU scalar precision to use, older models can only use ``precision='float'``. ``maxblocksize=512`` If GPU compilation fails, reduce this value. ``forcesync=False`` Whether or not to force copying of state variables to and from the GPU each time step. This is slow, so it is better to specify precisely which variables should be copied to and from using ``gpu_to_cpu_vars`` and ``cpu_to_gpu_vars``. ``pagelocked_mem=True`` Whether to store state variables in pagelocked memory on the CPU, which makes copying data to/from the GPU twice as fast. ``cpu_to_gpu_vars=None``, ``gpu_to_cpu_vars=None`` Which variables should be copied each time step from the CPU to the GPU (before state update) and from the GPU to the CPU (after state update). The point of the copying of variables to and from the GPU is that the GPU maintains a separate memory from the CPU, and so changes made on either the CPU or GPU won't automatically be reflected in the other. Since only numerical integration is done on the GPU, any state variable that is modified by incoming synapses, for example, should be copied to and from the GPU each time step. In addition, any variables used for thresholding or resetting need to be appropriately copied (GPU->CPU for thresholding, and both for resetting). 
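    A minimal usage sketch (the model and parameter values here are
    illustrative only)::

        eqs = Equations('dV/dt = -V/(10*ms) : 1')
        G = GPUNeuronGroup(1000, eqs, threshold=1., reset=0.,
                           precision='float',
                           cpu_to_gpu_vars=['V'], gpu_to_cpu_vars=['V'])

    ``V`` is copied in both directions each time step because it is used by
    the threshold and reset, which are applied on the CPU.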
''' def __init__(self, N, model, threshold=None, reset=NoReset(), init=None, refractory=0 * msecond, level=0, clock=None, order=1, implicit=False, unit_checking=True, max_delay=0 * msecond, compile=False, freeze=False, method=None, precision='double', maxblocksize=512, forcesync=False, pagelocked_mem=True, gpu_to_cpu_vars=None, cpu_to_gpu_vars=None): eqs = model eqs.prepare() NeuronGroup.__init__(self, N, eqs, threshold=threshold, reset=reset, init=init, refractory=refractory, level=level, clock=clock, order=order, compile=compile, freeze=freeze, method=method) self.precision = precision if self.precision == 'double': self.precision_dtype = float64 self.precision_nbytes = 8 else: self.precision_dtype = float32 self.precision_nbytes = 4 self.clock = guess_clock(clock) if gpu_to_cpu_vars is None and cpu_to_gpu_vars is None: self._state_updater = GPUNonlinearStateUpdater(eqs, clock=self.clock, precision=precision, maxblocksize=maxblocksize, forcesync=forcesync) else: cpu_to_gpu_vars = [(self.get_var_index(var) * len(self) * self.precision_nbytes, self.get_var_index(var) * len(self), (self.get_var_index(var) + 1) * len(self)) for var in cpu_to_gpu_vars] gpu_to_cpu_vars = [(self.get_var_index(var) * len(self) * self.precision_nbytes, self.get_var_index(var) * len(self), (self.get_var_index(var) + 1) * len(self)) for var in gpu_to_cpu_vars] self._state_updater = UserControlledGPUNonlinearStateUpdater(eqs, clock=self.clock, precision=precision, maxblocksize=maxblocksize, gpu_to_cpu_vars=gpu_to_cpu_vars, cpu_to_gpu_vars=cpu_to_gpu_vars) if pagelocked_mem: self._S = GPUBufferedArray(drv.pagelocked_zeros(self._S.shape, dtype=self.precision_dtype)) else: self._S = GPUBufferedArray(array(self._S, dtype=self.precision_dtype)) self._gpuneurongroup_init_finished = True def copyvar_cpu_to_gpu(self, var): i, j, k = (self.get_var_index(var) * len(self) * self.precision_nbytes, self.get_var_index(var) * len(self), (self.get_var_index(var) + 1) * len(self)) pycuda.driver.memcpy_htod(self._state_updater._gpuoffset + i, self._state_updater._cpuflat[j:k]) def copyvar_gpu_to_cpu(self, var): i, j, k = (self.get_var_index(var) * len(self) * self.precision_nbytes, self.get_var_index(var) * len(self), (self.get_var_index(var) + 1) * len(self)) pycuda.driver.memcpy_dtoh(self._state_updater._cpuflat[j:k], self._state_updater._gpuoffset + i) def __setattr__(self, name, val): try: self._gpuneurongroup_init_finished NeuronGroup.__setattr__(self, name, val) if name in self.var_index: self._S.changed_cpu_data() except AttributeError: object.__setattr__(self, name, val) if __name__ == '__main__': from brian.experimental.ccodegen import AutoCompiledNonlinearStateUpdater set_global_preferences(usecodegen=False) #duration = 10*second #N = 1000 #domonitor = False duration = 1000 * ms N = 100 domonitor = False showfinal = False forcesync = True method = 'gpu' # methods are 'c', 'python' and 'gpu' if drv.get_version() == (2, 0, 0): # cuda version precision = 'float' elif drv.get_version() > (2, 0, 0): precision = 'double' else: raise Exception, "CUDA 2.0 required" #precision = 'float' import buffering buffering.DEBUG_BUFFER_CACHE = False # eqs = Equations(''' # #dV/dt = -V*V/(10*ms) : 1 # dV/dt = cos(2*pi*t/(100*ms))/(10*ms) : 1 # #dV/dt = -V*V*V*V*V/(100*ms) : 1 # #dW/dt = -W*W*W*W*W/(100*ms) : 1 # #dV/dt = cos(2*pi*t/(100*ms))/(10*ms) : 1 # #dW/dt = cos(2*pi*t/(100*ms))/(10*ms) : 1 # #dW2/dt = cos(2*pi*t/(100*ms))/(10*ms) : 1 # #dV/dt = h/(10*ms) : 1 # #h = -V*V : 1 # ''') #eqs = 
Equations('\n'.join('dv'+str(i)+'/dt=-v'+str(i)+'/second:1' for i in range(20))) #10 works 11 is too much #print eqs # taum=20*ms # taue=5*ms # taui=10*ms # Vt=-50*mV # Vr=-60*mV # El=-49*mV # # eqs= Equations(''' # dV/dt = (ge+gi-(V-El))/taum : volt # dge/dt = -ge/taue : volt # dgi/dt = -gi/taui : volt # ''') from brian.library.ionic_currents import * El = 10.6 * mV EK = -12 * mV ENa = 120 * mV eqs = MembraneEquation(1 * uF) + leak_current(.3 * msiemens, El) eqs += K_current_HH(36 * msiemens, EK) + Na_current_HH(120 * msiemens, ENa) eqs += Current('I:amp') #eqs.prepare() # #for n in eqs._string.keys(): # eqs._string[n] = rewrite_to_c_expression(eqs._string[n]) #print eqs if method == 'gpu': G = GPUNeuronGroup(N, eqs, precision=precision, maxblocksize=256, forcesync=forcesync) print 'GPU loop code:' print G._state_updater.code_gpu gf = G._state_updater.gpu_func print '(lmem, smem, registers) = ', (gf.local_size_bytes, gf.shared_size_bytes, gf.num_regs) devdata = pycuda.tools.DeviceData() orec = pycuda.tools.OccupancyRecord(devdata, 256) print 'tb_per_mp', orec.tb_per_mp print 'limited_by', orec.limited_by print 'warps_per_mp', orec.warps_per_mp print 'occupancy', orec.occupancy elif method == 'c': G = NeuronGroup(N, eqs, compile=True, freeze=True) su = AutoCompiledNonlinearStateUpdater(eqs, G.clock, freeze=True) G._state_updater = su elif method == 'python': #G = NeuronGroup(N, eqs, freeze=True)#, compile=True, freeze=True) G = NeuronGroup(N, eqs, compile=True, freeze=True) G.V = 1 if domonitor: M = StateMonitor(G, 'V', record=True) start = time.time() run(duration) autoinit.context.synchronize() print method, 'code:', (time.time() - start) if domonitor: M_V = M[0] if domonitor: plot(M.times, M_V) if showfinal: figure() plot(G.V) if domonitor or showfinal: show() brian-1.3.1/brian/experimental/dana/000077500000000000000000000000001167451777000173215ustar00rootroot00000000000000brian-1.3.1/brian/experimental/dana/__init__.py000066400000000000000000000000001167451777000214200ustar00rootroot00000000000000brian-1.3.1/brian/experimental/dana/convolution_matrix.py000066400000000000000000000076721167451777000236520ustar00rootroot00000000000000def convolution_matrix(src, dst, kernel, toric=False): ''' Build a sparse convolution matrix M such that: (M*src.ravel()).reshape(src.shape) = convolve2d(src,kernel) You can specify whether convolution is toric or not and specify a different output shape. If output (dst) is different, convolution is only applied at corresponding normalized location within the src array. Building the matrix can be pretty long if your kernel is big but it can nonetheless saves you some time if you need to apply several convolution compared to fft convolution (no need to go to the Fourier domain). Parameters: ----------- src : n-dimensional numpy array Source shape dst : n-dimensional numpy array Destination shape kernel : n-dimensional numpy array Kernel to be used for convolution Returns: -------- A sparse convolution matrix Examples: --------- >>> Z = np.ones((3,3)) >>> M = convolution_matrix(Z,Z,Z,True) >>> print (M*Z.ravel()).reshape(Z.shape) [[ 9. 9. 9.] [ 9. 9. 9.] [ 9. 9. 9.]] >>> M = convolution_matrix(Z,Z,Z,False) >>> print (M*Z.ravel()).reshape(Z.shape) [[ 4. 6. 4.] [ 6. 9. 6.] [ 4. 6. 
4.]] ''' # For a toric connection, it is wrong to have a kernel larger # than the source # if toric: # shape = np.minimum(np.array(src.shape), np.array(kernel.shape)) # kernel = extract(kernel, shape, np.rint(np.array(kernel.shape)/2.)) # Get non NaN value from kernel and their indices. nz = (1 - np.isnan(kernel)).nonzero() data = kernel[nz].ravel() indices = [0,]*(len(kernel.shape)+1) indices[0] = np.array(nz) indices[0] += np.atleast_2d((np.array(src.shape)//2 - np.array(kernel.shape)//2)).T # Generate an array A for a given shape such that given an index tuple I, # we can translate into a flat index F = (I*A).sum() to_flat_index = np.ones((len(src.shape),1), dtype=int) if len(src.shape) > 1: to_flat_index[:-1] = src.shape[1] R, C, D = [], [], [] dst_index = 0 src_indices = [] # Translate target tuple indices into source tuple indices taking care of # possible scaling (this is done by normalizing indices) for i in range(len(src.shape)): z = np.rint((np.linspace(0,1,dst.shape[i])*(src.shape[i]-1))).astype(int) src_indices.append(z) nd = [0,]*(len(kernel.shape)) for index in np.ndindex(dst.shape): dims = [] # Are we starting a new dimension ? if index[-1] == 0: for i in range(len(index)-1,0,-1): if index[i]: break dims.insert(0,i-1) dims.append(len(dst.shape)-1) for dim in dims: i = index[dim] if toric: z = (indices[dim][dim] - src.shape[dim]//2 +(kernel.shape[dim]+1)%2 + src_indices[dim][i]) % src.shape[dim] else: z = (indices[dim][dim] - src.shape[dim]//2 +(kernel.shape[dim]+1)%2 + src_indices[dim][i]) # if toric: # z = (indices[dim][dim] - src.shape[dim]/2.0 -(kernel.shape[dim]+1)%2 + src_indices[dim][i]) % src.shape[dim] # else: # z = (indices[dim][dim] - src.shape[dim]/2.0 -(kernel.shape[dim]+1)%2+ src_indices[dim][i]) n = np.where((z >= 0)*(z < src.shape[dim]))[0] if dim == 0: nd[dim] = n.copy() else: nd[dim] = nd[dim-1][n] indices[dim+1] = np.take(indices[dim], n, 1) indices[dim+1][dim] = z[n] dim = len(dst.shape)-1 z = indices[dim+1] R.extend( [dst_index,]*len(z[0]) ) C.extend( (z*to_flat_index).sum(0).tolist() ) D.extend( data[nd[-1]].tolist() ) dst_index += 1 return sp.coo_matrix( (D,(R,C)), (dst.size,src.size)) brian-1.3.1/brian/experimental/integrodiff.py000066400000000000000000000125151167451777000212740ustar00rootroot00000000000000''' Conversion from integral form to differential form. See BEP-5. TODO: * maximum rank * better function name * discrete time version * rescale X0 to avoid numerical problems * automatic determination of T? ''' import re import inspect from brian.units import * from brian.stdunits import * from brian.inspection import get_identifiers from brian.utils.autodiff import * from brian.equations import * from scipy import linalg def integral2differential(expr, T=20 * ms, level=0, N=20, suffix=None, matrix_output=False): ''' Example: eqs,w=integral2differential('g(t)=t*exp(-t/tau)') M,nvar,w=integral2differential('g(t)=t*exp(-t/tau)',matrix_output=True) Returns an Equations object corresponding to the time-invariant linear system specified by the impulse response g(t), and the value w to generate the impulse response: g_in->g_in+w. If matrix_output is True, returns the matrix of the corresponding differential system, the index nvar of the variable and the initial condition w=x_nvar(0). T is the interval over which the function is calculated. N is the number of points chosen in that interval. level is the frame level where the expression is defined. suffix is a string added to internal variable names (default: unique string). 
''' # Expression matching varname, time, RHS = re.search('\s*(\w+)\s*\(\s*(\w+)\s*\)\s*=\s*(.+)\s*', expr).groups() # Build the namespace frame = inspect.stack()[level + 1][0] global_namespace, local_namespace = frame.f_globals, frame.f_locals # Find external objects vars = list(get_identifiers(RHS)) namespace = {} for var in vars: if var == time: # time variable pass elif var in local_namespace: #local namespace[var] = local_namespace[var] elif var in global_namespace: #global namespace[var] = global_namespace[var] elif var in globals(): # typically units namespace[var] = globals()[var] # Convert to a function f = eval('lambda ' + time + ':' + RHS, namespace) # Unit unit = get_unit(f(rand()*second)).name # Pick N points t = rand(N) * T # Calculate derivatives and find rank n = 0 rank = 0 M = f(t).reshape(N, 1) while rank == n: n += 1 dfn = differentiate(f, t, order=n).reshape(N, 1) x, _, rank, _ = linalg.lstsq(M, dfn) if rank == n: M = hstack([M, dfn]) oldx = x # oldx expresses dfn as a function of df0,..,dfn-1 (n=rank) # Find initial condition X0 = array([differentiate(f, 0 * ms, order=n) for n in range(rank)]) # Rescaling DOES NOT WORK #R=ones(rank) #for i in range(rank): # if X0[i]!=0.: # R[i]=1./X0[i] # else: # R[i]=1. #R=diag(R) #X0=dot(R,X0) #oldx=dot(R,oldx) # Build A A = diag(ones(rank - 1), 1) A[-1, :] = oldx.reshape(1, rank) # Find Q=P^{-1} Q = eye(rank) if X0[0] == 0.: # continuous g, spikes act on last variable: x->x+1 Q[:, -1] = X0 nvar = rank - 1 w = 1. # Exact inversion P = eye(rank) P[:-1, -1] = -X0[:-1] / X0[-1] # Has to be !=0 !! P[-1, -1] = 1. / X0[-1] else: # discontinuous g, spikes act on first variable: x->x+g(0) Q[:, 0] = X0 nvar = 0 w = X0[0] P = linalg.inv(Q) M = dot(dot(P, A), Q) #M=dot(linalg.inv(R),dot(M,R)) # Turn into string # Set variable names if rank < 5: names = [varname] + ['x', 'y', 'z'][:rank - 1] else: names = [varname] + ['x' + str(i) for i in range(rank - 1)] # Add suffix if suffix is None: suffix = unique_id() names[1:] = [name + suffix for name in names[1:]] # Build string eqs = [] for i in range(rank): eqs.append('d' + names[i] + '/dt=' + '+'.join([str(x) + '*' + name for x, name in zip(M[i, :], names) if x != 0.]) + ' : ' + str(unit)) eqs.append(varname + '_in=' + names[nvar]) # alias eq_string = '\n'.join(eqs).replace('+-', '-') if matrix_output: return M, nvar, w else: return Equations(eq_string), w if __name__ == '__main__': from brian import * from scipy import linalg tau = 10 * ms tau2 = 5 * ms freq = 350 * Hz # The gammatone example does not seem to work for higher orders # probably a numerical problem; use a rescaling matrix for X0? 
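    # The loop below compares the impulse response of the reconstructed linear
    # system, expm(A*t)[0, nvar]*w, with the original f(t); the two printed
    # columns should agree.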
f = lambda t:(t / tau) ** 1 * exp(-t / tau) * cos(2 * pi * freq * t) A, nvar, w = integral2differential('g(t)=(t/tau)**1*exp(-t/tau)*cos(2*pi*freq*t)', suffix='', matrix_output=True) eq, w = integral2differential('g(t)=(t/tau)**1*exp(-t/tau)*cos(2*pi*freq*t)', suffix='', matrix_output=False) print eq #f=lambda t:exp(-t/tau)-exp(-t/tau2)*cos(2*pi*t/tau) #A,nvar,w=integral2differential('g(t)=exp(-t/tau)-exp(-t/tau2)*cos(2*pi*t/tau)', # matrix_output=True) print A, nvar, w for t in range(10): t = t * 1 * ms print linalg.expm(A * t)[0, nvar] * w, f(t) #t=arange(50)*.5*ms #plot(t,f(t)) #show() brian-1.3.1/brian/experimental/modelfitting/000077500000000000000000000000001167451777000211035ustar00rootroot00000000000000brian-1.3.1/brian/experimental/modelfitting/__init__.py000077500000000000000000000000651167451777000232200ustar00rootroot00000000000000from modelfitting import * from compensation import *brian-1.3.1/brian/experimental/modelfitting/benchmark.py000077500000000000000000000025601167451777000234150ustar00rootroot00000000000000from brian import * from brian.experimental.modelfitting import * from numpy.random import * import time if __name__ == '__main__': equations = Equations(''' dV/dt=(R*I-V)/tau : 1 I : 1 R : 1 tau : second ''') input = loadtxt('current.txt') trace = loadtxt('trace_artificial.txt') groups = 1 overlap = 0*ms # input, trace = slice_trace(input, trace, slices = groups, overlap = overlap) neurons = 25600 R = 3e9*ones(neurons) tau = 25*ms*ones(neurons) criterion = LpError(p=2, varname='V') t0 = time.clock() criterion_values = simulate( model = equations, reset = 0, threshold = 1, data = trace, input = input, use_gpu = True, groups = groups, overlap = overlap, stepsize = 128*ms, dt = .1*ms, criterion = criterion, neurons = neurons, R = R, tau = tau) dur = time.clock()-t0 print dur print min(criterion_values), max(criterion_values) brian-1.3.1/brian/experimental/modelfitting/compensation.py000077500000000000000000000102521167451777000241570ustar00rootroot00000000000000from brian import * from modelfitting import * import time, os dt = defaultclock.dt def compensate(current, trace, popsize = 100, maxiter = 10, equations = None, reset = None, threshold = None, slice_duration = 1 * second, overlap = 100*ms, initial = None, dt = defaultclock.dt, cpu = None, gpu = 1, record=['V','Ve'], cut = None, cut_length = 0.0, p = 0.5, best_params=None, **params): trace0 = trace.copy() if initial is None: initial = trace[0] if slice_duration is None: slices = 1 else: slices = max(int(len(current)*dt/slice_duration), 1) current0 = current current, trace = slice_trace(current, trace, slices=slices, overlap=overlap, dt=dt) if cut is not None: cut, cut_indices = transform_spikes(popsize, current0, [(0, float(c)) for c in cut], slices=slices, overlap=overlap, dt=dt) cut_steps = int32(cut_length/dt) cut = array(cut/dt, dtype=int32) cut_indices = array(cut_indices, dtype=int32) else: cut_indices = None cut_steps = 0 initial_values = {'V0': initial, 'Ve': 0.} if equations is None: equations = Equations(''' dV0/dt=(R*Iinj-V0+Vr)/tau : 1 Iinj=(V-V0)/Re : 1 dV/dt=Re*(I-Iinj)/taue : 1 Ve=V-V0 : 1 I : 1 R : 1 Re : 1 Vr : 1 tau : second taue : second ''') if threshold is None: threshold = "V>100000" if reset is None: reset = "" if len(params) == 0: params = dict(R = [1.0e3, 1.0e6, 1000.0e6, 1.0e12], Re = [1.0e3, 1.0e6, 1000.0e6, 1.0e12], Vr = [-100.0e-3, -80e-3, -40e-3, -10.0e-3], tau = [.1*ms, 1*ms, 30*ms, 200*ms], taue = [.01*ms, .1*ms, 5*ms, 20*ms]) criterion = LpError(p=p, varname='V') if best_params 
is None: results = modelfitting( model = equations, reset = reset, threshold = threshold, data = trace, input = current, cpu = cpu, gpu = gpu, dt = dt, popsize = popsize, maxiter = maxiter, onset = overlap, criterion = criterion, initial_values = initial_values, **params ) print_table(results) best_params = results.best_params else: results = best_params for key in best_params.keys(): best_params[key] = [best_params[key]] criterion_values, record_values = simulate( model = equations, reset = reset, threshold = threshold, data = trace, input = current, dt = dt, neurons = slices, # 1 neuron/slice onset = overlap, criterion = criterion, use_gpu = True, record = record, initial_values = initial_values, **best_params ) traceV = record_values[0][:,int(overlap/dt):].flatten() traceVe = record_values[1][:,int(overlap/dt):].flatten() # return traceV, traceVe, results return trace0 - traceVe, traceV, traceVe, results brian-1.3.1/brian/experimental/modelfitting/criteria.py000077500000000000000000001452201167451777000232660ustar00rootroot00000000000000from brian import Equations, NeuronGroup, Clock, CoincidenceCounter, Network, zeros, array, \ ones, kron, ms, second, concatenate, hstack, sort, nonzero, diff, TimedArray, \ reshape, sum, log, Monitor, NetworkOperation, defaultclock, linspace, vstack, \ arange, sort_spikes, rint, SpikeMonitor, Connection, int32, double,SpikeGeneratorGroup,DelayConnection,\ SpikeCounter,forget,diagflat,inf,sqrt,ones_like,isnan,append,repeat,tile,mean,logical_and,where from brian.tools.statistics import firing_rate, get_gamma_factor from playdoh import * try: import pycuda import pycuda.driver as drv import pycuda from pycuda import gpuarray can_use_gpu = True except ImportError: can_use_gpu = False from brian.experimental.codegen.integration_schemes import * import sys, cPickle class Criterion(Monitor, NetworkOperation): """ Abstract class from which modelfitting criterions should derive. Derived classes should implement the following methods: ``initialize(self, **params)`` Called once before the simulation. ```params`` is a dictionary with criterion-specific parameters. ``timestep_call(self)`` Called at every timestep. ``spike_call(self, neurons)`` Called at every spike, with the spiking neurons as argument. ``get_values(self)`` Called at the end, returns the criterion values. You have access to the following methods: ``self.get_value(self, varname)`` Returns the value of the specified variable for all neurons (vector). You have access to the following attributes: ``self.step`` The time step (integer) ``self.group`` The NeuronGroup ``self.traces=None`` Target traces. A 2-dimensional K*T array where K is the number of targets, and T the total number of timesteps. It is still a 2-dimensional array when K=1. It is None if not specified. ``self.spikes=None`` A list of target spike trains : [(i,t)..] where i is the target index and t the spike time. ``self.N`` The number of neurons in the NeuronGroup ``self.K=1`` The number of targets. ``self.duration`` The total duration of the simulation, in seconds. ``self.total_steps`` The total number of time steps. ``self.dt`` The timestep duration, in seconds. ``self.delays=zeros(self.n)`` The delays for every neuron, in seconds. The delay is relative to the target. ``self.onset=0*ms`` The onset, in seconds. The first timesteps, before onset, should be discarded in the criterion. ``self.intdelays=0`` The delays, but in number of timesteps (``int(delays/dt)``). 
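    A minimal skeleton of a derived criterion, assuming only the interface
    documented above (a spike-based criterion comparing model spike counts
    to the target counts), might look like::

        class SpikeCountCriterion(Criterion):
            type = 'spikes'
            def initialize(self):
                self.count = zeros(self.N)
            def spike_call(self, neurons):
                self.count[neurons] += 1
            def get_values(self):
                return self.count
            def normalize(self, values):
                # closer spike counts give higher (better) values
                return -abs(values - self.target_spikes_count)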
NOTE: traces and spikes have all been sliced before being passed to the criterion object """ def __init__(self, group, traces=None, spikes=None, targets_count=1, duration=None, onset=0*ms, spikes_inline=None, spikes_offset=None,trials_offset=None, traces_inline=None, traces_offset=None, delays=None, when='end', **params): NetworkOperation.__init__(self, None, clock=group.clock, when=when) self.group = group # needed by SpikeMonitor self.source = group self.source.set_max_delay(0) self.delay = 0 self.trials_offset=trials_offset self.N = len(group) # number of neurons self.K = targets_count # number of targets self.dt = self.clock.dt if traces is not None: self.traces = array(traces) # KxT array if self.traces.ndim == 1: self.traces = self.traces.reshape((1,-1)) assert targets_count==self.traces.shape[0] self.spikes = spikes # get the duration from the traces if duration is not specified in the constructor if duration is None: duration = self.traces.shape[1] # total number of steps self.duration = duration self.total_steps = int(duration/self.dt) if delays is None: delays = zeros(self.n) self.delays = delays self.onset = int(onset/self.dt) self.intdelays = array(self.delays/self.clock.dt, dtype=int) self.mindelay = min(delays) self.maxdelay = max(delays) # the following data is sliced self.spikes_inline = spikes_inline self.spikes_offset = spikes_offset self.traces_inline = traces_inline self.traces_offset = traces_offset if self.spikes is not None: # target spike count and rates self.target_spikes_count = self.get_spikes_count(self.spikes) self.target_spikes_rate = self.get_spikes_rate(self.spikes) self.initialize(**params) def step(self): """ Return the current time step """ return int(round(self.clock.t/self.dt)) def get_spikes_count(self, spikes): count = zeros(self.K) for (i,t) in spikes: count[i] += 1 return count def get_spikes_rate(self, spikes): count = self.get_spikes_count(spikes) return count*1.0/self.duration def get_gpu_code(self): # TO IMPLEMENT """ Returns CUDA code snippets to put at various places in the CUDA code template. It must return a dictionary with the following items: %CRITERION_DECLARATION%: kernel declaration code %CRITERION_INIT%: initialization code %CRITERION_TIMESTEP%: main code, called at every time step %CRITERION_END%: finalization code """ log_warn("GPU code not implemented by the derived criterion class") def initialize_cuda_variables(self): # TO IMPLEMENT """ Initialize criterion-specific CUDA variables here. """ pass def get_kernel_arguments(self): # TO IMPLEMENT """ Return a list of objects to pass to the CUDA kernel. """ pass def initialize(self): # TO IMPLEMENT """ Override this method to initialize the criterion before the simulation """ pass def timestep_call(self): # TO IMPLEMENT """ Override this method to do something at every time step """ pass def spike_call(self, neurons): # TO IMPLEMENT """ Override this method to do something at every time spike. neurons contains the list of neurons that just spiked """ pass def get_value(self, varname): return self.group.state_(varname) def __call__(self): self.timestep_call() neurons = self.group.get_spikes() self.spike_call(neurons) # def propagate(self, neurons): # self.spike_call(neurons) def get_values(self): # TO IMPLEMENT """ Override this method to return the criterion values at the end. It must return one (or several, as a tuple) additive values, i.e., values corresponding to slices. The method normalize can normalize combined values. 
""" pass additive_values = property(get_values) def normalize(self, values): # TO IMPLEMENT """ Values contains combined criterion values. It is either a vector of values (as many values as neurons), or a tuple with different vector of as many values as neurons. """ pass class CriterionStruct(object): type = None # 'trace', 'spike' or 'both' def get_name(self): return self.__class__.__name__ name = property(get_name) class LpErrorCriterion(Criterion): type = 'traces' def initialize(self, p=2, varname='v', method='all',insets=None,outsets=None,points=None): """ Called at the beginning of every iteration. The keyword arguments here are specified in modelfitting.initialize_criterion(). """ self.method=method if insets is not None: self.insets=array(append(insets,insets[-1]+100000000),dtype=int) self.outsets=array(append(outsets,outsets[-1]+100000000),dtype=int) if points is not None: self.points=array(append(points,points[-1]+100000000),dtype=int) self.next_index=0 self.p = double(p) self.varname = varname self._error = zeros((self.K, self.N)) def timestep_call(self): if self.method=='all': v = self.get_value(self.varname) t = self.step()+1 if t= self.duration: return d = self.intdelays indices = (t-d>=0)&(t-d= self.duration: return if t>=self.insets[self.next_index] and t<=self.outsets[self.next_index]: if t==self.outsets[self.next_index]: self.next_index+=1 d = self.intdelays indices = (t-d>=0)&(t-d= self.duration: return if t==self.points[self.next_index]: self.next_index+=1 d = self.intdelays indices = (t-d>=0)&(t-d= onset)&(Tdelay= onset)&(Tdelay= inset)&(T <= outset)) { if (T == outset) {next_index+=1; inset=insets[next_index]; outset=outsets[next_index]; } error = error + pow(abs(trace_value - %s), %.4f); } """ %(self.varname, self.p) # FINALIZATION code['%CRITERION_END%'] = """ error_arr[neuron_index] = error; *next_index_past=next_index; """ if self.method=='points': # DECLARATION code['%CRITERION_DECLARE%'] = """ double *error_arr, int *points, int *next_index_past,""" # INITIALIZATION code['%CRITERION_INIT%'] = """ double error = error_arr[neuron_index]; int next_index=*next_index_past; int point=points[next_index]; """ # TIMESTEP #(T >= onset)&(Tdelayilevel*self.level_duration,temp<=(ilevel+1)*self.level_duration) # print where(ind)[0] self.target_count_level[ilevel,itrial]=len(where(ind)[0]) # First target spikes (needed for the computation of # the target train firing rates) # self.first_target_spike = zeros(self.N) self.last_spike_allowed = ones(self.N, dtype='bool') self.next_spike_allowed = ones(self.N, dtype='bool') if 1: #threadIdx.x+blockDim.x*itrial def get_cuda_code(self): block_size=32 code = {} # DECLARATION code['%CRITERION_DECLARE%'] = """ int *spikecount, // Number of spikes produced by each neuron int *num_coincidences, // Count of coincidences for each neuron """ if self.algorithm == 'exclusive': code['%CRITERION_DECLARE%'] += """ int *sp_trial_indices, // int *last_spike_time_arr, // pass zero array int *next_spike_time_arr, bool *last_spike_allowed_arr, bool *next_spike_allowed_arr, int *spike_count_level, int *current_level_index, """ # INITIALIZATION code['%CRITERION_INIT%'] = """ int ntrials=%i;//*ntrial; int nlevel=%i; int level_duration=%i; int level_ind = current_level_index[neuron_index]; int nspikes_level = spike_count_level[neuron_index*nlevel+level_ind]; int nspikes = spikecount[neuron_index]; __shared__ int ncoinc[%i]; __shared__ int sp_trial_indices_temp[%i]; __shared__ int last_spike_time[%i]; __shared__ int next_spike_time[%i]; __shared__ bool 
last_spike_allowed[%i]; __shared__ bool next_spike_allowed[%i]; for(int itrial=0;itrial=Tspike; bool near_next_spike = next_spike_time[threadIdx.x*ntrials+itrial]-%d<=Tspike; near_last_spike = near_last_spike && has_spiked; near_next_spike = near_next_spike && has_spiked; ncoinc[threadIdx.x*ntrials+itrial] +=(near_last_spike&&last_spike_allowed[threadIdx.x*ntrials+itrial]) || (near_next_spike&&next_spike_allowed[threadIdx.x*ntrials+itrial]); bool near_both_allowed = (near_last_spike&&last_spike_allowed[threadIdx.x*ntrials+itrial]) && (near_next_spike&&next_spike_allowed[threadIdx.x*ntrials+itrial]); last_spike_allowed[threadIdx.x*ntrials+itrial] = last_spike_allowed[threadIdx.x*ntrials+itrial] && !near_last_spike; next_spike_allowed[threadIdx.x*ntrials+itrial] = (next_spike_allowed[threadIdx.x*ntrials+itrial] && !near_next_spike) || near_both_allowed; if(Tspike>=next_spike_time[threadIdx.x*ntrials+itrial]){ sp_trial_indices_temp[threadIdx.x*ntrials+itrial]++; last_spike_time[threadIdx.x*ntrials+itrial] = next_spike_time[threadIdx.x*ntrials+itrial]; next_spike_time[threadIdx.x*ntrials+itrial] = spiketimes[sp_trial_indices_temp[threadIdx.x*ntrials+itrial]+1]; } last_spike_allowed[threadIdx.x*ntrials+itrial] = next_spike_allowed[threadIdx.x*ntrials+itrial]; next_spike_allowed[threadIdx.x*ntrials+itrial] = true; } nspikes += has_spiked*(T>=onset); if (T>(level_ind+1)*level_duration) { spike_count_level[neuron_index*nlevel+level_ind]=nspikes_level; level_ind+=1; nspikes_level=spike_count_level[neuron_index*nlevel+level_ind]; } nspikes_level += has_spiked*(T>=onset); """% (self.delta, self.delta) # FINALIZATION code['%CRITERION_END%'] = """ spike_count_level[neuron_index*nlevel+level_ind]=nspikes_level; if (Tend>(level_ind+1)*level_duration) { level_ind+=1; } for(int itrial=0;itrial=Tspike; bool near_next_spike = next_spike_time_arr[neuron_index*ntrials+itrial]-%d<=Tspike; near_last_spike = near_last_spike && has_spiked; near_next_spike = near_next_spike && has_spiked; num_coincidences[neuron_index*ntrials+itrial] +=(near_last_spike&&last_spike_allowed_arr[neuron_index*ntrials+itrial]) || (near_next_spike&&next_spike_allowed_arr[neuron_index*ntrials+itrial]); bool near_both_allowed = (near_last_spike&&last_spike_allowed_arr[neuron_index*ntrials+itrial]) && (near_next_spike&&next_spike_allowed_arr[neuron_index*ntrials+itrial]); last_spike_allowed_arr[neuron_index*ntrials+itrial] = last_spike_allowed_arr[neuron_index*ntrials+itrial] && !near_last_spike; next_spike_allowed_arr[neuron_index*ntrials+itrial] = (next_spike_allowed_arr[neuron_index*ntrials+itrial] && !near_next_spike) || near_both_allowed; if(Tspike>=next_spike_time_arr[neuron_index*ntrials+itrial]){ sp_trial_indices[neuron_index*ntrials+itrial]+=1; last_spike_time_arr[neuron_index*ntrials+itrial] = next_spike_time_arr[neuron_index*ntrials+itrial]; next_spike_time_arr[neuron_index*ntrials+itrial] = spiketimes[sp_trial_indices[neuron_index*ntrials+itrial]+1]; } last_spike_allowed_arr[neuron_index*ntrials+itrial] = next_spike_allowed_arr[neuron_index*ntrials+itrial]; next_spike_allowed_arr[neuron_index*ntrials+itrial] = true; } nspikes += has_spiked*(T>=onset); """% (self.delta, self.delta) # FINALIZATION code['%CRITERION_END%'] = """ spikecount[neuron_index] = nspikes; """ return code def initialize_cuda_variables(self): """ Initialize GPU variables to pass to the kernel """ self.spike_count_gpu = gpuarray.to_gpu(zeros(self.N, dtype=int32)) self.coincidences_gpu = gpuarray.to_gpu(zeros(self.N*self.ntrials, dtype=int32)) 
self.sp_trial_indices_gpu = gpuarray.to_gpu(array(self.trials_offset,dtype=int32)) self.last_spike_time_gpu = gpuarray.to_gpu(zeros(self.N*self.ntrials, dtype=int32)) self.next_spike_time_gpu = gpuarray.to_gpu(zeros(self.N*self.ntrials, dtype=int32)) self.last_spike_allowed_arr = gpuarray.to_gpu(zeros(self.N*self.ntrials, dtype=bool)) self.next_spike_allowed_arr = gpuarray.to_gpu(ones(self.N*self.ntrials, dtype=bool)) self.spike_count_level_gpu = gpuarray.to_gpu(zeros(self.N*self.nlevels, dtype=int32)) self.current_level_index_gpu = gpuarray.to_gpu(zeros(self.N, dtype=int32)) def get_kernel_arguments(self): """ Return a list of objects to pass to the CUDA kernel. """ args = [self.spike_count_gpu, self.coincidences_gpu,self.sp_trial_indices_gpu,self.last_spike_time_gpu,self.next_spike_time_gpu] args += [self.last_spike_allowed_arr,self.next_spike_allowed_arr,self.spike_count_level_gpu,self.current_level_index_gpu] return args def update_gpu_values(self): """ Call gpuarray.get() on final values, so that get_values() returns updated values. """ self.coincidences = self.coincidences_gpu.get() self.spike_count = self.spike_count_gpu.get() self.spike_count_level = self.spike_count_level_gpu.get() # print self.sp_trial_indices_gpu.get() def get_values(self): return (self.coincidences, self.spike_count) def normalize(self, values): coincidence_count = values[0] spike_count = values[1] spike_count_temp = repeat(spike_count,self.ntrials) # self.target_count_temp = tile(self.target_count,self.N) delta = self.delta*self.dt if self.ntrials>1: coincidence_count = sum(reshape(coincidence_count,(self.N,self.ntrials)),axis=1) gamma = get_gamma_factor(coincidence_count, spike_count*self.ntrials, sum(self.target_count), array(mean(self.target_rate)), delta) self.spike_count_level=reshape(self.spike_count_level,(self.N,self.nlevels)) # print self.target_count_level.shape,sum(self.target_count_level,axis=1) # print self.spike_count_level-sum(self.target_count_level,axis=1) # print abs(self.spike_count_level*self.ntrials-sum(self.target_count_level,axis=1))/sum(self.target_count_level,axis=1) # print self.target_count_level if self.fr_weight!=0: gamma = gamma - self.fr_weight*mean(abs(self.spike_count_level*self.ntrials-sum(self.target_count_level,axis=1))/sum(self.target_count_level,axis=1),axis=1) return gamma class GammaFactor2(CriterionStruct): def __init__(self, delta = 4*ms,fr_weight=0, coincidence_count_algorithm = 'exclusive',nlevels=1,level_duration=100): self.type = 'spikes' self.delta = delta self.nlevels = nlevels self.fr_weight = fr_weight self.level_duration=level_duration self.coincidence_count_algorithm = coincidence_count_algorithm class GammaFactorCriterion(Criterion): """ Coincidence counter class. Counts the number of coincidences between the spikes of the neurons in the network (model spikes), and some user-specified data spike trains (target spikes). This number is defined as the number of target spikes such that there is at least one model spike within +- ``delta``, where ``delta`` is the half-width of the time window. Initialised as:: cc = CoincidenceCounter(source, data, delta = 4*ms) with the following arguments: ``source`` A :class:`NeuronGroup` object which neurons are being monitored. ``data`` The list of spike times. Several spike trains can be passed in the following way. Define a single 1D array ``data`` which contains all the target spike times one after the other. 
Now define an array ``spiketimes_offset`` of integers so that neuron ``i`` should be linked to target train: ``data[spiketimes_offset[i]], data[spiketimes_offset[i]+1]``, etc. It is essential that each spike train with the spiketimes array should begin with a spike at a large negative time (e.g. -1*second) and end with a spike that is a long time after the duration of the run (e.g. duration+1*second). ``delta=4*ms`` The half-width of the time window for the coincidence counting algorithm. ``spiketimes_offset`` A 1D array, ``spiketimes_offset[i]`` is the index of the first spike of the target train associated to neuron i. ``spikedelays`` A 1D array with spike delays for each neuron. All spikes from the target train associated to neuron i are shifted by ``spikedelays[i]``. ``coincidence_count_algorithm`` If set to ``'exclusive'``, the algorithm cannot count more than one coincidence for each model spike. If set to ``'inclusive'``, the algorithm can count several coincidences for a single model spike. ``onset`` A scalar value in seconds giving the start of the counting: no coincidences are counted before ``onset``. Has three attributes: ``coincidences`` The number of coincidences for each neuron of the :class:`NeuronGroup`. ``coincidences[i]`` is the number oflength[i]`` is the spike count for n i. ``target_length`` The number of spikes in the target spike train associated to each neuron. """ type = 'spikes' def initialize(self, delta=4*ms, fr_weight=0,coincidence_count_algorithm='exclusive'): self.algorithm = coincidence_count_algorithm self.delta = int(rint(delta / self.dt)) self.fr_weight = fr_weight self.spike_count = zeros(self.N, dtype='int') self.coincidences = zeros(self.N, dtype='int') self.spiketime_index = self.spikes_offset self.last_spike_time = array(rint(self.spikes_inline[self.spiketime_index] / self.dt), dtype=int) self.next_spike_time = array(rint(self.spikes_inline[self.spiketime_index + 1] / self.dt), dtype=int) # First target spikes (needed for the computation of # the target train firing rates) # self.first_target_spike = zeros(self.N) self.last_spike_allowed = ones(self.N, dtype='bool') self.next_spike_allowed = ones(self.N, dtype='bool') def spike_call(self, spiking_neurons): dt = self.dt t = self.step()*dt spiking_neurons = array(spiking_neurons) if len(spiking_neurons)>0: if t >= self.onset: self.spike_count[spiking_neurons] += 1 T_spiking = array(rint((t + self.delays[spiking_neurons]) / dt), dtype=int) remaining_neurons = spiking_neurons remaining_T_spiking = T_spiking while True: remaining_indices, = (remaining_T_spiking > self.next_spike_time[remaining_neurons]).nonzero() if len(remaining_indices): indices = remaining_neurons[remaining_indices] self.spiketime_index[indices] += 1 self.last_spike_time[indices] = self.next_spike_time[indices] self.next_spike_time[indices] = array(rint(self.spikes_inline[self.spiketime_index[indices] + 1] / dt), dtype=int) if self.algorithm == 'exclusive': self.last_spike_allowed[indices] = self.next_spike_allowed[indices] self.next_spike_allowed[indices] = True remaining_neurons = remaining_neurons[remaining_indices] remaining_T_spiking = remaining_T_spiking[remaining_indices] else: break # Updates coincidences count near_last_spike = self.last_spike_time[spiking_neurons] + self.delta >= T_spiking near_next_spike = self.next_spike_time[spiking_neurons] - self.delta <= T_spiking last_spike_allowed = self.last_spike_allowed[spiking_neurons] next_spike_allowed = self.next_spike_allowed[spiking_neurons] I = (near_last_spike & 
last_spike_allowed) | (near_next_spike & next_spike_allowed) if t >= self.onset: self.coincidences[spiking_neurons[I]] += 1 if self.algorithm == 'exclusive': near_both_allowed = (near_last_spike & last_spike_allowed) & (near_next_spike & next_spike_allowed) self.last_spike_allowed[spiking_neurons] = last_spike_allowed & -near_last_spike self.next_spike_allowed[spiking_neurons] = (next_spike_allowed & -near_next_spike) | near_both_allowed def get_cuda_code(self): code = {} # DECLARATION code['%CRITERION_DECLARE%'] = """ int *spikecount, // Number of spikes produced by each neuron int *num_coincidences, // Count of coincidences for each neuron """ if self.algorithm == 'exclusive': code['%CRITERION_DECLARE%'] += """ bool *last_spike_allowed_arr, bool *next_spike_allowed_arr, """ # INITIALIZATION code['%CRITERION_INIT%'] = """ int ncoinc = num_coincidences[neuron_index]; int nspikes = spikecount[neuron_index]; int last_spike_time = spiketimes[spiketime_index]; int next_spike_time = spiketimes[spiketime_index+1]; """ if self.algorithm == 'exclusive': code['%CRITERION_INIT%'] += """ bool last_spike_allowed = last_spike_allowed_arr[neuron_index]; bool next_spike_allowed = next_spike_allowed_arr[neuron_index]; """ # TIMESTEP code['%CRITERION_TIMESTEP%'] = """ const int Tspike = T+spikedelay; """ if self.algorithm == 'inclusive': code['%CRITERION_TIMESTEP%'] += """ ncoinc += has_spiked && (((last_spike_time+%d)>=Tspike) || ((next_spike_time-%d)<=Tspike)); """ % (self.delta, self.delta) if self.algorithm == 'exclusive': code['%CRITERION_TIMESTEP%'] += """ bool near_last_spike = last_spike_time+%d>=Tspike; bool near_next_spike = next_spike_time-%d<=Tspike; near_last_spike = near_last_spike && has_spiked; near_next_spike = near_next_spike && has_spiked; ncoinc += (near_last_spike&&last_spike_allowed) || (near_next_spike&&next_spike_allowed); bool near_both_allowed = (near_last_spike&&last_spike_allowed) && (near_next_spike&&next_spike_allowed); last_spike_allowed = last_spike_allowed && !near_last_spike; next_spike_allowed = (next_spike_allowed && !near_next_spike) || near_both_allowed; """ % (self.delta, self.delta) code['%CRITERION_TIMESTEP%'] += """ nspikes += has_spiked*(T>=onset); if(Tspike>=next_spike_time){ spiketime_index++; last_spike_time = next_spike_time; next_spike_time = spiketimes[spiketime_index+1]; """ if self.algorithm == 'exclusive': code['%CRITERION_TIMESTEP%'] += """ last_spike_allowed = next_spike_allowed; next_spike_allowed = true; """ code['%CRITERION_TIMESTEP%'] += """ } """ # FINALIZATION code['%CRITERION_END%'] = """ num_coincidences[neuron_index] = ncoinc; spikecount[neuron_index] = nspikes; """ if self.algorithm == 'exclusive': code['%CRITERION_END%'] += """ last_spike_allowed_arr[neuron_index] = last_spike_allowed; next_spike_allowed_arr[neuron_index] = next_spike_allowed; """ return code def initialize_cuda_variables(self): """ Initialize GPU variables to pass to the kernel """ self.spike_count_gpu = gpuarray.to_gpu(zeros(self.N, dtype=int32)) self.coincidences_gpu = gpuarray.to_gpu(zeros(self.N, dtype=int32)) if self.algorithm == 'exclusive': self.last_spike_allowed_arr = gpuarray.to_gpu(zeros(self.N, dtype=bool)) self.next_spike_allowed_arr = gpuarray.to_gpu(ones(self.N, dtype=bool)) def get_kernel_arguments(self): """ Return a list of objects to pass to the CUDA kernel. 
""" args = [self.spike_count_gpu, self.coincidences_gpu] if self.algorithm == 'exclusive': args += [self.last_spike_allowed_arr, self.next_spike_allowed_arr] return args def update_gpu_values(self): """ Call gpuarray.get() on final values, so that get_values() returns updated values. """ self.coincidences = self.coincidences_gpu.get() self.spike_count = self.spike_count_gpu.get() def get_values(self): return (self.coincidences, self.spike_count) def normalize(self, values): coincidence_count = values[0] spike_count = values[1] delta = self.delta*self.dt gamma = get_gamma_factor(coincidence_count, spike_count, self.target_spikes_count, self.target_spikes_rate, delta) gamma = gamma - self.fr_weight*abs(spike_count-self.target_spikes_count)/self.target_spikes_count # print 'fr',abs(spike_count-self.target_spikes_count)/self.target_spikes_count,'fr' return gamma class GammaFactor(CriterionStruct): def __init__(self, delta = 4*ms,fr_weight=0, coincidence_count_algorithm = 'exclusive'): self.type = 'spikes' self.delta = delta self.fr_weight = fr_weight self.coincidence_count_algorithm = coincidence_count_algorithm class VanRossumCriterion(Criterion): type = 'spikes' def initialize(self, tau): self.delay_range =max(self.delays)- min(self.delays)#delay range self.min_delay = abs(min(self.delays))#minimum of possible delay self.distance_vector=zeros(self.N) self.nbr_neurons_group = self.N/self.K eqs=""" dv/dt=(-v)/tau: volt """ # network to convolve target spikes with the kernel self.input_target=SpikeGeneratorGroup(self.K,self.spikes,clock=self.group.clock) self.kernel_target=NeuronGroup(self.K,model=eqs,clock=self.group.clock) self.C_target = DelayConnection(self.input_target, self.kernel_target, 'v', structure='dense', max_delay=self.min_delay) self.C_target.connect_one_to_one(self.input_target,self.kernel_target) self.C_target.delay = self.min_delay*ones_like(self.C_target.delay) # network to convolve population spikes with the kernel self.kernel_population=NeuronGroup(self.N,model=eqs,clock=self.group.clock) self.C_population = DelayConnection(self.group, self.kernel_population, 'v', structure='sparse', max_delay=self.delay_range) for iN in xrange(self.N): self.C_population.delay[iN,iN] = diagflat(self.min_delay + self.delays[iN]) self.C_population.connect_one_to_one(self.group,self.kernel_population) self.spikecount_mon = SpikeCounter(self.group) self.contained_objects = [self.kernel_population,self.C_population,self.spikecount_mon,self.input_target,self.C_target,self.kernel_target] self.dt=self.group.clock.dt self.tau = tau def timestep_call(self): trace_population = self.kernel_population.state_('v') trace_target = self.kernel_target.state_('v') for igroup in xrange(self.K): self.distance_vector[igroup*self.nbr_neurons_group:(1+igroup)*self.nbr_neurons_group] += (trace_population[igroup*self.nbr_neurons_group:(1+igroup)*self.nbr_neurons_group]-trace_target[igroup])**2 self.spikecount=self.spikecount_mon.count def get_values(self): return (self.distance_vector) def normalize(self, distance_vector): distance_vector[nonzero(self.spikecount==0)] = inf #distance_vector[nonzero(self.spikecount==1)] = inf return 1-distance_vector*self.group.clock.dt def get_cuda_code(self): code = {} # DECLARATION code['%CRITERION_DECLARE%'] = """ double *error_arr, int *count_arr, """ # INITIALIZATION code['%CRITERION_INIT%'] = """ double error = error_arr[neuron_index]; double V_kernel =0; double V_target =0; int count=count_arr[neuron_index]; int next_spike_time = spiketimes[spiketime_index+1]; """ # TIMESTEP # 
code['%CRITERION_TIMESTEP%'] = """ # """ code['%CRITERION_TIMESTEP%'] = """ const int Tspike = T+spikedelay; if(has_spiked){ // If the neuron has spiked V_kernel+=1; } count += has_spiked*(T>=onset); if(Tspike>=next_spike_time){ V_target+=1; spiketime_index++; next_spike_time = spiketimes[spiketime_index+1]; } V_kernel=V_kernel*exp(-%.8f/%.6f); V_target=V_target*exp(-%.8f/%.6f); error+=pow(V_kernel-V_target,2); """ % (self.dt,self.tau,self.dt,self.tau) # FINALIZATION code['%CRITERION_END%'] = """ error_arr[neuron_index] = error; count_arr[neuron_index] = count; """ return code def initialize_cuda_variables(self): """ Initialize GPU variables to pass to the kernel """ self.error_gpu = gpuarray.to_gpu(zeros(self.N, dtype=double)) self.count_gpu = gpuarray.to_gpu(zeros(self.N, dtype=int32)) def get_kernel_arguments(self): """ Return a list of objects to pass to the CUDA kernel. """ args = [self.error_gpu,self.count_gpu] return args def update_gpu_values(self): """ Call gpuarray.get() on final values, so that get_values() returns updated values. """ self.distance_vector = self.error_gpu.get() self.spikecount = self.count_gpu.get() class VanRossum(CriterionStruct): def __init__(self, tau): self.type = 'spikes' self.tau = tau class BretteCriterion(Criterion): type = 'spikes' def initialize(self,tau_metric): # tau_metric=tau_metric** self.delay_range =max(self.delays)- min(self.delays)#delay range self.min_delay = abs(min(self.delays))#minimum of possible delay self.corr_vector=zeros(self.N) self.norm_pop = zeros(self.N) self.norm_target = zeros(self.N) self.nbr_neurons_group = self.N/self.K eqs=""" tau:second dv/dt=(-v)/tau: volt """ # network to convolve target spikes with the kernel self.input_target=SpikeGeneratorGroup(self.K,self.spikes,clock=self.group.clock) self.kernel_target=NeuronGroup(self.N,model=eqs,clock=self.group.clock) self.C_target = DelayConnection(self.input_target, self.kernel_target, 'v', structure='sparse', max_delay=self.min_delay) self.kernel_target.tau=tau_metric for igroup in xrange(self.K): self.C_target.W[igroup,igroup*self.nbr_neurons_group:(1+igroup)*self.nbr_neurons_group] = ones(self.nbr_neurons_group) self.C_target.delay[igroup,igroup*self.nbr_neurons_group:(1+igroup)*self.nbr_neurons_group] = self.min_delay * ones(self.nbr_neurons_group) # network to convolve population spikes with the kernel self.kernel_population=NeuronGroup(self.N,model=eqs,clock=self.group.clock) self.C_population = DelayConnection(self.group, self.kernel_population, 'v', structure='sparse', max_delay=self.delay_range) for iN in xrange(self.N): self.C_population.delay[iN,iN] = diagflat(self.min_delay + self.delays[iN]) self.C_population.connect_one_to_one(self.group,self.kernel_population) self.kernel_population.tau=tau_metric self.spikecount_mon = SpikeCounter(self.group) self.contained_objects = [self.kernel_population,self.C_population,self.spikecount_mon,self.input_target,self.C_target,self.kernel_target] self.tau_metric=tau_metric self.dt=self.group.clock.dt def timestep_call(self): trace_population = self.kernel_population.state_('v') trace_target = self.kernel_target.state_('v') self.corr_vector += trace_population*trace_target self.norm_pop += trace_population**2 self.norm_target += trace_target**2 self.spikecount=self.spikecount_mon.count def get_values(self): #print self.corr_vector,self.norm_pop,self.norm_target return (self.corr_vector,self.norm_pop,self.norm_target) def normalize(self, values): corr_vector=values[0] norm_pop=values[1] norm_target=values[2] 
corr_vector[nonzero(self.spikecount==0)] = -inf temp=self.corr_vector/self.tau_metric/sqrt(norm_pop)/sqrt(norm_target) temp[isnan(temp)] = -inf return temp def get_cuda_code(self): code = {} # DECLARATION code['%CRITERION_DECLARE%'] = """ double *error_arr, int *count_arr, double *tau_metric_arr, double *norm_pop_arr, double *norm_target_arr, """ # INITIALIZATION code['%CRITERION_INIT%'] = """ double error = error_arr[neuron_index]; double V_kernel =0; double V_target =0; double tau_metric=tau_metric_arr[neuron_index]; int count=count_arr[neuron_index]; int next_spike_time = spiketimes[spiketime_index+1]; double norm_pop=norm_pop_arr[neuron_index]; double norm_target=norm_target_arr[neuron_index]; """ # TIMESTEP # code['%CRITERION_TIMESTEP%'] = """ # """ code['%CRITERION_TIMESTEP%'] = """ const int Tspike = T+spikedelay; if(has_spiked){ // If the neuron has spiked V_kernel+=1; } count += has_spiked*(T>=onset); if(Tspike>=next_spike_time){ V_target+=1; spiketime_index++; next_spike_time = spiketimes[spiketime_index+1]; } V_kernel=V_kernel*exp(-%.8f/tau_metric); V_target=V_target*exp(-%.8f/tau_metric); error+=V_kernel*V_target; norm_pop+=pow(V_kernel,2); norm_target +=pow(V_target,2); """ % (self.dt,self.dt) # FINALIZATION code['%CRITERION_END%'] = """ error_arr[neuron_index] = error; count_arr[neuron_index] = count; norm_pop_arr[neuron_index] = norm_pop; norm_target_arr[neuron_index] = norm_target; """ return code def initialize_cuda_variables(self): """ Initialize GPU variables to pass to the kernel """ self.error_gpu = gpuarray.to_gpu(zeros(self.N, dtype=double)) self.count_gpu = gpuarray.to_gpu(zeros(self.N, dtype=int32)) self.tau_metric_gpu = gpuarray.to_gpu(array(self.tau_metric, dtype=double)) self.norm_pop_gpu = gpuarray.to_gpu(zeros(self.N, dtype=double)) self.norm_target_gpu = gpuarray.to_gpu(zeros(self.N, dtype=double)) def get_kernel_arguments(self): """ Return a list of objects to pass to the CUDA kernel. """ args = [self.error_gpu,self.count_gpu,self.tau_metric_gpu,self.norm_pop_gpu,self.norm_target_gpu] return args def update_gpu_values(self): """ Call gpuarray.get() on final values, so that get_values() returns updated values. 
""" self.corr_vector = self.error_gpu.get() self.spikecount = self.count_gpu.get() self.norm_pop = self.norm_pop_gpu.get() self.norm_target = self.norm_target_gpu.get() class Brette(CriterionStruct): def __init__(self): self.type = 'spikes' brian-1.3.1/brian/experimental/modelfitting/gpu_modelfitting.py000077500000000000000000000614151167451777000250270ustar00rootroot00000000000000from brian import Equations, NeuronGroup, Clock, CoincidenceCounter, Network, zeros, array, \ ones, kron, ms, second, concatenate, hstack, sort, nonzero, diff, TimedArray, \ reshape, sum, log, Monitor, NetworkOperation, defaultclock, linspace, vstack, \ arange, sort_spikes, rint, SpikeMonitor, Connection, Threshold, Reset, \ int32, double, VariableReset, StringReset, VariableThreshold, StringThreshold, \ Refractoriness from brian.tools.statistics import firing_rate, get_gamma_factor from playdoh import * import brian.optimiser as optimiser import pycuda.driver as drv import pycuda from pycuda.gpuarray import GPUArray from pycuda import gpuarray try: from pycuda.compiler import SourceModule except ImportError: from pycuda.driver import SourceModule from numpy import * from brian.experimental.codegen.integration_schemes import * from brian.experimental.codegen.codegen_gpu import * import re __all__ = ['GPUModelFitting', 'euler_scheme', 'rk2_scheme', 'exp_euler_scheme','close_cuda' ] if drv.get_version() == (2, 0, 0): # cuda version default_precision = 'float' elif drv.get_version() > (2, 0, 0): default_precision = 'double' else: raise Exception, "CUDA 2.0 required" class ModelfittingGPUCodeGenerator(GPUCodeGenerator): def generate(self, eqs, scheme): vartype = self.vartype() code = '' for line in self.scheme(eqs, scheme).split('\n'): line = line.strip() if line: code += ' ' + line + '\n' return code BLOCKSIZE = 256 def get_cuda_template(): return """ __global__ void runsim( // ITERATIONS int Tstart, int Tend, // Start, end time as integer (t=T*dt) int duration, // Total duration as integer // STATE VARIABLES %SCALAR% *state_vars, // State variables are offset from this // INPUT PARAMETERS double *I_arr, // Input current int *I_arr_offset, // Input current offset (for separate input // currents for each neuron) // DELAYS PARAMETERS int *spikedelay_arr, // Integer delay for each spike // REFRACTORY PARAMETERS int *refractory_arr, // Integer refractory times int *next_allowed_spiketime_arr, // Integer time of the next allowed spike (for refractoriness) // CRITERION SPECIFIC PARAMETERS %CRITERION_DECLARE% // DATA DECLARE %DATA_DECLARE% // STATEMONITOR DECLARE %STATEMONITOR_DECLARE% // SPIKEMONITOR DECLARE %SPIKEMONITOR_DECLARE% // MISC PARAMETERS int onset // Time onset (only count spikes from here onwards) ) { // NEURON INDEX const int neuron_index = blockIdx.x * blockDim.x + threadIdx.x; if(neuron_index>=%NUM_NEURONS%) return; // EXTRACT STATE VARIABLES %EXTRACT_STATE_VARIABLES% // LOAD VARIABLES %LOAD_VARIABLES% // DATA INIT %DATA_INIT% // STATEMONITOR INIT %STATEMONITOR_INIT% // SPIKEMONITOR INIT %SPIKEMONITOR_INIT% // CRITERION INITIALIZATION %CRITERION_INIT% // INPUT INITIALIZATION int I_offset0 = I_arr_offset[blockIdx.x * blockDim.x]; // I_offset of the first thread in the block int I_offset = I_arr_offset[neuron_index]; // DELAYS INITIALIZATION int spikedelay = spikedelay_arr[neuron_index]; // REFRACTORY INITIALIZATION const int refractory = refractory_arr[neuron_index]; int next_allowed_spiketime = next_allowed_spiketime_arr[neuron_index]; %SCALAR% ${input_var} = 0; for(int T=Tstart; T=onset for neurons in the 
first group if ((neuron_index>=%NUM_NEURONS_FIRSTGROUP%)|(T>=onset)) { %STATE_UPDATE% } // THRESHOLD const bool is_refractory = (T<=next_allowed_spiketime); const bool has_spiked = (%THRESHOLD%)&&!is_refractory; // RESET if(has_spiked||is_refractory) { %RESET1% } if(has_spiked) { next_allowed_spiketime = T+refractory; %RESET2% %SPIKEMONITOR_UPDATE% } // DATA UPDATE %DATA_UPDATE% // STATEMONITOR UPDATE %STATEMONITOR_UPDATE% // CRITERION TIMESTEP %CRITERION_TIMESTEP% } // STORE VARIABLES %STORE_VARIABLES% // STATEMONITOR END %STATEMONITOR_END% // SPIKEMONITOR END %SPIKEMONITOR_END% // CRITERION END %CRITERION_END% next_allowed_spiketime_arr[neuron_index] = next_allowed_spiketime; // END %DATA_END% } """ class GPUModelFitting(object): ''' Model fitting class to interface with GPU Initialisation arguments: ``G`` The initialised NeuronGroup to work from. ``eqs`` The equations defining the NeuronGroup. ``I``, ``I_offset`` The current array and offsets (see below). ``spiketimes``, ``spiketimes_offset`` The spike times array and offsets (see below). ``spikedelays``, Array of delays for each neuron. ``refractory``, Array of refractory periods, or a single value. ``delta`` The half-width of the coincidence window. ``precision`` Should be 'float' or 'double' - by default the highest precision your GPU supports. ``coincidence_count_algorithm`` Should be 'inclusive' if multiple predicted spikes can match one target spike, or 'exclusive' (default) if multiple predicted spikes can match only one target spike (earliest spikes are matched first). Methods: ``reinit_vars(I, I_offset, spiketimes, spiketimes_offset, spikedelays)`` Reinitialises all the variables, counters, etc. The state variable values are copied from the NeuronGroup G again, and the variables I, I_offset, etc. are copied from the method arguments. ``launch(duration[, stepsize])`` Runs the kernel on the GPU for simulation time duration. If ``stepsize`` is given, the simulation is broken into pieces of that size. This is useful on Windows because driver limitations mean that individual GPU kernel launches cannot last more than a few seconds without causing a crash. Attributes: ``coincidence_count`` An array of the number of coincidences counted for each neuron. ``spike_count`` An array of the number of spikes counted for each neuron. **Details** The equations for the NeuronGroup can be anything, but they will be solved with the Euler method. One restriction is that there must be a parameter named I which is the time varying input current. (TODO: support for multiple input currents? multiple names?) The current I is passed to the GPU in the following way. Define a single 1D array I which contains all the time varying current arrays one after the other. Now define an array I_offset of integers so that neuron i should see currents: I[I_offset[i]], I[I_offset[i]+1], I[I_offset[i]+2], etc. The experimentally recorded spike times should be passed in a similar way, put all the spike times in a single array and pass an offsets array spiketimes_offset. One difference is that it is essential that each spike train with the spiketimes array should begin with a spike at a large negative time (e.g. -1*second) and end with a spike that is a long time after the duration of the run (e.g. duration+1*second). The GPU uses this to mark the beginning and end of the train rather than storing the number of spikes for each train. 
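    As an illustrative layout (an assumption consistent with the description
    above, not additional API): if two currents of length T samples drive two
    halves of the population, concatenate them into one array I of length 2*T
    and set I_offset[i] = 0 for the first half and I_offset[i] = T for the
    second, so that neuron i reads I[I_offset[i] + t] at timestep t. Target
    spike trains are packed the same way, each bracketed by the guard spikes
    (one at a large negative time, one well past the run duration), with
    spiketimes_offset[i] giving the index where neuron i's train begins.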
''' def __init__(self, G, eqs, criterion, # Criterion object input_var, subpopsize, onset=0*ms, precision=default_precision, statemonitor_var=None, spikemonitor = False, nbr_spikes = 200, duration=None, scheme=euler_scheme, stand_alone=False ): eqs.prepare() self.precision = precision self.scheme = scheme if precision == 'double': self.mydtype = float64 else: self.mydtype = float32 self.stand_alone=stand_alone self.N = len(G) self.dt = G.clock.dt self.onset = onset self.eqs = eqs self.G = G self.spikemonitor = spikemonitor self.nbr_spikes = nbr_spikes self.subpopsize = subpopsize self.duration = int(ceil(duration/self.dt)) self.input_var = input_var self.statemonitor_var = statemonitor_var self.criterion = criterion self.generate_code() def generate_threshold_code(self, src): eqs = self.eqs threshold = self.G._threshold if threshold.__class__ is Threshold: state = threshold.state if isinstance(state, int): state = eqs._diffeq_names[state] threshold = state + '>' + str(float(threshold.threshold)) elif isinstance(threshold, VariableThreshold): state = threshold.state if isinstance(state, int): state = eqs._diffeq_names[state] threshold = state + '>' + threshold.threshold_state elif isinstance(threshold, StringThreshold): namespace = threshold._namespace expr = threshold._expr all_variables = eqs._eq_names + eqs._diffeq_names + eqs._alias.keys() + ['t'] expr = optimiser.freeze(expr, all_variables, namespace) threshold = expr else: raise ValueError('Threshold must be constant, VariableThreshold or StringThreshold.') # Substitute threshold src = src.replace('%THRESHOLD%', threshold) return src def generate_reset_code(self, src): eqs = self.eqs reset = self.G._resetfun # print eqs if reset.__class__ is Refractoriness: state = reset.state if isinstance(state, int): state = eqs._diffeq_names[state] reset = state + ' = ' + str(float(reset.resetvalue)) elif reset.__class__ is Reset: state = reset.state if isinstance(state, int): state = eqs._diffeq_names[state] reset = state + ' = ' + str(float(reset.resetvalue)) elif isinstance(reset, VariableReset): state = reset.state if isinstance(state, int): state = eqs._diffeq_names[state] reset = state + ' = ' + reset.resetvaluestate elif isinstance(reset, StringReset): namespace = reset._namespace expr = reset._expr all_variables = eqs._eq_names + eqs._diffeq_names + eqs._alias.keys() + ['t'] expr = optimiser.freeze(expr, all_variables, namespace) reset = expr # self.reset = reset # Substitute reset reset = '\n '.join(line.strip() + ';' for line in reset.split('\n') if line.strip()) reset=reset.split('\n') src = src.replace('%RESET1%', reset[0]) reset = '\n '.join(line.strip()+'\n' for line in reset[1:]) src = src.replace('%RESET2%', reset) # print src return src def generate_data_code(self, src): # Substitute spikes/traces declare if self.criterion.type == 'spikes': data_declare = """ int *spiketimes, // Array of all spike times as integers (begin and // end each train with large negative value) int *spiketime_indices, // Pointer into above array for each neuron """ if self.criterion.type == 'traces': data_declare = """ double *traces_arr, int *traces_arr_offset, """ src = src.replace('%DATA_DECLARE%', data_declare) # Substitute spikes/traces init if self.criterion.type == 'spikes': data_init = """ int spiketime_index = spiketime_indices[neuron_index]; """ if self.criterion.type == 'traces': data_init = """ int trace_offset = traces_arr_offset[neuron_index]; double trace_value = 0.0; int Tdelay = 0; """ src = src.replace('%DATA_INIT%', data_init) # 
Substitute spikes/traces update if self.criterion.type == 'spikes': data_update = """ """ if self.criterion.type == 'traces': data_update = """ Tdelay = T+spikedelay; if ((Tdelay>=0)&(Tdelay=2.2 # pass self.block = (blocksize, 1, 1) self.grid = (int(ceil(float(self.N) / blocksize)), 1) self.kernel_func_kwds = {'block':self.block, 'grid':self.grid} mydtype = self.mydtype N = self.N eqs = self.eqs statevars_arr = gpuarray.to_gpu(array(self.G._S.flatten(), dtype=mydtype)) self.I = gpuarray.to_gpu(array(I, dtype=mydtype)) self.statevars_arr = statevars_arr self.I_offset = gpuarray.to_gpu(array(I_offset, dtype=int32)) # SPIKES if self.criterion.type == 'spikes': self.initialize_spikes(spiketimes, spiketimes_offset) # TRACES if self.criterion.type == 'traces': self.initialize_traces(traces, traces_offset) self.criterion.initialize_cuda_variables() self.initialize_delays(spikedelays) self.initialize_refractory(refractory) if self.statemonitor_var is not None: self.initialize_statemonitor() if self.spikemonitor is True: self.initialize_spikemonitor() self.initialize_kernel_arguments() def launch(self, duration, stepsize=128*ms): if stepsize is None: self.kernel_func(int32(0), int32(duration / self.dt), *self.kernel_func_args, **self.kernel_func_kwds) pycuda.context.synchronize() else: stepsize = int(stepsize / self.dt) duration = int(duration / self.dt) for Tstart in xrange(0, duration, stepsize): Tend = Tstart + min(stepsize, duration - Tstart) self.kernel_func(int32(Tstart), int32(Tend), int32(duration), *self.kernel_func_args, **self.kernel_func_kwds) pycuda.context.synchronize() # if self.stand_alone and 1: ## print 'close stand alone' ## drv.init() # pycuda.context.pop() # pycuda.context = None def get_statemonitor_values(self): values = [val.get().reshape((self.N, -1)) for val in self.statemonitor_values] if len(self.statemonitor_var)==1: return values[0] else: return values def get_spikemonitor_values(self): values = self.spikemonitor_values.get().reshape((self.N, -1)) return values def close_cuda(): """ Closes the current PyCUDA context. MUST be called at the end of the script. """ print 'close' log_debug("Trying to close current PyCUDA context") if pycuda.context is not None: try: log_debug("Closing current PyCUDA context") pycuda.context.pop() pycuda.context = None except: log_warn("A problem occurred when closing PyCUDA context") brian-1.3.1/brian/experimental/modelfitting/lp2bert.py000066400000000000000000000076731167451777000230440ustar00rootroot00000000000000 if criterion_name == 'LpError2': params['p'] = self.criterion.p params['varname'] = self.criterion.varname params['insets'] = self.criterion.insets params['outsets'] = self.criterion.outsets self.criterion_object = LpErrorCriterion2(**params) class LpErrorCriterion2(Criterion): type = 'traces' def initialize(self, p=2, varname='v',insets=None,outsets=None): """ Called at the beginning of every iteration. The keyword arguments here are specified in modelfitting.initialize_criterion(). 
""" self.p = double(p) self.varname = varname self._error = zeros((self.K, self.N)) self.insets = insets self.outsets = outsets self.next_index=0 self.insets = array(append(insets,inf),dtype=int) self.outsets = array(append(outsets,inf),dtype=int) def timestep_call(self): v = self.get_value(self.varname) t = self.step()+1 if t= self.duration: return if t>=self.insets[self.next_index]and t<=self.outsets[self.next_index]: if t==self.outsets[self.next_index]: self.next_index+=1 d = self.intdelays indices = (t-d>=0)&(t-d= onset)&(Tdelay= insets[next_index])&(T <= outsets[next_index])) { if (T == outsets[next_index]) {next_index+=1;} error = error + pow(abs(trace_value - %s), %.4f); } """ % (self.varname, self.p) # FINALIZATION code['%CRITERION_END%'] = """ error_arr[neuron_index] = error; """ return code def initialize_cuda_variables(self): """ Initialize GPU variables to pass to the kernel """ self.error_gpu = gpuarray.to_gpu(zeros(self.N, dtype=double)) self.insets_gpu = gpuarray.to_gpu(self.insets) self.outsets_gpu = gpuarray.to_gpu(self.outsets) def get_kernel_arguments(self): """ Return a list of objects to pass to the CUDA kernel. """ args = [self.error_gpu,self.insets_gpu,self.outsets_gpu] return args def update_gpu_values(self): """ Call gpuarray.get() on final values, so that get_values() returns updated values. """ self._error = self.error_gpu.get() print self._error def get_values(self): if self.K == 1: error = self._error.flatten() else: error = self._error return error # just the integral, for every slice def normalize(self, error): # error is now the combined error on the whole duration (sum on the slices) # HACK: 1- because modelfitting MAXIMIZES for now... # self._norm = sum(self.traces**self.p, axis=1)**(1./self.p) # norm of every trace return 1-(self.dt*error)**(1./self.p)#/self._norm class LpError2(CriterionStruct): """ Structure used by the users to specify a criterion """ def __init__(self, p = 2, varname = 'v',insets=None,outsets=None): self.type = 'trace' self.p = p self.insets = insets self.outsets = outsets self.varname = varname brian-1.3.1/brian/experimental/modelfitting/modelfitting.py000077500000000000000000000526731167451777000241620ustar00rootroot00000000000000from brian import Equations, NeuronGroup, Clock, CoincidenceCounter, Network, zeros, array, \ ones, kron, ms, second, concatenate, hstack, sort, nonzero, diff, TimedArray, \ reshape, sum, log, Monitor, NetworkOperation, defaultclock, linspace, vstack, \ arange, sort_spikes, rint, SpikeMonitor, Connection from brian.tools.statistics import firing_rate, get_gamma_factor try: from playdoh import * except Exception, e: print e raise ImportError("Playdoh must be installed (https://code.google.com/p/playdoh/)") try: import pycuda from gpu_modelfitting import GPUModelFitting can_use_gpu = True except ImportError: can_use_gpu = False from brian.experimental.codegen.integration_schemes import * from criteria import * from simulator import * import sys, cPickle __all__ = ['modelfitting', 'print_table', 'PSO', 'GA', 'CMAES', 'slice_trace', 'transform_spikes', 'MAXCPU', 'MAXGPU', 'GammaFactor','GammaFactor2', 'LpError','VanRossum','Brette', 'simulate', 'debug_level', 'info_level', 'warning_level', 'open_server'] class ModelFitting(Fitness): def initialize(self, **kwds): # Gets the key,value pairs in shared_data for key, val in self.shared_data.iteritems(): setattr(self, key, val) # Gets the key,value pairs in **kwds for key, val in kwds.iteritems(): setattr(self, key, val) self.model = cPickle.loads(self.model) if 
type(self.model) is str: self.model = Equations(self.model) self.simulator = Simulator(self.model, self.reset, self.threshold, inputs = self.inputs, input_var = self.input_var, dt = self.dt, refractory = self.refractory, max_refractory = self.max_refractory, spikes = self.spikes, traces = self.traces, groups = self.groups, slices = self.slices, overlap = self.overlap, onset = self.onset, neurons = self.nodesize, initial_values = self.initial_values, unit_type = self.unit_type, stepsize = self.stepsize, precision = self.precision, criterion = self.criterion, ntrials=self.ntrials, method = self.method ) def evaluate(self, **param_values): """ Use fitparams['delays'] to take delays into account Use fitparams['refractory'] to take refractory into account """ values = self.simulator.run(**param_values) return values def modelfitting(model=None, reset=None, threshold=None, refractory=0*ms, data=None, input_var='I', input=None, dt=None, popsize=1000, maxiter=10, slices=1, overlap=None, onset=None, initial_values=None, stepsize=100 * ms, unit_type=None, total_units=None, ntrials=1, cpu=None, gpu=None, precision='double', # set to 'float' or 'double' to specify single or double precision on the GPU machines=[], allocation=None, returninfo=False, scaling=None, algorithm=CMAES, async = None, criterion=None, optparams={}, method='Euler', **params): """ Model fitting function. Fits a spiking neuron model to electrophysiological data (injected current and spikes). See also the section :ref:`model-fitting-library` in the user manual. **Arguments** ``model`` An :class:`~brian.Equations` object containing the equations defining the model. ``reset`` A reset value for the membrane potential, or a string containing the reset equations. ``threshold`` A threshold value for the membrane potential, or a string containing the threshold equations. ``refractory`` The refractory period in second. If it's a single value, the same refractory will be used in all the simulations. If it's a list or a tuple, the fitting will also optimize the refractory period (see ``**params`` below). Warning: when using a refractory period, you can't use a custom reset, only a fixed one. ``data`` A list of spike times, or a list of several spike trains as a list of pairs (index, spike time) if the fit must be performed in parallel over several target spike trains. In this case, the modelfitting function returns as many parameters sets as target spike trains. ``input_var='I'`` The variable name used in the equations for the input current. ``input`` A vector of values containing the time-varying signal the neuron responds to (generally an injected current). ``dt`` The time step of the input (the inverse of the sampling frequency). ``**params`` The list of parameters to fit the model with. Each parameter must be set as follows: ``param_name=[bound_min, min, max, bound_max]`` where ``bound_min`` and ``bound_max`` are the boundaries, and ``min`` and ``max`` specify the interval from which the parameter values are uniformly sampled at the beginning of the optimization algorithm. If not using boundaries, set ``param_name=[min, max]``. Also, you can add a fit parameter which is a spike delay for all spikes : add the special parameter ``delays`` in ``**params``, for example ``modelfitting(..., delays=[-10*ms, 10*ms])``. You can also add fit the refractory period by specifying ``modelfitting(..., refractory=[-10*ms, 10*ms])``. ``popsize`` Size of the population (number of particles) per target train used by the optimization algorithm. 
``maxiter`` Number of iterations in the optimization algorithm. ``optparams`` Optimization algorithm parameters. It is a dictionary: keys are parameter names, values are parameter values or lists of parameters (one value per group). This argument is specific to the optimization algorithm used. See :class:`PSO`, :class:`GA`, :class:`CMAES`. ``delta=4*ms`` The precision factor delta (a scalar value in second). ``slices=1`` The number of time slices to use. ``overlap=0*ms`` When using several time slices, the overlap between consecutive slices, in seconds. ``initial_values`` A dictionary containing the initial values for the state variables. ``cpu`` The number of CPUs to use in parallel. It is set to the number of CPUs in the machine by default. ``gpu`` The number of GPUs to use in parallel. It is set to the number of GPUs in the machine by default. ``precision`` GPU only: a string set to either ``float`` or ``double`` to specify whether to use single or double precision on the GPU. If it is not specified, it will use the best precision available. ``returninfo=False`` Boolean indicating whether the modelfitting function should return technical information about the optimization. ``scaling=None`` Specify the scaling used for the parameters during the optimization. It can be ``None`` or ``'mapminmax'``. It is ``None`` by default (no scaling), and ``mapminmax`` by default for the CMAES algorithm. ``algorithm=CMAES`` The optimization algorithm. It can be :class:`PSO`, :class:`GA` or :class:`CMAES`. ``optparams={}`` Optimization parameters. See ``method='Euler'`` Integration scheme used on the CPU and GPU: ``'Euler'`` (default), ``RK``, or ``exponential_Euler``. See also :ref:`numerical-integration`. ``machines=[]`` A list of machine names to use in parallel. See :ref:`modelfitting-clusters`. **Return values** Return an :class:`OptimizationResult` object with the following attributes: ``best_pos`` Minimizing position found by the algorithm. For array-like fitness functions, it is a single vector if there is one group, or a list of vectors. For keyword-like fitness functions, it is a dictionary where keys are parameter names and values are numeric values. If there are several groups, it is a list of dictionaries. ``best_fit`` The value of the fitness function for the best positions. It is a single value if there is one group, or it is a list if there are several groups. ``info`` A dictionary containing various information about the optimization. Also, the following syntax is possible with an ``OptimizationResult`` instance ``or``. The ``key`` is either an optimizing parameter name for keyword-like fitness functions, or a dimension index for array-like fitness functions. ``or[key]`` it is the best ``key`` parameter found (single value), or the list of the best parameters ``key`` found for all groups. ``or[i]`` where ``i`` is a group index. This object has attributes ``best_pos``, ``best_fit``, ``info`` but only for group ``i``. ``or[i][key]`` where ``i`` is a group index, is the same as ``or[i].best_pos[key]``. For more details on the gamma factor, see `Jolivet et al. 2008, "A benchmark test for a quantitative assessment of simple neuron models", J. Neurosci. Methods `__ (available in PDF `here `__). 
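
    **Example**

    A schematic call (all numerical values below are illustrative only), fitting
    ``R`` and ``tau`` of a leaky integrate-and-fire model against recorded spikes
    with the gamma factor criterion; ``current`` and ``spikes`` stand for the
    user's input signal and target spike data::

        eqs = Equations('''
            dV/dt = (R*I - V)/tau : 1
            I : 1
            R : 1
            tau : second
        ''')
        results = modelfitting(model=eqs, reset=0, threshold=1,
                               data=spikes,            # [(index, time), ...]
                               input=current, dt=0.1*ms,
                               criterion=GammaFactor(delta=4*ms),
                               popsize=1000, maxiter=10,
                               R=[1.0e9, 9.0e9], tau=[10*ms, 40*ms])
        print_table(results)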
""" for param in params.keys(): if (param not in model._diffeq_names) and (param != 'delays') and (param != 'tau_metric'): raise Exception("Parameter %s must be defined as a parameter in the model" % param) if criterion is None: criterion = GammaFactor() data = array(data) if criterion.type == 'spikes': # Make sure that 'data' is a N*2-array if data.ndim == 1: data = concatenate((zeros((len(data), 1)), data.reshape((-1, 1))), axis=1) spikes = data traces = None if ntrials>1: groups = 1 else: groups = int(array(data)[:, 0].max() + 1) # number of target trains elif criterion.type == 'trace': if data.ndim == 1: data = data.reshape((1,-1)) spikes = None traces = data groups = data.shape[0] elif criterion.type == 'both': # TODO log_warn("Not implemented yet") pass inputs = input if inputs.ndim==1: inputs = inputs.reshape((1,-1)) # dt must be set if dt is None: raise Exception('dt (sampling frequency of the input) must be set') # default overlap when no time slicing if overlap is None: overlap = 0*ms if onset is None: onset = overlap # if slices == 1: # overlap = 0*ms # onset = overlap # default allocation if cpu is None and gpu is None and unit_type is None: if CANUSEGPU: unit_type = 'GPU' else: unit_type = 'CPU' # check numerical integration method if (gpu>0 or unit_type == 'GPU') and method not in ['Euler', 'RK', 'exponential_Euler']: raise Exception("The method can only be 'Euler', 'RK', or 'exponential_Euler' when using the GPU") if method not in ['Euler', 'RK', 'exponential_Euler', 'linear', 'nonlinear']: raise Exception("The method can only be 'Euler', 'RK', 'exponential_Euler', 'linear', or 'nonlinear'") if (algorithm == CMAES) & (scaling is None): scaling = 'mapminmax' # determines whether optimization over refractoriness or not if type(refractory) is tuple or type(refractory) is list: params['refractory'] = refractory max_refractory = refractory[-1] else: max_refractory = None # duration = len(input) * dt # duration of the input # keyword arguments for Modelfitting initialize kwds = dict( model=cPickle.dumps(model), threshold=threshold, reset=reset, refractory=refractory, max_refractory=max_refractory, input_var=input_var, dt=dt, criterion=criterion, slices=slices, overlap=overlap, returninfo=returninfo, precision=precision, stepsize=stepsize, ntrials=ntrials, method=method, onset=onset) shared_data = dict(inputs=inputs, traces=traces, spikes=spikes, initial_values=initial_values) if async: r = maximize_async( ModelFitting, shared_data=shared_data, kwds = kwds, groups=groups, popsize=popsize, maxiter=maxiter, optparams=optparams, unit_type = unit_type, machines=machines, allocation=allocation, total_units = total_units, cpu=cpu, gpu=gpu, returninfo=returninfo, codedependencies=[], algorithm=algorithm, scaling=scaling, **params) else: r = maximize( ModelFitting, shared_data=shared_data, kwds = kwds, groups=groups, popsize=popsize, maxiter=maxiter, optparams=optparams, unit_type = unit_type, machines=machines, allocation=allocation, total_units = total_units, cpu=cpu, gpu=gpu, returninfo=returninfo, codedependencies=[], algorithm=algorithm, scaling=scaling, **params) # r is (results, fitinfo) or (results) return r #def get_spikes(model=None, reset=None, threshold=None, # input=None, input_var='I', dt=None, # **params): # """ # Retrieves the spike times corresponding to the best parameters found by # the modelfitting function. # # **Arguments** # # ``model``, ``reset``, ``threshold``, ``input``, ``input_var``, ``dt`` # Same parameters as for the ``modelfitting`` function. 
# # ``**params`` # The best parameters returned by the ``modelfitting`` function. # # **Returns** # # ``spiketimes`` # The spike times of the model with the given input and parameters. # """ # duration = len(input) * dt # ngroups = len(params[params.keys()[0]]) # # group = NeuronGroup(N=ngroups, model=model, reset=reset, threshold=threshold, # clock=Clock(dt=dt)) # group.set_var_by_array(input_var, TimedArray(input, clock=group.clock)) # for param, values in params.iteritems(): # if (param == 'delays') | (param == 'fitness'): # continue # group.state(param)[:] = values # # M = SpikeMonitor(group) # net = Network(group, M) # net.run(duration) # reinit_default_clock() # return M.spikes # #def predict(model=None, reset=None, threshold=None, # data=None, delta=4 * ms, # input=None, input_var='I', dt=None, # **params): # """ # Predicts the gamma factor of a fitted model with respect to the data with # a different input current. # # **Arguments** # # ``model``, ``reset``, ``threshold``, ``input_var``, ``dt`` # Same parameters as for the ``modelfitting`` function. # # ``input`` # The input current, that can be different from the current used for the fitting # procedure. # # ``data`` # The experimental spike times to compute the gamma factor against. They have # been obtained with the current ``input``. # # ``**params`` # The best parameters returned by the ``modelfitting`` function. # # **Returns** # # ``gamma`` # The gamma factor of the model spike trains against the data. # If there were several groups in the fitting procedure, it is a vector # containing the gamma factor for each group. # """ # spikes = get_spikes(model=model, reset=reset, threshold=threshold, # input=input, input_var=input_var, dt=dt, # **params) # # ngroups = len(params[params.keys()[0]]) # gamma = zeros(ngroups) # for i in xrange(ngroups): # spk = [t for j, t in spikes if j == i] # gamma[i] = gamma_factor(spk, data, delta, normalize=True, dt=dt) # if len(gamma) == 1: # return gamma[0] # else: # return gamma if __name__ == '__main__': from brian import loadtxt, ms, savetxt, loadtxt, Equations, NeuronGroup, run, SpikeMonitor,\ StateMonitor, Network from pylab import * def generate_data(): g = NeuronGroup(1, model=equations, reset=0, threshold=1) g.I = TimedArray(input, dt=.1*ms) g.tau = 25*ms g.R = 3e9 SpM = SpikeMonitor(g) StM = StateMonitor(g, 'V', record=True) net = Network(g, SpM, StM) net.run(1*second) return StM.values[0], SpM.spikes equations = Equations(''' dV/dt=(R*I-V)/tau : 1 I : 1 R : 1 tau : second #tau_metric : second ''') input = loadtxt('current.txt') # ARTIFICIAL DATA: R=3e9, tau=25*ms # spikes = loadtxt('spikes.txt') # real data trace, spikes = generate_data() # savetxt('trace_artificial.txt', trace) # savetxt('spikes_artificial.txt', spikes) # trace = loadtxt('trace_artificial.txt') overlap = 10*ms slices = 1 dt = .1*ms input, trace = slice_trace(input, trace, slices = slices, dt=dt, overlap = overlap) # GAMMA FACTOR # criterion = GammaFactor(delta=2*ms) # spikes= loadtxt('spikes_artificial.txt') # data = spikes # data[:,1] += 50*ms # # LP ERROR criterion = LpError(p=2, varname='V') data = trace print data.shape print input.shape # # Van Rossum # criterion = VanRossum(tau=2*ms) # data = spikes #Brette # criterion = Brette() # data = spikes results = modelfitting( model = equations, reset = 0, threshold = 1, data = data, input = input, #onset = overlap, gpu = 1, #cpu=4, dt = dt, popsize = 20000, maxiter = 10, criterion = criterion, R = [1.0e9,1.0e9, 9.0e9, 9.0e9], tau = [10*ms,10*ms, 40*ms, 40*ms], 
algorithm=CMAES, tau_metric= [0.5*ms,0.5*ms, 4*ms, 4*ms], #delays=[-1*ms, 1*ms] ) print_table(results) # criterion_values, record_values = simulate( model = equations, # reset = 0, # threshold = 1, # data = data, # input = input, # use_gpu = False, # dt = dt, # criterion = criterion, # record = 'V', # onset = overlap, # neurons = slices, # **results.best_params # ) # # trace = trace[:,int(overlap/dt):].flatten() # traceV = record_values[:,int(overlap/dt):].flatten() # # plot(trace) # plot(traceV) # # show() # brian-1.3.1/brian/experimental/modelfitting/modelfitting_old.py000077500000000000000000001407121167451777000250100ustar00rootroot00000000000000 from brian import Equations, NeuronGroup, Clock, CoincidenceCounter, Network, zeros, array, \ ones, kron, ms, second, concatenate, hstack, sort, nonzero, diff, TimedArray, \ reshape, sum, log, Monitor, NetworkOperation, defaultclock, linspace, vstack, \ arange, sort_spikes, rint, SpikeMonitor, Connection,SpikeGeneratorGroup,DelayConnection,\ SpikeCounter,forget,diagflat,inf,sqrt from scipy.sparse import dia_matrix,lil_matrix from brian.tools.statistics import firing_rate, get_gamma_factor try: from playdoh import * except Exception, e: print e raise ImportError("Playdoh must be installed (https://code.google.com/p/playdoh/)") try: import pycuda from gpu_modelfitting import GPUModelFitting can_use_gpu = True except ImportError: can_use_gpu = False from brian.experimental.codegen.integration_schemes import * import sys, cPickle __all__ = ['modelfitting', 'print_table', 'get_spikes', 'predict', 'PSO', 'GA','CMAES','Brette','VanRossum', 'MAXCPU', 'MAXGPU', 'GammaFactor', 'LpError', 'debug_level', 'info_level', 'warning_level', 'open_server'] class DataTransformer(object): """ Transform spike, input and trace data from user-friendly data structures, like 2 dimensional arrays or lists of spikes, into algorithm- and GPU-friendly structures, i.e. inline vectors (1 dimensional) easily parallelizable. """ def __init__(self, neurons, input, spikes = None, traces = None, dt = defaultclock.dt, slices = 1, overlap = 0*ms, groups = 1): self.neurons = neurons # number of particles on the node self.input = input # a IxT array self.spikes = spikes # a list of spikes [(i,t)...] 
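        # Sketch of the inline layout built further down by transform_spikes()
        # (illustrative values only): all target trains are concatenated into a
        # single 1-D vector, each train padded with a sentinel spike before
        # (-1 s) and after (duration + 1 s), and an offset array gives, for every
        # neuron, the index where its target train starts, e.g.
        #   spikes        = [(0, 10*ms), (1, 15*ms), (0, 20*ms)]
        #   spikes_inline ~ [-1, 10*ms, 20*ms, dur+1, -1, 15*ms, dur+1]
        #   spikes_offset ~ [0, ..., 0, 4, ..., 4]    (one entry per neuron)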
self.traces = traces # a KxT array self.slices = slices self.overlap = overlap self.groups = groups self.dt = dt # ensure 2 dimensions if self.input.ndim == 1: self.input = self.input.reshape((1,-1)) if self.traces is not None: if self.traces.ndim == 1: self.traces = self.traces.reshape((1,-1)) self.inputs_count = self.input.shape[0] self.T = self.input.shape[1] # number of steps self.duration = self.T*self.dt self.subpopsize = self.neurons/self.groups # number of neurons per group: nodesize/groups self.input = self.input[:,0:self.slices * (self.T / self.slices)] # makes sure that len(input) is a multiple of slices self.sliced_steps = self.T / self.slices # timesteps per slice self.overlap_steps = int(self.overlap / self.dt) # timesteps during the overlap self.total_steps = self.sliced_steps + self.overlap_steps # total number of timesteps self.sliced_duration = self.overlap + self.duration / self.slices # duration of the vectorized simulation self.N = self.neurons * self.slices # TOTAL number of neurons on this node self.input = hstack((zeros((self.inputs_count, self.overlap_steps)), self.input)) # add zeros at the beginning because there is no overlap from the previous slice def slice_spikes(self, spikes): # from standard structure to standard structure sliced_spikes = [] slice_length = self.sliced_steps*self.dt for (i,t) in spikes: slice = int(t/slice_length) newt = self.overlap + (t % slice_length)*second newi = i + self.groups*slice sliced_spikes.append((newi, newt)) sliced_spikes = sort_spikes(sliced_spikes) return sliced_spikes def slice_traces(self, traces): # from standard structure to standard structure k = traces.shape[0] sliced_traces = zeros((k*self.slices, self.total_steps)) for slice in xrange(self.slices): i0 = slice*k i1 = (slice+1)*k j0 = slice*self.sliced_steps j1 = (slice+1)*self.sliced_steps sliced_traces[i0:i1,self.overlap_steps:] = traces[:,j0:j1] if slice>0: sliced_traces[i0:i1,:self.overlap_steps] = traces[:,j0-self.overlap_steps:j0] return sliced_traces def transform_spikes(self, spikes): # from standard structure to inline structure i, t = zip(*spikes) i = array(i) t = array(t) alls = [] n = 0 pointers = [] model_target = [] for j in xrange(self.groups): s = sort(t[i == j]) s = hstack((-1 * second, s, self.duration + 1 * second)) model_target.extend([j] * self.subpopsize) alls.append(s) pointers.append(n) n += len(s) pointers = array(pointers, dtype=int) model_target = array(hstack(model_target), dtype=int) spikes_inline = hstack(alls) spikes_offset = pointers[model_target] return spikes_inline, spikes_offset def transform_traces(self, traces): # from standard structure to inline structure K, T = traces.shape traces_inline = traces.flatten() traces_offset = array(kron(arange(K), T*ones(self.subpopsize)), dtype=int) return traces_inline, traces_offset class Criterion(Monitor, NetworkOperation): """ Abstract class from which modelfitting criterions should derive. Derived classes should implement the following methods: ``initialize(self, **params)`` Called once before the simulation. ```params`` is a dictionary with criterion-specific parameters. ``timestep_call(self)`` Called at every timestep. ``spike_call(self, neurons)`` Called at every spike, with the spiking neurons as argument. ``get_values(self)`` Called at the end, returns the criterion values. You have access to the following methods: ``self.get_value(self, varname)`` Returns the value of the specified variable for all neurons (vector). 
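
    As an illustration only (not part of the package), a minimal trace-based
    criterion could be sketched along these lines::

        class MeanSquareError(Criterion):
            def initialize(self, varname='v'):
                self.varname = varname
                self.error = zeros(self.N)
            def timestep_call(self):
                t = self.step()
                if t < self.onset or t >= self.traces.shape[1]:
                    return
                v = self.get_value(self.varname)
                self.error += (v - self.traces[0, t])**2
            def get_values(self):
                return self.error
            def normalize(self, error):
                return -error*self.dt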
You have access to the following attributes: ``self.step`` The time step (integer) ``self.group`` The NeuronGroup ``self.traces=None`` Target traces. A 2-dimensional K*T array where K is the number of targets, and T the total number of timesteps. It is still a 2-dimensional array when K=1. It is None if not specified. ``self.spikes=None`` A list of target spike trains : [(i,t)..] where i is the target index and t the spike time. ``self.N`` The number of neurons in the NeuronGroup ``self.K=1`` The number of targets. ``self.duration`` The total duration of the simulation, in seconds. ``self.total_steps`` The total number of time steps. ``self.dt`` The timestep duration, in seconds. ``self.delays=zeros(self.n)`` The delays for every neuron, in seconds. The delay is relative to the target. ``self.onset=0*ms`` The onset, in seconds. The first timesteps, before onset, should be discarded in the criterion. ``self.intdelays=0`` The delays, but in number of timesteps (``int(delays/dt)``). NOTE: traces and spikes have all been sliced before being passed to the criterion object """ def __init__(self, group, traces=None, spikes=None, targets_count=1, duration=None, onset=0*ms, spikes_inline=None, spikes_offset=None, traces_inline=None, traces_offset=None, delays=None, when='end', **params): NetworkOperation.__init__(self, None, clock=group.clock, when=when) self.group = group # needed by SpikeMonitor self.source = group self.source.set_max_delay(0) self.delay = 0 self.N = len(group) # number of neurons self.K = targets_count # number of targets self.dt = self.clock.dt if traces is not None: self.traces = array(traces) # KxT array if self.traces.ndim == 1: self.traces = self.traces.reshape((1,-1)) assert targets_count==self.traces.shape[0] self.spikes = spikes # get the duration from the traces if duration is not specified in the constructor if duration is None: duration = self.traces.shape[1] # total number of steps self.duration = duration self.total_steps = int(duration/self.dt) if delays is None: delays = zeros(self.n) self.delays = delays self.onset = int(onset/self.dt) self.intdelays = array(self.delays/self.clock.dt, dtype=int) self.mindelay = min(delays) self.maxdelay = max(delays) # the following data is sliced self.spikes_inline = spikes_inline self.spikes_offset = spikes_offset self.traces_inline = traces_inline self.traces_offset = traces_offset if self.spikes is not None: # target spike count and rates self.target_spikes_count = self.get_spikes_count(self.spikes) self.target_spikes_rate = self.get_spikes_rate(self.spikes) self.initialize(**params) def step(self): """ Return the current time step """ return int(self.clock.t/self.dt) def get_spikes_count(self, spikes): count = zeros(self.K) for (i,t) in spikes: count[i] += 1 return count def get_spikes_rate(self, spikes): count = self.get_spikes_count(spikes) return count*1.0/self.duration def initialize(self): # TO IMPLEMENT """ Override this method to initialize the criterion before the simulation """ pass def timestep_call(self): # TO IMPLEMENT """ Override this method to do something at every time step """ pass def spike_call(self, neurons): # TO IMPLEMENT """ Override this method to do something at every time spike. 
        neurons contains the list of neurons that just spiked
        """
        pass

    def get_value(self, varname):
        return self.group.state_(varname)

    def __call__(self):
        self.timestep_call()

    def propagate(self, neurons):
        self.spike_call(neurons)

    def get_values(self): # TO IMPLEMENT
        """
        Override this method to return the criterion values at the end.
        It must return one (or several, as a tuple) additive values, i.e.,
        values corresponding to slices. The method normalize can normalize
        combined values.
        """
        pass
    additive_values = property(get_values)

    def normalize(self, values): # TO IMPLEMENT
        """
        Values contains combined criterion values. It is either a vector of values
        (as many values as neurons), or a tuple with different vector of
        as many values as neurons.
        """
        pass

class CriterionStruct(object):
    type = None # 'trace', 'spike' or 'both'
    def get_name(self):
        return self.__class__.__name__
    name = property(get_name)

class LpErrorCriterion(Criterion):
    def initialize(self, p=2, varname='v'):
        self.p = p
        self.varname = varname
        self._error = zeros((self.K, self.N))

    def timestep_call(self):
        v = self.get_value(self.varname)
        t = self.step()
        if t=0)&(t-d= self.onset:
            self.spike_count[spiking_neurons] += 1
        T_spiking = array(rint((t + self.delays[spiking_neurons]) / dt), dtype=int)
        remaining_neurons = spiking_neurons
        remaining_T_spiking = T_spiking
        while True:
            remaining_indices, = (remaining_T_spiking > self.next_spike_time[remaining_neurons]).nonzero()
            if len(remaining_indices):
                indices = remaining_neurons[remaining_indices]
                self.spiketime_index[indices] += 1
                self.last_spike_time[indices] = self.next_spike_time[indices]
                self.next_spike_time[indices] = array(rint(self.spikes_inline[self.spiketime_index[indices] + 1] / dt), dtype=int)
                if self.coincidence_count_algorithm == 'exclusive':
                    self.last_spike_allowed[indices] = self.next_spike_allowed[indices]
                    self.next_spike_allowed[indices] = True
                remaining_neurons = remaining_neurons[remaining_indices]
                remaining_T_spiking = remaining_T_spiking[remaining_indices]
            else:
                break
        # Updates coincidences count
        near_last_spike = self.last_spike_time[spiking_neurons] + self.delta >= T_spiking
        near_next_spike = self.next_spike_time[spiking_neurons] - self.delta <= T_spiking
        last_spike_allowed = self.last_spike_allowed[spiking_neurons]
        next_spike_allowed = self.next_spike_allowed[spiking_neurons]
        I = (near_last_spike & last_spike_allowed) | (near_next_spike & next_spike_allowed)
        if t >= self.onset:
            self.coincidences[spiking_neurons[I]] += 1
        if self.coincidence_count_algorithm == 'exclusive':
            near_both_allowed = (near_last_spike & last_spike_allowed) & (near_next_spike & next_spike_allowed)
            self.last_spike_allowed[spiking_neurons] = last_spike_allowed & -near_last_spike
            self.next_spike_allowed[spiking_neurons] = (next_spike_allowed & -near_next_spike) | near_both_allowed

    def get_values(self):
        return (self.coincidences, self.spike_count)

    def normalize(self, values):
        coincidence_count = values[0]
        spike_count = values[1]
        delta = self.delta*self.dt
        gamma = get_gamma_factor(coincidence_count, spike_count, self.target_spikes_count, self.target_spikes_rate, delta)
        return gamma

class GammaFactor(CriterionStruct):
    def __init__(self, delta = 4*ms, coincidence_count_algorithm = 'exclusive'):
        self.type = 'spikes'
        self.delta = delta
        self.coincidence_count_algorithm = coincidence_count_algorithm

class VanRossumCriterion(Criterion):
    def initialize(self, tau):
        self.delay_range =max(self.delays)- min(self.delays)#delay range
        self.min_delay = abs(min(self.delays))#minimum of possible delay
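        # The two small networks built below convolve, on the fly, the model
        # spike trains and the target trains with an exponential kernel
        # exp(-t/tau) (the dv/dt = -v/tau units driven by the spikes). __call__
        # then accumulates the squared difference of the two filtered traces at
        # each time step, so the criterion approximates the van Rossum distance
        #   d^2 ~ sum_t (f_model(t) - f_target(t))**2 * dt
        # (normalize() applies the dt factor and the sign flip, since
        # modelfitting maximizes the criterion).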
self.distance_vector=zeros(self.N) self.nbr_neurons_group = self.N/self.K eqs=""" dv/dt=(-v)/tau: volt """ # network to convolve target spikes with the kernel self.input_target=SpikeGeneratorGroup(self.K,self.spikes,clock=self.group.clock) self.kernel_target=NeuronGroup(self.K,model=eqs,clock=self.group.clock) self.C_target = DelayConnection(self.input_target, self.kernel_target, 'v', structure='dense', max_delay=self.min_delay) self.C_target.connect_one_to_one(self.input_target,self.kernel_target) self.C_target.delay = self.min_delay*ones_like(self.C_target.delay) # network to convolve population spikes with the kernel self.kernel_population=NeuronGroup(self.N,model=eqs,clock=self.group.clock) self.C_population = DelayConnection(self.group, self.kernel_population, 'v', structure='sparse', max_delay=self.delay_range) for iN in xrange(self.N): self.C_population.delay[iN,iN] = diagflat(self.min_delay + self.delays[iN]) self.C_population.connect_one_to_one(self.group,self.kernel_population) self.spikecount = SpikeCounter(self.group) self.contained_objects = [self.kernel_population,self.C_population,self.spikecount,self.input_target,self.C_target,self.kernel_target] def __call__(self): trace_population = self.kernel_population.state_('v') trace_target = self.kernel_target.state_('v') for igroup in xrange(self.K): self.distance_vector[igroup*self.nbr_neurons_group:(1+igroup)*self.nbr_neurons_group] += (trace_population[igroup*self.nbr_neurons_group:(1+igroup)*self.nbr_neurons_group]-trace_target[igroup])**2 def get_values(self): return (self.distance_vector) def normalize(self, distance_vector): distance_vector[nonzero(self.spikecount.count==0)] = inf return -self.distance_vector*self.group.clock.dt class VanRossum(CriterionStruct): def __init__(self, tau): self.type = 'spikes' self.tau = tau class BretteCriterion(Criterion): def initialize(self,tau_metric): self.delay_range =max(self.delays)- min(self.delays)#delay range self.min_delay = abs(min(self.delays))#minimum of possible delay self.corr_vector=zeros(self.N) self.norm_pop = zeros(self.N) self.norm_target = zeros(self.N) self.nbr_neurons_group = self.N/self.K eqs=""" tau:second dv/dt=(-v)/tau: volt """ # network to convolve target spikes with the kernel self.input_target=SpikeGeneratorGroup(self.K,self.spikes,clock=self.group.clock) self.kernel_target=NeuronGroup(self.N,model=eqs,clock=self.group.clock) self.C_target = DelayConnection(self.input_target, self.kernel_target, 'v', structure='sparse', max_delay=self.min_delay) self.kernel_target.tau=tau_metric for igroup in xrange(self.K): self.C_target.W[igroup,igroup*self.nbr_neurons_group:(1+igroup)*self.nbr_neurons_group] = ones(self.nbr_neurons_group) self.C_target.delay[igroup,igroup*self.nbr_neurons_group:(1+igroup)*self.nbr_neurons_group] = self.min_delay * ones(self.nbr_neurons_group) # network to convolve population spikes with the kernel self.kernel_population=NeuronGroup(self.N,model=eqs,clock=self.group.clock) self.C_population = DelayConnection(self.group, self.kernel_population, 'v', structure='sparse', max_delay=self.delay_range) for iN in xrange(self.N): self.C_population.delay[iN,iN] = diagflat(self.min_delay + self.delays[iN]) self.C_population.connect_one_to_one(self.group,self.kernel_population) self.kernel_population.tau=tau_metric self.spikecount = SpikeCounter(self.group) self.contained_objects = [self.kernel_population,self.C_population,self.spikecount,self.input_target,self.C_target,self.kernel_target] def __call__(self): trace_population = 
self.kernel_population.state_('v') trace_target = self.kernel_target.state_('v') self.corr_vector += trace_population*trace_target self.norm_pop += trace_population**2 self.norm_target += trace_target**2 def get_values(self): return (self.corr_vector,self.norm_pop,self.norm_target) def normalize(self, values): corr_vector=values[0] norm_pop=values[1] norm_target=values[2] corr_vector[nonzero(self.spikecount.count==0)] = -inf #print self.corr_vector/sqrt(norm_pop)/sqrt(norm_target) return self.corr_vector/sqrt(norm_pop)/sqrt(norm_target) class Brette(CriterionStruct): def __init__(self): self.type = 'spikes' class ModelFitting(Fitness): def initialize(self, **kwds): # Initialization of variables self.use_gpu = self.unit_type=='GPU' # Gets the key,value pairs in shared_data for key, val in self.shared_data.iteritems(): setattr(self, key, val) # Gets the key,value pairs in **kwds for key, val in kwds.iteritems(): setattr(self, key, val) self.neurons = self.nodesize self.groups = self.groups self.model = cPickle.loads(self.model) if type(self.model) is str: self.model = Equations(self.model) self.initialize_neurongroup() self.transform_data() self.inject_input() # if self.use_gpu: # ######## # # TODO # ######## # # Select integration scheme according to method # if self.method == 'Euler': scheme = euler_scheme # elif self.method == 'RK': scheme = rk2_scheme # elif self.method == 'exponential_Euler': scheme = exp_euler_scheme # else: raise Exception("The numerical integration method is not valid") # # self.mf = GPUModelFitting(self.group, self.model, self.input, self.I_offset, # self.spiketimes, self.spiketimes_offset, zeros(self.neurons), 0*ms, self.delta, # precision=self.precision, scheme=scheme) # else: # self.cc = CoincidenceCounter(self.group, self.spiketimes, self.spiketimes_offset, # onset=self.onset, delta=self.delta) def initialize_neurongroup(self): # Add 'refractory' parameter on the CPU only if not self.use_gpu: if self.max_refractory is not None: refractory = 'refractory' self.model.add_param('refractory', second) else: refractory = self.refractory else: if self.max_refractory is not None: refractory = 0*ms else: refractory = self.refractory # Must recompile the Equations : the functions are not transfered after pickling/unpickling self.model.compile_functions() self.group = NeuronGroup(self.neurons, model=self.model, reset=self.reset, threshold=self.threshold, refractory=refractory, max_refractory = self.max_refractory, method = self.method, clock=Clock(dt=self.dt)) if self.initial_values is not None: for param, value in self.initial_values.iteritems(): self.group.state(param)[:] = value def transform_data(self): self.transformer = DataTransformer(self.neurons, self.inputs, spikes = self.spikes, traces = self.traces, dt = self.dt, slices = self.slices, overlap = self.overlap, groups = self.groups) self.total_steps = self.transformer.total_steps self.sliced_duration = self.transformer.sliced_duration self.sliced_inputs = self.transformer.slice_traces(self.inputs) self.inputs_inline, self.inputs_offset = self.transformer.transform_traces(self.sliced_inputs) if self.traces is not None: self.sliced_traces = self.transformer.slice_traces(self.traces) self.traces_inline, self.traces_offset = self.transformer.transform_traces(self.sliced_traces) else: self.sliced_traces, self.traces_inline, self.traces_offset = None, None, None if self.spikes is not None: self.sliced_spikes = self.transformer.slice_spikes(self.spikes) self.spikes_inline, self.spikes_offset = 
self.transformer.transform_spikes(self.sliced_spikes) else: self.sliced_spikes, self.spikes_inline, self.spikes_offset = None, None, None def inject_input(self): # Injects current in consecutive subgroups, where I_offset have the same value # on successive intervals I_offset = self.inputs_offset k = -1 for i in hstack((nonzero(diff(I_offset))[0], len(I_offset) - 1)): I_offset_subgroup_value = I_offset[i] I_offset_subgroup_length = i - k sliced_subgroup = self.group.subgroup(I_offset_subgroup_length) input_sliced_values = self.inputs_inline[I_offset_subgroup_value:I_offset_subgroup_value + self.total_steps] sliced_subgroup.set_var_by_array(self.input_var, TimedArray(input_sliced_values, clock=self.group.clock)) k = i def initialize_criterion(self, delays,tau_metric = None): # general criterion parameters params = dict(group=self.group, traces=self.sliced_traces, spikes=self.sliced_spikes, targets_count=self.groups*self.slices, duration=self.sliced_duration, onset=self.onset, spikes_inline=self.spikes_inline, spikes_offset=self.spikes_offset, traces_inline=self.traces_inline, traces_offset=self.traces_offset, delays=delays, when='start') criterion_name = self.criterion.__class__.__name__ # criterion-specific parameters if criterion_name == 'GammaFactor': params['delta'] = self.criterion.delta params['coincidence_count_algorithm'] = self.criterion.coincidence_count_algorithm self.criterion_object = GammaFactorCriterion(**params) if criterion_name == 'LpError': params['p'] = self.criterion.p params['varname'] = self.criterion.varname self.criterion_object = LpErrorCriterion(**params) if criterion_name == 'VanRossum': params['tau'] = self.criterion.tau self.criterion_object = VanRossumCriterion(**params) if criterion_name == 'Brette': params['tau_metric'] = tau_metric self.criterion_object = BretteCriterion(**params) def update_neurongroup(self, **param_values): """ Inject fitting parameters into the NeuronGroup """ # Sets the parameter values in the NeuronGroup object self.group.reinit() for param, value in param_values.iteritems(): self.group.state(param)[:] = kron(value, ones(self.slices)) # kron param_values if slicing # Reinitializes the model variables if self.initial_values is not None: for param, value in self.initial_values.iteritems(): self.group.state(param)[:] = value def combine_sliced_values(self, values): if type(values) is tuple: combined_values = tuple([sum(reshape(v, (self.slices, -1)), axis=0) for v in values]) else: combined_values = sum(reshape(values, (self.slices, -1)), axis=0) return combined_values def evaluate(self, **param_values): """ Use fitparams['delays'] to take delays into account Use fitparams['refractory'] to take refractory into account """ delays = param_values.pop('delays', zeros(self.neurons)) refractory = param_values.pop('refractory', zeros(self.neurons)) tau_metric = param_values.pop('tau_metric', zeros(self.neurons)) # repeat spike delays and refractory to take slices into account delays = kron(delays, ones(self.slices)) refractory = kron(refractory, ones(self.slices)) tau_metric = kron(tau_metric, ones(self.slices)) self.update_neurongroup(**param_values) if self.criterion.__class__.__name__ == 'Brette': self.initialize_criterion(delays,tau_metric) else: self.initialize_criterion(delays) if self.use_gpu: pass ######### # TODO ######### # # Reinitializes the simulation object # self.mf.reinit_vars(self.input, self.I_offset, self.spiketimes, self.spiketimes_offset, delays, refractory) # # LAUNCHES the simulation on the GPU # 
self.mf.launch(self.duration, self.stepsize) # coincidence_count = self.mf.coincidence_count # spike_count = self.mf.spike_count else: # set the refractory period if self.max_refractory is not None: self.group.refractory = refractory # Launch the simulation on the CPU self.group.clock.reinit() net = Network(self.group, self.criterion_object) net.run(self.duration) sliced_values = self.criterion_object.get_values() combined_values = self.combine_sliced_values(sliced_values) values = self.criterion_object.normalize(combined_values) return values def modelfitting(model=None, reset=None, threshold=None, refractory=0*ms, data=None, input_var='I', input=None, dt=None, popsize=1000, maxiter=10, slices=1, overlap=0*second, initial_values=None, stepsize=100 * ms, unit_type='CPU', total_units=None, cpu=None, gpu=None, precision='double', # set to 'float' or 'double' to specify single or double precision on the GPU machines=[], allocation=None, returninfo=False, scaling=None, algorithm=CMAES, criterion=None, optparams={}, method='Euler', **params): """ Model fitting function. Fits a spiking neuron model to electrophysiological data (injected current and spikes). See also the section :ref:`model-fitting-library` in the user manual. **Arguments** ``model`` An :class:`~brian.Equations` object containing the equations defining the model. ``reset`` A reset value for the membrane potential, or a string containing the reset equations. ``threshold`` A threshold value for the membrane potential, or a string containing the threshold equations. ``refractory`` The refractory period in second. If it's a single value, the same refractory will be used in all the simulations. If it's a list or a tuple, the fitting will also optimize the refractory period (see ``**params`` below). Warning: when using a refractory period, you can't use a custom reset, only a fixed one. ``data`` A list of spike times, or a list of several spike trains as a list of pairs (index, spike time) if the fit must be performed in parallel over several target spike trains. In this case, the modelfitting function returns as many parameters sets as target spike trains. ``input_var='I'`` The variable name used in the equations for the input current. ``input`` A vector of values containing the time-varying signal the neuron responds to (generally an injected current). ``dt`` The time step of the input (the inverse of the sampling frequency). ``**params`` The list of parameters to fit the model with. Each parameter must be set as follows: ``param_name=[bound_min, min, max, bound_max]`` where ``bound_min`` and ``bound_max`` are the boundaries, and ``min`` and ``max`` specify the interval from which the parameter values are uniformly sampled at the beginning of the optimization algorithm. If not using boundaries, set ``param_name=[min, max]``. Also, you can add a fit parameter which is a spike delay for all spikes : add the special parameter ``delays`` in ``**params``, for example ``modelfitting(..., delays=[-10*ms, 10*ms])``. You can also add fit the refractory period by specifying ``modelfitting(..., refractory=[-10*ms, 10*ms])``. ``popsize`` Size of the population (number of particles) per target train used by the optimization algorithm. ``maxiter`` Number of iterations in the optimization algorithm. ``optparams`` Optimization algorithm parameters. It is a dictionary: keys are parameter names, values are parameter values or lists of parameters (one value per group). This argument is specific to the optimization algorithm used. 
See :class:`PSO`, :class:`GA`, :class:`CMAES`. ``delta=4*ms`` The precision factor delta (a scalar value in second). ``slices=1`` The number of time slices to use. ``overlap=0*ms`` When using several time slices, the overlap between consecutive slices, in seconds. ``initial_values`` A dictionary containing the initial values for the state variables. ``cpu`` The number of CPUs to use in parallel. It is set to the number of CPUs in the machine by default. ``gpu`` The number of GPUs to use in parallel. It is set to the number of GPUs in the machine by default. ``precision`` GPU only: a string set to either ``float`` or ``double`` to specify whether to use single or double precision on the GPU. If it is not specified, it will use the best precision available. ``returninfo=False`` Boolean indicating whether the modelfitting function should return technical information about the optimization. ``scaling=None`` Specify the scaling used for the parameters during the optimization. It can be ``None`` or ``'mapminmax'``. It is ``None`` by default (no scaling), and ``mapminmax`` by default for the CMAES algorithm. ``algorithm=CMAES`` ``optparams={}`` Optimization parameters. See ``method='Euler'`` Integration scheme used on the CPU and GPU: ``'Euler'`` (default), ``RK``, or ``exponential_Euler``. See also :ref:`numerical-integration`. ``machines=[]`` A list of machine names to use in parallel. See :ref:`modelfitting-clusters`. **Return values** Return an :class:`OptimizationResult` object with the following attributes: ``best_pos`` Minimizing position found by the algorithm. For array-like fitness functions, it is a single vector if there is one group, or a list of vectors. For keyword-like fitness functions, it is a dictionary where keys are parameter names and values are numeric values. If there are several groups, it is a list of dictionaries. ``best_fit`` The value of the fitness function for the best positions. It is a single value if there is one group, or it is a list if there are several groups. ``info`` A dictionary containing various information about the optimization. Also, the following syntax is possible with an ``OptimizationResult`` instance ``or``. The ``key`` is either an optimizing parameter name for keyword-like fitness functions, or a dimension index for array-like fitness functions. ``or[key]`` it is the best ``key`` parameter found (single value), or the list of the best parameters ``key`` found for all groups. ``or[i]`` where ``i`` is a group index. This object has attributes ``best_pos``, ``best_fit``, ``info`` but only for group ``i``. ``or[i][key]`` where ``i`` is a group index, is the same as ``or[i].best_pos[key]``. For more details on the gamma factor, see `Jolivet et al. 2008, "A benchmark test for a quantitative assessment of simple neuron models", J. Neurosci. Methods `__ (available in PDF `here `__). 
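
    **Example**

    Reading out the returned object (schematic; the parameter names depend on
    the model being fitted)::

        results = modelfitting(...)
        results.best_fit     # best criterion value (one per group)
        results.best_pos     # best parameters, as a dictionary per group
        results['tau']       # best value(s) found for the parameter ``tau``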
""" if criterion is None: criterion = GammaFactor() data = array(data) if criterion.type == 'spikes': # Make sure that 'data' is a N*2-array if data.ndim == 1: data = concatenate((zeros((len(data), 1)), data.reshape((-1, 1))), axis=1) spikes = data traces = None elif criterion.type == 'trace': if data.ndim == 1: data = data.reshape((1,-1)) spikes = None traces = data elif criterion.type == 'both': # TODO log_warn("Not implemented yet") pass inputs = input if inputs.ndim==1: inputs = inputs.reshape((1,-1)) # dt must be set if dt is None: raise Exception('dt (sampling frequency of the input) must be set') # default overlap when no time slicing if slices == 1: overlap = 0*ms # default allocation if cpu is None and gpu is None: if CANUSEGPU: gpu = 1 else: cpu = MAXCPU-1 # check numerical integration method if (gpu>0 or unit_type == 'GPU') and method not in ['Euler', 'RK', 'exponential_Euler']: raise Exception("The method can only be 'Euler', 'RK', or 'exponential_Euler' when using the GPU") if method not in ['Euler', 'RK', 'exponential_Euler', 'linear', 'nonlinear']: raise Exception("The method can only be 'Euler', 'RK', 'exponential_Euler', 'linear', or 'nonlinear'") if (algorithm == CMAES) & (scaling is None): scaling = 'mapminmax' # determines whether optimization over refractoriness or not if type(refractory) is tuple or type(refractory) is list: params['refractory'] = refractory max_refractory = refractory[-1] # refractory = 'refractory' else: max_refractory = None # common values # group_size = particles # Number of particles per target train groups = int(array(data)[:, 0].max() + 1) # number of target trains # N = group_size * group_count # number of neurons duration = len(input) * dt # duration of the input # keyword arguments for Modelfitting initialize kwds = dict( model=cPickle.dumps(model), threshold=threshold, reset=reset, refractory=refractory, max_refractory=max_refractory, input_var=input_var, dt=dt, duration=duration, criterion=criterion, slices=slices, overlap=overlap, returninfo=returninfo, precision=precision, stepsize=stepsize, method=method, onset=overlap) shared_data = dict(inputs=inputs, traces=traces, spikes=spikes, initial_values=initial_values) r = maximize( ModelFitting, shared_data=shared_data, kwds = kwds, groups=groups, popsize=popsize, maxiter=maxiter, optparams=optparams, unit_type = unit_type, machines=machines, allocation=allocation, cpu=cpu, gpu=gpu, returninfo=returninfo, codedependencies=[], algorithm=algorithm, scaling=scaling, **params) # r is (results, fitinfo) or (results) return r def get_spikes(model=None, reset=None, threshold=None, input=None, input_var='I', dt=None, **params): """ Retrieves the spike times corresponding to the best parameters found by the modelfitting function. **Arguments** ``model``, ``reset``, ``threshold``, ``input``, ``input_var``, ``dt`` Same parameters as for the ``modelfitting`` function. ``**params`` The best parameters returned by the ``modelfitting`` function. **Returns** ``spiketimes`` The spike times of the model with the given input and parameters. 
""" duration = len(input) * dt ngroups = len(params[params.keys()[0]]) group = NeuronGroup(N=ngroups, model=model, reset=reset, threshold=threshold, clock=Clock(dt=dt)) group.set_var_by_array(input_var, TimedArray(input, clock=group.clock)) for param, values in params.iteritems(): if (param == 'delays') | (param == 'fitness'): continue group.state(param)[:] = values M = SpikeMonitor(group) net = Network(group, M) net.run(duration) reinit_default_clock() return M.spikes def predict(model=None, reset=None, threshold=None, data=None, delta=4 * ms, input=None, input_var='I', dt=None, **params): """ Predicts the gamma factor of a fitted model with respect to the data with a different input current. **Arguments** ``model``, ``reset``, ``threshold``, ``input_var``, ``dt`` Same parameters as for the ``modelfitting`` function. ``input`` The input current, that can be different from the current used for the fitting procedure. ``data`` The experimental spike times to compute the gamma factor against. They have been obtained with the current ``input``. ``**params`` The best parameters returned by the ``modelfitting`` function. **Returns** ``gamma`` The gamma factor of the model spike trains against the data. If there were several groups in the fitting procedure, it is a vector containing the gamma factor for each group. """ spikes = get_spikes(model=model, reset=reset, threshold=threshold, input=input, input_var=input_var, dt=dt, **params) ngroups = len(params[params.keys()[0]]) gamma = zeros(ngroups) for i in xrange(ngroups): spk = [t for j, t in spikes if j == i] gamma[i] = gamma_factor(spk, data, delta, normalize=True, dt=dt) if len(gamma) == 1: return gamma[0] else: return gamma if __name__ == '__main__': from brian import loadtxt, ms, savetxt, loadtxt, Equations, NeuronGroup, run, SpikeMonitor,\ StateMonitor, Network from pylab import * def generate_data(): g = NeuronGroup(1, model=equations, reset=0, threshold=1) g.I = TimedArray(input, dt=.1*ms) g.tau = 25*ms g.R = 3e9 SpM = SpikeMonitor(g) StM = StateMonitor(g, 'V', record=True) net = Network(g, SpM, StM) net.run(1*second) return StM.values[0], SpM.spikes equations = Equations(''' dV/dt=(R*I-V)/tau : 1 I : 1 R : 1 tau : second ''') input = loadtxt('current.txt') # ARTIFICIAL DATA: R=3e9, tau=25*ms # spikes = loadtxt('spikes.txt') # real data # trace, spikes = generate_data() # savetxt('trace_artificial.txt', trace) # savetxt('spikes_artificial.txt', spikes) trace = loadtxt('trace_artificial.txt') spikes= loadtxt('spikes_artificial.txt') # GAMMA FACTOR # criterion = GammaFactor(delta=4*ms) # data = spikes # LP ERROR # criterion = LpError(p=2, varname='V') # data = trace ##Van Rossum ERROR # criterion = VanRossum(tau=4*ms) # data = spikes equations = Equations(''' dV/dt=(R*I-V)/tau : 1 I : 1 R : 1 tau_metric:second tau : second ''') # Brette ERROR criterion = Brette() data = spikes # results = modelfitting( model = equations, reset = 0, threshold = 1, data = data, input = input, cpu = 1, dt = .1*ms, popsize = 1000, maxiter = 5, criterion = criterion, R = [1.0e9, 9.0e9], tau = [10*ms, 40*ms], tau_metric=[1*ms,10*ms] # delays = [-5*ms, 5*ms], # refractory = [0*ms, 0*ms, 10*ms, 10*ms] ) print_table(results) brian-1.3.1/brian/experimental/modelfitting/simulator.py000077500000000000000000000662301167451777000235060ustar00rootroot00000000000000from brian import Equations, NeuronGroup, Clock, CoincidenceCounter, Network, zeros, array, \ ones, kron, ms, second, concatenate, hstack, sort, nonzero, diff, TimedArray, \ reshape, sum, log, Monitor, 
NetworkOperation, defaultclock, linspace, vstack, \ arange, sort_spikes, rint, SpikeMonitor, Connection, StateMonitor,where from brian.tools.statistics import firing_rate, get_gamma_factor try: from playdoh import * except Exception, e: print e raise ImportError("Playdoh must be installed (https://code.google.com/p/playdoh/)") try: import pycuda from gpu_modelfitting import GPUModelFitting can_use_gpu = True except ImportError: can_use_gpu = False from brian.experimental.codegen.integration_schemes import * from criteria import * import sys, cPickle class DataTransformer(object): """ Transform spike, input and trace data from user-friendly data structures, like 2 dimensional arrays or lists of spikes, into algorithm- and GPU-friendly structures, i.e. inline vectors (1 dimensional) easily parallelizable. """ def __init__(self, neurons, input, spikes = None, traces = None, dt = defaultclock.dt, slices = 1, overlap = 0*ms, groups = 1,ntrials=1): self.neurons = neurons # number of particles on the node self.input = input # a IxT array self.spikes = spikes # a list of spikes [(i,t)...] self.traces = traces # a KxT array self.slices = slices self.overlap = overlap self.groups = groups self.ntrials=ntrials self.dt = dt # ensure 2 dimensions if self.input.ndim == 1: self.input = self.input.reshape((1,-1)) if self.traces is not None: if self.traces.ndim == 1: self.traces = self.traces.reshape((1,-1)) self.inputs_count = self.input.shape[0] self.T = self.input.shape[1] # number of steps self.duration = self.T*self.dt self.subpopsize = self.neurons/self.groups # number of neurons per group: nodesize/groups self.input = self.input[:,0:self.slices * (self.T / self.slices)] # makes sure that len(input) is a multiple of slices self.sliced_steps = self.T / self.slices # timesteps per slice self.overlap_steps = int(self.overlap / self.dt) # timesteps during the overlap self.total_steps = self.sliced_steps + self.overlap_steps # total number of timesteps self.sliced_duration = self.overlap + self.duration / self.slices # duration of the vectorized simulation self.N = self.neurons * self.slices # TOTAL number of neurons on this node self.input = hstack((zeros((self.inputs_count, self.overlap_steps)), self.input)) # add zeros at the beginning because there is no overlap from the previous slice def slice_spikes(self, spikes): # slice the spike trains into different chunks and assign new index sliced_spikes = [] slice_length = self.sliced_steps*self.dt # print self.groups,self.slices, slice_length for (i,t) in spikes: slice = int(t/slice_length) newt = self.overlap + (t % slice_length)*second newi = i + self.groups*slice # print i,t,slice,newt,newi # discard unreachable spikes if newi >= (self.slices*self.groups): continue sliced_spikes.append((newi, newt)) sliced_spikes = sort_spikes(sliced_spikes) return sliced_spikes def slice_traces(self, traces): # from standard structure to standard structure k = traces.shape[0] sliced_traces = zeros((k*self.slices, self.total_steps)) for slice in xrange(self.slices): i0 = slice*k i1 = (slice+1)*k j0 = slice*self.sliced_steps j1 = (slice+1)*self.sliced_steps sliced_traces[i0:i1,self.overlap_steps:] = traces[:,j0:j1] if slice>0: sliced_traces[i0:i1,:self.overlap_steps] = traces[:,j0-self.overlap_steps:j0] return sliced_traces def transform_trials(self, spikes): # from standard structure to inline structure i, t = zip(*spikes) i = array(i) t = array(t) alls = [] n = 0 pointers = [] model_target = [] pointers.append(0) for j in xrange(self.ntrials): s = sort(t[i == j]) s = 
hstack((-1 * second, s, self.duration + 1 * second)) alls.append(s) pointers.append(pointers[j]+len(s)) pointers.pop() spikes_inline = hstack(alls) # print pointers # print nonzero(spikes_inline==-1) # show() return spikes_inline, pointers def transform_spikes(self, spikes): # from standard structure to inline structure i, t = zip(*spikes) i = array(i) t = array(t) alls = [] n = 0 pointers = [] model_target = [] for j in xrange(self.groups): s = sort(t[i == j]) s = hstack((-1 * second, s, self.duration + 1 * second)) model_target.extend([j] * self.subpopsize) alls.append(s) pointers.append(n) n += len(s) pointers = array(pointers, dtype=int) model_target = array(hstack(model_target), dtype=int) spikes_inline = hstack(alls) spikes_offset = pointers[model_target] return spikes_inline, spikes_offset def transform_traces(self, traces): # from standard structure to inline structure K, T = traces.shape traces_inline = traces.flatten() traces_offset = array(kron(arange(K), T*ones(self.subpopsize)), dtype=int) return traces_inline, traces_offset def slice_trace(input, trace, slices = 1, dt = defaultclock.dt, overlap = 0*ms): input = input.reshape((1,-1)) trace = trace.reshape((1,-1)) dt = DataTransformer(1, input, trace, dt = dt, slices = slices, overlap = overlap, groups = slices) sliced_traces = dt.slice_traces(trace) sliced_input = dt.slice_traces(input) return sliced_input, sliced_traces def transform_spikes(n, input, spikes, slices = 1, dt = defaultclock.dt, overlap = 0*ms): input = input.reshape((1,-1)) dt = DataTransformer(n, input, spikes = spikes, dt = dt, slices = slices, overlap = overlap) sliced_spikes = dt.slice_spikes(spikes) dt.groups = slices spikes_inline, spike_offset = dt.transform_spikes(sliced_spikes) return spikes_inline, spike_offset class Simulator(object): def __init__(self, model, reset, threshold, inputs, input_var = 'I', dt = defaultclock.dt, refractory = 0*ms, max_refractory = None, spikes = None, traces = None, groups = 1, slices = 1, overlap = 0*second, onset = 0*second, neurons = 1000, # = nodesize = number of neurons on this node = (total number of neurons on this node)/(number of slices) initial_values = None, unit_type = 'CPU', stepsize = 128*ms, precision = 'double', criterion = None, statemonitor_var=None, spikemonitor = False, nbr_spikes = 200, ntrials=1, method = 'Euler', # stand_alone=False, # neuron_group=None, # given_neuron_group=False ): # print refractory, max_refractory # self.neuron_group = neuron_group # self.given_neuron_group = False # self.stand_alone = given_neuron_group self.model = model self.reset = reset self.threshold = threshold self.inputs = inputs self.input_var = input_var self.dt = dt self.refractory = refractory self.max_refractory = max_refractory self.spikes = spikes self.traces = traces self.initial_values = initial_values self.groups = groups self.slices = slices self.overlap = overlap self.ntrials=ntrials self.onset = onset self.neurons = neurons self.unit_type = unit_type if type(statemonitor_var) is not list and statemonitor_var is not None: statemonitor_var = [statemonitor_var] self.statemonitor_var = statemonitor_var self.spikemonitor=spikemonitor self.nbr_spikes = nbr_spikes self.stepsize = stepsize self.precision = precision self.criterion = criterion self.method = method self.use_gpu = self.unit_type=='GPU' if self.statemonitor_var is not None: self.statemonitor_values = [zeros(self.neurons)]*len(statemonitor_var) self.initialize_neurongroup() self.transform_data() self.inject_input() if self.criterion.__class__.__name__ == 
'Brette': self.initialize_criterion(delays=zeros(self.neurons),tau_metric=zeros(self.neurons)) else: self.initialize_criterion(delays=zeros(self.neurons)) if self.use_gpu: self.initialize_gpu() def initialize_neurongroup(self): # Add 'refractory' parameter on the CPU only if not self.use_gpu: if self.max_refractory is not None: refractory = 'refractory' self.model.add_param('refractory', second) else: refractory = self.refractory else: if self.max_refractory is not None: refractory = 0*ms else: refractory = self.refractory # Must recompile the Equations : the functions are not transfered after pickling/unpickling self.model.compile_functions() # print refractory, self.max_refractory if type(refractory) is double: refractory=refractory*second # if self.give_neuron_group == False: self.group = NeuronGroup(self.neurons, # TODO: * slices? model=self.model, reset=self.reset, threshold=self.threshold, refractory=refractory, max_refractory = self.max_refractory, method = self.method, clock=Clock(dt=self.dt)) if self.initial_values is not None: for param, value in self.initial_values.iteritems(): self.group.state(param)[:] = value # else: # self.group = self.neuron_group def initialize_gpu(self): # Select integration scheme according to method if self.method == 'Euler': scheme = euler_scheme elif self.method == 'RK': scheme = rk2_scheme elif self.method == 'exponential_Euler': scheme = exp_euler_scheme else: raise Exception("The numerical integration method is not valid") self.mf = GPUModelFitting(self.group, self.model, self.criterion_object, self.input_var, self.neurons/self.groups, self.onset, statemonitor_var = self.statemonitor_var, spikemonitor = self.spikemonitor, nbr_spikes = self.nbr_spikes, duration = self.sliced_duration, precision=self.precision, scheme=scheme) def transform_data(self): self.transformer = DataTransformer(self.neurons, self.inputs, spikes = self.spikes, traces = self.traces, dt = self.dt, slices = self.slices, overlap = self.overlap, groups = self.groups,ntrials=self.ntrials) self.total_steps = self.transformer.total_steps self.sliced_duration = self.transformer.sliced_duration if self.ntrials>1: self.inputs_inline = self.inputs.flatten() self.sliced_inputs = self.inputs self.inputs_offset = zeros(self.neurons) else: self.sliced_inputs = self.transformer.slice_traces(self.inputs) self.inputs_inline, self.inputs_offset = self.transformer.transform_traces(self.sliced_inputs) if self.traces is not None: self.sliced_traces = self.transformer.slice_traces(self.traces) self.traces_inline, self.traces_offset = self.transformer.transform_traces(self.sliced_traces) else: self.sliced_traces, self.traces_inline, self.traces_offset = None, None, None if self.spikes is not None: if self.ntrials>1: self.sliced_spikes = self.transformer.slice_spikes(self.spikes) self.spikes_inline, self.trials_offset = self.transformer.transform_trials(self.spikes) self.spikes_offset = zeros((self.neurons),dtype=int) else: self.sliced_spikes = self.transformer.slice_spikes(self.spikes) self.spikes_inline, self.spikes_offset = self.transformer.transform_spikes(self.sliced_spikes) self.trials_offset=[0] else: self.sliced_spikes, self.spikes_inline, self.spikes_offset,self.trials_offset = None, None, None, None def inject_input(self): # Injects current in consecutive subgroups, where I_offset have the same value # on successive intervals I_offset = self.inputs_offset k = -1 for i in hstack((nonzero(diff(I_offset))[0], len(I_offset) - 1)): I_offset_subgroup_value = I_offset[i] I_offset_subgroup_length = i 
- k sliced_subgroup = self.group.subgroup(I_offset_subgroup_length) input_sliced_values = self.inputs_inline[I_offset_subgroup_value:I_offset_subgroup_value + self.total_steps] sliced_subgroup.set_var_by_array(self.input_var, TimedArray(input_sliced_values, clock=self.group.clock)) k = i def initialize_criterion(self, **criterion_params): # general criterion parameters params = dict(group=self.group, traces=self.sliced_traces, spikes=self.sliced_spikes, targets_count=self.groups*self.slices, duration=self.sliced_duration, onset=self.onset, spikes_inline=self.spikes_inline, spikes_offset=self.spikes_offset, traces_inline=self.traces_inline, traces_offset=self.traces_offset,trials_offset=self.trials_offset) for key,val in criterion_params.iteritems(): params[key] = val criterion_name = self.criterion.__class__.__name__ # criterion-specific parameters if criterion_name == 'GammaFactor': params['delta'] = self.criterion.delta params['coincidence_count_algorithm'] = self.criterion.coincidence_count_algorithm params['fr_weight'] = self.criterion.fr_weight self.criterion_object = GammaFactorCriterion(**params) if criterion_name == 'GammaFactor2': params['delta'] = self.criterion.delta params['coincidence_count_algorithm'] = self.criterion.coincidence_count_algorithm params['fr_weight'] = self.criterion.fr_weight params['nlevels'] = self.criterion.nlevels params['level_duration'] = self.criterion.level_duration self.criterion_object = GammaFactorCriterion2(**params) if criterion_name == 'LpError': params['p'] = self.criterion.p params['varname'] = self.criterion.varname params['method'] = self.criterion.method params['insets'] = self.criterion.insets params['outsets'] = self.criterion.outsets params['points'] = self.criterion.points self.criterion_object = LpErrorCriterion(**params) if criterion_name == 'VanRossum': params['tau'] = self.criterion.tau self.criterion_object = VanRossumCriterion(**params) if criterion_name == 'Brette': self.criterion_object = BretteCriterion(**params) def update_neurongroup(self, **param_values): """ Inject fitting parameters into the NeuronGroup """ # Sets the parameter values in the NeuronGroup object self.group.reinit() for param, value in param_values.iteritems(): self.group.state(param)[:] = kron(value, ones(self.slices)) # kron param_values if slicing # Reinitializes the model variables if self.initial_values is not None: for param, value in self.initial_values.iteritems(): self.group.state(param)[:] = value def combine_sliced_values(self, values): if type(values) is tuple: combined_values = tuple([sum(reshape(v, (self.slices, -1)), axis=0) for v in values]) else: combined_values = sum(reshape(values, (self.slices, -1)), axis=0) return combined_values def run(self, **param_values): delays = param_values.pop('delays', zeros(self.neurons)) # print self.refractory,self.max_refractory if self.max_refractory is not None: refractory = param_values.pop('refractory', zeros(self.neurons)) else: refractory = self.refractory*ones(self.neurons) tau_metric = param_values.pop('tau_metric', zeros(self.neurons)) self.update_neurongroup(**param_values) # repeat spike delays and refractory to take slices into account delays = kron(delays, ones(self.slices)) refractory = kron(refractory, ones(self.slices)) tau_metric = kron(tau_metric, ones(self.slices)) # TODO: add here parameters to criterion_params if a criterion must use some parameters criterion_params = dict(delays=delays) if self.criterion.__class__.__name__ == 'Brette': criterion_params['tau_metric'] = tau_metric 
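        # Rebuild the criterion object with the per-neuron (and per-slice) delays,
        # plus tau_metric for the 'Brette' criterion, before launching the
        # simulation on the GPU or CPU below.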
self.update_neurongroup(**param_values) self.initialize_criterion(**criterion_params) if self.use_gpu: # Reinitializes the simulation object self.mf.reinit_vars(self.criterion_object, self.inputs_inline, self.inputs_offset, self.spikes_inline, self.spikes_offset, self.traces_inline, self.traces_offset, delays, refractory ) # LAUNCHES the simulation on the GPU self.mf.launch(self.sliced_duration, self.stepsize) # Synchronize the GPU values with a call to gpuarray.get() self.criterion_object.update_gpu_values() else: # set the refractory period if self.max_refractory is not None: self.group.refractory = refractory # Launch the simulation on the CPU self.group.clock.reinit() net = Network(self.group, self.criterion_object) if self.statemonitor_var is not None: self.statemonitors = [] for state in self.statemonitor_var: monitor = StateMonitor(self.group, state, record=True) self.statemonitors.append(monitor) net.add(monitor) net.run(self.sliced_duration) sliced_values = self.criterion_object.get_values() combined_values = self.combine_sliced_values(sliced_values) values = self.criterion_object.normalize(combined_values) return values def get_statemonitor_values(self): if not self.use_gpu: return [monitor.values for monitor in self.statemonitors] else: return self.mf.get_statemonitor_values() def get_spikemonitor_values(self): if not self.use_gpu: return [monitor.values for monitor in self.statemonitors] else: return self.mf.get_spikemonitor_values() def simulate( model, reset = None, threshold = None, input = None, input_var = 'I', dt = defaultclock.dt, refractory = 0*ms, max_refractory = None, data = None, groups = 1, slices = 1, overlap = 0*second, onset = None, neurons = 1, initial_values = None, use_gpu = False, stepsize = 128*ms, precision = 'double', criterion = None, ntrials=1, record = None, spikemonitor = False, nbr_spikes = 200, method = 'Euler', stand_alone=False, # neuron_group = none, **params): unit_type = 'CPU' if use_gpu: unit_type = 'GPU' for param in params.keys(): if (param not in model._diffeq_names) and (param != 'delays') and (param != 'tau_metric'): raise Exception("Parameter %s must be defined as a parameter in the model" % param) if criterion is None: criterion = GammaFactor() data = array(data) if criterion.type == 'spikes': # Make sure that 'data' is a N*2-array if data.ndim == 1: data = concatenate((zeros((len(data), 1)), data.reshape((-1, 1))), axis=1) spikes = data traces = None groups = int(array(data)[:, 0].max() + 1) # number of target trains elif criterion.type == 'trace': if data.ndim == 1: data = data.reshape((1,-1)) spikes = None traces = data groups = data.shape[0] elif criterion.type == 'both': # TODO log_warn("Not implemented yet") pass inputs = input if inputs.ndim==1: inputs = inputs.reshape((1,-1)) # dt must be set if dt is None: raise Exception('dt (sampling frequency of the input) must be set') # default overlap when no time slicing if slices == 1: overlap = 0*ms if onset is None: onset = overlap # check numerical integration method if use_gpu and method not in ['Euler', 'RK', 'exponential_Euler']: raise Exception("The method can only be 'Euler', 'RK', or 'exponential_Euler' when using the GPU") if method not in ['Euler', 'RK', 'exponential_Euler', 'linear', 'nonlinear']: raise Exception("The method can only be 'Euler', 'RK', 'exponential_Euler', 'linear', or 'nonlinear'") # determines whether optimization over refractoriness or not if type(refractory) is tuple or type(refractory) is list: params['refractory'] = refractory max_refractory = 
refractory[-1] else: max_refractory = None # Initialize GPU if use_gpu: set_gpu_device(0) # if neuron_group is not None: # self.neuron_group = neuron_group # self.given_neuron_group = True # else: # self.given_neuron_group = False simulator = Simulator(model, reset, threshold, inputs, input_var = input_var, dt = dt, refractory = refractory, max_refractory = max_refractory, spikes = spikes, traces = traces, groups = groups, slices = slices, overlap = overlap, onset = onset, neurons = neurons, initial_values = initial_values, unit_type = unit_type, stepsize = stepsize, precision = precision, ntrials=ntrials, criterion = criterion, statemonitor_var = record, spikemonitor = spikemonitor, nbr_spikes = nbr_spikes, method = method, # stand_alone=stand_alone, # self.neuron_group, # self.given_neuron_group ) criterion_values = simulator.run(**params) if record is not None and spikemonitor is False: record_values = simulator.get_statemonitor_values() return criterion_values, record_values elif record is not None and spikemonitor is True: record_values = simulator.get_statemonitor_values() spike_times = simulator.get_spikemonitor_values() return criterion_values, record_values,spike_times elif record is None and spikemonitor is True: spike_times = simulator.get_spikemonitor_values() return criterion_values,spike_times else: return criterion_values if __name__ == '__main__': from brian import loadtxt, ms, savetxt, loadtxt, Equations, NeuronGroup, run, SpikeMonitor,\ StateMonitor, Network from pylab import * equations = Equations(''' dV/dt=(R*I-V)/tau : 1 I : 1 R : 1 tau : second ''') input = loadtxt('current.txt') trace = loadtxt('trace_artificial.txt') neurons = 1 groups = 4 overlap = 250*ms R = [3e9]*neurons tau = linspace(25*ms, 30*ms, neurons) # GAMMA FACTOR criterion = GammaFactor(delta=2*ms) spikes= loadtxt('spikes_artificial.txt') data = spikes # LP ERROR # criterion = LpError(p=2, varname='V') # input, trace = slice_trace(input, trace, slices = groups, overlap = overlap) # data = trace # SIMULATE, EVALUATE CRITERION AND RECORD TRACES ON GPU criterion_values, record_values = simulate( model = equations, reset = 0, threshold = 1, data = trace, input = input, use_gpu = True, dt = .1*ms, criterion = criterion, record = 'V', onset = overlap, neurons = neurons*groups, R = R, tau = tau, ) print criterion_values print record_values.shape n = data.shape[0] print data.shape for i in xrange(n): subplot(100*n+10+(i+1)) plot(trace[i,:]) plot(record_values[i,:]) show() brian-1.3.1/brian/experimental/morphology.py000066400000000000000000000417141167451777000211760ustar00rootroot00000000000000''' Neuronal morphology module for Brian. TODO: * Maybe str: should give a text representation of the tree * set_length etc.: should be recursive * check_consistency: check that lengths, area and coordinates are consistent (and fix) * rescale: change number of compartments ''' from brian.group import Group from scipy import rand from numpy import * from brian.units import meter from brian.stdunits import um import warnings from pylab import figure import copy try: from mpl_toolkits.mplot3d import Axes3D except: warnings.warn('Pylab 0.99.1 is required for 3D plots') __all__ = ['Morphology', 'Cylinder', 'Soma'] class Morphology(object): ''' Neuronal morphology (=tree of branches). 
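
    A short construction sketch (the values and the SWC filename are purely
    illustrative)::

        soma = Soma(diameter=10*um)                 # spherical soma
        soma.dendrite = Cylinder(length=100*um, diameter=1*um, n=10)
        cell = Morphology('cell.swc')               # or load from an SWC file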
''' def __init__(self, filename=None, n=None): self.children = [] self._namedkid = {} self.iscompressed = False if filename is not None: self.loadswc(filename) elif n is not None: # Creates a branch with n compartments # The problem here is that these parameters should have some self-consistency self.x, self.y, self.z, self.diameter, self.length, self.area = [zeros(n) for _ in range(6)] def set_length(self): ''' Sets the length of compartments according to their coordinates ''' x = hstack((0 * um, self.x)) y = hstack((0 * um, self.y)) z = hstack((0 * um, self.z)) self.length = sum((x[1:] - x[:-1]) ** 2 + (y[1:] - y[:-1]) ** 2 + (z[1:] - z[:-1]) ** 2) ** .5 def set_area(self): ''' Sets the area of compartments according to diameter and length (assuming cylinders) ''' self.area = pi * self.diameter * self.length def set_coordinates(self): ''' Sets the coordinates of compartments according to their lengths (taking a random direction) ''' l = cumsum(self.length) theta = rand()*2 * pi phi = rand()*2 * pi self.x = l * sin(theta) * cos(phi) self.y = l * sin(theta) * sin(phi) self.z = l * cos(theta) def loadswc(self, filename): ''' Reads a SWC file containing a neuronal morphology. Large database at http://neuromorpho.org/neuroMorpho Information below from http://www.mssm.edu/cnic/swc.html SWC File Format The format of an SWC file is fairly simple. It is a text file consisting of a header with various fields beginning with a # character, and a series of three dimensional points containing an index, radius, type, and connectivity information. The lines in the text file representing points have the following layout. n T x y z R P n is an integer label that identifies the current point and increments by one from one line to the next. T is an integer representing the type of neuronal segment, such as soma, axon, apical dendrite, etc. The standard accepted integer values are given below. * 0 = undefined * 1 = soma * 2 = axon * 3 = dendrite * 4 = apical dendrite * 5 = fork point * 6 = end point * 7 = custom x, y, z gives the cartesian coordinates of each node. R is the radius at that node. P indicates the parent (the integer label) of the current point or -1 to indicate an origin (soma). ''' # 1) Create the list of segments, each segment has a list of children lines = open(filename).read().splitlines() segment = [] # list of segments types = ['undefined', 'soma', 'axon', 'dendrite', 'apical', 'fork', 'end', 'custom'] previousn = -1 for line in lines: if line[0] != '#': # comment numbers = line.split() n = int(numbers[0]) - 1 T = types[int(numbers[1])] x = float(numbers[2]) * um y = float(numbers[3]) * um z = float(numbers[4]) * um R = float(numbers[5]) * um P = int(numbers[6]) - 1 # 0-based indexing if (n != previousn + 1): raise ValueError, "Bad format in file " + filename seg = dict(x=x, y=y, z=z, T=T, diameter=2 * R, parent=P, children=[]) location = (x, y, z) if T == 'soma': seg['area'] = 4 * pi * R ** 2 seg['length'] = 0 * um else: # dendrite locationP = (segment[P]['x'], segment[P]['y'], segment[P]['z']) seg['length'] = (sum((array(location) - array(locationP)) ** 2)) ** .5 * meter seg['area'] = seg['length'] * 2 * pi * R if P >= 0: segment[P]['children'].append(n) segment.append(seg) previousn = n # We assume that the first segment is the root self.create_from_segments(segment) def create_from_segments(self, segment, origin=0): """ Recursively create the morphology from a list of segments. Each segment has attributes: x,y,z,diameter,area,length (vectors) and children (list). 
It also creates a dictionary of names (_namedkid). """ n = origin if segment[origin]['T'] != 'soma': # if it's a soma, only one compartment while (len(segment[n]['children']) == 1) and (segment[n]['T'] != 'soma'): # Go to the end of the branch n += 1 # End of branch branch = segment[origin:n + 1] # Set attributes self.diameter, self.length, self.area, self.x, self.y, self.z = \ zip(*[(seg['diameter'], seg['length'], seg['area'], seg['x'], seg['y'], seg['z']) for seg in branch]) self.diameter, self.length, self.area, self.x, self.y, self.z = array(self.diameter), array(self.length), \ array(self.area), array(self.x), array(self.y), array(self.z) self.type = segment[n]['T'] # normally same type for all compartments in the branch # Create children (list) self.children = [Morphology().create_from_segments(segment, origin=c) for c in segment[n]['children']] # Create dictionary of names (enumerates children from number 1) for i, child in enumerate(self.children): self._namedkid[str(i + 1)] = child # Name the child if possible if child.type in ['soma', 'axon', 'dendrite']: if child.type in self._namedkid: self._namedkid[child.type] = None # two children with the same name: erase (see next block) else: self._namedkid[child.type] = child # Erase useless names for k in self._namedkid.keys(): if self._namedkid[k] is None: del self._namedkid[k] # If two kids, name them L (left) and R (right) if len(self.children) == 2: self._namedkid['L'] = self._namedkid['1'] self._namedkid['R'] = self._namedkid['2'] return self def branch(self): ''' Returns the current branch without the children. ''' morpho = copy.copy(self) morpho.children = [] morpho._namedkid = {} return morpho def __getitem__(self, x): """ Returns the subtree named x. Ex.: neuron['axon'] or neuron['11213'] neuron[10*um:20*um] returns the subbranch from 10 um to 20 um. neuron[10*um] returns one compartment. neuron[5] returns compartment number 5. TODO: neuron[:] returns the full branch. """ if type(x) == type(0): # int: returns one compartment morpho = self.branch() i = x j = i + 1 morpho.diameter = morpho.diameter[i:j] morpho.length = morpho.length[i:j] morpho.area = morpho.area[i:j] morpho.x = morpho.x[i:j] morpho.y = morpho.y[i:j] morpho.z = morpho.z[i:j] if hasattr(morpho, '_origin'): morpho._origin += i return morpho elif isinstance(x, slice): # neuron[10*um:20*um] morpho = self.branch() start, stop = x.start, x.stop l = cumsum(morpho.length) # coordinate on the branch i = searchsorted(l, start) j = searchsorted(l, stop) morpho.diameter = morpho.diameter[i:j] morpho.length = morpho.length[i:j] morpho.area = morpho.area[i:j] morpho.x = morpho.x[i:j] morpho.y = morpho.y[i:j] morpho.z = morpho.z[i:j] if hasattr(morpho, '_origin'): morpho._origin += i return morpho elif isinstance(x, float): # neuron[10*um] morpho = self.branch() l = cumsum(morpho.length) # coordinate on the branch i = searchsorted(l, x) j = i + 1 morpho.diameter = morpho.diameter[i:j] morpho.length = morpho.length[i:j] morpho.area = morpho.area[i:j] morpho.x = morpho.x[i:j] morpho.y = morpho.y[i:j] morpho.z = morpho.z[i:j] if hasattr(morpho, '_origin'): morpho._origin += i return morpho else: x = str(x) # convert int to string if (len(x) > 1) and all([c in 'LR123456789' for c in x]): # binary string of the form LLLRLR or 1213 (or mixed) return self._namedkid[x[0]][x[1:]] elif x in self._namedkid: return self._namedkid[x] else: raise AttributeError, "The subtree " + x + " does not exist" def __setitem__(self, x, kid): """ Inserts the subtree and name it x. 
Ex.: neuron['axon'] or neuron['11213'] If the tree already exists with another name, then it creates a synonym for this tree. The coordinates of the subtree are relative before function call, and are absolute after function call. """ x = str(x) # convert int to string if (len(x) > 1) and all([c in 'LR123456789' for c in x]): # binary string of the form LLLRLR or 1213 (or mixed) self._namedkid[x[0]][x[1:]] = kid elif x in self._namedkid: raise AttributeError, "The subtree " + x + " already exists" else: # Update coordinates kid.x += self.x[-1] kid.y += self.y[-1] kid.z += self.z[-1] if kid not in self.children: self.children.append(kid) self._namedkid[str(len(self.children))] = kid # numbered child self._namedkid[x] = kid def __delitem__(self, x): """ Removes the subtree x. """ x = str(x) # convert int to string if (len(x) > 1) and all([c in 'LR123456789' for c in x]): # binary string of the form LLLRLR or 1213 (or mixed) del self._namedkid[x[0]][x[1:]] elif x in self._namedkid: child = self._namedkid[x] # Delete from name dictionary for name, kid in self._namedkid.items(): if kid is child: del self._namedkid[name] # Delete from list of children for i, kid in enumerate(self.children): if kid is child: del self.children[i] else: raise AttributeError, "The subtree " + x + " does not exist" def __getattr__(self, x): """ Returns the subtree named x. Ex.: axon=neuron.axon """ return self[x] def __setattr__(self, x, kid): """ Attach a subtree and named it x. If the subtree is None then the subtree x is deleted. Ex.: neuron.axon=Soma(diameter=10*um) Ex.: neuron.axon=None """ if isinstance(kid, Morphology): if kid is None: del self[x] else: self[x] = kid else: # If it is not a subtree, then it's a normal class attribute object.__setattr__(self, x, kid) def __len__(self): """ Returns the total number of compartments. """ return len(self.x) + sum(len(child) for child in self.children) def compress(self, diameter=None, length=None, area=None, x=None, y=None, z=None, origin=0): """ Compresses the tree by changing the compartment vectors to views on a matrix (or vectors). The morphology cannot be changed anymore but all other functions should work normally. origin : offset in the base matrix """ self._origin = origin n = len(self.x) # Update values of vectors diameter[:n] = self.diameter length[:n] = self.length area[:n] = self.area x[:n] = self.x y[:n] = self.y z[:n] = self.z # Attributes are now views on these vectors self.diameter = diameter[:n] self.length = length[:n] self.area = area[:n] self.x = x[:n] self.y = y[:n] self.z = z[:n] for kid in self.children: kid.compress(diameter=diameter[n:], length=length[n:], area=area[n:], x=x[n:], y=y[n:], z=z[n:], origin=n) n += len(kid) self.iscompressed = True def plot(self, axes=None, simple=True, origin=None): """ Plots the morphology in 3D. Units are um. 
axes : the figure axes (new figure if not given) simple : if True, the diameter of branches is ignored """ if axes is None: # new figure fig = figure() axes = Axes3D(fig) x, y, z, d = self.x / um, self.y / um, self.z / um, self.diameter / um if origin is not None: x0, y0, z0 = origin x = hstack((x0, x)) y = hstack((y0, y)) z = hstack((z0, z)) if len(x) == 1: # root with a single compartment: probably just the soma axes.plot(x, y, z, "r.", linewidth=d[0]) else: if simple: axes.plot(x, y, z, "k") else: # linewidth reflects compartment diameter for n in range(1, len(x)): axes.plot([x[n - 1], x[n]], [y[n - 1], y[n]], [z[n - 1], z[n]], 'k', linewidth=d[n - 1]) for c in self.children: c.plot(origin=(x[-1], y[-1], z[-1]), axes=axes, simple=simple) class Cylinder(Morphology): """ A cylinder. """ def __init__(self, length=None, diameter=None, n=1, type=None, x=None, y=None, z=None): """ Creates a cylinder. n: number of compartments. type : 'soma', 'axon' or 'dendrite' x,y,z : end point (relative to origin of cylinder) length is optional (and ignored) if x,y,z is specified If x,y,z unspecified: random direction """ Morphology.__init__(self) if x is None: theta = rand()*2 * pi phi = rand()*2 * pi x = length * sin(theta) * cos(phi) y = length * sin(theta) * sin(phi) z = length * cos(theta) else: length = (sum(array((x, y, z)) ** 2)) ** .5 scale = arange(1, n + 1) * 1. / n self.x, self.y, self.z = x * scale, y * scale, z * scale self.length = ones(n) * length / n self.diameter = ones(n) * diameter self.area = ones(n) * pi * diameter * length / n self.type = type class Soma(Morphology): # or Sphere? """ A spherical soma. """ def __init__(self, diameter=None): Morphology.__init__(self) self.diameter = ones(1) * diameter self.area = ones(1) * pi * diameter ** 2 self.length = zeros(1) self.x, self.y, self.z = zeros(1), zeros(1), zeros(1) self.type = 'soma' if __name__ == '__main__': from pylab import show #morpho=Morphology('mp_ma_40984_gc2.CNG.swc') # retinal ganglion cell morpho = Morphology('oi24rpy1.CNG.swc') # visual L3 pyramidal cell print len(morpho), "compartments" morpho.axon = None morpho.plot() #morpho=Cylinder(length=10*um,diameter=1*um,n=10) #morpho.plot(simple=True) morpho = Soma(diameter=10 * um) morpho.dendrite = Cylinder(length=3 * um, diameter=1 * um, n=10) morpho.dendrite.L = Cylinder(length=5 * um, diameter=1 * um, n=10) morpho.dendrite.R = Cylinder(length=7 * um, diameter=1 * um, n=10) morpho.dendrite.LL = Cylinder(length=3 * um, diameter=1 * um, n=10) morpho.axon = Morphology(n=5) morpho.axon.diameter = ones(5) * 1 * um morpho.axon.length = [1 * um, 2 * um, 1 * um, 3 * um, 1 * um] morpho.axon.set_coordinates() morpho.axon.set_area() morpho.plot(simple=True) show() brian-1.3.1/brian/experimental/mp_ma_40984_gc2.CNG.swc000066400000000000000000000267671167451777000221600ustar00rootroot00000000000000# Original file mp.ma.40984.gc2.asc.swc edited by Duncan Donohue using SWCfix version 1.21 on 8/31/05. # Errors and fixes documented in mp.ma.40984.gc2.asc.swc.err. See SWCfix1.21.doc for more information. # # Amaral to SWC conversion from L-Measure. R. Scorcioni: rscorcio@gmu.edu # Original fileName:C:\Documents and Settings\Admin\Desktop\Miller-Eutectic\mp.ma.40984.gc2.asc # # ORIGINAL_SOURCE # CREATURE # REGION # FIELD/LAYER # TYPE # CONTRIBUTOR # REFERENCE # RAW # EXTRAS # SOMA_AREA # SHRINKAGE_CORRECTION # VERSION_NUMBER # VERSION_DATE # ********************************************* # SCALE 1.0 1.0 1.0 1 1 0.2917 0.04167 -0.1458 12.030 -1 2 3 12. 6.5 1. 0.850 1 3 3 15. 9. 
1.5 0.75 2 4 3 18.5 10. 2.5 0.65 3 5 3 17. 8. 3. 0.15 4 6 3 18. 7. 2.5 0.15 5 7 3 16. 4. 8. 0.15 6 8 3 14. 0.5 8. 0.15 7 9 3 7. -11.5 9. 0.09 8 10 3 1.5 -19. 8. 0.09 9 11 3 -3.5 -22. 9. 0.09 10 12 3 -9.5 -24.5 9. 0.09 11 13 3 -17.5 -24. 9. 0.09 12 14 3 -21.5 -23. 10.5 0.09 13 15 3 -25. -22.5 10.5 0.09 14 16 3 21.5 10. 2. 0.45 4 17 3 24.5 11.5 2.5 0.4 16 18 3 27. 10.5 4. 0.3 17 19 3 28.5 10. 6. 0.2 18 20 3 29.5 11. 7. 0.2 19 21 3 31. 9.5 8.5 0.25 20 22 3 36. 11.5 9. 0.2 21 23 3 37.5 9. 10.5 0.2 22 24 3 38.5 4. 10. 0.25 23 25 3 42. 3.5 11. 0.2 24 26 3 44. 0.5 10. 0.2 25 27 3 48.5 -0.5 10.5 0.2 26 28 3 60. -2. 9. 0.2 27 29 3 68. -1.5 9.5 0.2 28 30 3 72. -3. 9.5 0.25 29 31 3 75.5 -6.5 9. 0.15 30 32 3 79. -9.5 9.5 0.15 31 33 3 84.5 -12. 9. 0.15 32 34 3 96. -13. 8.5 0.15 33 35 3 101.5 -16.5 10. 0.15 34 36 3 112. -17. 10. 0.15 35 37 3 119.5 -18.5 10. 0.15 36 38 3 126. -20. 10. 0.15 37 39 3 133.5 -21.5 10. 0.09 38 40 3 137.5 -21.5 10. 0.09 39 41 3 140.5 -25. 9.5 0.09 40 42 3 139.5 -30. 10.5 0.09 41 43 3 141.5 -34.5 10.5 0.09 42 44 3 141.5 -38. 10.5 0.09 43 45 3 142. -41. 10.5 0.09 44 46 3 145.5 -44. 10.5 0.09 45 47 3 149.5 -48.5 9.5 0.09 46 48 3 154.5 -52. 11. 0.09 47 49 3 158. -58.5 11. 0.09 48 50 3 157.5 -65. 12.5 0.09 49 51 3 156. -70.5 12.5 0.09 50 52 3 155. -80. 12.5 0.09 51 53 3 155. -84. 13.5 0.09 52 54 3 158.5 -84. 13.5 0.09 53 55 3 160.5 -85.5 13.5 0.09 54 56 3 10. -4. 3. 1.95 1 57 3 10.5 -6. 4. 1.85 56 58 3 14.5 -7. 3.5 1.45 57 59 3 15. -10.5 3.5 1.45 58 60 3 18.5 -12.5 3.5 1.45 59 61 3 20.5 -15.5 3.5 1.45 60 62 3 20.5 -18. 3.5 1.45 61 63 3 19.5 -22.5 7.5 0.8 62 64 3 17. -30. 8.5 0.8 63 65 3 15.5 -35. 8.5 0.9 64 66 3 14. -40. 9. 0.9 65 67 3 14. -46. 8.5 0.9 66 68 3 16. -52.5 9. 1.75 67 69 3 13.5 -56.5 9. 0.65 68 70 3 12.5 -59. 8. 0.75 69 71 3 10. -61. 8.5 0.15 70 72 3 5. -65. 7. 0.15 71 73 3 -0.5 -69.5 6.5 0.15 72 74 3 -3.5 -73.5 7. 0.09 73 75 3 -6.5 -78.5 7.5 0.09 74 76 3 -9. -82. 7.5 0.09 75 77 3 -8.5 -89. 7.5 0.09 76 78 3 -7.5 -95. 7. 0.09 77 79 3 -6. -101. 7. 0.09 78 80 3 -8. -113. 6.5 0.09 79 81 3 -11.5 -123. 7. 0.09 80 82 3 -12. -130.5 7.5 0.09 81 83 3 -11.5 -141. 7. 0.09 82 84 3 -13.5 -150.5 7.5 0.09 83 85 3 -15. -157.5 7. 0.09 84 86 3 -17.5 -162.5 7.5 0.09 85 87 3 -18.5 -168. 8. 0.09 86 88 3 -20.5 -170.5 8. 0.09 87 89 3 12. -63.5 9. 0.45 70 90 3 13. -68.5 9. 0.4 89 91 3 14.5 -73. 8.5 0.4 90 92 3 17.5 -78. 9.5 0.4 91 93 3 19. -84.5 10. 0.4 92 94 3 21.5 -92.5 9.5 0.4 93 95 3 24. -99.5 9. 0.4 94 96 3 25. -103. 9. 0.4 95 97 3 25.5 -104.5 10. 0.4 96 98 3 27. -107. 9.5 0.4 97 99 3 29. -109. 10.5 0.4 98 100 3 31.5 -114.5 10.5 0.4 99 101 3 33. -120.5 10. 0.4 100 102 3 32. -123.5 10. 0.4 101 103 3 30. -127. 9. 0.15 102 104 3 28. -131.5 8.5 0.15 103 105 3 26.5 -133.5 9.5 0.09 104 106 3 31.5 -133. 8. 0.09 104 107 3 35.5 -134.5 7. 0.09 106 108 3 33.5 -124. 9. 0.3 102 109 3 37. -125.5 9. 0.2 108 110 3 39. -126.5 8.5 0.15 109 111 3 42.5 -129.5 8.5 0.15 110 112 3 46. -132. 7. 0.15 111 113 3 49. -135.5 6.5 0.09 112 114 3 49. -137. 7. 0.09 113 115 3 53.5 -141. 6. 0.09 114 116 3 57. -145.5 6. 0.09 115 117 3 60.5 -150. 6. 0.09 116 118 3 61.5 -154. 6. 0.09 117 119 3 62. -158.5 7.5 0.09 118 120 3 67.5 -165. 7.5 0.09 119 121 3 72.5 -170.5 8. 0.09 120 122 3 72.5 -176.5 7.5 0.09 121 123 3 76.5 -186. 8.5 0.09 122 124 3 76.5 -187. 8.5 0.09 123 125 3 21.5 -53.5 9. 0.65 68 126 3 25.5 -55.5 9.5 0.55 125 127 3 30.5 -58. 10. 0.45 126 128 3 33. -58.5 9.5 0.45 127 129 3 33. -62. 9. 0.15 128 130 3 35.5 -66.5 7.5 0.15 129 131 3 38. -70. 8.5 0.15 130 132 3 39.5 -75. 9. 0.15 131 133 3 40. -78.5 8. 0.15 132 134 3 42. 
-81. 8.5 0.09 133 135 3 44. -86.5 9. 0.09 134 136 3 44.5 -90. 8.5 0.09 135 137 3 47. -93. 8. 0.09 136 138 3 50. -95. 8. 0.09 137 139 3 52. -97. 8. 0.09 138 140 3 54.5 -97.5 8. 0.09 139 141 3 56. -98.5 8. 0.09 140 142 3 59.5 -99.5 8. 0.09 141 143 3 62.5 -102. 9. 0.09 142 144 3 69. -105. 9.5 0.09 143 145 3 71.5 -103.5 9.5 0.09 144 146 3 77. -103. 9.5 0.09 145 147 3 79.5 -102.5 9.5 0.09 146 148 3 36.5 -57. 10. 0.3 128 149 3 39. -56. 10. 0.25 148 150 3 42. -58. 9.5 0.25 149 151 3 45. -58.5 9.5 0.2 150 152 3 49. -62. 9.5 0.2 151 153 3 52. -63. 8.5 0.2 152 154 3 55. -62.5 8.5 0.2 153 155 3 56. -64.5 8.5 0.2 154 156 3 58.5 -65.5 8.5 0.2 155 157 3 61. -65.5 7.5 0.2 156 158 3 61.5 -69. 7.5 0.2 157 159 3 63.5 -71. 7. 0.2 158 160 3 66. -73.5 6.5 0.2 159 161 3 68. -75. 6.5 0.2 160 162 3 67.5 -77. 6.5 0.2 161 163 3 69.5 -79. 6.5 0.2 162 164 3 70.5 -82. 6.5 0.2 163 165 3 73. -85.5 6.5 0.2 164 166 3 74.5 -88. 6.5 0.2 165 167 3 75. -89.5 6. 0.2 166 168 3 79.5 -89.5 6. 0.15 167 169 3 82.5 -89.5 5. 0.15 168 170 3 85. -93.5 6. 0.15 169 171 3 88. -98. 6. 0.15 170 172 3 90. -99.5 7.5 0.15 171 173 3 91.5 -100.5 7. 0.15 172 174 3 93. -103.5 7. 0.15 173 175 3 94.5 -103.5 7. 0.15 174 176 3 97. -104.5 7.5 0.15 175 177 3 101.5 -106. 8. 0.15 176 178 3 103.5 -108.5 8. 0.15 177 179 3 104.5 -111. 8. 0.15 178 180 3 106.5 -112. 8. 0.15 179 181 3 108. -115. 8.5 0.15 180 182 3 112.5 -117. 8.5 0.15 181 183 3 115.5 -120. 8.5 0.15 182 184 3 114. -124.5 8. 0.15 183 185 3 116.5 -125. 8. 0.15 184 186 3 116.5 -127.5 8. 0.15 185 187 3 119. -131.5 7.5 0.15 186 188 3 121. -132. 9. 0.15 187 189 3 120.5 -135. 8.5 0.15 188 190 3 124.5 -140.5 9.5 0.15 189 191 3 24. -22. 4. 0.75 62 192 3 25.5 -26. 3.5 0.75 191 193 3 28.5 -30. 3.5 0.850 192 194 3 28. -32.5 3. 0.5 193 195 3 28. -36.5 2.5 0.5 194 196 3 29.5 -42.5 2.5 0.45 195 197 3 30. -49. 1.5 0.45 196 198 3 28.5 -56.5 1.5 0.45 197 199 3 24.5 -65.5 1.5 0.45 198 200 3 22.5 -71. 1.5 0.45 199 201 3 20. -75. 1.5 0.350 200 202 3 17. -81.5 1. 0.350 201 203 3 14. -89. 0.5 0.350 202 204 3 11. -96.5 0. 0.350 203 205 3 9. -99. -1. 0.75 204 206 3 6. -99.5 -1. 0.09 205 207 3 0.5 -101. 1. 0.09 206 208 3 -3. -101. 0.5 0.09 207 209 3 -9. -102.5 1.5 0.09 208 210 3 -13.5 -105. 3. 0.09 209 211 3 -19. -107.5 3. 0.09 210 212 3 -28.5 -109. 5. 0.09 211 213 3 -32.5 -106.5 5. 0.09 212 214 3 -43. -107. 6.5 0.09 213 215 3 -47. -107.5 6.5 0.09 214 216 3 -51. -109. 6. 0.09 215 217 3 -57. -110. 6.5 0.09 216 218 3 -63.5 -110. 6.5 0.09 217 219 3 -70. -110. 6.5 0.09 218 220 3 -76. -110. 6.5 0.09 219 221 3 -84.5 -112.5 6. 0.09 220 222 3 -96. -109.5 7.5 0.09 221 223 3 -102. -108. 7.5 0.09 222 224 3 -107.5 -110.5 6.5 0.09 223 225 3 -113.5 -113.5 8. 0.09 224 226 3 -125. -118.5 7. 0.049 225 227 3 -137.5 -122. 7. 0.049 226 228 3 -143. -125. 6.5 0.049 227 229 3 -147. -126.5 6.5 0.049 228 230 3 10. -105.5 -1.5 0.350 205 231 3 10. -112. -2. 0.350 230 232 3 9.5 -114.5 -1.5 0.6 231 233 3 9. -117. -1. 0.15 232 234 3 6.5 -119.5 3. 0.15 233 235 3 5. -124. 3. 0.15 234 236 3 3.5 -127. 5. 0.15 235 237 3 2. -131.5 5.5 0.15 236 238 3 0.5 -135. 6.5 0.15 237 239 3 0.5 -139.5 6.5 0.15 238 240 3 -1. -142.5 7.5 0.15 239 241 3 -0.5 -149. 6. 0.15 240 242 3 -0.5 -152.5 6.5 0.09 241 243 3 -1. -161. 5.5 0.09 242 244 3 0.5 -164. 5.5 0.09 243 245 3 2. -168. 6.5 0.09 244 246 3 1. -173. 6.5 0.09 245 247 3 1.5 -178.5 6.5 0.09 246 248 3 2.5 -183.5 6.5 0.09 247 249 3 1.5 -189. 6.5 0.09 248 250 3 -0.5 -193.5 6.5 0.09 249 251 3 -1. -197. 6. 0.09 250 252 3 -3. -203.5 5.5 0.09 251 253 3 -5.5 -208.5 5.5 0.09 252 254 3 -6.5 -215. 5.5 0.09 253 255 3 -5.5 -223. 6. 
0.09 254 256 3 -7. -229.5 5.5 0.09 255 257 3 -9. -236. 6. 0.09 256 258 3 -9.5 -243.5 7. 0.09 257 259 3 -11. -250. 6.5 0.09 258 260 3 -9.5 -264. 6.5 0.09 259 261 3 -6.5 -271. 7.5 0.09 260 262 3 -6.5 -277.5 7.5 0.09 261 263 3 -3.5 -279. 7.5 0.09 262 264 3 2. -153.5 5.5 0.09 241 265 3 6. -158.5 6. 0.09 264 266 3 9.5 -166.5 6.5 0.09 265 267 3 10.5 -169.5 8. 0.09 266 268 3 7.5 -175. 7.5 0.049 267 269 3 7. -180. 6.5 0.049 268 270 3 6. -186. 6.5 0.049 269 271 3 3.5 -190.5 6. 0.049 270 272 3 2. -198.5 5. 0.049 271 273 3 3. -203.5 5. 0.049 272 274 3 3.5 -210. 5. 0.049 273 275 3 2.5 -223. 5. 0.049 274 276 3 -2.5 -231.5 5. 0.049 275 277 3 -4.5 -238. 6.5 0.049 276 278 3 -6. -239.5 6.5 0.049 277 279 3 13. -173. 7. 0.09 267 280 3 18.5 -179. 7. 0.09 279 281 3 21. -180.5 7. 0.049 280 282 3 22.5 -184. 7. 0.049 281 283 3 23. -187. 6. 0.049 282 284 3 13. -118.5 -1.5 0.15 232 285 3 13. -122.5 -1.5 0.15 284 286 3 14.5 -124.5 -1.5 0.15 285 287 3 15. -128. -0.5 0.15 286 288 3 16. -130.5 0.5 0.15 287 289 3 16. -135. -0.5 0.15 288 290 3 14.5 -141.5 0.5 0.15 289 291 3 15.5 -146. 0. 0.15 290 292 3 15. -149.5 0. 0.15 291 293 3 16. -153. 0.5 0.15 292 294 3 16. -155.5 1. 0.15 293 295 3 15. -161. 1. 0.15 294 296 3 14. -165.5 1. 0.15 295 297 3 15. -168. 1.5 0.15 296 298 3 15.5 -176. 1.5 0.15 297 299 3 15.5 -181.5 1.5 0.15 298 300 3 31.5 -29.5 4. 0.45 193 301 3 34. -30. 4. 0.45 300 302 3 38. -30.5 5. 0.4 301 303 3 39.5 -32. 7.5 0.4 302 304 3 38. -33.5 6.5 0.4 303 305 3 39. -35. 7. 0.350 304 306 3 42. -34.5 6.5 0.25 305 307 3 43.5 -36.5 7.5 0.25 306 308 3 45.5 -39.5 7. 0.15 307 309 3 45.5 -42. 7. 0.15 308 310 3 49. -44. 7. 0.15 309 311 3 49.5 -47.5 7. 0.15 310 312 3 51.5 -48. 7. 0.15 311 313 3 52.5 -51.5 7. 0.15 312 314 3 57.5 -56. 7. 0.15 313 315 3 59. -58. 7.5 0.15 314 316 3 62.5 -56.5 7.5 0.15 315 317 3 64.5 -57.5 7.5 0.15 316 318 3 69. -58. 7.5 0.15 317 319 3 75. -58.5 8.5 0.15 318 320 3 82.5 -58.5 8.5 0.15 319 321 3 93. -56. 9.5 0.15 320 322 3 99. -53.5 9.5 0.15 321 323 3 106. -51. 9.5 0.15 322 324 3 110. -49.5 9.5 0.15 323 325 3 117. -49. 9.5 0.15 324 326 3 120.5 -49.5 9.5 0.2 325 327 3 123.5 -48.5 9.5 0.15 326 328 3 126. -51. 12. 0.15 327 329 3 127. -54. 12.5 0.09 328 330 3 132. -56.5 12.5 0.09 329 331 3 137. -57.5 12. 0.09 330 332 3 139. -60. 11. 0.09 331 333 3 141.5 -62.5 11. 0.09 332 334 3 144. -67. 12. 0.09 333 335 3 143.5 -74.5 12.5 0.09 334 336 3 145. -78.5 12.5 0.09 335 337 3 146.5 -82. 11.5 0.09 336 338 3 150. -85.5 11. 0.09 337 339 3 152. -89. 11. 0.09 338 340 3 154.5 -94. 12.5 0.09 339 341 3 47. -36.5 8.5 0.15 307 342 3 52.5 -36. 10.5 0.09 341 343 3 55. -36. 9.5 0.09 342 344 3 59.5 -36. 9.5 0.09 343 345 3 61. -39. 9.5 0.09 344 346 3 63. -41. 9.5 0.09 345 347 3 65. -45.5 9.5 0.09 346 348 3 68.5 -46.5 8.5 0.049 347 349 3 70. -49. 8.5 0.049 348 350 3 68.5 -52. 10. 0.049 349 351 3 68.5 -55.5 8.5 0.049 350 352 3 71.5 -61.5 10. 0.049 351 353 3 76.5 -62.5 9. 
0.049 352 brian-1.3.1/brian/experimental/multilinearstateupdater.py000066400000000000000000000212531167451777000237460ustar00rootroot00000000000000from brian import * from brian.stateupdater import get_linear_equations_solution_numerically, get_linear_equations from scipy import linalg, weave import numpy import inspect from itertools import count import re __all__ = ['MultiLinearStateUpdater', 'get_multilinear_state_updater', 'MultiLinearNeuronGroup'] __all__ = ['MultiLinearStateUpdater', 'get_multilinear_state_updater', 'MultiLinearNeuronGroup'] def get_multilinear_matrices(eqs, subs, clock=None): ''' Returns the matrices M and B for the linear model dX/dt = M(X-B), where eqs is an Equations object. ''' if clock is None: clock = guess_clock() nsubs = len(subs[subs.keys()[0]]) # Otherwise assumes it is given in functional form n = len(eqs._diffeq_names) # number of state variables dynamicvars = eqs._diffeq_names # Calculate B AB = zeros((n, 1, nsubs)) d = dict.fromkeys(dynamicvars) for j in range(n): if dynamicvars[j] in subs: d[dynamicvars[j]] = subs[j] else: d[dynamicvars[j]] = 0. * eqs._units[dynamicvars[j]] for var, i in zip(dynamicvars, count()): AB[i, 0, :] = -eqs.apply(var, d) # Calculate A M = zeros((n, n, nsubs)) for i in range(n): for j in range(n): if dynamicvars[j] in subs: d[dynamicvars[j]] = subs[j] else: d[dynamicvars[j]] = 0. * eqs._units[dynamicvars[j]] if dynamicvars[i] in subs: d[dynamicvars[i]] = subs[i] else: if isinstance(eqs._units[dynamicvars[i]], Quantity): d[dynamicvars[i]] = Quantity.with_dimensions(1., eqs._units[dynamicvars[i]].get_dimensions()) else: d[dynamicvars[i]] = 1. for var, j in zip(dynamicvars, count()): M[j, i, :] = eqs.apply(var, d) + AB[j] allM = M allAB = AB AiBi = [] for i in xrange(nsubs): try: M = reshape(allM[:, :, i], allM.shape[:2]) AB = reshape(allAB[:, :, i], allAB.shape[:2]) C = linalg.solve(M, AB) A = linalg.expm(M * clock.dt) B = -dot(A, C) + C except LinAlgError: numeulersteps = 100 deltat = clock.dt / numeulersteps E = eye(n) + deltat * M C = eye(n) D = zeros((n, 1)) for step in xrange(numeulersteps): C, D = dot(E, C), dot(E, D) - AB * deltat A, B = C, D AiBi.append((A, B)) return AiBi def get_multilinear_state_updater(eqs, subs, level=0, clock=None): ''' Make a multilinear state updater Arguments: ``eqs`` should be the equations, and must be a string not an :class:`Equations` object. ``subs`` A dictionary of arrays, each key k is a variable name, each value is an equally sized array of values it takes, this array should have the size of the number of neurons in the group. ``level`` How many levels up to look for the equations' namespace. ``clock`` If you want. ''' neweqs = Equations(eqs, level=level + 1) neweqs.prepare() AiBi = get_multilinear_matrices(neweqs, subs, clock=clock) n = AiBi[0][0].shape[0] nsubs = len(subs[subs.keys()[0]]) A = numpy.zeros((n, n, nsubs)) B = numpy.zeros((n, nsubs)) for i, (Ai, Bi) in enumerate(AiBi): A[:, :, i] = Ai B[:, i] = Bi.squeeze() return MultiLinearStateUpdater(A, B) class MultiLinearStateUpdater(StateUpdater): ''' A StateUpdater with one differential equation for each neuron Initialise with: ``A``, ``B`` Arrays, see below. The update step for neuron i is S[:,i] = dot(A[:,:,i], S[:,i]) + B[:, i]. 
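
    A pure-NumPy reference for a single update step (a sketch only; A is
    assumed to have shape (n, n, N) and S, B shape (n, N); with a recent
    enough NumPy the loop is equivalent to
    S[:] = numpy.einsum('ijk,jk->ik', A, S) + B)::

        for i in range(S.shape[1]):
            S[:, i] = numpy.dot(A[:, :, i], S[:, i]) + B[:, i]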
''' def __init__(self, A, B=None): self.A = A self.B = B self._useaccel = get_global_preference('useweave') self._cpp_compiler = get_global_preference('weavecompiler') self._extra_compile_args = ['-O3'] if self._cpp_compiler == 'gcc': self._extra_compile_args += get_global_preference('gcc_options') # ['-march=native', '-ffast-math'] def __call__(self, P): if self._useaccel: n = len(P) m = len(self) S = P._S A = self.A B = self.B code = ''' double x[m]; for(int i=0;i=len(self.offsets): return array([], dtype=int) return self.I[self.offsets[t]:self.offsets[t+1]] ########### AER loading stuff ###################### def load_multiple_AER(filename, check_sorted = False, relative_time = False, directory = '.'): f=open(filename,'rb') line = f.readline() res = [] line = line.strip('\n') while not line == '': res.append(load_AER(os.path.join(directory, line), check_sorted = check_sorted, relative_time = relative_time)) line = f.readline() f.close() return res def load_AER(filename, check_sorted = False, relative_time = True): ''' Loads AER data files for use in Brian. Returns a list containing tuples with a vector of addresses and a vector of timestamps (ints, unit is usually usecond). It can load any kind of .dat, or .aedat files. Note: For index files (that point to multiple .(ae)dat files) it will return a list containing tuples as for single files. Keyword Arguments: If check_sorted is True, checks if timestamps are sorted, and sort them if necessary. If relative_time is True, it will set the first spike time to zero and all others relatively to that precise time (avoid negative timestamps, is definitely a good idea). Hence to use those data files in Brian, one should do: addr, timestamp = load_AER(filename, relative_time = True) G = AERSpikeGeneratorGroup((addr, timestamps)) ''' l = filename.split('.') ext = l[-1].strip('\n') filename = filename.strip('\n') directory = os.path.dirname(filename) if ext == 'aeidx': #AER data points to different AER files return load_multiple_AER(filename, check_sorted = check_sorted, relative_time = relative_time, directory = directory) elif not (ext == 'dat' or ext == 'aedat'): raise ValueError('Wrong extension for AER data, should be dat, or aedat, it was '+ext) # This is inspired by the following Matlab script: # http://jaer.svn.sourceforge.net/viewvc/jaer/trunk/host/matlab/loadaerdat.m?revision=2001&content-type=text%2Fplain f=open(filename,'rb') version=1 # default (if not found in the file) # Skip header and look for version number line = f.readline() while line[0] == '#': if line[:9] == "#!AER-DAT": version = int(float(line[9:-1])) line = f.readline() line += f.read() f.close() if version==1: print 'Loading version 1 file '+filename ''' Format is: sequence of (addr = 2 bytes,timestamp = 4 bytes) Number format is big endian ('>') ''' ## This commented paragraph is the non-vectorized version #nevents=len(line)/6 #for n in range(nevents): # events.append(unpack('>HI',line[n*6:(n+1)*6])) # address,timestamp x=fromstring(line, dtype=int16) # or uint16? 
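        # Each version-1 event is 6 bytes, i.e. three 16-bit words: the first
        # word holds the address, and the remaining two words are reinterpreted
        # below as a single big-endian 32-bit timestamp.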
x=x.reshape((len(x)/3,3)) addr=x[:,0].newbyteorder('>') timestamp=x[:,1:].copy() timestamp.dtype=int32 timestamp=timestamp.newbyteorder('>').flatten() else: # version==2 print 'Loading version 2 file '+filename ''' Format is: sequence of (addr = 4 bytes,timestamp = 4 bytes) Number format is big endian ('>') ''' ## This commented paragraph is the non-vectorized version #nevents=len(line)/8 #for n in range(nevents): # events.append(unpack('>II',line[n*8:(n+1)*8])) # address,timestamp x = fromstring(line, dtype=int32).newbyteorder('>') addr = x[::2] if len(addr) == len(x[1::2]): timestamp = x[1::2] else: print """It seems there was a problem with the AER file, timestamps and addr don't have the same length!""" timestamp = x[1::2] if check_sorted: # Sorts the events if necessary if any(diff(timestamp)<0): # not sorted ind = argsort(timestamp) addr,timestamp = addr[ind],timestamp[ind] if (timestamp<0).all(): print 'Negative timestamps' if relative_time: t0 = min(timestamp) timestamp -= t0 return addr,timestamp HEADER = """#!AER-DAT2.0\n# This is a raw AE data file - do not edit\n# Data format is int32 address, int32 timestamp (8 bytes total), repeated for each event\n# Timestamps tick is 1 us\n# created with the Brian simulator on """ def save_AER(spikemonitor, f): ''' Saves the SpikeMonitor's contents to a file in aedat format. File should have 'aedat' extension. One can specify an open file, or, alternatively the filename as a string. Usage: save_AER(spikemonitor, file) ''' if isinstance(spikemonitor, SpikeMonitor): spikes = spikemonitor.spikes else: spikes = spikemonitor if isinstance(f, str): strinput = True f = open(f, 'wb') l = f.name.split('.') if not l[-1] == 'aedat': raise ValueError('File should have aedat extension') header = HEADER header += str(datetime.datetime.now()) + '\n' f.write(header) # i,t=zip(*spikes) for (i,t) in spikes: addr = struct.pack('>i', i) f.write(addr) time = struct.pack('>i', int(ceil(float(t/usecond)))) f.write(time) if strinput: f.close() class AERSpikeMonitor(FileSpikeMonitor): """Records spikes to an AER file Initialised as:: FileSpikeMonitor(source, filename[, record=False]) Does everything that a :class:`SpikeMonitor` does except ONLY records the spikes to the named file in AER format. Has one additional method: ``close_file()`` Closes the file manually (will happen automatically when the program ends). """ def __init__(self, source, filename, record=False, delay=0): super(FileSpikeMonitor, self).__init__(source, record, delay) self.filename = filename self.f = open(filename, 'w') header = HEADER header += str(datetime.datetime.now()) + '\n' self.f.write(header) def propagate(self, spikes): # super(AERSpikeMonitor, self).propagate(spikes) # TODO do it better, no struct.pack! check numpy doc for # addr = array(spikes).newbyteorder('>') for i in spikes: addr = struct.pack('>i', i) self.f.write(addr) time = struct.pack('>i', int(ceil(float(self.source.clock.t/usecond)))) self.f.write(time) ########### AER addressing stuff ###################### def extract_DVS_event(addr): ''' Extracts retina event from an address or a vector of addresses. 
Chip: Digital Vision Sensor (DVS) http://siliconretina.ini.uzh.ch/wiki/index.php Returns: x, y, polarity (ON/OFF: 1/-1) ''' retina_size=128 xmask = 0xfE # x are 7 bits (64 cols) ranging from bit 1-8 ymask = 0x7f00 # y are also 7 bits xshift=1 # bits to shift x to right yshift=8 # bits to shift y to right polmask=1 # polarity bit is LSB x = retina_size - 1 - ((addr & xmask) >> xshift) y = (addr & ymask) >> yshift pol = 1 - 2*(addr & polmask) # 1 for ON, -1 for OFF return x,y,pol def extract_AMS_event(addr): ''' Extracts cochlea event from an address or a vector of addresses Chip: Silicon Cochlea (AMS) Returns: side, channel, filternature More precisely: side: 0 is left, 1 is right channel: apex (LF) is 63, base (HF) is 0 filternature: 0 is lowpass, 1 is bandpass ''' # Reference: # ch.unizh.ini.jaer.chip.cochlea.CochleaAMSNoBiasgen.Extractor in the jAER package (look in the javadoc) # also the cochlea directory in jAER/host/matlab has interesting stuff # the matlab code was used to write this function. I don't understand the javadoc stuff #cochlea_size = 64 xmask = 31 # x are 5 bits 32 channels) ranging from bit 1-5 ymask = 32 # y (one bit) determines left or right cochlea xshift=0 # bits to shift x to right yshift=5 # bits to shift y to right channel = 1 + ((addr & xmask) >> xshift) side = (addr & ymask) >> yshift lpfBpf = mod(addr, 2) # leftRight = mod(addr, 4) return (lpfBpf, side, channel) if __name__=='__main__': path=r'C:Users\Romain\Desktop\jaerSampleData\DVS128' filename=r'\Tmpdiff128-2006-02-03T14-39-45-0800-0 tobi eye.dat' addr,timestamp=load_AER(path+filename) brian-1.3.1/brian/experimental/neuromorphic/__init__.py000066400000000000000000000001031167451777000232330ustar00rootroot00000000000000from realtime import * from AER import * from spikequeue import *brian-1.3.1/brian/experimental/neuromorphic/example_AER.py000066400000000000000000000026441167451777000236320ustar00rootroot00000000000000''' This example demonstrates how to load AER data files and use them as a SpikeGeneratorGroup in Brian For more information either check the documentation in the package, post to the google group, or send me an email victor.benichoux@ens.fr ''' from brian import * from brian.experimental.neuromorphic import * filename = '/path/to/file' # support .aedat and .aeidx files addr, timestamps = load_AER(filename) # load the data # addr contains a list of addresses # timestamps the list of spike times (the first spike comes in at t = 0, but you may change that # Note: In the case of an aeidx file, the value returned is a list of tuples (addr, timestamp) group = AERSpikeGeneratorGroup((addr, timestamps))# Create an AER group # at that point you can do whatever you want, it is a regular Brian neuron group # * Events addressing * # # We provide with two event extraction functions that fetch the # specialized addresses of the DVS retina and the AMS cochlea. # For example: (x, y, pol) = extract_DVS_event(addr) # returns the x and y positions of the neuron, alongside the polarity # (on/off) of the event # See the documentation for more on this. 
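# The silicon cochlea counterpart works the same way; note that, as
# implemented in extract_AMS_event, the values come back in the order:
# (lpfBpf, side, channel) = extract_AMS_event(addr)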
M = SpikeMonitor(group)# monitor it run(group.maxtime) # this group has an additional attribute maxtime # that gives you the last spike time # plot it raster_plot(M) show() # Additionally you can save any SpikeMonitor to and aedat file format as follows save(M, './dummy.aedat') brian-1.3.1/brian/experimental/neuromorphic/notes.txt000066400000000000000000000012531167451777000230220ustar00rootroot00000000000000Current state ------------- AER.py: * AERSpikeGeneratorGroup: see with Dan for integrating in SpikeGeneratorGroup? * FastDCThreshold: other name? * load_AER: ok * load_multiple_AER: ok, hidden * save_AER: mapping, vectorisation, argument possible (addr,timestamp) * extract_DVS_event: ok * extract_AMS_event: ok realtime.py: -> network.py spikequeue.py: unused Others/TODO: * SpikeMonitor.save -> AER? (see with Dan+Tobi). Pb possible avec dt=1 µs. idea: SpikeMonitor.save(filename,dt=1*us) see with PyNN guys et al? * Send UDP spikes to jAER * Docs * (someday) online spike input via AERSpikeGeneratorGroup (use SpikeContainer) * A few examples brian-1.3.1/brian/experimental/neuromorphic/realtime.py000066400000000000000000000037401167451777000233100ustar00rootroot00000000000000""" Real-time Brian See BEP-20-AER interface """ from brian.network import NetworkOperation from brian.clock import EventClock from brian.stdunits import ms from time import sleep,time __all__=['RealtimeController'] class RealtimeController(NetworkOperation): ''' Pauses are inserted so that Brian runs always between real time and about 50 ms ahead of it (probably a bit more). Principle: every 50 ms, so we insert a sleep() to synchronize. Typical use:: R=RealtimeController() run(1*second) R.reinit() # will resynchronise real time at next run run(2*second) ''' def __init__(self,dt=50*ms,verbose=False): ''' dt is the resynchronisation period ''' self.clock = EventClock(dt=dt) # this could be a global preference self.when = 'end' self.first_time=True self.verbose=verbose # for debugging def __call__(self): # First time: synchronise Brian and real time if self.first_time: # stores the start time self.synchronise() self.first_time=False real_time=time()+self.offset if self.verbose: print self.clock.t,real_time if self.clock._treal_time: # synchronize real time and clock sleep(self.clock._t-real_time) def synchronise(self): ''' Sets the offset to the real clock so that Brian and real time are synchronised. ''' self.offset=self.clock._t-time() def reinit(self): ''' The clock synchronise with real time at the next call. ''' self.first_time=True if __name__=='__main__': from brian import * R=RealtimeController(dt=100*ms,verbose=True) run(3*second) sleep(1*second) R.reinit() # try to comment it! run(2*second) brian-1.3.1/brian/experimental/neuromorphic/spikequeue.py000066400000000000000000000064051167451777000236670ustar00rootroot00000000000000""" SpikeQueue This is really a sketch. """ #from brian.utils.circular import SpikeContainer #from brian.neurongroup import NeuronGroup #from brian.directcontrol import SpikeGeneratorGroup from brian import * from numpy import unique from time import time __all__=['SpikeQueue'] class SpikeQueue(NeuronGroup): ''' A group that sends spikes like SpikeGeneratorGroup, but in a vectorised way, forgetting past events. Initialised with a vector of spike times and a vector of corresponding neuron indices. 
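
    A rough usage sketch (untested; the values are made up, N is the number
    of neurons, spiketimes a sorted array of times and neurons the matching
    array of neuron indices)::

        spiketimes = array([1.0, 2.0, 2.0, 5.0]) * ms
        neurons = array([0, 3, 1, 2])
        G = SpikeQueue(4, spiketimes, neurons)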
''' def __init__(self, N, spiketimes, neurons, clock=None, check_sorted=True): clock = guess_clock(clock) NeuronGroup.__init__(self, N, model=LazyStateUpdater(), clock=clock) # Check if spike times are sorted if check_sorted: # Sorts the events if necessary if any(diff(spiketimes)<0): # not sorted ind=argsort(spiketimes) neurons,spiketimes=neurons[ind],spiketimes[ind] # Create the spike queue self.set_max_delay(max(spiketimes)) # This leaves space for the next spikes # Push the spikes self.LS.push(neurons) # This takes a bit of time (not sure why but this could be enhanced) # Set the cursors back self.LS.ind.advance(-1) self.LS.S.cursor=self.LS.ind[0] # Discretize spike times and make them relative to current time step spiketimes=array((spiketimes-clock.t)/clock.dt,dtype=int) # in units of dt # Calculate indices of spike groups u,indices=unique(spiketimes,return_index=True) # determine repeated time indices # Build vector of indices with -1 meaning: same index as previously x=-ones(max(u)+2) # maximum time index x[-1]=len(spiketimes) # last entry ## This is vectorized yet incredibly inefficient #x[u]=indices #empty_ind=where(x<0)[0] # time bins with no spikes # This is really slow: #x[empty_ind]=indices[digitize(empty_ind,u)] # -1 are replaced with next positive entry # As a loop (This now takes about 30% of the whole construction time): # Perhaps it could be written in C x[u]=indices for i in where(x<0)[0][::-1]: # -1 are replaced with next positive entry x[i]=x[i+1] # x[0] is always 0; maybe this should be dropped self.LS.ind[0:len(x)]=x self.LS.ind[len(x)]=-1 # no more spike at that point self._stopped=False # True when no more spike def reinit(self): super(SpikeQueueGroup, self).reinit() def push_spike_times(self,spiketimes,neurons): ''' Inserts future spike times For this we need to store the end position of future spikes ''' pass def update(self): # LS.S contains the data (neurons that spike) # LS.ind is a circular vector with pointers to locations in LS.S, # one for each future time bin if (self.LS.ind[1]>=0) & (not self._stopped): ns=self.LS.ind[1]-self.LS.ind[0] # number of spikes in next bin self.LS.S.advance(ns) else: self._stopped=True self.LS.ind[1]=self.LS.ind[0] self.LS.ind.advance(1) brian-1.3.1/brian/experimental/neuromorphic/test_AERSpikeMonitor.py000066400000000000000000000011761167451777000255210ustar00rootroot00000000000000from brian import * from brian.experimental.neuromorphic import * defaultclock.dt = 1*ms runtime = 100*ms g = PoissonGroup(100, 100*Hz) #g = SpikeGeneratorGroup(1, [(0,t*ms) for t in range(int(runtime/defaultclock.dt))]) Maer = AERSpikeMonitor(g, './dummy.aedat') run(100*ms) Maer.close_file() addr, timestamps = load_AER('./dummy.aedat') if len(addr) == len(M.spikes): print 'looks good' else: print 'glub' print 'addr, M.spikes',len(addr),len(M.spikes) # Check for NewSpikeGeneratorGroup N = max(addr) group = NewSpikeGeneratorGroup(N, (addr, timestamps), time_unit = 1*usecond) plot(timestamps, addr, '.') show() brian-1.3.1/brian/experimental/neuromorphic/test_newspikegengroup.py000066400000000000000000000004621167451777000261370ustar00rootroot00000000000000from brian import * from brian.experimental.neuromorphic import * defaultclock.dt = 1*ms runtime = 100*ms addr, timestamps = load_AER('./dummy.aedat') N = max(addr) group = NewSpikeGeneratorGroup(N, (addr, timestamps), time_unit = 1*usecond) M = SpikeMonitor(group) run(100*ms) raster_plot(M) show() 
brian-1.3.1/brian/experimental/neuromorphic/test_struct_vs_np.py000066400000000000000000000007031167451777000252720ustar00rootroot00000000000000from brian import * import struct x = array(rand(10)*1000, dtype = int16) print 'x',x x_n = x.newbyteorder('>') print 'x_n',x_n s_struct = '' for v in x_n: s_struct += struct.pack('>i', v) s_np =x_n.tostring() if s_np == s_struct: print 'yay' else: print s_np print s_struct xre=fromstring(s_struct, dtype=int16) # or uint16? xre = xre[:,0].newbyteorder('>') print 'xre', xre if (x == xre).all(): print 'YAY' brian-1.3.1/brian/experimental/new_c_propagate.py000066400000000000000000000462761167451777000221440ustar00rootroot00000000000000''' NOTES: General scheme for propagation code is something like: Iterate over spikes (spike index i): Set source neuron index j=spikes[i] Load required source neuron variables for neuron index j Iterate over row j of W: Set target neuron index k Load weight variable Load required target neuron variables for neuron index k Execute propagation code Functionally: iterate_over_spikes('j', 'spikes', (load_required_variables('j', {'modulation':modulation_var}), iterate_over_row('k', 'w', W, 'j', (load_required_variables('k', {'V':V}), transform_code('V += w*modulation') ) ) ) ) With some options like: * load_required_variables('j', {}) * iterate_over_row('k', 'w', W, 'j', delayvec=delayvec, delayvar='delay', ...) * load_required_variables_delayedreaction('k', {'V':{'dr':dr, 'cdi':cdi, 'idt':idt, 'md':md}}) We could also have: * iterate_over_col('k', 'w', W, 'j', ...) And STDP could be implemented as follows, for pre spikes: iterate_over_spikes('j', 'spikes', (load_required_variables('j', {'A_pre':A_pre}), """ A_pre += dA_pre """, iterate_over_row('k', 'w', W, 'j', (load_required_variables('k', {'A_post':A_post}), transform_code('w += A_post') ) ) ) ) And for post spikes: iterate_over_spikes('j', 'spikes', (load_required_variables('j', {'A_post':A_post}), transform_code('A_post += dA_post'), iterate_over_col('k', 'w', W, 'j', (load_required_variables('k', {'A_pre':A_pre}), transform_code('w += A_pre') ) ) ) ) To do STDP with delays, we need also: * load_required_variables_pastvalue('k', {'A_post':A_post_monitor}) Maybe for future-proofing STDP against having multiple per-synapse variables in the future, and generally having per-synapse dynamics and linked matrices which we want to jointly iterate over, could improve iterate_over_row/col to iterate over several synaptic variables with the same underlying matrix structure as the main weight matrix. Then, instead of having delayvec=delayvec, delayvar='delay' as a special case, we'd have a list of additional linked matrices. TODO: * Have ConnectionCode object feature a C and Python code and namespace to be executed. The Python namespace can be used for grabbing some parameters such as the _cdi variables that currently have to be set in by hand. 
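
Roughly, the first propagation example above is meant to expand, for a sparse
weight matrix, into generated C code of the following shape (a sketch only,
with the underscore-prefixed temporaries slightly simplified):

    for(int _spike_index=0; _spike_index<spikes_len; _spike_index++)
    {
        const int j = spikes[_spike_index];
        double &modulation = modulation__array[j];
        for(int _p=_rowind[j]; _p<_rowind[j+1]; _p++)
        {
            int k = _allj[_p];
            double &w = _alldata[_p];
            double &V = V__array[k];
            V += w*modulation;
        }
    }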
'''

if __name__ == '__main__':
    from brian import *
    from brian.utils.documentation import *
    from brian.experimental.codegen.c_support_code import *
else:
    from ..connections import Connection, DelayConnection, MultiConnection, \
         DenseConnectionMatrix, DenseConstructionMatrix, \
         SparseConnectionMatrix, SparseConstructionMatrix, \
         DynamicConnectionMatrix, DynamicConstructionMatrix
    from ..log import log_debug, log_warn, log_info
    from ..utils.documentation import flattened_docstring
    from ..globalprefs import get_global_preference
    from codegen.c_support_code import *
import numpy
from scipy import weave
import new

__all__ = ['make_new_connection',
           'expand_code', 'transform_code',
           'iterate_over_spikes', 'load_required_variables',
           'load_required_variables_delayedreaction',
           'load_required_variables_pastvalue',
           'iterate_over_row', 'iterate_over_col',
           'ConnectionCode',
           ]


class ConnectionCode(object):
    def __init__(self, codestr, vars=None, pycodestr=None, pyvars=None):
        if pyvars is None:
            pyvars = {}
        if pycodestr is None:
            pycodestr = ''
        if vars is None:
            vars = {}
        self.vars = vars
        self.codestr = codestr
        if len([line for line in pycodestr.split('\n') if line]) > 1:
            pycodestr = flattened_docstring(pycodestr)
        else:
            pycodestr = pycodestr.strip()
        self.pycodestr = pycodestr
        self.pyvars = pyvars
        self.prepared = False
        self._weave_compiler = get_global_preference('weavecompiler')
        self._extra_compile_args = ['-O3']
        if self._weave_compiler == 'gcc':
            self._extra_compile_args += get_global_preference('gcc_options') # ['-march=native', '-ffast-math']

    def prepare(self):
        self.pyvars['vars'] = self.vars
        self.vars['_spikes'] = None
        self.vars['_spikes_len'] = None
        self.vars_list = self.vars.keys()
        if len(self.pycodestr):
            self.compiled_pycode = compile(self.pycodestr, 'ConnectionCode', 'exec')
        else:
            self.compiled_pycode = None
        self.prepared = True

    def __call__(self, _spikes):
        if not self.prepared:
            self.prepare()
        if len(_spikes):
            if not isinstance(_spikes, numpy.ndarray):
                _spikes = array(_spikes, dtype=int)
            vars = self.vars
            # print '****'
            # print self.codestr
            # for k, v in vars.iteritems():
            #     if isinstance(v, numpy.ndarray):
            #         print k, ': shape =', v.shape
            #     else:
            #         print k, ':', v
            # import sys
            # sys.stdout.flush()
            vars['_spikes'] = _spikes
            vars['_spikes_len'] = len(_spikes)
            if self.compiled_pycode is not None:
                exec self.compiled_pycode in self.pyvars
            weave.inline(self.codestr, self.vars_list,
                         local_dict=self.vars,
                         support_code=c_support_code,
                         compiler=self._weave_compiler,
                         extra_compile_args=self._extra_compile_args)

    def __str__(self):
        s = 'C code:\n'
        spaces = 0
        for line in self.codestr.split('\n'):
            if line.strip():
                if '}' in line:
                    spaces -= 4
                s += ' ' * spaces + line.strip() + '\n'
                if '{' in line:
                    spaces += 4
        s += 'Python code:\n'
        s += self.pycodestr
        return s
    __repr__ = __str__


def expand_code(code):
    if isinstance(code, ConnectionCode):
        return code
    elif isinstance(code, (tuple, list)):
        codestr = '\n'.join([expand_code(c).codestr for c in code])
        vars = {}
        pyvars = {}
        for c in code:
            vars.update(c.vars)
            pyvars.update(c.pyvars)
        pycodestr = '\n'.join([expand_code(c).pycodestr for c in code])
        return ConnectionCode(codestr, vars, pycodestr, pyvars)
    else:
        raise TypeError('ConnectionCode should be string or tuple')


def transform_code(codestr, vars=None):
    # TODO: replace with something more sophisticated than this
    return ConnectionCode('\n'.join(line + ';' for line in codestr.split('\n') if line.strip()),
                          vars)


def iterate_over_spikes(neuron_index, spikes, code):
    outcode = '''
    for(int _spike_index=0; _spike_index<%SPIKES_LEN%; _spike_index++)
    {
        const int %NEURON_INDEX% = %SPIKES%[_spike_index];
        %CODE%
    }
    '''
    code = expand_code(code)
    outcode = outcode.replace('%SPIKES%', spikes)
    outcode = outcode.replace('%SPIKES_LEN%', spikes + '_len')
    outcode = outcode.replace('%NEURON_INDEX%', neuron_index)
    outcode = outcode.replace('%CODE%', code.codestr)
    return ConnectionCode(outcode, code.vars, code.pycodestr, code.pyvars)


def load_required_variables(neuron_index, neuron_vars):
    vars = {}
    codestr = ''
    for k, v in neuron_vars.iteritems():
        vars[k + '__array'] = v
        codestr += 'double &' + k + ' = ' + k + '__array[' + neuron_index + '];\n'
    return ConnectionCode(codestr, vars)


def load_required_variables_delayedreaction(neuron_index, delay, delay_index, neuron_var, C):
    vars = {}
    codestr = 'double &%Z% = _dr[((%CDI%+(int)(_idt*%D%))%_md)*%N%+%I%];'
    codestr = codestr.replace('%CDI%', delay_index)
    codestr = codestr.replace('%Z%', neuron_var)
    codestr = codestr.replace('%D%', delay)
    codestr = codestr.replace('%N%', '_num_target_neurons')
    codestr = codestr.replace('%I%', neuron_index)
    vars['_num_target_neurons'] = len(C.target)
    vars['_dr'] = C._delayedreaction
    vars[delay_index] = None # filled in by Python code (below)
    vars['_idt'] = C._invtargetdt
    vars['_md'] = C._max_delay
    pyvars = {'conn':C}
    pycodestr = "vars['_cdi'] = conn._cur_delay_ind"
    return ConnectionCode(codestr, vars, pycodestr, pyvars)


def load_required_variables_pastvalue(neuron_index, time, neuron_vars):
    vars = {}
    pyvars = {}
    pycodestr = ''
    codestr = ''
    for k, M in neuron_vars.iteritems():
        vars[k + '__values'] = M._values
        vars[k + '__arraylen'] = M._values.shape[1]
        vars[k + '__cti'] = None # current_time_index filled in by propagation function
        vars[k + '__idt'] = M._invtargetdt
        vars[k + '__nd'] = M.num_duration
        pyvars[k + '__RecentStateMonitor'] = M
        pycodestr += "vars['%k%__cti'] = %k%__RecentStateMonitor.current_time_index\n".replace('%k%', k)
        newcodestr = 'double &%var% = %var%__values[((%var%__nd+%var%__cti-1-(int)(%var%__idt*%time%))%%var%__nd)*%var%__arraylen+%i%];\n'
        newcodestr = newcodestr.replace('%var%', k)
        newcodestr = newcodestr.replace('%time%', time)
        newcodestr = newcodestr.replace('%i%', neuron_index)
        codestr += newcodestr
    return ConnectionCode(codestr, vars, pycodestr, pyvars)


def iterate_over_row(target_index, weight_variable, weight_matrix, source_index, code, extravars={}):
    code = expand_code(code)
    vars = {}
    vars.update(code.vars)
    if isinstance(weight_matrix, DenseConnectionMatrix):
        outcode = '''
        double *_weight_arr_row = _weight_arr+%SOURCEINDEX%*_num_target_neurons;
        for(int %TARGETINDEX%=0; %TARGETINDEX%<_num_target_neurons; %TARGETINDEX%++)
        {
            double &%WEIGHT% = _weight_arr_row[%TARGETINDEX%];
            %EXTRAVARS%
            %CODE%
        }
        '''
        extravarscode = ''
        for k, v in extravars.iteritems():
            extracodetmp = 'double &%V% = %V%__array[%TARGETINDEX%+%SOURCEINDEX%*_num_target_neurons];'
            extracodetmp = extracodetmp.replace('%V%', k)
            extracodetmp = extracodetmp.replace('%TARGETINDEX%', target_index)
            extracodetmp = extracodetmp.replace('%SOURCEINDEX%', source_index)
            vars[k + '__array'] = numpy.asarray(v)
            extravarscode += extracodetmp
        outcode = outcode.replace('%EXTRAVARS%', extravarscode)
        vars['_weight_arr'] = numpy.asarray(weight_matrix)
        vars['_num_target_neurons'] = weight_matrix.shape[1]
    elif isinstance(weight_matrix, SparseConnectionMatrix):
        outcode = '''
        for(int _p=_rowind[%SOURCEINDEX%]; _p<_rowind[%SOURCEINDEX%+1]; _p++)
        {
            int %TARGETINDEX% = _allj[_p];
            double &%WEIGHT% = _alldata[_p];
            %EXTRAVARS%
            %CODE%
        }
        '''
        extravarscode = ''
        for k, v in extravars.iteritems():
            extracodetmp = 'double &%V% = %V%__alldata[_p];'
            extracodetmp = extracodetmp.replace('%V%', k)
            vars[k + '__alldata'] = v.alldata
            extravarscode += extracodetmp
        outcode = outcode.replace('%EXTRAVARS%', extravarscode)
        vars['_rowind'] = weight_matrix.rowind
        vars['_allj'] = weight_matrix.allj
        vars['_alldata'] = weight_matrix.alldata
    elif isinstance(weight_matrix, DynamicConnectionMatrix):
        # TODO: support dynamic matrix structure
        # the best way to support dynamic matrix type would be to
        # reorganise dynamic matrix data structure. Ideally, it should consist
        # of numpy arrays only. Maybe some sort of linked list structure?
        # Otherwise, we can use code that accesses the Python lists, but it's
        # probably less efficient (maybe this is anyway not a big issue with
        # the dynamic matrix type?)
        raise TypeError('Dynamic matrix not supported.')
    else:
        raise TypeError('Must be dense/sparse/dynamic matrix.')
    outcode = outcode.replace('%TARGETINDEX%', target_index)
    outcode = outcode.replace('%SOURCEINDEX%', source_index)
    outcode = outcode.replace('%WEIGHT%', weight_variable)
    outcode = outcode.replace('%CODE%', code.codestr)
    return ConnectionCode(outcode, vars, code.pycodestr, code.pyvars)


def iterate_over_col(source_index, weight_variable, weight_matrix, target_index, code, extravars={}):
    code = expand_code(code)
    vars = {}
    vars.update(code.vars)
    if isinstance(weight_matrix, DenseConnectionMatrix):
        outcode = '''
        double *_weight_arr_row = _weight_arr+%TARGETINDEX%;
        for(int %SOURCEINDEX%=0; %SOURCEINDEX%<_num_source_neurons; %SOURCEINDEX%++)
        {
            double &%WEIGHT% = _weight_arr_row[%SOURCEINDEX%*_num_target_neurons];
            %EXTRAVARS%
            %CODE%
        }
        '''
        extravarscode = ''
        for k, v in extravars.iteritems():
            extracodetmp = 'double &%V% = %V%__array[%SOURCEINDEX%+%TARGETINDEX%*_num_target_neurons];'
            extracodetmp = extracodetmp.replace('%V%', k)
            extracodetmp = extracodetmp.replace('%SOURCEINDEX%', source_index)
            extracodetmp = extracodetmp.replace('%TARGETINDEX%', target_index)
            vars[k + '__array'] = numpy.asarray(v)
            extravarscode += extracodetmp
        outcode = outcode.replace('%EXTRAVARS%', extravarscode)
        vars['_weight_arr'] = numpy.asarray(weight_matrix)
        vars['_num_source_neurons'] = weight_matrix.shape[0]
        vars['_num_target_neurons'] = weight_matrix.shape[1]
    elif isinstance(weight_matrix, SparseConnectionMatrix):
        outcode = '''
        for(int _q=_colind[%TARGETINDEX%]; _q<_colind[%TARGETINDEX%+1]; _q++)
        {
            int _p = _allcoldataindices[_q];
            int %SOURCEINDEX% = _colalli[_q];
            double &%WEIGHT% = _alldata[_p];
            %EXTRAVARS%
            %CODE%
        }
        '''
        extravarscode = ''
        for k, v in extravars.iteritems():
            extracodetmp = 'double &%V% = %V%__alldata[_p];'
            extracodetmp = extracodetmp.replace('%V%', k)
            vars[k + '__alldata'] = v.alldata
            extravarscode += extracodetmp
        outcode = outcode.replace('%EXTRAVARS%', extravarscode)
        vars['_colind'] = weight_matrix.colind
        vars['_colalli'] = weight_matrix.colalli
        vars['_allcoldataindices'] = weight_matrix.allcoldataindices
        vars['_alldata'] = weight_matrix.alldata
    elif isinstance(weight_matrix, DynamicConnectionMatrix):
        # TODO: support dynamic matrix structure
        # the best way to support dynamic matrix type would be to
        # reorganise dynamic matrix data structure. Ideally, it should consist
        # of numpy arrays only. Maybe some sort of linked list structure?
        # Otherwise, we can use code that accesses the Python lists, but it's
        # probably less efficient (maybe this is anyway not a big issue with
        # the dynamic matrix type?)
        raise TypeError('Dynamic matrix not supported.')
    else:
        raise TypeError('Must be dense/sparse/dynamic matrix.')
    outcode = outcode.replace('%SOURCEINDEX%', source_index)
    outcode = outcode.replace('%TARGETINDEX%', target_index)
    outcode = outcode.replace('%WEIGHT%', weight_variable)
    outcode = outcode.replace('%CODE%', code.codestr)
    return ConnectionCode(outcode, vars, code.pycodestr, code.pyvars)

# TODO:
# * TEST iterate_over_col
# * TEST load_required_variables_pastvalue


def generate_connection_code(C):
    modulation = C._nstate_mod is not None
    delay = isinstance(C, DelayConnection)
    if not modulation and not delay:
        code = iterate_over_spikes('_j', '_spikes',
                   iterate_over_row('_k', 'w', C.W, '_j',
                       (load_required_variables('_k', {'V':C.target.state(C.nstate)}),
                        transform_code('V += w'))))
    elif modulation and not delay:
        code = iterate_over_spikes('_j', '_spikes',
                   (load_required_variables('_j', {'modulation':C.source.state(C._nstate_mod)}),
                    iterate_over_row('_k', 'w', C.W, '_j',
                        (load_required_variables('_k', {'V':C.target.state(C.nstate)}),
                         transform_code('V += w*modulation')))))
    elif not modulation and delay:
        code = iterate_over_spikes('_j', '_spikes',
                   iterate_over_row('_k', 'w', C.W, '_j',
                       extravars={'_delay':C.delayvec},
                       code=(load_required_variables_delayedreaction('_k', '_delay', '_cdi', 'V', C),
                             transform_code('V += w'))))
    elif modulation and delay:
        code = iterate_over_spikes('_j', '_spikes',
                   (load_required_variables('_j', {'modulation':C.source.state(C._nstate_mod)}),
                    iterate_over_row('_k', 'w', C.W, '_j',
                        extravars={'_delay':C.delayvec},
                        code=(load_required_variables_delayedreaction('_k', '_delay', '_cdi', 'V', C),
                              transform_code('V += w*modulation')))))
    else:
        raise TypeError('Not supported.')
    return code


def new_propagate(self, _spikes):
    if not self.iscompressed:
        self.compress()
    if not hasattr(self, '_connection_code'):
        self._connection_code = generate_connection_code(self)
        log_warn('brian.experimental.new_c_propagate',
                 'Using new C based propagation function.')
        log_debug('brian.experimental.new_c_propagate',
                  'C based propagation code:\n' + str(self._connection_code))
    if len(_spikes):
        self._connection_code(_spikes)


def make_new_connection(C):
    if C.__class__ is MultiConnection:
        for c in C.connections:
            make_new_connection(c)
    if C.__class__ is Connection or C.__class__ is DelayConnection:
        if C.W.__class__ is SparseConnectionMatrix or \
           C.W.__class__ is DenseConnectionMatrix or \
           C.W.__class__ is SparseConstructionMatrix or \
           C.W.__class__ is DenseConstructionMatrix:
            C.propagate = new.instancemethod(new_propagate, C, C.__class__)


if __name__ == '__main__':
    log_level_debug()
    structure = 'sparse'
    delay = 0 * ms  # 0*ms or True
    modulation = False
    G = NeuronGroup(1, 'V:1\nmod:1', reset=0, threshold=1)
    G.V = 2
    G.mod = 5
    H = NeuronGroup(10, 'V:1')
    if modulation:
        modulation = 'mod'
    else:
        modulation = None
    C = Connection(G, H, 'V', structure=structure, delay=delay, modulation=modulation)
    C[0, :] = linspace(0, 1, 10)
    if delay:
        C.delay[0, :] = linspace(0, 1, 10) * ms
    M = StateMonitor(H, 'V', record=True)
    make_new_connection(C)
    run(2 * ms)
    M.plot()
    legend()
    show()

brian-1.3.1/brian/experimental/oi24rpy1.CNG.swc
# Original file oi24rpy1.asc.swc edited by Duncan Donohue using StdSwc version 1.21 on 8/22/05.
# Irregularities and fixes documented in oi24rpy1.asc.swc.std. See StdSwc1.21.doc for more information.
#
# Neurolucida to SWC conversion from L-Measure. R.
Scorcioni: rscorcio@gmu.edu # Original fileName:C:\Documents and Settings\Admin\Desktop\eysel\oi24rpy1.asc # # ORIGINAL_SOURCE # CREATURE # REGION # FIELD/LAYER # TYPE # CONTRIBUTOR # REFERENCE # RAW # EXTRAS # SOMA_AREA # SHRINKAGE_CORRECTION # VERSION_NUMBER # VERSION_DATE # ********************************************* # SCALE 1.0 1.0 1.0 1 1 -21910.8 -7388.66 216.957 10.485 -1 2 4 -21911.8 -7390.21 232.92 3.22 1 3 4 -21912.1 -7388.21 236.59 2.725 2 4 4 -21912.8 -7385.5 239.71 2.355 3 5 4 -21913 -7384.13 247.16 1.86 4 6 4 -21913.1 -7383.92 247.21 1.86 5 7 4 -21913.7 -7382.31 250 1.86 6 8 4 -21913.7 -7382.43 250.69 1.86 7 9 4 -21914.4 -7380.22 248.55 1.86 8 10 4 -21914.6 -7378.42 248.87 1.86 9 11 4 -21918.9 -7372.62 250.23 1.985 10 12 4 -21919.6 -7371.91 250.41 1.985 11 13 4 -21920.5 -7369.79 251.08 1.985 12 14 4 -21920.5 -7369.83 251.31 1.985 13 15 4 -21920.4 -7369.68 253.82 1.985 14 16 4 -21921 -7368.54 260.46 1.985 15 17 4 -21921 -7368.59 260.76 1.985 16 18 4 -21921.9 -7366.92 260.97 1.985 17 19 4 -21921.7 -7366.84 259.37 1.985 18 20 4 -21921.4 -7367.07 259.3 1.735 19 21 4 -21921 -7369.1 258.93 1.61 20 22 4 -21921.1 -7368.88 258.98 1.61 21 23 4 -21920.6 -7369.83 258.77 1.61 22 24 4 -21920.1 -7371 259.89 1.61 23 25 4 -21919.3 -7371.39 261.27 1.485 24 26 4 -21918.9 -7371.87 261.15 1.365 25 27 4 -21918.3 -7372.27 261.03 1.24 26 28 4 -21917.3 -7372.9 260.83 1.24 27 29 4 -21916.8 -7374.15 260.58 1.24 28 30 4 -21917 -7374.91 260.46 1.115 29 31 4 -21916.7 -7376.02 260.87 0.865 30 32 4 -21916.4 -7378.19 263.6 0.865 31 33 4 -21916.5 -7379.25 263.43 0.745 32 34 4 -21916.8 -7380.85 263.19 0.745 33 35 4 -21916.3 -7383.44 262.73 0.745 34 36 4 -21915.4 -7385.42 262.31 0.745 35 37 4 -21914.5 -7387.1 261.95 0.745 36 38 4 -21914 -7387.84 263.33 0.745 37 39 4 -21913.5 -7389.03 263.08 0.745 38 40 4 -21912.5 -7389.92 262.84 0.62 39 41 4 -21913.1 -7391.34 262.66 0.62 40 42 4 -21912.6 -7392.06 262.5 0.62 41 43 4 -21912.3 -7392.02 262.48 0.62 42 44 4 -21911.5 -7393.15 262.22 0.62 43 45 4 -21911.5 -7393.41 262.18 0.62 44 46 4 -21911.9 -7394.8 264.68 0.62 45 47 4 -21909.4 -7394.81 264.44 0.495 46 48 4 -21907.4 -7396.44 267.61 0.495 47 49 4 -21906.2 -7398.4 267.71 0.495 48 50 4 -21904.5 -7399.26 269.57 0.495 49 51 4 -21904.3 -7400.4 270.01 0.495 50 52 4 -21902.7 -7401.73 272.87 0.495 51 53 4 -21900.8 -7403.4 273.41 0.495 52 54 4 -21899 -7403.27 275.98 0.495 53 55 4 -21899.3 -7403.29 276 0.495 54 56 4 -21898.3 -7403.89 275.93 0.495 55 57 4 -21897.3 -7404.45 276.79 0.495 56 58 4 -21897.2 -7404.57 277.49 0.495 57 59 4 -21895.6 -7405.77 281.39 0.495 58 60 4 -21895.5 -7405.82 281.68 0.495 59 61 4 -21894.5 -7408.02 287.01 0.495 60 62 4 -21894.4 -7408.14 287.76 0.495 61 63 4 -21892.8 -7409.86 291.47 0.495 62 64 4 -21891 -7410.34 294.48 0.495 63 65 4 -21889.4 -7412.21 297.29 0.495 64 66 4 -21889.4 -7412.23 297.4 0.495 65 67 4 -21888.2 -7413.25 299.56 0.495 66 68 4 -21887.9 -7413.17 299.55 0.495 67 69 4 -21886.1 -7414.67 300.58 0.495 68 70 4 -21884.5 -7416.79 304.78 0.495 69 71 4 -21883.6 -7418.6 304.97 0.495 70 72 4 -21883.3 -7418.69 305.63 0.495 71 73 4 -21881.9 -7419.84 306.01 0.495 72 74 4 -21881.6 -7419.78 306 0.495 73 75 4 -21880.3 -7421.21 307.73 0.37 74 76 4 -21878 -7422.02 307.37 0.37 75 77 4 -21878 -7422.27 307.33 0.37 76 78 4 -21876.8 -7423.72 308.7 0.37 77 79 4 -21876.7 -7423.75 308.91 0.37 78 80 4 -21874.9 -7424.23 308.89 0.37 79 81 4 -21874.8 -7424.27 309.15 0.37 80 82 4 -21872.6 -7425.56 313.41 0.37 81 83 4 -21871.7 -7427.78 312.96 0.37 82 84 4 -21870.4 -7428.68 312.69 0.37 83 85 4 -21869.5 -7431.23 
319.88 0.37 84 86 4 -21868.2 -7433.64 323.9 0.37 85 87 4 -21867.7 -7433.56 323.86 0.37 86 88 4 -21866.4 -7434.96 324.84 0.37 87 89 4 -21866.9 -7436.07 324.7 0.37 88 90 4 -21865.8 -7438.34 330.44 0.37 89 91 4 -21864.3 -7438.36 330.36 0.37 90 92 4 -21862.9 -7440.24 334.22 0.37 91 93 4 -21860.7 -7443.62 345.87 0.37 92 94 4 -21859.8 -7443.4 347.22 0.37 93 95 4 -21858.4 -7444.14 348.2 0.37 94 96 4 -21858.3 -7444.26 348.93 0.37 95 97 4 -21856.1 -7445.62 348.86 0.37 96 98 4 -21852.7 -7446.83 349.44 0.37 97 99 4 -21852.3 -7447.07 349.39 0.37 98 100 4 -21850.6 -7448.64 348.98 0.37 99 101 4 -21850.3 -7448.3 349.01 0.37 100 102 4 -21843.7 -7445.64 349.24 0.37 101 103 4 -21842.2 -7447.19 348.85 0.37 102 104 4 -21840 -7449.83 352.09 0.37 103 105 4 -21837.5 -7451.18 351.63 0.37 104 106 4 -21835.8 -7452.13 351.33 0.25 105 107 4 -21834.2 -7453.14 351 0.25 106 108 4 -21832.4 -7453.3 350.93 0.25 107 109 4 -21830.6 -7454 350.78 0.25 108 110 4 -21830.6 -7453.72 350.83 0.25 109 111 4 -21829.3 -7453.19 350.9 0.25 110 112 4 -21826.4 -7454.25 352.15 0.25 111 113 4 -21826.5 -7453.96 352.2 0.25 112 114 4 -21824.5 -7454.08 352 0.25 113 115 4 -21824.2 -7454.05 351.97 0.25 114 116 4 -21820.6 -7454.85 351.51 0.25 115 117 4 -21815.7 -7457.56 350.6 0.25 116 118 4 -21811.8 -7458.46 350.73 0.25 117 119 4 -21811.1 -7458.79 350.61 0.25 118 120 4 -21808.3 -7460.33 350.09 0.25 119 121 4 -21807.3 -7460.69 349.94 0.25 120 122 4 -21807 -7460.66 349.92 0.25 121 123 4 -21801.6 -7461.89 350.73 0.25 122 124 4 -21796.4 -7462.1 350.2 0.25 123 125 4 -21912.1 -7397.22 264.29 0.62 46 126 4 -21911.1 -7397.82 264.1 0.62 125 127 4 -21911.1 -7399.42 263.84 0.62 126 128 4 -21911.8 -7400.36 263.75 0.62 127 129 4 -21910.2 -7401.39 263.42 0.62 128 130 4 -21910.2 -7401.6 263.38 0.62 129 131 4 -21910.1 -7403.25 263.11 0.62 130 132 4 -21909.7 -7405.15 263.7 0.62 131 133 4 -21910 -7405.23 263.71 0.62 132 134 4 -21909.3 -7407.23 263.51 0.62 133 135 4 -21909.3 -7407.26 263.65 0.62 134 136 4 -21909.6 -7408.3 260.27 0.495 135 137 4 -21909.5 -7410.22 259.94 0.495 136 138 4 -21909 -7411.43 259.69 0.495 137 139 4 -21908.8 -7412.01 257.3 0.495 138 140 4 -21908.7 -7412.27 257.25 0.495 139 141 4 -21908.4 -7413.95 256.14 0.495 140 142 4 -21908.5 -7415.32 255.91 0.495 141 143 4 -21907.6 -7417.24 255.48 0.495 142 144 4 -21907.8 -7418.92 255.22 0.495 143 145 4 -21907.3 -7420.4 254.93 0.495 144 146 4 -21907 -7420.58 254.87 0.495 145 147 4 -21905.9 -7421.7 254.58 0.495 146 148 4 -21904.8 -7421.46 254.52 0.495 147 149 4 -21905.2 -7422.14 254.45 0.495 148 150 4 -21905 -7423.67 254.17 0.495 149 151 4 -21905 -7424.72 254.01 0.37 150 152 4 -21904.9 -7425.5 253.86 0.37 151 153 4 -21904.5 -7425.73 253.79 0.37 152 154 4 -21904.3 -7425.92 253.73 0.37 153 155 4 -21904 -7427.21 253.49 0.37 154 156 4 -21904.1 -7428.53 253.28 0.37 155 157 4 -21903.9 -7429.06 253.18 0.37 156 158 4 -21903.4 -7430.78 252.85 0.37 157 159 4 -21903 -7431.27 252.72 0.37 158 160 4 -21902.7 -7431.48 252.66 0.37 159 161 4 -21902.3 -7431.92 252.55 0.37 160 162 4 -21901.9 -7432.42 252.44 0.37 161 163 4 -21901.8 -7432.64 252.4 0.37 162 164 4 -21901.7 -7433.4 252.26 0.37 163 165 4 -21901.6 -7434.2 252.11 0.37 164 166 4 -21901.5 -7434.43 252.07 0.37 165 167 4 -21901.5 -7435.05 251.11 0.37 166 168 4 -21901.2 -7435.29 251.04 0.37 167 169 4 -21900.7 -7436.24 250.83 0.37 168 170 4 -21900.8 -7437.34 250.66 0.37 169 171 4 -21900.9 -7437.88 250.59 0.37 170 172 4 -21901.1 -7437.88 248.85 0.37 171 173 4 -21900.3 -7439.06 248.58 0.37 172 174 4 -21900.2 -7439.54 248.49 0.37 173 175 4 -21900.2 -7441.15 248.22 0.37 174 176 
4 -21900.2 -7442.24 248.05 0.37 175 177 4 -21899.7 -7443.14 249.13 0.37 176 178 4 -21899.2 -7444.62 248.65 0.37 177 179 4 -21899.1 -7444.88 248.6 0.37 178 180 4 -21898.9 -7446.11 248.06 0.37 179 181 4 -21898.8 -7446.59 247.8 0.37 180 182 4 -21898.7 -7447.34 247.6 0.37 181 183 4 -21898.7 -7447.61 247.55 0.37 182 184 4 -21898 -7448.29 247.37 0.37 183 185 4 -21897.8 -7448.8 247.28 0.37 184 186 4 -21897.6 -7448.74 247.26 0.37 185 187 4 -21897.6 -7450.07 247 0.37 186 188 4 -21897.6 -7451.99 245.51 0.37 187 189 4 -21897.7 -7451.7 245.49 0.37 188 190 4 -21897.6 -7453.52 245 0.37 189 191 4 -21897.6 -7453.49 244.82 0.37 190 192 4 -21897.7 -7454.35 242.05 0.37 191 193 4 -21897.2 -7456.09 244.25 0.37 192 194 4 -21897.3 -7457.43 242.75 0.37 193 195 4 -21897.3 -7457.56 241.81 0.37 194 196 4 -21896.8 -7460.35 241.3 0.37 195 197 4 -21896.5 -7460.56 241.23 0.37 196 198 4 -21896.3 -7461.6 241.04 0.37 197 199 4 -21896.3 -7461.88 241 0.37 198 200 4 -21897.4 -7463.7 240.8 0.37 199 201 4 -21897.3 -7463.92 240.75 0.37 200 202 4 -21896.3 -7464.81 240.51 0.37 201 203 4 -21895.9 -7465.24 240.41 0.37 202 204 4 -21895.7 -7465.19 240.39 0.37 203 205 4 -21895.1 -7466.7 240.09 0.37 204 206 4 -21895 -7467.2 240 0.37 205 207 4 -21894.9 -7467.49 239.94 0.37 206 208 4 -21894.9 -7467.72 239.9 0.37 207 209 4 -21894.1 -7469.16 239.59 0.37 208 210 4 -21894.1 -7469.38 239.55 0.37 209 211 4 -21893.8 -7470.69 239.31 0.37 210 212 4 -21894 -7471.28 239.22 0.37 211 213 4 -21894.4 -7471.88 239.17 0.37 212 214 4 -21894.6 -7472.2 239.14 0.37 213 215 4 -21894.8 -7472.77 239.06 0.37 214 216 4 -21894.8 -7473 239.01 0.37 215 217 4 -21895 -7473.59 238.94 0.37 216 218 4 -21895.1 -7474.66 238.77 0.37 217 219 4 -21895 -7474.94 238.72 0.37 218 220 4 -21894.6 -7475.62 238.38 0.37 219 221 4 -21894.3 -7475.83 238.31 0.37 220 222 4 -21894.2 -7476.09 238.26 0.37 221 223 4 -21893.7 -7477.06 238.06 0.37 222 224 4 -21893.8 -7478.4 237.84 0.37 223 225 4 -21893.7 -7478.87 237.76 0.37 224 226 4 -21893.5 -7480.21 237.51 0.37 225 227 4 -21893.3 -7480.97 237.38 0.37 226 228 4 -21893.2 -7481.46 237.28 0.37 227 229 4 -21893.1 -7482.22 237.14 0.37 228 230 4 -21893 -7482.76 237.05 0.37 229 231 4 -21892.9 -7483.29 236.95 0.37 230 232 4 -21893.2 -7483.56 236.93 0.37 231 233 4 -21893 -7484.08 236.83 0.37 232 234 4 -21892.8 -7485.39 236.59 0.37 233 235 4 -21892.5 -7485.29 236.59 0.37 234 236 4 -21892.1 -7486.03 236.42 0.37 235 237 4 -21891.7 -7486.73 236.27 0.37 236 238 4 -21891.5 -7487.28 236.16 0.37 237 239 4 -21891.4 -7488.32 235.98 0.37 238 240 4 -21891 -7489.25 235.78 0.37 239 241 4 -21889.4 -7490.89 237.31 0.37 240 242 4 -21889.3 -7491.44 237.21 0.37 241 243 4 -21889 -7491.59 237.16 0.37 242 244 4 -21887.7 -7494.03 236.63 0.37 243 245 4 -21885.3 -7496.03 232.07 0.37 244 246 4 -21885.2 -7496.29 232.02 0.37 245 247 4 -21884.5 -7498.82 231.53 0.37 246 248 4 -21884.3 -7499.59 231.39 0.37 247 249 4 -21883.9 -7501.9 230.96 0.37 248 250 4 -21884.1 -7502.16 230.94 0.37 249 251 4 -21884.7 -7503.64 230.75 0.37 250 252 4 -21884.5 -7504.45 230.6 0.37 251 253 4 -21884 -7506.41 230.22 0.37 252 254 4 -21883.6 -7506.66 230.14 0.37 253 255 4 -21883.5 -7506.9 230.1 0.37 254 256 4 -21883.4 -7508.72 231.36 0.37 255 257 4 -21883.3 -7509.24 231.27 0.37 256 258 4 -21883.1 -7511.46 231.53 0.37 257 259 4 -21883.1 -7511.7 231.49 0.37 258 260 4 -21883.1 -7514.1 232.59 0.37 259 261 4 -21883 -7514.64 232.49 0.37 260 262 4 -21882.5 -7515.73 232.98 0.37 261 263 4 -21882 -7516.57 233.52 0.37 262 264 4 -21882 -7516.82 233.47 0.37 263 265 4 -21881.5 -7517.54 233.32 0.37 264 266 4 -21881.1 
-7518.36 233.85 0.37 265 267 4 -21880.9 -7520.73 238.15 0.37 266 268 4 -21880.5 -7522.78 237.78 0.37 267 269 4 -21880.1 -7523.8 237.57 0.37 268 270 4 -21879.8 -7525.05 237.33 0.37 269 271 4 -21880.1 -7526.05 228.97 0.37 270 272 4 -21880.4 -7527.7 228.73 0.25 271 273 4 -21881.4 -7529.74 228.48 0.25 272 274 4 -21881.2 -7531.06 228.25 0.25 273 275 4 -21882 -7532.8 228.03 0.25 274 276 4 -21908.7 -7409.33 266.5 0.495 135 277 4 -21907.3 -7410.9 266.1 0.495 276 278 4 -21906.9 -7412.95 265.73 0.495 277 279 4 -21907.1 -7413.25 265.7 0.495 278 280 4 -21908.3 -7414.57 265.6 0.495 279 281 4 -21907.8 -7416.22 269.21 0.495 280 282 4 -21907.7 -7418.84 271.48 0.495 281 283 4 -21907.3 -7420.31 272.46 0.495 282 284 4 -21906.6 -7423.85 273.13 0.495 283 285 4 -21906.6 -7424.17 273.2 0.495 284 286 4 -21905.8 -7425.96 275.02 0.495 285 287 4 -21905.2 -7428.2 277.14 0.495 286 288 4 -21905.3 -7430.33 278.42 0.495 287 289 4 -21905.9 -7432.97 278.59 0.495 288 290 4 -21905.6 -7433.2 278.76 0.495 289 291 4 -21905.3 -7433.22 279.06 0.495 290 292 4 -21905.3 -7433.29 279.49 0.495 291 293 4 -21905.7 -7434.89 280.32 0.495 292 294 4 -21905.4 -7436.47 280.36 0.495 293 295 4 -21905.1 -7436.74 280.62 0.495 294 296 4 -21904.3 -7437.37 280.45 0.495 295 297 4 -21904.3 -7437.72 280.96 0.495 296 298 4 -21903.4 -7438.5 282.82 0.495 297 299 4 -21903.4 -7438.69 282.79 0.495 298 300 4 -21903.2 -7439.31 280.15 0.495 299 301 4 -21903.1 -7439.87 280.05 0.495 300 302 4 -21903.1 -7440.92 279.88 0.495 301 303 4 -21903.1 -7441.04 280.59 0.495 302 304 4 -21902.2 -7443.41 282.53 0.495 303 305 4 -21902.1 -7443.63 283.9 0.495 304 306 4 -21902.7 -7447.15 285.9 0.495 305 307 4 -21903.3 -7448.85 284.13 0.495 306 308 4 -21904 -7450.3 283.95 0.495 307 309 4 -21903.7 -7451.29 283.76 0.495 308 310 4 -21903.8 -7452.41 283.59 0.495 309 311 4 -21904.3 -7454.38 283.31 0.495 310 312 4 -21905 -7454.6 281.02 0.495 311 313 4 -21904.7 -7455.08 283.93 0.495 312 314 4 -21904.4 -7456.62 283.8 0.495 313 315 4 -21904.4 -7458.26 283.53 0.495 314 316 4 -21904.4 -7460.48 285.21 0.495 315 317 4 -21904 -7463.23 288.53 0.495 316 318 4 -21905.2 -7466.08 288.17 0.495 317 319 4 -21905.2 -7467.71 287.9 0.495 318 320 4 -21905 -7468.23 287.8 0.495 319 321 4 -21905 -7469.78 287.54 0.495 320 322 4 -21904.9 -7470.86 287.35 0.495 321 323 4 -21903.6 -7471.89 289.93 0.495 322 324 4 -21903.6 -7472.14 289.88 0.495 323 325 4 -21902.3 -7474.54 289.1 0.495 324 326 4 -21901.8 -7476 288.81 0.495 325 327 4 -21901.5 -7475.96 288.79 0.495 326 328 4 -21900.9 -7477.71 288.44 0.495 327 329 4 -21901.5 -7479.18 288.25 0.495 328 330 4 -21900.1 -7480.47 287.91 0.495 329 331 4 -21900.2 -7480.95 288.88 0.495 330 332 4 -21900.1 -7481.47 288.79 0.495 331 333 4 -21900.1 -7482.72 289.71 0.495 332 334 4 -21900.1 -7482.97 289.66 0.495 333 335 4 -21899.5 -7484.25 289.62 0.495 334 336 4 -21899.4 -7484.77 289.52 0.495 335 337 4 -21899.2 -7486.3 289.24 0.495 336 338 4 -21898.7 -7486.75 289.13 0.495 337 339 4 -21897.8 -7488.43 288.77 0.495 338 340 4 -21897.8 -7488.7 288.72 0.495 339 341 4 -21896 -7491.7 285.97 0.495 340 342 4 -21894.3 -7495.52 291.98 0.37 341 343 4 -21891.8 -7502.15 291.89 0.37 342 344 4 -21891.7 -7502.42 293.48 0.37 343 345 4 -21889.9 -7508.74 293.77 0.37 344 346 4 -21889.5 -7509.11 294.85 0.37 345 347 4 -21887.8 -7513.08 294.45 0.37 346 348 4 -21887.8 -7513.36 294.4 0.37 347 349 4 -21886.9 -7516.67 293.77 0.37 348 350 4 -21886.2 -7518.7 293.39 0.37 349 351 4 -21886.1 -7519.19 293.3 0.37 350 352 4 -21922.8 -7364.84 263.55 1.61 18 353 4 -21924.2 -7361.92 262.7 1.365 352 354 4 -21925.5 -7358.19 
263.44 1.365 353 355 4 -21926 -7355.6 266.82 1.24 354 356 4 -21926.8 -7353.38 267.22 1.24 355 357 4 -21926.5 -7352.68 272.62 1.115 356 358 4 -21928.5 -7352.07 271.92 0.99 357 359 4 -21930.2 -7350.55 272.33 0.99 358 360 4 -21932.5 -7348.59 272.87 0.99 359 361 4 -21934.8 -7345.83 273.54 0.99 360 362 4 -21936.5 -7344.27 273.96 0.865 361 363 4 -21937.9 -7343.25 272.71 0.865 362 364 4 -21937.9 -7342.97 272.76 0.865 363 365 4 -21938.5 -7342.88 272.83 0.865 364 366 4 -21940.3 -7342.13 274.41 0.62 365 367 4 -21941.1 -7342.23 274.47 0.62 366 368 4 -21941.9 -7342.25 281.05 0.62 367 369 4 -21941.8 -7342.26 281.17 0.62 368 370 4 -21943.3 -7342.01 281.49 0.62 369 371 4 -21943.3 -7342.03 281.64 0.62 370 372 4 -21944.7 -7341.47 284.74 0.495 371 373 4 -21946.1 -7341.26 284.91 0.495 372 374 4 -21946.9 -7340.56 287.74 0.495 373 375 4 -21947 -7340.29 287.79 0.495 374 376 4 -21947.8 -7339.74 289.79 0.495 375 377 4 -21949.5 -7339.6 290.4 0.495 376 378 4 -21949.9 -7339.74 293.62 0.495 377 379 4 -21951.3 -7339.46 295.44 0.495 378 380 4 -21952.9 -7339.35 297.55 0.495 379 381 4 -21954.7 -7338.9 297.82 0.495 380 382 4 -21954.7 -7338.92 298 0.495 381 383 4 -21956.1 -7338.53 298.95 0.495 382 384 4 -21957.4 -7338.34 301.14 0.495 383 385 4 -21958.9 -7338.43 301.61 0.495 384 386 4 -21958.8 -7338.5 302.05 0.495 385 387 4 -21960.7 -7338.14 301.04 0.495 386 388 4 -21960.6 -7338.26 301.79 0.495 387 389 4 -21961.9 -7338.88 305.68 0.495 388 390 4 -21961.8 -7339.03 306.59 0.495 389 391 4 -21963.5 -7338.81 303.56 0.62 390 392 4 -21965.7 -7339.31 303.85 0.62 391 393 4 -21967 -7339.47 304.92 0.495 392 394 4 -21966.9 -7339.57 305.48 0.495 393 395 4 -21967.8 -7339.6 309.31 0.495 394 396 4 -21967.8 -7339.65 309.61 0.495 395 397 4 -21969.5 -7339.74 310.06 0.495 396 398 4 -21969.5 -7339.77 310.27 0.495 397 399 4 -21971.2 -7340.04 314.62 0.495 398 400 4 -21972.4 -7339.74 314.79 0.495 399 401 4 -21974.5 -7339.8 317.39 0.495 400 402 4 -21975.4 -7339.47 317.56 0.495 401 403 4 -21976.2 -7338.96 320.3 0.495 402 404 4 -21978.1 -7338.66 322.32 0.495 403 405 4 -21979.5 -7338.35 325.32 0.495 404 406 4 -21979.4 -7338.41 325.69 0.495 405 407 4 -21980.8 -7338.1 327.15 0.495 406 408 4 -21980.8 -7337.99 328.12 0.495 407 409 4 -21981.9 -7337.81 328.93 0.495 408 410 4 -21983.5 -7337.36 332.51 0.495 409 411 4 -21984.5 -7337.23 335.21 0.495 410 412 4 -21987.3 -7334.73 335.88 0.495 411 413 4 -21990.9 -7333.8 336.37 0.495 412 414 4 -21993.1 -7333.41 338.29 0.495 413 415 4 -21993.1 -7333.44 338.44 0.495 414 416 4 -21996.2 -7332.36 338.99 0.37 415 417 4 -21997.8 -7331.34 339.54 0.37 416 418 4 -22000.1 -7330.13 343.09 0.37 417 419 4 -22003.1 -7327.37 344.88 0.37 418 420 4 -22006.3 -7323.61 345.8 0.37 419 421 4 -22008.9 -7320.85 347.84 0.37 420 422 4 -22012.3 -7319.4 349.14 0.37 421 423 4 -22014.8 -7317.96 349.61 0.37 422 424 4 -22015.1 -7318.01 349.63 0.37 423 425 4 -22019.1 -7315.65 353.13 0.37 424 426 4 -22021.6 -7314.23 353.6 0.37 425 427 4 -22024.1 -7313.34 353.98 0.37 426 428 4 -22027.1 -7312.15 355.27 0.37 427 429 4 -22029 -7311.11 355.62 0.37 428 430 4 -22031.7 -7308.99 359.94 0.37 429 431 4 -22036.1 -7304.96 361.03 0.25 430 432 4 -22038.5 -7302.69 361.63 0.25 431 433 4 -22041.5 -7299.49 369.06 0.25 432 434 4 -21942.7 -7339.65 275.12 0.495 367 435 4 -21943.6 -7337.96 275.49 0.495 434 436 4 -21944 -7337.25 275.64 0.495 435 437 4 -21944.3 -7335.47 275.97 0.495 436 438 4 -21944.4 -7335.18 276.02 0.495 437 439 4 -21945.8 -7333.6 276.42 0.495 438 440 4 -21945.9 -7333.34 276.46 0.495 439 441 4 -21945.8 -7332.5 276.59 0.495 440 442 4 -21945.5 -7332.45 276.57 
0.495 441 443 4 -21945 -7330.54 276.84 0.495 442 444 4 -21945.1 -7329.73 276.99 0.495 443 445 4 -21945.9 -7328.83 277.21 0.495 444 446 4 -21946.2 -7328.37 277.32 0.495 445 447 4 -21946.3 -7327.87 277.41 0.495 446 448 4 -21946.6 -7326.31 277.64 0.495 447 449 4 -21947.1 -7325.36 277.84 0.495 448 450 4 -21947.2 -7325.06 277.9 0.495 449 451 4 -21947.3 -7323.79 278.13 0.495 450 452 4 -21947.4 -7323.54 278.17 0.495 451 453 4 -21948 -7322.05 278.47 0.495 452 454 4 -21948 -7321.79 278.52 0.495 453 455 4 -21948.5 -7320.84 278.72 0.495 454 456 4 -21949 -7319.6 278.97 0.495 455 457 4 -21949.5 -7318.4 279.22 0.495 456 458 4 -21950 -7317.14 279.47 0.495 457 459 4 -21950 -7316.92 279.52 0.495 458 460 4 -21950.5 -7315.91 279.73 0.495 459 461 4 -21950.4 -7315.31 279.81 0.495 460 462 4 -21950.3 -7314.23 279.98 0.495 461 463 4 -21950.3 -7313.99 280.03 0.495 462 464 4 -21949.8 -7313.63 280.04 0.495 463 465 4 -21949.9 -7313.35 280.09 0.495 464 466 4 -21951.5 -7310.56 276.62 0.495 465 467 4 -21952.7 -7310.24 276.79 0.495 466 468 4 -21952.7 -7310.04 276.82 0.495 467 469 4 -21954.8 -7309.39 277.13 0.495 468 470 4 -21956.8 -7308.13 277.52 0.495 469 471 4 -21957.2 -7307.42 277.68 0.495 470 472 4 -21958.2 -7306.27 277.94 0.495 471 473 4 -21958.5 -7306.33 277.96 0.495 472 474 4 -21959.7 -7304.51 274.14 0.495 473 475 4 -21959.8 -7303.96 274.24 0.495 474 476 4 -21960.4 -7302.28 274.64 0.37 475 477 4 -21961.8 -7301.2 274.94 0.37 476 478 4 -21962.2 -7300.23 275.14 0.37 477 479 4 -21962.5 -7298.43 275.47 0.495 478 480 4 -21963.3 -7297.53 275.69 0.495 479 481 4 -21963.6 -7295.72 276.02 0.495 480 482 4 -21964.5 -7294.2 274.48 0.495 481 483 4 -21964.6 -7293.99 274.52 0.495 482 484 4 -21965.5 -7292.34 275.22 0.495 483 485 4 -21967.2 -7290.65 271.66 0.495 484 486 4 -21967.3 -7290.11 271.76 0.495 485 487 4 -21967.4 -7289.07 271.95 0.495 486 488 4 -21967.4 -7288.84 271.99 0.495 487 489 4 -21967.3 -7286.68 272.33 0.495 488 490 4 -21968.6 -7284.93 270.8 0.495 489 491 4 -21969.3 -7284.56 270.89 0.495 490 492 4 -21969.4 -7284.03 270.84 0.495 491 493 4 -21962.2 -7286.44 269.77 0.37 492 494 4 -21962.5 -7284.45 269.07 0.37 493 495 4 -21963.3 -7282.95 269.39 0.37 494 496 4 -21963.4 -7281.94 269.57 0.37 495 497 4 -21963.6 -7280.63 269.8 0.37 496 498 4 -21963.8 -7279.07 270.08 0.37 497 499 4 -21965 -7278.11 269.91 0.37 498 500 4 -21964.2 -7276.19 268.85 0.37 499 501 4 -21963.7 -7274.31 267.96 0.37 500 502 4 -21963.1 -7273.57 265.73 0.37 501 503 4 -21962.6 -7270.88 266.13 0.37 502 504 4 -21963.4 -7268.29 264.87 0.37 503 505 4 -21963.3 -7265.8 263.21 0.37 504 506 4 -21963.2 -7264.21 263.47 0.37 505 507 4 -21964.2 -7262.35 260.16 0.37 506 508 4 -21964.7 -7260.87 260.45 0.37 507 509 4 -21965.5 -7259.35 260.77 0.37 508 510 4 -21965.9 -7257.14 258.61 0.37 509 511 4 -21965.5 -7254.36 256.99 0.37 510 512 4 -21965.3 -7252.21 257.32 0.37 511 513 4 -21964.7 -7250.01 257.63 0.37 512 514 4 -21965.7 -7247.34 257.33 0.37 513 515 4 -21964.7 -7244.75 255.48 0.25 514 516 4 -21964.8 -7243.95 255.62 0.37 515 517 4 -21952.7 -7310.96 283.76 0.495 465 518 4 -21954.1 -7309.39 284.15 0.495 517 519 4 -21956.4 -7307.73 284.64 0.495 518 520 4 -21958.4 -7306.5 289.59 0.495 519 521 4 -21959.4 -7303.77 290.14 0.495 520 522 4 -21959.7 -7303.68 290.18 0.495 521 523 4 -21960.8 -7302.19 290.53 0.495 522 524 4 -21962 -7300.59 290.91 0.495 523 525 4 -21962.4 -7300.42 290.97 0.495 524 526 4 -21964.5 -7299.5 291.32 0.495 525 527 4 -21964.8 -7299.28 291.38 0.495 526 528 4 -21965.6 -7298.07 291.66 0.37 527 529 4 -21967.3 -7296.59 292.07 0.37 528 530 4 -21969.2 -7295.59 292.4 
0.37 529 531 4 -21969.4 -7294.86 292.54 0.37 530 532 4 -21970.9 -7294.05 292.82 0.37 531 533 4 -21971.4 -7293.1 293.02 0.37 532 534 4 -21972.3 -7292.75 293.17 0.37 533 535 4 -21972.6 -7292.75 293.19 0.37 534 536 4 -21973.8 -7292.53 293.28 0.37 535 537 4 -21975 -7291.22 295.48 0.37 536 538 4 -21975.6 -7290.8 295.61 0.37 537 539 4 -21976.4 -7290.16 295.78 0.37 538 540 4 -21976.7 -7289.7 295.89 0.37 539 541 4 -21979 -7288.52 296.3 0.37 540 542 4 -21979 -7288.21 296.35 0.37 541 543 4 -21980.5 -7287.01 295.58 0.37 542 544 4 -21980.5 -7286.94 295.16 0.37 543 545 4 -21981.5 -7285.83 295.44 0.37 544 546 4 -21983.1 -7285.03 295.72 0.37 545 547 4 -21984.2 -7284.99 295.83 0.37 546 548 4 -21984.3 -7284.5 295.92 0.37 547 549 4 -21984.6 -7284.55 295.94 0.37 548 550 4 -21985.6 -7283.71 296.17 0.37 549 551 4 -21986 -7283 296.33 0.37 550 552 4 -21986.9 -7281.16 293.01 0.37 551 553 4 -21987.2 -7281.21 293.03 0.37 552 554 4 -21988.1 -7280.62 293.21 0.37 553 555 4 -21988.5 -7280.46 293.27 0.37 554 556 4 -21988.8 -7279.99 293.39 0.37 555 557 4 -21988.6 -7278.55 291.85 0.37 556 558 4 -21990.2 -7276.42 292.35 0.37 557 559 4 -21990.7 -7276.3 292.42 0.37 558 560 4 -21991 -7276.36 292.44 0.37 559 561 4 -21993 -7273.6 283.14 0.37 560 562 4 -21994.4 -7272.01 283.53 0.25 561 563 4 -21938.9 -7339.55 272.32 0.62 365 564 4 -21939.3 -7337.76 272.65 0.62 563 565 4 -21939 -7337.44 272.68 0.62 564 566 4 -21939.2 -7335.07 271.43 0.62 565 567 4 -21939.1 -7333.98 271.6 0.62 566 568 4 -21939.1 -7331.33 272.03 0.495 567 569 4 -21939 -7328.52 268.97 0.495 568 570 4 -21939 -7328.49 268.8 0.495 569 571 4 -21939.5 -7327.49 269 0.495 570 572 4 -21939.5 -7326.98 269.09 0.495 571 573 4 -21939.4 -7326.68 269.12 0.495 572 574 4 -21940 -7324.75 268.36 0.495 573 575 4 -21940 -7324.49 268.41 0.495 574 576 4 -21940.8 -7322.09 263.5 0.495 575 577 4 -21940.8 -7322.06 263.31 0.495 576 578 4 -21940.9 -7320.22 263.62 0.495 577 579 4 -21941 -7319.95 263.67 0.495 578 580 4 -21942.4 -7317.17 259.01 0.495 579 581 4 -21936.7 -7322.18 257.64 0.37 580 582 4 -21943.4 -7315.64 252.71 0.37 581 583 4 -21944.1 -7314.08 249.51 0.37 582 584 4 -21944.9 -7312.36 249.87 0.37 583 585 4 -21945.8 -7311.41 250.12 0.37 584 586 4 -21946.1 -7309.27 247.12 0.37 585 587 4 -21945.3 -7308.39 247.19 0.37 586 588 4 -21944.6 -7307.23 247.31 0.37 587 589 4 -21942.8 -7305.62 244.22 0.37 588 590 4 -21942.1 -7303.09 242.76 0.37 589 591 4 -21942.9 -7300.69 239.46 0.37 590 592 4 -21942.9 -7298.82 239.76 0.37 591 593 4 -21944 -7296.6 240.24 0.37 592 594 4 -21944.4 -7294.68 235.32 0.37 593 595 4 -21944.3 -7293.09 235.58 0.37 594 596 4 -21945.5 -7292.47 235.8 0.37 595 597 4 -21945 -7289.71 235.86 0.37 596 598 4 -21944.4 -7288.01 236.08 0.37 597 599 4 -21944.5 -7284.12 234.21 0.37 598 600 4 -21944 -7281.65 234.19 0.37 599 601 4 -21943.7 -7279.72 234.48 0.37 600 602 4 -21943.5 -7278.66 234.64 0.37 601 603 4 -21943.4 -7277.57 234.75 0.37 602 604 4 -21942.7 -7276.42 234.87 0.25 603 605 4 -21942.2 -7275.58 234.97 0.37 604 606 4 -21941.5 -7274.15 235.14 0.37 605 607 4 -21941 -7271.92 235.46 0.37 606 608 4 -21940.3 -7270.8 235.58 0.25 607 609 4 -21938.9 -7269.11 237.82 0.25 608 610 4 -21937.7 -7267.35 238 0.37 609 611 4 -21936.8 -7265.38 238.25 0.25 610 612 4 -21935.6 -7264 237.5 0.125 611 613 4 -21936.8 -7263.28 238.59 0.125 611 614 4 -21925.3 -7350.33 273.01 0.745 357 615 4 -21925.2 -7347.64 273.45 0.745 614 616 4 -21925 -7345.98 273.7 0.745 615 617 4 -21925 -7345.73 273.74 0.745 616 618 4 -21925.2 -7344.05 276.44 0.745 617 619 4 -21926.4 -7342.63 279.42 0.495 618 620 4 -21926.9 -7341.14 
279.72 0.495 619 621 4 -21927 -7340.86 279.78 0.495 620 622 4 -21927.6 -7338.8 281.25 0.495 621 623 4 -21928.4 -7337.2 282.02 0.495 622 624 4 -21928.5 -7336.96 282.07 0.495 623 625 4 -21929.2 -7335.28 285.83 0.495 624 626 4 -21929.6 -7332.98 286.35 0.495 625 627 4 -21929.7 -7332.22 286.56 0.495 626 628 4 -21930.3 -7331.01 291.5 0.495 627 629 4 -21931.7 -7329.72 291.84 0.495 628 630 4 -21932 -7329.26 291.95 0.495 629 631 4 -21934.6 -7327.37 292.51 0.495 630 632 4 -21934.4 -7325.6 296.39 0.495 631 633 4 -21935.1 -7323.58 296.79 0.495 632 634 4 -21935.2 -7323.31 296.84 0.495 633 635 4 -21935.2 -7321.78 300.33 0.495 634 636 4 -21935.5 -7320.18 300.62 0.495 635 637 4 -21935.8 -7318.73 303.93 0.495 636 638 4 -21935.9 -7318.45 303.98 0.495 637 639 4 -21936.4 -7315.88 304.45 0.495 638 640 4 -21936.3 -7314.83 304.62 0.495 639 641 4 -21934.7 -7313.94 315.03 0.495 640 642 4 -21934.1 -7312.37 320.41 0.495 641 643 4 -21934.2 -7312.08 320.46 0.495 642 644 4 -21932.2 -7310.53 324.38 0.495 643 645 4 -21930.8 -7309.33 328.13 0.495 644 646 4 -21930.9 -7309.07 328.18 0.495 645 647 4 -21928.8 -7307.47 333.59 0.495 646 648 4 -21928.8 -7307.58 334.27 0.495 647 649 4 -21928.9 -7307.65 334.27 0.495 648 650 4 -21928.3 -7306.2 334.45 0.495 649 651 4 -21927.7 -7305.02 337.83 0.495 650 652 4 -21927.7 -7304.79 337.87 0.495 651 653 4 -21926.7 -7302.54 341.78 0.495 652 654 4 -21925.5 -7298.86 347 0.495 653 655 4 -21925.2 -7294.93 353.18 0.37 654 656 4 -21925.2 -7294.95 353.32 0.37 655 657 4 -21926.1 -7292.17 361.38 0.37 656 658 4 -21926 -7292.29 362.13 0.37 657 659 4 -21926 -7290.23 367.93 0.37 658 660 4 -21925.9 -7290.48 369.47 0.37 659 661 4 -21925.9 -7288.4 374.49 0.37 660 662 4 -21925.9 -7288.42 374.66 0.37 661 663 4 -21925.7 -7285.54 381.96 0.37 662 664 4 -21924.1 -7281.87 389.05 0.37 663 665 4 -21916.6 -7279.21 388.79 0.25 664 666 4 -21915.4 -7277.19 391.14 0.25 665 667 4 -21913.7 -7274.83 399.82 0.25 666 668 4 -21913 -7272.59 400.12 0.25 667 669 4 -21910.8 -7267.82 405.17 0.125 668 670 4 -21937.5 -7311.57 305.27 0.495 640 671 4 -21936.9 -7309.87 305.5 0.495 670 672 4 -21936.9 -7306.94 305.98 0.495 671 673 4 -21937 -7306.65 306.03 0.495 672 674 4 -21936.4 -7305.02 306.25 0.495 673 675 4 -21936.5 -7304.79 306.3 0.495 674 676 4 -21935.7 -7302.96 306.52 0.495 675 677 4 -21935.5 -7302.43 306.59 0.495 676 678 4 -21935.7 -7301.38 306.79 0.495 677 679 4 -21935.5 -7300.82 306.86 0.495 678 680 4 -21934.7 -7298.82 307.51 0.495 679 681 4 -21934.3 -7297.88 308.55 0.495 680 682 4 -21934 -7297.56 308.58 0.495 681 683 4 -21933 -7297.23 312.33 0.495 682 684 4 -21932.7 -7296.93 312.36 0.495 683 685 4 -21932.6 -7296.33 312.44 0.495 684 686 4 -21932.5 -7295.01 312.65 0.495 685 687 4 -21932.3 -7294.96 312.63 0.495 686 688 4 -21930.6 -7293.03 312.8 0.495 687 689 4 -21930.3 -7292.99 312.78 0.495 688 690 4 -21929.8 -7291.56 312.96 0.495 689 691 4 -21929.5 -7290.97 313.04 0.495 690 692 4 -21929.3 -7290.65 313.08 0.495 691 693 4 -21928.9 -7290.05 313.13 0.495 692 694 4 -21929 -7289.29 313.27 0.495 693 695 4 -21928.9 -7287.72 315.46 0.495 694 696 4 -21929.5 -7286.33 316.23 0.495 695 697 4 -21929.7 -7285.27 316.42 0.495 696 698 4 -21929.7 -7284.8 316.59 0.495 697 699 4 -21929.5 -7284.26 316.76 0.495 698 700 4 -21929.5 -7282.67 317.14 0.495 699 701 4 -21929.6 -7282.21 317.36 0.495 700 702 4 -21929.4 -7281.86 317.4 0.495 701 703 4 -21929.3 -7279.26 318.2 0.495 702 704 4 -21929 -7278.93 318.24 0.495 703 705 4 -21927.9 -7276.75 319.51 0.495 704 706 4 -21926.5 -7274.85 320.95 0.495 705 707 4 -21926.6 -7274.61 321.01 0.495 706 708 4 -21924.6 
-7271.06 321.5 0.495 707 709 4 -21924.4 -7270.97 321.49 0.495 708 710 4 -21922.1 -7267.69 322.03 0.495 709 711 4 -21921.7 -7267.15 322.26 0.495 710 712 4 -21921.3 -7265.96 322.6 0.495 711 713 4 -21921.3 -7265.99 322.79 0.495 712 714 4 -21921.9 -7265.15 323.38 0.495 713 715 4 -21921.7 -7264.72 324.36 0.495 714 716 4 -21921.7 -7264.47 324.41 0.495 715 717 4 -21921.8 -7262.26 325.91 0.495 716 718 4 -21921.3 -7261.55 326.78 0.495 717 719 4 -21921.4 -7261.26 326.83 0.495 718 720 4 -21921.1 -7259.38 327.44 0.495 719 721 4 -21920.7 -7258.82 327.74 0.495 720 722 4 -21920.7 -7258.58 327.78 0.495 721 723 4 -21919.8 -7256.89 328.37 0.495 722 724 4 -21919.8 -7255.83 328.89 0.495 723 725 4 -21920.5 -7252.85 329.55 0.495 724 726 4 -21920.8 -7251.3 329.84 0.495 725 727 4 -21919.9 -7249.27 333.21 0.495 726 728 4 -21919.6 -7248.95 333.23 0.495 727 729 4 -21919.7 -7248.7 333.28 0.495 728 730 4 -21919.2 -7246.76 333.55 0.37 729 731 4 -21918.9 -7245.34 333.76 0.37 730 732 4 -21917.7 -7243.46 336.61 0.37 731 733 4 -21917.6 -7242.35 336.78 0.37 732 734 4 -21917.7 -7239.99 337.18 0.37 733 735 4 -21917.5 -7239.44 337.26 0.37 734 736 4 -21917.3 -7239.39 337.24 0.37 735 737 4 -21916.9 -7238.26 337.39 0.37 736 738 4 -21916.8 -7237.17 337.57 0.37 737 739 4 -21916.7 -7236.36 337.68 0.37 738 740 4 -21916.3 -7235.2 337.84 0.37 739 741 4 -21916.1 -7234.93 337.87 0.37 740 742 4 -21915.2 -7232.94 340.1 0.37 741 743 4 -21915 -7232.4 340.17 0.37 742 744 4 -21915.1 -7231.85 340.28 0.37 743 745 4 -21914.9 -7231.57 340.3 0.37 744 746 4 -21914.6 -7230.17 340.51 0.37 745 747 4 -21914.3 -7229.88 340.53 0.37 746 748 4 -21913.7 -7228.68 340.67 0.37 747 749 4 -21913.5 -7227.84 340.79 0.37 748 750 4 -21912.5 -7224.96 343.91 0.37 749 751 4 -21912.5 -7224.69 343.96 0.37 750 752 4 -21910.8 -7222.66 348.35 0.37 751 753 4 -21910.2 -7220.04 351.05 0.25 752 754 4 -21910.8 -7218.17 351.02 0.37 753 755 4 -21911.5 -7215.59 351.51 0.25 754 756 4 -21911.8 -7214.56 354.98 0.495 755 757 4 -21911.8 -7214.32 355.02 0.495 756 758 4 -21912.5 -7212.09 358.98 0.25 757 759 4 -21913.5 -7209.85 359.44 0.25 758 760 4 -21912.6 -7207.56 361.49 0.25 759 761 4 -21911.6 -7204.57 362.81 0.25 760 762 4 -21912.1 -7201.9 362.51 0.25 761 763 4 -21912.1 -7201.59 362.57 0.25 762 764 4 -21922.4 -7343.16 275.86 0.62 618 765 4 -21922.5 -7342.66 275.95 0.62 764 766 4 -21922.3 -7342.36 275.98 0.62 765 767 4 -21921.6 -7340.1 276.28 0.62 766 768 4 -21921.4 -7339.51 276.37 0.62 767 769 4 -21921.2 -7339.43 276.35 0.62 768 770 4 -21920.2 -7336.62 276.73 0.495 769 771 4 -21919.8 -7334.42 277.06 0.495 770 772 4 -21920.2 -7332.38 277.43 0.495 771 773 4 -21920.1 -7331.28 277.61 0.495 772 774 4 -21919.1 -7330.27 277.68 0.495 773 775 4 -21918.2 -7329.06 277.79 0.495 774 776 4 -21917.7 -7328.64 277.82 0.495 775 777 4 -21917.8 -7328.42 277.86 0.495 776 778 4 -21917.2 -7326.75 278.09 0.495 777 779 4 -21917.1 -7326.13 278.18 0.495 778 780 4 -21917 -7325.91 278.21 0.495 779 781 4 -21916.3 -7322.2 275.22 0.495 780 782 4 -21916.1 -7319.8 275.6 0.495 781 783 4 -21916 -7318.92 275.74 0.495 782 784 4 -21916.1 -7318.67 275.78 0.495 783 785 4 -21915.3 -7318.02 275.82 0.495 784 786 4 -21915.4 -7317.75 275.87 0.495 785 787 4 -21915.2 -7316.2 275.05 0.495 786 788 4 -21915.2 -7315.94 275.09 0.495 787 789 4 -21915.7 -7313.34 275.49 0.495 788 790 4 -21915.6 -7310.7 275.92 0.495 789 791 4 -21916.2 -7307.59 276.49 0.495 790 792 4 -21916 -7307.26 276.52 0.495 791 793 4 -21915.4 -7303.99 277.01 0.495 792 794 4 -21915.2 -7303.43 277.09 0.495 793 795 4 -21914 -7301.83 278.79 0.495 794 796 4 -21914.1 -7301.59 
278.84 0.495 795 797 4 -21912.9 -7298.72 279.13 0.495 796 798 4 -21912.7 -7298.37 279.17 0.495 797 799 4 -21911.3 -7295.2 279.57 0.495 798 800 4 -21911.1 -7294.6 279.65 0.495 799 801 4 -21910.2 -7293.63 279.72 0.495 800 802 4 -21910.2 -7293.12 279.82 0.495 801 803 4 -21910 -7292.81 279.84 0.495 802 804 4 -21909.1 -7289.97 280.22 0.495 803 805 4 -21908.7 -7289.15 280.33 0.495 804 806 4 -21907 -7286.08 280.67 0.495 805 807 4 -21905.6 -7284.11 281.45 0.37 806 808 4 -21904.7 -7282.29 281.67 0.37 807 809 4 -21904.6 -7282.01 281.7 0.37 808 810 4 -21903.9 -7279.2 282.1 0.37 809 811 4 -21903.7 -7278.87 282.14 0.37 810 812 4 -21903.2 -7276.92 282.41 0.37 811 813 4 -21902.5 -7275.76 282.54 0.37 812 814 4 -21901.2 -7274.79 284.6 0.37 813 815 4 -21900.3 -7273.54 284.72 0.37 814 816 4 -21899.3 -7272.3 284.84 0.37 815 817 4 -21898.2 -7270.74 285 0.37 816 818 4 -21898.1 -7269.61 285.17 0.37 817 819 4 -21897.5 -7268.16 286.36 0.37 818 820 4 -21897.6 -7267.87 286.41 0.37 819 821 4 -21896.7 -7266.37 286.57 0.37 820 822 4 -21896.5 -7265.78 286.65 0.37 821 823 4 -21895.9 -7262.75 287.1 0.37 822 824 4 -21894.9 -7261.38 289.53 0.37 823 825 4 -21894.7 -7260.81 289.65 0.37 824 826 4 -21894.4 -7259.39 289.85 0.37 825 827 4 -21893.8 -7257.71 290.08 0.37 826 828 4 -21893.3 -7257.37 290.09 0.37 827 829 4 -21892.6 -7256.69 290.13 0.37 828 830 4 -21892.4 -7256.38 290.16 0.37 829 831 4 -21892.8 -7254.28 290.55 0.37 830 832 4 -21892.5 -7254.24 290.53 0.37 831 833 4 -21891.9 -7253.03 290.67 0.37 832 834 4 -21891.1 -7252.42 290.7 0.37 833 835 4 -21890.9 -7252.05 290.74 0.37 834 836 4 -21891.2 -7250.54 291.02 0.37 835 837 4 -21890.7 -7248.8 291.25 0.37 836 838 4 -21890.2 -7248.46 291.26 0.37 837 839 4 -21889.7 -7247.83 291.32 0.37 838 840 4 -21889.5 -7247.29 291.4 0.37 839 841 4 -21889.3 -7246.96 291.43 0.37 840 842 4 -21889.6 -7245.16 291.76 0.37 841 843 4 -21889.3 -7244.85 291.79 0.37 842 844 4 -21889.4 -7244.61 291.83 0.37 843 845 4 -21888 -7242.97 291.98 0.37 844 846 4 -21887.8 -7242.47 292.04 0.37 845 847 4 -21887.6 -7242.1 292.08 0.37 846 848 4 -21887 -7239.87 295.13 0.37 847 849 4 -21886.6 -7237.87 295.42 0.37 848 850 4 -21886.5 -7236.78 295.58 0.37 849 851 4 -21886.5 -7236.04 295.72 0.37 850 852 4 -21884.7 -7230.78 300.65 0.37 851 853 4 -21884.1 -7227.56 301.49 0.37 852 854 4 -21884.4 -7224.68 301.99 0.37 853 855 4 -21884.6 -7223.64 302.18 0.37 854 856 4 -21884.2 -7220.93 302.66 0.37 855 857 4 -21883.3 -7219.93 302.76 0.37 856 858 4 -21881.4 -7218.18 305.45 0.37 857 859 4 -21879.5 -7216.73 305.5 0.37 858 860 4 -21879.2 -7216.47 305.52 0.37 859 861 4 -21878.1 -7214.52 302.14 0.37 860 862 4 -21877.3 -7212.76 302.35 0.37 861 863 4 -21912.8 -7312.92 275.99 0.495 788 864 4 -21911.4 -7311.52 276.09 0.495 863 865 4 -21909.4 -7308.51 276.4 0.495 864 866 4 -21907.3 -7306.39 280.38 0.37 865 867 4 -21907.1 -7306.06 280.42 0.37 866 868 4 -21906.8 -7306.04 280.39 0.37 867 869 4 -21907.2 -7304.77 281.87 0.37 868 870 4 -21907.3 -7304.49 281.92 0.37 869 871 4 -21907.3 -7302.87 282.19 0.37 870 872 4 -21906.7 -7300.12 282.59 0.37 871 873 4 -21906.4 -7299.82 282.61 0.37 872 874 4 -21906.2 -7299.46 282.66 0.37 873 875 4 -21905.7 -7299.12 282.66 0.37 874 876 4 -21905.4 -7297.98 282.82 0.37 875 877 4 -21904.9 -7295.86 283.54 0.37 876 878 4 -21904.8 -7294.86 287.44 0.37 877 879 4 -21905.1 -7294.93 287.45 0.37 878 880 4 -21905 -7293.16 289.84 0.37 879 881 4 -21904.4 -7291.76 290.02 0.37 880 882 4 -21904.4 -7291.48 290.06 0.37 881 883 4 -21904 -7290.03 293.37 0.37 882 884 4 -21904 -7289.75 293.42 0.37 883 885 4 -21904.2 -7287.47 294.06 0.37 
884 886 4 -21901.9 -7284.51 302.56 0.37 885 887 4 -21902.1 -7282.15 302.96 0.37 886 888 4 -21901.8 -7281.83 302.99 0.37 887 889 4 -21901 -7279.47 308.98 0.37 888 890 4 -21900.8 -7277.57 309.28 0.37 889 891 4 -21900.9 -7277.31 309.32 0.37 890 892 4 -21900.5 -7275.6 312.25 0.37 891 893 4 -21899.4 -7272.44 312.67 0.37 892 894 4 -21898.4 -7270.26 316.68 0.37 893 895 4 -21898.4 -7269.98 316.73 0.37 894 896 4 -21897.2 -7267.11 317.09 0.37 895 897 4 -21897.3 -7266.86 317.14 0.37 896 898 4 -21896.4 -7265.62 317.26 0.37 897 899 4 -21896.2 -7265.29 317.3 0.37 898 900 4 -21895.2 -7263.6 319.4 0.37 899 901 4 -21894.5 -7261.76 322.17 0.37 900 902 4 -21893.1 -7257.68 326.68 0.37 901 903 4 -21892.5 -7251.67 332.02 0.37 902 904 4 -21891 -7248.42 338.22 0.37 903 905 4 -21891 -7248.19 338.26 0.37 904 906 4 -21889.2 -7246.66 339.9 0.37 905 907 4 -21888.5 -7245.2 340.08 0.37 906 908 4 -21888.4 -7243.83 340.3 0.37 907 909 4 -21888.5 -7243.56 340.35 0.37 908 910 4 -21886.4 -7241.13 346.93 0.37 909 911 4 -21885.4 -7238.56 347.26 0.25 910 912 4 -21884.8 -7236.34 349.53 0.25 911 913 4 -21923.5 -7366.31 269.31 0.99 352 914 4 -21923.5 -7366.37 269.69 0.99 913 915 4 -21923.2 -7367.68 275.75 0.99 914 916 4 -21923.2 -7367.41 275.8 0.99 915 917 4 -21918.8 -7365.32 279.21 0.99 916 918 4 -21918.1 -7364.4 279.39 0.99 917 919 4 -21918.1 -7364.47 279.83 0.99 918 920 4 -21916.4 -7363.48 283.74 0.99 919 921 4 -21914.7 -7362.14 285.38 0.99 920 922 4 -21914.7 -7362.22 285.89 0.99 921 923 4 -21913.6 -7360.73 286.49 0.99 922 924 4 -21913.6 -7360.48 286.53 0.99 923 925 4 -21912 -7358.96 288.63 0.99 924 926 4 -21911.5 -7358.6 288.64 0.99 925 927 4 -21909.9 -7356.7 288.81 0.865 926 928 4 -21908 -7354.89 292.95 0.865 927 929 4 -21905.9 -7353.05 295.75 0.865 928 930 4 -21904.4 -7351.1 298.45 0.865 929 931 4 -21904.3 -7351.26 299.43 0.865 930 932 4 -21903.9 -7349.17 302.1 0.745 931 933 4 -21903.2 -7347.52 305.41 0.745 932 934 4 -21902.8 -7346.65 305.51 0.745 933 935 4 -21902.9 -7346.36 305.57 0.745 934 936 4 -21902.3 -7345.97 311.53 0.62 935 937 4 -21901.6 -7345.14 316.85 0.62 936 938 4 -21900.3 -7344.16 321.65 0.62 937 939 4 -21900.3 -7343.93 321.69 0.62 938 940 4 -21899.7 -7343.4 324.02 0.62 939 941 4 -21901.3 -7343.24 327.53 0.62 940 942 4 -21901.5 -7340.6 331.19 0.62 941 943 4 -21901.8 -7340.65 331.21 0.62 942 944 4 -21901.1 -7338.11 339.24 0.62 943 945 4 -21901.5 -7338.15 339.27 0.62 944 946 4 -21900.5 -7336.46 344.59 0.62 945 947 4 -21899.2 -7335.36 350.42 0.62 946 948 4 -21897.5 -7333.67 357.94 0.62 947 949 4 -21897.5 -7333.78 358.58 0.62 948 950 4 -21897.6 -7331.09 368.24 0.62 949 951 4 -21896.9 -7328.35 373.46 0.62 950 952 4 -21890.7 -7324.8 373.47 0.495 951 953 4 -21888.8 -7322.91 384.73 0.495 952 954 4 -21888.6 -7320.18 392.56 0.495 953 955 4 -21888.5 -7317.39 404.42 0.495 954 956 4 -21887.9 -7315.51 412.86 0.495 955 957 4 -21885.8 -7312.85 418.59 0.495 956 958 4 -21885 -7309.62 430.35 0.495 957 959 4 -21885 -7309.35 430.4 0.495 958 960 4 -21883.1 -7306.97 437.4 0.495 959 961 4 -21883.4 -7306.77 437.46 0.495 960 962 4 -21881.2 -7304.19 441.03 0.495 961 963 4 -21880.8 -7302.15 445.13 0.495 962 964 4 -21880.6 -7300.47 445.18 0.37 963 965 4 -21888.1 -7300.69 445.85 0.495 964 966 4 -21886.3 -7299.94 452.33 0.495 965 967 4 -21885.2 -7297.72 458.09 0.37 966 968 4 -21883.7 -7298.1 470.6 0.25 967 969 4 -21878.2 -7303.85 443.68 0.495 962 970 4 -21876.5 -7302.66 443.71 0.37 969 971 4 -21874.4 -7302.64 444.47 0.37 970 972 4 -21902.3 -7350.37 297.87 0.865 931 973 4 -21899.6 -7348.93 298.83 0.865 972 974 4 -21898 -7347.9 300.98 0.865 973 
975 4 -21895.5 -7347.19 302.5 0.865 974 976 4 -21895.4 -7347.34 303.41 0.865 975 977 4 -21894.9 -7346.89 303.21 0.865 976 978 4 -21893.5 -7345.56 306.27 0.62 977 979 4 -21890.6 -7343.13 306.39 0.62 978 980 4 -21890.4 -7342.82 306.43 0.62 979 981 4 -21888.6 -7341.21 306.53 0.62 980 982 4 -21888.1 -7340.79 306.55 0.62 981 983 4 -21887.9 -7340.74 306.54 0.62 982 984 4 -21884.7 -7339.69 309.81 0.62 983 985 4 -21884.2 -7339.3 309.83 0.62 984 986 4 -21883.4 -7337.59 310.04 0.62 985 987 4 -21883.5 -7337.3 310.09 0.62 986 988 4 -21881.8 -7335.27 311 0.495 987 989 4 -21881 -7333.49 311.22 0.495 988 990 4 -21880.7 -7333.15 311.26 0.495 989 991 4 -21880.3 -7332.58 311.31 0.495 990 992 4 -21880.1 -7331.96 311.4 0.495 991 993 4 -21878.5 -7330.06 311.56 0.495 992 994 4 -21878.2 -7329.75 311.71 0.495 993 995 4 -21878.3 -7329.53 311.76 0.495 994 996 4 -21876.6 -7328.13 315.04 0.495 995 997 4 -21876.7 -7327.93 315.08 0.495 996 998 4 -21876 -7326.54 315.51 0.495 997 999 4 -21876 -7326.62 315.98 0.495 998 1000 4 -21874.5 -7324.89 317.14 0.495 999 1001 4 -21874.6 -7324.62 317.19 0.495 1000 1002 4 -21873.1 -7323.66 320.64 0.495 1001 1003 4 -21871.9 -7322.61 320.69 0.495 1002 1004 4 -21870.5 -7320.96 320.83 0.495 1003 1005 4 -21868.7 -7318.91 318.95 0.495 1004 1006 4 -21867.4 -7318.15 318.96 0.495 1005 1007 4 -21866.9 -7317.78 318.97 0.495 1006 1008 4 -21865.6 -7317.24 318.94 0.495 1007 1009 4 -21864.5 -7316.76 318.92 0.495 1008 1010 4 -21862.7 -7315.9 318.89 0.495 1009 1011 4 -21862.5 -7315.81 318.88 0.495 1010 1012 4 -21859.8 -7314.32 318.88 0.495 1011 1013 4 -21859.7 -7313.96 318.93 0.495 1012 1014 4 -21856.5 -7311.5 317.55 0.495 1013 1015 4 -21853.1 -7309.79 317.51 0.495 1014 1016 4 -21852.6 -7309.43 317.53 0.495 1015 1017 4 -21849.5 -7307.46 317.41 0.495 1016 1018 4 -21849.3 -7307.14 317.44 0.495 1017 1019 4 -21844.6 -7304.6 314.24 0.495 1018 1020 4 -21844 -7304.45 314.21 0.495 1019 1021 4 -21841.5 -7303 313.22 0.495 1020 1022 4 -21841.3 -7302.91 312.86 0.495 1021 1023 4 -21838.2 -7300.89 312.48 0.495 1022 1024 4 -21838 -7300.6 312.5 0.495 1023 1025 4 -21837.8 -7300.27 312.54 0.495 1024 1026 4 -21835.5 -7298.72 312.52 0.495 1025 1027 4 -21835 -7298.38 312.53 0.495 1026 1028 4 -21832.6 -7296.19 311.6 0.495 1027 1029 4 -21832.4 -7295.82 311.64 0.495 1028 1030 4 -21832 -7295.24 311.7 0.495 1029 1031 4 -21831.2 -7293.21 311.96 0.495 1030 1032 4 -21830.6 -7292.02 312.1 0.495 1031 1033 4 -21830.3 -7291.96 312.08 0.495 1032 1034 4 -21828.4 -7289.35 313.23 0.37 1033 1035 4 -21827.2 -7286.97 313.5 0.37 1034 1036 4 -21826.9 -7286.93 313.48 0.37 1035 1037 4 -21824.2 -7283.04 314.45 0.37 1036 1038 4 -21821.6 -7281.8 317.81 0.37 1037 1039 4 -21820 -7279.68 318.01 0.37 1038 1040 4 -21817 -7277.7 318.05 0.37 1039 1041 4 -21815.4 -7276.88 318.04 0.37 1040 1042 4 -21815.1 -7276.82 318.03 0.37 1041 1043 4 -21812 -7275.05 318.94 0.37 1042 1044 4 -21806.7 -7272.47 323.81 0.37 1043 1045 4 -21881.5 -7338.52 309.69 0.745 987 1046 4 -21879.4 -7337.87 309.59 0.745 1045 1047 4 -21879.1 -7337.77 309.58 0.745 1046 1048 4 -21875.8 -7336.3 312.25 0.495 1047 1049 4 -21872.9 -7336.29 312.13 0.495 1048 1050 4 -21870.3 -7335.54 315.32 0.495 1049 1051 4 -21867.5 -7334.51 316.87 0.495 1050 1052 4 -21864.5 -7335.81 319.6 0.495 1051 1053 4 -21864.3 -7335.82 319.58 0.495 1052 1054 4 -21862.6 -7337.08 319.26 0.495 1053 1055 4 -21861.6 -7337.66 319.08 0.495 1054 1056 4 -21857.9 -7340.47 318.62 0.495 1055 1057 4 -21857.3 -7340.61 318.54 0.495 1056 1058 4 -21855.1 -7341.71 319.1 0.495 1057 1059 4 -21855 -7341.8 319.59 0.495 1058 1060 4 -21851.9 
-7341.93 319.28 0.495 1059 1061 4 -21852.2 -7341.95 319.3 0.495 1060 1062 4 -21848.7 -7342.18 318.94 0.495 1061 1063 4 -21846.3 -7342.55 318.66 0.495 1062 1064 4 -21843.8 -7342.11 318.49 0.495 1063 1065 4 -21843.8 -7341.83 318.54 0.495 1064 1066 4 -21841.4 -7340.61 318.52 0.495 1065 1067 4 -21841.2 -7340.27 318.55 0.495 1066 1068 4 -21840.2 -7339.57 318.57 0.495 1067 1069 4 -21839.5 -7338.63 318.66 0.495 1068 1070 4 -21835.7 -7335.91 322.17 0.495 1069 1071 4 -21833 -7333.64 324.38 0.37 1070 1072 4 -21833.1 -7333.39 324.43 0.37 1071 1073 4 -21831.3 -7331.72 324.54 0.37 1072 1074 4 -21830.8 -7331.11 324.6 0.37 1073 1075 4 -21829.8 -7330.36 324.62 0.37 1074 1076 4 -21829.6 -7329.53 324.75 0.37 1075 1077 4 -21826.5 -7326.84 328.06 0.37 1076 1078 4 -21826.5 -7326.57 328.11 0.37 1077 1079 4 -21825.4 -7324.49 328.34 0.37 1078 1080 4 -21824.9 -7323.87 328.41 0.37 1079 1081 4 -21823.5 -7322.53 328.49 0.37 1080 1082 4 -21823.2 -7322.24 328.51 0.37 1081 1083 4 -21822 -7320.67 328.66 0.37 1082 1084 4 -21821.8 -7320.07 328.74 0.37 1083 1085 4 -21819 -7316.97 330.81 0.37 1084 1086 4 -21818.8 -7316.62 330.84 0.37 1085 1087 4 -21815 -7314.47 328.42 0.37 1086 1088 4 -21811.5 -7313.03 328.34 0.37 1087 1089 4 -21810.4 -7312.58 328.31 0.37 1088 1090 4 -21807 -7311.98 328.09 0.37 1089 1091 4 -21806.1 -7311.87 328.03 0.37 1090 1092 4 -21805.8 -7312.06 327.97 0.37 1091 1093 4 -21798.1 -7313.39 327.01 0.25 1092 1094 4 -21890.4 -7347.42 308.83 0.62 977 1095 4 -21888.2 -7347.85 310.55 0.62 1094 1096 4 -21887.4 -7348.22 310.41 0.62 1095 1097 4 -21885.5 -7347.35 310.38 0.62 1096 1098 4 -21885.2 -7347.28 310.37 0.62 1097 1099 4 -21882.5 -7347.96 315.22 0.495 1098 1100 4 -21882.3 -7348.15 316.36 0.495 1099 1101 4 -21881.1 -7348.85 317.32 0.495 1100 1102 4 -21880.7 -7348.83 317.48 0.495 1101 1103 4 -21878.7 -7348.72 317.31 0.495 1102 1104 4 -21878.8 -7348.45 317.38 0.495 1103 1105 4 -21876.4 -7348.72 318.26 0.495 1104 1106 4 -21876.1 -7348.64 318.25 0.495 1105 1107 4 -21875.3 -7348.8 318.15 0.495 1106 1108 4 -21875 -7348.69 318.14 0.495 1107 1109 4 -21871.4 -7348.54 318.25 0.62 1108 1110 4 -21869.3 -7350.04 319.07 0.62 1109 1111 4 -21867.1 -7350.87 319.63 0.62 1110 1112 4 -21867 -7351.02 320.5 0.62 1111 1113 4 -21865.2 -7350.79 322.46 0.62 1112 1114 4 -21863 -7350.85 324.73 0.62 1113 1115 4 -21860.3 -7351.43 324.38 0.62 1114 1116 4 -21857.5 -7350.95 324.2 0.495 1115 1117 4 -21856 -7351.2 324.02 0.495 1116 1118 4 -21853.9 -7351.22 325.97 0.495 1117 1119 4 -21852.5 -7350.99 325.87 0.495 1118 1120 4 -21851.2 -7350.19 325.88 0.495 1119 1121 4 -21851.2 -7349.93 325.93 0.495 1120 1122 4 -21849.3 -7351.15 329.81 0.495 1121 1123 4 -21849.1 -7350.85 329.84 0.495 1122 1124 4 -21845.8 -7349.72 329.72 0.495 1123 1125 4 -21842.5 -7349.35 332.19 0.495 1124 1126 4 -21842.4 -7349.43 332.69 0.495 1125 1127 4 -21840.3 -7348.86 336.16 0.495 1126 1128 4 -21840.1 -7348.55 336.19 0.495 1127 1129 4 -21838.4 -7348.27 336.08 0.495 1128 1130 4 -21835.5 -7347.67 336.96 0.495 1129 1131 4 -21832.3 -7347.14 338.59 0.495 1130 1132 4 -21830.1 -7346.5 338.49 0.495 1131 1133 4 -21829.8 -7346.19 338.51 0.495 1132 1134 4 -21829 -7346.04 338.47 0.495 1133 1135 4 -21825.6 -7346.39 341.88 0.495 1134 1136 4 -21822.6 -7344.98 342.96 0.495 1135 1137 4 -21819.8 -7345.56 342.61 0.495 1136 1138 4 -21819.9 -7345.32 342.65 0.495 1137 1139 4 -21817.8 -7345.82 347.37 0.495 1138 1140 4 -21816.2 -7347.52 358.29 0.495 1139 1141 4 -21813.7 -7350.01 363.49 0.495 1140 1142 4 -21807.3 -7349.69 362.95 0.495 1141 1143 4 -21803.5 -7350.24 371.56 0.495 1142 1144 4 -21801.9 
[Excerpt from a bundled neuron morphology data file; the long listing of
numeric records is omitted here. Each record consists of seven
whitespace-separated fields in what appears to be the standard SWC layout:

    index  type  x  y  z  radius  parent

where "type" is the SWC structure code (2 = axon and 4 = apical dendrite are
the codes visible in this excerpt), x/y/z are the sample coordinates,
"radius" is the compartment radius, and "parent" is the index of the
preceding sample or of the branch point the sample attaches to. The records
are wrapped across lines with no regard for record boundaries, and the
excerpt both begins and ends mid-record; the remainder of the file is not
reproduced.]
-657.47 0.125 2759 2823 2 -23034.2 -7436.25 -657.59 0.125 2822 2824 2 -23032.8 -7436.31 -657.74 0.125 2823 2825 2 -23031.2 -7437.01 -657.11 0.125 2824 2826 2 -23030.9 -7437.26 -657.18 0.125 2825 2827 2 -23028.9 -7435.94 -657.15 0.125 2826 2828 2 -23028.4 -7435.31 -657.1 0.125 2827 2829 2 -23027.2 -7433.8 -656.96 0.125 2828 2830 2 -23026.2 -7433.19 -656.95 0.125 2829 2831 2 -23024.3 -7432.13 -656.95 0.125 2830 2832 2 -23023.3 -7431.72 -656.98 0.125 2831 2833 2 -23022.8 -7431.39 -656.97 0.125 2832 2834 2 -23021.9 -7429.95 -656.82 0.125 2833 2835 2 -23021.7 -7428.88 -656.66 0.125 2834 2836 2 -23021.5 -7428.38 -656.6 0.125 2835 2837 2 -23021 -7427.77 -656.53 0.125 2836 2838 2 -23018.5 -7424.11 -657.16 0.125 2837 2839 2 -23018.8 -7424.14 -657.14 0.125 2838 2840 2 -23017.4 -7422.94 -658.46 0.125 2839 2841 2 -23017.1 -7422.65 -658.44 0.125 2840 2842 2 -23015.8 -7421.94 -658.45 0.125 2841 2843 2 -23015.6 -7421.91 -658.46 0.125 2842 2844 2 -23014.7 -7420.15 -658.55 0.125 2843 2845 2 -23014.4 -7419.88 -658.53 0.125 2844 2846 2 -23014.2 -7419.42 -662.35 0.125 2845 2847 2 -22932.2 -7444.64 -502.99 0.125 2528 2848 2 -22932.2 -7444.62 -503.13 0.125 2847 2849 2 -22933.4 -7442.13 -509.13 0.125 2848 2850 2 -22935.3 -7439.66 -512.76 0.125 2849 2851 2 -22935.2 -7437.32 -518 0.125 2850 2852 2 -22935.3 -7437.09 -517.96 0.125 2851 2853 2 -22937.1 -7434.81 -521.83 0.125 2852 2854 2 -22938.2 -7432.97 -524.2 0.125 2853 2855 2 -22938.3 -7432.72 -524.15 0.125 2854 2856 2 -22939.6 -7430.5 -528.66 0.125 2855 2857 2 -22939.6 -7430.25 -528.61 0.125 2856 2858 2 -22941.3 -7429.63 -531.11 0.125 2857 2859 2 -22943.1 -7428.8 -534.68 0.125 2858 2860 2 -22944.3 -7428.49 -534.66 0.125 2859 2861 2 -22946.6 -7427.61 -540.31 0.125 2860 2862 2 -22946.1 -7427.69 -548.38 0.125 2861 2863 2 -22948.5 -7425.5 -556.73 0.125 2862 2864 2 -22948.3 -7422.8 -562.37 0.125 2863 2865 2 -22948.4 -7422.57 -562.32 0.125 2864 2866 2 -22949.9 -7423.68 -563.72 0.125 2865 2867 2 -22952.3 -7422.96 -567.36 0.125 2866 2868 2 -22954.2 -7421.85 -570.72 0.125 2867 2869 2 -22956 -7420.53 -572.28 0.125 2868 2870 2 -22959 -7419.22 -580.99 0.125 2869 2871 2 -22960.7 -7418.25 -586.48 0.125 2870 2872 2 -22963.2 -7416.75 -589.77 0.125 2871 2873 2 -22963.8 -7415.25 -589.47 0.125 2872 2874 2 -22965.6 -7414.11 -600.6 0.125 2873 2875 2 -22966 -7413.39 -600.44 0.125 2874 2876 2 -22967.3 -7412.83 -601.78 0.125 2875 2877 2 -22971.9 -7409.85 -624.38 0.125 2876 2878 2 -22974.8 -7410.42 -624.2 0.125 2877 2879 2 -22975 -7408.81 -628.85 0.125 2878 2880 2 -22975.1 -7408.66 -629.75 0.125 2879 2881 2 -22977.3 -7408.7 -632.77 0.125 2880 2882 2 -21908.3 -7355.47 84.79 0.25 1776 2883 2 -21911.7 -7354.19 88.7 0.25 2882 2884 2 -21912.1 -7353.22 88.9 0.25 2883 2885 2 -21912.4 -7353.26 88.92 0.25 2884 2886 2 -21914.8 -7352.35 89.29 0.25 2885 2887 2 -21914.8 -7352.11 89.34 0.25 2886 2888 2 -21916.6 -7350.63 92.1 0.25 2887 2889 2 -21918.8 -7348.89 92.57 0.25 2888 2890 2 -21920.3 -7347.01 96.09 0.25 2889 2891 2 -21921.8 -7345.59 96.46 0.37 2890 2892 2 -21941.6 -7328.76 116.64 0.37 2891 2893 2 -21944.9 -7327.69 117.12 0.37 2892 2894 2 -21947.1 -7326.22 117.57 0.37 2893 2895 2 -21947.9 -7324.76 117.89 0.37 2894 2896 2 -21948.2 -7324.82 117.9 0.37 2895 2897 2 -21948.8 -7324.12 118.08 0.37 2896 2898 2 -21948.9 -7323.61 118.17 0.37 2897 2899 2 -21950 -7323.03 120.09 0.37 2898 2900 2 -21950.3 -7323.09 120.11 0.37 2899 2901 2 -21951.8 -7322.53 120.34 0.37 2900 2902 2 -21952.1 -7320.69 120.67 0.37 2901 2903 2 -21952.4 -7320.48 120.74 0.37 2902 2904 2 -21954 -7319.13 121.11 0.37 2903 2905 2 -21954.3 
-7319.19 121.12 0.37 2904 2906 2 -21955.8 -7318.37 121.4 0.37 2905 2907 2 -21956.8 -7317.49 121.64 0.37 2906 2908 2 -21957.2 -7317.08 121.74 0.37 2907 2909 2 -21957.4 -7317.09 121.77 0.37 2908 2910 2 -21959.9 -7315.92 122.19 0.37 2909 2911 2 -21960.2 -7315.4 122.31 0.37 2910 2912 2 -21960.6 -7314.73 122.45 0.37 2911 2913 2 -21961 -7314.23 122.57 0.37 2912 2914 2 -21961 -7313.95 122.62 0.37 2913 2915 2 -21963.1 -7312.5 124.77 0.37 2914 2916 2 -21964.9 -7312 125.1 0.37 2915 2917 2 -21966.5 -7310.74 125.77 0.37 2916 2918 2 -21966.5 -7310.48 125.82 0.37 2917 2919 2 -21967.5 -7309.36 126.3 0.37 2918 2920 2 -21967.8 -7309.16 126.36 0.37 2919 2921 2 -21969.6 -7307.7 129.1 0.37 2920 2922 2 -21971.6 -7307.26 129.36 0.37 2921 2923 2 -21972.3 -7306.33 129.6 0.37 2922 2924 2 -21973.1 -7305.12 129.87 0.37 2923 2925 2 -21974.4 -7304.12 129.29 0.37 2924 2926 2 -21973.2 -7302.78 129.81 0.37 2925 2927 2 -21974 -7302.7 129.9 0.37 2926 2928 2 -21974.3 -7302.73 129.92 0.37 2927 2929 2 -21974.9 -7300.85 131.01 0.37 2928 2930 2 -21975.9 -7299.48 131.36 0.37 2929 2931 2 -21975.9 -7299.76 131.31 0.37 2930 2932 2 -21977.6 -7298 132.2 0.37 2931 2933 2 -21979.5 -7297.92 133.14 0.37 2932 2934 2 -21980.5 -7296.51 134.74 0.37 2933 2935 2 -21980.7 -7295.74 134.88 0.37 2934 2936 2 -21981.9 -7294.89 135.29 0.37 2935 2937 2 -21984 -7293.69 137.15 0.37 2936 2938 2 -21985 -7292.64 137.9 0.37 2937 2939 2 -21985.3 -7292.44 137.96 0.37 2938 2940 2 -21986.3 -7291.37 138.72 0.37 2939 2941 2 -21986.6 -7291.45 138.73 0.37 2940 2942 2 -21986.9 -7291.07 139.37 0.37 2941 2943 2 -21988.2 -7289.74 139.72 0.37 2942 2944 2 -21988.5 -7289.78 139.74 0.37 2943 2945 2 -21990.2 -7288.32 140.5 0.37 2944 2946 2 -21990.1 -7288.49 141.52 0.37 2945 2947 2 -21990.8 -7287.55 141.74 0.37 2946 2948 2 -21990.7 -7287.56 141.82 0.37 2947 2949 2 -21992 -7286.71 142.12 0.37 2948 2950 2 -21993.2 -7286.42 142.33 0.37 2949 2951 2 -21993.2 -7286.44 142.43 0.37 2950 2952 2 -21994.7 -7284.91 144.66 0.37 2951 2953 2 -21995.1 -7284.19 144.82 0.37 2952 2954 2 -21997.5 -7282.41 144.79 0.37 2953 2955 2 -22017.3 -7262.5 160.43 0.37 2954 2956 2 -22019 -7260.95 160.84 0.37 2955 2957 2 -22020.1 -7258.86 163.26 0.37 2956 2958 2 -22021.6 -7257.91 164.15 0.37 2957 2959 2 -22022.1 -7256.42 164.45 0.37 2958 2960 2 -22024.2 -7256.02 164.75 0.37 2959 2961 2 -22024.5 -7255.82 164.82 0.37 2960 2962 2 -22025.3 -7255.1 166.26 0.37 2961 2963 2 -22025.6 -7254.39 168.14 0.37 2962 2964 2 -22026.2 -7252.75 168.68 0.37 2963 2965 2 -22027.5 -7251.65 169.03 0.37 2964 2966 2 -22029.4 -7250.42 169.5 0.37 2965 2967 2 -22030 -7248.68 169.85 0.37 2966 2968 2 -22030.7 -7247.51 170.12 0.37 2967 2969 2 -22032.8 -7247.33 170.35 0.37 2968 2970 2 -22033.6 -7247.21 170.45 0.37 2969 2971 2 -22033.9 -7247.26 170.46 0.37 2970 2972 2 -22034.8 -7246.86 170.62 0.37 2971 2973 2 -22034.8 -7246.59 170.67 0.37 2972 2974 2 -22035.8 -7246.05 170.87 0.37 2973 2975 2 -22035.8 -7245.76 170.92 0.37 2974 2976 2 -22036.6 -7245.4 172.55 0.37 2975 2977 2 -22036.7 -7245.15 172.6 0.37 2976 2978 2 -22037.7 -7244.29 172.84 0.37 2977 2979 2 -22037.8 -7243.5 172.98 0.37 2978 2980 2 -22037.9 -7242.72 173.12 0.37 2979 2981 2 -22038.2 -7242.49 173.19 0.37 2980 2982 2 -22038.3 -7241.94 173.29 0.37 2981 2983 2 -22039.6 -7241.63 176.04 0.37 2982 2984 2 -22040.2 -7241.23 176.17 0.37 2983 2985 2 -22041.6 -7240.4 176.43 0.37 2984 2986 2 -22041.7 -7239.58 176.57 0.37 2985 2987 2 -22041.7 -7239.36 176.68 0.37 2986 2988 2 -22043.5 -7238.73 177.55 0.37 2987 2989 2 -22044.2 -7237.56 177.84 0.37 2988 2990 2 -22044.5 -7237.08 177.95 0.37 2989 
2991 2 -22045.7 -7235.48 178.51 0.37 2990 2992 2 -22045.7 -7235.5 178.62 0.37 2991 2993 2 -22047.3 -7235.28 180.33 0.37 2992 2994 2 -22048.9 -7233.73 180.74 0.37 2993 2995 2 -22049.2 -7232.39 180.67 0.37 2994 2996 2 -22049.9 -7231.47 180.89 0.37 2995 2997 2 -22050.2 -7231.23 180.96 0.37 2996 2998 2 -22051.3 -7230.41 181.19 0.37 2997 2999 2 -22051.3 -7230.12 181.24 0.37 2998 3000 2 -22052.5 -7230.42 183.08 0.37 2999 3001 2 -22052.5 -7230.5 183.57 0.37 3000 3002 2 -22052.6 -7228.81 186.26 0.37 3001 3003 2 -22054.1 -7228.55 186.44 0.37 3002 3004 2 -22054.2 -7228.27 186.49 0.37 3003 3005 2 -22055.9 -7227.83 186.73 0.37 3004 3006 2 -22056.1 -7226.71 190.96 0.37 3005 3007 2 -22056.6 -7225.47 191.21 0.37 3006 3008 2 -22057.1 -7223.56 193.63 0.37 3007 3009 2 -22058.1 -7222.17 193.96 0.37 3008 3010 2 -22060 -7220.91 194.35 0.37 3009 3011 2 -22060.1 -7220.68 194.39 0.37 3010 3012 2 -22061.4 -7219.59 194.7 0.37 3011 3013 2 -22061.7 -7219.13 194.81 0.37 3012 3014 2 -22062.8 -7217.83 198.46 0.37 3013 3015 2 -22063.1 -7217.85 198.48 0.37 3014 3016 2 -22064.4 -7216.77 198.78 0.37 3015 3017 2 -22064.4 -7216.59 199.26 0.37 3016 3018 2 -22066.1 -7214.73 200.99 0.37 3017 3019 2 -22066.2 -7214.51 201.2 0.37 3018 3020 2 -22068 -7213.55 201.65 0.37 3019 3021 2 -22069.6 -7213.63 202.11 0.37 3020 3022 2 -22069.6 -7213.66 202.33 0.37 3021 3023 2 -22071.2 -7213.62 203.59 0.37 3022 3024 2 -22071.3 -7213.34 203.64 0.37 3023 3025 2 -22071.7 -7212.49 204.37 0.37 3024 3026 2 -22071.8 -7212.25 204.41 0.37 3025 3027 2 -22074.1 -7211.25 205.86 0.37 3026 3028 2 -22074.2 -7210.48 205.99 0.37 3027 3029 2 -22074.5 -7209.86 206.88 0.37 3028 3030 2 -22074.6 -7209.37 206.97 0.37 3029 3031 2 -22075 -7208.45 207.39 0.37 3030 3032 2 -22076.3 -7207.6 207.64 0.37 3031 3033 2 -22077.8 -7207.05 207.92 0.37 3032 3034 2 -22078.4 -7206.52 208.61 0.37 3033 3035 2 -22078.3 -7206.55 210.39 0.37 3034 3036 2 -22079.4 -7206.08 211.45 0.37 3035 3037 2 -22079.8 -7205.64 211.66 0.37 3036 3038 2 -22081.6 -7205.17 211.91 0.37 3037 3039 2 -22081.9 -7204.72 212.02 0.37 3038 3040 2 -22082.6 -7204.32 212.15 0.37 3039 3041 2 -22082.9 -7203.82 212.27 0.37 3040 3042 2 -22083.2 -7203.61 212.33 0.37 3041 3043 2 -22083.9 -7203.19 212.46 0.37 3042 3044 2 -22083.9 -7202.69 212.55 0.37 3043 3045 2 -22085.2 -7201.49 213.52 0.37 3044 3046 2 -22085.3 -7201.19 213.57 0.37 3045 3047 2 -22085.7 -7200.83 217.01 0.37 3046 3048 2 -22087.2 -7200.29 217.32 0.37 3047 3049 2 -22087.9 -7199.5 218.13 0.37 3048 3050 2 -22088 -7198.73 218.27 0.37 3049 3051 2 -22088.6 -7198.16 219.23 0.37 3050 3052 2 -22088.7 -7197.63 219.33 0.37 3051 3053 2 -22089.4 -7196.71 220.64 0.37 3052 3054 2 -22089.4 -7196.43 220.7 0.37 3053 3055 2 -22090.1 -7195.21 222.45 0.37 3054 3056 2 -22090.4 -7194.75 222.55 0.37 3055 3057 2 -22092.4 -7194.42 223.6 0.37 3056 3058 2 -22092.5 -7194.17 223.65 0.37 3057 3059 2 -22094.8 -7193.5 225.28 0.37 3058 3060 2 -22094.7 -7193.24 226.64 0.37 3059 3061 2 -22096.6 -7191.4 228.62 0.37 3060 3062 2 -22096.9 -7191.04 229.09 0.37 3061 3063 2 -22098.1 -7188.66 229.82 0.37 3062 3064 2 -22098.5 -7188.44 229.89 0.37 3063 3065 2 -22100.5 -7187.56 232.2 0.37 3064 3066 2 -22102.8 -7186.13 230.96 0.37 3065 3067 2 -22102.8 -7185.84 231.01 0.37 3066 3068 2 -22108.4 -7180.34 232.45 0.37 3067 3069 2 -22110.1 -7178.7 232.87 0.25 3068 3070 2 -22111.3 -7177.17 235.68 0.25 3069 3071 2 -22113.3 -7176.51 236.5 0.25 3070 3072 2 -22114.7 -7175.1 239.83 0.25 3071 3073 2 -22114.7 -7175.14 240.08 0.25 3072 3074 2 -22116.8 -7173.65 240.97 0.25 3073 3075 2 -22116.7 -7173.73 241.43 0.25 3074 3076 2 
-22117.3 -7172.52 243.42 0.25 3075 3077 2 -22118 -7171.81 243.6 0.25 3076 3078 2 -22119.1 -7170.4 243.94 0.25 3077 3079 2 -22119.1 -7169.87 244.03 0.25 3078 3080 2 -22121.1 -7169.89 244.21 0.25 3079 3081 2 -22122.7 -7168.51 244.59 0.25 3080 3082 2 -22125.3 -7167.38 245.63 0.25 3081 3083 2 -22126.9 -7166.46 248.34 0.25 3082 3084 2 -22127.6 -7165.77 248.59 0.25 3083 3085 2 -22127.8 -7164.46 248.82 0.25 3084 3086 2 -22129.1 -7164.1 250.6 0.25 3085 3087 2 -22129.2 -7163.34 250.74 0.25 3086 3088 2 -22130.9 -7161.21 251.31 0.25 3087 3089 2 -22131.4 -7160.5 253.07 0.25 3088 3090 2 -22132 -7160.35 253.15 0.25 3089 3091 2 -22134 -7160.39 256.7 0.25 3090 3092 2 -22134.7 -7159.65 256.84 0.25 3091 3093 2 -22135 -7159.98 256.8 0.25 3092 3094 2 -22136.3 -7157.75 258.85 0.25 3093 3095 2 -22136.3 -7157.5 258.89 0.25 3094 3096 2 -22137.3 -7156.58 259.14 0.25 3095 3097 2 -22139 -7154.89 262.05 0.25 3096 3098 2 -22140.6 -7153.8 262.38 0.125 3097 3099 2 -22142.8 -7151.77 266.2 0.125 3098 3100 2 -22144.1 -7150.36 266.79 0.125 3099 3101 2 -22146.6 -7149.14 267.22 0.125 3100 3102 2 -22149.2 -7147.38 269.36 0.125 3101 3103 2 -22149.5 -7147.44 269.37 0.125 3102 3104 2 -22151.1 -7146.5 272.21 0.125 3103 3105 2 -22152.6 -7145.18 274.3 0.125 3104 3106 2 -22155.1 -7144.02 281.55 0.125 3105 3107 2 -22155 -7144.21 282.72 0.125 3106 3108 2 -22156.8 -7143.19 283.31 0.125 3107 3109 2 -22157.1 -7142.99 283.37 0.125 3108 3110 2 -22157.7 -7142.01 285.49 0.125 3109 3111 2 -22158.3 -7139.47 285.97 0.125 3110 3112 2 -22158.9 -7139.08 286.09 0.125 3111 3113 2 -22159.2 -7139.07 286.12 0.125 3112 3114 2 -22161.3 -7137.56 286.57 0.125 3113 3115 2 -22162 -7137.09 286.71 0.125 3114 3116 2 -22163.9 -7135.1 288.05 0.125 3115 3117 2 -22165.2 -7133.19 289.97 0.125 3116 3118 2 -22165.4 -7133.21 289.99 0.125 3117 3119 2 -22166.9 -7131.17 292.95 0.125 3118 3120 2 -22168.6 -7129.93 295.41 0.125 3119 3121 2 -22171.2 -7127.93 295.98 0.125 3120 3122 2 -22172.4 -7125.72 298.12 0.125 3121 3123 2 -22174.7 -7125.29 300.21 0.125 3122 3124 2 -22175.6 -7124.13 300.5 0.125 3123 3125 2 -22176 -7123.89 300.57 0.125 3124 3126 2 -22178 -7121.82 304.27 0.125 3125 3127 2 -22181 -7118.73 307.72 0.125 3126 3128 2 -22182.8 -7118.2 308.08 0.125 3127 3129 2 -22183.9 -7118.12 308.64 0.125 3128 3130 2 -22184.1 -7118.27 309.18 0.125 3129 3131 2 -22185.1 -7117.99 311.47 0.125 3130 3132 2 -22185.2 -7117.19 311.61 0.125 3131 3133 2 -22185.6 -7116.74 311.72 0.125 3132 3134 2 -22185.7 -7116.19 311.82 0.125 3133 3135 2 -22186 -7116 311.88 0.125 3134 3136 2 -22186.8 -7115.34 313.82 0.125 3135 3137 2 -22188.3 -7114.75 314.06 0.125 3136 3138 2 -22188.5 -7113.18 314.37 0.125 3137 3139 2 -22189.7 -7112.56 314.59 0.125 3138 3140 2 -22190 -7112.62 314.61 0.125 3139 3141 2 -22191.7 -7111.79 314.91 0.125 3140 3142 2 -22193.5 -7112.18 317.14 0.125 3141 3143 2 -22193.5 -7112.26 317.62 0.125 3142 3144 2 -22196.5 -7112.78 318.41 0.125 3143 3145 2 -22199.6 -7111.97 320.93 0.125 3144 3146 2 -22199.6 -7111.7 320.98 0.125 3145 3147 2 -22200.5 -7109.72 321.46 0.125 3146 3148 2 -22200.5 -7109.17 321.56 0.125 3147 3149 2 -22201.3 -7107.3 322.39 0.125 3148 3150 2 -22201.6 -7107.1 322.69 0.125 3149 3151 2 -22201.5 -7104.96 323.31 0.125 3150 3152 2 -22201.8 -7105.02 323.33 0.125 3151 3153 2 -22203.5 -7098.54 324.56 0.125 3152 3154 2 -22205.2 -7097.04 324.96 0.125 3153 3155 2 -22206.4 -7097 325.08 0.125 3154 3156 2 -22207.8 -7096.78 325.65 0.125 3155 3157 2 -22207.8 -7096.57 325.69 0.125 3156 3158 2 -22208.8 -7095.67 325.94 0.125 3157 3159 2 -22209 -7094.91 326.08 0.125 3158 3160 2 -22210.2 
-7093.69 328.6 0.125 3159 3161 2 -22210.4 -7092.64 328.79 0.125 3160 3162 2 -22210.8 -7091.07 331.67 0.125 3161 3163 2 -22210.8 -7091.11 331.9 0.125 3162 3164 2 -22211.9 -7089.05 334.53 0.125 3163 3165 2 -22212.6 -7088.39 334.72 0.125 3164 3166 2 -22212.5 -7088.48 335.24 0.125 3165 3167 2 -22213.9 -7087.92 338.24 0.125 3166 3168 2 -22215 -7086.8 336.99 0.125 3167 3169 2 -22216.2 -7086.76 337.1 0.125 3168 3170 2 -22217.7 -7086.51 337.28 0.125 3169 3171 2 -22218 -7086.08 337.48 0.125 3170 3172 2 -22219.8 -7084.53 337.75 0.125 3171 3173 2 -22220.2 -7083.27 339.59 0.125 3172 3174 2 -22221.4 -7082.7 341.02 0.125 3173 3175 2 -22221.2 -7082.67 342.62 0.125 3174 3176 2 -22222 -7081.54 343.22 0.125 3175 3177 2 -22222.2 -7079 344.16 0.125 3176 3178 2 -22222.8 -7077.95 346.79 0.125 3177 3179 2 -22222.8 -7077.69 346.84 0.125 3178 3180 2 -22223.7 -7077.49 347.78 0.125 3179 3181 2 -22224 -7077.52 347.8 0.125 3180 3182 2 -22226.6 -7076.7 348.11 0.125 3181 3183 2 -22227.8 -7076.92 348.18 0.125 3182 3184 2 -22228 -7077.19 348.16 0.125 3183 3185 2 -22230.9 -7079.34 353.66 0.125 3184 3186 2 -22233.2 -7082.49 358.17 0.125 3185 3187 2 -22233.5 -7082.52 358.19 0.125 3186 3188 2 -22235.2 -7085.95 365.73 0.125 3187 3189 2 -22235.5 -7086.01 365.77 0.125 3188 3190 2 -22236.6 -7086.61 368.07 0.125 3189 3191 2 -22237.6 -7086.88 370.33 0.125 3190 3192 2 -22237.8 -7087.85 378.32 0.125 3191 3193 2 -22238.6 -7087.22 381.51 0.125 3192 3194 2 -22238.5 -7087.26 381.76 0.125 3193 3195 2 -22239 -7085.8 384.08 0.125 3194 3196 2 -22238.9 -7085.86 385.76 0.125 3195 3197 2 -22224.7 -7075.84 349.8 0.125 3181 3198 2 -22226 -7074.61 350.71 0.125 3197 3199 2 -22226.7 -7074.2 350.78 0.125 3198 3200 2 -22226.7 -7073.96 350.82 0.125 3199 3201 2 -22227.3 -7073.73 352.28 0.125 3200 3202 2 -22227.4 -7073.8 354.07 0.125 3201 3203 2 -22228.3 -7072.37 354.38 0.125 3202 3204 2 -22229.9 -7071.11 354.75 0.125 3203 3205 2 -22230.6 -7070.48 354.92 0.125 3204 3206 2 -22231 -7070.24 354.99 0.125 3205 3207 2 -22232.2 -7069.45 355.24 0.125 3206 3208 2 -22232.6 -7068.97 355.36 0.125 3207 3209 2 -22233.2 -7068.6 355.48 0.125 3208 3210 2 -22233.3 -7068.06 355.58 0.125 3209 3211 2 -22233.7 -7067.51 360.97 0.125 3210 3212 2 -22234 -7067.03 361.27 0.125 3211 3213 2 -22234.5 -7066.31 361.43 0.125 3212 3214 2 -22235.2 -7065.69 361.67 0.125 3213 3215 2 -22236.4 -7065.12 361.88 0.125 3214 3216 2 -22236.3 -7065.17 362.15 0.125 3215 3217 2 -22237.3 -7063.81 365.75 0.125 3216 3218 2 -22237.3 -7063.52 365.8 0.125 3217 3219 2 -22238 -7062.86 365.97 0.125 3218 3220 2 -22238.3 -7062.7 366.05 0.125 3219 3221 2 -22240 -7062.42 367.26 0.125 3220 3222 2 -22240 -7062.26 368.11 0.125 3221 3223 2 -22240.5 -7061.09 368.58 0.125 3222 3224 2 -22240.3 -7060.5 368.66 0.125 3223 3225 2 -22240.5 -7058.3 368.53 0.125 3224 3226 2 -22240.5 -7058.1 368.57 0.125 3225 3227 2 -22241.8 -7057.26 368.83 0.125 3226 3228 2 -22242.2 -7056.77 368.94 0.125 3227 3229 2 -22243.4 -7055.71 370.75 0.125 3228 3230 2 -22244.3 -7055.44 371.36 0.125 3229 3231 2 -22244.7 -7054.97 371.47 0.125 3230 3232 2 -22244.7 -7054.72 371.52 0.125 3231 3233 2 -22246.4 -7053.17 376.46 0.125 3232 3234 2 -22246.8 -7052.97 376.52 0.125 3233 3235 2 -22247.5 -7052.11 376.73 0.125 3234 3236 2 -22247.5 -7051.81 376.79 0.125 3235 3237 2 -22249.8 -7050.72 377.18 0.125 3236 3238 2 -22249.8 -7050.4 377.23 0.125 3237 3239 2 -22251.6 -7049.92 377.26 0.125 3238 3240 2 -22254.4 -7048.19 380.17 0.125 3239 3241 2 -22255.7 -7046.65 380.55 0.125 3240 3242 2 -22255.8 -7046.41 380.76 0.125 3241 3243 2 -22256.2 -7046.27 384.17 0.125 3242 
3244 2 -22256.1 -7046.51 385.58 0.125 3243 3245 2 -22257.3 -7045.56 386.62 0.125 3244 3246 2 -22257.3 -7045.43 387.44 0.125 3245 3247 2 -22259.3 -7044.93 388.75 0.125 3246 3248 2 -22259.5 -7044.16 388.9 0.125 3247 3249 2 -22260.5 -7043.27 389.27 0.125 3248 3250 2 -22260.8 -7042.87 389.53 0.125 3249 3251 2 -22261.5 -7042.52 389.83 0.125 3250 3252 2 -22264 -7041.77 392.19 0.125 3251 3253 2 -22264.3 -7041.73 391.93 0.125 3252 3254 2 -22265.8 -7041.77 392.12 0.125 3253 3255 2 -22265.8 -7041.26 392.21 0.125 3254 3256 2 -22266.2 -7041.09 392.27 0.125 3255 3257 2 -22267.3 -7040.86 393.08 0.125 3256 3258 2 -22267.5 -7040.94 394.76 0.125 3257 3259 2 -22267.5 -7040.71 394.8 0.125 3258 3260 2 -22267.8 -7040.67 394.33 0.125 3259 3261 2 -22267.6 -7041.08 396.83 0.125 3260 3262 2 -22268 -7040.66 397.24 0.125 3261 3263 2 -22268.3 -7040.02 397.82 0.125 3262 3264 2 -22268.4 -7039.79 397.86 0.125 3263 3265 2 -22268.4 -7039.36 398.5 0.125 3264 3266 2 -22269.3 -7037.09 400.27 0.125 3265 3267 2 -22271.2 -7036.03 401.74 0.125 3266 3268 2 -22273.5 -7033.67 399.74 0.125 3267 3269 2 -22276.8 -7032.27 402.15 0.125 3268 3270 2 -22278.6 -7031.81 406.98 0.125 3269 3271 2 -22279.8 -7031.03 407.38 0.125 3270 3272 2 -22279.9 -7030.74 407.43 0.125 3271 3273 2 -22281.4 -7029.25 411.02 0.125 3272 3274 2 -22282.3 -7028.63 411.26 0.125 3273 3275 2 -22284.7 -7028.12 414.94 0.125 3274 3276 2 -22285.6 -7027.47 416.8 0.125 3275 3277 2 -22286.8 -7025.29 417.28 0.125 3276 3278 2 -22287.5 -7022.76 417.84 0.125 3277 3279 2 -22288.7 -7021.05 418.23 0.125 3278 3280 2 -22288.8 -7020.54 418.32 0.125 3279 3281 2 -22290.7 -7018.76 420.49 0.125 3280 3282 2 -22292.8 -7017.57 421.33 0.125 3281 3283 2 -22292.8 -7017.45 422.03 0.125 3282 3284 2 -22295.2 -7016.55 424.12 0.125 3283 3285 2 -22295.2 -7016.29 424.27 0.125 3284 3286 2 -22296.4 -7015.98 424.44 0.125 3285 3287 2 -22298.9 -7014.47 426.56 0.125 3286 3288 2 -22299.9 -7011.75 427.11 0.125 3287 3289 2 -22300 -7011.23 427.2 0.125 3288 3290 2 -22300.9 -7009.51 427.57 0.125 3289 3291 2 -22300.9 -7009.23 427.62 0.125 3290 3292 2 -22301.8 -7008.23 430.5 0.125 3291 3293 2 -22301.8 -7007.98 430.54 0.125 3292 3294 2 -22303.3 -7007.41 430.78 0.125 3293 3295 2 -22304.8 -7007.15 430.96 0.125 3294 3296 2 -22305.8 -7006.92 430.58 0.125 3295 3297 2 -22308 -7007.79 430.02 0.125 3296 3298 2 -22310.2 -7009.08 427.8 0.125 3297 3299 2 -22312.5 -7010.85 427.59 0.125 3298 3300 2 -22312.7 -7010.93 427.6 0.125 3299 3301 2 -22316.5 -7011.66 422.31 0.125 3300 3302 2 -22316.7 -7011.7 422.33 0.125 3301 3303 2 -22318.6 -7012.82 422.26 0.125 3302 3304 2 -22321.5 -7013.86 419.09 0.125 3303 3305 2 -22323.7 -7015.02 419.1 0.125 3304 3306 2 -22323.7 -7014.99 418.95 0.125 3305 3307 2 -22325.2 -7014.09 418.5 0.125 3306 3308 2 -22327.3 -7014.87 416.3 0.125 3307 3309 2 -22332.3 -7017.85 413.95 0.125 3308 3310 2 -22335 -7020.26 413.91 0.125 3309 3311 2 -22338.4 -7020.72 413.02 0.125 3310 3312 2 -22339.5 -7020.92 413.09 0.125 3311 3313 2 -22342.1 -7021.76 410.58 0.125 3312 3314 2 -22343.4 -7022.31 410.61 0.125 3313 3315 2 -22345 -7023.14 410.62 0.125 3314 3316 2 -22345.6 -7024.29 410.48 0.125 3315 3317 2 -22346.9 -7024.82 410.52 0.125 3316 3318 2 -22348.7 -7025.8 410.82 0.125 3317 3319 2 -22351.3 -7026.21 409.03 0.125 3318 3320 2 -22352.9 -7027.42 408.02 0.125 3319 3321 2 -22356 -7030.86 407.09 0.125 3320 3322 2 -22357.9 -7033.21 406.28 0.125 3321 3323 2 -22359.1 -7035.63 405.99 0.125 3322 3324 2 -22360.5 -7037.74 405.9 0.125 3323 3325 2 -22361.5 -7038.67 403.93 0.125 3324 3326 2 -22362.8 -7040.51 402.29 0.125 3325 3327 2 
-22363.9 -7042.1 402.13 0.125 3326 3328 2 -22365.6 -7043.2 402.11 0.125 3327 3329 2 -22367.7 -7044.26 402.82 0.125 3328 3330 2 -22307 -7003.86 433.71 0.125 3296 3331 2 -22307 -7003.58 433.76 0.125 3330 3332 2 -22307.6 -6998 434.77 0.125 3331 3333 2 -22309.7 -6996.19 439.35 0.125 3332 3334 2 -22310 -6996.18 439.38 0.125 3333 3335 2 -22311.7 -6995.96 439.58 0.125 3334 3336 2 -22314.3 -6994.28 440.1 0.125 3335 3337 2 -22316 -6992.94 440.48 0.125 3336 3338 2 -22315.2 -6992.53 440.47 0.125 3337 3339 2 -22315.7 -6991.29 440.72 0.125 3338 3340 2 -22316.4 -6988.73 441.21 0.125 3339 3341 2 -22317.4 -6988.08 441.41 0.125 3340 3342 2 -22318.9 -6985.55 438.51 0.125 3341 3343 2 -22320.2 -6983.18 437.64 0.125 3342 3344 2 -22323.1 -6981.6 436.99 0.125 3343 3345 2 -22323.4 -6981.61 437.02 0.125 3344 3346 2 -22325.9 -6978.05 435 0.125 3345 3347 2 -22326.2 -6978.37 434.97 0.125 3346 3348 2 -22327.2 -6977.47 435.21 0.125 3347 3349 2 -22327.5 -6977.54 435.23 0.125 3348 3350 2 -22327.4 -6975.61 435.55 0.125 3349 3351 2 -22329.2 -6974.08 435.96 0.125 3350 3352 2 -22330.7 -6973.54 436.19 0.125 3351 3353 2 -22330.8 -6972.75 436.34 0.125 3352 3354 2 -22331 -6971.96 436.48 0.125 3353 3355 2 -22331 -6971.71 436.52 0.125 3354 3356 2 -22332.7 -6969.27 435.21 0.125 3355 3357 2 -22333 -6968.77 435.32 0.125 3356 3358 2 -22333.1 -6968.52 435.37 0.125 3357 3359 2 -22334.7 -6966.77 433.5 0.125 3358 3360 2 -22335 -6966.56 433.56 0.125 3359 3361 2 -22335.9 -6966.21 433.71 0.125 3360 3362 2 -22336 -6965.96 433.75 0.125 3361 3363 2 -22336.3 -6964.07 434.09 0.125 3362 3364 2 -22336.6 -6963.64 434.19 0.125 3363 3365 2 -22338.2 -6963.05 434.44 0.125 3364 3366 2 -22339.1 -6961.72 436.77 0.125 3365 3367 2 -22339.3 -6960.38 437.02 0.125 3366 3368 2 -22339.4 -6960.13 437.06 0.125 3367 3369 2 -22339.8 -6959.4 437.22 0.125 3368 3370 2 -22340.8 -6958.49 437.47 0.125 3369 3371 2 -22342.5 -6957.18 437.84 0.125 3370 3372 2 -22344.6 -6955.94 439.73 0.125 3371 3373 2 -22344.6 -6955.69 439.78 0.125 3372 3374 2 -22345.1 -6954.41 444.71 0.125 3373 3375 2 -22347.2 -6953.85 450.61 0.125 3374 3376 2 -22349.9 -6953.65 454.61 0.125 3375 3377 2 -22348.9 -6952.87 457.06 0.125 3376 3378 2 -22348.9 -6952.69 462.47 0.125 3377 3379 2 -22349.9 -6949.76 468.22 0.125 3378 3380 2 -22346.5 -6954.07 438.36 0.125 3373 3381 2 -22348 -6953.81 438.54 0.125 3380 3382 2 -22348.2 -6953.86 438.55 0.125 3381 3383 2 -22348.7 -6953.11 438.66 0.125 3382 3384 2 -22352.5 -6949.73 438.21 0.125 3383 3385 2 -22352.7 -6949.79 438.23 0.125 3384 3386 2 -22354.4 -6948.74 438.55 0.125 3385 3387 2 -22355.6 -6948.14 438.77 0.125 3386 3388 2 -22355.7 -6947.36 438.91 0.125 3387 3389 2 -22358.9 -6945.8 439.46 0.125 3388 3390 2 -22358.9 -6945.48 439.52 0.125 3389 3391 2 -22359.6 -6945.08 439.65 0.125 3390 3392 2 -22361.3 -6942.13 438.77 0.125 3391 3393 2 -22361.3 -6941.87 438.81 0.125 3392 3394 2 -22361.6 -6940.58 439.05 0.125 3393 3395 2 -22361.8 -6940.4 439.11 0.125 3394 3396 2 -22362.9 -6939.47 439.36 0.125 3395 3397 2 -22363.2 -6939.48 439.38 0.125 3396 3398 2 -22363.5 -6939.05 439.49 0.125 3397 3399 2 -22363.6 -6938.79 439.53 0.125 3398 3400 2 -22365.7 -6936.44 440.02 0.125 3399 3401 2 -22366 -6936.5 440.04 0.125 3400 3402 2 -22365.9 -6935.11 440.21 0.125 3401 3403 2 -22366 -6934.85 440.26 0.125 3402 3404 2 -22369.1 -6931.99 439.67 0.125 3403 3405 2 -22369.4 -6932.04 439.68 0.125 3404 3406 2 -22370.7 -6931.14 439.96 0.125 3405 3407 2 -22371.6 -6930.41 441.26 0.125 3406 3408 2 -22373.2 -6929.28 442.61 0.125 3407 3409 2 -22373.2 -6929.01 442.65 0.125 3408 3410 2 -22375.4 -6926.17 443.33 
0.125 3409 3411 2 -22376.2 -6924.51 444.23 0.125 3410 3412 2 -22376.2 -6924.6 444.81 0.125 3411 3413 2 -22378.6 -6922.32 445.16 0.125 3412 3414 2 -22380 -6920.67 445.57 0.125 3413 3415 2 -22380.8 -6919.46 445.84 0.125 3414 3416 2 -22382.7 -6917.59 447.78 0.125 3415 3417 2 -22383.8 -6916.47 448.04 0.125 3416 3418 2 -22383.9 -6915.93 448.14 0.125 3417 3419 2 -22384.2 -6915.72 448.2 0.125 3418 3420 2 -22386.3 -6913.69 448.73 0.125 3419 3421 2 -22388.6 -6911.94 449.24 0.125 3420 3422 2 -22391 -6911.01 454.06 0.125 3421 3423 2 -22392.7 -6909.94 454.44 0.125 3422 3424 2 -22392.6 -6910.02 454.94 0.125 3423 3425 2 -22393.3 -6908.76 456.53 0.125 3424 3426 2 -22393.5 -6908.84 456.62 0.125 3425 3427 2 -22396 -6906.71 459.62 0.125 3426 3428 2 -22398 -6904.22 464.08 0.125 3427 3429 2 -22397.9 -6904.26 464.36 0.125 3428 3430 2 -22401.4 -6899.44 465.01 0.125 3429 3431 2 -22406.8 -6892.78 470.64 0.125 3430 3432 2 -22401.2 -6904.61 466.37 0.125 3429 3433 2 -22402.7 -6904.64 466.5 0.125 3432 3434 2 -22404.1 -6902.96 466.91 0.125 3433 3435 2 -22407.1 -6902.97 470.07 0.125 3434 3436 2 -22407 -6903.04 470.48 0.125 3435 3437 2 -22410 -6901.86 472.15 0.125 3436 3438 2 -22411 -6901.37 476.35 0.125 3437 3439 2 -22413 -6899.9 476.78 0.125 3438 3440 2 -22413.3 -6899.42 476.89 0.125 3439 3441 2 -22413.4 -6898.54 478.14 0.125 3440 3442 2 -22413.7 -6898.57 478.16 0.125 3441 3443 2 -22414.1 -6897.1 480.49 0.125 3442 3444 2 -22414.2 -6896.56 480.59 0.125 3443 3445 2 -22415.2 -6894.07 481.18 0.125 3444 3446 2 -22415.2 -6893.85 481.22 0.125 3445 3447 2 -22416 -6892.19 482.36 0.125 3446 3448 2 -22417.9 -6890.55 485.39 0.125 3447 3449 2 -22418.8 -6890.18 485.54 0.125 3448 3450 2 -22418.9 -6889.67 485.63 0.125 3449 3451 2 -22420.5 -6887.34 488.11 0.125 3450 3452 2 -22420.5 -6887.5 489.11 0.125 3451 3453 2 -22421.9 -6886.44 487.83 0.125 3452 3454 2 -22422.6 -6883.87 492.92 0.125 3453 3455 2 -22417.6 -6884.59 492.34 0.125 3454 3456 2 -22418.6 -6882.85 496.67 0.125 3455 3457 2 -22419.6 -6879.46 500.99 0.125 3456 3458 2 -22135.1 -7160.19 250.3 0.125 3090 3459 2 -22136.3 -7159.28 252.03 0.125 3458 3460 2 -22138.2 -7157.34 249.03 0.25 3459 3461 2 -22139.7 -7156.51 249.26 0.25 3460 3462 2 -22142.2 -7154.69 249.53 0.25 3461 3463 2 -22145.6 -7153.97 247.66 0.25 3462 3464 2 -22147.1 -7152.91 247.97 0.25 3463 3465 2 -22149.8 -7150.62 247.13 0.25 3464 3466 2 -22150 -7150.6 247.16 0.25 3465 3467 2 -22152.5 -7149.39 247.59 0.25 3466 3468 2 -22154.1 -7147.73 248.02 0.25 3467 3469 2 -22155.4 -7146.86 248.28 0.25 3468 3470 2 -22157.2 -7145.04 248.89 0.25 3469 3471 2 -22159.7 -7144.41 248 0.25 3470 3472 2 -22161.5 -7143.83 248.26 0.25 3471 3473 2 -22163.9 -7142.75 247.97 0.25 3472 3474 2 -22165.6 -7141.1 248.21 0.25 3473 3475 2 -22165.7 -7140.03 248.35 0.25 3474 3476 2 -22165.8 -7139.74 248.4 0.25 3475 3477 2 -22167 -7139.09 248.41 0.25 3476 3478 2 -22167 -7139.08 248.3 0.25 3477 3479 2 -22168.5 -7138.46 248.39 0.25 3478 3480 2 -22170.2 -7137.13 247.25 0.25 3479 3481 2 -22172 -7136.32 247.55 0.25 3480 3482 2 -22172 -7136.05 247.6 0.25 3481 3483 2 -22172.4 -7135.08 247.8 0.25 3482 3484 2 -22165.3 -7136.36 246.92 0.37 3483 3485 2 -22168.8 -7135.27 240.62 0.37 3484 3486 2 -22169.5 -7133.83 240.93 0.37 3485 3487 2 -22170.5 -7131.33 241.44 0.37 3486 3488 2 -22172.3 -7129.41 237.94 0.37 3487 3489 2 -22172.7 -7128.42 238.14 0.37 3488 3490 2 -22175 -7126.94 236.91 0.37 3489 3491 2 -22176.8 -7124.79 236.89 0.37 3490 3492 2 -22178.1 -7123.02 234.91 0.37 3491 3493 2 -22180.1 -7123.13 235.08 0.37 3492 3494 2 -22182.2 -7122.44 235.39 0.37 3493 3495 2 
-22185 -7120.73 232.28 0.37 3494 3496 2 -22186.6 -7118.04 231 0.37 3495 3497 2 -22187.7 -7117.1 230.69 0.37 3496 3498 2 -22189.7 -7115.38 225.56 0.37 3497 3499 2 -22190.7 -7114.49 225.54 0.37 3498 3500 2 -22191.5 -7113.29 225.81 0.37 3499 3501 2 -22193.3 -7111.52 224.85 0.37 3500 3502 2 -22195.6 -7109.77 224.99 0.37 3501 3503 2 -22197.6 -7108 220.81 0.37 3502 3504 2 -22199.1 -7105.64 221.33 0.37 3503 3505 2 -22200.7 -7104.55 221.48 0.37 3504 3506 2 -22206.5 -7100.54 213.36 0.37 3505 3507 2 -22209.8 -7099.01 210.98 0.37 3506 3508 2 -22210.3 -7097.5 211.27 0.37 3507 3509 2 -22213.3 -7095.11 211.7 0.37 3508 3510 2 -22216 -7092.83 207.19 0.37 3509 3511 2 -22218.9 -7092.23 207.34 0.37 3510 3512 2 -22221.8 -7089.18 201.15 0.25 3511 3513 2 -22224.8 -7087.24 199.87 0.25 3512 3514 2 -22226.5 -7086 198.57 0.25 3513 3515 2 -22228.7 -7085.3 197.33 0.25 3514 3516 2 -22230.7 -7082.14 192.94 0.25 3515 3517 2 -22232 -7079.65 191.7 0.25 3516 3518 2 -22234.4 -7077.49 188.01 0.25 3517 3519 2 -22235.6 -7075.65 187.17 0.25 3518 3520 2 -22236.7 -7075.09 184.71 0.25 3519 3521 2 -22238.7 -7074.15 185.05 0.125 3520 3522 2 -22243 -7068.55 180.69 0.125 3521 3523 2 -22245.9 -7063.5 175.92 0.125 3522 3524 2 -22248.3 -7059.32 167.21 0.125 3523 3525 2 -22248.8 -7057.5 167.35 0.125 3524 3526 2 -22252 -7054.46 161.28 0.125 3525 3527 2 -22253.2 -7054.53 160.7 0.125 3526 3528 2 -22253.8 -7052.46 160.73 0.125 3527 3529 2 -22253.8 -7050.78 160.49 0.125 3528 3530 2 -22256.3 -7048.33 158.51 0.125 3529 3531 2 -22257.9 -7046.32 156.69 0.125 3530 3532 2 -22259.4 -7044.73 155.58 0.125 3531 3533 2 -22261.3 -7042.28 154.03 0.125 3532 3534 2 -22263.6 -7040.42 153.79 0.125 3533 3535 2 -22265.8 -7037.74 149.16 0.125 3534 3536 2 -22266.6 -7037.6 149.25 0.125 3535 3537 2 -22267.4 -7036.15 149.41 0.125 3536 3538 2 -22268.5 -7033.52 149.07 0.125 3537 3539 2 -22271.9 -7032.09 142.98 0.125 3538 3540 2 -22273.1 -7030.53 142.22 0.125 3539 3541 2 -22273.8 -7028.79 141.11 0.125 3540 3542 2 -22275.7 -7026.06 136.49 0.125 3541 3543 2 -22276.1 -7024.81 136.74 0.125 3542 3544 2 -22277.2 -7022.88 137.16 0.125 3543 3545 2 -22280 -7018.77 131.75 0.125 3544 3546 2 -22282.7 -7016.66 131.24 0.125 3545 3547 2 -22290.3 -7019.22 131.53 0.125 3546 3548 2 -22292 -7017.44 131.99 0.125 3547 3549 2 -22293.2 -7015.57 132.41 0.125 3548 3550 2 -22295.3 -7012.03 132.06 0.125 3549 3551 2 -22297.2 -7011.08 132.32 0.125 3550 3552 2 -22298.2 -7009.53 130.4 0.125 3551 3553 2 -22299 -7008.12 130.68 0.125 3552 3554 2 -22300.8 -7007.4 130.95 0.125 3553 3555 2 -22303.2 -7005.75 125.57 0.125 3554 3556 2 -22304.5 -7004.95 125.82 0.125 3555 3557 2 -22306.8 -7002.9 121.31 0.125 3556 3558 2 -22307.6 -7001.7 121.58 0.125 3557 3559 2 -22309.1 -7000.96 119.05 0.125 3558 3560 2 -22309.1 -6999.15 119.35 0.125 3559 3561 2 -22309.3 -6997.86 119.59 0.125 3560 3562 2 -22311.4 -6995.6 120.16 0.125 3561 3563 2 -22313.9 -6993.65 118.83 0.125 3562 3564 2 -22314.9 -6991.78 116.68 0.125 3563 3565 2 -22314.8 -6990.07 114.88 0.125 3564 3566 2 -22316.4 -6988.26 113.82 0.125 3565 3567 2 -22317.6 -6986.75 111.88 0.125 3566 3568 2 -22318.5 -6986.41 112.02 0.125 3567 3569 2 -22318.7 -6985.11 112.25 0.125 3568 3570 2 -22320.1 -6983.38 108.64 0.125 3569 3571 2 -22320.4 -6982.09 108.87 0.125 3570 3572 2 -22321.3 -6981.07 106.97 0.125 3571 3573 2 -22321.2 -6979.92 106.88 0.125 3572 3574 2 -22322.9 -6979.35 104.82 0.125 3573 3575 2 -22323.3 -6978.65 104.97 0.125 3574 3576 2 -22321.8 -6973.33 102.61 0.125 3575 3577 2 -22323.7 -6972.94 101.68 0.125 3576 3578 2 -22324.9 -6970.96 101.68 0.125 3577 3579 2 
-22329.5 -6962.81 100.13 0.125 3578 3580 2 -22334.1 -6956.31 100.39 0.125 3579 3581 2 -22336.8 -6953.9 101.04 0.125 3580 3582 2 -22338 -6952.04 101.46 0.125 3581 3583 2 -22338.4 -6948.15 102.14 0.125 3582 3584 2 -22340.5 -6945.93 102.7 0.125 3583 3585 2 -22342.2 -6944.09 103.17 0.125 3584 3586 2 -22342.7 -6943.15 103.37 0.125 3585 3587 2 -22344.1 -6939.86 103.34 0.125 3586 3588 2 -22346.1 -6937.11 102.71 0.125 3587 3589 2 -22350.3 -6933.67 100.51 0.125 3588 3590 2 -22353.2 -6931.84 101.09 0.125 3589 3591 2 -22357.2 -6927.74 100.22 0.125 3590 3592 2 -22358.7 -6925.63 100.71 0.125 3591 3593 2 -22360.9 -6921.3 101.64 0.125 3592 3594 2 -22362.7 -6918.75 102.23 0.125 3593 3595 2 -22365.8 -6915.61 98.46 0.125 3594 3596 2 -22368.2 -6911.61 99.35 0.125 3595 3597 2 -22371.9 -6907.72 98.42 0.125 3596 3598 2 -22373.2 -6905.34 98.93 0.125 3597 3599 2 -22376.6 -6900.03 97.88 0.125 3598 3600 2 -22378.6 -6895.94 98.76 0.125 3599 3601 2 -22379.4 -6895.01 98.98 0.125 3600 3602 2 -22380.9 -6891.77 98.9 0.125 3601 3603 2 -22380.7 -6887.9 99.52 0.125 3602 3604 2 -22379.6 -6883.58 96.17 0.125 3603 3605 2 -22379.9 -6881.7 96.39 0.125 3604 3606 2 -22379 -6879.93 96.44 0.125 3605 3607 2 -22378.2 -6877.94 96.7 0.125 3606 3608 2 -22378.6 -6875.15 97.2 0.125 3607 3609 2 -22379.4 -6871.18 96.23 0.125 3608 3610 2 -22379.3 -6867.07 94.62 0.125 3609 3611 2 -22379 -6865.42 94.86 0.125 3610 3612 2 -22379.7 -6864.19 95.13 0.125 3611 3613 2 -22381.4 -6861.05 91.42 0.125 3612 3614 2 -22382.3 -6859.92 91.7 0.125 3613 3615 2 -22382.3 -6858.28 91.97 0.125 3614 3616 2 -22381.8 -6857.71 92.02 0.125 3615 3617 2 -22381.9 -6855.82 92.33 0.125 3616 3618 2 -22381 -6853.86 92.58 0.125 3617 3619 2 -22379.5 -6850.07 89.6 0.125 3618 3620 2 -22378.5 -6847.51 89.93 0.125 3619 3621 2 -22377.2 -6844.59 87.06 0.125 3620 3622 2 -22376.3 -6842.09 84.6 0.125 3621 3623 2 -22375.8 -6840.39 84.83 0.125 3622 3624 2 -22375.9 -6839.12 85.06 0.125 3623 3625 2 -22376.5 -6838.4 82.21 0.125 3624 3626 2 -21870.3 -7372.4 83.95 0.37 1650 3627 2 -21870.5 -7370.37 84.31 0.37 3626 3628 2 -21870.3 -7369.13 86.56 0.37 3627 3629 2 -21870.7 -7366.77 86.99 0.37 3628 3630 2 -21870.7 -7366.55 87.03 0.37 3629 3631 2 -21870.1 -7364.8 87.27 0.37 3630 3632 2 -21869.8 -7364.57 87.28 0.37 3631 3633 2 -21869.3 -7362.61 87.55 0.37 3632 3634 2 -21869.1 -7362.03 87.62 0.37 3633 3635 2 -21869.9 -7360.58 87.94 0.37 3634 3636 2 -21870 -7359.53 88.13 0.37 3635 3637 2 -21869.6 -7358.68 88.23 0.37 3636 3638 2 -21869.8 -7357.64 88.42 0.37 3637 3639 2 -21870.1 -7356 92.69 0.37 3638 3640 2 -21871.1 -7354.64 93.27 0.37 3639 3641 2 -21871.1 -7354.42 93.57 0.37 3640 3642 2 -21870.9 -7352.09 94.47 0.37 3641 3643 2 -21870.9 -7351.82 94.52 0.37 3642 3644 2 -21871.4 -7350.14 95.13 0.37 3643 3645 2 -21871.5 -7349.64 95.49 0.37 3644 3646 2 -21872.3 -7347.51 96.6 0.37 3645 3647 2 -21872.7 -7346.54 96.8 0.37 3646 3648 2 -21873.6 -7343.74 98.6 0.37 3647 3649 2 -21874.7 -7342.07 98.98 0.37 3648 3650 2 -21875 -7341.83 99.05 0.37 3649 3651 2 -21876.3 -7339.67 99.59 0.37 3650 3652 2 -21876.3 -7339.4 99.64 0.37 3651 3653 2 -21878.1 -7337.91 101.89 0.37 3652 3654 2 -21880.2 -7335.36 102.64 0.37 3653 3655 2 -21880.1 -7335.11 102.67 0.37 3654 3656 2 -21881.5 -7331.71 104.02 0.37 3655 3657 2 -21881.3 -7331.82 106.17 0.37 3656 3658 2 -21881.2 -7329.52 105.63 0.37 3657 3659 2 -21881.2 -7329.26 105.68 0.37 3658 3660 2 -21881 -7328.38 106.87 0.37 3659 3661 2 -21881 -7327.86 106.97 0.37 3660 3662 2 -21880 -7325.79 110.02 0.37 3661 3663 2 -21879.9 -7325.45 110.06 0.37 3662 3664 2 -21878.3 -7324.68 110.01 0.37 3663 
3665 2 -21878.2 -7323.35 110.21 0.37 3664 3666 2 -21878 -7322.83 110.28 0.37 3665 3667 2 -21876.5 -7321.18 110.42 0.37 3666 3668 2 -21876.6 -7321.01 110.45 0.37 3667 3669 2 -21876.9 -7318.92 110.82 0.37 3668 3670 2 -21877.5 -7316.63 111.26 0.37 3669 3671 2 -21879.9 -7315.47 111.68 0.37 3670 3672 2 -21881.3 -7314.1 112.03 0.37 3671 3673 2 -21882.5 -7311.62 112.56 0.37 3672 3674 2 -21882.6 -7311.13 112.65 0.37 3673 3675 2 -21883.4 -7310.38 115.45 0.37 3674 3676 2 -21883.7 -7308.07 115.86 0.37 3675 3677 2 -21884 -7305.52 118.23 0.25 3676 3678 2 -21884 -7305.26 118.28 0.25 3677 3679 2 -21884.4 -7302.68 118.75 0.25 3678 3680 2 -21884.5 -7302.43 118.79 0.25 3679 3681 2 -21885.1 -7300.13 119.23 0.25 3680 3682 2 -21885.4 -7299.67 119.34 0.25 3681 3683 2 -21886.4 -7298.75 119.58 0.25 3682 3684 2 -21887.2 -7297.3 119.9 0.25 3683 3685 2 -21887.5 -7296.87 120 0.25 3684 3686 2 -21886.9 -7293.24 120.54 0.37 3685 3687 2 -21887.3 -7290.92 120.96 0.37 3686 3688 2 -21889.4 -7286.12 119.38 0.37 3687 3689 2 -21889.3 -7286.2 119.88 0.37 3688 3690 2 -21890.2 -7284.11 121.01 0.37 3689 3691 2 -21892.2 -7278.85 126.04 0.37 3690 3692 2 -21892.3 -7275.45 126.6 0.37 3691 3693 2 -21892.4 -7271.26 127.49 0.37 3692 3694 2 -21894.1 -7269.72 127.91 0.37 3693 3695 2 -21895.9 -7267.82 126.23 0.37 3694 3696 2 -21897.4 -7265.97 126.67 0.37 3695 3697 2 -21897.6 -7266.03 126.68 0.37 3696 3698 2 -21898.3 -7264.41 129.13 0.37 3697 3699 2 -21898.6 -7261.02 129.71 0.37 3698 3700 2 -21898.8 -7259.98 129.91 0.37 3699 3701 2 -21899.8 -7257.51 130.41 0.37 3700 3702 2 -21899.8 -7257.27 130.45 0.37 3701 3703 2 -21900.9 -7256.79 135.71 0.37 3702 3704 2 -21901 -7256.53 135.76 0.37 3703 3705 2 -21902.8 -7252.9 136.53 0.37 3704 3706 2 -21902.6 -7248.22 139.16 0.37 3705 3707 2 -21903.2 -7244.85 139.77 0.37 3706 3708 2 -21902.7 -7243.19 142.81 0.25 3707 3709 2 -21903.7 -7241.76 143.14 0.25 3708 3710 2 -21903.7 -7241.56 143.17 0.25 3709 3711 2 -21904.1 -7239.48 143.55 0.25 3710 3712 2 -21904 -7238.45 143.71 0.25 3711 3713 2 -21905.1 -7237.69 141.7 0.25 3712 3714 2 -21905.4 -7237.5 141.76 0.25 3713 3715 2 -21905.9 -7236.24 142.02 0.25 3714 3716 2 -21906 -7235.98 142.06 0.25 3715 3717 2 -21906.2 -7234.95 142.25 0.25 3716 3718 2 -21906.8 -7234.03 143.98 0.25 3717 3719 2 -21908.3 -7233.24 144.26 0.25 3718 3720 2 -21908.4 -7232.98 144.3 0.25 3719 3721 2 -21909 -7232.58 144.43 0.25 3720 3722 2 -21909.9 -7228.94 144.78 0.25 3721 3723 2 -21910.4 -7227.48 145.07 0.25 3722 3724 2 -21910.5 -7227.24 145.11 0.25 3723 3725 2 -21911 -7224.17 145.67 0.25 3724 3726 2 -21911.3 -7220.5 146.31 0.25 3725 3727 2 -21910.2 -7216.34 145.04 0.25 3726 3728 2 -21910 -7211.8 145.78 0.25 3727 3729 2 -21910 -7211.53 145.82 0.25 3728 3730 2 -21910.5 -7208.53 143.61 0.25 3729 3731 2 -21910.6 -7207.74 143.75 0.25 3730 3732 2 -21910.5 -7204.11 146.32 0.25 3731 3733 2 -21910.6 -7203.85 146.4 0.25 3732 3734 2 -21910.8 -7202.61 146.63 0.25 3733 3735 2 -21911.8 -7200.14 147.12 0.25 3734 3736 2 -21911.6 -7199.56 147.2 0.25 3735 3737 2 -21911.7 -7199.31 147.25 0.25 3736 3738 2 -21910.7 -7196.71 149.65 0.25 3737 3739 2 -21911.3 -7196.31 149.78 0.25 3738 3740 2 -21911.4 -7196.05 149.83 0.25 3739 3741 2 -21911.7 -7194.29 150.14 0.25 3740 3742 2 -21911.5 -7193.98 150.17 0.25 3741 3743 2 -21912 -7192.75 150.42 0.25 3742 3744 2 -21911.9 -7192.49 150.47 0.25 3743 3745 2 -21911.4 -7191.59 153.31 0.25 3744 3746 2 -21911.3 -7191.73 154.15 0.25 3745 3747 2 -21911.4 -7188.24 154.52 0.25 3746 3748 2 -21911.4 -7187.99 154.56 0.25 3747 3749 2 -21912.7 -7186.94 154.86 0.25 3748 3750 2 -21913.5 
-7183.42 153 0.25 3749 3751 2 -21913.5 -7183.19 153.04 0.25 3750 3752 2 -21914.1 -7182.49 153.22 0.25 3751 3753 2 -21913.9 -7182.22 153.25 0.25 3752 3754 2 -21914 -7181.42 153.39 0.25 3753 3755 2 -21914.1 -7181.17 153.44 0.25 3754 3756 2 -21914.5 -7180.46 153.59 0.25 3755 3757 2 -21914.4 -7179.39 153.76 0.25 3756 3758 2 -21914.5 -7179.11 153.81 0.25 3757 3759 2 -21914.2 -7178.83 153.84 0.25 3758 3760 2 -21914.6 -7175.97 155.77 0.25 3759 3761 2 -21914 -7173.96 157.09 0.25 3760 3762 2 -21913.3 -7172.31 158.88 0.25 3761 3763 2 -21913.4 -7172 158.94 0.25 3762 3764 2 -21913.7 -7166.89 156.01 0.25 3763 3765 2 -21913.4 -7166.59 156.03 0.25 3764 3766 2 -21913.8 -7160.68 162.15 0.25 3765 3767 2 -21915 -7156.83 163.68 0.25 3766 3768 2 -21915.9 -7154.64 164.13 0.25 3767 3769 2 -21916.2 -7152.6 164.5 0.25 3768 3770 2 -21916.3 -7152.36 164.55 0.25 3769 3771 2 -21915.8 -7150.34 164.3 0.25 3770 3772 2 -21915.7 -7149.61 163.52 0.25 3771 3773 2 -21915.5 -7149.27 163.55 0.25 3772 3774 2 -21916.7 -7145.57 164.28 0.25 3773 3775 2 -21915.6 -7141.85 159.85 0.25 3774 3776 2 -21915.9 -7141.92 159.86 0.25 3775 3777 2 -21915.8 -7140.32 160.12 0.25 3776 3778 2 -21915.9 -7140.05 160.17 0.25 3777 3779 2 -21916.7 -7136.98 160.26 0.25 3778 3780 2 -21916.8 -7136.69 160.11 0.25 3779 3781 2 -21917.7 -7134.24 160.6 0.25 3780 3782 2 -21917.8 -7134 160.65 0.25 3781 3783 2 -21916.8 -7131.43 160.98 0.25 3782 3784 2 -21916.6 -7130.85 161.06 0.25 3783 3785 2 -21917.5 -7129.15 161.3 0.25 3784 3786 2 -21917.5 -7128.88 161.35 0.25 3785 3787 2 -21917.4 -7126.94 164.25 0.25 3786 3788 2 -21917.5 -7125.91 164.44 0.25 3787 3789 2 -21917.2 -7124.77 164.59 0.25 3788 3790 2 -21917 -7124.51 164.61 0.25 3789 3791 2 -21916.6 -7123.15 164.81 0.25 3790 3792 2 -21916.3 -7123.07 164.79 0.25 3791 3793 2 -21916.4 -7121.45 167.83 0.25 3792 3794 2 -21916.3 -7120.65 167.95 0.25 3793 3795 2 -21916.4 -7119.87 168.09 0.25 3794 3796 2 -21916.5 -7119.13 168.22 0.25 3795 3797 2 -21916.6 -7118.85 168.27 0.25 3796 3798 2 -21917 -7115.4 166.77 0.25 3797 3799 2 -21917.7 -7113.19 167.58 0.25 3798 3800 2 -21917.9 -7111.91 167.81 0.25 3799 3801 2 -21917.9 -7111.66 167.86 0.25 3800 3802 2 -21917.9 -7110.07 168.12 0.25 3801 3803 2 -21917.9 -7109.83 168.16 0.25 3802 3804 2 -21917.7 -7107.21 170.48 0.25 3803 3805 2 -21918.2 -7105.75 170.78 0.25 3804 3806 2 -21917 -7104.73 171.02 0.25 3805 3807 2 -21916.3 -7103.34 171.19 0.25 3806 3808 2 -21916.1 -7102.77 171.26 0.25 3807 3809 2 -21915.2 -7101.24 172.72 0.25 3808 3810 2 -21915.2 -7100.96 172.78 0.25 3809 3811 2 -21916.1 -7099.12 173.68 0.25 3810 3812 2 -21916.1 -7096.88 176.2 0.25 3811 3813 2 -21916.1 -7096.6 176.25 0.25 3812 3814 2 -21916.5 -7095.36 176.66 0.25 3813 3815 2 -21917.1 -7093.62 177 0.25 3814 3816 2 -21917.2 -7093.38 177.05 0.25 3815 3817 2 -21917.3 -7091.98 177.95 0.25 3816 3818 2 -21917.7 -7090.93 179.33 0.25 3817 3819 2 -21918.1 -7088.65 180.07 0.25 3818 3820 2 -21918.9 -7086.93 181.75 0.25 3819 3821 2 -21918.9 -7086.41 181.84 0.25 3820 3822 2 -21920.8 -7084.57 184.89 0.25 3821 3823 2 -21920.8 -7084.59 185.04 0.25 3822 3824 2 -21922.5 -7082.51 185.55 0.25 3823 3825 2 -21922.5 -7082.27 185.59 0.25 3824 3826 2 -21923 -7080.57 189.22 0.25 3825 3827 2 -21923 -7080.27 189.27 0.25 3826 3828 2 -21922.4 -7078.87 189.44 0.25 3827 3829 2 -21922 -7078 189.55 0.25 3828 3830 2 -21921.5 -7075.79 189.87 0.25 3829 3831 2 -21921.2 -7075.49 189.9 0.25 3830 3832 2 -21920.4 -7073.93 194.12 0.25 3831 3833 2 -21920.3 -7072.86 194.29 0.25 3832 3834 2 -21919.9 -7071.99 194.39 0.25 3833 3835 2 -21919.7 -7071.69 194.42 0.25 3834 
3836 2 -21919.7 -7071.42 194.47 0.25 3835 3837 2 -21920 -7069.9 194.75 0.25 3836 3838 2 -21920 -7069.36 194.84 0.25 3837 3839 2 -21920.5 -7067.91 195.13 0.25 3838 3840 2 -21920.6 -7067.65 195.18 0.25 3839 3841 2 -21921.5 -7065 197.88 0.25 3840 3842 2 -21921.7 -7063.23 198.19 0.25 3841 3843 2 -21921.7 -7062.21 201.7 0.25 3842 3844 2 -21921.8 -7061.99 201.74 0.25 3843 3845 2 -21922.1 -7060.83 204.02 0.25 3844 3846 2 -21922.1 -7060.87 204.28 0.25 3845 3847 2 -21922.3 -7058.28 207.92 0.25 3846 3848 2 -21922.3 -7058.03 207.97 0.25 3847 3849 2 -21923.6 -7054.21 212.7 0.25 3848 3850 2 -21926.5 -7052.08 213.32 0.125 3849 3851 2 -21927.5 -7049.65 213.82 0.125 3850 3852 2 -21927.7 -7049.66 213.84 0.125 3851 3853 2 -21927.1 -7048.52 213.97 0.125 3852 3854 2 -21928.7 -7047.06 214.95 0.125 3853 3855 2 -21928.7 -7046.85 215.33 0.125 3854 3856 2 -21929.9 -7045.99 216.94 0.125 3855 3857 2 -21930.8 -7044.84 218.84 0.125 3856 3858 2 -21932.1 -7041.3 217.52 0.125 3857 3859 2 -21933.4 -7040.19 217.86 0.125 3858 3860 2 -21933.7 -7038.55 218.7 0.125 3859 3861 2 -21933.7 -7036.69 219.01 0.125 3860 3862 2 -21934 -7036.75 219.02 0.125 3861 3863 2 -21934.2 -7035.18 219.31 0.125 3862 3864 2 -21934.3 -7034.91 219.36 0.125 3863 3865 2 -21934.3 -7032.84 219.71 0.125 3864 3866 2 -21934.4 -7032.59 219.75 0.125 3865 3867 2 -21935.3 -7031.23 221.87 0.125 3866 3868 2 -21935.3 -7031.21 221.7 0.125 3867 3869 2 -21936.7 -7029.56 221.98 0.125 3868 3870 2 -21937.8 -7026.85 222.53 0.125 3869 3871 2 -21938.3 -7025.63 222.78 0.125 3870 3872 2 -21938.3 -7025.38 222.82 0.125 3871 3873 2 -21937.3 -7023.75 224.92 0.125 3872 3874 2 -21937 -7022.11 225.17 0.125 3873 3875 2 -21937.3 -7019.96 226.86 0.125 3874 3876 2 -21938.1 -7016.74 227.63 0.125 3875 3877 2 -21938.8 -7015.52 227.96 0.125 3876 3878 2 -21938.8 -7015.54 228.1 0.125 3877 3879 2 -21940.2 -7014.34 227.43 0.125 3878 3880 2 -21941.2 -7013.96 227.58 0.125 3879 3881 2 -21942.5 -7012.39 227.97 0.125 3880 3882 2 -21944.4 -7009.8 228.57 0.125 3881 3883 2 -21944.5 -7009.54 228.62 0.125 3882 3884 2 -21945.3 -7007.25 230.17 0.125 3883 3885 2 -21945.6 -7007.29 230.19 0.125 3884 3886 2 -21946.9 -7006.19 230.27 0.125 3885 3887 2 -21948.6 -7004.65 230.68 0.125 3886 3888 2 -21949.3 -7002.12 231.07 0.125 3887 3889 2 -21950.5 -6999.95 226.91 0.125 3888 3890 2 -21950.5 -6999.71 226.95 0.125 3889 3891 2 -21952.3 -6999.42 227.05 0.125 3890 3892 2 -21953.3 -6997.53 225.84 0.125 3891 3893 2 -21953.5 -6995.21 228.05 0.125 3892 3894 2 -21953.5 -6993.9 228.26 0.125 3893 3895 2 -21953.6 -6993.64 228.31 0.125 3894 3896 2 -21953.9 -6991.34 228.73 0.125 3895 3897 2 -21954 -6991.07 228.78 0.125 3896 3898 2 -21954.2 -6989.06 230.9 0.125 3897 3899 2 -21954.2 -6988.8 230.95 0.125 3898 3900 2 -21955.3 -6987.67 231.23 0.125 3899 3901 2 -21955.6 -6987.46 231.3 0.125 3900 3902 2 -21957.8 -6986.28 231.69 0.125 3901 3903 2 -21958.7 -6984.3 232.11 0.125 3902 3904 2 -21958.7 -6983.79 232.2 0.125 3903 3905 2 -21958.5 -6983.5 232.22 0.125 3904 3906 2 -21958.4 -6982.69 232.35 0.125 3905 3907 2 -21958.2 -6982.11 232.42 0.125 3906 3908 2 -21957.6 -6979.41 231.63 0.125 3907 3909 2 -21957.9 -6979.21 231.69 0.125 3908 3910 2 -21958.2 -6977.18 232.06 0.125 3909 3911 2 -21959.8 -6974.19 233.47 0.125 3910 3912 2 -21961.2 -6972.59 234.03 0.125 3911 3913 2 -21964.5 -6971.14 234.95 0.125 3912 3914 2 -21966.3 -6968.88 235.96 0.125 3913 3915 2 -21966.4 -6968.39 236.05 0.125 3914 3916 2 -21968.6 -6966.49 236.92 0.125 3915 3917 2 -21968.8 -6965.45 237.1 0.125 3916 3918 2 -21968.5 -6963.54 237.48 0.125 3917 3919 2 -21968.6 -6963.3 
[Neuron morphology data in SWC-style format: a long run of whitespace-separated records, seven fields per sample — n T x y z R parent (sample index, structure type, x/y/z position, radius, parent sample index). The portion shown here covers samples roughly 3920 to 5700; every sample in this portion is of type 2 (axon in the standard SWC convention), radii range from 0.08 to 0.745, and each parent field points to the preceding sample except at branch points.]
-7328.91 111.8 0.25 5699 5701 2 -21591.3 -7329.44 111.55 0.25 5700 5702 2 -21590.8 -7329.33 111.51 0.25 5701 5703 2 -21590 -7328.89 111.51 0.25 5702 5704 2 -21589.4 -7328.8 111.48 0.25 5703 5705 2 -21589.1 -7329 111.42 0.25 5704 5706 2 -21587.4 -7329.27 111.21 0.25 5705 5707 2 -21586.6 -7329.1 111.16 0.25 5706 5708 2 -21586.3 -7328.74 111.19 0.25 5707 5709 2 -21584.8 -7328.25 111.13 0.25 5708 5710 2 -21584.1 -7328.41 111.05 0.25 5709 5711 2 -21583.3 -7328.49 110.95 0.25 5710 5712 2 -21583 -7328.18 110.98 0.25 5711 5713 2 -21582 -7327.99 110.91 0.25 5712 5714 2 -21580.6 -7327.76 110.82 0.25 5713 5715 2 -21580 -7327.68 110.79 0.25 5714 5716 2 -21578.3 -7326.8 113.6 0.25 5715 5717 2 -21578.2 -7325.95 113.73 0.25 5716 5718 2 -21578.2 -7325.71 113.77 0.25 5717 5719 2 -21577 -7324.96 113.78 0.25 5718 5720 2 -21576.7 -7324.88 113.76 0.25 5719 5721 2 -21574.6 -7323.51 113.8 0.25 5720 5722 2 -21574.4 -7323.2 113.83 0.25 5721 5723 2 -21572.8 -7321.24 113.32 0.25 5722 5724 2 -21573.1 -7318.17 112.68 0.25 5723 5725 2 -21572.8 -7316.75 112.89 0.25 5724 5726 2 -21572.8 -7316.51 112.92 0.25 5725 5727 2 -21572 -7316.1 112.92 0.25 5726 5728 2 -21508.2 -7300.79 129.05 0.25 5727 5729 2 -21508.2 -7300.53 129.09 0.25 5728 5730 2 -21506.3 -7300.14 128.78 0.25 5729 5731 2 -21505.8 -7299.79 128.79 0.25 5730 5732 2 -21504 -7300.27 128.55 0.25 5731 5733 2 -21503.8 -7300.2 128.53 0.25 5732 5734 2 -21502.7 -7299.48 128.55 0.25 5733 5735 2 -21502.2 -7299.39 128.52 0.25 5734 5736 2 -21502.2 -7299.66 128.47 0.25 5735 5737 2 -21500.7 -7299.69 128.33 0.25 5736 5738 2 -21499.7 -7299.5 128.27 0.25 5737 5739 2 -21499.1 -7299.43 128.22 0.25 5738 5740 2 -21498.6 -7298.99 128.25 0.25 5739 5741 2 -21498.3 -7298.96 128.23 0.25 5740 5742 2 -21497.5 -7299.13 128.12 0.25 5741 5743 2 -21496.6 -7298.95 128.07 0.25 5742 5744 2 -21495.5 -7299.03 127.95 0.25 5743 5745 2 -21495.2 -7298.93 127.85 0.25 5744 5746 2 -21494.7 -7299.15 130.92 0.25 5745 5747 2 -21494.3 -7299.22 132.95 0.25 5746 5748 2 -21494.1 -7298.88 132.98 0.25 5747 5749 2 -21493.6 -7298.24 133.05 0.25 5748 5750 2 -21493.1 -7297.9 133.05 0.25 5749 5751 2 -21492.8 -7297.8 132.83 0.25 5750 5752 2 -21492.3 -7297.73 132.63 0.25 5751 5753 2 -21491.7 -7297.62 132.59 0.25 5752 5754 2 -21490.9 -7297.47 132.54 0.25 5753 5755 2 -21489.6 -7296.98 132.56 0.25 5754 5756 2 -21489.1 -7296.58 132.57 0.25 5755 5757 2 -21486.3 -7294.53 132.66 0.25 5756 5758 2 -21485.8 -7294.43 132.63 0.25 5757 5759 2 -21484.7 -7294.24 132.55 0.25 5758 5760 2 -21483 -7293.95 132.45 0.25 5759 5761 2 -21482.8 -7294.14 132.39 0.25 5760 5762 2 -21480.6 -7293.29 132.33 0.25 5761 5763 2 -21479.3 -7292.73 132.3 0.25 5762 5764 2 -21477.1 -7292.52 136.04 0.25 5763 5765 2 -21476.9 -7292.19 136.07 0.25 5764 5766 2 -21476.6 -7292.15 136.05 0.25 5765 5767 2 -21475.5 -7291.73 136.13 0.25 5766 5768 2 -21474.7 -7291.61 136.28 0.25 5767 5769 2 -21473.5 -7290.19 136.99 0.25 5768 5770 2 -21473.3 -7289.86 137.02 0.25 5769 5771 2 -21472 -7289.59 139.63 0.25 5770 5772 2 -21393.8 -7270.14 138.63 0.25 5771 5773 2 -21391.7 -7268.95 135.64 0.25 5772 5774 2 -21390.6 -7268.73 135.58 0.25 5773 5775 2 -21390.3 -7268.68 135.56 0.25 5774 5776 2 -21389.4 -7268.8 135.45 0.25 5775 5777 2 -21388.3 -7268.61 135.38 0.25 5776 5778 2 -21387.2 -7268.94 135.22 0.25 5777 5779 2 -21386.9 -7268.88 135.21 0.25 5778 5780 2 -21385.1 -7268.84 135.05 0.25 5779 5781 2 -21384.6 -7268.77 135.01 0.25 5780 5782 2 -21383 -7268.19 134.96 0.25 5781 5783 2 -21381.9 -7268.26 134.84 0.25 5782 5784 2 -21381.7 -7267.91 134.88 0.25 5783 5785 2 -21381.1 -7267.85 134.83 0.25 
5784 5786 2 -21380.8 -7267.81 134.81 0.25 5785 5787 2 -21380 -7267.68 134.76 0.25 5786 5788 2 -21379.1 -7267.8 134.66 0.25 5787 5789 2 -21378.3 -7267.66 134.61 0.25 5788 5790 2 -21377.5 -7267.18 134.61 0.25 5789 5791 2 -21375.5 -7266.84 135.9 0.25 5790 5792 2 -21374.4 -7266.65 135.83 0.25 5791 5793 2 -21374.2 -7266.35 135.86 0.25 5792 5794 2 -21373.4 -7265.97 135.85 0.25 5793 5795 2 -21372.6 -7265.79 135.8 0.25 5794 5796 2 -21371.4 -7266.35 135.6 0.25 5795 5797 2 -21371.1 -7266.32 135.57 0.25 5796 5798 2 -21370.1 -7265.87 135.55 0.25 5797 5799 2 -21369.2 -7265.45 135.54 0.25 5798 5800 2 -21365.6 -7265.49 138.78 0.25 5799 5801 2 -21365.1 -7265.35 138.75 0.25 5800 5802 2 -21362.3 -7265.12 138.53 0.25 5801 5803 2 -21361.8 -7264.99 138.5 0.25 5802 5804 2 -21361.8 -7264.73 138.54 0.25 5803 5805 2 -21356.3 -7263.75 138.04 0.25 5804 5806 2 -21355.8 -7263.63 138.01 0.25 5805 5807 2 -21354.4 -7263.47 137.91 0.25 5806 5808 2 -21353.6 -7263.27 137.87 0.25 5807 5809 2 -21352.7 -7263.41 137.76 0.25 5808 5810 2 -21352.3 -7263.04 137.78 0.25 5809 5811 2 -21352 -7262.98 137.76 0.25 5810 5812 2 -21350 -7262.9 137.59 0.25 5811 5813 2 -21349.2 -7263 137.49 0.25 5812 5814 2 -21348.9 -7262.94 137.48 0.25 5813 5815 2 -21347.7 -7263.28 137.31 0.25 5814 5816 2 -21347.3 -7264.1 136.2 0.25 5815 5817 2 -21347.2 -7265.13 136.01 0.25 5816 5818 2 -21346.2 -7267.68 135.81 0.25 5817 5819 2 -21345.4 -7268.82 135.55 0.25 5818 5820 2 -21346.1 -7268.79 128.81 0.25 5819 5821 2 -21345.4 -7270.61 127.28 0.25 5820 5822 2 -21345.3 -7270.87 127.23 0.25 5821 5823 2 -21343.5 -7271.57 126.95 0.25 5822 5824 2 -21342.5 -7270.35 127.06 0.25 5823 5825 2 -21342.7 -7268.82 124.53 0.25 5824 5826 2 -21341.4 -7268.31 124.49 0.25 5825 5827 2 -21340.3 -7269.72 124.15 0.25 5826 5828 2 -21340.3 -7269.93 124.12 0.25 5827 5829 2 -21344.5 -7263.18 138.19 0.25 5815 5830 2 -21342.2 -7262.69 136.12 0.25 5829 5831 2 -21341.4 -7262.55 136.06 0.25 5830 5832 2 -21332.4 -7260.45 135.82 0.25 5831 5833 2 -21331.9 -7260.11 135.83 0.25 5832 5834 2 -21331.6 -7260.04 135.81 0.25 5833 5835 2 -21329.3 -7259.1 135.75 0.25 5834 5836 2 -21326.3 -7258.31 135.66 0.25 5835 5837 2 -21325.8 -7257.68 135.72 0.25 5836 5838 2 -21325.1 -7257.76 136.83 0.25 5837 5839 2 -21322.5 -7257.81 137.75 0.25 5838 5840 2 -21321.1 -7257.7 138.53 0.25 5839 5841 2 -21319.1 -7257.87 138.32 0.25 5840 5842 2 -21318.3 -7257.72 138.26 0.25 5841 5843 2 -21316 -7257.35 138.12 0.25 5842 5844 2 -21313.9 -7256.92 137.98 0.25 5843 5845 2 -21308.4 -7255.54 138.37 0.25 5844 5846 2 -21307.6 -7255.41 138.31 0.25 5845 5847 2 -21305.1 -7255.05 138.55 0.25 5846 5848 2 -21304.8 -7255.2 138.5 0.25 5847 5849 2 -21304.5 -7255.45 138.43 0.25 5848 5850 2 -21302.1 -7256.08 138.1 0.25 5849 5851 2 -21301.8 -7255.8 138.12 0.25 5850 5852 2 -21299.8 -7254.87 138.12 0.25 5851 5853 2 -21298.4 -7254.62 138.03 0.25 5852 5854 2 -21296.5 -7254.05 137.98 0.25 5853 5855 2 -21294.3 -7253.38 138.07 0.25 5854 5856 2 -21294.3 -7253.66 138.02 0.25 5855 5857 2 -21292.7 -7252.73 138.51 0.25 5856 5858 2 -21292.8 -7252.44 138.69 0.25 5857 5859 2 -21291.9 -7252.28 139.94 0.25 5858 5860 2 -21291 -7252.41 139.94 0.25 5859 5861 2 -21289.8 -7252.45 139.83 0.25 5860 5862 2 -21289.3 -7252.37 139.79 0.25 5861 5863 2 -21289 -7252.08 139.82 0.25 5862 5864 2 -21288.8 -7252.03 139.8 0.25 5863 5865 2 -21287.8 -7251.58 139.78 0.25 5864 5866 2 -21287.5 -7251.49 139.77 0.25 5865 5867 2 -21286.7 -7251.1 139.76 0.25 5866 5868 2 -21285.7 -7250.39 139.78 0.25 5867 5869 2 -21285.1 -7250.23 139.76 0.25 5868 5870 2 -21282.8 -7250.39 139.52 0.25 5869 5871 2 
-21281.8 -7250.19 139.45 0.25 5870 5872 2 -21281.4 -7250.44 139.38 0.25 5871 5873 2 -21279.8 -7249.85 139.32 0.25 5872 5874 2 -21278.8 -7249.41 139.3 0.25 5873 5875 2 -21278.5 -7249.12 139.33 0.25 5874 5876 2 -21277.9 -7248.22 139.41 0.25 5875 5877 2 -21277.3 -7248.06 139.39 0.25 5876 5878 2 -21277.1 -7247.84 139.4 0.25 5877 5879 2 -21275.2 -7247.64 141.77 0.25 5878 5880 2 -21274.7 -7247.81 141.69 0.25 5879 5881 2 -21272.4 -7247.68 141.51 0.25 5880 5882 2 -21272.1 -7247.64 141.49 0.25 5881 5883 2 -21270.9 -7246.59 141.54 0.25 5882 5884 2 -21270.7 -7246.34 141.56 0.25 5883 5885 2 -21269.9 -7245.9 141.57 0.25 5884 5886 2 -21269.4 -7245.57 141.57 0.25 5885 5887 2 -21266.6 -7245.34 141.35 0.25 5886 5888 2 -21265 -7244.77 141.29 0.25 5887 5889 2 -21264.7 -7244.72 141.27 0.25 5888 5890 2 -21260.8 -7243.76 142.69 0.25 5889 5891 2 -21260 -7243.88 142.59 0.25 5890 5892 2 -21258.6 -7243.63 142.5 0.25 5891 5893 2 -21258.3 -7243.61 142.48 0.25 5892 5894 2 -21257.2 -7243.65 142.36 0.25 5893 5895 2 -21256.3 -7243.76 142.27 0.25 5894 5896 2 -21259.2 -7247.37 141.93 0.125 5895 5897 2 -21256.6 -7247.32 141.7 0.125 5896 5898 2 -21254.7 -7247.05 141.57 0.125 5897 5899 2 -21254.1 -7246.71 141.57 0.125 5898 5900 2 -21251.8 -7246.67 141.37 0.125 5899 5901 2 -21251.6 -7246.64 141.35 0.125 5900 5902 2 -21249.3 -7246.83 141.1 0.125 5901 5903 2 -21248.8 -7246.77 141.06 0.125 5902 5904 2 -21247.8 -7247.72 140.81 0.125 5903 5905 2 -21247.2 -7247.6 140.78 0.125 5904 5906 2 -21246.1 -7247.17 140.75 0.125 5905 5907 2 -21245.6 -7246.87 140.75 0.125 5906 5908 2 -21245.1 -7246.79 140.71 0.125 5907 5909 2 -21244.2 -7246.68 140.65 0.125 5908 5910 2 -21243.7 -7246.32 140.66 0.125 5909 5911 2 -21242.3 -7246.41 140.51 0.125 5910 5912 2 -21242 -7246.36 140.49 0.125 5911 5913 2 -21241.3 -7246.1 142.66 0.125 5912 5914 2 -21241.4 -7245.77 142.28 0.125 5913 5915 2 -21239.9 -7246.37 142.05 0.125 5914 5916 2 -21238.2 -7246.14 141.93 0.125 5915 5917 2 -21238 -7245.88 141.94 0.125 5916 5918 2 -21236.9 -7244.92 142.01 0.125 5917 5919 2 -21237 -7244.63 142.06 0.125 5918 5920 2 -21234.8 -7244.11 141.94 0.125 5919 5921 2 -21233.2 -7243.31 141.93 0.125 5920 5922 2 -21233 -7243.02 141.95 0.125 5921 5923 2 -21232.7 -7242.93 141.94 0.125 5922 5924 2 -21230 -7242.06 141.83 0.125 5923 5925 2 -21227.3 -7241.45 140.49 0.125 5924 5926 2 -21227.3 -7241.36 139.96 0.125 5925 5927 2 -21225.3 -7241.4 139.77 0.125 5926 5928 2 -21224.5 -7241.27 139.71 0.125 5927 5929 2 -21222.5 -7241.04 139.89 0.125 5928 5930 2 -21222.5 -7241.15 140.53 0.125 5929 5931 2 -21220.8 -7240.65 140.53 0.125 5930 5932 2 -21220.6 -7240.61 140.62 0.125 5931 5933 2 -21218.9 -7240.15 140.54 0.125 5932 5934 2 -21218.3 -7240.09 140.5 0.125 5933 5935 2 -21217.8 -7239.76 140.5 0.125 5934 5936 2 -21216.7 -7239.85 140.38 0.125 5935 5937 2 -21216.2 -7239.5 140.39 0.125 5936 5938 2 -21215.9 -7239.46 140.37 0.125 5937 5939 2 -21214.8 -7238.79 140.39 0.125 5938 5940 2 -21214.6 -7238.28 140.45 0.125 5939 5941 2 -21214.3 -7238.18 140.44 0.125 5940 5942 2 -21213.6 -7237.31 140.51 0.125 5941 5943 2 -21213.1 -7237.01 140.51 0.125 5942 5944 2 -21212.8 -7236.92 140.51 0.125 5943 5945 2 -21212 -7236.79 140.45 0.125 5944 5946 2 -21211.1 -7236.72 140.38 0.125 5945 5947 2 -21210.3 -7236.57 140.33 0.125 5946 5948 2 -21210 -7236.55 140.3 0.125 5947 5949 2 -21208.3 -7236.31 140.19 0.125 5948 5950 2 -21208.1 -7236 140.22 0.125 5949 5951 2 -21206.5 -7235.55 140.14 0.125 5950 5952 2 -21205.9 -7235.15 140.15 0.125 5951 5953 2 -21205.6 -7235.16 140.12 0.125 5952 5954 2 -21205.2 -7234.83 140.13 0.125 5953 5955 2 
-21204.3 -7234.68 140.08 0.125 5954 5956 2 -21203.2 -7234.56 139.99 0.125 5955 5957 2 -21202.6 -7234.72 139.91 0.125 5956 5958 2 -21202.3 -7234.67 139.89 0.125 5957 5959 2 -21198.3 -7234.82 140.33 0.125 5958 5960 2 -21197 -7235.66 140.08 0.125 5959 5961 2 -21196.7 -7236.16 139.96 0.125 5960 5962 2 -21196.5 -7236.16 139.94 0.125 5961 5963 2 -21195.8 -7236.3 139.86 0.125 5962 5964 2 -21194.7 -7236.14 139.78 0.125 5963 5965 2 -21193.1 -7235.67 139.7 0.125 5964 5966 2 -21192.3 -7235.29 139.7 0.125 5965 5967 2 -21191.3 -7234.62 139.71 0.125 5966 5968 2 -21190.5 -7234.23 139.7 0.125 5967 5969 2 -21189.3 -7233.7 140.74 0.125 5968 5970 2 -21187.5 -7232.68 140.74 0.125 5969 5971 2 -21186.8 -7230.95 140.96 0.125 5970 5972 2 -21185.7 -7230.6 140.92 0.125 5971 5973 2 -21185.5 -7230.29 140.94 0.125 5972 5974 2 -21184.9 -7229.91 140.96 0.125 5973 5975 2 -21183.9 -7229.24 140.97 0.125 5974 5976 2 -21182.9 -7228.37 141.02 0.125 5975 5977 2 -21181.6 -7227.61 141.02 0.125 5976 5978 2 -21181 -7227.52 140.99 0.125 5977 5979 2 -21179.9 -7226.88 140.99 0.125 5978 5980 2 -21179 -7225.89 141.06 0.125 5979 5981 2 -21178.7 -7225.91 141.03 0.125 5980 5982 2 -21176.7 -7223.48 141.25 0.125 5981 5983 2 -21174.2 -7223.01 145.04 0.125 5982 5984 2 -21173.3 -7224.13 147.28 0.125 5983 5985 2 -21172.8 -7224.32 147.2 0.125 5984 5986 2 -21172.7 -7224.57 147.15 0.125 5985 5987 2 -21171.6 -7224.82 146.02 0.125 5986 5988 2 -21171.3 -7225.01 145.96 0.125 5987 5989 2 -21167.6 -7225.04 145.61 0.125 5988 5990 2 -21166.2 -7224.84 145.52 0.125 5989 5991 2 -21164.6 -7224.32 145.45 0.125 5990 5992 2 -21164 -7224.02 145.45 0.125 5991 5993 2 -21161.9 -7223.2 145.38 0.125 5992 5994 2 -21160.8 -7223.04 145.3 0.125 5993 5995 2 -21159.3 -7223.07 145.16 0.125 5994 5996 2 -21158.5 -7223.2 145.06 0.125 5995 5997 2 -21157.4 -7223.33 144.94 0.125 5996 5998 2 -21156.8 -7223.23 144.9 0.125 5997 5999 2 -21155.1 -7223.3 144.73 0.125 5998 6000 2 -21154 -7223.1 144.66 0.125 5999 6001 2 -21153.4 -7223.08 144.61 0.125 6000 6002 2 -21152.3 -7222.89 144.54 0.125 6001 6003 2 -21150.9 -7223.22 144.34 0.125 6002 6004 2 -21150 -7223.35 144.24 0.125 6003 6005 2 -21148.7 -7224.51 143.93 0.125 6004 6006 2 -21148.1 -7224.65 143.85 0.125 6005 6007 2 -21147.6 -7224.6 143.81 0.125 6006 6008 2 -21146.7 -7224.5 143.74 0.125 6007 6009 2 -21146.2 -7224.43 143.7 0.125 6008 6010 2 -21144.7 -7223.92 145.06 0.125 6009 6011 2 -21142.3 -7222.94 145.87 0.125 6010 6012 2 -21142 -7222.92 145.85 0.125 6011 6013 2 -21140 -7222.64 145.71 0.125 6012 6014 2 -21139.7 -7222.58 145.69 0.125 6013 6015 2 -21138.4 -7221.74 146.39 0.125 6014 6016 2 -21138.1 -7221.43 146.42 0.125 6015 6017 2 -21136.5 -7220.91 146.17 0.125 6016 6018 2 -21136.5 -7220.87 145.96 0.125 6017 6019 2 -21135.6 -7221.29 145.8 0.125 6018 6020 2 -21133.4 -7220.78 146.15 0.125 6019 6021 2 -21133.2 -7221.08 147.9 0.125 6020 6022 2 -21131.5 -7220.72 148.73 0.125 6021 6023 2 -21130.4 -7221.09 148.56 0.125 6022 6024 2 -21129.3 -7220.65 148.53 0.125 6023 6025 2 -21128.4 -7220.38 149.02 0.125 6024 6026 2 -21128 -7220.39 149.43 0.125 6025 6027 2 -21126.3 -7220.79 149.74 0.125 6026 6028 2 -21125.9 -7220.83 150.28 0.125 6027 6029 2 -21123.6 -7221.41 150.74 0.125 6028 6030 2 -21123.3 -7221.64 150.67 0.125 6029 6031 2 -21121.4 -7222.71 150.33 0.125 6030 6032 2 -21121.2 -7222.6 149.71 0.125 6031 6033 2 -21119.1 -7222.1 148.47 0.125 6032 6034 2 -21118.8 -7221.77 148.46 0.125 6033 6035 2 -21117.7 -7221.85 148.17 0.125 6034 6036 2 -21116.6 -7221.68 147.9 0.125 6035 6037 2 -21116.6 -7221.95 147.85 0.125 6036 6038 2 -21114.5 -7222.58 146.75 
0.125 6037 6039 2 -21113.5 -7222.35 146.2 0.125 6038 6040 2 -21111.5 -7221.04 144.89 0.125 6039 6041 2 -21078.3 -7216.84 141.95 0.125 6040 6042 2 -21074.8 -7216.07 141.65 0.125 6041 6043 2 -21074.4 -7216.38 142.08 0.125 6042 6044 2 -21071.9 -7216.27 141.91 0.125 6043 6045 2 -21071.8 -7216.56 141.86 0.125 6044 6046 2 -21070.7 -7216.64 141.74 0.125 6045 6047 2 -21069.3 -7216.17 141.69 0.125 6046 6048 2 -21068.8 -7216.1 141.65 0.125 6047 6049 2 -21068.5 -7216.34 141.58 0.125 6048 6050 2 -21068 -7216.01 141.59 0.125 6049 6051 2 -21067.7 -7215.72 141.61 0.125 6050 6052 2 -21067.5 -7215.42 141.64 0.125 6051 6053 2 -21066.3 -7215.26 141.56 0.125 6052 6054 2 -21065.8 -7215.17 141.52 0.125 6053 6055 2 -21064.9 -7215.33 141.42 0.125 6054 6056 2 -21064.6 -7215.3 141.4 0.125 6055 6057 2 -21063.6 -7214.58 141.42 0.125 6056 6058 2 -21063.4 -7214.27 141.45 0.125 6057 6059 2 -21062.2 -7214.67 141.27 0.125 6058 6060 2 -21061.9 -7214.64 141.25 0.125 6059 6061 2 -21061.4 -7214.03 141.3 0.125 6060 6062 2 -21060.1 -7213.34 141.29 0.125 6061 6063 2 -21059.2 -7213.19 141.24 0.125 6062 6064 2 -21058.7 -7213.16 141.19 0.125 6063 6065 2 -21058.2 -7212.83 141.2 0.125 6064 6066 2 -21057.1 -7212.42 141.17 0.125 6065 6067 2 -21056.2 -7212.29 141.11 0.125 6066 6068 2 -21054 -7211.85 141.91 0.125 6067 6069 2 -21053 -7211.95 143.28 0.125 6068 6070 2 -21052.1 -7212.09 143.19 0.125 6069 6071 2 -21051.6 -7212.03 143.14 0.125 6070 6072 2 -21050.7 -7211.88 143.09 0.125 6071 6073 2 -21049.1 -7211.44 143.02 0.125 6072 6074 2 -21048.5 -7211.37 142.98 0.125 6073 6075 2 -21047.7 -7210.95 142.97 0.125 6074 6076 2 -21046.9 -7210.58 142.99 0.125 6075 6077 2 -21045.2 -7210.88 142.78 0.125 6076 6078 2 -21044.5 -7209.71 142.91 0.125 6077 6079 2 -21044 -7209.38 142.91 0.125 6078 6080 2 -21042.1 -7210.44 142.56 0.125 6079 6081 2 -21039.9 -7210.16 142.41 0.125 6080 6082 2 -21039.3 -7209.8 142.41 0.125 6081 6083 2 -21037.1 -7209.77 142.21 0.125 6082 6084 2 -21036 -7209.85 142.09 0.125 6083 6085 2 -21034.5 -7210.21 141.89 0.125 6084 6086 2 -21033.1 -7210.93 139.67 0.125 6085 6087 2 -21030.8 -7211.62 138.74 0.125 6086 6088 2 -21029.7 -7211.46 138.66 0.125 6087 6089 2 -21029.2 -7211.39 138.62 0.125 6088 6090 2 -21027.8 -7211.17 138.53 0.125 6089 6091 2 -21023 -7210.75 137.8 0.125 6090 6092 2 -21021.9 -7210.57 137.79 0.125 6091 6093 2 -21019.7 -7210.28 137.63 0.125 6092 6094 2 -21018.3 -7209.86 137.71 0.125 6093 6095 2 -21017.8 -7209.75 137.68 0.125 6094 6096 2 -20988.9 -7218.75 130.67 0.25 6095 6097 2 -20987.6 -7220.41 130.27 0.25 6096 6098 2 -20986 -7220.72 130.08 0.25 6097 6099 2 -20983.7 -7221.48 129.73 0.25 6098 6100 2 -20982.9 -7221.36 129.67 0.25 6099 6101 2 -20980.5 -7221.35 131.47 0.25 6100 6102 2 -20979.9 -7221.26 131.43 0.25 6101 6103 2 -20978.9 -7220.31 131.28 0.25 6102 6104 2 -20977.3 -7220.23 130.57 0.25 6103 6105 2 -20977 -7220.22 130.54 0.25 6104 6106 2 -20974.4 -7220.67 133.56 0.25 6105 6107 2 -20974.2 -7220.62 133.36 0.25 6106 6108 2 -20971.3 -7220.8 133.3 0.25 6107 6109 2 -20971 -7220.75 133.28 0.25 6108 6110 2 -20968.4 -7220.86 134.15 0.25 6109 6111 2 -20967.8 -7220.78 134.22 0.25 6110 6112 2 -20965.8 -7221.35 134.08 0.25 6111 6113 2 -20965.2 -7221.22 134.11 0.25 6112 6114 2 -20965.2 -7221.52 134.05 0.25 6113 6115 2 -20963.6 -7222.61 133.82 0.25 6114 6116 2 -20962.8 -7222.5 133.74 0.25 6115 6117 2 -20962.5 -7222.21 133.76 0.25 6116 6118 2 -20962.3 -7222.05 133.03 0.25 6117 6119 2 -20962.1 -7221.75 133.06 0.25 6118 6120 2 -20962.1 -7221.25 130.14 0.25 6119 6121 2 -20959 -7221.77 132.32 0.25 6120 6122 2 -20958.4 -7221.44 132.32 
0.25 6121 6123 2 -20956.2 -7221.12 132.16 0.25 6122 6124 2 -20955.2 -7220.71 132.13 0.25 6123 6125 2 -20954.6 -7220.38 132.14 0.25 6124 6126 2 -20954.1 -7220.29 132.1 0.25 6125 6127 2 -20952.1 -7220.5 131.51 0.25 6126 6128 2 -20950.6 -7220.81 131.32 0.25 6127 6129 2 -20950.4 -7220.47 131.35 0.25 6128 6130 2 -20948.8 -7219.54 131.36 0.25 6129 6131 2 -20948.5 -7219.18 131.39 0.25 6130 6132 2 -20948.4 -7218.63 131.47 0.25 6131 6133 2 -20947.5 -7218.52 131.41 0.25 6132 6134 2 -20946.1 -7218.35 131.3 0.25 6133 6135 2 -20945 -7218.21 131.22 0.25 6134 6136 2 -20944 -7217.76 131.2 0.25 6135 6137 2 -20943.4 -7217.69 131.16 0.25 6136 6138 2 -20941.6 -7218.93 130.12 0.25 6137 6139 2 -20941.3 -7218.84 130.11 0.25 6138 6140 2 -20940.9 -7219.37 131.79 0.25 6139 6141 2 -20940.3 -7219.33 131.74 0.25 6140 6142 2 -20938.8 -7220.29 133.63 0.25 6141 6143 2 -20938.6 -7220.25 133.61 0.25 6142 6144 2 -20937.8 -7219.9 133.83 0.25 6143 6145 2 -20937.5 -7219.85 133.81 0.25 6144 6146 2 -20936.7 -7219.73 133.76 0.25 6145 6147 2 -20936.3 -7219.75 133.72 0.25 6146 6148 2 -20934.8 -7220.06 135.08 0.25 6147 6149 2 -20934.5 -7219.98 135.07 0.25 6148 6150 2 -20933.6 -7220.44 135.08 0.25 6149 6151 2 -20932.7 -7220.59 134.97 0.25 6150 6152 2 -20931.6 -7220.16 134.99 0.25 6151 6153 2 -20931.1 -7220.08 134.96 0.25 6152 6154 2 -20929.9 -7220.18 134.84 0.25 6153 6155 2 -20929.4 -7220.11 134.9 0.25 6154 6156 2 -20927.7 -7220.16 134.74 0.25 6155 6157 2 -20927.3 -7220.34 134.67 0.25 6156 6158 2 -20927.1 -7220.33 134.65 0.25 6157 6159 2 -20926 -7219.91 134.62 0.25 6158 6160 2 -20924.9 -7219.77 134.54 0.25 6159 6161 2 -20923.9 -7218.55 134.65 0.25 6160 6162 2 -20923.7 -7218.22 134.68 0.25 6161 6163 2 -20922.1 -7218.62 136.39 0.25 6162 6164 2 -20921.8 -7218.59 136.37 0.25 6163 6165 2 -20919.6 -7218.86 134.98 0.25 6164 6166 2 -20919.3 -7218.85 134.96 0.25 6165 6167 2 -20917.6 -7219.12 134.75 0.25 6166 6168 2 -20917 -7219.08 134.7 0.25 6167 6169 2 -20915.3 -7219.09 134.54 0.25 6168 6170 2 -20915 -7219.05 134.52 0.25 6169 6171 2 -20914.5 -7218.47 134.57 0.25 6170 6172 2 -20913.7 -7218.06 134.56 0.25 6171 6173 2 -20912.9 -7217.39 134.6 0.25 6172 6174 2 -20911.8 -7217.27 134.52 0.125 6173 6175 2 -20911 -7217.44 134.41 0.125 6174 6176 2 -20910.7 -7217.39 134.39 0.125 6175 6177 2 -20909.2 -7218.23 134.11 0.125 6176 6178 2 -20907.9 -7218.51 134.79 0.125 6177 6179 2 -20907.3 -7218.44 135.12 0.125 6178 6180 2 -20907.3 -7218.47 135.3 0.125 6179 6181 2 -20907 -7218.42 135.28 0.125 6180 6182 2 -20906.7 -7218.11 136.78 0.125 6181 6183 2 -20906.4 -7218.14 136.74 0.125 6182 6184 2 -20905 -7217.92 136.83 0.125 6183 6185 2 -20904.8 -7217.6 136.87 0.125 6184 6186 2 -20904.3 -7217.56 136.82 0.125 6185 6187 2 -20903.7 -7217.72 136.74 0.125 6186 6188 2 -20903.1 -7217.62 136.7 0.125 6187 6189 2 -20902.8 -7217.59 136.68 0.125 6188 6190 2 -20902.4 -7218.34 136.52 0.125 6189 6191 2 -20901.9 -7218.26 136.48 0.125 6190 6192 2 -20901.3 -7218.19 136.44 0.125 6191 6193 2 -20901 -7218.12 136.43 0.125 6192 6194 2 -20900.2 -7218.26 136.32 0.125 6193 6195 2 -20899.3 -7218.19 136.27 0.125 6194 6196 2 -20898.8 -7218.07 136.24 0.125 6195 6197 2 -20898.5 -7217.79 136.26 0.125 6196 6198 2 -20898.3 -7217.74 136.24 0.125 6197 6199 2 -20898 -7217.74 136.21 0.125 6198 6200 2 -20897.4 -7218.16 136.09 0.125 6199 6201 2 -20896.8 -7217.84 136.09 0.125 6200 6202 2 -20895.2 -7217.35 136.02 0.125 6201 6203 2 -20894 -7217.21 135.93 0.125 6202 6204 2 -20894 -7217.41 135.9 0.125 6203 6205 2 -20893.1 -7217.86 135.74 0.125 6204 6206 2 -20892.8 -7218.05 135.68 0.125 6205 6207 2 -20892.5 
-7218.02 135.66 0.125 6206 6208 2 -20892.4 -7218.2 136.75 0.125 6207 6209 2 -20892.1 -7218.49 137.02 0.125 6208 6210 2 -20891.8 -7218.45 137 0.125 6209 6211 2 -20890.4 -7218.22 136.9 0.125 6210 6212 2 -20890.1 -7218.2 136.88 0.125 6211 6213 2 -20888.9 -7219.08 136.62 0.125 6212 6214 2 -20888.6 -7218.79 136.64 0.125 6213 6215 2 -20887.4 -7218.36 136.59 0.125 6214 6216 2 -20886.8 -7218 136.6 0.125 6215 6217 2 -20886.8 -7217.48 136.69 0.125 6216 6218 2 -20886.3 -7217.43 136.65 0.125 6217 6219 2 -20886 -7217.39 136.63 0.125 6218 6220 2 -20884.8 -7217.07 137.4 0.125 6219 6221 2 -20884.3 -7216.84 138.13 0.125 6220 6222 2 -20883.2 -7216.44 138.13 0.125 6221 6223 2 -20882.7 -7216.13 138.13 0.125 6222 6224 2 -20882.7 -7215.86 138.18 0.125 6223 6225 2 -20879.1 -7216.28 140.17 0.125 6224 6226 2 -20879 -7216.66 140.74 0.125 6225 6227 2 -20875.9 -7216.15 141.58 0.125 6226 6228 2 -20875.6 -7216.11 141.56 0.125 6227 6229 2 -20870.3 -7215.19 141.81 0.125 6228 6230 2 -20870 -7214.9 141.83 0.125 6229 6231 2 -20864.5 -7214.37 144.42 0.125 6230 6232 2 -20864.2 -7214.34 144.4 0.125 6231 6233 2 -20860.8 -7214.63 144.03 0.125 6232 6234 2 -20859.7 -7214.52 143.95 0.125 6233 6235 2 -20856.9 -7214.39 143.7 0.125 6234 6236 2 -20856.7 -7214.09 143.73 0.125 6235 6237 2 -20855.2 -7213.88 143.63 0.125 6236 6238 2 -20852.2 -7212.9 143.39 0.125 6237 6239 2 -20852 -7212.87 143.37 0.125 6238 6240 2 -20848.2 -7212.9 143.01 0.125 6239 6241 2 -20847.4 -7213.05 142.91 0.125 6240 6242 2 -20846 -7212.81 142.82 0.125 6241 6243 2 -20846 -7212.56 142.86 0.125 6242 6244 2 -20843.2 -7211.41 144.45 0.125 6243 6245 2 -20842.7 -7211.04 144.46 0.125 6244 6246 2 -20841.5 -7211.12 145.51 0.125 6245 6247 2 -20841 -7210.75 145.52 0.125 6246 6248 2 -20841 -7210.51 145.56 0.125 6247 6249 2 -20839.3 -7210.14 146.09 0.125 6248 6250 2 -20838.5 -7210 146.03 0.125 6249 6251 2 -20835.4 -7210.61 143.95 0.125 6250 6252 2 -20834.8 -7210.51 143.91 0.125 6251 6253 2 -20830.1 -7209.89 143.57 0.125 6252 6254 2 -20829.8 -7209.84 143.55 0.125 6253 6255 2 -20826.9 -7208.87 143.44 0.125 6254 6256 2 -20826.6 -7208.33 143.51 0.125 6255 6257 2 -20826.1 -7207.98 143.51 0.125 6256 6258 2 -20823.2 -7206.77 143.44 0.125 6257 6259 2 -20823 -7205.97 143.55 0.125 6258 6260 2 -20816.6 -7205.15 146.55 0.125 6259 6261 2 -20816.3 -7205.11 146.53 0.125 6260 6262 2 -20812.2 -7204.02 146.33 0.125 6261 6263 2 -20811.7 -7203.66 146.34 0.125 6262 6264 2 -20807.2 -7203.88 149.21 0.125 6263 6265 2 -20806 -7201.85 149.44 0.125 6264 6266 2 -20803.8 -7200.19 152.42 0.125 6265 6267 2 -20801 -7199.79 152.22 0.125 6266 6268 2 -20800.7 -7199.5 152.24 0.125 6267 6269 2 -20799.1 -7198.99 152.18 0.125 6268 6270 2 -20793.8 -7197.41 153.03 0.125 6269 6271 2 -20788.8 -7197.33 156.23 0.125 6270 6272 2 -20784.8 -7195.88 155.63 0.125 6271 6273 2 -20782.2 -7195.45 156.46 0.125 6272 6274 2 -20781.9 -7195.41 156.44 0.125 6273 6275 2 -20780 -7194.3 156.45 0.125 6274 6276 2 -20778.1 -7193.12 157.03 0.125 6275 6277 2 -20776.5 -7192.87 156.91 0.125 6276 6278 2 -20776 -7193.07 156.83 0.125 6277 6279 2 -20772.4 -7193.91 156.36 0.125 6278 6280 2 -20771.8 -7193.84 156.32 0.125 6279 6281 2 -20771.6 -7194.1 156.25 0.125 6280 6282 2 -20768.1 -7194.64 155.83 0.125 6281 6283 2 -20767.8 -7194.31 155.86 0.125 6282 6284 2 -20767.6 -7194.31 155.84 0.125 6283 6285 2 -20765.9 -7193.56 155.81 0.125 6284 6286 2 -20765.7 -7193.27 155.84 0.125 6285 6287 2 -20765.4 -7193.19 155.82 0.125 6286 6288 2 -20764.4 -7192.49 155.84 0.125 6287 6289 2 -20763.5 -7192.4 155.78 0.125 6288 6290 2 -20761.3 -7192.13 155.61 0.125 6289 6291 2 
-20760 -7191.89 155.53 0.125 6290 6292 2 -20756.5 -7191.68 155.24 0.125 6291 6293 2 -20756.6 -7191.45 155.28 0.125 6292 6294 2 -20754.2 -7190.35 155.57 0.125 6293 6295 2 -20752.9 -7190.51 157.43 0.125 6294 6296 2 -20752.5 -7190.5 157.4 0.125 6295 6297 2 -20749.8 -7189.99 157.94 0.125 6296 6298 2 -20749.8 -7189.69 157.99 0.125 6297 6299 2 -20748 -7188.62 157.99 0.125 6298 6300 2 -20747.4 -7188.29 158 0.125 6299 6301 2 -20747.1 -7188.24 157.98 0.125 6300 6302 2 -20745.6 -7187 158.05 0.125 6301 6303 2 -20745.1 -7186.41 158.1 0.125 6302 6304 2 -20744.1 -7185.45 158.16 0.125 6303 6305 2 -20743.9 -7185.17 158.18 0.125 6304 6306 2 -20742.1 -7184.12 156.51 0.125 6305 6307 2 -20741.1 -7183.41 156.53 0.125 6306 6308 2 -20740.8 -7183.37 156.51 0.125 6307 6309 2 -20740.2 -7181.71 156.73 0.125 6308 6310 2 -20739.9 -7181.41 156.75 0.125 6309 6311 2 -20738.4 -7180.14 156.82 0.125 6310 6312 2 -20738.2 -7179.3 156.94 0.125 6311 6313 2 -20732.9 -7175.22 156.01 0.125 6312 6314 2 -20732.9 -7174.92 156.06 0.125 6313 6315 2 -20730.7 -7174.61 155.9 0.125 6314 6316 2 -20730.1 -7174.51 155.87 0.125 6315 6317 2 -20726.3 -7173.47 155.68 0.125 6316 6318 2 -20724.8 -7172.04 154.75 0.125 6317 6319 2 -20724.8 -7172.11 155.14 0.125 6318 6320 2 -20723.8 -7170.62 155.3 0.125 6319 6321 2 -20720 -7169.68 155.43 0.125 6320 6322 2 -20719.7 -7169.66 155.59 0.125 6321 6323 2 -20713.9 -7166.26 156.09 0.125 6322 6324 2 -20713.9 -7166.01 156.13 0.125 6323 6325 2 -20709.3 -7165.52 156.62 0.125 6324 6326 2 -20707.5 -7164.31 157.33 0.125 6325 6327 2 -20705.1 -7162.95 157.81 0.125 6326 6328 2 -20704.6 -7162.67 157.86 0.125 6327 6329 2 -20701.9 -7161.94 160.2 0.125 6328 6330 2 -20701.8 -7162.23 160.15 0.125 6329 6331 2 -20699.9 -7161.94 160.01 0.125 6330 6332 2 -20699.1 -7161.8 159.96 0.125 6331 6333 2 -20694.1 -7160.56 159.7 0.125 6332 6334 2 -20689 -7157.3 158.48 0.125 6333 6335 2 -20686.1 -7157.3 159.28 0.125 6334 6336 2 -20684.1 -7157.27 159.1 0.125 6335 6337 2 -20683.3 -7157.13 159.04 0.125 6336 6338 2 -20682.2 -7156.27 159.09 0.125 6337 6339 2 -20681.8 -7155.69 159.14 0.125 6338 6340 2 -20681.8 -7155.38 159.19 0.125 6339 6341 2 -20679.5 -7152.17 161.14 0.125 6340 6342 2 -20676.2 -7149.61 164.43 0.125 6341 6343 2 -20675.6 -7149.57 164.43 0.125 6342 6344 2 -20673.7 -7149.26 164.39 0.125 6343 6345 2 -20673.4 -7149.26 164.41 0.125 6344 6346 2 -20670.7 -7148.64 164.3 0.125 6345 6347 2 -20670.1 -7148.55 164.3 0.125 6346 6348 2 -20665.7 -7147.66 164.12 0.125 6347 6349 2 -20664.9 -7147.33 164.15 0.125 6348 6350 2 -20663.3 -7146.31 164.21 0.125 6349 6351 2 -20661.9 -7146.11 164.15 0.125 6350 6352 2 -20661.6 -7146.31 164.09 0.125 6351 6353 2 -20655.1 -7143.67 164.23 0.125 6352 6354 2 -20648.8 -7142.33 167.61 0.125 6353 6355 2 -20649.2 -7136.31 168.65 0.125 6354 6356 2 -20639.9 -7133.74 170.14 0.125 6355 6357 2 -20635.9 -7132.83 171.91 0.125 6356 6358 2 -20632.7 -7131.74 172.53 0.125 6357 6359 2 -20632.5 -7131.7 172.51 0.125 6358 6360 2 -20626.7 -7129.41 174.78 0.125 6359 6361 2 -20621.4 -7128.39 176.03 0.125 6360 6362 2 -20621.4 -7128.14 176.07 0.125 6361 6363 2 -20618.7 -7126.3 176.12 0.125 6362 6364 2 -20618.5 -7125.97 176.18 0.125 6363 6365 2 -20618.2 -7125.66 176.2 0.125 6364 6366 2 -20615.7 -7125 176.34 0.125 6365 6367 2 -20615.8 -7124.7 176.4 0.125 6366 6368 2 -20613.8 -7124.64 176.22 0.125 6367 6369 2 -20613.6 -7124.23 176.27 0.125 6368 6370 2 -20612 -7123.17 176.17 0.125 6369 6371 2 -20611.6 -7122.79 176.19 0.125 6370 6372 2 -20611.1 -7122.41 176.21 0.125 6371 6373 2 -20609.2 -7121.77 176.14 0.125 6372 6374 2 -20609 -7121.47 176.16 
0.125 6373 6375 2 -20607.9 -7121.04 176.14 0.125 6374 6376 2 -20607.4 -7120.88 176.11 0.125 6375 6377 2 -20603 -7117.4 179.61 0.125 6376 6378 2 -20601.2 -7117.91 179.36 0.125 6377 6379 2 -20600.9 -7117.57 179.39 0.125 6378 6380 2 -20599.7 -7116.55 179.45 0.125 6379 6381 2 -20599.4 -7116.51 179.43 0.125 6380 6382 2 -20598.9 -7116.14 179.44 0.125 6381 6383 2 -20598.5 -7115.77 179.46 0.125 6382 6384 2 -20598.2 -7115.71 179.44 0.125 6383 6385 2 -20597.9 -7115.66 179.42 0.125 6384 6386 2 -20595.4 -7114.95 179.52 0.125 6385 6387 2 -20593.5 -7114.39 180 0.125 6386 6388 2 -20593.2 -7114.36 179.98 0.125 6387 6389 2 -20591.8 -7113.97 180.72 0.125 6388 6390 2 -20591.5 -7113.71 181.26 0.125 6389 6391 2 -20589.3 -7113.33 181.12 0.125 6390 6392 2 -20586.9 -7111.54 180.08 0.125 6391 6393 2 -20586.7 -7111.22 180.11 0.125 6392 6394 2 -20585 -7111.19 179.96 0.125 6393 6395 2 -20584.5 -7111.06 179.93 0.125 6394 6396 2 -20583.2 -7111.63 179.72 0.125 6395 6397 2 -20582.6 -7111.53 179.68 0.125 6396 6398 2 -20581.8 -7111.37 179.63 0.125 6397 6399 2 -20581.3 -7110.78 179.69 0.125 6398 6400 2 -20580.6 -7110.06 179.74 0.125 6399 6401 2 -20580.4 -7109.73 179.77 0.125 6400 6402 2 -20578.5 -7109.38 179.65 0.125 6401 6403 2 -20577.9 -7109.26 179.62 0.125 6402 6404 2 -20578 -7109 179.67 0.125 6403 6405 2 -20575.7 -7108.18 181.99 0.125 6404 6406 2 -20574 -7107.87 181.89 0.125 6405 6407 2 -20572.6 -7107.87 181.76 0.125 6406 6408 2 -20571.8 -7107.72 181.7 0.125 6407 6409 2 -20570.4 -7107.14 181.68 0.125 6408 6410 2 -20569.6 -7106.96 181.63 0.125 6409 6411 2 -20569.7 -7106.72 181.67 0.125 6410 6412 2 -20566.5 -7105.85 181.55 0.125 6411 6413 2 -20566.2 -7105.78 181.53 0.125 6412 6414 2 -20564.7 -7105.81 181.39 0.125 6413 6415 2 -20564.4 -7105.71 181.38 0.125 6414 6416 2 -20563.3 -7105.52 181.31 0.125 6415 6417 2 -20562.3 -7105.31 181.25 0.125 6416 6418 2 -20562 -7104.99 181.28 0.125 6417 6419 2 -20558.7 -7104.74 184.84 0.125 6418 6420 2 -20558.5 -7104.46 184.86 0.125 6419 6421 2 -20557.4 -7104.19 184.8 0.125 6420 6422 2 -20557.4 -7103.93 184.85 0.125 6421 6423 2 -20555.3 -7102.57 185.34 0.125 6422 6424 2 -20554.3 -7102.11 185.32 0.125 6423 6425 2 -20553.5 -7101.93 185.27 0.125 6424 6426 2 -20552.1 -7101.67 185.19 0.125 6425 6427 2 -20551 -7101.46 185.12 0.125 6426 6428 2 -20550.3 -7100.81 185.15 0.125 6427 6429 2 -20549.8 -7100.17 185.22 0.125 6428 6430 2 -20549.7 -7099.35 185.35 0.125 6429 6431 2 -20549 -7098.39 185.44 0.125 6430 6432 2 -20547.7 -7097.62 185.45 0.125 6431 6433 2 -20545.8 -7097.78 188.13 0.125 6432 6434 2 -20545.8 -7097.83 188.43 0.125 6433 6435 2 -20543 -7097.42 188.87 0.125 6434 6436 2 -20538.2 -7096.19 191.85 0.125 6435 6437 2 -20534.5 -7095.2 193.05 0.125 6436 6438 2 -20533.7 -7093.97 193.17 0.125 6437 6439 2 -20531.8 -7093.35 193.09 0.125 6438 6440 2 -20531.1 -7092.39 193.18 0.125 6439 6441 2 -20526.8 -7091.79 195.71 0.125 6440 6442 2 -20525.7 -7091.36 195.69 0.125 6441 6443 2 -20518.2 -7090.66 198.2 0.125 6442 6444 2 -20517.7 -7090.32 198.2 0.125 6443 6445 2 -20517.4 -7090.28 198.19 0.125 6444 6446 2 -20516.1 -7089.73 198.15 0.125 6445 6447 2 -20515.9 -7089.19 198.22 0.125 6446 6448 2 -20515.2 -7088.48 198.27 0.125 6447 6449 2 -20514.7 -7088.1 198.29 0.125 6448 6450 2 -20514.5 -7088.1 198.27 0.125 6449 6451 2 -20512.5 -7087.42 198.2 0.125 6450 6452 2 -20512.3 -7087.11 198.23 0.125 6451 6453 2 -20511 -7084.98 198.46 0.125 6452 6454 2 -20510.8 -7084.71 198.49 0.125 6453 6455 2 -20510.5 -7084.66 198.47 0.125 6454 6456 2 -20509.1 -7084.4 198.38 0.125 6455 6457 2 -20508.9 -7084.06 198.42 0.125 6456 6458 2 
-20504.9 -7082.38 199.17 0.125 6457 6459 2 -20491.9 -7078.32 203.8 0.125 6458 6460 2 -20489.6 -7077.11 203.78 0.125 6459 6461 2 -20480.8 -7074.79 204.3 0.125 6460 6462 2 -20480 -7074.35 204.3 0.125 6461 6463 2 -20479.8 -7074.04 204.33 0.125 6462 6464 2 -20473.5 -7071.98 207.04 0.125 6463 6465 2 -20468.6 -7069.47 205.61 0.125 6464 6466 2 -20468.3 -7069.39 205.6 0.125 6465 6467 2 -20467.5 -7068.17 205.72 0.125 6466 6468 2 -20467.2 -7067.87 205.74 0.125 6467 6469 2 -20465 -7067.43 205.61 0.125 6468 6470 2 -20464.7 -7067.37 205.59 0.125 6469 6471 2 -20463.1 -7067.01 205.37 0.125 6470 6472 2 -21138.1 -7221.17 151.06 0.125 6016 6473 2 -21136.3 -7221.14 157.04 0.125 6472 6474 2 -21134.1 -7221.5 163.48 0.125 6473 6475 2 -21134.3 -7222.56 170.82 0.125 6474 6476 2 -21134.1 -7222.85 172.53 0.125 6475 6477 2 -21134.5 -7222.33 175.76 0.125 6476 6478 2 -21134.5 -7222.08 175.81 0.125 6477 6479 2 -21131.5 -7221.01 176.37 0.125 6478 6480 2 -21131.5 -7220.74 176.41 0.125 6479 6481 2 -21130.8 -7220.32 176.41 0.125 6480 6482 2 -21130.5 -7220.03 176.44 0.125 6481 6483 2 -21130 -7219.45 176.49 0.125 6482 6484 2 -21129.8 -7219.16 176.51 0.125 6483 6485 2 -21129 -7218.79 176.5 0.125 6484 6486 2 -21128.8 -7217.71 176.66 0.125 6485 6487 2 -21128.9 -7216.89 176.8 0.125 6486 6488 2 -21127.7 -7215.69 176.89 0.125 6487 6489 2 -21126.4 -7214.15 177.03 0.125 6488 6490 2 -21126.2 -7213.9 177.05 0.125 6489 6491 2 -21124.6 -7213.13 177.02 0.125 6490 6492 2 -21124.1 -7212.52 177.07 0.125 6491 6493 2 -21134 -7221.04 181.14 0.125 6478 6494 2 -21132.5 -7220.76 186.61 0.125 6493 6495 2 -21132.5 -7220.71 188.04 0.125 6494 6496 2 -21132.3 -7220.94 189.43 0.125 6495 6497 2 -21130 -7220.77 194.71 0.125 6496 6498 2 -21129.7 -7220.72 194.69 0.125 6497 6499 2 -21128.3 -7221.49 199.87 0.125 6498 6500 2 -21128 -7221.67 201.01 0.125 6499 6501 2 -21126.3 -7221.51 204.67 0.125 6500 6502 2 -21187.9 -7237.25 141.75 0.125 5968 6503 2 -21186.4 -7237.85 141.51 0.125 6502 6504 2 -21182.9 -7239.62 143.37 0.125 6503 6505 2 -21181.3 -7240.18 144.69 0.125 6504 6506 2 -21180.2 -7240.03 144.61 0.125 6505 6507 2 -21179.1 -7240.52 146.29 0.125 6506 6508 2 -21179 -7240.7 147.39 0.125 6507 6509 2 -21178.1 -7242.05 149.56 0.125 6508 6510 2 -21177.9 -7242.98 150.59 0.125 6509 6511 2 -21169.1 -7259.06 158.25 0.125 6510 6512 2 -21166.6 -7261.12 157.68 0.125 6511 6513 2 -21166.5 -7261.61 157.59 0.125 6512 6514 2 -21166.2 -7261.83 157.52 0.125 6513 6515 2 -21164.2 -7264.23 156.94 0.125 6514 6516 2 -21163.2 -7265.42 156.65 0.125 6515 6517 2 -21160.7 -7266.42 158.05 0.125 6516 6518 2 -21159.7 -7267.84 157.71 0.125 6517 6519 2 -21159.1 -7268.06 157.62 0.125 6518 6520 2 -21158.7 -7268.27 157.55 0.125 6519 6521 2 -21157.8 -7269.18 157.32 0.125 6520 6522 2 -21157.2 -7269.36 157.26 0.125 6521 6523 2 -21156.3 -7269.55 157.25 0.125 6522 6524 2 -21156 -7269.53 157.43 0.125 6523 6525 2 -21149.9 -7270.24 161.28 0.125 6524 6526 2 -21148.9 -7272.84 162.7 0.125 6525 6527 2 -21148.6 -7273.06 162.64 0.125 6526 6528 2 -21146.5 -7275.56 162.91 0.125 6527 6529 2 -21145.8 -7276.12 163.65 0.125 6528 6530 2 -21145.5 -7276.11 163.63 0.125 6529 6531 2 -21144.9 -7276.69 164.29 0.125 6530 6532 2 -21144.6 -7276.89 164.23 0.125 6531 6533 2 -21144.5 -7277.44 164.13 0.125 6532 6534 2 -21144.2 -7277.66 164.06 0.125 6533 6535 2 -21143.7 -7278.23 164.76 0.125 6534 6536 2 -21143.3 -7278.84 165.47 0.125 6535 6537 2 -21141.8 -7280.51 168.11 0.125 6536 6538 2 -21140 -7280.46 169.24 0.125 6537 6539 2 -21139.5 -7280.9 170.42 0.125 6538 6540 2 -21139.4 -7281.94 170.24 0.125 6539 6541 2 -21139 -7282.6 
171.34 0.125 6540 6542 2 -21138.1 -7283.75 172.26 0.125 6541 6543 2 -21137.9 -7283.78 172.23 0.125 6542 6544 2 -21136.3 -7284.76 173.13 0.125 6543 6545 2 -21135.3 -7284.82 173.88 0.125 6544 6546 2 -21134.7 -7285.49 173.72 0.125 6545 6547 2 -21134 -7286.52 173.78 0.125 6546 6548 2 -21132.6 -7286.35 173.68 0.125 6547 6549 2 -21132.3 -7286.34 173.99 0.125 6548 6550 2 -21130.3 -7288.9 177.6 0.125 6549 6551 2 -21129.4 -7288.8 177.54 0.125 6550 6552 2 -21127.4 -7290.81 181 0.125 6551 6553 2 -21127.1 -7291.54 180.85 0.125 6552 6554 2 -21125.9 -7291.64 180.73 0.125 6553 6555 2 -21125.2 -7291.83 182.43 0.125 6554 6556 2 -21124.9 -7292.03 182.37 0.125 6555 6557 2 -21124.2 -7292.42 183.26 0.125 6556 6558 2 -21123.6 -7292.6 183.17 0.125 6557 6559 2 -21123.2 -7293.01 184.1 0.125 6558 6560 2 -21122.1 -7294.34 184.81 0.125 6559 6561 2 -21121.8 -7294.54 184.75 0.125 6560 6562 2 -21121.1 -7295.25 184.57 0.125 6561 6563 2 -21120.3 -7295.41 184.46 0.125 6562 6564 2 -21120 -7295.41 184.62 0.125 6563 6565 2 -21119.3 -7296.11 184.44 0.125 6564 6566 2 -21118.5 -7296.5 184.3 0.125 6565 6567 2 -21118.4 -7296.75 184.25 0.125 6566 6568 2 -21118 -7297.21 184.14 0.125 6567 6569 2 -21117.7 -7297.43 184.08 0.125 6568 6570 2 -21117.4 -7297.35 184.06 0.125 6569 6571 2 -21117.4 -7297.76 184.4 0.125 6570 6572 2 -21116.9 -7297.7 184.36 0.125 6571 6573 2 -21114.1 -7298.64 188.49 0.125 6572 6574 2 -21114 -7298.84 189.74 0.125 6573 6575 2 -21120.1 -7291.93 191.46 0.125 6574 6576 2 -21120.4 -7291.98 191.48 0.125 6575 6577 2 -21117.6 -7293.77 194.93 0.125 6576 6578 2 -21117.3 -7293.67 194.92 0.125 6577 6579 2 -21116.4 -7294 194.78 0.125 6578 6580 2 -21116.3 -7294.27 194.73 0.125 6579 6581 2 -21114.8 -7295.28 194.41 0.125 6580 6582 2 -21113.7 -7296.43 194.12 0.125 6581 6583 2 -21113 -7296.59 194.04 0.125 6582 6584 2 -21112.5 -7296.47 194.01 0.125 6583 6585 2 -21112.2 -7296.68 193.94 0.125 6584 6586 2 -21111 -7298.34 198.25 0.125 6585 6587 2 -21111 -7298.62 198.23 0.125 6586 6588 2 -21110.5 -7299.89 198.35 0.125 6587 6589 2 -21110.1 -7300.08 198.28 0.125 6588 6590 2 -21109.4 -7300.82 198.41 0.125 6589 6591 2 -21109.1 -7300.79 198.39 0.125 6590 6592 2 -21108.2 -7300.86 198.46 0.125 6591 6593 2 -21108.2 -7301.4 198.36 0.125 6592 6594 2 -21107.5 -7302.12 198.45 0.125 6593 6595 2 -21107.2 -7302.08 198.59 0.125 6594 6596 2 -21104.8 -7303.02 201.81 0.125 6595 6597 2 -21104.4 -7303.23 201.74 0.125 6596 6598 2 -21103.5 -7303.58 201.6 0.125 6597 6599 2 -21101.2 -7305.33 205.92 0.125 6598 6600 2 -21099.9 -7305.82 206.65 0.125 6599 6601 2 -21097.7 -7308.01 207.58 0.125 6600 6602 2 -21096.5 -7309.22 209.53 0.125 6601 6603 2 -21096 -7309.65 210.5 0.125 6602 6604 2 -21094.8 -7310.2 210.42 0.125 6603 6605 2 -21093.7 -7310.22 210.31 0.125 6604 6606 2 -21092.4 -7311.05 210.05 0.125 6605 6607 2 -21091.7 -7311.21 209.98 0.125 6606 6608 2 -21090.5 -7311.76 209.78 0.125 6607 6609 2 -21090.2 -7311.69 209.77 0.125 6608 6610 2 -21088.9 -7312.33 211.9 0.125 6609 6611 2 -21088.3 -7312.31 212 0.125 6610 6612 2 -21088.2 -7312.56 211.95 0.125 6611 6613 2 -21087.4 -7313.64 212.71 0.125 6612 6614 2 -21087.3 -7313.93 212.66 0.125 6613 6615 2 -21084.8 -7315.05 212.7 0.125 6614 6616 2 -21084.2 -7315.47 212.57 0.125 6615 6617 2 -21082.7 -7315.81 212.87 0.125 6616 6618 2 -21081.6 -7315.33 212.85 0.125 6617 6619 2 -21081 -7315.51 213.18 0.125 6618 6620 2 -21080.7 -7315.51 213.16 0.125 6619 6621 2 -21080.1 -7315.81 217.14 0.125 6620 6622 2 -21079.7 -7316.22 218.33 0.125 6621 6623 2 -21078.1 -7316.78 218.51 0.125 6622 6624 2 -21077.8 -7317.07 218.89 0.125 6623 6625 2 
-21077 -7318.01 218.66 0.125 6624 6626 2 -21076.6 -7318.73 218.5 0.125 6625 6627 2 -21075.9 -7319.15 218.52 0.125 6626 6628 2 -21075.7 -7319.08 218.51 0.125 6627 6629 2 -21074.6 -7319.74 220.26 0.125 6628 6630 2 -21074 -7319.87 220.18 0.125 6629 6631 2 -21072.5 -7320.83 222.13 0.125 6630 6632 2 -21072.2 -7320.99 222.07 0.125 6631 6633 2 -21070.8 -7322.5 222.63 0.125 6632 6634 2 -21070.1 -7322.65 222.54 0.125 6633 6635 2 -21069 -7322.71 222.42 0.125 6634 6636 2 -21068.4 -7322.6 222.39 0.125 6635 6637 2 -21067.6 -7322.68 222.3 0.125 6636 6638 2 -21066.9 -7323.13 222.16 0.125 6637 6639 2 -21065.8 -7324.23 221.88 0.125 6638 6640 2 -21065.6 -7324.17 221.86 0.125 6639 6641 2 -21061.8 -7326.65 224.45 0.125 6640 6642 2 -21061.5 -7326.59 224.29 0.125 6641 6643 2 -21060.5 -7326.06 223.97 0.125 6642 6644 2 -21059.7 -7325.9 223.92 0.125 6643 6645 2 -21058.8 -7326.02 223.82 0.125 6644 6646 2 -21056.3 -7327.31 227.37 0.125 6645 6647 2 -21055.2 -7327.33 227.26 0.125 6646 6648 2 -21054.9 -7327.27 227.24 0.125 6647 6649 2 -21054.2 -7328.22 227.02 0.125 6648 6650 2 -21053.5 -7328.85 226.85 0.125 6649 6651 2 -21053.4 -7329.12 226.8 0.125 6650 6652 2 -21052.4 -7329.96 226.56 0.125 6651 6653 2 -21052.2 -7329.9 226.55 0.125 6652 6654 2 -21049.7 -7330.22 232.08 0.125 6653 6655 2 -21048.8 -7330.33 231.98 0.125 6654 6656 2 -21047.6 -7331.12 231.74 0.125 6655 6657 2 -21047.2 -7331.35 231.67 0.125 6656 6658 2 -21046 -7331.89 231.46 0.125 6657 6659 2 -21045.2 -7331.45 231.46 0.125 6658 6660 2 -21040.8 -7333.84 235.37 0.125 6659 6661 2 -21040 -7333.69 235.32 0.125 6660 6662 2 -21038 -7334.15 238.1 0.125 6661 6663 2 -21037.5 -7333.99 238.08 0.125 6662 6664 2 -21036.8 -7334.41 237.95 0.125 6663 6665 2 -21035.1 -7335.36 238.98 0.125 6664 6666 2 -21034.8 -7335.33 238.96 0.125 6665 6667 2 -21033.6 -7335.44 234.92 0.125 6666 6668 2 -21028.5 -7339.42 240.77 0.125 6667 6669 2 -21020.1 -7341.93 243.38 0.125 6668 6670 2 -21020 -7348.27 242.32 0.125 6669 6671 2 -21018.7 -7348.86 242.1 0.125 6670 6672 2 -21016.1 -7349.44 244.71 0.125 6671 6673 2 -21013.5 -7349.7 248.73 0.125 6672 6674 2 -21013.3 -7349.66 248.75 0.125 6673 6675 2 -21009.4 -7350.75 247.43 0.125 6674 6676 2 -21001.7 -7353.27 251.66 0.125 6675 6677 2 -20990.1 -7355.59 253.3 0.125 6676 6678 2 -20990.1 -7355.69 253.9 0.125 6677 6679 2 -20986.1 -7357.15 258 0.125 6678 6680 2 -20985.5 -7357.34 258.03 0.125 6679 6681 2 -20983.7 -7355.56 258.15 0.125 6680 6682 2 -20984 -7355.59 258.17 0.125 6681 6683 2 -21061.2 -7327.21 223.78 0.125 6640 6684 2 -21061.1 -7327.3 224.31 0.125 6683 6685 2 -21060 -7328.58 224.68 0.125 6684 6686 2 -21059.9 -7328.69 225.33 0.125 6685 6687 2 -21057.3 -7328.85 225.92 0.125 6686 6688 2 -21057.2 -7329.1 225.88 0.125 6687 6689 2 -21055.8 -7330.69 225.48 0.125 6688 6690 2 -21055.4 -7331.13 225.37 0.125 6689 6691 2 -21055.3 -7331.92 225.23 0.125 6690 6692 2 -21053.8 -7332.73 228.25 0.125 6691 6693 2 -21053.6 -7333.44 229.25 0.125 6692 6694 2 -21052.5 -7335.06 231.45 0.125 6693 6695 2 -21052.7 -7335.59 231.38 0.125 6694 6696 2 -21052.7 -7336.93 232.81 0.125 6695 6697 2 -21052.4 -7338.14 233.86 0.125 6696 6698 2 -21052.2 -7337.88 233.88 0.125 6697 6699 2 -21051.8 -7338.08 233.82 0.125 6698 6700 2 -21051.1 -7339.07 233.89 0.125 6699 6701 2 -21050.9 -7339.31 235.29 0.125 6700 6702 2 -21049.4 -7341.16 235.46 0.125 6701 6703 2 -21049.3 -7341.93 235.32 0.125 6702 6704 2 -21048.9 -7342.14 235.26 0.125 6703 6705 2 -21048.2 -7342.59 235.12 0.125 6704 6706 2 -21047.3 -7343.34 234.08 0.125 6705 6707 2 -21047.4 -7343.26 233.56 0.125 6706 6708 2 -21047.4 -7344.85 
231.7 0.125 6707 6709 2 -21047.5 -7344.75 231.14 0.125 6708 6710 2 -21046.1 -7346.89 227.93 0.125 6709 6711 2 -21045.9 -7349.62 223.7 0.125 6710 6712 2 -21045.8 -7349.9 223.65 0.125 6711 6713 2 -21045.1 -7352.45 223.16 0.125 6712 6714 2 -21044.7 -7353.14 223.01 0.125 6713 6715 2 -21043.4 -7353.66 222.76 0.125 6714 6716 2 -21043.1 -7355.65 221.94 0.125 6715 6717 2 -21043.1 -7355.5 221.07 0.125 6716 6718 2 -21042.7 -7356.83 219.64 0.125 6717 6719 2 -21042.8 -7356.69 218.85 0.125 6718 6720 2 -21042.6 -7357.23 220.55 0.125 6719 6721 2 -21042.2 -7357.7 220.44 0.125 6720 6722 2 -21041.1 -7360.38 219.89 0.125 6721 6723 2 -21040.7 -7362.98 219.42 0.125 6722 6724 2 -21040 -7365.24 218.98 0.125 6723 6725 2 -21039.8 -7366.03 218.84 0.125 6724 6726 2 -21039.2 -7366.15 218.76 0.125 6725 6727 2 -21038.7 -7367.64 218.46 0.125 6726 6728 2 -21038.3 -7368.09 218.35 0.125 6727 6729 2 -21038.1 -7368.87 218.21 0.125 6728 6730 2 -21036.5 -7370.11 217.85 0.125 6729 6731 2 -21036 -7371.12 217.63 0.125 6730 6732 2 -21035.8 -7372.42 217.4 0.125 6731 6733 2 -21035.7 -7374.83 220.33 0.125 6732 6734 2 -21034.5 -7378.33 219.54 0.125 6733 6735 2 -21034.1 -7378.77 219.43 0.125 6734 6736 2 -21032.9 -7380.92 218.96 0.125 6735 6737 2 -21032.2 -7381.33 218.76 0.125 6736 6738 2 -21029.7 -7383.75 219.5 0.125 6737 6739 2 -21028.7 -7385.49 221.14 0.125 6738 6740 2 -21028.6 -7385.67 222.26 0.125 6739 6741 2 -21026.3 -7388.08 224.16 0.125 6740 6742 2 -21026 -7388.25 224.1 0.125 6741 6743 2 -21025.4 -7389.74 223.85 0.125 6742 6744 2 -21024.3 -7389.53 223.79 0.125 6743 6745 2 -21024.2 -7390.32 223.66 0.125 6744 6746 2 -21023.6 -7390.46 223.58 0.125 6745 6747 2 -21023.5 -7390.71 223.53 0.125 6746 6748 2 -21023 -7392.38 222.81 0.125 6747 6749 2 -21023 -7392.33 222.52 0.125 6748 6750 2 -21021.8 -7392.66 222.35 0.125 6749 6751 2 -21021.2 -7392.8 222.27 0.125 6750 6752 2 -21020.6 -7393.63 224.43 0.125 6751 6753 2 -21020.6 -7393.93 224.38 0.125 6752 6754 2 -21020.8 -7394.89 222.4 0.125 6753 6755 2 -21019.8 -7397.13 221.94 0.125 6754 6756 2 -21019.1 -7397.79 221.77 0.125 6755 6757 2 -21019 -7398.06 221.72 0.125 6756 6758 2 -21017.7 -7399.38 221.37 0.125 6757 6759 2 -21017.1 -7399.56 221.28 0.125 6758 6760 2 -21016.8 -7399.51 221.27 0.125 6759 6761 2 -21015.8 -7400.11 221.08 0.125 6760 6762 2 -21014.7 -7400.14 220.96 0.125 6761 6763 2 -21014.3 -7400.58 220.86 0.125 6762 6764 2 -21014 -7400.8 220.79 0.125 6763 6765 2 -21013.4 -7401.18 220.67 0.125 6764 6766 2 -21011.6 -7405.16 221.62 0.125 6765 6767 2 -21009.5 -7406.42 223.05 0.125 6766 6768 2 -21009.2 -7406.9 222.94 0.125 6767 6769 2 -21008.5 -7407.82 222.72 0.125 6768 6770 2 -21008.2 -7407.76 222.71 0.125 6769 6771 2 -21007 -7408.78 224.88 0.125 6770 6772 2 -21004.5 -7411.56 224.58 0.125 6771 6773 2 -21004.1 -7411.97 224.47 0.125 6772 6774 2 -21003.8 -7412.21 224.4 0.125 6773 6775 2 -21002.5 -7414.6 223.88 0.125 6774 6776 2 -21002.1 -7415.05 223.77 0.125 6775 6777 2 -21001.8 -7415.26 223.71 0.125 6776 6778 2 -21001.1 -7417.79 223.22 0.125 6777 6779 2 -21000.5 -7419.26 222.93 0.125 6778 6780 2 -21000.3 -7420.28 222.74 0.125 6779 6781 2 -20996.7 -7425.8 222 0.125 6780 6782 2 -21882.4 -7381.27 112.01 0.37 1637 6783 2 -21883.7 -7381.99 112.01 0.37 6782 6784 2 -21884.3 -7383.14 111.88 0.37 6783 6785 2 -21884.6 -7384.79 111.63 0.37 6784 6786 2 -21884.4 -7386.09 111.4 0.37 6785 6787 2 -21885 -7387.77 111.17 0.37 6786 6788 2 -21886.3 -7388.52 111.17 0.37 6787 6789 2 -21886.6 -7389.88 110.97 0.37 6788 6790 2 -21886.6 -7390.17 110.92 0.37 6789 6791 2 -21886.3 -7391.42 110.69 0.37 6790 6792 2 
-21886.2 -7394.05 110.24 0.37 6791 6793 2 -21886.8 -7395.73 110.02 0.37 6792 6794 2 -21888.2 -7397.25 109.9 0.37 6793 6795 2 -21888.2 -7397.55 109.85 0.37 6794 6796 2 -21888.4 -7399.22 109.6 0.37 6795 6797 2 -21889.1 -7400.64 112.45 0.37 6796 6798 2 -21889.1 -7400.9 112.4 0.37 6797 6799 2 -21889.1 -7402.79 112.47 0.37 6798 6800 2 -21888.4 -7404.96 116.09 0.37 6799 6801 2 -21888.7 -7405.01 116.1 0.37 6800 6802 2 -21889.4 -7406.75 117.25 0.37 6801 6803 2 -21890.8 -7406.97 117.35 0.37 6802 6804 2 -21892.5 -7407.8 118.87 0.37 6803 6805 2 -21894.1 -7408.6 118.88 0.37 6804 6806 2 -21893.9 -7410.25 122.11 0.37 6805 6807 2 -21894.3 -7411.1 122.01 0.37 6806 6808 2 -21894.9 -7410.95 122.09 0.37 6807 6809 2 -21896.3 -7411.18 122.18 0.37 6808 6810 2 -21897.2 -7412.65 126.45 0.37 6809 6811 2 -21897.2 -7412.69 126.68 0.37 6810 6812 2 -21897.2 -7413.51 128.05 0.37 6811 6813 2 -21897.5 -7413.5 128.08 0.37 6812 6814 2 -21896.7 -7414.27 129.73 0.37 6813 6815 2 -21896.5 -7415.3 129.73 0.37 6814 6816 2 -21896.7 -7416.59 132.21 0.37 6815 6817 2 -21896.9 -7416.66 132.23 0.37 6816 6818 2 -21897.6 -7417.38 132.82 0.37 6817 6819 2 -21897.7 -7418.18 137.17 0.37 6818 6820 2 -21897.7 -7418.21 137.3 0.37 6819 6821 2 -21899.3 -7418.63 137.95 0.37 6820 6822 2 -21899.3 -7418.85 137.9 0.37 6821 6823 2 -21900.6 -7420.31 140.28 0.37 6822 6824 2 -21900.5 -7420.72 141.15 0.37 6823 6825 2 -21901.3 -7421.97 142.68 0.37 6824 6826 2 -21901.2 -7422.04 143.13 0.37 6825 6827 2 -21902.1 -7423.44 143.9 0.37 6826 6828 2 -21903.5 -7424.9 145.04 0.37 6827 6829 2 -21904.2 -7426.13 148.14 0.37 6828 6830 2 -21904 -7427.79 149.76 0.37 6829 6831 2 -21905.4 -7429.92 149.79 0.37 6830 6832 2 -21905.4 -7429.98 150.13 0.37 6831 6833 2 -21905.5 -7431.02 152.92 0.37 6832 6834 2 -21905.8 -7431.12 152.94 0.37 6833 6835 2 -21905.8 -7431.2 153.46 0.37 6834 6836 2 -21905.8 -7432.79 157.62 0.37 6835 6837 2 -21905.9 -7433.85 157.48 0.37 6836 6838 2 -21906.6 -7434.79 162.06 0.37 6837 6839 2 -21907.4 -7434.96 162.1 0.37 6838 6840 2 -21907.9 -7435.93 164.35 0.37 6839 6841 2 -21908 -7437 164.19 0.37 6840 6842 2 -21908.2 -7437.86 164.06 0.37 6841 6843 2 -21908.6 -7439.34 165.97 0.37 6842 6844 2 -21908.9 -7439.41 165.98 0.37 6843 6845 2 -21909.3 -7440.52 165.84 0.37 6844 6846 2 -21909.7 -7441.34 169.88 0.37 6845 6847 2 -21916.8 -7441.26 170.56 0.37 6846 6848 2 -21918.4 -7442.06 170.58 0.37 6847 6849 2 -21919.4 -7442.55 171.08 0.37 6848 6850 2 -21919.3 -7442.72 172.13 0.37 6849 6851 2 -21919.1 -7443.71 173.22 0.37 6850 6852 2 -21918.8 -7444.68 175.97 0.37 6851 6853 2 -21918.5 -7444.64 175.95 0.37 6852 6854 2 -21918.1 -7445.67 175.85 0.37 6853 6855 2 -21917.6 -7446.71 175.88 0.37 6854 6856 2 -21917.3 -7446.95 176 0.37 6855 6857 2 -21916.1 -7447.34 176.08 0.37 6856 6858 2 -21915.3 -7447.45 175.5 0.37 6857 6859 2 -21915 -7447.62 175.52 0.37 6858 6860 2 -21914.9 -7447.99 175.92 0.37 6859 6861 2 -21914.7 -7448.82 176.07 0.37 6860 6862 2 -21914.7 -7448.93 176.71 0.37 6861 6863 2 -21914.1 -7449.14 176.97 0.37 6862 6864 2 -21914.1 -7448.98 177.29 0.37 6863 6865 2 -21914.8 -7450.49 180.94 0.37 6864 6866 2 -21914.9 -7452.28 183.29 0.37 6865 6867 2 -21914.9 -7452.3 183.42 0.37 6866 6868 2 -21915.6 -7453.23 183.52 0.37 6867 6869 2 -21915.6 -7453.25 183.65 0.37 6868 6870 2 -21915.7 -7456.07 185.34 0.37 6869 6871 2 -21915 -7457.93 190.35 0.37 6870 6872 2 -21915 -7459.36 192.16 0.37 6871 6873 2 -21915 -7459.48 192.92 0.37 6872 6874 2 -21915.2 -7461.11 194.2 0.37 6873 6875 2 -21915.3 -7464.19 194.68 0.37 6874 6876 2 -21915.3 -7465.81 194.45 0.37 6875 6877 2 -21916.2 -7466.9 
[Data file continues: whitespace-separated seven-column compartment records, apparently in SWC neuron-morphology layout (sample index, structure type, x, y, z, radius, parent index). This portion covers samples roughly 6877-8571, all of structure type 2, with radii of 0.125, 0.25 and 0.37, and includes several branch points where a sample's parent is an earlier, non-adjacent sample (e.g. 7040 -> 6961, 7398 -> 6801, 8409 -> 7429).]
8572 2 -22202.8 -7457.79 101.96 0.25 8571 8573 2 -22204.7 -7458.38 102.04 0.25 8572 8574 2 -22206.2 -7458.69 103.78 0.25 8573 8575 2 -22208.2 -7458.49 104 0.25 8574 8576 2 -22209 -7458.84 104.02 0.25 8575 8577 2 -22211.3 -7458.24 104.33 0.25 8576 8578 2 -22213.1 -7458.44 104.46 0.25 8577 8579 2 -22213 -7458.72 104.41 0.25 8578 8580 2 -22214.7 -7458.99 104.52 0.25 8579 8581 2 -22215.7 -7459.43 104.56 0.25 8580 8582 2 -22216.8 -7460.17 104.53 0.25 8581 8583 2 -22232.8 -7460.81 105.26 0.25 8582 8584 2 -22233.8 -7461.44 105.25 0.25 8583 8585 2 -22236 -7462.08 105.35 0.25 8584 8586 2 -22236.3 -7462.38 105.33 0.25 8585 8587 2 -22236.6 -7462.2 105.39 0.25 8586 8588 2 -22237.1 -7462.26 105.42 0.25 8587 8589 2 -22238.7 -7461.62 105.26 0.25 8588 8590 2 -22240.6 -7461.97 105.38 0.25 8589 8591 2 -22240.9 -7461.77 105.45 0.25 8590 8592 2 -22241.8 -7461.09 105.65 0.25 8591 8593 2 -22243 -7461.3 105.71 0.25 8592 8594 2 -22243.3 -7461.08 105.78 0.25 8593 8595 2 -22244.7 -7461.08 105.91 0.25 8594 8596 2 -22246.6 -7461.18 103.49 0.25 8595 8597 2 -22247.6 -7461.88 103.47 0.25 8596 8598 2 -22248.1 -7461.67 103.56 0.25 8597 8599 2 -22248.4 -7461.73 103.57 0.25 8598 8600 2 -22248.7 -7461.77 103.59 0.25 8599 8601 2 -22250.4 -7462.1 102.4 0.25 8600 8602 2 -22251.5 -7462.73 102.11 0.25 8601 8603 2 -22253.5 -7462.56 102.28 0.25 8602 8604 2 -22253.8 -7462.59 102.3 0.25 8603 8605 2 -22254.6 -7462.99 102.3 0.25 8604 8606 2 -22255.5 -7464.16 102.14 0.25 8605 8607 2 -22256.2 -7464.02 102.22 0.25 8606 8608 2 -22256.8 -7463.57 102.35 0.25 8607 8609 2 -22257 -7463.64 102.35 0.25 8608 8610 2 -22258.1 -7464.13 102.61 0.25 8609 8611 2 -22259.2 -7464.52 102.63 0.25 8610 8612 2 -22261.9 -7465.62 101.59 0.25 8611 8613 2 -22264 -7465.77 99.53 0.25 8612 8614 2 -22264.3 -7465.87 99.54 0.25 8613 8615 2 -22266.5 -7465.69 99.78 0.25 8614 8616 2 -22268.2 -7466.19 99.9 0.25 8615 8617 2 -22268.4 -7466.55 99.87 0.25 8616 8618 2 -22269.2 -7466.67 99.92 0.25 8617 8619 2 -22270.3 -7466.84 100 0.25 8618 8620 2 -22270.7 -7466.62 100.07 0.25 8619 8621 2 -22270.9 -7466.71 100.08 0.25 8620 8622 2 -22271.2 -7466.72 100.1 0.25 8621 8623 2 -22273 -7467.75 101.26 0.25 8622 8624 2 -22274 -7468.14 101.3 0.25 8623 8625 2 -22274.3 -7468.23 101.31 0.25 8624 8626 2 -22276.8 -7468.87 101.44 0.25 8625 8627 2 -22276.8 -7468.86 101.34 0.25 8626 8628 2 -22279.4 -7469.81 98.33 0.25 8627 8629 2 -22280.8 -7471.37 98.21 0.25 8628 8630 2 -22281 -7473.53 97.87 0.25 8629 8631 2 -21890.2 -7405.28 114.83 0.37 6799 8632 2 -21890.1 -7405.54 114.78 0.37 8631 8633 2 -21891.4 -7407.03 112.62 0.37 8632 8634 2 -21891.4 -7407.3 112.61 0.37 8633 8635 2 -21892.3 -7409.93 117.5 0.37 8634 8636 2 -21893.1 -7411.87 117.08 0.37 8635 8637 2 -21892.5 -7413.9 116.69 0.37 8636 8638 2 -21892.1 -7416.21 116.27 0.37 8637 8639 2 -21891.8 -7416.43 116.21 0.37 8638 8640 2 -21889.8 -7418.15 115.74 0.37 8639 8641 2 -21889.5 -7418.07 115.72 0.37 8640 8642 2 -21883.9 -7422.16 113.34 0.37 8641 8643 2 -21884.1 -7426.87 122.83 0.25 8642 8644 2 -21893.3 -7432.13 123.95 0.25 8643 8645 2 -21895.7 -7437.85 123.23 0.25 8644 8646 2 -21896 -7441.55 122.64 0.25 8645 8647 2 -21895.1 -7446.66 121.71 0.25 8646 8648 2 -21896.9 -7449.36 121.43 0.25 8647 8649 2 -21896.1 -7451.06 121.07 0.25 8648 8650 2 -21896.1 -7454.57 119.35 0.25 8649 8651 2 -21896.1 -7458.41 118.1 0.25 8650 8652 2 -21895.5 -7459.26 115.89 0.25 8651 8653 2 -21895.3 -7460.89 114.49 0.25 8652 8654 2 -21895.9 -7462.31 114.31 0.25 8653 8655 2 -21895.4 -7463.55 114.06 0.25 8654 8656 2 -21898 -7464.49 115.2 0.25 8655 8657 2 -21899.2 -7465.4 116.38 0.25 
8656 8658 2 -21900.3 -7466.02 120.28 0.25 8657 8659 2 -21901.4 -7468.44 122.07 0.25 8658 8660 2 -21900.6 -7469.8 121.36 0.25 8659 8661 2 -21901.1 -7472.18 120.47 0.25 8660 8662 2 -21902.5 -7474.28 120.25 0.25 8661 8663 2 -21902.7 -7476.33 119.41 0.25 8662 8664 2 -21902 -7478.38 120.86 0.25 8663 8665 2 -21901.6 -7480.81 119.73 0.25 8664 8666 2 -21902.3 -7489.42 121.58 0.25 8665 8667 2 -21902.2 -7492.88 121.22 0.25 8666 8668 2 -21903.3 -7495.16 120.93 0.25 8667 8669 2 -21904.2 -7496.33 120.49 0.25 8668 8670 2 -21904.7 -7497.81 122.28 0.125 8669 8671 2 -21905.8 -7500.42 123.58 0.125 8670 8672 2 -21906.2 -7501.83 123.61 0.125 8671 8673 2 -21905.5 -7502.5 123.44 0.125 8672 8674 2 -21904.8 -7503.66 125.96 0.125 8673 8675 2 -21904.8 -7505.53 130.31 0.125 8674 8676 2 -21906.1 -7506.98 132.61 0.125 8675 8677 2 -21906.2 -7510.19 135.21 0.125 8676 8678 2 -21906 -7511.11 136.13 0.125 8677 8679 2 -21905.1 -7512.3 137.38 0.125 8678 8680 2 -21904.6 -7514.42 139 0.125 8679 8681 2 -21905.2 -7516.13 143.41 0.125 8680 8682 2 -21905.8 -7516.72 144.68 0.125 8681 8683 2 -21905.3 -7518.56 147.78 0.125 8682 8684 2 -21905.3 -7520.46 147.77 0.125 8683 8685 2 -21905.5 -7522.33 147.48 0.125 8684 8686 2 -21905.7 -7523.82 149.44 0.125 8685 8687 2 -21905.8 -7526.63 152.89 0.125 8686 8688 2 -21906.3 -7528.83 152.57 0.125 8687 8689 2 -21910 -7527.75 153.1 0.125 8688 8690 2 -21909.4 -7529.99 156.96 0.125 8689 8691 2 -21909.6 -7530.82 157.07 0.125 8690 8692 2 -21909.7 -7532.65 159.49 0.125 8691 8693 2 -21909.8 -7535.31 160.6 0.125 8692 8694 2 -21911.1 -7537.86 166.3 0.125 8693 8695 2 -21912 -7540.26 168.27 0.125 8694 8696 2 -21913.8 -7543.4 169.39 0.125 8695 8697 2 -21912 -7545.38 174.2 0.125 8696 8698 2 -21912.1 -7545.32 173.85 0.125 8697 8699 2 -21911.9 -7546.49 174.42 0.125 8698 8700 2 -21911 -7548.69 182.74 0.125 8699 8701 2 -21912.2 -7549.52 183.62 0.125 8700 8702 2 -21912.8 -7549.79 183.03 0.125 8701 8703 2 -21913.2 -7551.18 185.94 0.125 8702 8704 2 -21913.2 -7552.1 185.18 0.125 8703 8705 2 -21912.8 -7554.65 188.93 0.125 8704 8706 2 -21913.5 -7556.13 189.17 0.125 8705 8707 2 -21914.5 -7560.01 190.29 0.125 8706 8708 2 -21914.5 -7561.42 192.19 0.125 8707 8709 2 -21914.7 -7564.32 196.11 0.125 8708 8710 2 -21914.2 -7566.84 202.88 0.125 8709 8711 2 -21913.3 -7567.21 205.77 0.125 8710 8712 2 -21912.7 -7568.55 212.06 0.125 8711 8713 2 -21912.6 -7570.1 210.23 0.125 8712 8714 2 -21919.4 -7562.21 212.17 0.125 8713 8715 2 -21919.6 -7562.77 212.1 0.125 8714 8716 2 -21920.6 -7565.62 219.23 0.125 8715 8717 2 -21920.5 -7568.94 223.87 0.125 8716 8718 2 -21920.8 -7570.32 223.67 0.125 8717 8719 2 -21919.9 -7572.86 226.48 0.125 8718 8720 2 -21919.2 -7576.16 236.22 0.125 8719 8721 2 -21920.9 -7574.35 229.17 0.125 8720 8722 2 -21919.6 -7577.09 236.47 0.125 8721 8723 2 -21918.9 -7579 241.53 0.125 8722 8724 2 -21917.6 -7581.91 248.21 0.125 8723 8725 2 -21915.7 -7584.51 252.49 0.125 8724 8726 2 -21914.2 -7586.48 252.72 0.125 8725 8727 2 -21912.9 -7587.29 252.48 0.125 8726 8728 2 -21911.3 -7588.54 254.93 0.125 8727 8729 2 -21911.1 -7589.9 255.03 0.125 8728 8730 2 -21911.4 -7591.67 258.6 0.125 8729 8731 2 -21910.7 -7593.69 258.2 0.125 8730 8732 2 -21912 -7596.07 257.93 0.125 8731 8733 2 -21912.9 -7600.01 263.58 0.125 8732 8734 2 -21913 -7602.5 268.14 0.125 8733 8735 2 -21913.4 -7603.4 268.03 0.125 8734 8736 2 -21913.9 -7605.47 271.58 0.125 8735 8737 2 -21913.8 -7608.04 275.17 0.125 8736 8738 2 -21913.9 -7607.96 273.2 0.125 8737 8739 2 -21906.1 -7497.08 119.79 0.25 8669 8740 2 -21905.6 -7500.14 119.24 0.25 8739 8741 2 -21905.9 -7502.04 118.95 
0.25 8740 8742 2 -21905.9 -7505.19 118.43 0.25 8741 8743 2 -21905.6 -7508.53 117.85 0.25 8742 8744 2 -21905.6 -7511.87 118.03 0.25 8743 8745 2 -21904.9 -7515.69 117.36 0.25 8744 8746 2 -21904.4 -7518.39 117.58 0.25 8745 8747 2 -21904.2 -7521.49 117.05 0.25 8746 8748 2 -21904.5 -7524.71 116.54 0.25 8747 8749 2 -21905.3 -7528.55 115.98 0.25 8748 8750 2 -21906.3 -7531.1 115.65 0.25 8749 8751 2 -21905.6 -7533.36 115.21 0.25 8750 8752 2 -21907.1 -7536.03 114.92 0.25 8751 8753 2 -21910 -7541.25 114.32 0.25 8752 8754 2 -21912.3 -7544.8 112.27 0.25 8753 8755 2 -21926 -7590.84 115.34 0.25 8754 8756 2 -21926.6 -7595.94 114.54 0.25 8755 8757 2 -21926.8 -7599.41 113.99 0.25 8756 8758 2 -21927.6 -7602.98 113.48 0.25 8757 8759 2 -21927.9 -7605.12 113.14 0.25 8758 8760 2 -21928.4 -7608.08 114 0.25 8759 8761 2 -21929.5 -7610.12 113.76 0.25 8760 8762 2 -21930.1 -7612.74 114.13 0.25 8761 8763 2 -21930.8 -7617.18 114.09 0.25 8762 8764 2 -21930.9 -7619.86 113.66 0.25 8763 8765 2 -21930.9 -7621.64 113.08 0.25 8764 8766 2 -21931.2 -7625.27 111.84 0.25 8765 8767 2 -21932 -7627.46 111.24 0.25 8766 8768 2 -21933.3 -7629.87 109.67 0.25 8767 8769 2 -21933.6 -7631.9 108.74 0.25 8768 8770 2 -21932.5 -7637.25 106.37 0.25 8769 8771 2 -21934.8 -7641.01 103.84 0.25 8770 8772 2 -21936.9 -7645.31 103.21 0.25 8771 8773 2 -21939.1 -7647.51 102.99 0.25 8772 8774 2 -21940.4 -7649.86 102.73 0.25 8773 8775 2 -21941.9 -7652.18 103.62 0.25 8774 8776 2 -21941.6 -7654.37 102.32 0.25 8775 8777 2 -21940.3 -7658.63 101.65 0.25 8776 8778 2 -21939.4 -7662.45 100.93 0.25 8777 8779 2 -21938.8 -7664.67 100.5 0.25 8778 8780 2 -21939.3 -7666.35 100.28 0.25 8779 8781 2 -21939.7 -7667.48 100.13 0.25 8780 8782 2 -21939.6 -7669.83 99.73 0.25 8781 8783 2 -21940.1 -7670.91 97.79 0.25 8782 8784 2 -21940.8 -7675.28 97.25 0.25 8783 8785 2 -21942.6 -7677.97 96.97 0.25 8784 8786 2 -21943.7 -7680 96.66 0.25 8785 8787 2 -21943.3 -7681.92 97.14 0.25 8786 8788 2 -21946.8 -7705.38 96.03 0.25 8787 8789 2 -21947.8 -7710.3 98.25 0.25 8788 8790 2 -21949.1 -7712.64 97.81 0.25 8789 8791 2 -21950.2 -7716.45 96.95 0.25 8790 8792 2 -21951.5 -7720.37 96.37 0.25 8791 8793 2 -21953.2 -7724.08 95.9 0.25 8792 8794 2 -21953.5 -7727.63 94.13 0.25 8793 8795 2 -21954 -7731.29 94.32 0.25 8794 8796 2 -21956.3 -7733.94 95.06 0.25 8795 8797 2 -21954.5 -7737.6 94.29 0.25 8796 8798 2 -21956.3 -7740.28 94.01 0.25 8797 8799 2 -21956.8 -7741.64 91.99 0.25 8798 8800 2 -21956.9 -7746.15 91.26 0.25 8799 8801 2 -21956.9 -7748.22 90.8 0.25 8800 8802 2 -21956.7 -7751.04 90.03 0.25 8801 8803 2 -21956.4 -7754.67 89.4 0.25 8802 8804 2 -21957.5 -7758.27 88.91 0.25 8803 8805 2 -21956.2 -7760.99 88.34 0.25 8804 8806 2 -21956.6 -7763.14 88.02 0.25 8805 8807 2 -21958.8 -7767.97 90.14 0.25 8806 8808 2 -21961.6 -7771.21 90.35 0.25 8807 8809 2 -21963.1 -7775.42 89.8 0.25 8808 8810 2 -21963.6 -7779.73 87.53 0.25 8809 8811 2 -21963.9 -7784.74 86.53 0.25 8810 8812 2 -21963.8 -7787.08 86.14 0.25 8811 8813 2 -21968.5 -7808.68 87.11 0.25 8812 8814 2 -21969.2 -7812.53 86.54 0.25 8813 8815 2 -21970.7 -7815.42 86.2 0.25 8814 8816 2 -21972 -7818.43 85.18 0.25 8815 8817 2 -21971.5 -7820.89 84.19 0.25 8816 8818 2 -21972.1 -7822.88 84.23 0.25 8817 8819 2 -21972.3 -7825.03 83.95 0.25 8818 8820 2 -21972.4 -7827.47 83.68 0.25 8819 8821 2 -21972.8 -7830.44 83.29 0.25 8820 8822 2 -21973.7 -7832.97 82.96 0.25 8821 8823 2 -21975 -7838.38 83.09 0.25 8822 8824 2 -21976.2 -7843.08 82.45 0.25 8823 8825 2 -21977.7 -7846.66 81.3 0.25 8824 8826 2 -21978.7 -7851.32 79.17 0.25 8825 8827 2 -21982 -7862.34 80 0.25 8826 8828 2 
-21983.8 -7865.57 78.13 0.25 8827 8829 2 -21984.1 -7869.21 78.46 0.25 8828 8830 2 -21983.8 -7871.94 78.97 0.25 8829 8831 2 -21985.3 -7874.81 78.31 0.25 8830 8832 2 -21984.2 -7878.39 76.62 0.25 8831 8833 2 -21984.1 -7884.03 76.29 0.25 8832 8834 2 -21985.7 -7889.2 71.95 0.25 8833 8835 2 -21989.2 -7890.73 70.99 0.25 8834 8836 2 -21989 -7893.34 70.54 0.25 8835 8837 2 -21989.8 -7897.16 69.98 0.25 8836 8838 2 -21989.6 -7899.03 68.49 0.25 8837 8839 2 -21988.3 -7902.1 70.3 0.25 8838 8840 2 -21988.4 -7905.45 69.3 0.25 8839 8841 2 -21987.9 -7908.63 67.76 0.25 8840 8842 2 -21986.8 -7911.31 68.6 0.25 8841 8843 2 -21987.6 -7913.08 68.66 0.25 8842 8844 2 -21991.9 -7937.11 63.88 0.25 8843 8845 2 -21992.9 -7942.84 63.03 0.25 8844 8846 2 -21992.1 -7947.7 62.15 0.25 8845 8847 2 -21995.9 -7950.21 62.09 0.25 8846 8848 2 -21998.4 -7953.55 61.77 0.25 8847 8849 2 -21998 -7957.37 62.15 0.25 8848 8850 2 -21998.7 -7961.45 61.53 0.25 8849 8851 2 -21997.8 -7966.3 62.14 0.25 8850 8852 2 -21999.2 -7968.79 61.03 0.25 8851 8853 2 -21999.3 -7971.73 60.71 0.25 8852 8854 2 -21999.9 -7972.49 61.22 0.25 8853 8855 2 -22001.8 -7973.44 60.34 0.25 8854 8856 2 -22001.6 -7974.08 61.08 0.25 8855 8857 2 -22003 -7975.13 59.69 0.25 8856 8858 2 -22003.2 -7977.76 57.47 0.25 8857 8859 2 -22004.3 -7981.13 57.02 0.25 8858 8860 2 -22006.2 -7983.32 56.83 0.25 8859 8861 2 -22006.1 -7986.04 55.44 0.25 8860 8862 2 -22007.4 -7989 54.01 0.25 8861 8863 2 -22007.4 -7991.13 52.51 0.25 8862 8864 2 -22007.2 -7994.56 50.63 0.25 8863 8865 2 -22006.6 -7998.16 49.99 0.25 8864 8866 2 -22005.4 -8000.28 49.3 0.25 8865 8867 2 -22004.6 -8001.56 49.6 0.25 8866 8868 2 -22002.4 -8005.28 51.2 0.25 8867 8869 2 -22002.9 -8009.08 50.61 0.25 8868 8870 2 -22004.1 -8013.74 49.95 0.25 8869 8871 2 -22005.8 -8017.23 49.58 0.25 8870 8872 2 -22008.4 -8021.79 48.37 0.25 8871 8873 2 -22010.6 -8024.82 45.05 0.25 8872 8874 2 -22011.6 -8025.46 41.74 0.25 8873 8875 2 -22010.8 -8025.53 36.72 0.25 8874 8876 2 -22010.1 -8026.98 30.34 0.25 8875 8877 2 -22008.2 -8028.44 28.26 0.25 8876 8878 2 -22006.6 -8029.78 28.09 0.25 8877 8879 2 -22006.2 -8030.41 22.92 0.25 8878 8880 2 -22006.8 -8027.7 17.88 0.25 8879 8881 2 -22007.8 -8026.71 17.65 0.25 8880 8882 2 -22010 -8025.53 13.49 0.25 8881 8883 2 -22011.3 -8025.26 10.98 0.25 8882 8884 2 -22012.7 -8026.03 9.16 0.25 8883 8885 2 -22015 -8026.12 4.47 0.25 8884 8886 2 -22014.2 -8027.12 4.22 0.25 8885 8887 2 -22014.9 -8027.06 1.95 0.25 8886 8888 2 -22016.2 -8026.76 0.5 0.25 8887 8889 2 -22018.8 -8026.33 -4.04 0.25 8888 8890 2 -22019 -8027.98 -5.54 0.25 8889 8891 2 -22020.2 -8028.9 -8.97 0.25 8890 8892 2 -22021.1 -8029.37 -11.74 0.25 8891 8893 2 -22022.7 -8029.97 -15.83 0.25 8892 8894 2 -22024.1 -8031.56 -18.78 0.25 8893 8895 2 -22024.3 -8033.73 -19.14 0.25 8894 8896 2 -22024.2 -8034.71 -19.51 0.25 8895 8897 2 -22024.1 -8035.59 -25.25 0.25 8896 8898 2 -22023.2 -8038.33 -25.81 0.25 8897 8899 2 -22025.6 -8039.69 -35.36 0.25 8898 8900 2 -22026.6 -8040.81 -35.97 0.25 8899 8901 2 -22028.5 -8041.27 -36.85 0.25 8900 8902 2 -22029.7 -8041.07 -40.31 0.25 8901 8903 2 -22029.8 -8042.83 -42.85 0.25 8902 8904 2 -22031 -8044.86 -44.54 0.25 8903 8905 2 -22034.3 -8047.23 -47.62 0.25 8904 8906 2 -22035.6 -8047.58 -51.44 0.25 8905 8907 2 -22037.6 -8049.15 -54.95 0.25 8906 8908 2 -22041.4 -8054.08 -59.5 0.25 8907 8909 2 -22043.8 -8056.17 -62.08 0.25 8908 8910 2 -22046 -8059.81 -66.34 0.25 8909 8911 2 -22049.8 -8066.66 -70.98 0.25 8910 8912 2 -22056.1 -8069.19 -70.81 0.25 8911 8913 2 -22057.3 -8069.99 -75.22 0.25 8912 8914 2 -22058.5 -8070.68 -76.78 0.25 8913 8915 2 
-22059.4 -8072.63 -79.08 0.25 8914 8916 2 -22060 -8074.26 -79.45 0.25 8915 8917 2 -22060.4 -8076.92 -81.92 0.25 8916 8918 2 -22061.4 -8077.57 -85.09 0.25 8917 8919 2 -22061.2 -8078.94 -86.43 0.25 8918 8920 2 -22063 -8080.42 -87.79 0.25 8919 8921 2 -22064 -8081.34 -88.02 0.25 8920 8922 2 -22065 -8081.99 -88.33 0.25 8921 8923 2 -22068.4 -8082.01 -93.21 0.25 8922 8924 2 -22069.4 -8083.67 -95.5 0.25 8923 8925 2 -22070.1 -8084.55 -100.26 0.25 8924 8926 2 -22069.4 -8086.8 -100.69 0.25 8925 8927 2 -22069.8 -8087.5 -104.58 0.25 8926 8928 2 -22069.8 -8089.02 -105.22 0.25 8927 8929 2 -22069.8 -8090.22 -107.52 0.25 8928 8930 2 -22069.5 -8092.03 -109.46 0.25 8929 8931 2 -22070.1 -8092.73 -110.56 0.25 8930 8932 2 -22069.8 -8094.2 -111.26 0.25 8931 8933 2 -22070.5 -8095.66 -112.77 0.25 8932 8934 2 -22071.6 -8097.2 -116.04 0.25 8933 8935 2 -22072.5 -8097.89 -116.06 0.25 8934 8936 2 -22072.6 -8100.99 -121.36 0.25 8935 8937 2 -22073.8 -8102.03 -121.42 0.25 8936 8938 2 -22075.3 -8104.13 -123.27 0.25 8937 8939 2 -22076.2 -8105.71 -126.02 0.125 8938 8940 2 -22076.4 -8106.31 -130.51 0.125 8939 8941 2 -22076.7 -8107.17 -133.5 0.125 8940 8942 2 -22076.9 -8108.1 -134.66 0.125 8941 8943 2 -22078 -8108.04 -137.36 0.125 8942 8944 2 -22079 -8108.12 -145.39 0.125 8943 8945 2 -22079.3 -8108.95 -148.54 0.125 8944 8946 2 -22079.5 -8111.78 -146.92 0.125 8945 8947 2 -22084.1 -8116.24 -147.23 0.125 8946 8948 2 -22085.6 -8119.64 -152.5 0.125 8947 8949 2 -22085.8 -8121.78 -156.06 0.125 8948 8950 2 -22086.2 -8122.9 -156.21 0.125 8949 8951 2 -22085.4 -8122.73 -157.98 0.125 8950 8952 2 -22086.5 -8123.96 -159.57 0.125 8951 8953 2 -22086.7 -8125.18 -162.34 0.125 8952 8954 2 -22087.3 -8125.67 -167.65 0.125 8953 8955 2 -22087.7 -8127.22 -168.6 0.125 8954 8956 2 -22089.2 -8128.39 -172.37 0.125 8955 8957 2 -22089.9 -8129.39 -173.58 0.125 8956 8958 2 -22090.5 -8130.17 -178.77 0.125 8957 8959 2 -22090.1 -8131.88 -182.7 0.125 8958 8960 2 -22090.3 -8134.75 -183.52 0.125 8959 8961 2 -22091.2 -8135.78 -186.24 0.125 8960 8962 2 -22091.4 -8137.51 -189.01 0.125 8961 8963 2 -22092.3 -8138.79 -193.24 0.125 8962 8964 2 -22093.3 -8139.69 -198.37 0.125 8963 8965 2 -22094 -8142.73 -203.26 0.125 8964 8966 2 -22095.8 -8141.99 -215.06 0.125 8965 8967 2 -22094.8 -8144.27 -213.72 0.125 8966 8968 2 -22096.8 -8145.97 -213.81 0.125 8967 8969 2 -22096.7 -8146.71 -213.94 0.125 8968 8970 2 -22090.5 -8150.14 -213.33 0.125 8969 8971 2 -22090.7 -8151.5 -210.38 0.125 8970 8972 2 -22091.9 -8152.17 -216.62 0.125 8971 8973 2 -22096.9 -8142.2 -210.25 0.125 8966 8974 2 -22098.2 -8141.99 -212.55 0.125 8973 8975 2 -22099.3 -8141.76 -216.32 0.125 8974 8976 2 -22100.7 -8142.27 -216.28 0.125 8975 8977 2 -22102.8 -8141.76 -216.11 0.125 8976 8978 2 -22105.3 -8140.07 -215.46 0.125 8977 8979 2 -22098.3 -8144.5 -216.85 0.125 8978 8980 2 -22100.8 -8144.23 -223.07 0.125 8979 8981 2 -22100.3 -8143.68 -230.48 0.125 8980 8982 2 -22100.3 -8144.91 -232.73 0.125 8981 8983 2 -22102.3 -8144.9 -232.55 0.125 8982 8984 2 -22103.8 -8143.6 -239.39 0.125 8983 8985 2 -22105.4 -8143.37 -241.7 0.125 8984 8986 2 -22106 -8142.34 -244.89 0.125 8985 8987 2 -22106.7 -8141.24 -248.39 0.125 8986 8988 2 -22106.8 -8142.23 -251.7 0.125 8987 8989 2 -22106.6 -8143.11 -257.32 0.125 8988 8990 2 -22108.2 -8143.83 -257.37 0.125 8989 8991 2 -22110.3 -8144.01 -259.29 0.125 8990 8992 2 -22111.3 -8143.63 -260.6 0.125 8991 8993 2 -22112.4 -8144.41 -261.52 0.125 8992 8994 2 -22114.4 -8144.87 -263.48 0.125 8993 8995 2 -22115.8 -8146.34 -265.21 0.125 8994 8996 2 -22117.8 -8147 -267.3 0.125 8995 8997 2 -22118.7 -8148.11 
-269.34 0.125 8996 8998 2 -22119.5 -8148.69 -269.7 0.125 8997 8999 2 -22121.1 -8148.42 -272.32 0.125 8998 9000 2 -22121.9 -8149.42 -273.18 0.125 8999 9001 2 -22123 -8150.31 -273.46 0.125 9000 9002 2 -22123.3 -8150.8 -275.65 0.125 9001 9003 2 -22124.9 -8150.63 -277.67 0.125 9002 9004 2 -22125.6 -8150.95 -279.39 0.125 9003 9005 2 -22126.3 -8151.8 -279.68 0.125 9004 9006 2 -22127 -8152.07 -281.68 0.125 9005 9007 2 -22135.5 -8149.57 -280.47 0.125 9006 9008 2 -22136.7 -8149.88 -281.36 0.125 9007 9009 2 -22138.1 -8150.81 -283.51 0.125 9008 9010 2 -22140 -8152.12 -285.63 0.125 9009 9011 2 -22141.3 -8153.4 -287.41 0.125 9010 9012 2 -22142.9 -8152.73 -288.05 0.125 9011 9013 2 -22144.4 -8153.72 -291.54 0.125 9012 9014 2 -22145.8 -8155.57 -291.71 0.125 9013 9015 2 -22147.5 -8156.21 -298.96 0.125 9014 9016 2 -22149.8 -8156.49 -304.07 0.125 9015 9017 2 -22150.1 -8157.23 -306.26 0.125 9016 9018 2 -22150.5 -8159.42 -314.27 0.125 9017 9019 2 -22152.3 -8159.81 -321.42 0.125 9018 9020 2 -22153.3 -8160.77 -321.55 0.125 9019 9021 2 -22158.3 -8161.21 -328.26 0.125 9020 9022 2 -22160.3 -8161.3 -328.09 0.125 9021 9023 2 -22161.4 -8161.28 -332.47 0.125 9022 9024 2 -22162.5 -8163.04 -332.66 0.125 9023 9025 2 -22163.6 -8163.49 -337.37 0.125 9024 9026 2 -22164.2 -8164.04 -339.48 0.125 9025 9027 2 -22164.9 -8164.42 -342.45 0.125 9026 9028 2 -22166.1 -8164.88 -344.08 0.125 9027 9029 2 -22168 -8166.11 -346.74 0.125 9028 9030 2 -22170.2 -8167.62 -352.44 0.125 9029 9031 2 -22164.4 -8158.21 -351.43 0.125 9030 9032 2 -22166.4 -8159.66 -358.08 0.125 9031 9033 2 -22168.1 -8160.11 -360.78 0.125 9032 9034 2 -22170.1 -8160.78 -368.51 0.125 9033 9035 2 -22171 -8162.84 -373.26 0.125 9034 9036 2 -22171 -8163.65 -373.38 0.125 9035 9037 2 -22171.7 -8164.63 -373.33 0.125 9036 9038 2 -22173 -8164.72 -373.21 0.125 9037 9039 2 -22174.8 -8165.04 -375.09 0.125 9038 9040 2 -22176 -8164.9 -376.03 0.125 9039 9041 2 -22176.8 -8164.27 -376 0.125 9040 9042 2 -22177.7 -8162.63 -375.64 0.125 9041 9043 2 -22177.8 -8161.32 -375.42 0.125 9042 9044 2 -22177.5 -8159.48 -373.14 0.125 9043 9045 2 -22177.8 -8156.91 -372.68 0.125 9044 9046 2 -22177.5 -8156.03 -372.57 0.125 9045 9047 2 -22178.3 -8155.22 -373.48 0.125 9046 9048 2 -22178.4 -8152.6 -374.34 0.125 9047 9049 2 -22178.7 -8151.86 -375.68 0.125 9048 9050 2 -22179 -8151.1 -377.22 0.125 9049 9051 2 -22180.6 -8150.75 -379.75 0.125 9050 9052 2 -22181.9 -8151.19 -378.95 0.125 9051 9053 2 -22182.8 -8150.92 -378.65 0.125 9052 9054 2 -22185 -8150.6 -380.27 0.125 9053 9055 2 -22188.5 -8150.69 -381.23 0.125 9054 9056 2 -22190 -8150.04 -380.98 0.125 9055 9057 2 -22147.9 -8157.01 -307.11 0.125 9017 9058 2 -22147.5 -8156.04 -307.5 0.125 9057 9059 2 -22148.5 -8154.45 -309.6 0.125 9058 9060 2 -22148.9 -8153.98 -309.48 0.125 9059 9061 2 -22148.2 -8151.32 -312.78 0.125 9060 9062 2 -22145 -8151.11 -312.55 0.125 9061 9063 2 -22143 -8151.08 -312.47 0.125 9062 9064 2 -22140.5 -8150.64 -312.63 0.125 9063 9065 2 -22138.6 -8148.68 -312.48 0.125 9064 9066 2 -22136.5 -8148.83 -312.7 0.125 9065 9067 2 -22132.4 -8149.7 -313.23 0.125 9066 9068 2 -22130.4 -8148.25 -313.2 0.125 9067 9069 2 -22128.3 -8147.75 -315.61 0.125 9068 9070 2 -22124.7 -8147.93 -317.32 0.125 9069 9071 2 -22122.1 -8147.34 -316.75 0.125 9070 9072 2 -22071.3 -8084.01 -95.38 0.125 8924 9073 2 -22071.7 -8083.54 -95.27 0.125 9072 9074 2 -22071.8 -8082.79 -95.13 0.125 9073 9075 2 -22074.3 -8081.24 -98.65 0.125 9074 9076 2 -22075.1 -8080.32 -98.42 0.125 9075 9077 2 -22075.8 -8079.12 -98.17 0.125 9076 9078 2 -22076.3 -8078.15 -98.11 0.125 9077 9079 2 -22077 
-8076.56 -99.97 0.125 9078 9080 2 -22078.1 -8075.77 -100.97 0.125 9079 9081 2 -22079.3 -8075.11 -101.22 0.125 9080 9082 2 -22080.8 -8074.96 -102.05 0.125 9081 9083 2 -22082.4 -8075 -103.34 0.125 9082 9084 2 -22083.1 -8074.26 -103.62 0.125 9083 9085 2 -22084.1 -8073.76 -104.35 0.125 9084 9086 2 -22085.9 -8073.22 -104.6 0.125 9085 9087 2 -22089.1 -8071.31 -104.86 0.125 9086 9088 2 -22090.7 -8070.62 -105.58 0.125 9087 9089 2 -22093.6 -8069.29 -106.71 0.125 9088 9090 2 -22095.4 -8069.2 -107.67 0.125 9089 9091 2 -22097.3 -8068.73 -108.97 0.125 9090 9092 2 -22097.7 -8067.96 -109.13 0.125 9091 9093 2 -22098.7 -8067.21 -109.7 0.125 9092 9094 2 -22100.5 -8066.15 -111.27 0.125 9093 9095 2 -22100.5 -8064.58 -111.01 0.125 9094 9096 2 -22102.4 -8064.93 -110.89 0.125 9095 9097 2 -22104.3 -8064.3 -111.82 0.125 9096 9098 2 -22104.9 -8062.81 -112.87 0.125 9097 9099 2 -22106.2 -8062.05 -114.06 0.125 9098 9100 2 -22107.5 -8060.9 -117.15 0.125 9099 9101 2 -22109.1 -8060.09 -116.97 0.125 9100 9102 2 -22110.6 -8059.48 -117.56 0.125 9101 9103 2 -22111.7 -8058.61 -117.32 0.125 9102 9104 2 -22113.5 -8058.12 -117.33 0.125 9103 9105 2 -22116 -8055.49 -117.57 0.125 9104 9106 2 -22117.4 -8054.26 -118.3 0.125 9105 9107 2 -22119 -8053.22 -122.65 0.125 9106 9108 2 -22120.6 -8052.71 -124.19 0.125 9107 9109 2 -22122.6 -8051.61 -126.02 0.125 9108 9110 2 -22123.9 -8051.01 -126.23 0.125 9109 9111 2 -22126.2 -8050.26 -129.58 0.125 9110 9112 2 -22128 -8050.05 -129.38 0.125 9111 9113 2 -22130.1 -8051.99 -129.5 0.125 9112 9114 2 -22132.5 -8051.81 -129.35 0.125 9113 9115 2 -22133.9 -8052.27 -131.19 0.125 9114 9116 2 -22135.3 -8052.51 -131.1 0.125 9115 9117 2 -22137.3 -8052.54 -131.03 0.125 9116 9118 2 -22138.6 -8052.44 -132.51 0.125 9117 9119 2 -22140.4 -8051.7 -132.22 0.125 9118 9120 2 -22141.9 -8051.64 -133.77 0.125 9119 9121 2 -22145.2 -8050.94 -137.51 0.125 9120 9122 2 -22147.6 -8052.11 -137.48 0.125 9121 9123 2 -22150.1 -8051.75 -140.13 0.125 9122 9124 2 -22151.3 -8050.12 -141.1 0.125 9123 9125 2 -22153.8 -8048.98 -141.88 0.125 9124 9126 2 -22156.2 -8047.03 -147.33 0.125 9125 9127 2 -22157.9 -8047.29 -147.22 0.125 9126 9128 2 -22159.7 -8047.16 -147.93 0.125 9127 9129 2 -22161.1 -8045.53 -147.54 0.125 9128 9130 2 -22161.7 -8045.21 -148.3 0.125 9129 9131 2 -22163.3 -8045.59 -150.58 0.125 9130 9132 2 -22165.5 -8044.9 -153.23 0.125 9131 9133 2 -22166.4 -8045.23 -153.67 0.125 9132 9134 2 -22167.7 -8044.42 -154.8 0.125 9133 9135 2 -22169 -8043.67 -155.54 0.125 9134 9136 2 -22169.5 -8043.62 -157.7 0.125 9135 9137 2 -22171.4 -8041.47 -163.61 0.125 9136 9138 2 -22172.6 -8041.07 -163.73 0.125 9137 9139 2 -22173.2 -8038.24 -166.31 0.125 9138 9140 2 -22173.5 -8037.07 -166.95 0.125 9139 9141 2 -22174.3 -8035.73 -167.59 0.125 9140 9142 2 -22175.3 -8033.61 -169.47 0.125 9141 9143 2 -22177.8 -8032.87 -169.6 0.125 9142 9144 2 -22179 -8032.28 -169.38 0.125 9143 9145 2 -22181.5 -8030.87 -168.91 0.125 9144 9146 2 -22181.8 -8030.36 -168.8 0.125 9145 9147 2 -22182.6 -8028.97 -168.1 0.125 9146 9148 2 -22012.2 -8028.3 46.24 0.25 8873 9149 2 -22012.6 -8031.16 45.24 0.25 9148 9150 2 -22013.7 -8033.59 44.13 0.25 9149 9151 2 -22014.4 -8038.79 40.68 0.25 9150 9152 2 -22017 -8039.33 38.36 0.25 9151 9153 2 -22019.2 -8043.17 33.35 0.25 9152 9154 2 -22022.4 -8048.29 31.84 0.25 9153 9155 2 -22024 -8051.42 28.03 0.25 9154 9156 2 -22026.7 -8054.47 27.46 0.25 9155 9157 2 -22027.8 -8056.23 27.27 0.25 9156 9158 2 -22033.5 -8072.68 24.21 0.25 9157 9159 2 -22036 -8074.7 19.81 0.25 9158 9160 2 -22036.5 -8077.13 19.22 0.25 9159 9161 2 -22037.2 -8080.58 16.53 0.25 9160 
9162 2 -22040.3 -8084.4 13.8 0.25 9161 9163 2 -22041.7 -8086.73 13.28 0.25 9162 9164 2 -22042.7 -8090.04 11.12 0.25 9163 9165 2 -22042.8 -8093.19 8.99 0.25 9164 9166 2 -22042.6 -8097.18 5.67 0.25 9165 9167 2 -22042.3 -8100.28 5.17 0.125 9166 9168 2 -22043 -8103.55 3.07 0.125 9167 9169 2 -22044.7 -8105.33 -0.74 0.125 9168 9170 2 -22045.2 -8109.08 -1.32 0.125 9169 9171 2 -22045.3 -8111.72 -1.86 0.125 9170 9172 2 -22045.5 -8113.06 -3.62 0.125 9171 9173 2 -22045.2 -8114.42 -3.48 0.125 9172 9174 2 -22046.4 -8117.44 -4.31 0.125 9173 9175 2 -22046.2 -8119.5 -6.3 0.125 9174 9176 2 -22047.2 -8122.38 -7.71 0.125 9175 9177 2 -22048.1 -8125.59 -8.89 0.125 9176 9178 2 -22050 -8131.65 -11.63 0.125 9177 9179 2 -22050.3 -8135.39 -12.23 0.125 9178 9180 2 -22054.9 -8141.23 -12.79 0.125 9179 9181 2 -22055.9 -8150.92 -15.74 0.125 9180 9182 2 -22055.5 -8153.74 -16.25 0.125 9181 9183 2 -22043.6 -8100.09 7.5 0.125 9166 9184 2 -22045.4 -8102.56 4.38 0.125 9183 9185 2 -22049.2 -8104.58 1.4 0.125 9184 9186 2 -22058.5 -8113.76 3.18 0.125 9185 9187 2 -22061.2 -8117.67 2.79 0.125 9186 9188 2 -22065.7 -8119.78 2.85 0.125 9187 9189 2 -22071.5 -8122.83 5.28 0.125 9188 9190 2 -22075.7 -8126.73 5.02 0.125 9189 9191 2 -22079.6 -8130.33 4.8 0.125 9190 9192 2 -22080.9 -8132.69 4.53 0.125 9191 9193 2 -21999.6 -7976.3 60.29 0.125 8856 9194 2 -21998.9 -7978.59 60.11 0.125 9193 9195 2 -21998.9 -7981.25 61.35 0.125 9194 9196 2 -21999.1 -7983.67 61.12 0.125 9195 9197 2 -21998.3 -7986.54 61.01 0.125 9196 9198 2 -21998.2 -7987.78 59.12 0.125 9197 9199 2 -21998.8 -7988.49 56.4 0.125 9198 9200 2 -21999.9 -7990.53 56.16 0.125 9199 9201 2 -21999.9 -7991.85 55.95 0.125 9200 9202 2 -21999.5 -7994.52 54.5 0.125 9201 9203 2 -21999.6 -7996.77 56.51 0.125 9202 9204 2 -21999.2 -7998.72 56.83 0.125 9203 9205 2 -21998.3 -8002.06 56.76 0.125 9204 9206 2 -21997.1 -8003.86 57.44 0.125 9205 9207 2 -21996.7 -8006.04 57.77 0.125 9206 9208 2 -21995.4 -8008.22 58.98 0.125 9207 9209 2 -21995.2 -8009.56 59.11 0.125 9208 9210 2 -21987.6 -8015.72 58.21 0.125 9209 9211 2 -21996.4 -8010.85 53.14 0.125 9209 9212 2 -21996.5 -8014.17 51.89 0.125 9211 9213 2 -21998.3 -8016.61 51.66 0.125 9212 9214 2 -21999.8 -8017.5 49.32 0.125 9213 9215 2 -22001 -8019.08 47.89 0.125 9214 9216 2 -22000.8 -8020.47 46.71 0.125 9215 9217 2 -22000.9 -8021.93 45.69 0.125 9216 9218 2 -21999.8 -8025.56 44.4 0.125 9217 9219 2 -21999.2 -8022.98 42.59 0.125 9218 9220 2 -21999.1 -8021.66 42.8 0.125 9219 9221 2 -21998.8 -7974.52 64.48 0.125 8854 9222 2 -21997.7 -7976.93 65.44 0.125 9221 9223 2 -21998 -7975.68 64.16 0.125 9222 9224 2 -21997 -7977.12 66.98 0.125 9223 9225 2 -21995.8 -7981.33 70.49 0.125 9224 9226 2 -21995.2 -7982.82 70.19 0.125 9225 9227 2 -21995 -7985.47 76.18 0.125 9226 9228 2 -21994 -7988.38 82.63 0.125 9227 9229 2 -21997.3 -7987.65 84.02 0.125 9228 9230 2 -21997.1 -7989.2 83.75 0.125 9229 9231 2 -21996.6 -7990.49 83.61 0.125 9230 9232 2 -21996.5 -7992.78 84.48 0.125 9231 9233 2 -21996.2 -7995.35 88.36 0.125 9232 9234 2 -21996 -7997.15 87.81 0.125 9233 9235 2 -21996.7 -8001.86 92.28 0.125 9234 9236 2 -21997.4 -8004.48 94.33 0.125 9235 9237 2 -21998.3 -8006.86 96.32 0.125 9236 9238 2 -21998.8 -8008.66 98.47 0.125 9237 9239 2 -21999.4 -8008.68 93.68 0.125 9238 9240 2 -21999 -8010.67 95.79 0.125 9239 9241 2 -21999.5 -8012.71 96.14 0.125 9240 9242 2 -22000.5 -8013.44 96.57 0.125 9241 9243 2 -22000.2 -8015.42 100.2 0.125 9242 9244 2 -22001.7 -8016.36 102.93 0.125 9243 9245 2 -22003.3 -8016.59 103.05 0.125 9244 9246 2 -22005.3 -8016.86 103.19 0.125 9245 9247 2 -22007.7 -8016.87 
104.48 0.125 9246 9248 2 -22010.5 -8017.3 104.94 0.125 9247 9249 2 -22012.7 -8018.43 108.42 0.125 9248 9250 2 -22014.9 -8018.76 108.58 0.125 9249 9251 2 -22018.1 -8020 113.28 0.125 9250 9252 2 -22020.5 -8021.13 113.31 0.125 9251 9253 2 -22022.7 -8021.96 113.38 0.125 9252 9254 2 -22025.3 -8024.11 115.92 0.125 9253 9255 2 -22027.4 -8025.75 115.85 0.125 9254 9256 2 -22029 -8029.63 121.32 0.125 9255 9257 2 -22030.9 -8032.01 122.71 0.125 9256 9258 2 -22032.2 -8034.3 123.89 0.125 9257 9259 2 -22033.8 -8035.31 123.87 0.125 9258 9260 2 -22035.3 -8037.22 127.34 0.125 9259 9261 2 -22039.2 -8039.25 128.39 0.125 9260 9262 2 -22040.7 -8040.33 128.69 0.125 9261 9263 2 -22041 -8040.1 128.76 0.125 9262 9264 2 -22043.8 -8043.62 131.34 0.125 9263 9265 2 -22045.1 -8045.25 133.23 0.125 9264 9266 2 -22046 -8046.31 135.63 0.125 9265 9267 2 -22047.5 -8047.44 136.2 0.125 9266 9268 2 -22048.5 -8047.83 136.24 0.125 9267 9269 2 -22049.7 -8050.14 136.15 0.125 9268 9270 2 -22058.4 -8039.65 138.71 0.125 9269 9271 2 -22060.6 -8041.04 141.16 0.125 9270 9272 2 -22063.8 -8042.07 142 0.125 9271 9273 2 -22065.7 -8043.45 147.62 0.125 9272 9274 2 -22068.7 -8044.89 151.08 0.125 9273 9275 2 -22070.9 -8046.26 153.41 0.125 9274 9276 2 -22072.8 -8046.38 153.58 0.125 9275 9277 2 -22074.2 -8047.34 155.93 0.125 9276 9278 2 -22077.6 -8048.95 161.38 0.125 9277 9279 2 -22078.3 -8049.65 161.47 0.125 9278 9280 2 -22080.5 -8051.21 161.55 0.125 9279 9281 2 -21935.8 -7647.97 100.91 0.125 8772 9282 2 -21933.4 -7651.87 100.59 0.125 9281 9283 2 -21930.9 -7654.8 99.59 0.125 9282 9284 2 -21930.1 -7656.49 99.25 0.125 9283 9285 2 -21929.1 -7658.46 100.66 0.125 9284 9286 2 -21926.3 -7661.01 100.64 0.125 9285 9287 2 -21925.5 -7662.68 100.29 0.125 9286 9288 2 -21924.6 -7664.14 101.63 0.125 9287 9289 2 -21923.9 -7664.82 101.45 0.125 9288 9290 2 -21921 -7666.94 100.89 0.125 9289 9291 2 -21919.5 -7669.05 100.41 0.125 9290 9292 2 -21918.5 -7670.42 100.08 0.125 9291 9293 2 -21917.5 -7672.88 99.58 0.125 9292 9294 2 -21915.5 -7674.42 99.14 0.125 9293 9295 2 -21914.5 -7675.5 98.87 0.125 9294 9296 2 -21911.4 -7677.09 98.32 0.125 9295 9297 2 -21910 -7677.38 98.13 0.125 9296 9298 2 -21907.8 -7679.11 99.36 0.125 9297 9299 2 -21906.3 -7681.5 98.89 0.125 9298 9300 2 -21904.8 -7682.54 98.57 0.125 9299 9301 2 -21903 -7683.26 98.29 0.125 9300 9302 2 -21902.5 -7684.24 98.08 0.125 9301 9303 2 -21900.3 -7685.26 98.19 0.125 9302 9304 2 -21897.3 -7687.57 98.76 0.125 9303 9305 2 -21893.5 -7689.86 98.13 0.125 9304 9306 2 -21891.3 -7689.94 97.9 0.125 9305 9307 2 -21889.6 -7691.75 97.44 0.125 9306 9308 2 -21887.6 -7693.25 97.03 0.125 9307 9309 2 -21885.3 -7694.44 98.2 0.125 9308 9310 2 -21882.7 -7696.38 97.83 0.125 9309 9311 2 -21881.5 -7698.65 98.11 0.125 9310 9312 2 -21879.9 -7701.32 97.61 0.125 9311 9313 2 -21877.9 -7705.73 98.37 0.125 9312 9314 2 -21876.8 -7708.64 98.85 0.125 9313 9315 2 -21873.1 -7710.59 98.14 0.125 9314 9316 2 -21871.8 -7711.99 96.49 0.125 9315 9317 2 -21870.3 -7714.33 95.96 0.125 9316 9318 2 -21865.3 -7715.96 99.02 0.125 9317 9319 2 -21862.8 -7717.1 98.61 0.125 9318 9320 2 -21859.4 -7720.28 98.18 0.125 9319 9321 2 -21856.7 -7724.8 97.21 0.125 9320 9322 2 -21853.2 -7727.19 95.78 0.125 9321 9323 2 -21851.4 -7731.46 95.51 0.125 9322 9324 2 -21847.4 -7734.25 96.55 0.125 9323 9325 2 -21845.4 -7737.82 95.66 0.125 9324 9326 2 -21842.3 -7739.63 95.06 0.125 9325 9327 2 -21839.7 -7743.37 94.2 0.125 9326 9328 2 -21837.9 -7747.31 93.38 0.125 9327 9329 2 -21836.1 -7749.33 92.81 0.125 9328 9330 2 -21833.2 -7751.74 92.12 0.125 9329 9331 2 -21827.4 -7755.7 90.93 0.125 
9330 9332 2 -21823.6 -7756.86 90.38 0.125 9331 9333 2 -21823.3 -7760.25 89.9 0.125 9332 9334 2 -21821.8 -7763.82 89.86 0.125 9333 9335 2 -21819.2 -7764.39 89.51 0.125 9334 9336 2 -21818 -7767.88 88.83 0.125 9335 9337 2 -21816.2 -7770.69 88.19 0.125 9336 9338 2 -21817.2 -7772.99 87.84 0.125 9337 9339 2 -21895.5 -7465.55 111.53 0.125 8655 9340 2 -21894.8 -7469.06 109.09 0.125 9339 9341 2 -21892.9 -7473.84 107.37 0.125 9340 9342 2 -21889.8 -7474.65 103.95 0.125 9341 9343 2 -21887.6 -7477.25 101.01 0.125 9342 9344 2 -21886 -7478.89 97.94 0.125 9343 9345 2 -21883 -7480.04 93.47 0.125 9344 9346 2 -21880.9 -7483.01 90.92 0.125 9345 9347 2 -21880.4 -7484.98 90.31 0.125 9346 9348 2 -21877.2 -7490.18 85.86 0.125 9347 9349 2 -21875.3 -7495.4 83.61 0.125 9348 9350 2 -21872.7 -7498.87 82.56 0.125 9349 9351 2 -21871.1 -7500.24 82.53 0.125 9350 9352 2 -21871.1 -7503.58 81.48 0.125 9351 9353 2 -21871.6 -7506.16 81.11 0.125 9352 9354 2 -21871.4 -7507.58 76.92 0.125 9353 9355 2 -21870.8 -7511.65 75.85 0.125 9354 9356 2 -21869.1 -7516.28 72.88 0.125 9355 9357 2 -21867.9 -7519.85 73.95 0.125 9356 9358 2 -21865.9 -7523.33 69.19 0.125 9357 9359 2 -21865.4 -7526.13 69.96 0.125 9358 9360 2 -21864 -7528.79 67.74 0.125 9359 9361 2 -21862.5 -7531.32 66.56 0.125 9360 9362 2 -21861.4 -7533.11 65.27 0.125 9361 9363 2 -21860.7 -7534.76 64.48 0.125 9362 9364 2 -21858.5 -7535.73 64.11 0.125 9363 9365 2 -21857.7 -7536.12 62.42 0.125 9364 9366 2 -21856.8 -7537.16 61.48 0.125 9365 9367 2 -21856.3 -7538.98 59.79 0.125 9366 9368 2 -21854.4 -7542.28 58.48 0.125 9367 9369 2 -21853.4 -7544.23 56.53 0.125 9368 9370 2 -21852.8 -7546.24 56.06 0.125 9369 9371 2 -21851.6 -7547.62 54.25 0.125 9370 9372 2 -21851.1 -7548.89 54 0.125 9371 9373 2 -21849.7 -7550.93 52.97 0.125 9372 9374 2 -21848.1 -7553.73 48.69 0.125 9373 9375 2 -21848 -7556.04 47.91 0.125 9374 9376 2 -21848.5 -7558.16 44.2 0.125 9375 9377 2 -21846.9 -7561.28 43.27 0.125 9376 9378 2 -21846.5 -7562.56 43.18 0.125 9377 9379 2 -21845.3 -7564.84 41.6 0.125 9378 9380 2 -21844 -7568.31 40.86 0.125 9379 9381 2 -21842.8 -7570.26 40.41 0.125 9380 9382 2 -21842 -7572.17 39.81 0.125 9381 9383 2 -21841.4 -7573.29 37.49 0.125 9382 9384 2 -21840 -7574.99 36.04 0.125 9383 9385 2 -21838.4 -7578.2 35.36 0.125 9384 9386 2 -21837.7 -7579.77 34.15 0.125 9385 9387 2 -21837.3 -7580.92 33.44 0.125 9386 9388 2 -21836.2 -7585.42 30.56 0.125 9387 9389 2 -21835.3 -7586.23 27.97 0.125 9388 9390 2 -21833.4 -7588.31 26 0.125 9389 9391 2 -21832.3 -7590.5 25.53 0.125 9390 9392 2 -21831.4 -7592.44 25.13 0.125 9391 9393 2 -21830.7 -7594.98 24.64 0.125 9392 9394 2 -21829.7 -7596.13 24.36 0.125 9393 9395 2 -21828.1 -7596.45 25.89 0.125 9394 9396 2 -21827.3 -7598.43 25.43 0.125 9395 9397 2 -21826.3 -7600.2 27.08 0.125 9396 9398 2 -21825.6 -7601.86 24.75 0.125 9397 9399 2 -21824.6 -7602.77 24.51 0.125 9398 9400 2 -21824.1 -7604.25 24.23 0.125 9399 9401 2 -21824.1 -7606.38 23.85 0.125 9400 9402 2 -21824 -7607.58 21.51 0.125 9401 9403 2 -21821.9 -7613.19 22.82 0.125 9402 9404 2 -21819.8 -7616.55 20.52 0.125 9403 9405 2 -21820 -7618.97 20.14 0.125 9404 9406 2 -21818.1 -7620.25 19.75 0.125 9405 9407 2 -21815.6 -7622.31 18.16 0.125 9406 9408 2 -21811.4 -7626.07 16.82 0.125 9407 9409 2 -21808.9 -7630.21 17.59 0.125 9408 9410 2 -21808.2 -7635.98 13.99 0.125 9409 9411 2 -21806.6 -7637.84 11.96 0.125 9410 9412 2 -21805.1 -7642.55 10.78 0.125 9411 9413 2 -21802.4 -7646.61 11.31 0.125 9412 9414 2 -21799.8 -7650.94 10.35 0.125 9413 9415 2 -21798.3 -7653.93 11.52 0.125 9414 9416 2 -21797.2 -7656.11 11.1 0.125 9415 9417 2 
-21794.7 -7659.83 9.51 0.125 9416 9418 2 -21792.9 -7663.15 6.87 0.125 9417 9419 2 -21790.7 -7667.17 8.3 0.125 9418 9420 2 -21790.6 -7672 6.51 0.125 9419 9421 2 -21788.4 -7676.13 3.99 0.125 9420 9422 2 -21785.5 -7682.5 2.55 0.125 9421 9423 2 -21782.5 -7685.5 0.48 0.125 9422 9424 2 -21781.5 -7688.46 -0.48 0.125 9423 9425 2 -21779.6 -7694.19 -1.81 0.125 9424 9426 2 -21776 -7698.02 -4.74 0.125 9425 9427 2 -21775.4 -7702.5 -6.69 0.125 9426 9428 2 -21775.4 -7706.41 -7.59 0.125 9427 9429 2 -21776.9 -7708.52 -9.31 0.125 9428 9430 2 -21776.5 -7712.8 -9.3 0.125 9429 9431 2 -21775.1 -7717.1 -8.57 0.125 9430 9432 2 -21774.1 -7720.84 -10.97 0.125 9431 9433 2 -21773.9 -7722.63 -11.69 0.125 9432 9434 2 -21772.9 -7724.06 -13.42 0.125 9433 9435 2 -21772.6 -7726.05 -14.16 0.125 9434 9436 2 -21771.5 -7730.07 -15.04 0.125 9435 9437 2 -21769.9 -7733.02 -15.67 0.125 9436 9438 2 -21769 -7735.21 -20.67 0.125 9437 9439 2 -21769.5 -7738.54 -23.94 0.125 9438 9440 2 -21770.1 -7739.88 -27.39 0.125 9439 9441 2 -21769.2 -7741.16 -28.89 0.125 9440 9442 2 -21767.2 -7741.35 -30.49 0.125 9441 9443 2 -21765.2 -7741.88 -30.38 0.125 9442 9444 2 -21763.1 -7742.92 -27.4 0.125 9443 9445 2 -21762 -7745.57 -25.43 0.125 9444 9446 2 -21761.6 -7747.74 -25.26 0.125 9445 9447 2 -21761.1 -7749.79 -27.04 0.125 9446 9448 2 -21760.4 -7754.72 -29.35 0.125 9447 9449 2 -21759.1 -7756.32 -29.75 0.125 9448 9450 2 -21757.3 -7758.69 -30.3 0.125 9449 9451 2 -21757.6 -7760.41 -30.01 0.125 9450 9452 2 -21757.8 -7762.56 -30.34 0.125 9451 9453 2 -21757.3 -7767.77 -31.35 0.125 9452 9454 2 -21757.4 -7771.41 -32.28 0.125 9453 9455 2 -21758.1 -7775.98 -33.37 0.125 9454 9456 2 -21758.2 -7779.15 -33.9 0.125 9455 9457 2 -21757.1 -7783.03 -35.61 0.125 9456 9458 2 -21756 -7787.03 -38.21 0.125 9457 9459 2 -21754.6 -7790.95 -39.47 0.125 9458 9460 2 -21753.1 -7793.87 -40.1 0.125 9459 9461 2 -21752.4 -7794.82 -40.32 0.125 9460 9462 2 -21750.7 -7796.63 -40.78 0.125 9461 9463 2 -21749.3 -7798.29 -41.18 0.125 9462 9464 2 -21749.5 -7802.8 -41.91 0.125 9463 9465 2 -21748.2 -7807.19 -42.19 0.125 9464 9466 2 -21746.8 -7814.28 -42.5 0.125 9465 9467 2 -21745.3 -7817.48 -41.79 0.125 9466 9468 2 -21744.2 -7821.77 -42.5 0.125 9467 9469 2 -21742 -7826.48 -43.47 0.125 9468 9470 2 -21740.8 -7829.17 -44.02 0.125 9469 9471 2 -21739.5 -7833.03 -44.25 0.125 9470 9472 2 -21737.5 -7840.99 -45.1 0.125 9471 9473 2 -21736.5 -7845.37 -45.85 0.125 9472 9474 2 -21735.5 -7848.38 -46.32 0.125 9473 9475 2 -21737.1 -7848.95 -46.09 0.125 9474 9476 2 -21735 -7851.78 -46.76 0.125 9475 9477 2 -21735.7 -7854.28 -47.11 0.125 9476 9478 2 -21735 -7856.64 -47.17 0.125 9477 9479 2 -21734.4 -7860.39 -46.9 0.125 9478 9480 2 -21734.1 -7862.7 -47.27 0.125 9479 9481 2 -21733.8 -7864.78 -47.65 0.125 9480 9482 2 -21734.9 -7866.28 -47.79 0.125 9481 9483 2 -21733.8 -7868.2 -48.21 0.125 9482 9484 2 -21733.2 -7870.49 -48.64 0.125 9483 9485 2 -21733.9 -7877.41 -50.2 0.125 9484 9486 2 -21733 -7879.14 -50.56 0.125 9485 9487 2 -21732.3 -7882.19 -51.14 0.125 9486 9488 2 -21731.7 -7886.3 -51.89 0.125 9487 9489 2 -21731.4 -7890.71 -54.4 0.125 9488 9490 2 -21730.6 -7896.06 -56.02 0.125 9489 9491 2 -21727.2 -7909.47 -58.73 0.125 9490 9492 2 -21724.6 -7914.4 -57.73 0.125 9491 9493 2 -21723 -7917.62 -58.3 0.125 9492 9494 2 -21721.2 -7922.39 -60.5 0.125 9493 9495 2 -21720.1 -7923.77 -59.51 0.125 9494 9496 2 -21719.8 -7926.33 -61.5 0.125 9495 9497 2 -21717.5 -7932.19 -61.98 0.125 9496 9498 2 -21716.9 -7933.97 -62.33 0.125 9497 9499 2 -21715.8 -7937.24 -62.97 0.125 9498 9500 2 -21714.5 -7941.15 -61.14 0.125 9499 9501 2 -21713.7 
-7944.98 -61.85 0.125 9500 9502 2 -21713.5 -7946.35 -63.3 0.125 9501 9503 2 -21711.7 -7949.16 -64.04 0.125 9502 9504 2 -21710 -7951.31 -64.56 0.125 9503 9505 2 -21708.8 -7955.09 -65.29 0.125 9504 9506 2 -21708.3 -7957.11 -65.68 0.125 9505 9507 2 -21706.2 -7959.56 -66.95 0.125 9506 9508 2 -21701.8 -7961.36 -65.52 0.125 9507 9509 2 -21699.3 -7963.04 -66.03 0.125 9508 9510 2 -21697.9 -7964.95 -66.48 0.125 9509 9511 2 -21696.4 -7967.6 -67.06 0.125 9510 9512 2 -21695.2 -7969.79 -67.53 0.125 9511 9513 2 -21693.6 -7971.11 -67.9 0.125 9512 9514 2 -21692 -7974.29 -68.58 0.125 9513 9515 2 -21689.5 -7975.94 -64.89 0.125 9514 9516 2 -21688.1 -7979.17 -65.55 0.125 9515 9517 2 -21686.5 -7982.32 -66.22 0.125 9516 9518 2 -21685.6 -7984.3 -66.63 0.125 9517 9519 2 -21683.3 -7984.98 -66.88 0.125 9518 9520 2 -21681.9 -7988.75 -68.93 0.125 9519 9521 2 -21680.5 -7990.7 -69.11 0.125 9520 9522 2 -21678.8 -7993.89 -68.01 0.125 9521 9523 2 -21679.1 -7997.13 -68.51 0.125 9522 9524 2 -21678 -8000.4 -69.15 0.125 9523 9525 2 -21676.2 -8002.42 -66.87 0.125 9524 9526 2 -21674.2 -8004.46 -67.4 0.125 9525 9527 2 -21671.8 -8007.04 -67.85 0.125 9526 9528 2 -21669.2 -8010.46 -67.86 0.125 9527 9529 2 -21667.1 -8013.3 -68.53 0.125 9528 9530 2 -21664.4 -8016.74 -69.88 0.125 9529 9531 2 -21664 -8017.46 -70.04 0.125 9530 9532 2 -21661.3 -8020.09 -71.45 0.125 9531 9533 2 -21660.4 -8023.86 -70.8 0.125 9532 9534 2 -21658.5 -8025.14 -71.16 0.125 9533 9535 2 -21656.5 -8027.29 -71.2 0.125 9534 9536 2 -21655.5 -8030.98 -71.04 0.125 9535 9537 2 -21654.2 -8032.79 -71.93 0.125 9536 9538 2 -21653.1 -8033.41 -75.16 0.125 9537 9539 2 -21651.9 -8036.7 -77.08 0.125 9538 9540 2 -21649.5 -8038.66 -76.27 0.125 9539 9541 2 -21646 -8042.1 -75.35 0.125 9540 9542 2 -21645.2 -8045.14 -78.97 0.125 9541 9543 2 -21642.5 -8048 -78.88 0.125 9542 9544 2 -21642.2 -8051.96 -79.56 0.125 9543 9545 2 -21639.8 -8055.29 -78.7 0.125 9544 9546 2 -21638.6 -8058.2 -76.58 0.125 9545 9547 2 -21636.8 -8061.43 -78.39 0.125 9546 9548 2 -21634.1 -8064.25 -78.54 0.125 9547 9549 2 -21632.2 -8066.63 -77.41 0.125 9548 9550 2 -21631.3 -8068.7 -77.24 0.125 9549 9551 2 -21629.7 -8071.53 -76.79 0.125 9550 9552 2 -21628.2 -8073.31 -76.23 0.125 9551 9553 2 -21625.9 -8075.03 -75.36 0.125 9552 9554 2 -21624.8 -8076.74 -75.45 0.125 9553 9555 2 -21622.8 -8079.5 -74.98 0.125 9554 9556 2 -21621.1 -8081.62 -75.49 0.125 9555 9557 2 -21619.4 -8086.5 -74.29 0.125 9556 9558 2 -21618.9 -8089.43 -74.44 0.125 9557 9559 2 -21618.4 -8090.39 -77.5 0.125 9558 9560 2 -21615.5 -8092.79 -73.81 0.125 9559 9561 2 -21612.8 -8095.15 -73.48 0.125 9560 9562 2 -21611.4 -8096.62 -73.39 0.125 9561 9563 2 -21610.1 -8099.23 -73.07 0.125 9562 9564 2 -21608.8 -8099.4 -75.43 0.125 9563 9565 2 -21607 -8100.94 -77.31 0.125 9564 9566 2 -21604.5 -8103.83 -77.46 0.125 9565 9567 2 -21603.7 -8105.46 -78.26 0.125 9566 9568 2 -21603.6 -8107.42 -76.31 0.125 9567 9569 2 -21604.3 -8109.41 -75.06 0.125 9568 9570 2 -21603.5 -8112.19 -66.35 0.125 9569 9571 2 -21602.6 -8114.55 -66.23 0.125 9570 9572 2 -21601.3 -8116.96 -65.33 0.125 9571 9573 2 -21600.9 -8119.76 -65.99 0.125 9572 9574 3 -21923.2 -7391.08 203.58 1.485 1 9575 3 -21924 -7391.75 203.55 1.485 9574 9576 3 -21925 -7392.75 194.9 0.745 9575 9577 3 -21926.5 -7394.41 190.73 0.62 9576 9578 3 -21927.9 -7395.13 189.18 0.62 9577 9579 3 -21930.3 -7395.73 186.42 0.62 9578 9580 3 -21931 -7397.19 186.24 0.62 9579 9581 3 -21931.6 -7398.6 186.07 0.495 9580 9582 3 -21931.6 -7398.85 186.02 0.495 9581 9583 3 -21932.6 -7400.18 184.99 0.495 9582 9584 3 -21934.6 -7401.41 183.2 0.495 9583 9585 3 
-21935.8 -7403.41 181.17 0.495 9584 9586 3 -21936 -7403.09 179.24 0.495 9585 9587 3 -21936.7 -7404.73 181.67 0.495 9586 9588 3 -21937 -7404.67 181.35 0.495 9587 9589 3 -21937.5 -7405.97 180.32 0.495 9588 9590 3 -21939.5 -7406.58 179.4 0.495 9589 9591 3 -21940.3 -7407.63 178.62 0.495 9590 9592 3 -21940.7 -7409.11 177.48 0.495 9591 9593 3 -21940.7 -7409.66 177.38 0.495 9592 9594 3 -21941 -7409.65 177.41 0.495 9593 9595 3 -21937.6 -7410.32 176.98 0.495 9594 9596 3 -21938.8 -7411.6 176.88 0.495 9595 9597 3 -21938.8 -7411.31 176.93 0.495 9596 9598 3 -21940.3 -7412.63 176.86 0.495 9597 9599 3 -21940.2 -7412.94 176.8 0.495 9598 9600 3 -21942.2 -7414.57 176.72 0.495 9599 9601 3 -21943.6 -7415.35 175.41 0.495 9600 9602 3 -21945.5 -7416.76 173.67 0.495 9601 9603 3 -21947.6 -7416.83 166.25 0.495 9602 9604 3 -21948.7 -7418.37 166.1 0.495 9603 9605 3 -21950.4 -7420.82 164.62 0.495 9604 9606 3 -21950.8 -7421.93 159.68 0.495 9605 9607 3 -21951.3 -7423.84 159.41 0.495 9606 9608 3 -21951.4 -7424.82 155.89 0.37 9607 9609 3 -21951.4 -7427.14 153.7 0.37 9608 9610 3 -21950.6 -7428.33 153.43 0.37 9609 9611 3 -21949.5 -7429.97 153.05 0.37 9610 9612 3 -21947.9 -7432.9 148.12 0.37 9611 9613 3 -21947.7 -7435.59 143.63 0.37 9612 9614 3 -21947.4 -7436.89 136.28 0.37 9613 9615 3 -21946.6 -7439.91 135.71 0.37 9614 9616 3 -21947.2 -7443.93 133.28 0.37 9615 9617 3 -21948 -7445.89 128.5 0.37 9616 9618 3 -21948.8 -7450.25 121.61 0.37 9617 9619 3 -21949.3 -7451.65 118.75 0.37 9618 9620 3 -21949.6 -7451.74 118.76 0.37 9619 9621 3 -21950.5 -7452.93 118.54 0.37 9620 9622 3 -21952.5 -7456.17 113.8 0.37 9621 9623 3 -21952.8 -7456.2 113.82 0.37 9622 9624 3 -21953.7 -7457.59 112.99 0.25 9623 9625 3 -21953.7 -7457.53 112.63 0.25 9624 9626 3 -21955.3 -7459.91 112.38 0.25 9625 9627 3 -21955.9 -7461.32 112.03 0.25 9626 9628 3 -21956.6 -7464.55 105.22 0.25 9627 9629 3 -21956.7 -7466.24 104.09 0.25 9628 9630 3 -21956.7 -7466.56 104.04 0.25 9629 9631 3 -21956 -7469.5 101.52 0.25 9630 9632 3 -21955.5 -7471 92.08 0.25 9631 9633 3 -21952.5 -7472.8 91.49 0.37 9632 9634 3 -21953.3 -7474.55 91.28 0.125 9633 9635 3 -21953.3 -7474.78 91.24 0.125 9634 9636 3 -21954.1 -7476.77 90.99 0.125 9635 9637 3 -21954.7 -7477.61 84.32 0.125 9636 9638 3 -21955.3 -7478.08 81.89 0.125 9637 9639 3 -21955.9 -7478.95 79 0.125 9638 9640 3 -21956.8 -7478.55 72.76 0.37 9639 9641 3 -21950.3 -7422.34 176.27 0.37 9604 9642 3 -21951.9 -7422.03 168.74 0.37 9641 9643 3 -21952.8 -7425.97 175.9 0.37 9642 9644 3 -21953 -7426.05 175.92 0.37 9643 9645 3 -21955.1 -7428.8 175.66 0.37 9644 9646 3 -21957.3 -7431.22 175.28 0.37 9645 9647 3 -21958.4 -7431.83 173.42 0.37 9646 9648 3 -21960.2 -7434.34 173.17 0.37 9647 9649 3 -21960.2 -7434.59 173.13 0.37 9648 9650 3 -21962.9 -7436.93 172.99 0.37 9649 9651 3 -21965.5 -7438.67 172.85 0.37 9650 9652 3 -21967.5 -7439.19 169.21 0.37 9651 9653 3 -21969.5 -7439.05 165.28 0.37 9652 9654 3 -21972.5 -7439.88 165.41 0.37 9653 9655 3 -21976.5 -7441.46 164.64 0.37 9654 9656 3 -21979.2 -7442.67 164.66 0.37 9655 9657 3 -21981.2 -7443.36 161.84 0.37 9656 9658 3 -21981.5 -7443.4 161.86 0.37 9657 9659 3 -21982.9 -7444.97 161.73 0.37 9658 9660 3 -21983.5 -7446.4 161.55 0.37 9659 9661 3 -21986.3 -7446.9 161.73 0.37 9660 9662 3 -21987.1 -7448.12 161.61 0.37 9661 9663 3 -21988.3 -7448.9 160.37 0.37 9662 9664 3 -21989.3 -7449.34 160.4 0.37 9663 9665 3 -21990.3 -7450.05 160.38 0.37 9664 9666 3 -21991.3 -7451.26 160.26 0.37 9665 9667 3 -21992.5 -7452.26 160.21 0.37 9666 9668 3 -21993.1 -7453.82 159.13 0.37 9667 9669 3 -21996 -7455.38 159.1 0.37 9668 9670 3 
-21998.1 -7456.81 157.68 0.37 9669 9671 3 -21998.2 -7456.59 156.32 0.37 9670 9672 3 -22000 -7457.42 156.36 0.37 9671 9673 3 -22002.4 -7458.64 156.37 0.37 9672 9674 3 -22003.8 -7460.46 156.19 0.37 9673 9675 3 -22004 -7460.56 156.2 0.37 9674 9676 3 -22005.8 -7461.94 156.42 0.37 9675 9677 3 -22007.1 -7462.61 155.92 0.25 9676 9678 3 -22008 -7463.84 155.8 0.25 9677 9679 3 -22008.3 -7463.88 155.82 0.25 9678 9680 3 -22009.6 -7466 155.57 0.25 9679 9681 3 -22010.4 -7466.49 154.71 0.25 9680 9682 3 -22010.4 -7466.77 154.66 0.25 9681 9683 3 -22011.6 -7467.44 151.4 0.25 9682 9684 3 -22011.9 -7467.47 151.42 0.25 9683 9685 3 -22012.9 -7467.95 151.44 0.25 9684 9686 3 -22014.9 -7468.26 151.57 0.25 9685 9687 3 -22014.9 -7468.51 151.52 0.25 9686 9688 3 -22016.8 -7470.47 151.38 0.25 9687 9689 3 -22018.3 -7471.76 151.3 0.25 9688 9690 3 -22020 -7473.92 149.6 0.25 9689 9691 3 -22020.6 -7476.32 150.2 0.25 9690 9692 3 -22020.5 -7476.72 151.12 0.25 9691 9693 3 -22022.3 -7478.06 146.36 0.25 9692 9694 3 -22024 -7480.71 147.43 0.25 9693 9695 3 -22024 -7480.97 147.39 0.25 9694 9696 3 -22025.2 -7482.51 147.25 0.25 9695 9697 3 -22026.9 -7486.33 144.23 0.25 9696 9698 3 -22027.2 -7486.36 144.25 0.25 9697 9699 3 -22027.2 -7487.65 144.04 0.25 9698 9700 3 -22027.5 -7487.75 144.05 0.25 9699 9701 3 -21953.7 -7422.41 166.25 0.495 9642 9702 3 -21954.8 -7424.19 166.05 0.495 9701 9703 3 -21957.4 -7427.43 165.05 0.495 9702 9704 3 -21959.3 -7430.28 159.5 0.495 9703 9705 3 -21960.5 -7433.07 158.86 0.37 9704 9706 3 -21964.6 -7434.96 152.16 0.37 9705 9707 3 -21967.7 -7436.79 152.15 0.37 9706 9708 3 -21968.2 -7437.21 152.12 0.37 9707 9709 3 -21972.9 -7439.04 147.42 0.37 9708 9710 3 -21975.7 -7440.65 144.96 0.37 9709 9711 3 -21976.9 -7441.96 144.86 0.37 9710 9712 3 -21977.1 -7442.03 144.87 0.37 9711 9713 3 -21978.9 -7443.33 144.82 0.37 9712 9714 3 -21981.5 -7444.76 144.21 0.37 9713 9715 3 -21982.8 -7446.84 143.99 0.37 9714 9716 3 -21984.5 -7448.61 140.32 0.37 9715 9717 3 -21985.5 -7450.4 140.13 0.37 9716 9718 3 -21985.8 -7450.43 140.15 0.37 9717 9719 3 -21987.4 -7452.4 137.43 0.37 9718 9720 3 -21988.3 -7453.62 137.31 0.37 9719 9721 3 -21989.9 -7453.95 133.34 0.37 9720 9722 3 -21990.1 -7454.3 133.31 0.37 9721 9723 3 -21991.8 -7456.16 133.16 0.37 9722 9724 3 -21992.8 -7458.43 132.87 0.37 9723 9725 3 -21994.5 -7459.99 129.45 0.37 9724 9726 3 -21996.5 -7461.96 127.93 0.37 9725 9727 3 -21998.1 -7463.27 126.44 0.37 9726 9728 3 -22000.2 -7465.75 126.23 0.25 9727 9729 3 -22003 -7468.56 124.05 0.25 9728 9730 3 -22003.3 -7468.39 124.11 0.25 9729 9731 3 -22004.3 -7468.36 121.36 0.25 9730 9732 3 -22007.1 -7468.8 121.54 0.25 9731 9733 3 -22005.7 -7469.63 118.33 0.25 9731 9734 3 -22005.9 -7469.66 118.35 0.25 9733 9735 3 -22006.6 -7471.1 118.17 0.25 9734 9736 3 -22006.5 -7471.34 118.13 0.25 9735 9737 3 -22009.4 -7472.38 110.8 0.25 9736 9738 3 -22010.6 -7473.64 110.7 0.25 9737 9739 3 -22012.8 -7474.3 109.04 0.25 9738 9740 3 -22014.7 -7475.41 109.03 0.25 9739 9741 3 -22014.6 -7475.64 108.98 0.25 9740 9742 3 -21930 -7394.16 189.81 0.495 9578 9743 3 -21931.6 -7392.37 186.33 0.495 9742 9744 3 -21932.4 -7392.74 186.25 0.495 9743 9745 3 -21932.7 -7392.73 186.27 0.495 9744 9746 3 -21933.8 -7392.9 186.24 0.495 9745 9747 3 -21933.9 -7392.79 185.57 0.495 9746 9748 3 -21935.8 -7392.37 183.31 0.495 9747 9749 3 -21936.9 -7391.65 181.9 0.495 9748 9750 3 -21938.6 -7391.17 182.19 0.495 9749 9751 3 -21935.8 -7391.37 181.03 0.495 9750 9752 3 -21936.4 -7392.79 180.85 0.495 9751 9753 3 -21938.7 -7392.2 181.16 0.495 9752 9754 3 -21942.3 -7390.39 175.44 0.495 9753 9755 3 
-21944.3 -7390.22 175.65 0.37 9754 9756 3 -21945.1 -7389.31 174.43 0.37 9755 9757 3 -21946 -7389.6 172 0.37 9756 9758 3 -21946 -7389.82 171.96 0.37 9757 9759 3 -21946.8 -7390.23 171.96 0.37 9758 9760 3 -21948.7 -7390.1 168.25 0.37 9759 9761 3 -21948.7 -7390.4 168.2 0.37 9760 9762 3 -21949.9 -7390.06 168.27 0.37 9761 9763 3 -21951.2 -7389.57 162.91 0.37 9762 9764 3 -21951.2 -7390.65 162.74 0.37 9763 9765 3 -21952.3 -7390.97 158.9 0.37 9764 9766 3 -21953.9 -7390.51 157.86 0.37 9765 9767 3 -21955.3 -7390.37 154.37 0.37 9766 9768 3 -21955.5 -7390.43 154.38 0.37 9767 9769 3 -21957.1 -7388.99 151.24 0.37 9768 9770 3 -21958.3 -7388.73 151.39 0.37 9769 9771 3 -21959.8 -7387.98 147.48 0.37 9770 9772 3 -21960.1 -7388 147.5 0.37 9771 9773 3 -21962 -7388.07 147.44 0.37 9772 9774 3 -21965.4 -7387.09 143.5 0.37 9773 9775 3 -21967.1 -7386.4 139.73 0.37 9774 9776 3 -21969.1 -7385.87 137.88 0.37 9775 9777 3 -21969.8 -7385.47 135.68 0.37 9776 9778 3 -21969.8 -7386.65 137.15 0.37 9777 9779 3 -21971.5 -7387.02 132.85 0.37 9778 9780 3 -21972.4 -7386.99 130.62 0.37 9779 9781 3 -21973.5 -7387.19 130.69 0.37 9780 9782 3 -21975.1 -7387.56 126.54 0.37 9781 9783 3 -21975.2 -7387.52 126.33 0.37 9782 9784 3 -21975.6 -7388.35 126.05 0.37 9783 9785 3 -21975.8 -7387.99 123.9 0.37 9784 9786 3 -21977.3 -7389.31 123.99 0.37 9785 9787 3 -21977.9 -7389.8 121.47 0.37 9786 9788 3 -21979 -7390.73 119.6 0.37 9787 9789 3 -21980.4 -7392 116.31 0.37 9788 9790 3 -21980.9 -7393.22 115.06 0.37 9789 9791 3 -21981.1 -7392.81 112.58 0.37 9790 9792 3 -21980.2 -7394.72 112.17 0.37 9791 9793 3 -21978 -7396.54 112.26 0.37 9792 9794 3 -21977.8 -7398.21 106.68 0.37 9793 9795 3 -21977.7 -7398.5 106.63 0.37 9794 9796 3 -21977.1 -7400.49 106.07 0.37 9795 9797 3 -21977.2 -7400.64 105.35 0.37 9796 9798 3 -21977.2 -7402.8 103.87 0.37 9797 9799 3 -21975.9 -7404.95 101.87 0.37 9798 9800 3 -21975.5 -7405.43 101.75 0.37 9799 9801 3 -21975.3 -7405.39 101.73 0.37 9800 9802 3 -21974.8 -7406.99 100.45 0.37 9801 9803 3 -21974.6 -7408.08 100.26 0.37 9802 9804 3 -21974.4 -7409.56 99.86 0.37 9803 9805 3 -21974 -7410.54 99.66 0.37 9804 9806 3 -21973.6 -7410.77 99.59 0.37 9805 9807 3 -21973.5 -7411.53 99.36 0.37 9806 9808 3 -21973.5 -7412.03 99.18 0.37 9807 9809 3 -21973.6 -7413.33 98.63 0.37 9808 9810 3 -21973.7 -7413.21 97.9 0.37 9809 9811 3 -21973.7 -7415.96 95.07 0.37 9810 9812 3 -21973.9 -7416.2 94.68 0.37 9811 9813 3 -21974.3 -7418.17 93.29 0.37 9812 9814 3 -21974.3 -7418.64 92.79 0.37 9813 9815 3 -21974.5 -7418.93 92.76 0.37 9814 9816 3 -21974.3 -7420.19 92.42 0.37 9815 9817 3 -21974.8 -7420.53 92.29 0.37 9816 9818 3 -21975.7 -7421.07 91.5 0.37 9817 9819 3 -21976.9 -7422.83 89.9 0.25 9818 9820 3 -21977.2 -7422.91 89.91 0.25 9819 9821 3 -21977.3 -7423.96 89.75 0.37 9820 9822 3 -21977.4 -7427.17 89.23 0.25 9821 9823 3 -21977.3 -7427.46 89.17 0.37 9822 9824 3 -21977.6 -7427.45 89.2 0.37 9823 9825 3 -21978.9 -7428.76 87.77 0.25 9824 9826 3 -21980.3 -7429.26 87.82 0.25 9825 9827 3 -21981 -7429.88 87.78 0.37 9826 9828 3 -21971.7 -7384.13 132.54 0.37 9777 9829 3 -21973.5 -7383.19 132.88 0.37 9828 9830 3 -21973.8 -7383.15 132.9 0.37 9829 9831 3 -21974.9 -7382.36 130.3 0.37 9830 9832 3 -21976.1 -7382.31 130.41 0.37 9831 9833 3 -21977.8 -7382.13 130.61 0.37 9832 9834 3 -21977.6 -7381.8 130.64 0.37 9833 9835 3 -21979.4 -7380.45 127.62 0.37 9834 9836 3 -21980.8 -7380.48 127.75 0.37 9835 9837 3 -21980.9 -7380.2 127.81 0.37 9836 9838 3 -21981 -7379.13 127.6 0.37 9837 9839 3 -21981.4 -7379.14 127.63 0.37 9838 9840 3 -21981.7 -7378.74 127.73 0.37 9839 9841 3 -21982.5 
-7377.83 126.7 0.37 9840 9842 3 -21984.6 -7377.16 127 0.37 9841 9843 3 -21986.8 -7374.26 122.67 0.37 9842 9844 3 -21988.9 -7373.6 122.97 0.37 9843 9845 3 -21989.2 -7373.14 123.08 0.37 9844 9846 3 -21989.9 -7371.14 123.47 0.37 9845 9847 3 -21992 -7370.18 123.83 0.37 9846 9848 3 -21992.2 -7369.18 124.01 0.37 9847 9849 3 -21994.2 -7368.06 121.95 0.25 9848 9850 3 -21994.5 -7368.09 121.97 0.25 9849 9851 3 -21995.7 -7367.79 122.13 0.25 9850 9852 3 -21997.1 -7367.82 122.26 0.25 9851 9853 3 -21999.3 -7367.94 122.45 0.25 9852 9854 3 -21999.3 -7368.19 122.4 0.25 9853 9855 3 -22000.8 -7367.34 122.68 0.25 9854 9856 3 -22003.5 -7366.28 123.12 0.25 9855 9857 3 -22005.3 -7365.85 123.35 0.25 9856 9858 3 -22007.3 -7365.41 123.61 0.25 9857 9859 3 -22009.8 -7364.52 123.99 0.25 9858 9860 3 -22012.4 -7365.64 120.3 0.25 9859 9861 3 -22011.1 -7360.95 121.18 0.25 9859 9862 3 -21948 -7388.3 172.17 0.37 9759 9863 3 -21948.6 -7386.3 172.56 0.37 9862 9864 3 -21950.5 -7384.22 169.9 0.37 9863 9865 3 -21951.8 -7383.42 170.14 0.37 9864 9866 3 -21953.4 -7382.38 170.47 0.37 9865 9867 3 -21953.4 -7382.14 170.51 0.37 9866 9868 3 -21955.1 -7379.12 165.84 0.37 9867 9869 3 -21955.8 -7378.16 166.07 0.37 9868 9870 3 -21957.9 -7377.5 166.32 0.37 9869 9871 3 -21959.5 -7376.81 165.57 0.37 9870 9872 3 -21960.2 -7376.14 165.75 0.37 9871 9873 3 -21962.2 -7375.96 165.97 0.37 9872 9874 3 -21966.1 -7373.74 160.55 0.37 9873 9875 3 -21969.4 -7373.79 160.83 0.37 9874 9876 3 -21972.2 -7372.49 161.31 0.37 9875 9877 3 -21974.4 -7371.22 161.7 0.37 9876 9878 3 -21976.3 -7369.57 158.21 0.37 9877 9879 3 -21976.6 -7369.64 158.22 0.37 9878 9880 3 -21978.2 -7368.93 157.14 0.37 9879 9881 3 -21979.5 -7368.57 155.64 0.37 9880 9882 3 -21980.3 -7367.15 155.95 0.37 9881 9883 3 -21981.7 -7367.13 156.09 0.37 9882 9884 3 -21983.3 -7366.32 156.19 0.37 9883 9885 3 -21984.6 -7365.27 155.26 0.37 9884 9886 3 -21984.8 -7364.27 155.44 0.37 9885 9887 3 -21987.5 -7362.1 151.58 0.37 9886 9888 3 -21990.3 -7360.41 147.1 0.37 9887 9889 3 -21990.2 -7360.62 147.06 0.37 9888 9890 3 -21991 -7361.03 147.06 0.37 9889 9891 3 -21993.5 -7361.37 143.49 0.37 9890 9892 3 -21994.9 -7361.14 143.66 0.37 9891 9893 3 -21994.9 -7361.41 143.61 0.37 9892 9894 3 -21996.8 -7361.06 141.7 0.37 9893 9895 3 -21998.1 -7361.27 141.77 0.37 9894 9896 3 -21998.5 -7361.34 141.79 0.37 9895 9897 3 -22000 -7360.71 141.23 0.37 9896 9898 3 -22001.1 -7360.6 141.35 0.37 9897 9899 3 -22001.7 -7360.66 141.38 0.37 9898 9900 3 -22003.7 -7359.88 139.63 0.37 9899 9901 3 -22004.6 -7359.59 139.74 0.37 9900 9902 3 -22006.3 -7358.08 137.75 0.37 9901 9903 3 -22007.7 -7358.34 137.84 0.37 9902 9904 3 -22008 -7358.4 137.85 0.37 9903 9905 3 -22008.6 -7358.26 137.92 0.37 9904 9906 3 -22009.3 -7357.22 137.54 0.37 9905 9907 3 -22011.7 -7356.37 133.72 0.37 9906 9908 3 -22012.6 -7356.01 133.86 0.37 9907 9909 3 -22013.8 -7355.44 134.07 0.37 9908 9910 3 -22014.9 -7355.07 132.39 0.37 9909 9911 3 -22016.3 -7355.29 132.37 0.37 9910 9912 3 -22017.1 -7355.12 132.22 0.37 9911 9913 3 -22017.2 -7354.86 132.27 0.37 9912 9914 3 -22017.6 -7354.1 132.43 0.37 9913 9915 3 -22017.7 -7353.91 132.47 0.37 9914 9916 3 -22018.1 -7352.95 132.67 0.37 9915 9917 3 -22018.1 -7352.67 132.72 0.37 9916 9918 3 -22019.5 -7352.84 132.63 0.37 9917 9919 3 -22020.8 -7352.3 132.84 0.37 9918 9920 3 -22022.7 -7350.84 133.26 0.37 9919 9921 3 -22022.6 -7348.69 132.15 0.125 9920 9922 3 -22022.2 -7347.3 132.35 0.125 9921 9923 3 -22022.3 -7347.02 132.4 0.125 9922 9924 3 -22022 -7345.43 132.64 0.125 9923 9925 3 -22022 -7345.17 132.69 0.125 9924 9926 3 -22021.5 -7343.43 
132.64 0.125 9925 9927 3 -22021.2 -7342.1 132.83 0.125 9926 9928 3 -22021.4 -7340.84 133.06 0.125 9927 9929 3 -22021.7 -7340.54 133.13 0.125 9928 9930 3 -22022.4 -7339.89 133.3 0.125 9929 9931 3 -21927 -7393.03 202.75 0.865 9575 9932 3 -21928.2 -7393.8 199.97 0.865 9931 9933 3 -21928.3 -7393.58 198.63 0.865 9932 9934 3 -21929.7 -7395.3 195.02 0.865 9933 9935 3 -21929.8 -7395.28 194.95 0.865 9934 9936 3 -21932 -7397.47 194.76 0.865 9935 9937 3 -21933.8 -7399.34 193.24 0.865 9936 9938 3 -21935.7 -7400.31 192.87 0.865 9937 9939 3 -21936.1 -7400.19 191.97 0.865 9938 9940 3 -21938.6 -7401.54 190.04 0.865 9939 9941 3 -21938.6 -7401.67 190.81 0.865 9940 9942 3 -21939.7 -7399.48 191.29 0.62 9941 9943 3 -21939.7 -7397.06 187.04 0.495 9942 9944 3 -21939.2 -7395.91 185.56 0.495 9943 9945 3 -21939.5 -7395.14 185.72 0.495 9944 9946 3 -21940.3 -7393.65 186.03 0.495 9945 9947 3 -21940.9 -7391.45 182 0.495 9946 9948 3 -21940.9 -7391.12 182.06 0.495 9947 9949 3 -21935.9 -7390.91 184.12 0.37 9948 9950 3 -21938.3 -7390.54 184.4 0.37 9949 9951 3 -21939.6 -7389.73 183.37 0.37 9950 9952 3 -21939.7 -7389.62 182.74 0.37 9951 9953 3 -21940.3 -7388.65 179.81 0.37 9952 9954 3 -21942.4 -7388.57 179.08 0.37 9953 9955 3 -21943.7 -7387.52 179.38 0.37 9954 9956 3 -21944.7 -7388.27 179.34 0.37 9955 9957 3 -21944.9 -7388.3 179.33 0.37 9956 9958 3 -21946.2 -7387.61 178.9 0.37 9957 9959 3 -21946.6 -7387.33 178.76 0.37 9958 9960 3 -21947.3 -7387.76 178.76 0.37 9959 9961 3 -21949.1 -7386.68 177.14 0.37 9960 9962 3 -21950 -7387.4 177.11 0.37 9961 9963 3 -21951 -7387.28 177.22 0.37 9962 9964 3 -21951.2 -7387.32 177.23 0.37 9963 9965 3 -21952.7 -7386 176.22 0.37 9964 9966 3 -21952.9 -7386.06 176.23 0.37 9965 9967 3 -21954.4 -7385.79 176.36 0.37 9966 9968 3 -21956.3 -7386.36 176.42 0.37 9967 9969 3 -21956.8 -7386.23 176.5 0.37 9968 9970 3 -21958.9 -7385.51 176.8 0.37 9969 9971 3 -21961.1 -7386.77 175.65 0.37 9970 9972 3 -21963.3 -7386.86 175.64 0.37 9971 9973 3 -21963.3 -7386.77 175.14 0.37 9972 9974 3 -21967 -7386.27 169.04 0.37 9973 9975 3 -21969 -7386.09 169.26 0.37 9974 9976 3 -21971.3 -7386.4 167.39 0.37 9975 9977 3 -21972.5 -7386.09 167.55 0.37 9976 9978 3 -21974.4 -7385.76 165.36 0.37 9977 9979 3 -21975.4 -7385.5 167.5 0.37 9978 9980 3 -21978.2 -7385.1 165.63 0.25 9979 9981 3 -21979.9 -7385.02 165.28 0.37 9980 9982 3 -21982 -7384.27 162.34 0.37 9981 9983 3 -21984.3 -7384.27 159.83 0.25 9982 9984 3 -21986 -7384.52 159.91 0.37 9983 9985 3 -21987.9 -7385.14 159.97 0.37 9984 9986 3 -21990.5 -7385.01 155.37 0.37 9985 9987 3 -21990.6 -7384.76 153.87 0.37 9986 9988 3 -21993.1 -7385.44 153.71 0.37 9987 9989 3 -21995.2 -7385.33 151.41 0.37 9988 9990 3 -21998 -7384.24 151.83 0.37 9989 9991 3 -22000.2 -7384.1 152.07 0.37 9990 9992 3 -22002.1 -7384.49 152.18 0.37 9991 9993 3 -22002.4 -7384.53 152.2 0.37 9992 9994 3 -22003.6 -7383.92 152.42 0.37 9993 9995 3 -22006.3 -7383 149.24 0.37 9994 9996 3 -22007.9 -7383.55 149.3 0.37 9995 9997 3 -22008.2 -7383.59 149.32 0.37 9996 9998 3 -22010.6 -7382.82 148.86 0.37 9997 9999 3 -22012.7 -7382.47 147.72 0.37 9998 10000 3 -22014.1 -7381.63 146.48 0.37 9999 10001 3 -22015 -7381.29 146.61 0.37 10000 10002 3 -22016.8 -7380.92 144.72 0.37 10001 10003 3 -22017.1 -7380.98 144.74 0.37 10002 10004 3 -22018.3 -7380.99 144.84 0.37 10003 10005 3 -22020.3 -7381.04 144.94 0.25 10004 10006 3 -22021.4 -7381.22 145.01 0.25 10005 10007 3 -22023.5 -7380.02 145.41 0.25 10006 10008 3 -22024.1 -7379.59 145.54 0.25 10007 10009 3 -22026.2 -7379.39 145.26 0.37 10008 10010 3 -22028.5 -7379.61 143.06 0.37 10009 10011 3 
-22030.2 -7379.24 141.17 0.37 10010 10012 3 -22031.6 -7378.99 141.35 0.37 10011 10013 3 -22031.6 -7379.27 141.3 0.37 10012 10014 3 -22032.9 -7379.52 141.34 0.37 10013 10015 3 -22034.1 -7379.43 141.42 0.37 10014 10016 3 -22035 -7378.83 141.61 0.37 10015 10017 3 -22037.6 -7378.07 139.77 0.37 10016 10018 3 -22038.6 -7376.95 140.05 0.37 10017 10019 3 -22038.9 -7377.01 140.06 0.37 10018 10020 3 -22040.5 -7375.99 140.38 0.37 10019 10021 3 -22040.5 -7376.22 140.34 0.37 10020 10022 3 -22043.1 -7375.49 140.9 0.25 10021 10023 3 -22043.4 -7375.5 140.93 0.25 10022 10024 3 -22046.2 -7375.44 141.2 0.25 10023 10025 3 -22046.5 -7375.23 141.26 0.25 10024 10026 3 -22049.5 -7376.06 141.57 0.25 10025 10027 3 -22049.5 -7375.79 141.62 0.25 10026 10028 3 -22054.6 -7375.76 141.21 0.25 10027 10029 3 -22054.6 -7375.78 141.35 0.25 10028 10030 3 -22055.9 -7376.41 142.21 0.25 10029 10031 3 -22058.1 -7376.53 142.4 0.25 10030 10032 3 -22059.2 -7376.83 142.96 0.125 10031 10033 3 -21941.3 -7388.02 179.77 0.37 9953 10034 3 -21943.2 -7386.73 175.47 0.37 10033 10035 3 -21943.2 -7386.71 175.37 0.37 10034 10036 3 -21942.7 -7385.89 172.86 0.37 10035 10037 3 -21942.1 -7385.36 170.27 0.37 10036 10038 3 -21944.5 -7386.01 161.11 0.37 10037 10039 3 -21945.1 -7386.59 159.29 0.37 10038 10040 3 -21945.4 -7386.1 156.39 0.37 10039 10041 3 -21945.5 -7386.74 152.24 0.37 10040 10042 3 -21946.1 -7388.06 148.47 0.37 10041 10043 3 -21946.2 -7387.88 147.36 0.37 10042 10044 3 -21946.5 -7389.95 145.07 0.37 10043 10045 3 -21947.3 -7391.23 139.69 0.37 10044 10046 3 -21947.3 -7390.98 139.74 0.37 10045 10047 3 -21945.8 -7390.6 134.38 0.37 10046 10048 3 -21944.2 -7392.23 122.55 0.37 10047 10049 3 -21943.7 -7392.36 116.16 0.37 10048 10050 3 -21943.7 -7392.1 116.2 0.37 10049 10051 3 -21943.3 -7393.81 109.26 0.37 10050 10052 3 -21943.2 -7395.11 107.66 0.37 10051 10053 3 -21941.5 -7396.26 109.56 0.37 10052 10054 3 -21940 -7394.93 109.3 0.37 10053 10055 3 -21939.1 -7395.02 105.95 0.37 10054 10056 3 -21938.2 -7394.82 102.66 0.37 10055 10057 3 -21938.2 -7394.72 102.08 0.37 10056 10058 3 -21937.1 -7395.33 100.47 0.37 10057 10059 3 -21937.3 -7395.07 98.93 0.37 10058 10060 3 -21937.1 -7396.01 90.48 0.37 10059 10061 3 -21938.3 -7397.07 86.22 0.37 10060 10062 3 -21939.5 -7399.67 82.94 0.25 10061 10063 3 -21939.1 -7403.78 194.67 0.62 9941 10064 3 -21940.8 -7404.87 192.05 0.62 10063 10065 3 -21941 -7404.66 190.83 0.62 10064 10066 3 -21942.1 -7406.16 188.85 0.62 10065 10067 3 -21942.4 -7406.01 188.01 0.62 10066 10068 3 -21944.3 -7407.51 187.38 0.62 10067 10069 3 -21945.3 -7408.84 186.54 0.62 10068 10070 3 -21946.3 -7410.59 184.88 0.62 10069 10071 3 -21947.9 -7411.93 183.65 0.62 10070 10072 3 -21949.9 -7413.35 180.99 0.62 10071 10073 3 -21951.4 -7415.29 179.98 0.62 10072 10074 3 -21951.5 -7415.11 178.89 0.62 10073 10075 3 -21953 -7417.16 177.32 0.62 10074 10076 3 -21955.4 -7418.86 180.56 0.62 10075 10077 3 -21956.1 -7420.32 180.38 0.495 10076 10078 3 -21956.8 -7421.72 180.21 0.495 10077 10079 3 -21957.3 -7422.3 180.16 0.62 10078 10080 3 -21958.3 -7424.62 179.87 0.62 10079 10081 3 -21958.6 -7424.64 179.9 0.62 10080 10082 3 -21960 -7425.08 179.96 0.495 10081 10083 3 -21960.1 -7424.83 180 0.495 10082 10084 3 -21960.5 -7426.82 180.44 0.495 10083 10085 3 -21962.1 -7428.12 180.37 0.495 10084 10086 3 -21963.2 -7429.35 178.76 0.495 10085 10087 3 -21963.5 -7429.41 178.78 0.495 10086 10088 3 -21964.5 -7430.38 178.71 0.495 10087 10089 3 -21965 -7432.77 178.37 0.495 10088 10090 3 -21965.9 -7432.37 178.52 0.495 10089 10091 3 -21967 -7433.35 178.45 0.495 10090 10092 3 -21967.1 
-7434.39 178.29 0.495 10091 10093 3 -21968 -7435.87 178.13 0.495 10092 10094 3 -21968.7 -7436.73 178.06 0.495 10093 10095 3 -21968.8 -7437.53 176.45 0.495 10094 10096 3 -21968.7 -7437.8 176.4 0.495 10095 10097 3 -21968.1 -7438.54 176.22 0.495 10096 10098 3 -21968 -7439.58 176.04 0.37 10097 10099 3 -21968.4 -7440.87 175.77 0.37 10098 10100 3 -21968.1 -7442.71 175.44 0.37 10099 10101 3 -21968.2 -7444.6 175.13 0.37 10100 10102 3 -21967.8 -7446.23 176.93 0.37 10101 10103 3 -21967.7 -7447.91 175.67 0.37 10102 10104 3 -21968.5 -7448.54 175.64 0.37 10103 10105 3 -21969.3 -7448.65 175.7 0.37 10104 10106 3 -21970 -7450.34 175.48 0.37 10105 10107 3 -21970.6 -7452.03 175.26 0.37 10106 10108 3 -21970.8 -7454.44 175.23 0.37 10107 10109 3 -21971.1 -7454.51 175.25 0.37 10108 10110 3 -21972.1 -7456.35 172.81 0.37 10109 10111 3 -21972.8 -7457.83 172.63 0.37 10110 10112 3 -21973.5 -7459.26 172.4 0.37 10111 10113 3 -21965.8 -7459.63 171.62 0.37 10112 10114 3 -21965.9 -7460.7 171.45 0.37 10113 10115 3 -21967.1 -7462.22 171.31 0.37 10114 10116 3 -21967.2 -7463.27 171.15 0.37 10115 10117 3 -21967.1 -7464.2 168.78 0.37 10116 10118 3 -21966.7 -7465.93 168.2 0.37 10117 10119 3 -21966.5 -7466.9 168 0.37 10118 10120 3 -21966.3 -7467.92 167.59 0.37 10119 10121 3 -21966.1 -7467.79 167.06 0.37 10120 10122 3 -21965.7 -7468.52 166.9 0.37 10121 10123 3 -21965.7 -7468.47 166.63 0.37 10122 10124 3 -21965.1 -7469.17 164.99 0.37 10123 10125 3 -21964.7 -7471.78 165.07 0.37 10124 10126 3 -21963.9 -7473.84 161.95 0.37 10125 10127 3 -21963 -7475.38 162.19 0.37 10126 10128 3 -21962.6 -7478.12 161.27 0.37 10127 10129 3 -21962.3 -7478.08 161.25 0.37 10128 10130 3 -21962.2 -7479.08 161 0.37 10129 10131 3 -21962.3 -7480.36 160.62 0.37 10130 10132 3 -21962.1 -7481.61 160.05 0.37 10131 10133 3 -21962.1 -7481.86 160.01 0.37 10132 10134 3 -21962.4 -7482.98 159.86 0.37 10133 10135 3 -21962.8 -7484.09 159.71 0.37 10134 10136 3 -21963 -7485.98 159.42 0.37 10135 10137 3 -21963.2 -7486.55 159.34 0.37 10136 10138 3 -21963.2 -7488.36 159.04 0.37 10137 10139 3 -21963.2 -7488.6 158.77 0.37 10138 10140 3 -21963.9 -7490.51 155.39 0.37 10139 10141 3 -21963.9 -7490.82 155.33 0.37 10140 10142 3 -21963.8 -7493.25 153.93 0.37 10141 10143 3 -21964.5 -7495.48 150.56 0.37 10142 10144 3 -21964.2 -7497.46 149.49 0.37 10143 10145 3 -21964.3 -7497.55 148.67 0.37 10144 10146 3 -21964.3 -7498.88 148.46 0.37 10145 10147 3 -21964.3 -7499.12 148.41 0.37 10146 10148 3 -21965.3 -7501.94 146.6 0.37 10147 10149 3 -21965.3 -7502.1 145.85 0.37 10148 10150 3 -21965.3 -7504.24 144.29 0.37 10149 10151 3 -21965.2 -7504.76 144.2 0.37 10150 10152 3 -21964.6 -7507.14 142.63 0.37 10151 10153 3 -21964.7 -7507.05 142.09 0.37 10152 10154 3 -21964.5 -7508.93 143.86 0.37 10153 10155 3 -21964.9 -7508.71 142.39 0.37 10154 10156 3 -21964.6 -7510.75 141.98 0.37 10155 10157 3 -21965 -7511.88 141.83 0.25 10156 10158 3 -21966.1 -7513.37 141.69 0.25 10157 10159 3 -21966.3 -7513.73 141.65 0.25 10158 10160 3 -21966.5 -7514.54 141.53 0.25 10159 10161 3 -21966.8 -7514.32 141.6 0.37 10160 10162 3 -21967 -7516.22 141.31 0.37 10161 10163 3 -21967.5 -7516.64 137.22 0.37 10162 10164 3 -21967.7 -7517.18 137.15 0.37 10163 10165 3 -21968.1 -7518.09 137.04 0.25 10164 10166 3 -21968.1 -7519.64 136.78 0.25 10165 10167 3 -21967.9 -7520.93 136.55 0.37 10166 10168 3 -21968.1 -7522 134.87 0.37 10167 10169 3 -21968.8 -7523.13 134.43 0.37 10168 10170 3 -21969 -7523.95 134.15 0.25 10169 10171 3 -21969 -7523.86 133.6 0.25 10170 10172 3 -21969.7 -7525.39 132.62 0.25 10171 10173 3 -21970.1 -7526.58 131.3 0.25 10172 
10174 3 -21971.2 -7437.49 179.1 0.62 10094 10175 3 -21972.6 -7437.15 179.29 0.495 10174 10176 3 -21974.4 -7437.72 177.81 0.37 10175 10177 3 -21976.1 -7439 179.31 0.37 10176 10178 3 -21976.3 -7439.79 179.2 0.37 10177 10179 3 -21976.4 -7440.88 179.03 0.37 10178 10180 3 -21977.2 -7441.78 178.95 0.37 10179 10181 3 -21977.9 -7444.48 178.58 0.37 10180 10182 3 -21977.9 -7444.79 178.52 0.37 10181 10183 3 -21978 -7446.11 178.31 0.37 10182 10184 3 -21979.1 -7448.38 178.04 0.37 10183 10185 3 -21980.2 -7449.35 177.98 0.37 10184 10186 3 -21981.7 -7450.77 177.38 0.37 10185 10187 3 -21982.3 -7452.44 176.85 0.37 10186 10188 3 -21982.5 -7453.23 176.74 0.37 10187 10189 3 -21983.3 -7454.09 176.29 0.25 10188 10190 3 -21983.3 -7454.33 176.24 0.37 10189 10191 3 -21983.7 -7455.22 176.14 0.37 10190 10192 3 -21984.4 -7456.85 175.68 0.25 10191 10193 3 -21985.1 -7458.26 175.49 0.25 10192 10194 3 -21985.1 -7457.98 175.54 0.25 10193 10195 3 -21985.8 -7458.87 175.46 0.37 10194 10196 3 -21987.6 -7460.29 176.01 0.25 10195 10197 3 -21989 -7460.74 176.06 0.25 10196 10198 3 -21991.4 -7461.57 176.15 0.37 10197 10199 3 -21975.2 -7437.14 183.54 0.37 10175 10200 3 -21976.6 -7437.09 183.68 0.37 10199 10201 3 -21977.3 -7438.28 183.55 0.37 10200 10202 3 -21978 -7439.72 186.73 0.37 10201 10203 3 -21978.7 -7439.01 186.9 0.37 10202 10204 3 -21980 -7438.56 189.39 0.37 10203 10205 3 -21980.3 -7438.61 189.41 0.37 10204 10206 3 -21981.3 -7439.54 189.35 0.37 10205 10207 3 -21983.8 -7440.15 189.48 0.37 10206 10208 3 -21985.2 -7441.64 193.89 0.37 10207 10209 3 -21987.2 -7441.64 194.08 0.37 10208 10210 3 -21988.6 -7441.58 194.22 0.37 10209 10211 3 -21990.2 -7441.02 197.61 0.37 10210 10212 3 -21991.5 -7441.48 197.66 0.37 10211 10213 3 -21992.3 -7442.37 197.58 0.37 10212 10214 3 -21993.2 -7443.83 197.43 0.37 10213 10215 3 -21994 -7446.48 201.14 0.37 10214 10216 3 -21995.3 -7447.2 201.14 0.37 10215 10217 3 -21996.6 -7447.86 202.34 0.37 10216 10218 3 -21996.8 -7448.95 202.17 0.37 10217 10219 3 -21998.2 -7448.87 202.33 0.37 10218 10220 3 -21998.2 -7448.58 202.39 0.37 10219 10221 3 -22000 -7449.85 206.71 0.37 10220 10222 3 -22000.4 -7450.69 206.61 0.37 10221 10223 3 -22002.6 -7451.79 206.63 0.25 10222 10224 3 -22004.3 -7451.53 206.84 0.37 10223 10225 3 -22006 -7451.68 203.38 0.25 10224 10226 3 -22007.2 -7451.3 203.55 0.25 10225 10227 3 -22008.2 -7452 203.53 0.37 10226 10228 3 -22009.7 -7453.93 210.32 0.37 10227 10229 3 -22009.7 -7453.63 210.37 0.37 10228 10230 3 -22011.6 -7454.45 210.59 0.37 10229 10231 3 -22013.9 -7454.78 210.75 0.37 10230 10232 3 -22016.4 -7455.14 210.88 0.25 10231 10233 3 -22016.3 -7455.17 211.06 0.25 10232 10234 3 -22019.4 -7455.65 211.59 0.25 10233 10235 3 -22019.4 -7455.4 211.63 0.25 10234 10236 3 -22022.7 -7456.26 211.17 0.25 10235 10237 3 -22027.8 -7456.43 211.62 0.25 10236 10238 3 -22028.1 -7456.46 211.65 0.25 10237 10239 3 -22032.8 -7455.63 211.01 0.25 10238 10240 3 -22032.8 -7455.34 211.06 0.25 10239 10241 3 -22037.5 -7455.43 209.93 0.25 10240 10242 3 -21921.7 -7383.42 213.66 1.24 1 10243 3 -21923.2 -7382.39 209.65 1.24 10242 10244 3 -21924.1 -7381.74 209.85 1.24 10243 10245 3 -21925.6 -7380.85 210.14 1.24 10244 10246 3 -21926.9 -7379.96 210.4 1.24 10245 10247 3 -21928.8 -7378.34 207.58 0.99 10246 10248 3 -21930.9 -7377.29 207.91 0.99 10247 10249 3 -21932.4 -7375.64 205.16 1.24 10248 10250 3 -21934 -7374.55 205.45 1.115 10249 10251 3 -21935.5 -7373.21 202.78 0.745 10250 10252 3 -21936.7 -7372.54 203 0.495 10251 10253 3 -21938.6 -7372 201.76 0.62 10252 10254 3 -21939.5 -7371.59 201.91 1.115 10253 10255 3 -21939.5 
-7371.88 201.86 1.115 10254 10256 3 -21940.8 -7372.35 201.91 1.115 10255 10257 3 -21942 -7371.97 202.08 0.865 10256 10258 3 -21943.4 -7370.76 200.38 0.745 10257 10259 3 -21944.8 -7370.95 200.48 0.745 10258 10260 3 -21946.8 -7370.94 200.66 0.62 10259 10261 3 -21948.3 -7370.37 200.9 0.62 10260 10262 3 -21950.1 -7369.7 197.33 0.62 10261 10263 3 -21950.3 -7369.71 197.34 0.62 10262 10264 3 -21952.3 -7369.17 191.33 0.495 10263 10265 3 -21954 -7368.88 191.54 0.495 10264 10266 3 -21955.6 -7368.53 189.95 0.495 10265 10267 3 -21956.9 -7369.24 189.96 0.495 10266 10268 3 -21957.5 -7369.29 186.65 0.495 10267 10269 3 -21952.7 -7369.73 186.14 0.495 10268 10270 3 -21954 -7371.34 184.53 0.495 10269 10271 3 -21955 -7371.51 181.65 0.495 10270 10272 3 -21955 -7371.61 180.86 0.495 10271 10273 3 -21956.9 -7372.64 180.18 0.495 10272 10274 3 -21957.3 -7373.6 176.07 0.495 10273 10275 3 -21959.1 -7374.18 176.09 0.495 10274 10276 3 -21961.2 -7372.8 165.06 0.37 10275 10277 3 -21962.3 -7373.22 165.04 0.37 10276 10278 3 -21963.2 -7373.38 158.91 0.37 10277 10279 3 -21963.2 -7373.63 158.87 0.37 10278 10280 3 -21963.8 -7373.87 156.75 0.37 10279 10281 3 -21964.1 -7373.44 154.1 0.37 10280 10282 3 -21966.8 -7372.89 148.21 0.37 10281 10283 3 -21969.8 -7370.96 138.19 0.37 10282 10284 3 -21971.2 -7371.59 132.88 0.37 10283 10285 3 -21971.2 -7371.53 132.48 0.37 10284 10286 3 -21971.9 -7371.04 132.03 0.37 10285 10287 3 -21972.8 -7369.79 130.31 0.37 10286 10288 3 -21974.7 -7368.19 122.62 0.25 10287 10289 3 -21976 -7366.19 113.44 0.25 10288 10290 3 -21976 -7366.76 119.81 0.25 10289 10291 3 -21974.5 -7365.27 119.92 0.495 10290 10292 3 -21974.8 -7362.64 114.04 0.495 10291 10293 3 -21974.6 -7362.33 114.07 0.495 10292 10294 3 -21973.6 -7359.71 109.52 0.37 10293 10295 3 -21972.6 -7360.64 109.27 0.37 10294 10296 3 -21972.3 -7358 102.4 0.37 10295 10297 3 -21971 -7356.06 97.33 0.37 10296 10298 3 -21970.5 -7354.05 97.36 0.37 10297 10299 3 -21970.7 -7352.83 97.96 0.125 10298 10300 3 -21952.5 -7368.46 197.76 0.62 10263 10301 3 -21954.2 -7368.42 197.93 0.62 10300 10302 3 -21956.2 -7368.43 198.11 0.62 10301 10303 3 -21957.5 -7367.27 198.42 0.62 10302 10304 3 -21957.5 -7367.01 198.47 0.62 10303 10305 3 -21958.7 -7366.29 194.9 0.62 10304 10306 3 -21959.3 -7367.69 196.16 0.495 10305 10307 3 -21960.6 -7368.11 196.22 0.495 10306 10308 3 -21961.9 -7368.12 193.71 0.495 10307 10309 3 -21962.4 -7368.97 193.61 0.495 10308 10310 3 -21963.7 -7369.16 190.67 0.495 10309 10311 3 -21963.6 -7370.89 189.72 0.495 10310 10312 3 -21963.8 -7371.14 186.48 0.495 10311 10313 3 -21958.8 -7372.28 186.96 0.495 10312 10314 3 -21960.2 -7373.14 184.38 0.37 10313 10315 3 -21960.3 -7374.17 184.07 0.37 10314 10316 3 -21961.4 -7375.04 183.24 0.37 10315 10317 3 -21962.9 -7375.05 181.89 0.37 10316 10318 3 -21963 -7374.9 180.97 0.37 10317 10319 3 -21964.2 -7374.84 179.41 0.37 10318 10320 3 -21965.1 -7375.51 176.38 0.37 10319 10321 3 -21966.6 -7375.85 175.41 0.37 10320 10322 3 -21968.5 -7374.77 170.96 0.37 10321 10323 3 -21969.3 -7374.68 171.06 0.37 10322 10324 3 -21971.7 -7374.97 168.94 0.37 10323 10325 3 -21972.4 -7375.66 168.89 0.37 10324 10326 3 -21974.6 -7375.15 162.68 0.37 10325 10327 3 -21976.4 -7374.29 159.42 0.37 10326 10328 3 -21977.8 -7374.96 157.21 0.37 10327 10329 3 -21979.1 -7375.68 152.76 0.37 10328 10330 3 -21980.2 -7376.1 152.36 0.37 10329 10331 3 -21981.9 -7376.27 148.63 0.25 10330 10332 3 -21981.8 -7376.51 148.58 0.25 10331 10333 3 -21983.1 -7377.19 148.15 0.25 10332 10334 3 -21984.6 -7377.83 145.87 0.25 10333 10335 3 -21985 -7378.95 145.61 0.25 10334 10336 3 
-21986.8 -7379.68 145.09 0.25 10335 10337 3 -21988.2 -7379 142.85 0.25 10336 10338 3 -21990.1 -7378.81 140.32 0.25 10337 10339 3 -21991.8 -7379.96 137.5 0.25 10338 10340 3 -21993 -7381.56 136.39 0.25 10339 10341 3 -21993 -7381.52 136.15 0.25 10340 10342 3 -21994.7 -7382.21 135.12 0.25 10341 10343 3 -21995.7 -7382.21 132.88 0.25 10342 10344 3 -21995.8 -7382.03 131.81 0.25 10343 10345 3 -21997.7 -7382.79 129.67 0.25 10344 10346 3 -21999.2 -7383.45 127.58 0.25 10345 10347 3 -22000.2 -7384.72 130.65 0.25 10346 10348 3 -22000.3 -7384.59 129.82 0.25 10347 10349 3 -22000.1 -7386.16 128.06 0.25 10348 10350 3 -21999.8 -7387.92 127.74 0.25 10349 10351 3 -22001.9 -7381.27 117.38 0.25 10348 10352 3 -22002.7 -7380.12 117.64 0.25 10351 10353 3 -22003 -7380.16 117.66 0.25 10352 10354 3 -21966.1 -7369.48 192.55 0.62 10310 10355 3 -21968.1 -7369.54 192.73 0.495 10354 10356 3 -21969.8 -7369.24 192.94 0.495 10355 10357 3 -21971.6 -7368.7 193.2 0.495 10356 10358 3 -21972.8 -7368.64 193.31 0.495 10357 10359 3 -21973 -7368.61 193.35 0.495 10358 10360 3 -21975 -7368.91 193.48 0.495 10359 10361 3 -21975.6 -7368.1 190.18 0.495 10360 10362 3 -21976.9 -7368.85 190.18 0.37 10361 10363 3 -21977.3 -7369.14 188.54 0.37 10362 10364 3 -21975.1 -7369.05 188.35 0.37 10363 10365 3 -21977.5 -7368.88 186.63 0.37 10364 10366 3 -21977.6 -7368.8 186.17 0.37 10365 10367 3 -21977.4 -7367.18 184.9 0.37 10366 10368 3 -21978.4 -7366.74 182.89 0.37 10367 10369 3 -21978.5 -7366.66 182.36 0.37 10368 10370 3 -21978.8 -7367.48 179.28 0.37 10369 10371 3 -21980.9 -7367.29 177.78 0.37 10370 10372 3 -21981 -7367.13 176.83 0.37 10371 10373 3 -21982.5 -7367.43 175.63 0.37 10372 10374 3 -21982.7 -7367.5 175.64 0.37 10373 10375 3 -21984.5 -7366.58 173.5 0.37 10374 10376 3 -21986.2 -7366.63 173.65 0.37 10375 10377 3 -21988.7 -7366.5 173.9 0.37 10376 10378 3 -21989.7 -7365.69 172.84 0.37 10377 10379 3 -21989.8 -7365.45 172.83 0.37 10378 10380 3 -21990.7 -7364.82 172.96 0.37 10379 10381 3 -21990.8 -7364.51 172.85 0.37 10380 10382 3 -21991.6 -7362.07 167.56 0.37 10381 10383 3 -21992.8 -7361.01 165.29 0.37 10382 10384 3 -21992.9 -7360.88 164.48 0.37 10383 10385 3 -21993.9 -7360.27 162.98 0.37 10384 10386 3 -21994.4 -7359.33 161.83 0.37 10385 10387 3 -21995.3 -7358.98 161.96 0.37 10386 10388 3 -21996.6 -7358.36 161.96 0.37 10387 10389 3 -21996.9 -7358.41 161.98 0.37 10388 10390 3 -21997.8 -7357.72 161.88 0.37 10389 10391 3 -21999.6 -7357.14 158.4 0.37 10390 10392 3 -22001.8 -7356.58 154.84 0.37 10391 10393 3 -22003.1 -7356.27 155.01 0.37 10392 10394 3 -22004.2 -7356.45 155.08 0.37 10393 10395 3 -22004.5 -7356.53 155.09 0.37 10394 10396 3 -22006.7 -7356.91 155.23 0.37 10395 10397 3 -22008.7 -7355.85 152.02 0.37 10396 10398 3 -22010.7 -7356.22 152.14 0.37 10397 10399 3 -22012.2 -7356.42 148.91 0.37 10398 10400 3 -22012.5 -7356.46 148.93 0.37 10399 10401 3 -22013.7 -7356.16 149.1 0.37 10400 10402 3 -22014.1 -7355.42 149.26 0.37 10401 10403 3 -22014.1 -7355.16 149.3 0.37 10402 10404 3 -22015.4 -7354.31 146.19 0.37 10403 10405 3 -22015.3 -7354.52 146.15 0.37 10404 10406 3 -22016.5 -7353.97 146.35 0.37 10405 10407 3 -22018.8 -7353.85 146.59 0.37 10406 10408 3 -22019.5 -7354.53 146.54 0.37 10407 10409 3 -22020.6 -7353.22 143.07 0.37 10408 10410 3 -22022 -7352.92 143.26 0.37 10409 10411 3 -22023.3 -7352.57 136.94 0.37 10410 10412 3 -22023.6 -7352.65 136.96 0.37 10411 10413 3 -22024.9 -7352.86 134.16 0.37 10412 10414 3 -22026.2 -7352.44 129.25 0.37 10413 10415 3 -22026.2 -7352.41 129.1 0.37 10414 10416 3 -22022.3 -7352.32 131.79 0.37 10415 10417 3 -22022.4 
-7352.3 131.71 0.37 10416 10418 3 -22021.4 -7350.79 131.66 0.37 10417 10419 3 -22019.1 -7350.01 126.05 0.37 10418 10420 3 -22018.7 -7350.18 122.85 0.37 10419 10421 3 -21977.4 -7368.37 191.9 0.495 10361 10422 3 -21979.2 -7368.13 192.11 0.495 10421 10423 3 -21980.9 -7367.5 192.37 0.37 10422 10424 3 -21982 -7367.97 192.4 0.37 10423 10425 3 -21984.3 -7367.73 192.65 0.37 10424 10426 3 -21984.3 -7367.48 192.69 0.37 10425 10427 3 -21985.8 -7365.92 190.7 0.37 10426 10428 3 -21986.6 -7364.49 191.01 0.37 10427 10429 3 -21986.9 -7364.25 191.08 0.37 10428 10430 3 -21988.3 -7364.14 191.23 0.37 10429 10431 3 -21988.3 -7363.96 191.26 0.37 10430 10432 3 -21989.4 -7363.52 192.8 0.37 10431 10433 3 -21991.7 -7361.74 193.31 0.37 10432 10434 3 -21993.5 -7360.56 193.19 0.37 10433 10435 3 -21994.3 -7360.96 193.2 0.37 10434 10436 3 -21994.3 -7361.24 193.15 0.37 10435 10437 3 -21996.4 -7360.72 193.43 0.37 10436 10438 3 -21997.5 -7360.34 193.61 0.37 10437 10439 3 -21999.9 -7359.88 193.9 0.37 10438 10440 3 -22001.6 -7359.74 193.56 0.37 10439 10441 3 -22003.4 -7359.46 193.77 0.37 10440 10442 3 -21999.9 -7360.37 193.3 0.37 10441 10443 3 -22000 -7360.14 193.34 0.37 10442 10444 3 -22001.5 -7360.88 193.37 0.37 10443 10445 3 -22003.7 -7361.57 193.45 0.37 10444 10446 3 -22005.5 -7361.43 192.4 0.37 10445 10447 3 -22007.6 -7362.2 191.92 0.37 10446 10448 3 -22007.6 -7362.47 191.87 0.37 10447 10449 3 -22009.9 -7362.08 192.26 0.37 10448 10450 3 -22011.3 -7362.35 192.34 0.37 10449 10451 3 -22013.4 -7363.51 192.34 0.37 10450 10452 3 -22016.5 -7363.28 192.67 0.37 10451 10453 3 -22017.4 -7364.52 192.55 0.37 10452 10454 3 -22017.4 -7364.77 192.51 0.37 10453 10455 3 -22017.9 -7367.42 194.93 0.37 10454 10456 3 -22018.1 -7367.45 194.95 0.37 10455 10457 3 -22020.6 -7367.67 195.15 0.37 10456 10458 3 -22022.4 -7366.97 195.43 0.37 10457 10459 3 -22024.5 -7366.54 195.7 0.37 10458 10460 3 -22028.4 -7366.03 193.76 0.37 10459 10461 3 -22030.4 -7366.09 193.91 0.37 10460 10462 3 -22032.7 -7366.1 193.32 0.37 10461 10463 3 -22034.6 -7366.69 193.4 0.37 10462 10464 3 -22037.2 -7367.72 193.47 0.37 10463 10465 3 -22040 -7368.63 191.58 0.37 10464 10466 3 -22042.4 -7369.32 191.7 0.37 10465 10467 3 -22047 -7369.95 189.38 0.37 10466 10468 3 -22049.2 -7370.77 187.51 0.37 10467 10469 3 -22051.3 -7372.7 185.75 0.25 10468 10470 3 -22052.7 -7374.3 185.62 0.25 10469 10471 3 -22056.3 -7377.22 183.38 0.25 10470 10472 3 -22057 -7377.88 183.33 0.25 10471 10473 3 -22057.3 -7378.12 183.32 0.25 10472 10474 3 -22057.4 -7379.47 183.1 0.25 10473 10475 3 -22059 -7379.69 178.35 0.25 10474 10476 3 -22061.1 -7380.84 178.36 0.25 10475 10477 3 -22061.1 -7380.76 177.86 0.25 10476 10478 3 -22064 -7380.94 173.24 0.25 10477 10479 3 -22067.8 -7381.66 173.48 0.25 10478 10480 3 -22072 -7383.21 173.62 0.25 10479 10481 3 -22074.4 -7384.43 173.64 0.25 10480 10482 3 -21960.3 -7366.73 199.27 0.495 10305 10483 3 -21962.1 -7366.37 199.49 0.495 10482 10484 3 -21962.9 -7364.75 198.7 0.495 10483 10485 3 -21963.6 -7364.05 198.88 0.495 10484 10486 3 -21964.5 -7363.13 199.12 0.495 10485 10487 3 -21964.6 -7362.92 199.16 0.495 10486 10488 3 -21966.5 -7363.13 199.31 0.495 10487 10489 3 -21968.4 -7361.92 198.77 0.37 10488 10490 3 -21968.5 -7361.18 198.9 0.37 10489 10491 3 -21968.6 -7360.91 198.95 0.37 10490 10492 3 -21969.2 -7360.39 199.09 0.37 10491 10493 3 -21970 -7358.93 199.41 0.37 10492 10494 3 -21970 -7358.66 199.45 0.37 10493 10495 3 -21970.3 -7356.57 199.54 0.495 10494 10496 3 -21969.5 -7355.39 199.67 0.495 10495 10497 3 -21969.8 -7355.43 199.69 0.495 10496 10498 3 -21970.6 -7354.23 199.96 
0.495 10497 10499 3 -21970.6 -7353.97 200 0.495 10498 10500 3 -21970.3 -7354.13 201.31 0.495 10499 10501 3 -21971.2 -7351.93 200.39 0.37 10500 10502 3 -21971.5 -7350.92 200.59 0.37 10501 10503 3 -21971.4 -7349.84 200.76 0.37 10502 10504 3 -21971.8 -7349.38 200.87 0.37 10503 10505 3 -21971.8 -7349.13 200.91 0.37 10504 10506 3 -21972.7 -7348.48 201.11 0.37 10505 10507 3 -21973.5 -7347.98 202.62 0.37 10506 10508 3 -21973.3 -7347.44 202.69 0.37 10507 10509 3 -21973.1 -7346.6 202.81 0.37 10508 10510 3 -21974.5 -7345.4 202.58 0.37 10509 10511 3 -21975.1 -7344.7 202.76 0.37 10510 10512 3 -21975.5 -7344.47 202.83 0.37 10511 10513 3 -21975.5 -7343.69 202.96 0.37 10512 10514 3 -21975.8 -7343.71 202.99 0.37 10513 10515 3 -21976.2 -7342.69 203.14 0.37 10514 10516 3 -21978.5 -7341.02 206.06 0.37 10515 10517 3 -21978.9 -7340.32 206.21 0.37 10516 10518 3 -21980.2 -7339.42 206.48 0.37 10517 10519 3 -21980.8 -7338.98 206.61 0.37 10518 10520 3 -21980.8 -7338.73 206.65 0.37 10519 10521 3 -21981.2 -7338.01 206.81 0.37 10520 10522 3 -21983 -7336.92 207.16 0.37 10521 10523 3 -21984.6 -7337.66 207.18 0.37 10522 10524 3 -21984.7 -7337.41 207.23 0.37 10523 10525 3 -21985 -7337.21 207.29 0.37 10524 10526 3 -21985 -7336.95 207.34 0.37 10525 10527 3 -21985.8 -7335.97 209.25 0.37 10526 10528 3 -21986 -7334.97 209.43 0.37 10527 10529 3 -21987.8 -7333.93 209.78 0.37 10528 10530 3 -21989 -7333.81 209.9 0.37 10529 10531 3 -21990.3 -7332.81 209.43 0.37 10530 10532 3 -21991.5 -7331.9 209.7 0.37 10531 10533 3 -21994.3 -7330.44 210.2 0.37 10532 10534 3 -21996 -7329.9 210.46 0.37 10533 10535 3 -21998 -7328.98 212.89 0.37 10534 10536 3 -22000 -7328.94 213.08 0.37 10535 10537 3 -22000.2 -7328.71 213.15 0.37 10536 10538 3 -22002.2 -7328.48 213.37 0.37 10537 10539 3 -22003.2 -7327.42 214.38 0.495 10538 10540 3 -22005.3 -7326.42 214.75 0.37 10539 10541 3 -22006.6 -7325.5 215.02 0.37 10540 10542 3 -22008.3 -7325.18 215.23 0.37 10541 10543 3 -22009.1 -7323.74 218.81 0.37 10542 10544 3 -22010.4 -7323.28 221.48 0.37 10543 10545 3 -22013.2 -7322.14 220.61 0.37 10544 10546 3 -22014.2 -7321.21 220.86 0.37 10545 10547 3 -22014.5 -7320.98 220.92 0.37 10546 10548 3 -22014.7 -7319.45 221.2 0.37 10547 10549 3 -22015.2 -7317.67 221.54 0.37 10548 10550 3 -22015.5 -7317.7 221.56 0.37 10549 10551 3 -22017.2 -7316.87 223.23 0.37 10550 10552 3 -22018.2 -7315.95 223.48 0.37 10551 10553 3 -22019.4 -7314.77 223.79 0.37 10552 10554 3 -22020.5 -7313.07 224.17 0.37 10553 10555 3 -22021.7 -7312.18 224.43 0.37 10554 10556 3 -22023.3 -7311.73 226.85 0.37 10555 10557 3 -22024.7 -7310.33 227.21 0.37 10556 10558 3 -22026.5 -7308.48 229.34 0.37 10557 10559 3 -22028.2 -7307.76 230.18 0.37 10558 10560 3 -22028.5 -7307.79 230.2 0.37 10559 10561 3 -22030.5 -7305.03 231.49 0.37 10560 10562 3 -22032 -7304.2 231.76 0.37 10561 10563 3 -22034.2 -7302.85 232.19 0.37 10562 10564 3 -22034.2 -7300.74 232.54 0.37 10563 10565 3 -22034.2 -7300.49 232.58 0.37 10564 10566 3 -22036 -7299.66 232.88 0.37 10565 10567 3 -22037.6 -7298.22 232.75 0.37 10566 10568 3 -22038.9 -7296.85 233.1 0.37 10567 10569 3 -22039 -7296.56 233.15 0.37 10568 10570 3 -22039.4 -7295.04 233.45 0.25 10569 10571 3 -22040.4 -7294.13 233.69 0.25 10570 10572 3 -22040.7 -7293.91 233.76 0.25 10571 10573 3 -22041.3 -7294.05 237.21 0.37 10572 10574 3 -22041.6 -7294.1 237.23 0.37 10573 10575 3 -22042.2 -7293.66 237.36 0.37 10574 10576 3 -22043.2 -7292.48 237.65 0.25 10575 10577 3 -22043.2 -7292.19 237.7 0.25 10576 10578 3 -22043.9 -7290.99 237.97 0.37 10577 10579 3 -22045 -7289.54 238.3 0.25 10578 10580 3 -22045.3 
-7289.56 238.32 0.25 10579 10581 3 -22045.7 -7289.92 243.13 0.37 10580 10582 3 -22046.9 -7289.82 243.26 0.37 10581 10583 3 -22048.4 -7289.02 243.53 0.37 10582 10584 3 -22048.7 -7288.75 243.6 0.37 10583 10585 3 -22050 -7288.69 245.45 0.37 10584 10586 3 -22052.3 -7288.09 246.58 0.125 10585 10587 3 -22052.2 -7288.15 246.99 0.125 10586 10588 3 -22055.5 -7286.96 251.63 0.125 10587 10589 3 -21972.6 -7352.61 197.08 0.37 10500 10590 3 -21973.4 -7350.85 197.26 0.37 10589 10591 3 -21974 -7348.45 192.56 0.37 10590 10592 3 -21969.2 -7347.82 192.22 0.37 10591 10593 3 -21969.3 -7347 192.36 0.37 10592 10594 3 -21970.2 -7345.4 192.71 0.37 10593 10595 3 -21970.2 -7345.1 192.76 0.37 10594 10596 3 -21971.8 -7343.47 189.84 0.37 10595 10597 3 -21973.1 -7342.92 190.05 0.25 10596 10598 3 -21973.4 -7341.12 190.38 0.37 10597 10599 3 -21975.5 -7339.59 187.65 0.37 10598 10600 3 -21975.9 -7339.18 187.75 0.37 10599 10601 3 -21975.7 -7338.61 187.83 0.37 10600 10602 3 -21975.7 -7338.37 187.87 0.37 10601 10603 3 -21976.1 -7337.36 188.08 0.37 10602 10604 3 -21977.3 -7337.05 188.24 0.37 10603 10605 3 -21978 -7335.71 186.2 0.37 10604 10606 3 -21977.8 -7335.19 186.26 0.37 10605 10607 3 -21978.5 -7333.94 186.54 0.37 10606 10608 3 -21978.6 -7333.72 186.58 0.37 10607 10609 3 -21979 -7333.52 186.65 0.37 10608 10610 3 -21979.8 -7332.75 184.71 0.37 10609 10611 3 -21980 -7332.56 184.76 0.37 10610 10612 3 -21980.2 -7332.28 184.82 0.37 10611 10613 3 -21981.7 -7331.31 184.26 0.37 10612 10614 3 -21981.8 -7331.17 183.42 0.37 10613 10615 3 -21982.5 -7330.57 183.58 0.37 10614 10616 3 -21983 -7329.59 183.79 0.37 10615 10617 3 -21983 -7329.31 183.84 0.37 10616 10618 3 -21982.4 -7327.07 182.37 0.37 10617 10619 3 -21982.7 -7326.59 182.48 0.37 10618 10620 3 -21982.3 -7325.73 182.58 0.37 10619 10621 3 -21982.1 -7325.4 182.62 0.37 10620 10622 3 -21981.2 -7323.03 180.79 0.37 10621 10623 3 -21981.2 -7322.79 180.84 0.37 10622 10624 3 -21981.5 -7321.4 180.27 0.37 10623 10625 3 -21981.6 -7321.01 179.7 0.37 10624 10626 3 -21981.5 -7320.21 179.73 0.37 10625 10627 3 -21981.2 -7319.9 179.76 0.37 10626 10628 3 -21980.8 -7317.41 180.13 0.37 10627 10629 3 -21980.8 -7317.19 180.17 0.37 10628 10630 3 -21982.4 -7315.87 177.63 0.37 10629 10631 3 -21983.5 -7314.5 177.95 0.37 10630 10632 3 -21984.3 -7312.13 176.15 0.37 10631 10633 3 -21984.6 -7311.62 176.13 0.37 10632 10634 3 -21985 -7311.41 175.85 0.37 10633 10635 3 -21984.9 -7310.21 175.36 0.37 10634 10636 3 -21985.1 -7309.8 174.41 0.37 10635 10637 3 -21985 -7309.31 173.46 0.37 10636 10638 3 -21985 -7309.04 173.51 0.37 10637 10639 3 -21984.1 -7305.76 169.68 0.37 10638 10640 3 -21984.2 -7305.26 169.77 0.37 10639 10641 3 -21984.3 -7304.52 169.91 0.37 10640 10642 3 -21984.4 -7304.24 169.96 0.37 10641 10643 3 -21984.5 -7303.49 170.09 0.37 10642 10644 3 -21984.5 -7303.26 170.14 0.37 10643 10645 3 -21984.5 -7302.99 170.17 0.37 10644 10646 3 -21985.8 -7299.9 168.59 0.25 10645 10647 3 -21985.5 -7298.03 168.87 0.37 10646 10648 3 -21985.5 -7296.1 170.2 0.37 10647 10649 3 -21986.3 -7294.63 170.52 0.37 10648 10650 3 -21987.3 -7291.3 169.25 0.37 10649 10651 3 -21987.1 -7288.91 169.63 0.25 10650 10652 3 -21988 -7288.46 170.87 0.25 10651 10653 3 -21988.8 -7286.04 170.21 0.25 10652 10654 3 -21988.8 -7285.88 169.24 0.25 10653 10655 3 -21989.3 -7283.59 168.39 0.25 10654 10656 3 -21989.3 -7282.28 167.23 0.25 10655 10657 3 -21989.5 -7281.87 165.97 0.25 10656 10658 3 -21989.2 -7277.6 163.74 0.25 10657 10659 3 -21988.5 -7275.92 162.3 0.25 10658 10660 3 -21988.5 -7275.64 162.35 0.25 10659 10661 3 -21986.5 -7274.46 162.35 0.25 10660 
10662 3 -21985.2 -7273.43 162.41 0.25 10661 10663 3 -21984 -7273.59 162.78 0.37 10662 10664 3 -21908.5 -7376.82 208.84 2.48 1 10665 3 -21907.8 -7375.94 208.92 2.48 10664 10666 3 -21908.2 -7375.39 209.04 2.105 10665 10667 3 -21908.6 -7374.19 209.28 1.485 10666 10668 3 -21908.8 -7372.55 209.57 0.99 10667 10669 3 -21908.7 -7370.98 209.82 0.99 10668 10670 3 -21908.1 -7369.33 210.03 0.99 10669 10671 3 -21908.3 -7367.53 210.35 0.99 10670 10672 3 -21908.5 -7366.21 210.59 0.99 10671 10673 3 -21907.5 -7364.16 207.37 0.62 10672 10674 3 -21906.9 -7362.75 207.54 0.62 10673 10675 3 -21906.7 -7362.22 207.61 0.62 10674 10676 3 -21907.7 -7361.01 207.91 0.62 10675 10677 3 -21907.4 -7358.81 203.51 0.62 10676 10678 3 -21907.4 -7359.1 203.46 0.62 10677 10679 3 -21906.9 -7358.24 203.56 0.62 10678 10680 3 -21907 -7357.71 203.65 0.62 10679 10681 3 -21908.3 -7356.05 199.55 0.495 10680 10682 3 -21907.8 -7355.47 199.58 0.495 10681 10683 3 -21907.9 -7355.23 199.57 0.495 10682 10684 3 -21907.8 -7353.86 199.64 0.495 10683 10685 3 -21907.8 -7353.84 199.51 0.495 10684 10686 3 -21907.5 -7351.43 195.47 0.495 10685 10687 3 -21907.2 -7350.31 195.62 0.495 10686 10688 3 -21905.8 -7348.86 194.26 0.495 10687 10689 3 -21905.9 -7347.77 194.43 0.495 10688 10690 3 -21906.4 -7345.25 192.09 0.495 10689 10691 3 -21906 -7344.12 192.13 0.495 10690 10692 3 -21906 -7344.1 191.98 0.495 10691 10693 3 -21905.6 -7342.93 192.08 0.495 10692 10694 3 -21905.9 -7341.21 191.47 0.495 10693 10695 3 -21906 -7338.35 187.3 0.495 10694 10696 3 -21906 -7337.04 186.1 0.495 10695 10697 3 -21899.3 -7335.31 185.76 0.495 10696 10698 3 -21900 -7334.14 186.02 0.495 10697 10699 3 -21902.7 -7333.38 182.29 0.495 10698 10700 3 -21902.8 -7329.97 185.75 0.495 10699 10701 3 -21903.1 -7327.73 182.07 0.495 10700 10702 3 -21903.1 -7327.71 181.94 0.495 10701 10703 3 -21903.1 -7326.34 182.15 0.495 10702 10704 3 -21903 -7325.27 182.32 0.495 10703 10705 3 -21903.3 -7322.65 179.33 0.495 10704 10706 3 -21904 -7320.63 179.73 0.495 10705 10707 3 -21905 -7320.5 178.12 0.495 10706 10708 3 -21905.3 -7320.08 178.22 0.495 10707 10709 3 -21905 -7318.42 178.47 0.495 10708 10710 3 -21906.1 -7316.64 175.27 0.495 10709 10711 3 -21906.6 -7315.14 175.57 0.495 10710 10712 3 -21906.9 -7312.87 173.31 0.495 10711 10713 3 -21906.8 -7313.07 174.57 0.495 10712 10714 3 -21907.1 -7312.63 174.67 0.495 10713 10715 3 -21906.4 -7310.52 171.06 0.495 10714 10716 3 -21905.6 -7308.23 171.23 0.37 10715 10717 3 -21904.7 -7307.21 170.86 0.495 10716 10718 3 -21904.3 -7304.62 165.94 0.37 10717 10719 3 -21903.4 -7303.4 166.06 0.37 10718 10720 3 -21902 -7303.19 165.96 0.37 10719 10721 3 -21902.4 -7301.31 164.25 0.25 10720 10722 3 -21902.4 -7301.02 164.3 0.25 10721 10723 3 -21902.5 -7300.77 164.35 0.25 10722 10724 3 -21903.2 -7300.34 164.48 0.37 10723 10725 3 -21903.8 -7297.86 161.94 0.495 10724 10726 3 -21904 -7297.9 161.95 0.495 10725 10727 3 -21902.9 -7296.35 162.11 0.495 10726 10728 3 -21902.6 -7294.9 161.98 0.495 10727 10729 3 -21901.4 -7293.9 162.03 0.37 10728 10730 3 -21901.6 -7293.67 162.09 0.37 10729 10731 3 -21901 -7292.52 162.22 0.37 10730 10732 3 -21901 -7292.26 162.27 0.37 10731 10733 3 -21900.7 -7290.9 162.46 0.37 10732 10734 3 -21900.8 -7290.64 162.51 0.37 10733 10735 3 -21899.3 -7289.58 162.54 0.37 10734 10736 3 -21899.1 -7288.77 162.66 0.25 10735 10737 3 -21899.1 -7288.54 162.7 0.25 10736 10738 3 -21898.8 -7287.41 162.86 0.37 10737 10739 3 -21898.8 -7287.15 162.9 0.37 10738 10740 3 -21898.4 -7286.03 163.05 0.37 10739 10741 3 -21898.5 -7285.76 163.1 0.37 10740 10742 3 -21898.6 -7283.33 163.15 0.37 
10741 10743 3 -21897 -7282.75 163.13 0.37 10742 10744 3 -21895.4 -7281.95 159.81 0.25 10743 10745 3 -21894.1 -7279.88 158.69 0.37 10744 10746 3 -21893.8 -7278.26 158.93 0.37 10745 10747 3 -21894 -7275.65 159.38 0.37 10746 10748 3 -21893.2 -7275.24 159.37 0.37 10747 10749 3 -21893.3 -7275 159.42 0.37 10748 10750 3 -21893.2 -7273.64 159.64 0.37 10749 10751 3 -21893 -7273.65 159.61 0.37 10750 10752 3 -21890 -7270.78 160.04 0.37 10751 10753 3 -21889.9 -7269.69 160.21 0.37 10752 10754 3 -21889.7 -7269.12 160.29 0.37 10753 10755 3 -21889.5 -7268.29 160.41 0.37 10754 10756 3 -21887.7 -7265.88 156.14 0.37 10755 10757 3 -21886.4 -7265.09 156.2 0.37 10756 10758 3 -21886 -7264.52 156.26 0.37 10757 10759 3 -21885.3 -7263.61 156.38 0.37 10758 10760 3 -21884.2 -7263.44 156.3 0.37 10759 10761 3 -21883.6 -7263.6 156.22 0.37 10760 10762 3 -21883 -7263.48 156.21 0.37 10761 10763 3 -21880.8 -7262.36 153.44 0.25 10762 10764 3 -21879 -7261.22 153.46 0.25 10763 10765 3 -21877.6 -7259.92 153.53 0.37 10764 10766 3 -21876.5 -7257.07 150.8 0.37 10765 10767 3 -21876.1 -7256.22 150.96 0.37 10766 10768 3 -21874.1 -7256.4 150.77 0.37 10767 10769 3 -21873.9 -7256.36 150.77 0.37 10768 10770 3 -21872.9 -7254.47 148.61 0.25 10769 10771 3 -21870.8 -7253.28 148.61 0.25 10770 10772 3 -21865.5 -7250.49 142.71 0.25 10771 10773 3 -21865 -7250.54 141.71 0.37 10772 10774 3 -21909.1 -7312.57 171.07 0.37 10714 10775 3 -21909.3 -7312.58 171.09 0.37 10774 10776 3 -21909.9 -7310.83 168.26 0.37 10775 10777 3 -21910 -7310.08 168.39 0.37 10776 10778 3 -21910.9 -7309.18 165.61 0.37 10777 10779 3 -21911.2 -7307.91 164.3 0.37 10778 10780 3 -21912.4 -7307.56 164.46 0.37 10779 10781 3 -21912.4 -7307.33 164.51 0.37 10780 10782 3 -21911.9 -7305.92 163.08 0.37 10781 10783 3 -21912.3 -7303.9 157.67 0.37 10782 10784 3 -21913.6 -7303.7 156.76 0.37 10783 10785 3 -21913.5 -7302.46 156.09 0.37 10784 10786 3 -21914 -7300.07 151.59 0.37 10785 10787 3 -21913.7 -7300.04 151.57 0.37 10786 10788 3 -21913.4 -7298.88 154.2 0.37 10787 10789 3 -21912.8 -7297.13 151.28 0.37 10788 10790 3 -21912.9 -7296.89 151.32 0.37 10789 10791 3 -21913.7 -7296.76 151.42 0.37 10790 10792 3 -21915.1 -7295.42 151.77 0.37 10791 10793 3 -21915.2 -7295.09 151.4 0.37 10792 10794 3 -21915.4 -7293.38 152.37 0.37 10793 10795 3 -21915 -7292.57 152.66 0.37 10794 10796 3 -21914.7 -7290.09 146.62 0.25 10795 10797 3 -21914.8 -7289.13 147.08 0.25 10796 10798 3 -21915.1 -7288.71 147.65 0.37 10797 10799 3 -21915.2 -7288.45 147.71 0.37 10798 10800 3 -21915 -7286.38 142.44 0.37 10799 10801 3 -21915 -7285.11 142.65 0.37 10800 10802 3 -21914.7 -7284.75 142.69 0.37 10801 10803 3 -21914.7 -7283.03 139 0.25 10802 10804 3 -21914.8 -7282.75 139.05 0.25 10803 10805 3 -21914.8 -7282.51 139.09 0.25 10804 10806 3 -21914 -7280.89 138.35 0.37 10805 10807 3 -21914.4 -7279.98 139.09 0.37 10806 10808 3 -21914.4 -7279.73 139.14 0.37 10807 10809 3 -21915 -7276.27 136.14 0.37 10808 10810 3 -21916 -7275.17 136.41 0.37 10809 10811 3 -21916.8 -7275.29 136.47 0.37 10810 10812 3 -21916.8 -7275.01 136.52 0.37 10811 10813 3 -21917.3 -7274.29 136.68 0.37 10812 10814 3 -21917.3 -7274.01 136.73 0.37 10813 10815 3 -21918.4 -7273.16 135.17 0.37 10814 10816 3 -21918.4 -7273.42 135.12 0.37 10815 10817 3 -21919.6 -7272.6 135.38 0.37 10816 10818 3 -21920.3 -7271.91 135.56 0.37 10817 10819 3 -21920.7 -7271.73 135.62 0.37 10818 10820 3 -21922.2 -7270.22 132.06 0.37 10819 10821 3 -21923.3 -7269.69 134.03 0.25 10820 10822 3 -21923.3 -7269.73 134.28 0.25 10821 10823 3 -21925.1 -7267.93 128.52 0.25 10822 10824 3 -21927.3 -7268.84 128.57 
0.25 10823 10825 3 -21929.2 -7269.16 128.7 0.25 10824 10826 3 -21930.5 -7267.73 130.18 0.25 10825 10827 3 -21932.5 -7267.26 125.57 0.25 10826 10828 3 -21934.3 -7266.77 125.52 0.25 10827 10829 3 -21934.4 -7266.54 125.57 0.25 10828 10830 3 -21936.1 -7265.13 125.45 0.25 10829 10831 3 -21903.8 -7333.31 173.46 0.37 10699 10832 3 -21905.1 -7332.5 169.32 0.37 10831 10833 3 -21907.1 -7331.77 166.52 0.37 10832 10834 3 -21907.4 -7331.78 166.19 0.37 10833 10835 3 -21907.6 -7332.05 165.99 0.37 10834 10836 3 -21907.7 -7331.88 164.95 0.37 10835 10837 3 -21909.1 -7332.63 160.28 0.37 10836 10838 3 -21910.3 -7331.52 153.33 0.37 10837 10839 3 -21910.9 -7331.11 153.46 0.37 10838 10840 3 -21912.6 -7331.65 151.68 0.125 10839 10841 3 -21913.6 -7332.64 151.61 0.37 10840 10842 3 -21914.3 -7333.27 151.57 0.37 10841 10843 3 -21916.7 -7332.42 147.76 0.125 10842 10844 3 -21917.3 -7331.97 147.89 0.37 10843 10845 3 -21917.6 -7332.05 147.9 0.37 10844 10846 3 -21918.5 -7331.69 148.05 0.37 10845 10847 3 -21920.4 -7330.05 144.68 0.125 10846 10848 3 -21921.7 -7329.46 144.75 0.37 10847 10849 3 -21922 -7328.99 144.86 0.37 10848 10850 3 -21923.4 -7328.81 142.44 0.37 10849 10851 3 -21923.7 -7328.3 142.56 0.25 10850 10852 3 -21924.8 -7327.18 142.72 0.37 10851 10853 3 -21924.8 -7327.11 142.25 0.37 10852 10854 3 -21926.3 -7326.42 138.84 0.37 10853 10855 3 -21926.8 -7325.48 137.46 0.37 10854 10856 3 -21926.5 -7323.93 133.62 0.37 10855 10857 3 -21928.1 -7324.18 133.73 0.37 10856 10858 3 -21928.2 -7323.94 133.78 0.37 10857 10859 3 -21928 -7323.11 133.9 0.37 10858 10860 3 -21928.7 -7320.85 129.87 0.37 10859 10861 3 -21929.8 -7320.51 127.02 0.37 10860 10862 3 -21929.8 -7320.26 127.07 0.37 10861 10863 3 -21931.1 -7319.41 127.1 0.37 10862 10864 3 -21932.4 -7318.32 127.28 0.37 10863 10865 3 -21933.2 -7317.52 126.9 0.37 10864 10866 3 -21935 -7313.21 126.91 0.25 10865 10867 3 -21936.1 -7311.46 123.77 0.25 10866 10868 3 -21936.2 -7311.22 123.82 0.25 10867 10869 3 -21937.5 -7311.43 123.91 0.25 10868 10870 3 -21937.8 -7311.45 123.93 0.25 10869 10871 3 -21938.5 -7310.74 124.11 0.25 10870 10872 3 -21938.5 -7310.25 124.2 0.25 10871 10873 3 -21939.4 -7308.31 120.21 0.25 10872 10874 3 -21939.2 -7308.59 121.92 0.25 10873 10875 3 -21938.7 -7308.28 121.92 0.25 10874 10876 3 -21938.5 -7307.96 121.95 0.25 10875 10877 3 -21938.3 -7307.58 121.72 0.25 10876 10878 3 -21938 -7307.25 121.75 0.25 10877 10879 3 -21937.7 -7306.27 120.71 0.25 10878 10880 3 -21937.6 -7305.73 119.19 0.25 10879 10881 3 -21937.5 -7304.69 118.28 0.25 10880 10882 3 -21937.4 -7303.86 118.4 0.25 10881 10883 3 -21937.4 -7302.82 117.07 0.25 10882 10884 3 -21937.2 -7302.24 117.14 0.25 10883 10885 3 -21936.6 -7300.7 120.96 0.25 10884 10886 3 -21936.9 -7300.46 121.04 0.25 10885 10887 3 -21936.7 -7299.91 121.11 0.495 10886 10888 3 -21906.1 -7374.04 208.82 1.24 10666 10889 3 -21904.9 -7372.03 209.04 1.24 10888 10890 3 -21903.8 -7370.52 212.07 1.24 10889 10891 3 -21903.9 -7369.47 212.26 1.24 10890 10892 3 -21903.6 -7367.32 212.58 1.115 10891 10893 3 -21903.5 -7365.96 212.8 1.24 10892 10894 3 -21902.9 -7363.55 213.15 1.24 10893 10895 3 -21902.2 -7362.91 213.18 1.24 10894 10896 3 -21902 -7361.78 213.35 1.24 10895 10897 3 -21902 -7361.54 213.4 1.24 10896 10898 3 -21901.8 -7360.98 213.47 1.24 10897 10899 3 -21901.5 -7359.07 213.76 1.115 10898 10900 3 -21901.5 -7358.81 213.8 1.115 10899 10901 3 -21901.8 -7356.78 214.16 1.115 10900 10902 3 -21901.3 -7355.43 211.75 1.115 10901 10903 3 -21901.2 -7354.07 211.96 1.115 10902 10904 3 -21900.8 -7352.94 212.11 1.115 10903 10905 3 -21900.8 -7352.66 212.16 
1.115 10904 10906 3 -21900.5 -7350.84 212.43 0.99 10905 10907 3 -21900.9 -7349.8 212.64 0.99 10906 10908 3 -21900.2 -7348.13 212.85 0.99 10907 10909 3 -21899 -7346.9 212.94 0.99 10908 10910 3 -21898.6 -7345.79 213.09 1.115 10909 10911 3 -21898.4 -7343.17 216.09 1.24 10910 10912 3 -21897.9 -7340.02 212.57 1.24 10911 10913 3 -21896.2 -7339.51 213.92 0.62 10912 10914 3 -21894.5 -7338.29 211.24 0.495 10913 10915 3 -21893.5 -7337.61 211.26 0.62 10914 10916 3 -21892.9 -7335.94 211.47 0.62 10915 10917 3 -21891.3 -7335.2 211.44 0.62 10916 10918 3 -21890.2 -7334.65 210.89 0.62 10917 10919 3 -21888.7 -7332.89 211.14 0.495 10918 10920 3 -21888.3 -7330.96 209.73 0.495 10919 10921 3 -21887.6 -7329.79 209.86 0.495 10920 10922 3 -21887.6 -7329.55 209.9 0.495 10921 10923 3 -21886.7 -7328.07 210.05 0.495 10922 10924 3 -21886.3 -7326.65 205.26 0.37 10923 10925 3 -21885.2 -7326.24 205.23 0.37 10924 10926 3 -21885 -7325.91 205.26 0.37 10925 10927 3 -21883.9 -7326.03 205.13 0.37 10926 10928 3 -21882.9 -7324.61 205.28 0.37 10927 10929 3 -21882.8 -7322.97 205.54 0.37 10928 10930 3 -21880.9 -7322.7 205.4 0.37 10929 10931 3 -21880.6 -7322.65 205.39 0.37 10930 10932 3 -21880.3 -7320.51 205.72 0.37 10931 10933 3 -21880.4 -7320.29 205.76 0.37 10932 10934 3 -21879.8 -7319.35 202.46 0.37 10933 10935 3 -21878.8 -7318.4 202.52 0.37 10934 10936 3 -21878.8 -7318.19 202.56 0.37 10935 10937 3 -21878.9 -7317.9 202.61 0.37 10936 10938 3 -21878.7 -7317.59 202.64 0.37 10937 10939 3 -21877.1 -7316.03 199.51 0.37 10938 10940 3 -21877.2 -7315.77 199.56 0.37 10939 10941 3 -21876.4 -7314.87 199.64 0.37 10940 10942 3 -21875.1 -7313.92 199.68 0.37 10941 10943 3 -21875.1 -7313.68 199.72 0.37 10942 10944 3 -21874.8 -7312.07 195.8 0.37 10943 10945 3 -21873.9 -7311.95 195.74 0.37 10944 10946 3 -21873.7 -7311.65 195.74 0.37 10945 10947 3 -21872.5 -7311.5 195.66 0.37 10946 10948 3 -21872.7 -7310.45 195.84 0.37 10947 10949 3 -21872.8 -7310.18 195.88 0.37 10948 10950 3 -21872.2 -7309.56 195.91 0.37 10949 10951 3 -21872.3 -7309.33 195.96 0.37 10950 10952 3 -21871.8 -7308.46 196.04 0.37 10951 10953 3 -21871.6 -7308.17 196.07 0.37 10952 10954 3 -21870 -7307.42 196.12 0.37 10953 10955 3 -21869.3 -7306.06 196.28 0.37 10954 10956 3 -21869.1 -7305.47 196.36 0.37 10955 10957 3 -21868.1 -7303.92 193.1 0.37 10956 10958 3 -21868.2 -7303.71 193.14 0.37 10957 10959 3 -21867.7 -7303.07 193.19 0.37 10958 10960 3 -21867.1 -7302.75 193.2 0.37 10959 10961 3 -21866.9 -7302.72 193.18 0.37 10960 10962 3 -21866 -7302.84 193.08 0.37 10961 10963 3 -21864.8 -7302.66 192.98 0.37 10962 10964 3 -21864.7 -7302.61 192.63 0.37 10963 10965 3 -21864.4 -7302.06 192.7 0.37 10964 10966 3 -21864.2 -7301.41 192.38 0.37 10965 10967 3 -21864.2 -7300 192.14 0.37 10966 10968 3 -21863.5 -7299.28 190.14 0.37 10967 10969 3 -21862 -7298.32 190.16 0.37 10968 10970 3 -21860.9 -7297.87 190.13 0.37 10969 10971 3 -21860.4 -7297.31 190.18 0.37 10970 10972 3 -21859.9 -7296.96 190.19 0.37 10971 10973 3 -21859.9 -7296.45 190.27 0.37 10972 10974 3 -21860 -7295.86 190.38 0.37 10973 10975 3 -21860.1 -7295.64 190.42 0.37 10974 10976 3 -21859.4 -7294.81 189.29 0.37 10975 10977 3 -21858 -7293.99 190.2 0.37 10976 10978 3 -21853.5 -7292.61 190 0.37 10977 10979 3 -21851.5 -7292.3 191.47 0.37 10978 10980 3 -21850.2 -7291.29 191.53 0.37 10979 10981 3 -21848.6 -7289.23 187.88 0.37 10980 10982 3 -21847 -7288.98 187.77 0.25 10981 10983 3 -21844.7 -7288.56 187.63 0.37 10982 10984 3 -21842.1 -7288.62 184.44 0.37 10983 10985 3 -21839.9 -7288.51 184.26 0.37 10984 10986 3 -21838.2 -7287.58 182.3 0.37 10985 10987 3 
-21836.2 -7287.49 182.13 0.37 10986 10988 3 -21834 -7286.43 179.61 0.25 10987 10989 3 -21832.9 -7286 179.51 0.25 10988 10990 3 -21829.4 -7285.2 175.45 0.37 10989 10991 3 -21827.4 -7284.88 175.73 0.37 10990 10992 3 -21826 -7282.73 175.78 0.37 10991 10993 3 -21823.4 -7282.24 175.33 0.37 10992 10994 3 -21823.2 -7281.91 174.99 0.37 10993 10995 3 -21822.3 -7280.1 171.92 0.37 10994 10996 3 -21820.3 -7279.73 171.8 0.37 10995 10997 3 -21817.7 -7279 171.67 0.37 10996 10998 3 -21816.7 -7278.27 171.7 0.37 10997 10999 3 -21816.2 -7277.96 171.71 0.37 10998 11000 3 -21816.2 -7277.71 171.75 0.37 10999 11001 3 -21815.4 -7277.3 171.75 0.37 11000 11002 3 -21814.9 -7276.92 171.76 0.37 11001 11003 3 -21812.4 -7275.46 170.3 0.37 11002 11004 3 -21810.8 -7274.89 170.25 0.37 11003 11005 3 -21810.1 -7273.98 170.33 0.37 11004 11006 3 -21810.2 -7273.72 170.38 0.37 11005 11007 3 -21809.7 -7273.36 170.39 0.37 11006 11008 3 -21809.5 -7273.07 170.42 0.37 11007 11009 3 -21807.5 -7270.94 166.44 0.125 11008 11010 3 -21805.8 -7269.32 166.55 0.125 11009 11011 3 -21805.6 -7268.77 166.63 0.125 11010 11012 3 -21804.6 -7268.33 166.6 0.125 11011 11013 3 -21804.6 -7268.05 166.65 0.125 11012 11014 3 -21801.9 -7266.91 162.72 0.125 11013 11015 3 -21888 -7334.36 210.76 0.62 10918 11016 3 -21885.5 -7333.78 210.63 0.62 11015 11017 3 -21883.9 -7333.02 210.6 0.495 11016 11018 3 -21882.8 -7332.64 210.56 0.495 11017 11019 3 -21881.1 -7332.44 210.89 0.495 11018 11020 3 -21880.7 -7331.88 210.94 0.495 11019 11021 3 -21880.4 -7331.84 210.92 0.495 11020 11022 3 -21878.9 -7330.43 213.49 0.495 11021 11023 3 -21877.8 -7330.54 213.37 0.495 11022 11024 3 -21877.5 -7330.52 213.35 0.495 11023 11025 3 -21875.6 -7329.98 213.26 0.495 11024 11026 3 -21874.9 -7329.96 215.1 0.495 11025 11027 3 -21871.8 -7329.19 211.7 0.495 11026 11028 3 -21869.3 -7329.12 211.47 0.37 11027 11029 3 -21867.6 -7328.88 211.36 0.37 11028 11030 3 -21866 -7329.27 210.04 0.37 11029 11031 3 -21864.5 -7329.85 209.81 0.37 11030 11032 3 -21861.9 -7329.81 209.57 0.37 11031 11033 3 -21861.7 -7329.74 209.56 0.37 11032 11034 3 -21860 -7329.28 209.48 0.37 11033 11035 3 -21859.2 -7328.88 209.47 0.37 11034 11036 3 -21857.6 -7328.11 209.45 0.37 11035 11037 3 -21857.3 -7328.06 209.43 0.37 11036 11038 3 -21855.2 -7327.5 209.32 0.37 11037 11039 3 -21854.9 -7327.45 209.3 0.37 11038 11040 3 -21852 -7326.59 206.41 0.37 11039 11041 3 -21852.1 -7326.37 206.45 0.37 11040 11042 3 -21850.1 -7326.04 206.33 0.37 11041 11043 3 -21847.3 -7325.67 206.12 0.37 11042 11044 3 -21845 -7324.71 203.93 0.37 11043 11045 3 -21842.5 -7324.41 203.74 0.37 11044 11046 3 -21842.3 -7324.36 203.73 0.37 11045 11047 3 -21840.3 -7324.06 203.59 0.37 11046 11048 3 -21840.1 -7323.98 203.58 0.37 11047 11049 3 -21838.7 -7323.3 203.57 0.37 11048 11050 3 -21838.2 -7322.98 203.57 0.37 11049 11051 3 -21838 -7322.71 203.6 0.37 11050 11052 3 -21836.9 -7322.02 203.61 0.37 11051 11053 3 -21836.7 -7321.76 203.64 0.37 11052 11054 3 -21834.7 -7320.48 202.52 0.37 11053 11055 3 -21832.3 -7318.76 202.3 0.37 11054 11056 3 -21830.6 -7317.68 200.57 0.25 11055 11057 3 -21828.9 -7317.41 200.47 0.37 11056 11058 3 -21828.1 -7317.08 200.44 0.37 11057 11059 3 -21826.6 -7316.5 198.2 0.25 11058 11060 3 -21825.5 -7316.86 198.03 0.25 11059 11061 3 -21824.3 -7317.23 194.92 0.37 11060 11062 3 -21824.5 -7317.25 194.73 0.37 11061 11063 3 -21823.8 -7317.42 193.23 0.37 11062 11064 3 -21823.3 -7317.02 192.88 0.37 11063 11065 3 -21822.5 -7316.27 192.44 0.37 11064 11066 3 -21822.6 -7316.04 192.48 0.37 11065 11067 3 -21821.8 -7315.44 192.49 0.37 11066 11068 3 -21820.8 
-7315.23 190.81 0.37 11067 11069 3 -21821.2 -7314.49 190.93 0.37 11068 11070 3 -21822.6 -7312.13 184.26 0.37 11069 11071 3 -21823.5 -7311.73 184.41 0.37 11070 11072 3 -21821.9 -7310.46 184.47 0.37 11071 11073 3 -21821.9 -7310.16 184.53 0.37 11072 11074 3 -21821.8 -7309.92 186.15 0.37 11073 11075 3 -21816 -7310.21 185.56 0.37 11074 11076 3 -21814.3 -7310.1 181.98 0.37 11075 11077 3 -21813 -7310.9 181.63 0.37 11076 11078 3 -21813 -7310.87 181.46 0.37 11077 11079 3 -21811.2 -7310.19 179.24 0.37 11078 11080 3 -21808.5 -7307.99 175.4 0.37 11079 11081 3 -21808.5 -7307.71 175.45 0.37 11080 11082 3 -21806.3 -7308.24 172.93 0.25 11081 11083 3 -21806.4 -7307.96 172.96 0.25 11082 11084 3 -21802.8 -7310.58 172.51 0.25 11083 11085 3 -21802.9 -7310.53 172.24 0.25 11084 11086 3 -21874.3 -7328.55 213.85 0.37 11026 11087 3 -21874.3 -7328.28 213.9 0.37 11086 11088 3 -21873.4 -7327.4 213.95 0.37 11087 11089 3 -21872.3 -7326.44 214.02 0.37 11088 11090 3 -21871.2 -7325.08 216.4 0.37 11089 11091 3 -21870.8 -7324.25 216.5 0.37 11090 11092 3 -21870.8 -7323.97 216.54 0.37 11091 11093 3 -21870.3 -7323.37 216.74 0.37 11092 11094 3 -21869.4 -7321.64 216.94 0.37 11093 11095 3 -21868.8 -7319.48 217.25 0.37 11094 11096 3 -21868.2 -7317.8 217.46 0.37 11095 11097 3 -21867.5 -7315.86 217.73 0.37 11096 11098 3 -21866.6 -7314.39 217.88 0.37 11097 11099 3 -21865 -7313.38 222.28 0.37 11098 11100 3 -21863.8 -7312.14 222.36 0.37 11099 11101 3 -21862.4 -7311.68 222.31 0.37 11100 11102 3 -21862.2 -7311.12 222.39 0.37 11101 11103 3 -21861.7 -7310.77 222.39 0.37 11102 11104 3 -21859.8 -7310.3 225.46 0.37 11103 11105 3 -21859.4 -7309.12 225.62 0.37 11104 11106 3 -21859.4 -7308.89 225.66 0.37 11105 11107 3 -21857.8 -7308.4 225.6 0.37 11106 11108 3 -21857.6 -7308.09 225.65 0.37 11107 11109 3 -21855.7 -7307.33 225.6 0.37 11108 11110 3 -21855.7 -7307.06 225.65 0.37 11109 11111 3 -21854.2 -7306.46 227.81 0.25 11110 11112 3 -21853.6 -7304.51 228.08 0.25 11111 11113 3 -21853.1 -7303.65 228.17 0.25 11112 11114 3 -21853.1 -7303.34 228.23 0.25 11113 11115 3 -21851.6 -7303.12 230.86 0.25 11114 11116 3 -21851.1 -7301.98 231.01 0.25 11115 11117 3 -21850.6 -7301.62 231.02 0.37 11116 11118 3 -21849.8 -7301.22 231.01 0.37 11117 11119 3 -21849.1 -7300.39 231.08 0.25 11118 11120 3 -21849.1 -7300.12 231.13 0.25 11119 11121 3 -21848.4 -7299.72 231.12 0.37 11120 11122 3 -21848.1 -7299.16 231.2 0.37 11121 11123 3 -21846.5 -7298.72 231.11 0.25 11122 11124 3 -21845.9 -7298.34 231.12 0.25 11123 11125 3 -21845.4 -7298 231.13 0.495 11124 11126 3 -21844.6 -7297.58 231.13 0.495 11125 11127 3 -21843.7 -7296.42 231.23 0.25 11126 11128 3 -21842.3 -7295.68 231.23 0.25 11127 11129 3 -21840.5 -7295.6 233.33 0.37 11128 11130 3 -21839.5 -7294.67 233.39 0.25 11129 11131 3 -21837.8 -7294.15 233.32 0.37 11130 11132 3 -21835.8 -7294.04 233.8 0.125 11131 11133 3 -21836.1 -7294.05 233.81 0.125 11132 11134 3 -21834.5 -7293.54 233.74 0.125 11133 11135 3 -21833.4 -7293.13 233.92 0.25 11134 11136 3 -21832.6 -7292.5 233.96 0.25 11135 11137 3 -21830.5 -7291.56 237.56 0.25 11136 11138 3 -21829.8 -7290.38 237.69 0.25 11137 11139 3 -21828.1 -7289.29 238.8 0.37 11138 11140 3 -21826.2 -7288.34 243.94 0.37 11139 11141 3 -21825.8 -7287.21 244.08 0.37 11140 11142 3 -21825.3 -7285.14 246.39 0.25 11141 11143 3 -21825.4 -7281.96 247.01 0.25 11142 11144 3 -21825.5 -7281.71 247.06 0.25 11143 11145 3 -21825.6 -7280.53 251.15 0.37 11144 11146 3 -21825.5 -7279.69 251.27 0.37 11145 11147 3 -21824.7 -7278.25 257.46 0.37 11146 11148 3 -21824.3 -7276.11 257.79 0.37 11147 11149 3 -21823.8 -7271.88 
257.63 0.37 11148 11150 3 -21826.8 -7262.93 263.23 0.37 11149 11151 3 -21826.8 -7262.69 263.28 0.37 11150 11152 3 -21825.8 -7261.29 265.82 0.37 11151 11153 3 -21825.5 -7259.66 266.06 0.37 11152 11154 3 -21825.5 -7259.4 266.11 0.37 11153 11155 3 -21823.8 -7259.18 271.2 0.37 11154 11156 3 -21822.7 -7257.91 271.3 0.25 11155 11157 3 -21822.1 -7257.76 271.28 0.495 11156 11158 3 -21820.7 -7256.2 271.4 0.37 11157 11159 3 -21818.4 -7255.35 273.77 0.25 11158 11160 3 -21818.2 -7254.79 273.84 0.25 11159 11161 3 -21815.6 -7253.49 273.81 0.37 11160 11162 3 -21813.8 -7251.92 276.05 0.495 11161 11163 3 -21811.7 -7250.45 276.1 0.37 11162 11164 3 -21810.1 -7249.99 276.84 0.25 11163 11165 3 -21810.1 -7249.75 276.88 0.25 11164 11166 3 -21808.5 -7249.68 276.74 0.25 11165 11167 3 -21806.2 -7248.96 277.97 0.25 11166 11168 3 -21804.7 -7247.86 278.01 0.495 11167 11169 3 -21898.5 -7337.61 213.85 0.62 10912 11170 3 -21898.5 -7336.84 213.99 0.62 11169 11171 3 -21898.5 -7335.22 214.24 0.745 11170 11172 3 -21898.4 -7335.53 214.19 0.745 11171 11173 3 -21897.5 -7333.79 214.39 0.62 11172 11174 3 -21897.8 -7333.81 214.42 0.62 11173 11175 3 -21896.9 -7331.8 214.66 0.495 11174 11176 3 -21896.8 -7331.04 214.78 0.495 11175 11177 3 -21896.6 -7329.54 215.88 0.495 11176 11178 3 -21896.1 -7328.71 215.97 0.495 11177 11179 3 -21896.7 -7325.08 210.4 0.495 11178 11180 3 -21896.9 -7323.77 210.63 0.495 11179 11181 3 -21896.9 -7323.52 210.68 0.495 11180 11182 3 -21896.3 -7321.57 210.94 0.495 11181 11183 3 -21896.3 -7321.33 210.98 0.495 11182 11184 3 -21895.3 -7320.68 211 0.495 11183 11185 3 -21895.2 -7319.02 211.26 0.495 11184 11186 3 -21895.8 -7316.68 211.58 0.37 11185 11187 3 -21895.7 -7315.39 211.79 0.37 11186 11188 3 -21895.8 -7314.31 211.98 0.37 11187 11189 3 -21895.9 -7314.05 212.02 0.37 11188 11190 3 -21896.3 -7313.07 212.23 0.37 11189 11191 3 -21896.4 -7312.03 212.41 0.37 11190 11192 3 -21896.4 -7311.74 212.46 0.37 11191 11193 3 -21895.9 -7311.67 212.42 0.37 11192 11194 3 -21895.9 -7311.41 212.47 0.37 11193 11195 3 -21895.3 -7308.87 209.18 0.37 11194 11196 3 -21894.9 -7308.01 209.28 0.37 11195 11197 3 -21894.9 -7305.56 206.13 0.37 11196 11198 3 -21893.8 -7305.11 206.1 0.37 11197 11199 3 -21893.3 -7304.54 206.15 0.37 11198 11200 3 -21893.5 -7303.26 206.38 0.37 11199 11201 3 -21893.9 -7302.51 206.54 0.37 11200 11202 3 -21892.6 -7301.76 206.54 0.37 11201 11203 3 -21892.6 -7301.55 206.58 0.37 11202 11204 3 -21892.1 -7300.91 206.63 0.37 11203 11205 3 -21892.5 -7299.05 204.98 0.37 11204 11206 3 -21892.6 -7298.27 205.12 0.37 11205 11207 3 -21892 -7298.18 205.08 0.37 11206 11208 3 -21891.5 -7297.32 205.18 0.37 11207 11209 3 -21890.8 -7296.96 205.17 0.37 11208 11210 3 -21890.8 -7296.7 205.21 0.37 11209 11211 3 -21891.1 -7296.19 205.32 0.37 11210 11212 3 -21891.4 -7295.97 205.39 0.37 11211 11213 3 -21891.5 -7295.48 205.48 0.37 11212 11214 3 -21891.3 -7295.2 205.5 0.37 11213 11215 3 -21891 -7295.15 205.48 0.37 11214 11216 3 -21890.7 -7294.81 205.52 0.37 11215 11217 3 -21890.7 -7293.07 201.82 0.37 11216 11218 3 -21890.7 -7292.56 201.91 0.37 11217 11219 3 -21890.5 -7292.27 201.93 0.37 11218 11220 3 -21890.3 -7291.73 202 0.37 11219 11221 3 -21890 -7291.4 202.04 0.37 11220 11222 3 -21890.1 -7290.88 202.13 0.37 11221 11223 3 -21889.9 -7290.35 202.2 0.37 11222 11224 3 -21889.4 -7289.72 202.25 0.37 11223 11225 3 -21888.5 -7288.14 198.64 0.37 11224 11226 3 -21888.3 -7287.31 198.75 0.37 11225 11227 3 -21888.3 -7287.06 198.8 0.37 11226 11228 3 -21887.9 -7285.41 199 0.37 11227 11229 3 -21887.1 -7284.26 196.33 0.37 11228 11230 3 -21886.6 -7284.18 
196.29 0.37 11229 11231 3 -21886.3 -7283.94 196.31 0.37 11230 11232 3 -21886.1 -7283.93 196.28 0.37 11231 11233 3 -21885.6 -7283.05 196.39 0.37 11232 11234 3 -21884.6 -7282.01 195.86 0.37 11233 11235 3 -21884.1 -7281.41 195.92 0.37 11234 11236 3 -21884.2 -7281.18 195.96 0.37 11235 11237 3 -21884.3 -7280.33 196.11 0.37 11236 11238 3 -21884.3 -7280.1 196.15 0.37 11237 11239 3 -21884.1 -7279.55 196.22 0.37 11238 11240 3 -21883.9 -7278.97 196.3 0.37 11239 11241 3 -21883.4 -7278.39 196.35 0.37 11240 11242 3 -21883.4 -7278.17 196.39 0.37 11241 11243 3 -21881.5 -7276.53 194.71 0.37 11242 11244 3 -21881.8 -7276.58 194.73 0.37 11243 11245 3 -21874.8 -7277.19 193.13 0.495 11244 11246 3 -21873.5 -7274.88 193.39 0.37 11245 11247 3 -21872.8 -7273.68 193.52 0.37 11246 11248 3 -21871.8 -7271.66 193.76 0.37 11247 11249 3 -21871 -7269.11 190.96 0.37 11248 11250 3 -21877.5 -7268.95 195.44 0.125 11249 11251 3 -21877.6 -7268.72 195.48 0.37 11250 11252 3 -21877.5 -7267.32 195.7 0.37 11251 11253 3 -21876.9 -7264.91 196.05 0.37 11252 11254 3 -21875.5 -7263.51 198.5 0.37 11253 11255 3 -21874.8 -7262.38 198.62 0.37 11254 11256 3 -21874.6 -7259.67 199.05 0.37 11255 11257 3 -21873.9 -7255.93 199.88 0.25 11256 11258 3 -21873.5 -7254.81 200.02 0.25 11257 11259 3 -21871.2 -7252.9 200.12 0.25 11258 11260 3 -21869.2 -7252.65 199.98 0.37 11259 11261 3 -21868.1 -7253 199.82 0.25 11260 11262 3 -21866.2 -7253.81 199.51 0.37 11261 11263 3 -21862.5 -7253.42 199 0.495 11262 11264 3 -21861.4 -7250.51 197.26 0.37 11263 11265 3 -21860 -7248.65 197.44 0.37 11264 11266 3 -21858.9 -7245.99 195.69 0.37 11265 11267 3 -21858.1 -7243.97 195.95 0.25 11266 11268 3 -21857.5 -7243.1 196.03 0.37 11267 11269 3 -21855.8 -7240.66 193.34 0.37 11268 11270 3 -21855.7 -7240.77 193.99 0.37 11269 11271 3 -21855.6 -7239.69 194.16 0.37 11270 11272 3 -21855.5 -7238.38 194.37 0.37 11271 11273 3 -21854.2 -7236.98 192.3 0.37 11272 11274 3 -21852.4 -7234.68 191.6 0.37 11273 11275 3 -21852.4 -7233.05 191.87 0.37 11274 11276 3 -21851.8 -7231.3 188.76 0.25 11275 11277 3 -21851.3 -7230.7 188.82 0.25 11276 11278 3 -21851.3 -7230.46 188.86 0.25 11277 11279 3 -21850 -7230.24 188.77 0.25 11278 11280 3 -21847.9 -7229.02 188.29 0.25 11279 11281 3 -21846.3 -7228.42 188.24 0.25 11280 11282 3 -21844.7 -7228.11 188.14 0.25 11281 11283 3 -21842.2 -7226.11 188.24 0.495 11282 11284 3 -21841.7 -7224.45 188.46 0.37 11283 11285 3 -21895.9 -7325.17 217.65 0.62 11178 11286 3 -21895 -7323.45 217.85 0.62 11285 11287 3 -21893.9 -7321.17 218.12 0.495 11286 11288 3 -21893 -7319.44 221.17 0.495 11287 11289 3 -21892.7 -7317.8 221.41 0.495 11288 11290 3 -21892.9 -7317.85 221.43 0.495 11289 11291 3 -21892.8 -7317.28 224.5 0.495 11290 11292 3 -21892.9 -7317 224.55 0.495 11291 11293 3 -21893.5 -7313.86 223.17 0.37 11292 11294 3 -21893.6 -7312.03 223.49 0.37 11293 11295 3 -21893.9 -7312.08 223.51 0.37 11294 11296 3 -21895 -7310.34 223.9 0.37 11295 11297 3 -21895.8 -7309.03 226.67 0.37 11296 11298 3 -21896.5 -7308.06 226.9 0.37 11297 11299 3 -21896.7 -7306.24 227.22 0.37 11298 11300 3 -21897.9 -7303.99 230.54 0.37 11299 11301 3 -21898.4 -7302.77 230.79 0.37 11300 11302 3 -21899.4 -7301.32 231.12 0.37 11301 11303 3 -21899.7 -7300.03 234.48 0.37 11302 11304 3 -21900.7 -7298.81 234.77 0.37 11303 11305 3 -21901.3 -7298.36 234.91 0.37 11304 11306 3 -21901.2 -7296.53 235.2 0.37 11305 11307 3 -21901.1 -7294.94 235.46 0.37 11306 11308 3 -21902.1 -7292.88 238.71 0.37 11307 11309 3 -21902.8 -7289.83 239.28 0.37 11308 11310 3 -21903.2 -7287.84 241.7 0.37 11309 11311 3 -21903.1 -7285.96 242 0.37 11310 
11312 3 -21903.5 -7285 242.2 0.37 11311 11313 3 -21903.6 -7283.05 243.4 0.37 11312 11314 3 -21905.1 -7280.84 243.9 0.37 11313 11315 3 -21906.2 -7279.14 244.28 0.37 11314 11316 3 -21907.8 -7277.98 247.25 0.37 11315 11317 3 -21909 -7277.59 247.48 0.37 11316 11318 3 -21910.8 -7276.58 248.04 0.37 11317 11319 3 -21912.4 -7275.46 248.45 0.37 11318 11320 3 -21912.3 -7275.21 248.49 0.37 11319 11321 3 -21913.6 -7274.61 248.8 0.37 11320 11322 3 -21913.8 -7272.53 249.17 0.37 11321 11323 3 -21914.8 -7271.39 249.57 0.37 11322 11324 3 -21915.8 -7270.47 249.81 0.37 11323 11325 3 -21915.8 -7270.21 249.86 0.37 11324 11326 3 -21917.2 -7268.27 250.31 0.37 11325 11327 3 -21918 -7266.81 254 0.25 11326 11328 3 -21918.2 -7265.03 254.31 0.25 11327 11329 3 -21918.4 -7263.96 254.5 0.37 11328 11330 3 -21919.1 -7263.03 254.72 0.37 11329 11331 3 -21919.7 -7261.51 256.75 0.25 11330 11332 3 -21920.1 -7260.28 256.99 0.25 11331 11333 3 -21920.2 -7260.06 257.03 0.37 11332 11334 3 -21920.2 -7259.73 257.09 0.37 11333 11335 3 -21921.2 -7258.58 257.38 0.37 11334 11336 3 -21921.8 -7256.69 256.98 0.25 11335 11337 3 -21921.8 -7256.41 257.04 0.25 11336 11338 3 -21921.8 -7255.89 257.12 0.25 11337 11339 3 -21922.5 -7255.18 257.31 0.37 11338 11340 3 -21923.7 -7254.39 258.14 0.37 11339 11341 3 -21923.5 -7253.29 258.31 0.37 11340 11342 3 -21922.6 -7252.13 258.41 0.25 11341 11343 3 -21922.6 -7251.87 258.45 0.25 11342 11344 3 -21922.7 -7251.62 258.5 0.37 11343 11345 3 -21923.1 -7250.32 258.76 0.37 11344 11346 3 -21923.2 -7250.05 258.8 0.37 11345 11347 3 -21924.1 -7248.92 259.08 0.37 11346 11348 3 -21925 -7246.31 256.27 0.125 11347 11349 3 -21925 -7246.05 256.31 0.125 11348 11350 3 -21925.5 -7244.55 256.61 0.125 11349 11351 3 -21925.5 -7244.28 256.66 0.125 11350 11352 3 -21925.6 -7243.51 256.79 0.125 11351 11353 3 -21925.6 -7243.26 256.84 0.125 11352 11354 3 -21928.1 -7238.29 256.4 0.125 11353 11355 3 -21891 -7317.24 222.46 0.495 11292 11356 3 -21889.9 -7316.51 222.48 0.495 11355 11357 3 -21889.1 -7314.6 222.72 0.495 11356 11358 3 -21888.2 -7312.82 222.92 0.37 11357 11359 3 -21888.2 -7312.57 222.97 0.37 11358 11360 3 -21887.4 -7310.93 224.93 0.37 11359 11361 3 -21887.4 -7308.94 224.45 0.37 11360 11362 3 -21886.9 -7308.33 224.51 0.37 11361 11363 3 -21887.3 -7307.89 224.61 0.37 11362 11364 3 -21888.3 -7306.67 224.91 0.37 11363 11365 3 -21888.3 -7306.43 224.95 0.37 11364 11366 3 -21887.8 -7305.08 226.98 0.37 11365 11367 3 -21888.5 -7303.6 227.3 0.37 11366 11368 3 -21888.4 -7302.3 227.5 0.37 11367 11369 3 -21888.5 -7302 227.56 0.37 11368 11370 3 -21888.2 -7301.45 227.63 0.37 11369 11371 3 -21888.9 -7298.83 232.37 0.37 11370 11372 3 -21889.1 -7296.74 232.74 0.37 11371 11373 3 -21890.3 -7294.73 231.18 0.37 11372 11374 3 -21890.7 -7294.24 231.29 0.37 11373 11375 3 -21890.7 -7293.97 231.34 0.37 11374 11376 3 -21890.6 -7292.88 231.51 0.37 11375 11377 3 -21891.8 -7292.27 231.72 0.37 11376 11378 3 -21891.8 -7291.97 231.77 0.37 11377 11379 3 -21892 -7291.27 231.9 0.37 11378 11380 3 -21892 -7290.71 232 0.37 11379 11381 3 -21891.7 -7289.84 233.53 0.37 11380 11382 3 -21891.8 -7289.58 233.59 0.37 11381 11383 3 -21892.4 -7289.16 233.71 0.37 11382 11384 3 -21892.4 -7288.87 233.76 0.37 11383 11385 3 -21892.3 -7288.09 233.88 0.37 11384 11386 3 -21892 -7288.01 233.86 0.37 11385 11387 3 -21891.8 -7286.88 233.64 0.37 11386 11388 3 -21891.9 -7286.62 233.69 0.37 11387 11389 3 -21891.7 -7285.56 233.85 0.37 11388 11390 3 -21891.9 -7284.49 234.04 0.37 11389 11391 3 -21892.3 -7283.23 234.29 0.37 11390 11392 3 -21892.3 -7283.01 234.33 0.37 11391 11393 3 -21892 -7281.7 
236.57 0.37 11392 11394 3 -21891.9 -7280.88 236.69 0.37 11393 11395 3 -21891.9 -7280.61 236.74 0.37 11394 11396 3 -21892.1 -7279.57 236.92 0.37 11395 11397 3 -21892.1 -7279.32 236.97 0.37 11396 11398 3 -21891.5 -7277.4 237.22 0.37 11397 11399 3 -21891.3 -7277.08 237.26 0.37 11398 11400 3 -21890.7 -7276.5 237.3 0.37 11399 11401 3 -21890.8 -7276.2 237.36 0.37 11400 11402 3 -21891 -7274.48 238.12 0.37 11401 11403 3 -21890.8 -7273.43 238.28 0.37 11402 11404 3 -21890.9 -7273.16 238.33 0.37 11403 11405 3 -21891.2 -7272.19 238.52 0.37 11404 11406 3 -21891.9 -7271.74 238.66 0.37 11405 11407 3 -21892.2 -7271.23 238.77 0.37 11406 11408 3 -21892.3 -7270.47 238.91 0.37 11407 11409 3 -21892.1 -7269.65 239.03 0.37 11408 11410 3 -21891.9 -7269.36 239.06 0.37 11409 11411 3 -21891.7 -7268.5 239.18 0.37 11410 11412 3 -21891.5 -7268.24 239.2 0.37 11411 11413 3 -21891.6 -7267.42 239.34 0.37 11412 11414 3 -21891.7 -7266.93 239.44 0.37 11413 11415 3 -21891.5 -7266.37 239.51 0.37 11414 11416 3 -21891.2 -7266.06 239.53 0.37 11415 11417 3 -21891.1 -7264.96 239.71 0.37 11416 11418 3 -21891 -7262.62 240.3 0.37 11417 11419 3 -21891 -7260.78 240.6 0.37 11418 11420 3 -21891 -7260.47 240.65 0.37 11419 11421 3 -21891.1 -7259.95 240.75 0.37 11420 11422 3 -21890.7 -7258.36 240.97 0.37 11421 11423 3 -21890.5 -7257.24 242.81 0.25 11422 11424 3 -21890.5 -7257.22 242.71 0.25 11423 11425 3 -21890.4 -7255.44 242.98 0.25 11424 11426 3 -21890.6 -7254.37 243.17 0.25 11425 11427 3 -21890.4 -7253.81 243.25 0.25 11426 11428 3 -21890.4 -7253.52 243.3 0.25 11427 11429 3 -21890.5 -7252.5 243.48 0.25 11428 11430 3 -21890.6 -7251.95 243.57 0.25 11429 11431 3 -21890.7 -7251.19 243.71 0.25 11430 11432 3 -21890.5 -7250.1 243.87 0.25 11431 11433 3 -21890 -7249.82 243.88 0.25 11432 11434 3 -21889.8 -7249.51 243.9 0.25 11433 11435 3 -21889.5 -7249.19 243.93 0.25 11434 11436 3 -21889.3 -7248.64 244 0.25 11435 11437 3 -21889.5 -7247.33 244.23 0.25 11436 11438 3 -21889.5 -7247.05 244.28 0.25 11437 11439 3 -21889.6 -7246.79 244.33 0.25 11438 11440 3 -21889.6 -7246.27 244.42 0.25 11439 11441 3 -21889.1 -7245.92 244.43 0.25 11440 11442 3 -21888.1 -7244.51 244.57 0.25 11441 11443 3 -21887.9 -7244.21 244.6 0.25 11442 11444 3 -21887.7 -7243.37 244.72 0.25 11443 11445 3 -21887.5 -7242.78 244.8 0.25 11444 11446 3 -21887.5 -7242.27 244.89 0.25 11445 11447 3 -21887.6 -7242.05 244.93 0.25 11446 11448 3 -21887.2 -7240.22 241.28 0.25 11447 11449 3 -21887.3 -7239.98 241.32 0.25 11448 11450 3 -21887.1 -7239.42 241.39 0.25 11449 11451 3 -21887 -7237.82 241.65 0.25 11450 11452 3 -21886.8 -7237.53 241.67 0.25 11451 11453 3 -21886.8 -7237.3 241.72 0.25 11452 11454 3 -21885.8 -7235.18 239.93 0.25 11453 11455 3 -21885.5 -7234.38 240.04 0.25 11454 11456 3 -21885.6 -7234.15 240.08 0.25 11455 11457 3 -21884.8 -7233.48 240.12 0.25 11456 11458 3 -21884.9 -7233.22 240.17 0.25 11457 11459 3 -21884.4 -7232.34 240.27 0.37 11458 11460 3 -21884 -7230.99 240.46 0.25 11459 11461 3 -21884.1 -7230.18 240.6 0.25 11460 11462 3 -21884 -7229.11 240.77 0.37 11461 11463 3 -21884.8 -7227.34 241.02 0.25 11462 11464 3 -21885.9 -7225.65 241.4 0.25 11463 11465 3 -21886.2 -7225.4 241.47 0.25 11464 11466 3 -21885.9 -7224.88 241.54 0.25 11465 11467 3 -21885.7 -7224.6 241.56 0.25 11466 11468 3 -21885 -7223.94 241.6 0.25 11467 11469 3 -21884.9 -7223.67 241.65 0.25 11468 11470 3 -21884.9 -7222.28 241.51 0.25 11469 11471 3 -21884.4 -7221.41 241.61 0.37 11470 11472 3 -21882.6 -7219.18 243.97 0.125 11471 11473 3 -21886.4 -7309.52 222.36 0.37 11360 11474 3 -21885.3 -7308.6 222.42 0.37 11473 11475 3 
-21884.9 -7307.49 222.57 0.37 11474 11476 3 -21885 -7307.23 222.61 0.37 11475 11477 3 -21884.2 -7306.31 222.69 0.37 11476 11478 3 -21883 -7305.1 225.71 0.37 11477 11479 3 -21882.3 -7303.92 225.84 0.37 11478 11480 3 -21882.4 -7303.68 225.89 0.37 11479 11481 3 -21881.1 -7302.67 225.93 0.37 11480 11482 3 -21880.3 -7302.04 225.96 0.37 11481 11483 3 -21880.1 -7301.76 225.99 0.37 11482 11484 3 -21879.8 -7301.43 226.02 0.37 11483 11485 3 -21879.9 -7300.92 226.11 0.37 11484 11486 3 -21879.6 -7300.64 226.13 0.37 11485 11487 3 -21878.8 -7300.09 226.75 0.37 11486 11488 3 -21878.3 -7299.47 226.8 0.37 11487 11489 3 -21877.8 -7299.19 226.8 0.37 11488 11490 3 -21877.3 -7298.79 226.82 0.37 11489 11491 3 -21877.1 -7298.04 226.93 0.37 11490 11492 3 -21876.9 -7297.17 227.05 0.37 11491 11493 3 -21876.6 -7297.15 227.03 0.37 11492 11494 3 -21876.1 -7296.83 227.03 0.37 11493 11495 3 -21875.3 -7296.69 226.98 0.37 11494 11496 3 -21874.7 -7296.35 226.99 0.37 11495 11497 3 -21874.5 -7296.34 226.96 0.37 11496 11498 3 -21873.4 -7295.6 226.99 0.37 11497 11499 3 -21872.9 -7295.28 226.99 0.37 11498 11500 3 -21872.4 -7294.98 226.99 0.37 11499 11501 3 -21871.8 -7294.87 226.96 0.37 11500 11502 3 -21871 -7294.24 226.99 0.37 11501 11503 3 -21870.5 -7293.87 227 0.37 11502 11504 3 -21870 -7293.56 227.01 0.37 11503 11505 3 -21869.7 -7293.24 227.03 0.37 11504 11506 3 -21869.2 -7292.9 227.04 0.37 11505 11507 3 -21868.4 -7292.62 227.01 0.37 11506 11508 3 -21866.7 -7291.14 229.67 0.37 11507 11509 3 -21866.8 -7291.46 229.62 0.37 11508 11510 3 -21865.4 -7290.47 229.67 0.37 11509 11511 3 -21865.1 -7290.44 229.64 0.37 11510 11512 3 -21864.7 -7289.3 229.86 0.37 11511 11513 3 -21863.5 -7287.78 232.88 0.37 11512 11514 3 -21863 -7287.16 232.93 0.37 11513 11515 3 -21863.1 -7286.94 232.98 0.37 11514 11516 3 -21862.1 -7285.98 233.04 0.37 11515 11517 3 -21862.1 -7285.74 233.09 0.37 11516 11518 3 -21860.4 -7283.67 233.39 0.37 11517 11519 3 -21859.7 -7282.5 233.51 0.37 11518 11520 3 -21859.7 -7282.25 233.56 0.37 11519 11521 3 -21857.3 -7279.5 236.59 0.37 11520 11522 3 -21856.4 -7277.8 236.78 0.37 11521 11523 3 -21856.5 -7276.73 236.97 0.37 11522 11524 3 -21855.8 -7275.59 237.1 0.37 11523 11525 3 -21855.6 -7274.77 237.21 0.37 11524 11526 3 -21855.1 -7273.5 239.65 0.37 11525 11527 3 -21855.6 -7271.94 239.95 0.25 11526 11528 3 -21855.7 -7270.71 240.17 0.25 11527 11529 3 -21855.8 -7270.41 240.22 0.25 11528 11530 3 -21854.7 -7270.29 247.96 0.25 11529 11531 3 -21851.7 -7270.84 250.39 0.25 11530 11532 3 -21851.4 -7270.77 250.38 0.25 11531 11533 3 -21849.8 -7270.2 251.3 0.25 11532 11534 3 -21849.8 -7269.9 251.35 0.25 11533 11535 3 -21847 -7269.28 251.18 0.25 11534 11536 3 -21846.8 -7269 251.2 0.25 11535 11537 3 -21846.6 -7268.67 251.23 0.25 11536 11538 3 -21844.6 -7267.16 253.37 0.25 11537 11539 3 -21844.3 -7267.05 254.49 0.25 11538 11540 3 -21843 -7264.93 252.71 0.25 11539 11541 3 -21842.4 -7264.58 252.71 0.25 11540 11542 3 -21842.1 -7262.93 252.96 0.25 11541 11543 3 -21841.6 -7262.64 252.96 0.25 11542 11544 3 -21841.6 -7262.4 253 0.25 11543 11545 3 -21839.5 -7259.58 252.65 0.25 11544 11546 3 -21838.9 -7259.27 252.65 0.25 11545 11547 3 -21838.7 -7258.93 252.69 0.25 11546 11548 3 -21838.1 -7257.26 252.9 0.25 11547 11549 3 -21837.8 -7256.71 252.97 0.25 11548 11550 3 -21837.3 -7256.09 253.03 0.25 11549 11551 3 -21837.1 -7255.57 253.1 0.25 11550 11552 3 -21837 -7255.01 253.17 0.25 11551 11553 3 -21836.3 -7253.29 253.4 0.25 11552 11554 3 -21836 -7253.04 253.41 0.25 11553 11555 3 -21835.8 -7252.67 253.45 0.25 11554 11556 3 -21833.7 -7251.12 258.14 0.25 11555 
11557 3 -21833.5 -7250.9 257.17 0.25 11556 11558 3 -21831.3 -7249.37 259.4 0.25 11557 11559 3 -21831 -7249.09 259.43 0.25 11558 11560 3 -21830.5 -7248.22 261.07 0.25 11559 11561 3 -21830.3 -7247.4 261.63 0.25 11560 11562 3 -21828.1 -7244.91 260.89 0.25 11561 11563 3 -21826.2 -7242.45 260.97 0.25 11562 11564 3 -21826.5 -7242.01 261.07 0.25 11563 11565 3 -21826.5 -7241.7 261.12 0.25 11564 11566 3 -21826.5 -7239.59 261.47 0.25 11565 11567 3 -21825.9 -7237.41 261.78 0.25 11566 11568 3 -21825.5 -7236.52 261.88 0.25 11567 11569 3 -21825.3 -7235.96 261.96 0.25 11568 11570 3 -21824.5 -7235.89 261.89 0.25 11569 11571 3 -21823.9 -7235.53 261.9 0.25 11570 11572 3 -21823.2 -7234.36 262.03 0.25 11571 11573 3 -21823 -7233.84 262.1 0.25 11572 11574 3 -21822.3 -7232.92 262.18 0.25 11573 11575 3 -21821.2 -7232.5 262.15 0.25 11574 11576 3 -21820.9 -7232.16 262.18 0.25 11575 11577 3 -21819.9 -7231.29 262.23 0.25 11576 11578 3 -21819.7 -7230.97 262.26 0.25 11577 11579 3 -21818.6 -7229.06 264.7 0.25 11578 11580 3 -21815.2 -7225.48 263.62 0.25 11579 11581 3 -21812.4 -7225.38 263.39 0.25 11580 11582 3 -21811.7 -7223.92 263.57 0.25 11581 11583 3 -21810.1 -7223.46 263.49 0.25 11582 11584 3 -21809.8 -7223.15 263.52 0.25 11583 11585 3 -21806.7 -7221.64 263.47 0.25 11584 11586 3 -21901.8 -7393.09 207.04 1.86 1 11587 3 -21901.5 -7393.56 206.93 1.86 11586 11588 3 -21899.3 -7394.84 206.52 1.86 11587 11589 3 -21897.5 -7395.97 206.15 1.61 11588 11590 3 -21895.8 -7397.28 199.43 1.985 11589 11591 3 -21894.4 -7397.32 199.2 1.985 11590 11592 3 -21894.7 -7397.62 194.88 1.735 11591 11593 3 -21895.1 -7398.91 194.04 1.24 11592 11594 3 -21894.3 -7400.4 193.72 0.99 11593 11595 3 -21894.8 -7401.01 190.73 0.865 11594 11596 3 -21894.2 -7402.31 192.34 0.62 11595 11597 3 -21893.7 -7403.13 189.44 0.495 11596 11598 3 -21893.9 -7404.29 188.23 0.495 11597 11599 3 -21893.6 -7405.13 187.12 0.495 11598 11600 3 -21893.5 -7405.97 185.71 0.62 11599 11601 3 -21893.8 -7405.98 185.74 0.62 11600 11602 3 -21894 -7406.96 181.74 0.495 11601 11603 3 -21894.2 -7407.24 181.72 0.495 11602 11604 3 -21894.2 -7408.37 180.54 0.495 11603 11605 3 -21893.4 -7409.89 180.22 0.495 11604 11606 3 -21893.4 -7411.14 178.05 0.495 11605 11607 3 -21893.2 -7412.3 174.08 0.495 11606 11608 3 -21893.1 -7413.37 173.89 0.495 11607 11609 3 -21889.1 -7413.55 175.81 0.495 11608 11610 3 -21889 -7413.89 171.84 0.495 11609 11611 3 -21887.9 -7415.27 171.51 0.495 11610 11612 3 -21887.9 -7416.88 171.25 0.495 11611 11613 3 -21887.6 -7417.26 166.42 0.495 11612 11614 3 -21886.9 -7418.67 165.62 0.495 11613 11615 3 -21887.2 -7418.67 165.65 0.495 11614 11616 3 -21886.6 -7420.14 165.35 0.495 11615 11617 3 -21886 -7421.07 158.94 0.495 11616 11618 3 -21885.4 -7422.18 156.48 0.495 11617 11619 3 -21884 -7423.17 157.29 0.37 11618 11620 3 -21883 -7424.32 157.01 0.37 11619 11621 3 -21882.2 -7425.28 156.78 0.37 11620 11622 3 -21881.1 -7426.88 156.41 0.37 11621 11623 3 -21880.1 -7427.74 156.17 0.37 11622 11624 3 -21879.9 -7429.02 155.95 0.37 11623 11625 3 -21879.5 -7429.79 155.78 0.37 11624 11626 3 -21878.6 -7430.1 155.64 0.37 11625 11627 3 -21878.1 -7431.57 155.3 0.37 11626 11628 3 -21877.6 -7432.85 155.04 0.37 11627 11629 3 -21876.8 -7434.51 154.69 0.37 11628 11630 3 -21876.3 -7435.7 154.14 0.37 11629 11631 3 -21875.7 -7435.85 154.06 0.37 11630 11632 3 -21875.7 -7436.13 154.01 0.37 11631 11633 3 -21875.3 -7436.59 153.9 0.37 11632 11634 3 -21875 -7436.57 153.88 0.37 11633 11635 3 -21874 -7437.66 153.63 0.37 11634 11636 3 -21872.9 -7439 152.96 0.37 11635 11637 3 -21872.4 -7440.01 149.95 0.37 11636 11638 
3 -21871.6 -7441.42 149.65 0.37 11637 11639 3 -21871.5 -7442.43 149.47 0.37 11638 11640 3 -21870.9 -7443.95 149.16 0.37 11639 11641 3 -21870.3 -7444.12 149.08 0.37 11640 11642 3 -21869.7 -7444.47 148.96 0.37 11641 11643 3 -21869.2 -7445.21 145.68 0.37 11642 11644 3 -21868.7 -7446.4 145.43 0.37 11643 11645 3 -21868.1 -7446.6 145.34 0.37 11644 11646 3 -21867.3 -7447.54 145.12 0.37 11645 11647 3 -21866.5 -7449.21 144.77 0.37 11646 11648 3 -21866.3 -7449.66 142.96 0.37 11647 11649 3 -21865.3 -7450.51 142.72 0.37 11648 11650 3 -21864.2 -7450.6 142.6 0.37 11649 11651 3 -21863.8 -7451.55 142.4 0.37 11650 11652 3 -21862.9 -7452.18 140.66 0.37 11651 11653 3 -21862.4 -7453.14 140.46 0.37 11652 11654 3 -21861.6 -7454.07 137.07 0.37 11653 11655 3 -21861 -7454.52 136.94 0.37 11654 11656 3 -21861 -7454.72 136.9 0.37 11655 11657 3 -21861 -7456.01 136.69 0.37 11656 11658 3 -21859.5 -7456.3 136.3 0.37 11657 11659 3 -21858.7 -7457.41 134.35 0.37 11658 11660 3 -21858.9 -7459.55 134.02 0.37 11659 11661 3 -21858.1 -7460.57 129.91 0.37 11660 11662 3 -21857.2 -7462.79 129.45 0.37 11661 11663 3 -21856 -7464.95 124.42 0.37 11662 11664 3 -21854.9 -7468.11 123.79 0.37 11663 11665 3 -21854 -7469.71 119.5 0.37 11664 11666 3 -21852.6 -7471.04 119.16 0.37 11665 11667 3 -21851.9 -7472.05 114.83 0.37 11666 11668 3 -21850.8 -7473.44 114.5 0.25 11667 11669 3 -21849.9 -7474.03 114.32 0.25 11668 11670 3 -21848.7 -7475.68 113.93 0.37 11669 11671 3 -21848.3 -7476.39 113.78 0.37 11670 11672 3 -21847.6 -7477.85 113.47 0.25 11671 11673 3 -21847.1 -7478.82 113.26 0.25 11672 11674 3 -21846.7 -7479.79 113.04 0.37 11673 11675 3 -21846 -7480.19 112.91 0.37 11674 11676 3 -21844.1 -7481.43 112.53 0.37 11675 11677 3 -21844.4 -7482.67 110.22 0.37 11676 11678 3 -21843.8 -7483.75 106.11 0.37 11677 11679 3 -21843.2 -7484.44 105.94 0.37 11678 11680 3 -21842.9 -7484.56 101.12 0.25 11679 11681 3 -21842.3 -7484.77 101.03 0.25 11680 11682 3 -21841.8 -7485.71 100.83 0.37 11681 11683 3 -21840.9 -7486.34 100.64 0.37 11682 11684 3 -21840 -7486.99 100.45 0.25 11683 11685 3 -21839.4 -7487.43 100.32 0.25 11684 11686 3 -21838.3 -7488.27 100.08 0.37 11685 11687 3 -21837.9 -7488.45 96.8 0.25 11686 11688 3 -21836.8 -7488.71 96.64 0.25 11687 11689 3 -21835.4 -7490.35 96.24 0.37 11688 11690 3 -21834.4 -7491.15 95.51 0.37 11689 11691 3 -21833.4 -7491.26 92.67 0.25 11690 11692 3 -21832.3 -7492.91 92.3 0.25 11691 11693 3 -21829 -7496.95 89.16 0.25 11692 11694 3 -21827.9 -7497.84 88.91 0.25 11693 11695 3 -21884 -7424.46 153.69 0.495 11618 11696 3 -21883.7 -7425.7 150.52 0.495 11695 11697 3 -21883.2 -7427.22 148.93 0.495 11696 11698 3 -21882.7 -7429.25 148.54 0.495 11697 11699 3 -21882.9 -7430.62 145.5 0.495 11698 11700 3 -21883 -7431.88 144.83 0.495 11699 11701 3 -21882.2 -7432.25 141.89 0.495 11700 11702 3 -21882 -7434.12 141.55 0.495 11701 11703 3 -21881.8 -7435.03 136.3 0.495 11702 11704 3 -21881.5 -7437.06 135.83 0.495 11703 11705 3 -21881.6 -7437.79 132.41 0.495 11704 11706 3 -21881.2 -7440.25 131.13 0.495 11705 11707 3 -21881.5 -7441.41 128.21 0.495 11706 11708 3 -21881.2 -7442.9 127.94 0.495 11707 11709 3 -21881 -7444.18 127.7 0.495 11708 11710 3 -21881 -7444.83 120.63 0.495 11709 11711 3 -21880.7 -7446.85 120.19 0.495 11710 11712 3 -21880.8 -7446.83 120.07 0.495 11711 11713 3 -21880.9 -7449.03 115.46 0.495 11712 11714 3 -21880.7 -7450.32 115.23 0.495 11713 11715 3 -21880.4 -7452.53 109.97 0.495 11714 11716 3 -21880.1 -7454.36 109.64 0.495 11715 11717 3 -21880.3 -7455.92 105.97 0.495 11716 11718 3 -21880.3 -7457.14 103.87 0.37 11717 11719 3 -21880.9 
-7457.81 101.15 0.37 11718 11720 3 -21880.9 -7457.98 100.34 0.37 11719 11721 3 -21881 -7459.23 100.14 0.37 11720 11722 3 -21880.8 -7459.84 100.03 0.37 11721 11723 3 -21880.1 -7461.2 99.73 0.37 11722 11724 3 -21879.8 -7463.78 96.17 0.37 11723 11725 3 -21879.8 -7466.32 93.51 0.37 11724 11726 3 -21879.3 -7468.33 93.12 0.37 11725 11727 3 -21879.5 -7468.35 93.14 0.37 11726 11728 3 -21879 -7470.45 90.03 0.37 11727 11729 3 -21879 -7470.66 89.99 0.37 11728 11730 3 -21879.5 -7471.94 90.32 0.25 11729 11731 3 -21880 -7472.54 88.57 0.25 11730 11732 3 -21880.6 -7473.61 86.33 0.25 11731 11733 3 -21881.3 -7474.88 85.28 0.25 11732 11734 3 -21881.4 -7474.77 84.67 0.25 11733 11735 3 -21881.9 -7475.75 82.16 0.25 11734 11736 3 -21882 -7477.25 81.32 0.25 11735 11737 3 -21882.2 -7477.32 81.33 0.25 11736 11738 3 -21882.9 -7477.83 76.01 0.25 11737 11739 3 -21882.6 -7477.75 76 0.25 11738 11740 3 -21882.5 -7477.93 72.51 0.25 11739 11741 3 -21882 -7477.96 67.07 0.25 11740 11742 3 -21880.6 -7480.65 65.01 0.25 11741 11743 3 -21879.7 -7482.08 61.54 0.25 11742 11744 3 -21879.4 -7482.55 61.43 0.25 11743 11745 3 -21879.3 -7482.85 61.37 0.25 11744 11746 3 -21879.5 -7484.48 60.02 0.25 11745 11747 3 -21879.6 -7484.23 58.5 0.25 11746 11748 3 -21880.1 -7485.79 56.33 0.25 11747 11749 3 -21880.2 -7485.59 55.13 0.25 11748 11750 3 -21880.9 -7488.1 49.94 0.25 11749 11751 3 -21881 -7491.27 49.43 0.25 11750 11752 3 -21892.2 -7397.93 196.67 1.61 11592 11753 3 -21890.7 -7398.52 196.44 1.61 11752 11754 3 -21889.6 -7398.59 196.32 1.485 11753 11755 3 -21888.7 -7399.04 196.16 1.24 11754 11756 3 -21887.3 -7399.49 195.11 1.24 11755 11757 3 -21887.3 -7399.24 195.16 1.24 11756 11758 3 -21886.5 -7400.68 194.84 0.62 11757 11759 3 -21885.3 -7401.88 194.53 0.62 11758 11760 3 -21884.7 -7402.7 192.01 0.495 11759 11761 3 -21885.1 -7402.46 192.08 0.495 11760 11762 3 -21884.5 -7403.69 190.02 0.495 11761 11763 3 -21884.5 -7403.39 190.07 0.495 11762 11764 3 -21883.8 -7404.4 189.84 0.495 11763 11765 3 -21883.7 -7405.68 189.61 0.495 11764 11766 3 -21883.6 -7406.08 188.88 0.495 11765 11767 3 -21882.9 -7407.01 188.52 0.495 11766 11768 3 -21882.7 -7408.13 187.19 0.495 11767 11769 3 -21882.7 -7408.03 186.6 0.495 11768 11770 3 -21880.8 -7409.32 185.89 0.495 11769 11771 3 -21880.9 -7409.22 185.32 0.495 11770 11772 3 -21879.8 -7409.42 184.24 0.495 11771 11773 3 -21878.6 -7408.51 183.29 0.495 11772 11774 3 -21877.8 -7408.17 183.27 0.495 11773 11775 3 -21876 -7407.91 183.15 0.495 11774 11776 3 -21875 -7407.47 183.13 0.495 11775 11777 3 -21873.8 -7407.35 183.04 0.495 11776 11778 3 -21873.6 -7407.09 183.06 0.495 11777 11779 3 -21872 -7406.95 182.09 0.495 11778 11780 3 -21870.7 -7407 180.29 0.495 11779 11781 3 -21869.8 -7406.95 177.67 0.495 11780 11782 3 -21869.8 -7406.84 176.96 0.495 11781 11783 3 -21869.1 -7407.26 173.79 0.495 11782 11784 3 -21869.1 -7407.52 173.74 0.495 11783 11785 3 -21867.2 -7409.14 173.29 0.495 11784 11786 3 -21865.7 -7409.04 170.8 0.495 11785 11787 3 -21866 -7409.14 170.82 0.495 11786 11788 3 -21860.7 -7409.17 170.31 0.495 11787 11789 3 -21860 -7409.6 170.18 0.495 11788 11790 3 -21858.8 -7410.24 166.34 0.37 11789 11791 3 -21858.2 -7411.76 166.03 0.37 11790 11792 3 -21857.3 -7412.61 165.39 0.37 11791 11793 3 -21855.9 -7413.44 162.31 0.37 11792 11794 3 -21855.4 -7415.16 161.91 0.37 11793 11795 3 -21854.3 -7415.92 159.45 0.37 11794 11796 3 -21853.6 -7416.78 157.5 0.37 11795 11797 3 -21853.7 -7416.54 156.06 0.37 11796 11798 3 -21853.7 -7417.47 154.15 0.37 11797 11799 3 -21852.8 -7418.01 154.44 0.37 11798 11800 3 -21851.6 -7420.16 156.99 0.25 11799 
11801 3 -21851.3 -7420.38 156.93 0.37 11800 11802 3 -21850.3 -7421.76 156.58 0.37 11801 11803 3 -21849.3 -7422.45 155.39 0.37 11802 11804 3 -21848.6 -7422.93 147.98 0.37 11803 11805 3 -21848.6 -7422.68 148.03 0.37 11804 11806 3 -21847.2 -7424.22 147.49 0.37 11805 11807 3 -21847.4 -7423.84 146.93 0.37 11806 11808 3 -21846.1 -7424.77 145.38 0.37 11807 11809 3 -21843.8 -7426.14 143.36 0.37 11808 11810 3 -21843 -7427.82 142.76 0.37 11809 11811 3 -21842 -7429.32 139.99 0.37 11810 11812 3 -21840.6 -7430.72 136.92 0.37 11811 11813 3 -21839.3 -7431.74 136.54 0.37 11812 11814 3 -21838.2 -7431.54 130.43 0.37 11813 11815 3 -21837 -7432.55 129.5 0.37 11814 11816 3 -21835.6 -7433.37 126.28 0.37 11815 11817 3 -21834.2 -7434.55 123.42 0.37 11816 11818 3 -21832.8 -7435.71 119.12 0.37 11817 11819 3 -21831.9 -7435.85 116.5 0.37 11818 11820 3 -21830.5 -7437.8 114.88 0.37 11819 11821 3 -21829.2 -7439.52 112.19 0.37 11820 11822 3 -21827.8 -7440.81 108.67 0.37 11821 11823 3 -21827.2 -7441.95 104.77 0.37 11822 11824 3 -21825.4 -7443.26 107.83 0.37 11823 11825 3 -21824.8 -7443.71 103.14 0.25 11824 11826 3 -21824.1 -7444.33 102.97 0.25 11825 11827 3 -21823.2 -7444.55 99.01 0.25 11826 11828 3 -21823.2 -7444.49 98.62 0.25 11827 11829 3 -21822.8 -7445.34 94.59 0.25 11828 11830 3 -21820.3 -7446.49 98.82 0.25 11829 11831 3 -21819.6 -7446.99 94.9 0.25 11830 11832 3 -21818.2 -7448.85 94.37 0.25 11831 11833 3 -21817.1 -7449.85 90.36 0.25 11832 11834 3 -21815 -7452.5 88.84 0.25 11833 11835 3 -21814 -7454.27 86.37 0.25 11834 11836 3 -21813.3 -7456.81 85.88 0.25 11835 11837 3 -21812.6 -7457.73 85.66 0.25 11836 11838 3 -21810.1 -7459.04 85.67 0.125 11837 11839 3 -21809.5 -7460.14 83.18 0.125 11838 11840 3 -21808.8 -7461.44 81.96 0.125 11839 11841 3 -21807.9 -7461.98 78.04 0.125 11840 11842 3 -21807.8 -7462.24 78 0.125 11841 11843 3 -21806.7 -7464.33 76.95 0.125 11842 11844 3 -21805.9 -7465.42 74.65 0.125 11843 11845 3 -21804.4 -7466.85 74.76 0.125 11844 11846 3 -21803.3 -7467.03 70.68 0.125 11845 11847 3 -21803 -7467.25 70.62 0.125 11846 11848 3 -21802.7 -7467.21 70.59 0.125 11847 11849 3 -21800.2 -7469.1 69.78 0.125 11848 11850 3 -21799.9 -7469.06 69.77 0.125 11849 11851 3 -21798.7 -7469.69 69.55 0.125 11850 11852 3 -21798 -7469.84 69.46 0.125 11851 11853 3 -21796.8 -7470.87 68.8 0.125 11852 11854 3 -21796.3 -7470.82 68.76 0.125 11853 11855 3 -21795.2 -7470.87 66.94 0.125 11854 11856 3 -21794.9 -7470.81 66.92 0.125 11855 11857 3 -21794.1 -7471.44 64.95 0.125 11856 11858 3 -21850.8 -7417.48 153.77 0.37 11798 11859 3 -21849 -7416.81 148.74 0.37 11858 11860 3 -21846.6 -7416.85 142.41 0.37 11859 11861 3 -21843.7 -7417.29 136.68 0.37 11860 11862 3 -21842.4 -7416.46 131.72 0.37 11861 11863 3 -21840.4 -7417.4 131.37 0.37 11862 11864 3 -21840.1 -7417.32 131.36 0.37 11863 11865 3 -21837 -7417.32 126.67 0.37 11864 11866 3 -21836.6 -7418.31 126.47 0.37 11865 11867 3 -21835.8 -7418.3 119.69 0.37 11866 11868 3 -21835 -7419.04 117.06 0.37 11867 11869 3 -21834.2 -7419.46 111.11 0.37 11868 11870 3 -21833.6 -7420.03 105.65 0.37 11869 11871 3 -21833.7 -7419.77 105.7 0.37 11870 11872 3 -21832.8 -7421.68 105.3 0.37 11871 11873 3 -21831.6 -7421.24 102.29 0.25 11872 11874 3 -21831.1 -7421.94 99.28 0.25 11873 11875 3 -21830.2 -7422.61 94.66 0.25 11874 11876 3 -21829.6 -7421.27 89.21 0.25 11875 11877 3 -21828.3 -7420.5 89.22 0.25 11876 11878 3 -21826.8 -7421.49 90.14 0.125 11877 11879 3 -21826.8 -7421.23 90.18 0.125 11878 11880 3 -21824.6 -7419.96 83.38 0.125 11879 11881 3 -21822.9 -7418.71 82.35 0.125 11880 11882 3 -21822.7 -7417.91 82.47 0.125 
11881 11883 3 -21823.2 -7415.28 77.96 0.125 11882 11884 3 -21824.8 -7414.51 72.56 0.125 11883 11885 3 -21824.8 -7414.25 72.61 0.125 11884 11886 3 -21826.6 -7412.7 67.09 0.125 11885 11887 3 -21826.8 -7411.4 67.32 0.125 11886 11888 3 -21826.6 -7410.86 67.39 0.125 11887 11889 3 -21828 -7408.78 63.58 0.125 11888 11890 3 -21828 -7408.76 63.43 0.125 11889 11891 3 -21828.1 -7408.08 64.08 0.125 11890 11892 3 -21884.8 -7399.42 198.13 1.115 11757 11893 3 -21882.8 -7398.88 198.03 1.115 11892 11894 3 -21883.1 -7398.96 198.05 1.115 11893 11895 3 -21881.4 -7398.92 197.9 1.115 11894 11896 3 -21880 -7399.28 197.7 1.115 11895 11897 3 -21877.4 -7398.93 197.52 1.115 11896 11898 3 -21875.9 -7399.77 197.24 1.115 11897 11899 3 -21874.1 -7400.01 195.11 1.115 11898 11900 3 -21872.8 -7400.61 194.9 1.115 11899 11901 3 -21871.4 -7401.24 194.66 1.115 11900 11902 3 -21870 -7403.44 197.64 1.115 11901 11903 3 -21870 -7403.21 197.68 1.115 11902 11904 3 -21869.2 -7403.6 197.54 1.115 11903 11905 3 -21869.3 -7404.72 197.36 1.115 11904 11906 3 -21869.2 -7404.96 197.32 1.115 11905 11907 3 -21868.7 -7406.72 196.98 0.99 11906 11908 3 -21868.3 -7408.24 196.68 0.865 11907 11909 3 -21867.2 -7410.31 195.33 0.865 11908 11910 3 -21867.3 -7411.68 195.12 0.865 11909 11911 3 -21866.3 -7413.36 194.74 0.865 11910 11912 3 -21866.2 -7413.66 194.69 0.865 11911 11913 3 -21864.9 -7415.03 194.28 0.745 11912 11914 3 -21864.2 -7416.08 197.52 0.745 11913 11915 3 -21864.1 -7417.07 197.35 0.745 11914 11916 3 -21863.9 -7418.11 193.9 0.745 11915 11917 3 -21864.2 -7418.13 193.92 0.745 11916 11918 3 -21863.6 -7418.85 193.75 0.745 11917 11919 3 -21862.8 -7420.1 193.47 0.62 11918 11920 3 -21862 -7421.58 193.15 0.62 11919 11921 3 -21861.6 -7422.55 192.95 0.62 11920 11922 3 -21860.8 -7423.24 192.76 0.62 11921 11923 3 -21860.2 -7423.38 192.68 0.495 11922 11924 3 -21860.1 -7424.3 191.75 0.495 11923 11925 3 -21859.7 -7426 193.94 0.495 11924 11926 3 -21859.7 -7425.84 189.74 0.495 11925 11927 3 -21858.7 -7427.42 193.23 0.495 11926 11928 3 -21858.6 -7427.65 193.18 0.495 11927 11929 3 -21858.3 -7428.92 191.3 0.495 11928 11930 3 -21858.6 -7428.96 191.32 0.495 11929 11931 3 -21858.2 -7430.12 190.33 0.495 11930 11932 3 -21858 -7431.89 189.99 0.495 11931 11933 3 -21857.7 -7432.34 189.49 0.495 11932 11934 3 -21857.4 -7432.59 189.41 0.495 11933 11935 3 -21856.7 -7433.27 189.24 0.495 11934 11936 3 -21856.4 -7434.01 189.09 0.495 11935 11937 3 -21857.4 -7435.28 187.75 0.495 11936 11938 3 -21856.8 -7436.81 189.31 0.495 11937 11939 3 -21856.8 -7437.07 189.26 0.495 11938 11940 3 -21856.5 -7437.54 189.15 0.495 11939 11941 3 -21855.8 -7439.49 189.72 0.495 11940 11942 3 -21855.3 -7441.27 189.38 0.495 11941 11943 3 -21855.2 -7443.77 191.46 0.495 11942 11944 3 -21855 -7444.59 191.31 0.495 11943 11945 3 -21854.6 -7445.54 191.11 0.495 11944 11946 3 -21854.8 -7446.36 190.99 0.495 11945 11947 3 -21854.8 -7446.67 190.94 0.495 11946 11948 3 -21854.9 -7447.5 190.82 0.37 11947 11949 3 -21854.6 -7447.95 190.71 0.37 11948 11950 3 -21854.5 -7448.21 190.66 0.37 11949 11951 3 -21854.2 -7449.33 189.52 0.37 11950 11952 3 -21853.7 -7450.61 189.26 0.37 11951 11953 3 -21853.3 -7452.73 190.74 0.37 11952 11954 3 -21852.6 -7454.47 190.38 0.37 11953 11955 3 -21851.5 -7456.66 189.93 0.37 11954 11956 3 -21851.2 -7458.71 189.59 0.37 11955 11957 3 -21849.7 -7458.83 189.47 0.37 11956 11958 3 -21848.8 -7459.24 189.32 0.37 11957 11959 3 -21847.9 -7459.95 189.12 0.37 11958 11960 3 -21848.2 -7459.95 189.15 0.37 11959 11961 3 -21847.8 -7461.25 188.89 0.62 11960 11962 3 -21846.9 -7461.35 188.79 0.37 11961 11963 3 
-21846 -7461.55 188.9 0.37 11962 11964 3 -21845.4 -7462.02 188.77 0.37 11963 11965 3 -21845.4 -7461.72 188.82 0.37 11964 11966 3 -21844.1 -7463.11 188.47 0.37 11965 11967 3 -21843.5 -7463.31 188.38 0.37 11966 11968 3 -21843.6 -7464.88 188.13 0.37 11967 11969 3 -21843.3 -7465.4 188.01 0.37 11968 11970 3 -21842.8 -7466.15 187.85 0.37 11969 11971 3 -21842.2 -7466.63 187.71 0.37 11970 11972 3 -21841.3 -7466.51 189.6 0.37 11971 11973 3 -21840.7 -7466.66 189.52 0.37 11972 11974 3 -21840.2 -7468.22 189.22 0.37 11973 11975 3 -21839.5 -7468.65 189.08 0.37 11974 11976 3 -21838.9 -7468.81 189 0.37 11975 11977 3 -21837.8 -7469.43 188.79 0.37 11976 11978 3 -21837.7 -7469.76 188.73 0.37 11977 11979 3 -21837.2 -7470.21 187.21 0.37 11978 11980 3 -21836.6 -7470.61 187.08 0.37 11979 11981 3 -21836.2 -7471.34 186.93 0.37 11980 11982 3 -21835.7 -7473.17 186.58 0.37 11981 11983 3 -21834.8 -7473.59 186.42 0.37 11982 11984 3 -21834 -7474.41 188.54 0.37 11983 11985 3 -21833.8 -7475.42 188.35 0.37 11984 11986 3 -21833.6 -7477 188.08 0.37 11985 11987 3 -21833.3 -7477.97 187.58 0.37 11986 11988 3 -21832.5 -7479.17 187.31 0.37 11987 11989 3 -21832.8 -7479.2 187.33 0.37 11988 11990 3 -21832 -7479.34 187.23 0.37 11989 11991 3 -21831.3 -7479.8 187.09 0.37 11990 11992 3 -21830.5 -7479.92 186.99 0.37 11991 11993 3 -21829.2 -7480.3 186.81 0.37 11992 11994 3 -21829.2 -7480.56 186.77 0.37 11993 11995 3 -21829 -7481.85 186.54 0.37 11994 11996 3 -21828.8 -7482.08 186.48 0.37 11995 11997 3 -21828.4 -7482.85 186.31 0.37 11996 11998 3 -21828.4 -7483.08 186.27 0.37 11997 11999 3 -21828.2 -7484.15 186.08 0.37 11998 12000 3 -21828.2 -7484.38 186.04 0.37 11999 12001 3 -21828 -7485.67 185.81 0.37 12000 12002 3 -21827.2 -7487.39 186.65 0.37 12001 12003 3 -21826.7 -7488.88 186.36 0.37 12002 12004 3 -21827 -7488.89 186.38 0.37 12003 12005 3 -21825.7 -7489.79 186.12 0.37 12004 12006 3 -21824.5 -7490.43 185.9 0.37 12005 12007 3 -21824.6 -7491.81 185.68 0.37 12006 12008 3 -21824.5 -7493.07 185.47 0.25 12007 12009 3 -21823.6 -7492.98 185.4 0.495 12008 12010 3 -21823.1 -7492.9 185.37 0.495 12009 12011 3 -21822.8 -7493.12 185.3 0.495 12010 12012 3 -21821.8 -7493.77 185.1 0.25 12011 12013 3 -21822.1 -7493.82 185.12 0.25 12012 12014 3 -21821.4 -7495.01 184.86 0.37 12013 12015 3 -21821.3 -7495.8 184.72 0.37 12014 12016 3 -21821.6 -7495.83 184.74 0.37 12015 12017 3 -21820.4 -7496.99 185.86 0.37 12016 12018 3 -21819.8 -7497.99 185.63 0.37 12017 12019 3 -21819.8 -7497.7 185.69 0.37 12018 12020 3 -21819.2 -7498.19 185.55 0.125 12019 12021 3 -21818.6 -7498.63 185.42 0.125 12020 12022 3 -21818.8 -7498.63 185.44 0.125 12021 12023 3 -21818.2 -7498.83 185.35 0.37 12022 12024 3 -21817.6 -7499.53 185.18 0.37 12023 12025 3 -21817.5 -7500.29 185.04 0.37 12024 12026 3 -21817.5 -7500.08 185.08 0.37 12025 12027 3 -21816.8 -7501.21 184.56 0.125 12026 12028 3 -21816.6 -7501.52 183.43 0.37 12027 12029 3 -21816.5 -7501.61 183.95 0.37 12028 12030 3 -21816.1 -7503.15 183.65 0.37 12029 12031 3 -21815.9 -7504.17 183.47 0.125 12030 12032 3 -21815.5 -7505.15 183.27 0.125 12031 12033 3 -21815.2 -7505.46 183.18 0.125 12032 12034 3 -21815.2 -7505.14 183.24 0.125 12033 12035 3 -21814.8 -7506.67 182.94 0.495 12034 12036 3 -21815.1 -7506.71 182.96 0.495 12035 12037 3 -21815.2 -7507.79 182.8 0.495 12036 12038 3 -21814.3 -7509.66 183.31 0.25 12037 12039 3 -21813.6 -7510.63 183.09 0.25 12038 12040 3 -21813 -7511.35 182.91 0.495 12039 12041 3 -21812.6 -7511.83 182.8 0.495 12040 12042 3 -21811.9 -7512.8 182.57 0.25 12041 12043 3 -21811.6 -7513.28 182.46 0.25 12042 12044 3 -21810.8 
-7514.78 182.14 0.495 12043 12045 3 -21810.5 -7515.63 181.48 0.125 12044 12046 3 -21810.7 -7515.72 181.49 0.125 12045 12047 3 -21810.9 -7516.53 181.38 0.125 12046 12048 3 -21811.1 -7517.35 181.39 0.37 12047 12049 3 -21810.9 -7518.89 181.12 0.125 12048 12050 3 -21811.3 -7519.78 181.01 0.125 12049 12051 3 -21811.1 -7519.73 180.99 0.495 12050 12052 3 -21811.1 -7519.97 180.95 0.495 12051 12053 3 -21810.9 -7521.08 180.76 0.25 12052 12054 3 -21810.9 -7521.31 180.71 0.25 12053 12055 3 -21811.1 -7521.73 181.35 0.25 12054 12056 3 -21811.3 -7524.15 180.98 0.25 12055 12057 3 -21811.3 -7526.53 180.58 0.25 12056 12058 3 -21811.1 -7528.1 180.3 0.25 12057 12059 3 -21851.5 -7460.51 190.08 0.495 11956 12060 3 -21851.5 -7461.94 190.35 0.495 12059 12061 3 -21851.2 -7461.9 190.33 0.495 12060 12062 3 -21851.1 -7462.67 190.19 0.495 12061 12063 3 -21850.8 -7463.2 190.08 0.37 12062 12064 3 -21850.4 -7464.44 189.83 0.37 12063 12065 3 -21850.1 -7465.84 190.49 0.37 12064 12066 3 -21850.3 -7466.96 190.33 0.37 12065 12067 3 -21849.6 -7467.68 190.14 0.37 12066 12068 3 -21849.5 -7468.85 192.41 0.37 12067 12069 3 -21850 -7469.72 192.32 0.37 12068 12070 3 -21850.3 -7471.54 193.1 0.37 12069 12071 3 -21850.5 -7472.61 192.94 0.37 12070 12072 3 -21850.7 -7474.09 194.96 0.37 12071 12073 3 -21850.9 -7474.62 194.89 0.37 12072 12074 3 -21850.8 -7474.92 194.83 0.37 12073 12075 3 -21850.1 -7475.78 195.68 0.37 12074 12076 3 -21850.2 -7476.78 196.54 0.25 12075 12077 3 -21850.3 -7477.49 197.41 0.25 12076 12078 3 -21850.2 -7477.65 198.39 0.25 12077 12079 3 -21850.6 -7477.71 196.8 0.495 12078 12080 3 -21850.3 -7478.87 199.14 0.495 12079 12081 3 -21851 -7479.57 199.3 0.25 12080 12082 3 -21851.5 -7479.86 199.3 0.495 12081 12083 3 -21851.1 -7481.39 201.81 0.25 12082 12084 3 -21850.8 -7482.13 201.65 0.25 12083 12085 3 -21850.7 -7483.15 201.48 0.495 12084 12086 3 -21849.8 -7484.14 203.11 0.125 12085 12087 3 -21849.2 -7484.35 203.02 0.125 12086 12088 3 -21849.2 -7484.64 202.97 0.125 12087 12089 3 -21848.9 -7484.85 202.91 0.37 12088 12090 3 -21849.8 -7485.62 203.6 0.125 12089 12091 3 -21850 -7486.44 203.53 0.125 12090 12092 3 -21849.6 -7487.45 207.88 0.37 12091 12093 3 -21850.4 -7488.62 209.54 0.37 12092 12094 3 -21850.3 -7489.93 209.31 0.125 12093 12095 3 -21850.1 -7490.99 209.12 0.37 12094 12096 3 -21849.5 -7492.7 214.53 0.125 12095 12097 3 -21849.4 -7493.74 214.35 0.37 12096 12098 3 -21850 -7495.13 215.56 0.37 12097 12099 3 -21850.8 -7495.23 215.62 0.37 12098 12100 3 -21851.7 -7496.07 213.59 0.125 12099 12101 3 -21851.9 -7496.36 213.56 0.125 12100 12102 3 -21852.3 -7497.41 214.62 0.37 12101 12103 3 -21854 -7499.5 214.41 0.37 12102 12104 3 -21854.6 -7501.01 219.42 0.125 12103 12105 3 -21854.8 -7501.87 219.29 0.125 12104 12106 3 -21854.6 -7503.42 219.02 0.125 12105 12107 3 -21854.7 -7504.72 218.81 0.125 12106 12108 3 -21854.2 -7505.55 219.23 0.125 12107 12109 3 -21854.4 -7506.63 219.07 0.37 12108 12110 3 -21854.8 -7507.75 218.93 0.37 12109 12111 3 -21854.8 -7436.4 185.45 0.37 11936 12112 3 -21853.5 -7437.85 185.09 0.37 12111 12113 3 -21853.2 -7438.61 184.93 0.37 12112 12114 3 -21849.7 -7441.33 179.91 0.37 12113 12115 3 -21847.3 -7444.77 173.57 0.37 12114 12116 3 -21845.9 -7445.7 173.28 0.37 12115 12117 3 -21845.6 -7445.37 173.32 0.37 12116 12118 3 -21843.7 -7446.97 172.87 0.37 12117 12119 3 -21843.5 -7446.92 172.85 0.37 12118 12120 3 -21841.9 -7447.33 170.11 0.37 12119 12121 3 -21840.7 -7448.1 168.9 0.37 12120 12122 3 -21838.4 -7449.39 170.48 0.37 12121 12123 3 -21837.4 -7450.59 170.18 0.37 12122 12124 3 -21836.6 -7450.78 170.07 0.125 12123 
12125 3 -21835.8 -7450.86 169.98 0.125 12124 12126 3 -21834.8 -7451.02 169.87 0.37 12125 12127 3 -21834.1 -7452.53 169.55 0.37 12126 12128 3 -21834.1 -7452.24 169.6 0.37 12127 12129 3 -21833.7 -7453.26 169.39 0.125 12128 12130 3 -21833.1 -7453.42 169.31 0.125 12129 12131 3 -21832.9 -7453.19 168.18 0.37 12130 12132 3 -21832.3 -7453.93 168 0.37 12131 12133 3 -21832.2 -7454.7 167.86 0.125 12132 12134 3 -21831.8 -7455.68 167.66 0.37 12133 12135 3 -21830.2 -7456.55 167.37 0.125 12134 12136 3 -21829.7 -7456.53 167.7 0.125 12135 12137 3 -21829 -7457.25 167.51 0.37 12136 12138 3 -21829 -7457.49 167.47 0.37 12137 12139 3 -21828.8 -7458.1 164.91 0.125 12138 12140 3 -21828.5 -7458.34 164.85 0.37 12139 12141 3 -21827.6 -7458.49 164.74 0.37 12140 12142 3 -21826.9 -7459.12 164.57 0.125 12141 12143 3 -21826.8 -7459.28 165.57 0.125 12142 12144 3 -21826 -7459.75 165.41 0.125 12143 12145 3 -21825.2 -7460.43 162.04 0.37 12144 12146 3 -21824 -7461.34 161.78 0.125 12145 12147 3 -21824.3 -7461.39 161.79 0.37 12146 12148 3 -21823 -7462.22 161.43 0.37 12147 12149 3 -21822.6 -7463.2 161.23 0.37 12148 12150 3 -21821.7 -7464.14 160.99 0.25 12149 12151 3 -21820.9 -7465.33 160.72 0.25 12150 12152 3 -21819.3 -7466.04 158.11 0.25 12151 12153 3 -21818.6 -7466.7 159.03 0.25 12152 12154 3 -21817.2 -7466.49 158.93 0.25 12153 12155 3 -21811.7 -7465.6 158.31 0.125 12154 12156 3 -21809.3 -7466.67 157.33 0.37 12155 12157 3 -21808.8 -7466.3 157.34 0.37 12156 12158 3 -21807.4 -7466.93 154.67 0.125 12157 12159 3 -21806.8 -7467.37 154.54 0.125 12158 12160 3 -21806.5 -7467.55 154.48 0.125 12159 12161 3 -21806.1 -7467.76 154.41 0.125 12160 12162 3 -21804.6 -7469.44 156.18 0.37 12161 12163 3 -21804 -7469.36 156.14 0.37 12162 12164 3 -21802.4 -7469.31 155.99 0.125 12163 12165 3 -21801.7 -7469.77 155.86 0.125 12164 12166 3 -21801.4 -7470.2 155.75 0.25 12165 12167 3 -21800.8 -7471.68 155.46 0.37 12166 12168 3 -21800.9 -7471.42 155.5 0.37 12167 12169 3 -21800.6 -7473.2 155.18 0.25 12168 12170 3 -21799.9 -7474.57 152.87 0.25 12169 12171 3 -21799.6 -7476.14 152.59 0.25 12170 12172 3 -21799.2 -7477.65 150.82 0.37 12171 12173 3 -21799.1 -7478.17 150.72 0.25 12172 12174 3 -21798.5 -7478.8 150.56 0.25 12173 12175 3 -21797.5 -7479.69 150.32 0.37 12174 12176 3 -21797.7 -7479.72 150.34 0.37 12175 12177 3 -21796.5 -7480.33 150.12 0.25 12176 12178 3 -21795.2 -7481.42 149.82 0.37 12177 12179 3 -21795.3 -7481.15 149.87 0.37 12178 12180 3 -21794.2 -7482.34 150.16 0.37 12179 12181 3 -21793.6 -7484.15 149.81 0.37 12180 12182 3 -21793.9 -7484.16 149.83 0.37 12181 12183 3 -21794.3 -7485.02 149.72 0.125 12182 12184 3 -21794.6 -7486.42 149.53 0.125 12183 12185 3 -21794.2 -7487.13 149.37 0.25 12184 12186 3 -21795.2 -7487.78 149.36 0.25 12185 12187 3 -21795.4 -7488.36 149.28 0.37 12186 12188 3 -21796.2 -7489.82 150.6 0.25 12187 12189 3 -21796.6 -7490.46 150.53 0.495 12188 12190 3 -21797.8 -7491.71 150.44 0.37 12189 12191 3 -21798.8 -7492.96 150.32 0.125 12190 12192 3 -21799.9 -7493.13 150.4 0.125 12191 12193 3 -21800.1 -7493.47 150.36 0.125 12192 12194 3 -21801.6 -7494.05 152.29 0.25 12193 12195 3 -21803.6 -7493.34 152.6 0.25 12194 12196 3 -21803.9 -7493.41 152.61 0.37 12195 12197 3 -21803.3 -7495.4 152.18 0.125 12194 12198 3 -21804.1 -7495.8 152.19 0.37 12197 12199 3 -21857.6 -7426.2 187.15 0.495 11926 12200 3 -21856.2 -7426.81 185.58 0.37 12199 12201 3 -21853.7 -7427.15 183.11 0.37 12200 12202 3 -21852 -7427.89 182.08 0.495 12201 12203 3 -21850.3 -7428.32 181.21 0.495 12202 12204 3 -21848.8 -7428.64 181.02 0.495 12203 12205 3 -21846.9 -7428.89 180.8 0.495 
12204 12206 3 -21845.6 -7429.71 180.54 0.495 12205 12207 3 -21844.9 -7430.2 180.4 0.495 12206 12208 3 -21844.7 -7430.17 180.38 0.495 12207 12209 3 -21843.8 -7430.56 180.23 0.37 12208 12210 3 -21842.8 -7430.7 180.12 0.37 12209 12211 3 -21842.3 -7431.15 179.93 0.37 12210 12212 3 -21841.6 -7431.83 179.75 0.37 12211 12213 3 -21841 -7432.55 179.57 0.25 12212 12214 3 -21840.8 -7432.77 177.83 0.495 12213 12215 3 -21839.9 -7432.91 177.73 0.495 12214 12216 3 -21838.7 -7434.25 180.25 0.37 12215 12217 3 -21837.8 -7434.97 180.04 0.37 12216 12218 3 -21836.8 -7435.61 179.85 0.37 12217 12219 3 -21836 -7435.82 178.46 0.37 12218 12220 3 -21835.4 -7436.25 178.33 0.37 12219 12221 3 -21834.7 -7437.77 178.23 0.37 12220 12222 3 -21834 -7438.72 178.01 0.37 12221 12223 3 -21833 -7439.42 177.8 0.25 12222 12224 3 -21832.7 -7439.91 177.69 0.37 12223 12225 3 -21831.4 -7441.02 177.35 0.37 12224 12226 3 -21830.8 -7441.19 177.27 0.37 12225 12227 3 -21829.3 -7441.64 177.59 0.37 12226 12228 3 -21828.1 -7442.26 177.38 0.37 12227 12229 3 -21827.2 -7442.91 177.18 0.37 12228 12230 3 -21825.5 -7444.29 178.38 0.37 12229 12231 3 -21824.3 -7445.13 178.07 0.37 12230 12232 3 -21823.8 -7446.09 174.57 0.25 12231 12233 3 -21823.2 -7446.55 174.44 0.25 12232 12234 3 -21821.9 -7447.17 174.22 0.25 12233 12235 3 -21820.5 -7447.69 173.79 0.25 12234 12236 3 -21820.2 -7448.48 172.04 0.25 12235 12237 3 -21819.3 -7449.15 171.84 0.25 12236 12238 3 -21818.4 -7449.31 171.73 0.25 12237 12239 3 -21817.6 -7449.9 171.03 0.37 12238 12240 3 -21816.3 -7450.48 170.82 0.37 12239 12241 3 -21815.1 -7451.11 170.72 0.25 12240 12242 3 -21814.5 -7451.59 170.6 0.25 12241 12243 3 -21813.3 -7452.24 170.38 0.37 12242 12244 3 -21812.1 -7452.05 170.31 0.37 12243 12245 3 -21811.5 -7451.72 167.39 0.37 12244 12246 3 -21810.9 -7452.45 167.2 0.37 12245 12247 3 -21810 -7452.83 167.06 0.37 12246 12248 3 -21809.3 -7453.26 166.93 0.25 12247 12249 3 -21808.8 -7453.18 166.89 0.25 12248 12250 3 -21808.2 -7453.65 166.75 0.25 12249 12251 3 -21807.5 -7454.32 166.58 0.37 12250 12252 3 -21805.8 -7454.64 166.36 0.37 12251 12253 3 -21804 -7454.91 166.16 0.37 12252 12254 3 -21803.4 -7455.33 162.81 0.37 12253 12255 3 -21802.6 -7456.6 162.53 0.25 12254 12256 3 -21802.1 -7456.74 162.45 0.37 12255 12257 3 -21801.5 -7456.87 162.38 0.37 12256 12258 3 -21801.4 -7457.14 162.33 0.37 12257 12259 3 -21800.2 -7458.06 162.06 0.37 12258 12260 3 -21799.9 -7458.32 161.99 0.495 12259 12261 3 -21799.3 -7458.74 161.86 0.495 12260 12262 3 -21797.5 -7459.4 160.8 0.37 12261 12263 3 -21796.2 -7461.1 160.4 0.37 12262 12264 3 -21795.5 -7462.3 160.13 0.37 12263 12265 3 -21795.7 -7462.35 160.14 0.37 12264 12266 3 -21794.5 -7462.73 160.35 0.125 12265 12267 3 -21793.9 -7463.5 160.17 0.37 12266 12268 3 -21793.2 -7464.18 159.99 0.37 12267 12269 3 -21792.3 -7465.13 159.74 0.125 12268 12270 3 -21790.8 -7464.91 159.65 0.125 12269 12271 3 -21790.5 -7465.39 159.54 0.37 12270 12272 3 -21788.3 -7465.1 159.37 0.37 12271 12273 3 -21786.8 -7465.67 159.14 0.37 12272 12274 3 -21785.3 -7466 158.95 0.37 12273 12275 3 -21785.6 -7466.02 158.97 0.37 12274 12276 3 -21784.6 -7466.7 158.77 0.37 12275 12277 3 -21783.6 -7466.8 154.15 0.25 12276 12278 3 -21782.5 -7467.76 155.54 0.125 12277 12279 3 -21781.8 -7468.74 155.31 0.37 12278 12280 3 -21781.2 -7469.8 154.18 0.125 12279 12281 3 -21780.3 -7470.18 154.03 0.125 12280 12282 3 -21780.2 -7470.47 153.98 0.125 12281 12283 3 -21778.8 -7470.8 153.79 0.37 12282 12284 3 -21778.7 -7471.3 153.7 0.37 12283 12285 3 -21777.6 -7472.77 153.36 0.125 12284 12286 3 -21776.2 -7473.08 153.17 0.125 12285 
12287 3 -21775.3 -7473.23 153.07 0.125 12286 12288 3 -21774.2 -7473.66 151.8 0.495 12287 12289 3 -21774.2 -7473.95 151.75 0.495 12288 12290 3 -21861.8 -7416.23 198.81 0.62 11914 12291 3 -21861.8 -7416.52 198.76 0.62 12290 12292 3 -21859.8 -7417.11 202.21 0.62 12291 12293 3 -21857.9 -7418.66 206 0.495 12292 12294 3 -21856.8 -7420.05 207 0.495 12293 12295 3 -21856.3 -7421.6 206.69 0.495 12294 12296 3 -21855 -7425.27 217.64 0.495 12295 12297 3 -21854.5 -7426.25 217.43 0.495 12296 12298 3 -21853.7 -7427.83 217.51 0.495 12297 12299 3 -21853.4 -7428.57 217.36 0.495 12298 12300 3 -21853.2 -7429.85 217.13 0.495 12299 12301 3 -21852.1 -7430.42 216.62 0.37 12300 12302 3 -21851.4 -7431.13 216.44 0.37 12301 12303 3 -21851.6 -7431.45 216.41 0.495 12302 12304 3 -21851.3 -7432.18 216.25 0.495 12303 12305 3 -21849.4 -7433.85 219.12 0.495 12304 12306 3 -21849 -7434.54 218.97 0.495 12305 12307 3 -21847.8 -7435.19 218.75 0.495 12306 12308 3 -21846 -7435.73 218.49 0.495 12307 12309 3 -21845.9 -7437.04 218.26 0.495 12308 12310 3 -21844.1 -7437.62 218 0.495 12309 12311 3 -21842.9 -7438.05 218.49 0.37 12310 12312 3 -21841.6 -7438.95 218.23 0.37 12311 12313 3 -21841.3 -7439.44 218.11 0.495 12312 12314 3 -21840.7 -7439.35 218.07 0.495 12313 12315 3 -21839.4 -7439.89 220.37 0.495 12314 12316 3 -21837.6 -7440.93 223.12 0.495 12315 12317 3 -21837.5 -7441.72 222.98 0.495 12316 12318 3 -21836.8 -7442.66 222.76 0.37 12317 12319 3 -21835.9 -7442.8 222.66 0.37 12318 12320 3 -21835 -7443.2 222.5 0.495 12319 12321 3 -21834.2 -7443.37 222.39 0.495 12320 12322 3 -21833.3 -7443.8 222.24 0.495 12321 12323 3 -21831.8 -7444.63 221.96 0.495 12322 12324 3 -21829.8 -7446.69 226.05 0.495 12323 12325 3 -21828.8 -7447.62 225.8 0.25 12324 12326 3 -21828.4 -7448.4 225.64 0.495 12325 12327 3 -21827.1 -7449 225.42 0.495 12326 12328 3 -21825.2 -7450.59 226.39 0.495 12327 12329 3 -21824.6 -7452.07 227.76 0.495 12328 12330 3 -21824.5 -7452.21 228.62 0.495 12329 12331 3 -21826.8 -7442.29 228.6 0.37 12330 12332 3 -21825.1 -7443.31 228.27 0.37 12331 12333 3 -21823.9 -7444.42 231.05 0.37 12332 12334 3 -21823.6 -7444.35 231.03 0.37 12333 12335 3 -21823.7 -7445.68 230.82 0.37 12334 12336 3 -21822.6 -7447.06 233.25 0.37 12335 12337 3 -21821.7 -7448.78 232.88 0.37 12336 12338 3 -21821.5 -7448.72 232.87 0.37 12337 12339 3 -21820.5 -7449.52 232.64 0.37 12338 12340 3 -21820.4 -7450.3 232.5 0.37 12339 12341 3 -21819.3 -7451.2 232.13 0.37 12340 12342 3 -21819.1 -7451.13 232.12 0.37 12341 12343 3 -21817.8 -7451.4 231.96 0.37 12342 12344 3 -21817.6 -7451.39 231.94 0.37 12343 12345 3 -21817.4 -7452.35 231.76 0.37 12344 12346 3 -21816.1 -7453.76 234.49 0.37 12345 12347 3 -21816.1 -7454.05 234.47 0.37 12346 12348 3 -21816 -7454.52 234.38 0.37 12347 12349 3 -21815.6 -7454.72 234.31 0.37 12348 12350 3 -21814.9 -7455.9 234.05 0.37 12349 12351 3 -21814.5 -7456.09 233.98 0.37 12350 12352 3 -21813.9 -7456.54 233.85 0.37 12351 12353 3 -21813.2 -7457.19 233.67 0.37 12352 12354 3 -21813.1 -7457.68 233.59 0.37 12353 12355 3 -21813 -7457.95 233.53 0.37 12354 12356 3 -21812.7 -7458.15 233.47 0.37 12355 12357 3 -21811.8 -7458.73 233.07 0.37 12356 12358 3 -21810.9 -7458.78 232.98 0.37 12357 12359 3 -21810.2 -7459.46 232.8 0.37 12358 12360 3 -21809.9 -7459.66 232.74 0.37 12359 12361 3 -21809.2 -7460.13 232.6 0.37 12360 12362 3 -21809 -7460.02 232.6 0.37 12361 12363 3 -21807.8 -7460.34 232.43 0.37 12362 12364 3 -21807.5 -7460.28 232.41 0.37 12363 12365 3 -21807.2 -7460.21 232.4 0.37 12364 12366 3 -21806.6 -7460.63 232.27 0.37 12365 12367 3 -21805.7 -7461.84 233.77 0.37 12366 
12368 3 -21805.4 -7462.03 233.7 0.37 12367 12369 3 -21804.5 -7462.4 233.7 0.37 12368 12370 3 -21802.2 -7462.41 232.76 0.37 12369 12371 3 -21801 -7462.9 232.28 0.37 12370 12372 3 -21800.7 -7462.58 232.3 0.37 12371 12373 3 -21798.8 -7463.57 231.98 0.37 12372 12374 3 -21798.6 -7463.49 231.96 0.37 12373 12375 3 -21798.2 -7464.25 231.8 0.37 12374 12376 3 -21797.8 -7465.57 233.73 0.37 12375 12377 3 -21797.8 -7465.85 233.68 0.37 12376 12378 3 -21797 -7467.02 233.42 0.37 12377 12379 3 -21796.4 -7467.15 233.34 0.37 12378 12380 3 -21795.8 -7467.32 233.26 0.37 12379 12381 3 -21795.5 -7467.22 233.24 0.37 12380 12382 3 -21795.3 -7468.02 233.1 0.25 12381 12383 3 -21795 -7468.49 232.99 0.25 12382 12384 3 -21794.7 -7468.42 232.97 0.25 12383 12385 3 -21794 -7469.35 232.75 0.25 12384 12386 3 -21793.6 -7469.3 232.73 0.25 12385 12387 3 -21792.5 -7470.73 234.16 0.25 12386 12388 3 -21792.2 -7470.95 234.1 0.25 12387 12389 3 -21791.2 -7472.63 233.73 0.25 12388 12390 3 -21790.7 -7472.77 233.66 0.25 12389 12391 3 -21790.1 -7472.63 233.63 0.25 12390 12392 3 -21788.7 -7472.37 233.54 0.25 12391 12393 3 -21788.1 -7472.79 233.42 0.25 12392 12394 3 -21787.5 -7473.19 233.29 0.25 12393 12395 3 -21787.4 -7473.69 233.2 0.25 12394 12396 3 -21787 -7473.9 233.13 0.25 12395 12397 3 -21786.4 -7474.02 233.05 0.25 12396 12398 3 -21851.8 -7428.43 228.56 0.495 12297 12399 3 -21851.1 -7429.13 228.4 0.37 12398 12400 3 -21850.5 -7429.61 228.27 0.37 12399 12401 3 -21849.8 -7431.12 231.5 0.37 12400 12402 3 -21849.1 -7431.6 231.36 0.37 12401 12403 3 -21848.4 -7432.81 231.1 0.37 12402 12404 3 -21847.9 -7433.64 233.33 0.37 12403 12405 3 -21849.3 -7424.88 238 0.37 12404 12406 3 -21849.4 -7424.63 238.04 0.37 12405 12407 3 -21846.6 -7424.73 246.01 0.37 12406 12408 3 -21846.6 -7424.48 246.06 0.37 12407 12409 3 -21845.1 -7424.51 246.17 0.37 12408 12410 3 -21844.9 -7424.25 246.55 0.37 12409 12411 3 -21843.4 -7424.76 246.32 0.37 12410 12412 3 -21843.1 -7424.8 247 0.37 12411 12413 3 -21841.6 -7425.1 247.09 0.37 12412 12414 3 -21841.5 -7425.18 247.58 0.37 12413 12415 3 -21838.1 -7425.72 250.85 0.37 12414 12416 3 -21838.2 -7425.47 250.9 0.37 12415 12417 3 -21835.8 -7425.76 250.63 0.37 12416 12418 3 -21835.7 -7425.86 251.22 0.37 12417 12419 3 -21834.3 -7426.39 253.94 0.37 12418 12420 3 -21834 -7426.58 253.88 0.37 12419 12421 3 -21833.7 -7426.78 253.82 0.37 12420 12422 3 -21833.1 -7426.68 253.78 0.37 12421 12423 3 -21832 -7426.49 253.81 0.37 12422 12424 3 -21831.6 -7428.52 259.24 0.37 12423 12425 3 -21831.5 -7428.87 259.62 0.37 12424 12426 3 -21831.3 -7429.44 259.99 0.37 12425 12427 3 -21831.5 -7431.63 269.01 0.37 12426 12428 3 -21828.3 -7431.57 279.17 0.37 12427 12429 3 -21827 -7432.74 282.56 0.37 12428 12430 3 -21826.4 -7433.11 286.68 0.37 12429 12431 3 -21826.3 -7433.26 287.56 0.37 12430 12432 3 -21824.3 -7434.71 293.06 0.37 12431 12433 3 -21824.6 -7434.75 293.08 0.37 12432 12434 3 -21823.7 -7435.45 296.18 0.37 12433 12435 3 -21853.5 -7421.88 212.1 0.25 12295 12436 3 -21852.3 -7422.14 214.12 0.37 12435 12437 3 -21850.9 -7421.67 214.07 0.25 12436 12438 3 -21849.8 -7421.27 214.03 0.37 12437 12439 3 -21849.2 -7421.42 213.95 0.37 12438 12440 3 -21849 -7421.39 213.93 0.37 12439 12441 3 -21848 -7422.07 213.73 0.37 12440 12442 3 -21848 -7422.35 213.68 0.37 12441 12443 3 -21846.8 -7422.94 213.47 0.25 12442 12444 3 -21845.6 -7423.33 213.29 0.37 12443 12445 3 -21843.9 -7423.32 213.14 0.37 12444 12446 3 -21842 -7424.09 214.19 0.25 12445 12447 3 -21840.8 -7424.73 213.97 0.37 12446 12448 3 -21839.6 -7424.85 213.84 0.37 12447 12449 3 -21838.1 -7425.26 214.43 0.37 
[Large block of SWC morphology sample records omitted for readability. The data continue,
one record per compartment, up to sample 13461; each record carries the standard seven
SWC fields: sample index, structure type, x, y, z, radius, parent index.]
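(Note, not part of the archive: each SWC record above, e.g.
``12450 3 -21837.2 -7425.67 214.28 0.37 12449``, lists sample index, structure type,
x, y, z, radius and parent index. The experimental SpatialNeuron demo in
brian/experimental/spatialneuron.py below loads a file of this kind; assuming the
accompanying morphology module is importable as brian.experimental.morphology,
a minimal load is:

    from brian.experimental.morphology import Morphology
    morpho = Morphology('oi24rpy1.CNG.swc')  # builds the compartment tree from SWC records
    print len(morpho), "compartments")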
brian-1.3.1/brian/experimental/realtime_monitor.py
from brian import *
import pygame
import matplotlib.cm as cm

__all__ = ['RealtimeConnectionMonitor']


class RealtimeConnectionMonitor(NetworkOperation):
    '''
    Realtime monitoring of weight matrix

    Short docs:

    ``C``
        Connection to monitor
    ``size``
        Dimensions (width, height) of output window, leave as ``None``
        to use ``C.W.shape``.
    ``scaling``
        If output window dimensions are different to connection matrix,
        scaling is used, options are ``'fast'`` for no interpolation, or
        ``'smooth'`` for (slower) smooth interpolation.
    ``wmin, wmax``
        Minimum and maximum weight matrix values, if left to ``None``
        then the min/max of the weight matrix at each moment is used
        (and this scaling can change over time).
    ``clock``
        Leave to ``None`` for an update every 100ms.
    ``cmap``
        Colour map to use, black and white by default. Get other values
        from ``matplotlib.cm.*``.

    Note that this class uses PyGame and due to a limitation with pygame,
    there can be only one window using it. Other options are being considered.
    '''
    def __init__(self, C, size=None, scaling='fast', wmin=None, wmax=None,
                 clock=None, cmap=cm.gray):
        self.C = C
        self.wmin = wmin
        self.wmax = wmax
        self.cmap = cmap
        if clock is None:
            clock = EventClock(dt=100 * ms)
        NetworkOperation.__init__(self, lambda:None, clock=clock)
        pygame.init()
        if size is None:
            width, height = C.W.shape
            self.scaling = None
        else:
            width, height = size
            self.scaling = scaling
        self.width, self.height = width, height
        self.screen = pygame.display.set_mode((width, height))
        self.screen_arr = pygame.surfarray.pixels3d(self.screen)

    def __call__(self):
        W = self.C.W.todense()
        wmin, wmax = self.wmin, self.wmax
        if wmin is None:
            wmin = amin(W)
        if wmax is None:
            wmax = amax(W)
        if wmax - wmin < 1e-20:
            wmax = wmin + 1e-20
        W = self.cmap(clip((W - wmin) / (wmax - wmin), 0, 1), bytes=True)
        if self.scaling is None:
            self.screen_arr[:, :, :] = W[:, :, :3]
        elif self.scaling == 'fast':
            srf = pygame.surfarray.make_surface(W[:, :, :3])
            pygame.transform.scale(srf, (self.width, self.height), self.screen)
        elif self.scaling == 'smooth':
            srf = pygame.surfarray.make_surface(W[:, :, :3])
            pygame.transform.smoothscale(srf, (self.width, self.height), self.screen)
        pygame.display.flip()
        pygame.event.pump()


if __name__ == '__main__':
    Nin = 200
    Nout = 200
    duration = 40 * second
    Fmax = 50 * Hz

    tau = 10 * ms
    taue = 2 * ms
    taui = 5 * ms
    sigma = 0.#0.4
    # Note that factor (tau/taue) makes integral(v(t)) the same when the connection
    # acts on ge as if it acted directly on v.
    eqs = Equations('''
    #dv/dt = -v/tau + sigma*xi/(2*tau)**.5 : 1
    dv/dt = (-v+(tau/taue)*ge-(tau/taui)*gi)/tau + sigma*xi/(2*tau)**.5 : 1
    dge/dt = -ge/taue : 1
    dgi/dt = -gi/taui : 1
    excitatory = ge
    inhibitory = gi
    ''')
    reset = 0
    threshold = 1
    refractory = 0 * ms
    taup = 5 * ms
    taud = 5 * ms
    Ap = .1
    Ad = -Ap * taup / taud * 1.2
    wmax_ff = 0.1
    wmax_rec = wmax_ff
    wmax_inh = wmax_rec

    width = 0.2
    recwidth = 0.2

    Gin = PoissonGroup(Nin)
    Gout = NeuronGroup(Nout, eqs, reset=reset, threshold=threshold, refractory=refractory)
    ff = Connection(Gin, Gout, 'excitatory', structure='dense',
                    weight=rand(Nin, Nout) * wmax_ff)
    for i in xrange(Nin):
        ff[i, :] = (rand(Nout) > .5) * wmax_ff
    rec = Connection(Gout, Gout, 'excitatory')
    for i in xrange(Nout):
        d = abs(float(i) / Nout - linspace(0, 1, Nout))
        d[d > .5] = 1. - d[d > .5]
        dsquared = d ** 2
        prob = exp(-dsquared / (2 * recwidth ** 2))
        prob[i] = -1
        inds = (rand(Nout) < prob).nonzero()[0]
        w = rand(len(inds)) * wmax_rec
        rec[i, inds] = w

    inh = Connection(Gout, Gout, 'inhibitory', sparseness=1, weight=wmax_inh)

    stdp_ff = ExponentialSTDP(ff, taup, taud, Ap, Ad, wmax=wmax_ff)
    stdp_rec = ExponentialSTDP(rec, taup, taud, Ap, Ad, wmax=wmax_rec)

    M = RealtimeConnectionMonitor(ff, size=(500, 500), scaling='smooth',
                                  wmin=0., wmax=wmax_ff,
                                  clock=EventClock(dt=20 * ms), cmap=cm.jet)

    run(0 * ms)

    @network_operation(clock=EventClock(dt=20 * ms))
    def stimulation():
        Gin.rate = Fmax * exp(-(linspace(0, 1, Nin) - rand()) ** 2 / (2 * width ** 2))

    run(duration, report='stderr')
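
# --- Illustrative sketch (not part of the original module) ---
# The demo above is a full plasticity experiment; the function below only shows
# the minimal call signature of RealtimeConnectionMonitor on a toy network. All
# group sizes, rates and weights are arbitrary placeholder values, and a working
# pygame installation is assumed.
def _example_minimal_realtime_monitor():
    '''Attach a realtime weight-matrix display to a simple static connection.'''
    from brian import PoissonGroup, NeuronGroup, Connection, run, Hz, mV, ms, second
    from numpy.random import rand
    G = PoissonGroup(50, rates=30 * Hz)
    H = NeuronGroup(50, model='dv/dt=-v/(10*ms) : volt', threshold=10 * mV, reset=0 * mV)
    C = Connection(G, H, 'v', structure='dense', weight=rand(50, 50) * 0.5 * mV)
    # Opens a single pygame window, refreshed every 100 ms (the default clock)
    M = RealtimeConnectionMonitor(C, size=(300, 300), scaling='fast')
    run(5 * second)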
brian-1.3.1/brian/experimental/spatialneuron.py
'''
Compartmental neurons
See BEP-15

TODO:
* Threshold and reset are special (not as normal NeuronGroup because only 1 spike)
* Hines method
* Point processes
* StateMonitor
* neuron.plot('gl')
* Iteration (over the branch or the entire tree?)
'''
from morphology import *
from brian.stdunits import *
from brian.units import *
from brian.reset import NoReset
from brian.stateupdater import StateUpdater
from brian.equations import Equations
from brian.group import Group
from itertools import count
from brian.neurongroup import NeuronGroup

__all__ = ['SpatialNeuron', 'CompartmentalNeuron']


class SpatialNeuron(NeuronGroup):
    """
    Compartmental model with morphology.
    """
    def __init__(self, morphology=None, model=None, threshold=None, reset=NoReset(),
                 refractory=0 * ms, level=0, clock=None, unit_checking=True,
                 compile=False, freeze=False,
                 cm=0.9 * uF / cm ** 2, Ri=150 * ohm * cm):
        clock = guess_clock(clock)
        N = len(morphology) # number of compartments

        # Equations for morphology
        eqs_morphology = Equations("""
        diameter : um
        length : um
        x : um
        y : um
        z : um
        area : um**2
        """)

        # Create the state updater
        if isinstance(model, str):
            model = Equations(model, level=level + 1)
        model += Equations('''
        v:volt # membrane potential
        #Im:amp/cm**2 # membrane current (should we have it?)
        ''')
        full_model = model + eqs_morphology
        Group.__init__(self, full_model, N, unit_checking=unit_checking)
        self._eqs = model
        self._state_updater = SpatialStateUpdater(model, clock)
        var_names = full_model._diffeq_names
        self.cm = cm # could be a vector?
        self.Ri = Ri
        S0 = {}
        # Fill missing units
        for key, value in full_model._units.iteritems():
            if not key in S0:
                S0[key] = 0 * value
        self._S0 = [0] * len(var_names)
        for var, i in zip(var_names, count()):
            self._S0[i] = S0[var]

        NeuronGroup.__init__(self, N, model=self._state_updater, threshold=threshold,
                             reset=reset, refractory=refractory, level=level + 1,
                             clock=clock, unit_checking=unit_checking)

        # Insert morphology
        self.morphology = morphology
        self.morphology.compress(diameter=self.diameter, length=self.length,
                                 x=self.x, y=self.y, z=self.z, area=self.area)

    def subgroup(self, N): # Subgrouping cannot be done in this way
        raise NotImplementedError

    def __getitem__(self, x):
        '''
        Subgrouping mechanism.
        self['axon'] returns the subtree named "axon".

        TODO:
        self[:] returns the full branch.
        '''
        morpho = self.morphology[x]
        N = self[morpho._origin:morpho._origin + len(morpho)]
        N.morphology = morpho
        return N

    def __getattr__(self, x):
        if (x != 'morphology') and ((x in self.morphology._namedkid) or
                                    all([c in 'LR123456789' for c in x])): # subtree
            return self[x]
        else:
            return NeuronGroup.__getattr__(self, x)


class SpatialStateUpdater(StateUpdater):
    """
    State updater for compartmental models.
""" def __init__(self, eqs, clock=None): self.eqs = eqs def __len__(self): ''' Number of state variables ''' return len(self.eqs) CompartmentalNeuron = SpatialNeuron if __name__ == '__main__': from brian import * morpho = Morphology('oi24rpy1.CNG.swc') # visual L3 pyramidal cell print len(morpho), "compartments" El = -70 * mV eqs = ''' # The same equations for the whole neuron, but possibly different parameter values Im=gl*(El-v) : amp/cm**2 # distributed transmembrane current gl : siemens/cm**2 # spatially distributed conductance ''' neuron = SpatialNeuron(morphology=morpho, threshold="axon[50*um].v>0*mV", model=eqs, refractory=4 * ms, cm=0.9 * uF / cm ** 2, Ri=150 * ohm * cm) neuron.axon[0 * um:50 * um].gl = 1e-3 * siemens / cm ** 2 print sum(neuron.axon.gl) print neuron.axon[40 * um].gl #branch=neuron.axon[0*um:50*um] neuron.morphology.plot() show() brian-1.3.1/brian/experimental/stdp/000077500000000000000000000000001167451777000173705ustar00rootroot00000000000000brian-1.3.1/brian/experimental/stdp/STDP1.py000066400000000000000000000032131167451777000205740ustar00rootroot00000000000000''' Spike-timing dependent plasticity Adapted from Song, Miller and Abbott (2000) and Song and Abbott (2001) This simulation takes a long time! ''' from brian import * from time import time from eventbased_stdp import * #log_level_debug() # uncomment to see the STDP equations N = 1000 taum = 10 * ms tau_pre = 20 * ms tau_post = tau_pre Ee = 0 * mV vt = -54 * mV vr = -60 * mV El = -74 * mV taue = 5 * ms F = 15 * Hz gmax = .01 dA_pre = .01 dA_post = -dA_pre * tau_pre / tau_post * 1.05 eqs_neurons = ''' dv/dt=(ge*(Ee-vr)+El-v)/taum : volt # the synaptic current is linearized dge/dt=-ge/taue : 1 ''' input = PoissonGroup(N, rates=F) neurons = NeuronGroup(1, model=eqs_neurons, threshold=vt, reset=vr) synapses = Connection(input, neurons, 'ge', weight=rand(len(input), len(neurons)) * gmax)#, structure='dense') neurons.v = vr ## Explicit STDP rule eqs_stdp = ''' A_pre : 1 A_post : 1 last_t : second # last update ''' dA_post *= gmax dA_pre *= gmax pre = """ A_pre=A_pre*exp(-(t-last_t)/tau_pre)+dA_pre A_post=A_post*exp(-(t-last_t)/tau_post) w+=A_post last_t = t """ post = """ A_pre=A_pre*exp(-(t-last_t)/tau_pre) A_post=A_post*exp(-(t-last_t)/tau_post)+dA_post w+=A_pre last_t = t """ stdp = EventBasedSTDP(synapses, eqs=eqs_stdp, pre=pre,post=post, wmax=gmax) rate = PopulationRateMonitor(neurons) start_time = time() run(100 * second, report='text') print "Simulation time:", time() - start_time figure() subplot(311) plot(rate.times / second, rate.smooth_rate(30 * ms)) subplot(312) plot(synapses.W.todense() / gmax, '.') subplot(313) hist(synapses.W.todense() / gmax, 20) show() brian-1.3.1/brian/experimental/stdp/__init__.py000066400000000000000000000000001167451777000214670ustar00rootroot00000000000000brian-1.3.1/brian/experimental/stdp/eventbased_stdp.py000066400000000000000000000143731167451777000231240ustar00rootroot00000000000000""" Event-based STDP If this is satisfactory, we could merge it into the usual STDP class. New features: * t is in the namespace * there are synaptic variables (matrices instead of 1D arrays) If we want heterogeneous delays, we will need 3D arrays for synaptic variables! We may also want to insert the fast exp in the namespace. 
A few propositions:
* Give easy access to pre/post synaptic variables in neurongroups, for example: v_post
* Automatic handling of t_pre/t_post (time of last update of variables)
* Specification of pre/post variables in equations, or automatic detection, e.g.:
    A_pre : 1 (pre)
    A_post : 1 (post)
    A : 1 (synapse) # or other keyword?
"""
from brian import *
from brian.utils.documentation import flattened_docstring
from brian.inspection import *
from brian.equations import *
from brian.optimiser import freeze
import re

__all__ = ['EventBasedSTDP']


class EventSTDPUpdater(SpikeMonitor):
    '''
    Updates STDP variables at spike times
    Almost the same as usual one, but with t in the namespace
    '''
    def __init__(self, source, C, code, namespace, delay=0 * ms):
        '''
        source = source group
        C = connection
        vars = variable names
        M = matrix of the linear differential system
        code = code to execute for every spike
        namespace = namespace for the code
        delay = transmission delay
        '''
        super(EventSTDPUpdater, self).__init__(source, record=False, delay=delay)
        self._code = code # update code
        self._namespace = namespace # code namespace
        self.C = C

    def propagate(self, spikes):
        if len(spikes):
            self._namespace['spikes'] = spikes
            self._namespace['w'] = self.C.W
            self._namespace['t'] = self.C.source.clock._t
            exec self._code in self._namespace


class EventBasedSTDP(NetworkOperation):
    '''
    Spike-timing-dependent plasticity
    Event-based implementation
    '''
    def __init__(self, C, eqs, pre, post, wmin=0, wmax=Inf, level=0, clock=None,
                 delay_pre=None, delay_post=None):
        '''
        C: connection object
        eqs: equations (with units)
        pre: Python code for presynaptic spikes
        post: Python code for postsynaptic spikes
        wmax: maximum weight (default unlimited)
        delay_pre: presynaptic delay
        delay_post: postsynaptic delay (backward propagating spike)
        '''
        NetworkOperation.__init__(self, lambda:None, clock=clock)
        # Convert to equations object
        if isinstance(eqs, Equations):
            eqs_obj = eqs
        else:
            eqs_obj = Equations(eqs, level=level + 1)
        # handle multi-line pre, post equations and multi-statement equations separated by ;
        if '\n' in pre:
            pre = flattened_docstring(pre)
        elif ';' in pre:
            pre = '\n'.join([line.strip() for line in pre.split(';')])
        if '\n' in post:
            post = flattened_docstring(post)
        elif ';' in post:
            post = '\n'.join([line.strip() for line in post.split(';')])
        # Check units
        eqs_obj.compile_functions()
        eqs_obj.check_units()
        # Get variable names
        vars = eqs_obj._diffeq_names
        # Create namespaces for pre and post codes
        pre_namespace = namespace(pre, level=level + 1)
        post_namespace = namespace(post, level=level + 1)
        pre_namespace['clip'] = clip
        post_namespace['clip'] = clip

        # freeze pre and post (otherwise units will cause problems) [why?]
        all_vars = list(vars) + ['w', 't', 't_pre', 't_post']
        pre = '\n'.join(freeze(line.strip(), all_vars, pre_namespace) for line in pre.split('\n'))
        post = '\n'.join(freeze(line.strip(), all_vars, post_namespace) for line in post.split('\n'))

        # Create synaptic variables
        self.var = dict()
        #C.compress()
        for x in vars:
            self.var[x] = C.W.connection_matrix(copy=True)
            self.var[x][:] = 0 # reset values

        # Create code
        # Indent and loop
        pre = re.compile('^', re.M).sub(' ', pre)
        post = re.compile('^', re.M).sub(' ', post)
        pre = 'for _i in spikes:\n' + pre
        post = 'for _i in spikes:\n' + post

        # Pre/post code
        for var in vars + ['w']:
            # presynaptic variables (vectorisation)
            pre = re.sub(r'\b' + var + r'\b', var + '[_i,:]', pre)
            post = re.sub(r'\b' + var + r'\b', var + '[:,_i]', post)

        # Bounds: add one line to pre/post code (clip(w,min,max,w))
        # or actual code? (rather than compiled string)
        pre += '\n w[_i,:]=clip(w[_i,:],%(min)e,%(max)e)' % {'min': wmin, 'max': wmax}
        post += '\n w[:,_i]=clip(w[:,_i],%(min)e,%(max)e)' % {'min': wmin, 'max': wmax}

        log_debug('brian.stdp', 'PRE CODE:\n' + pre)
        log_debug('brian.stdp', 'POST CODE:\n' + post)

        # Compile code
        pre_code = compile(pre, "Presynaptic code", "exec")
        post_code = compile(post, "Postsynaptic code", "exec")

        # Delays
        connection_delay = C.delay * C.source.clock.dt
        if (delay_pre is None) and (delay_post is None): # same delays as the Connection C
            delay_pre = connection_delay
            delay_post = 0 * ms
        elif delay_pre is None:
            delay_pre = connection_delay - delay_post
            if delay_pre < 0 * ms:
                raise AttributeError, "Postsynaptic delay is too large"
        elif delay_post is None:
            delay_post = connection_delay - delay_pre
            if delay_post < 0 * ms:
                raise AttributeError, "Postsynaptic delay is too large"

        # Put variables in namespace
        for ns in (pre_namespace, post_namespace):
            for var in vars:
                ns[var] = self.var[var]

        # create forward and backward Connection objects or SpikeMonitor objects
        pre_updater = EventSTDPUpdater(C.source, C, code=pre_code,
                                       namespace=pre_namespace, delay=delay_pre)
        post_updater = EventSTDPUpdater(C.target, C, code=post_code,
                                        namespace=post_namespace, delay=delay_post)
        self.contained_objects += [pre_updater, post_updater]


if __name__ == '__main__':
    pass
brian-1.3.1/brian/experimental/stochasticconnection.py
from ..connections.base import *
from ..connections.sparsematrix import *
from ..connections.connectionvector import *
from ..connections.constructionmatrix import *
from ..connections.connectionmatrix import *
from ..connections.construction import *
from ..connections.connection import *
from ..connections.delayconnection import *
from ..connections.propagation_c_code import *
import warnings
import numpy.random

network_operation = None # we import this when we need to because of order of import issues

__all__ = [
    'StochasticConnection',
    ]


class StochasticConnection(Connection):
    '''
    Connection which implements probabilistic (unreliable) synapses

    Initialised as for a :class:`Connection`, but with the additional keyword:

    ``reliability``
        Specifies the probability that a presynaptic spike triggers a
        postsynaptic current in the postsynaptic neuron.
    '''
    @check_units(delay=second)
    def __init__(self, source, target, state=0, delay=0 * msecond, structure='dense',
                 reliability=1.0, **kwds):
        Connection.__init__(self, source, target, state=state, delay=delay,
                            structure=structure, **kwds)
        if (len(source) != len(target)):
            raise AttributeError, 'The connected (sub)groups must have the same size.'
        source.set_max_delay(delay)
        self.reliability = reliability

    def propagate(self, spikes):
        '''
        Propagates the spikes to the target.
        '''
        if len(spikes):
            rows = self.W[spikes, :] * (numpy.random.rand(len(spikes), self.W.shape[1]) <= self.reliability)
            for row in rows:
                self.target._S[self.nstate, :] += array(row, dtype=float32)

    def compress(self):
        pass
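
# --- Illustrative sketch (not part of the original module) ---
# Minimal use of StochasticConnection on a toy network, guarded so that it only
# runs when the module is executed directly. The sizes, rate, weight and
# reliability below are arbitrary placeholder values; note that source and
# target must have the same number of neurons.
if __name__ == '__main__':
    from brian import PoissonGroup, NeuronGroup, SpikeMonitor, run, Hz, mV, ms, second
    source = PoissonGroup(100, rates=20 * Hz)
    target = NeuronGroup(100, model='dv/dt=-v/(10*ms) : volt', threshold=10 * mV, reset=0 * mV)
    # Each presynaptic spike is transmitted with probability 0.5
    C = StochasticConnection(source, target, 'v', weight=0.5 * mV, reliability=0.5)
    M = SpikeMonitor(target)
    run(1 * second)
    print 'Postsynaptic spikes:', M.nspikes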
brian-1.3.1/brian/experimental/trace_analysis.py
'''
Analysis of voltage traces (current-clamp experiments or HH models).
'''
from brian import *
from numpy import *
#from brian.stdunits import mV
from brian.tools.io import *
from time import time
from scipy import optimize
from scipy import stats
from scipy.signal import lfilter

__all__ = ['fit_EIF', 'IV_curve', 'threshold_model', 'estimate_capacitance']

"""
TODO:
* A better fit_EIF function (conjugate gradient? Check also Badel et al., 2008.
  Or maybe just calculate gradient)
* Fit subthreshold kernel (least squares, see also Jolivet)
* Standard threshold methods
"""

"""
The good strategy:
* optimise for every tau for best threshold prediction
* find the tau that minimises misses
"""


def threshold_model(v, onsets=None, dt=1., tau=None):
    '''
    Fits adaptive threshold model.
    Model: tau*dvt/dt=vt0+a*[v-vi]^+
    tau can be fixed or found by optimization.
    Returns vt0, vi, a[, tau] (tau in timesteps by default)
    '''
    if onsets is None:
        onsets = spike_onsets(v)
    if tau is None:
        spikes = spike_mask(v, onsets)
        def f(tau):
            vt0, vi, a = threshold_model(v, onsets=onsets, dt=dt, tau=tau)
            theta = vt0 + a * lowpass(clip(v - vi, 0, inf), tau, dt=0.05)
            return mean(theta[-spikes] < v[-spikes]) # false alarm rate
        tau = float(optimize.fmin(lambda x: f(*x), 10., disp=0))
        return list(threshold_model(v, onsets=onsets, dt=dt, tau=tau)) + [tau]
    else:
        threshold = v[onsets]
        def f(vt0, vi, a, tau):
            return vt0 + a * lowpass(clip(v - vi, 0, inf), tau, dt=dt)
        return optimize.leastsq(lambda x: f(x[0], x[1], x[2], tau)[onsets] - threshold,
                                [-55., -65., 1.])[0]


def estimate_capacitance(i, v, dt=1, guess=100.):
    '''
    Estimates capacitance from current-clamp recording with white noise
    (see Badel et al., 2008).
    Hint for units: if v and dt are in ms, then the capacitance has the same
    relationship to F as the current to A (pA -> pF).
    '''
    dv = diff(v) / dt
    i, v = i[:-1], v[:-1]
    mask = -spike_mask(v) # subthreshold trace
    dv, v, i = dv[mask], v[mask], i[mask]
    return optimize.fmin(lambda C: var(dv - i / C), guess, disp=0)[0]


def IV_curve(i, v, dt=1, C=None, bins=None, T=0):
    '''
    Dynamic I-V curve (see Badel et al., 2008)
    E[C*dV/dt-I]=f(V)
    T: time after spike onset to include in estimation.
    bins: bins for v (default: 20 bins between min and max).
    '''
    C = C or estimate_capacitance(i, v, dt)
    if bins is None:
        bins = linspace(min(v), max(v), 20)
    dv = diff(v) / dt
    v, i = v[:-1], i[:-1]
    mask = -spike_mask(v, spike_onsets(v) + T) # subthreshold trace
    dv, v, i = dv[mask], v[mask], i[mask]
    fv = i - C * dv # intrinsic current
    return array([mean(fv[(v >= vmin) & (v < vmax)]) for vmin, vmax in zip(bins[:-1], bins[1:])])


def fit_EIF(i, v, dt=1, C=None, T=0):
    '''
    Fits the exponential model of spike initiation in the phase plane (v,dv).
    T: time after spike onset to include in estimation.
    Returns gl, El, deltat, vt
    (leak conductance, leak reversal potential, slope factor, threshold)

    The result does not seem very reliable (deltat depends critically on T).
Hint for units: if v is in mV, dt in ms, i in pA, then gl is in nS ''' C = C or estimate_capacitance(i, v, dt) dv = diff(v) / dt v, i = v[:-1], i[:-1] mask = -spike_mask(v, spike_onsets(v) + T) # subthreshold trace dv, v, i = dv[mask], v[mask], i[mask] f = lambda gl, El, deltat, vt:C * dv - i - (gl * (El - v) + gl * deltat * exp((v - vt) / deltat)) x, _ = optimize.leastsq(lambda x:f(*x), [50., -60., 3., -55.]) return x if __name__ == '__main__': filename = r"D:\My Dropbox\Neuron\Hu\recordings_Ifluct\I0_1_std_1_tau_10_sampling_20\vs.dat" #filename=r'D:\Anna\2010_03_0020_random_noise_200pA.atf' #filename2=r'D:\Anna\input_file.atf' # in pF from pylab import * #M=loadtxt(filename) #t,vs=M[:,0],M[:,1] t, vs = read_neuron_dat(filename) #t,vs=read_atf(filename) #print array(spike_duration(vs,full=True))*.05 #vt0,vi,a,tau=threshold_model(vs,dt=0.05) #print vt0,vi,a,tau #theta=vt0+a*lowpass(clip(vs-vi,0,inf),tau,dt=0.05) spikes = spike_onsets(vs) print std(vs[spikes]) #spikes2=spike_onsets_dv2(vs) #spikes3=spike_onsets_dv3(vs) plot(t, vs, 'k') #plot(t,theta,'b') plot(t[spikes], vs[spikes], '.r') #plot(t[spikes2],vs[spikes2],'.g') #plot(t[spikes3],vs[spikes3],'.k') show() brian-1.3.1/brian/globalprefs.py000066400000000000000000000067261167451777000166060ustar00rootroot00000000000000# ---------------------------------------------------------------------------------- # Copyright ENS, INRIA, CNRS # Contributors: Romain Brette (brette@di.ens.fr) and Dan Goodman (goodman@di.ens.fr) # # Brian is a computer program whose purpose is to simulate models # of biological neural networks. # # This software is governed by the CeCILL license under French law and # abiding by the rules of distribution of free software. You can use, # modify and/ or redistribute the software under the terms of the CeCILL # license as circulated by CEA, CNRS and INRIA at the following URL # "http://www.cecill.info". # # As a counterpart to the access to the source code and rights to copy, # modify and redistribute granted by the license, users are provided only # with a limited warranty and the software's author, the holder of the # economic rights, and the successive licensors have only limited # liability. # # In this respect, the user's attention is drawn to the risks associated # with loading, using, modifying and/or developing or reproducing the # software by the user in light of its specific status of free software, # that may mean that it is complicated to manipulate, and that also # therefore means that it is reserved for developers and experienced # professionals having in-depth computer knowledge. Users are therefore # encouraged to load and test the software's suitability as regards their # requirements in conditions enabling the security of their systems and/or # data to be ensured and, more generally, to use and operate it in the # same conditions as regards security. # # The fact that you are presently reading this means that you have had # knowledge of the CeCILL license and that you accept its terms. 
# ---------------------------------------------------------------------------------- # """ Global preferences for Brian ---------------------------- The following global preferences have been defined: """ # the global preferences referred to above are automatically # added to the docstring when they are defined via # define_global_preference __docformat__ = "restructuredtext en" __all__ = ['set_global_preferences', 'get_global_preference', 'exists_global_preference', 'define_global_preference'] import sys from utils.documentation import * globalprefdocs = "" class BrianGlobalPreferences: pass g_prefs = BrianGlobalPreferences() def set_global_preferences(**kwds): """Set global preferences for Brian Usage:: ``set_global_preferences(...)`` where ... is a list of keyword assignments. """ for k, v in kwds.iteritems(): g_prefs.__dict__[k] = v def get_global_preference(k): """Get the value of the named global preference """ return g_prefs.__dict__[k] def exists_global_preference(k): """Determine if named global preference exists """ return hasattr(g_prefs, k) def define_global_preference(k, defaultvaluedesc, desc): """Define documentation for a new global preference Arguments: ``k`` The name of the preference (a string) ``defaultvaluedesc`` A string description of the default value ``desc`` A multiline description of the preference in docstring format """ global globalprefdocs globalprefdocs += '``' + k + ' = ' + defaultvaluedesc + '``\n' globalprefdocs += flattened_docstring(desc, numtabs=1) brian-1.3.1/brian/group.py000066400000000000000000000146471167451777000154430ustar00rootroot00000000000000from brian import * __all__ = ['Group', 'MultiGroup'] class Group(object): ''' Generic fixed-length variable values container class Used internally by Brian to store values associated with variables. Primary use is for NeuronGroup to store a state matrix of values associated to the different variables. Each differential equation in ``equations`` is allocated an array of length ``N`` in the attribute ``_S`` of the object. Unit consistency checking is performed. ''' def __init__(self, equations, N, level=0, unit_checking=True): if isinstance(equations, str): equations = Equations(equations, level=level + 1) equations.prepare(check_units=unit_checking) var_names = equations._diffeq_names M = len(var_names) self._S = zeros((M, N)) self.staticvars = dict([(name, equations._function[name]) for name in equations._eq_names]) self.var_index = dict(zip(var_names, range(M))) self.var_index.update(zip(range(M), range(M))) # name integer i -> state variable i for var1, var2 in equations._alias.iteritems(): self.var_index[var1] = self.var_index[var2] def get_var_index(self, name): ''' Returns the index of state variable "name". ''' return self.var_index[name] def __len__(self): ''' Number of neurons in the group. 
''' return self._S.shape[1] def num_states(self): return self._S.shape[0] def state_(self, name): ''' Gets the state variable named "name" as a reference to the underlying array ''' if isinstance(name, int): return self._S[name] if name in self.staticvars: f = self.staticvars[name] return f(*[self.state_(var) for var in f.func_code.co_varnames]) i = self.var_index[name] return self._S[i] state = state_ def __getattr__(self, name): if name == 'var_index': # this seems mad - the reason is that getattr is only called if the thing hasn't # been found using the standard methods of finding attributes, which for var_index # should have worked, this is important because the next line looks for var_index # and if we haven't got a var_index we don't want to get stuck in an infinite # loop raise AttributeError if not hasattr(self, 'var_index'): # only provide lookup of variable names if we have some variable names, i.e. # if the var_index attribute exists raise AttributeError try: return self.state(name) except KeyError: if len(name) and name[-1] == '_': try: origname = name[:-1] return self.state_(origname) except KeyError: raise AttributeError raise AttributeError def __setattr__(self, name, val): origname = name if len(name) and name[-1] == '_': origname = name[:-1] if not hasattr(self, 'var_index') or (name not in self.var_index and origname not in self.var_index): object.__setattr__(self, name, val) else: if name in self.var_index: self.state(name)[:] = val else: self.state_(origname)[:] = val class MultiGroup(object): ''' Generic variable-length variable values container class Used internally by Brian to store values associated with variables. See :class:`Group` for more details. This class takes a list of groups or of ``(equations, N)`` pairs. The ``state()`` and ``state_()`` methods work as if it were a single :class:`Group` but return arrays of different lengths from different state matrix structures depending on where the state value is defined. Similarly for attribute access. 
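
    For example, mirroring the test code at the bottom of this module (where
    ``eqs_A`` and ``eqs_B`` are the :class:`Equations` defined there)::

        g = MultiGroup(((eqs_A, 10), (eqs_B, 5)))
        g.V    # length-10 array from the first group
        g.Vb   # length-5 array from the second group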
''' def __init__(self, groups, level=0): newgroups = [] self.var_index = dict() for g in groups: if not isinstance(g, Group): eqs, N = g g = Group(eqs, N, level=level + 1) newgroups.append(g) self.var_index.update(dict((k, g) for k in g.var_index.keys())) self.var_index.update(dict((k, g) for k in g.staticvars.keys())) self.groups = newgroups def state_(self, name): return self.var_index[name].state_(name) state = state_ def __getattr__(self, name): try: return self.state(name) except KeyError: if len(name) and name[-1] == '_': try: origname = name[:-1] return self.state_(origname) except KeyError: raise AttributeError raise AttributeError def __setattr__(self, name, val): origname = name if len(name) and name[-1] == '_': origname = name[:-1] if not hasattr(self, 'var_index') or (name not in self.var_index and origname not in self.var_index): object.__setattr__(self, name, val) else: if name in self.var_index: self.state(name)[:] = val else: self.state_(origname)[:] = val if __name__ == '__main__': if True: eqs_A = Equations(''' dV/dt = -V/(1*second) : volt W = V*V : volt2 x = V ''') eqs_B = Equations(''' dVb/dt = -Vb/(1*second) : volt Wb = Vb*Vb : volt2 xb = Vb ''') g = MultiGroup(((eqs_A, 10), (eqs_B, 5))) for h in g.groups: h._S[:] = rand(*h._S.shape) print g.V print g.V ** 2 print g.W print g.x print g.Vb print g.Vb ** 2 print g.Wb print g.xb if False: eqs = Equations(''' dV/dt = -V/(1*second) : volt W = V*V : volt2 x = V ''') g = Group(eqs, 10) print g._S g._S[:] = rand(*g._S.shape) print g._S print g.V print g.V ** 2 print g.W print g.x g.V = -1 print g.V print g.W print g.x brian-1.3.1/brian/hears/000077500000000000000000000000001167451777000150235ustar00rootroot00000000000000brian-1.3.1/brian/hears/__init__.py000066400000000000000000000002631167451777000171350ustar00rootroot00000000000000from prefs import * from bufferable import * from sounds import * from onlinesounds import * from erb import * from filtering import * from hrtf import * from db import * brian-1.3.1/brian/hears/bufferable.py000066400000000000000000000143041167451777000174740ustar00rootroot00000000000000''' The Bufferable class serves as a base for all the other Brian.hears classes ''' from numpy import zeros, empty, hstack, vstack, arange, diff class Bufferable(object): ''' Base class for Brian.hears classes Defines a buffering interface of two methods: ``buffer_init()`` Initialise the buffer, should set the time pointer to zero and do any other initialisation that the object needs. ``buffer_fetch(start, end)`` Fetch the next samples ``start:end`` from the buffer. Value returned should be an array of shape ``(end-start, nchannels)``. Can throw an ``IndexError`` exception if it is outside the possible range. In addition, bufferable objects should define attributes: ``nchannels`` The number of channels in the buffer. ``samplerate`` The sample rate in Hz. By default, the class will define a default buffering mechanism which can easily be extended. To extend the default buffering mechanism, simply implement the method: ``buffer_fetch_next(samples)`` Returns the next ``samples`` from the buffer. The default methods for ``buffer_init()`` and ``buffer_fetch()`` will define a buffer cache which will get larger if it needs to to accommodate a ``buffer_fetch(start, end)`` where ``end-start`` is larger than the current cache. 
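
    For example, a hypothetical single-channel source producing silence
    (assuming ``Hz`` and ``zeros`` from the Brian/NumPy namespace) could be
    sketched as::

        class Silence(Bufferable):
            nchannels = 1
            samplerate = 44100*Hz
            def buffer_fetch_next(self, samples):
                return zeros((samples, self.nchannels))
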
If the filterbank has a ``minimum_buffer_size`` attribute, the internal cache will always have at least this size, and the ``buffer_fetch_next(samples)`` method will always get called with ``samples>=minimum_buffer_size``. This can be useful to ensure that the buffering is done efficiently internally, even if the user request buffered chunks that are too small. If the filterbank has a ``maximum_buffer_size`` attribute then ``buffer_fetch_next(samples)`` will always be called with ``samples<=maximum_buffer_size`` - this can be useful for either memory consumption reasons or for implementing time varying filters that need to update on a shorter time window than the overall buffer size. The following attributes will automatically be maintained: ``self.cached_buffer_start``, ``self.cached_buffer_end`` The start and end of the cached segment of the buffer ``self.cached_buffer_output`` An array of shape ``((cached_buffer_end-cached_buffer_start, nchannels)`` with the current cached segment of the buffer. Note that this array can change size. ''' def buffer_fetch(self, start, end): if not hasattr(self, 'cached_buffer_start'): self.buffer_init() # optimisations for the most typical cases, which are when start:end is # the current cached segment, or when start:end is the next cached # segment of the same size as the current one if start==self.cached_buffer_start and end==self.cached_buffer_end: return self.cached_buffer_output if start==self.cached_buffer_end and end-start==self.cached_buffer_output.shape[0]: self.cached_buffer_output = self._buffer_fetch_next(end-start) self.cached_buffer_start = start self.cached_buffer_end = end return self.cached_buffer_output # handle bad inputs if endfloat(other) def __ge__(self, other): if not isinstance(other, dB_type): raise dB_error('Can only compare with another dB') return float(self)>=float(other) def __eq__(self, other): if not isinstance(other, dB_type): raise dB_error('Can only compare with another dB') return float(self)==float(other) def __ne__(self, other): if not isinstance(other, dB_type): raise dB_error('Can only compare with another dB') return float(self)!=float(other) def __reduce__(self): return (dB_type, (float(self),)) def gain(self): return 10**(float(self)/20.0) dB = dB_type(1.0) def gain(level): ''' Returns the gain factor associated to a level in dB. The formula is: gain = 10**(level/20.0) ''' if not isinstance(level, dB_type): raise dB_error('Level must be in dB') return level.gain() brian-1.3.1/brian/hears/erb.py000066400000000000000000000015341167451777000161500ustar00rootroot00000000000000''' Utility functions adapted from MAP ''' from brian import * __all__ = ['erbspace'] @check_units(low=Hz, high=Hz) def erbspace(low, high, N, earQ=9.26449, minBW=24.7, order=1): ''' Returns the centre frequencies on an ERB scale. ``low``, ``high`` Lower and upper frequencies ``N`` Number of channels ``earQ=9.26449``, ``minBW=24.7``, ``order=1`` Default Glasberg and Moore parameters. 
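
    The ``N`` centre frequencies are evenly spaced on this ERB scale between
    ``low`` and ``high``. For example (as in the test at the bottom of this
    module)::

        cf = erbspace(20*Hz, 20*kHz, 3000)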
''' low = float(low) high = float(high) cf = -(earQ * minBW) + exp((arange(N)) * (-log(high + earQ * minBW) + \ log(low + earQ * minBW)) / (N-1)) * (high + earQ * minBW) cf = cf[::-1] return cf # Testing if __name__ == '__main__': cf = erbspace(20 * Hz, 20 * kHz, 3000) print amin(cf), amax(cf) print diff(cf)[-5:] plot(cf) show() brian-1.3.1/brian/hears/filtering/000077500000000000000000000000001167451777000170065ustar00rootroot00000000000000brian-1.3.1/brian/hears/filtering/__init__.py000066400000000000000000000004401167451777000211150ustar00rootroot00000000000000from filterbank import * from linearfilterbank import * from firfilterbank import * from filterbanklibrary import * from filterbankgroup import * from drnl import DRNL from dcgc import DCGC from fractionaldelay import * #from zilany import ZILANY from tan_carney import TanCarneybrian-1.3.1/brian/hears/filtering/dcgc.py000066400000000000000000000165001167451777000202620ustar00rootroot00000000000000from brian import * from filterbank import Filterbank,FunctionFilterbank,ControlFilterbank,CombinedFilterbank,RestructureFilterbank from filterbanklibrary import * from linearfilterbank import * __all__ = ['DCGC'] def set_parameters(cf,param): parameters=dict() parameters['b1'] = 1.81 parameters['c1'] = -2.96 parameters['b2'] = 2.17 parameters['c2'] = 2.2 parameters['decay_tcst'] = .5*ms parameters['lev_weight'] = .5 parameters['level_ref'] = 50. parameters['level_pwr1'] = 1.5 parameters['level_pwr2'] = .5 parameters['RMStoSPL'] = 30. parameters['frat0'] = .2330 parameters['frat1'] = .005 parameters['lct_ERB'] = 1.5 #value of the shift in ERB frequencies parameters['frat_control'] = 1.08 parameters['order_gc']=4 parameters['ERBrate']= 21.4*log10(4.37*cf/1000+1) parameters['ERBwidth']= 24.7*(4.37*cf/1000 + 1) if param: if not isinstance(param, dict): raise Error('given parameters must be a dict') for key in param.keys(): if not parameters.has_key(key): raise Exception(key + ' is invalid key entry for given parameters') parameters[key] = param[key] return parameters #defition of the controler class class AsymCompUpdate: def __init__(self,target,samplerate,fp1,param): fp1=atleast_1d(fp1) self.iteration=0 self.target=target self.samplerate=samplerate self.fp1=fp1 self.exp_deca_val = exp(-1/(param['decay_tcst'] *samplerate)*log(2)) self.level_min = 10**(- param['RMStoSPL']/20) self.level_ref = 10**(( param['level_ref'] - param['RMStoSPL'])/20) self.b=param['b2'] self.c=param['c2'] self.lev_weight=param['lev_weight'] self.level_ref=param['level_ref'] self.level_pwr1=param['level_pwr1'] self.level_pwr2=param['level_pwr2'] self.RMStoSPL=param['RMStoSPL'] self.frat0=param['frat0'] self.frat1=param['frat1'] self.level1_prev=-100 self.level2_prev=-100 self.p0=2 self.p1=1.7818*(1-0.0791*self.b)*(1-0.1655*abs(self.c)) self.p2=0.5689*(1-0.1620*self.b)*(1-0.0857*abs(self.c)) self.p3=0.2523*(1-0.0244*self.b)*(1+0.0574*abs(self.c)) self.p4=1.0724 def __call__(self,*input): value1=input[0][-1,:] value2=input[1][-1,:] level1 = maximum(maximum(value1,0),self.level1_prev*self.exp_deca_val) level2 = maximum(maximum(value2,0),self.level2_prev*self.exp_deca_val) self.level1_prev=level1 self.level2_prev=level2 level_total=self.lev_weight*self.level_ref*(level1/self.level_ref)**self.level_pwr1+(1-self.lev_weight)*self.level_ref*(level2/self.level_ref)**self.level_pwr2 level_dB=20*log10(maximum(level_total,self.level_min))+self.RMStoSPL frat = self.frat0 + self.frat1*level_dB fr2 = self.fp1*frat self.iteration+=1 self.target.filt_b, 
self.target.filt_a=asymmetric_compensation_coeffs(self.samplerate,fr2,self.target.filt_b,self.target.filt_a,self.b,self.c,self.p0,self.p1,self.p2,self.p3,self.p4) class DCGC(CombinedFilterbank): ''' The compressive gammachirp auditory filter as described in Irino, T. and Patterson R., "A compressive gammachirp auditory filter for both physiological and psychophysical data", JASA 2001. Technical implementation details and notation can be found in Irino, T. and Patterson R., "A Dynamic Compressive Gammachirp Auditory Filterbank", IEEE Trans Audio Speech Lang Processing. The model consists of a control pathway and a signal pathway in parallel. The control pathway consists of a bank of bandpass filters followed by a bank of highpass filters (this chain yields a bank of gammachirp filters). The signal pathway consist of a bank of fix bandpass filters followed by a bank of highpass filters with variable cutoff frequencies (this chain yields a bank of gammachirp filters with a level-dependent bandwidth). The highpass filters of the signal pathway are controlled by the output levels of the two stages of the control pathway. Initialised with arguments: ``source`` Source of the cochlear model. ``cf`` List or array of center frequencies. ``update_interval`` Interval in samples controlling how often the band pass filter of the signal pathway is updated. Smaller values are more accurate, but give longer computation times. ``param`` Dictionary used to overwrite the default parameters given in the original paper. The possible parameters to change and their default values (see Irino, T. and Patterson R., "A Dynamic Compressive Gammachirp Auditory Filterbank", IEEE Trans Audio Speech Lang Processing) are:: param['b1'] = 1.81 param['c1'] = -2.96 param['b2'] = 2.17 param['c2'] = 2.2 param['decay_tcst'] = .5*ms param['lev_weight'] = .5 param['level_ref'] = 50. param['level_pwr1'] = 1.5 param['level_pwr2'] = .5 param['RMStoSPL'] = 30. param['frat0'] = .2330 param['frat1'] = .005 param['lct_ERB'] = 1.5 #value of the shift in ERB frequencies param['frat_control'] = 1.08 param['order_gc']=4 param['ERBrate']= 21.4*log10(4.37*cf/1000+1) # cf is the center frequency param['ERBwidth']= 24.7*(4.37*cf/1000 + 1) ''' def __init__(self, source,cf,update_interval,param={}): CombinedFilterbank.__init__(self, source) source = self.get_modified_source() parameters=set_parameters(cf,param) ERBspace = mean(diff(parameters['ERBrate'])) cf=atleast_1d(cf) #bank of passive gammachirp filters. As the control path uses the same passive filterbank than the signal path (buth shifted in frequency) #this filterbanl is used by both pathway. 
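        #(the frequency shift is lct_ERB ERBs, applied through indch1_control below)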
pGc=LogGammachirp(source,cf,b=parameters['b1'], c=parameters['c1']) # self.gc.filt_b=pGc.filt_b # self.gc.filt_a=pGc.filt_a fp1 = cf + parameters['c1']*parameters['ERBwidth']*parameters['b1']/parameters['order_gc'] nbr_cf=len(cf) #### Control Path #### n_ch_shift = round(parameters['lct_ERB']/ERBspace); #value of the shift in channels indch1_control = minimum(maximum(1, arange(1,nbr_cf+1)+n_ch_shift),nbr_cf).astype(int)-1 fp1_control = fp1[indch1_control] pGc_control=RestructureFilterbank(pGc,indexmapping=indch1_control) fr2_control = parameters['frat_control']*fp1_control asym_comp_control=AsymmetricCompensation(pGc_control, fr2_control,b=parameters['b2'], c=parameters['c2']) #### Signal Path #### fr1=fp1*parameters['frat0'] signal_path= AsymmetricCompensation(pGc, fr1,b=parameters['b2'], c=parameters['c2']) # self.asym_comp.filt_b=signal_path.filt_b # self.asym_comp.filt_a=signal_path.filt_a #### Controler #### updater = AsymCompUpdate(signal_path,source.samplerate,fp1,parameters) #the updater control = ControlFilterbank(signal_path, [pGc_control,asym_comp_control], signal_path, updater, update_interval) self.set_output(control)brian-1.3.1/brian/hears/filtering/drnl.py000077500000000000000000000176321167451777000203330ustar00rootroot00000000000000from brian import * from filterbank import Filterbank, FunctionFilterbank, CombinedFilterbank from filterbanklibrary import * __all__ = ['DRNL'] def set_parameters(cf,type,param): parameters=dict() parameters['stape_scale']=0.00014 parameters['order_linear']=3 parameters['order_nonlinear']=3 if type=='guinea pig': parameters['cf_lin_p0']=0.339 parameters['cf_lin_m']=0.339 parameters['bw_lin_p0']=1.3 parameters['bw_lin_m']=0.53 parameters['cf_nl_p0']=0 parameters['cf_nl_m']=1. parameters['bw_nl_p0']=0.8 parameters['bw_nl_m']=0.58 parameters['a_p0']=1.87 parameters['a_m']=0.45 parameters['b_p0']=-5.65 parameters['b_m']=0.875 parameters['c_p0']=-1. parameters['c_m']=0 parameters['g_p0']=5.68 parameters['g_m']=-0.97 parameters['lp_lin_cutoff_p0']=0.339 parameters['lp_lin_cutoff_m']=0.339 parameters['lp_nl_cutoff_p0']=0 parameters['lp_nl_cutoff_m']=1. elif type=='human': parameters['cf_lin_p0']=-0.067 parameters['cf_lin_m']=1.016 parameters['bw_lin_p0']=0.037 parameters['bw_lin_m']=0.785 parameters['cf_nl_p0']=-0.052 parameters['cf_nl_m']=1.016 parameters['bw_nl_p0']=-0.031 parameters['bw_nl_m']=0.774 parameters['a_p0']=1.402 parameters['a_m']=0.819 parameters['b_p0']=1.619 parameters['b_m']=-0.818 parameters['c_p0']=-0.602 parameters['c_m']=0 parameters['g_p0']=4.2 parameters['g_m']=0.48 parameters['lp_lin_cutoff_p0']=-0.067 parameters['lp_lin_cutoff_m']=1.016 parameters['lp_nl_cutoff_p0']=-0.052 parameters['lp_nl_cutoff_m']=1.016 if param: if not isinstance(param, dict): raise Error('given parameters must be a dict') for key in param.keys(): if not parameters.has_key(key): raise Exception(key + ' is invalid key entry for given parameters') parameters[key] = param[key] return parameters class DRNL(CombinedFilterbank): r''' Implementation of the dual resonance nonlinear (DRNL) filter as described in Lopez-Paveda, E. and Meddis, R., "A human nonlinear cochlear filterbank", JASA 2001. The entire pathway consists of the sum of a linear and a nonlinear pathway. The linear path consists of a bank of bandpass filters (second order gammatone), a low pass function, and a gain/attenuation factor, g, in a cascade. 
The nonlinear path is a cascade consisting of a bank of gammatone filters, a compression function, a second bank of gammatone filters, and a low pass function, in that order. Initialised with arguments: ``source`` Source of the cochlear model. ``cf`` List or array of center frequencies. ``type`` defines the parameters set corresponding to a certain fit. It can be either: ``type='human'`` The parameters come from Lopez-Paveda, E. and Meddis, R.., "A human nonlinear cochlear filterbank", JASA 2001. ``type ='guinea pig'`` The parameters come from Summer et al., "A nonlinear filter-bank model of the guinea-pig cochlear nerve: Rate responses", JASA 2003. ``param`` Dictionary used to overwrite the default parameters given in the original papers. The possible parameters to change and their default values for humans (see Lopez-Paveda, E. and Meddis, R.,"A human nonlinear cochlear filterbank", JASA 2001. for notation) are:: param['stape_scale']=0.00014 param['order_linear']=3 param['order_nonlinear']=3 from there on the parameters are given in the form :math:`x=10^{\mathrm{p0}+m\log_{10}(\mathrm{cf})}` where ``cf`` is the center frequency:: param['cf_lin_p0']=-0.067 param['cf_lin_m']=1.016 param['bw_lin_p0']=0.037 param['bw_lin_m']=0.785 param['cf_nl_p0']=-0.052 param['cf_nl_m']=1.016 param['bw_nl_p0']=-0.031 param['bw_nl_m']=0.774 param['a_p0']=1.402 param['a_m']=0.819 param['b_p0']=1.619 param['b_m']=-0.818 param['c_p0']=-0.602 param['c_m']=0 param['g_p0']=4.2 param['g_m']=0.48 param['lp_lin_cutoff_p0']=-0.067 param['lp_lin_cutoff_m']=1.016 param['lp_nl_cutoff_p0']=-0.052 param['lp_nl_cutoff_m']=1.016 ''' def __init__(self, source, cf, type='human', param={}): CombinedFilterbank.__init__(self, source) source = self.get_modified_source() cf = atleast_1d(cf) nbr_cf=len(cf) parameters=set_parameters(cf,type,param) #conversion to stape velocity (which are the units needed for the further centres) source=source*parameters['stape_scale'] #### Linear Pathway #### #bandpass filter (second order gammatone filter) cf_linear=10**(parameters['cf_lin_p0']+parameters['cf_lin_m']*log10(cf)) bandwidth_linear=10**(parameters['bw_lin_p0']+parameters['bw_lin_m']*log10(cf)) gammatone=ApproximateGammatone(source, cf_linear, bandwidth_linear, order=parameters['order_linear']) #linear gain g=10**(parameters['g_p0']+parameters['g_m']*log10(cf)) func_gain=lambda x:g*x gain= FunctionFilterbank(gammatone,func_gain) #low pass filter(cascade of 4 second order lowpass butterworth filters) cutoff_frequencies_linear=10**(parameters['lp_lin_cutoff_p0']+parameters['lp_lin_cutoff_m']*log10(cf)) order_lowpass_linear=2 lp_l=LowPass(gain,cutoff_frequencies_linear) lowpass_linear=Cascade(gain,lp_l,4) #### Nonlinear Pathway #### #bandpass filter (third order gammatone filters) cf_nonlinear=10**(parameters['cf_nl_p0']+parameters['cf_nl_m']*log10(cf)) bandwidth_nonlinear=10**(parameters['bw_nl_p0']+parameters['bw_nl_m']*log10(cf)) bandpass_nonlinear1=ApproximateGammatone(source, cf_nonlinear, bandwidth_nonlinear, order=parameters['order_nonlinear']) #compression (linear at low level, compress at high level) a=10**(parameters['a_p0']+parameters['a_m']*log10(cf)) #linear gain b=10**(parameters['b_p0']+parameters['b_m']*log10(cf)) v=10**(parameters['c_p0']+parameters['c_m']*log10(cf))#compression exponent func_compression=lambda x:sign(x)*minimum(a*abs(x),b*abs(x)**v) compression=FunctionFilterbank(bandpass_nonlinear1, func_compression) #bandpass filter (third order gammatone filters) bandpass_nonlinear2=ApproximateGammatone(compression, 
cf_nonlinear, bandwidth_nonlinear, order=parameters['order_nonlinear']) #low pass filter cutoff_frequencies_nonlinear=10**(parameters['lp_nl_cutoff_p0']+parameters['lp_nl_cutoff_m']*log10(cf)) order_lowpass_nonlinear=2 lp_nl=LowPass(bandpass_nonlinear2,cutoff_frequencies_nonlinear) lowpass_nonlinear=Cascade(bandpass_nonlinear2,lp_nl,3) #adding the two pathways drnl_filter=lowpass_linear+lowpass_nonlinear self.set_output(drnl_filter) if __name__ == '__main__': from brian import * set_global_preferences(usenewbrianhears=True, useweave=False) from brian.hears import * dBlevel=60*dB # dB level in rms dB SPL sound=Sound.load('/home/bertrand/Data/Toolboxes/AIM2006-1.40/Sounds/aimmat.wav') samplerate=sound.samplerate sound=sound.atlevel(dBlevel) simulation_duration=len(sound)/samplerate nbr_center_frequencies=50 center_frequencies=log_space(100*Hz, 1000*Hz, nbr_center_frequencies) dnrl_filter=DRNL(sound,center_frequencies) dnrl_filter.buffer_init() dnrl=dnrl_filter.buffer_fetch(0, len(sound))brian-1.3.1/brian/hears/filtering/filterbank.py000066400000000000000000001013611167451777000215030ustar00rootroot00000000000000from brian import * from scipy import signal, weave, random from ..bufferable import Bufferable from operator import isSequenceType from __builtin__ import all __all__ = ['Filterbank', 'RestructureFilterbank', 'Repeat', 'Tile', 'Join', 'Interleave', 'FunctionFilterbank', 'SumFilterbank', 'DoNothingFilterbank', 'ControlFilterbank', 'CombinedFilterbank', ] class Filterbank(Bufferable): ''' Generalised filterbank object **Documentation common to all filterbanks** Filterbanks all share a few basic attributes: .. autoattribute:: source .. attribute:: nchannels The number of channels. .. attribute:: samplerate The sample rate. .. autoattribute:: duration To process the output of a filterbank, the following method can be used: .. automethod:: process Alternatively, the buffer interface can be used, which is described in more detail below. Filterbank also defines arithmetical operations for +, -, ``*``, / where the other operand can be a filterbank or scalar. **Details on the class** This class is a base class not designed to be instantiated. A Filterbank object should define the interface of :class:`Bufferable`, as well as defining a ``source`` attribute. This is normally a :class:`Bufferable` object, but could be an iterable of sources (for example, for filterbanks that mix or add multiple inputs). The ``buffer_fetch_next(samples)`` method has a default implementation that fetches the next input, and calls the ``buffer_apply(input)`` method on it, which can be overridden by a derived class. This is typically the easiest way to implement a new filterbank. Filterbanks with multiple sources will need to override this default implementation. There is a default ``__init__`` method that can be called by a derived class that sets the ``source``, ``nchannels`` and ``samplerate`` from that of the ``source`` object. For multiple sources, the default implementation will check that each source has the same number of channels and samplerate and will raise an error if not. There is a default ``buffer_init()`` method that calls ``buffer_init()`` on the ``source`` (or list of sources). 
**Example of deriving a class** The following class takes N input channels and sums them to a single output channel:: class AccumulateFilterbank(Filterbank): def __init__(self, source): Filterbank.__init__(self, source) self.nchannels = 1 def buffer_apply(self, input): return reshape(sum(input, axis=1), (input.shape[0], 1)) Note that the default ``Filterbank.__init__`` will set the number of channels equal to the number of source channels, but we want to change it to have a single output channel. We use the ``buffer_apply`` method which automatically handles the efficient cacheing of the buffer for us. The method receives the array ``input`` which has shape ``(bufsize, nchannels)`` and sums over the channels (``axis=1``). It's important to reshape the output so that it has shape ``(bufsize, outputnchannels)`` so that it can be used as the input to subsequent filterbanks. ''' def __init__(self, source): if isinstance(source, Bufferable): self.source = source self.nchannels = source.nchannels self.samplerate = source.samplerate else: self.nchannels = source[0].nchannels self.samplerate = source[0].samplerate for s in source: if s.nchannels!=self.nchannels: raise ValueError('All sources must have the same number of channels.') if int(s.samplerate)!=int(self.samplerate): raise ValueError('All sources must have the same samplerate.') self.source = source def change_source(self, source): if not hasattr(self, '_source') or self._source is None: self._source = source return if isinstance(source, tuple): for s in source: if int(s.samplerate)!=int(self.samplerate): raise ValueError('source samplerate is wrong.') for news, olds in zip(source, self._source): if news.nchannels!=olds.nchannels: raise ValueError('New sources have different numbers of channels to old sources.') self._source = source return if source.nchannels==self.nchannels: self._source = source return if source.nchannels==1: self._source = Repeat(source, self.nchannels) else: raise ValueError('New source must have the same number of channels as old source.') source = property(fget=lambda self:self._source, fset=lambda self, source:self.change_source(source), doc=''' The source of the filterbank, a :class:`Bufferable` object, e.g. another :class:`Filterbank` or a :class:`Sound`. It can also be a tuple of sources. Can be changed after the object is created, although note that for some filterbanks this may cause problems if they do make assumptions about the input based on the first source object they were passed. If this is causing problems, you can insert a dummy filterbank (:class:`DoNothingFilterbank`) which is guaranteed to work if you change the source. ''') def get_duration(self): if hasattr(self, '_duration'): return self._duration else: source = self.source if isinstance(source, Bufferable): source = [source] try: durations = [s.duration for s in source] duration = max(durations) return duration except KeyError: raise KeyError('Cannot compute duration from sources.') def set_duration(self, duration): self._duration = duration duration = property(fget=get_duration, fset=set_duration, doc=''' The duration of the filterbank. If it is not specified by the user, it is computed by finding the maximum of its source durations. If these are not specified a :class:`KeyError` will be raised (for example, using :class:`OnlineSound` as a source). ''') def process(self, func=None, duration=None, buffersize=32): ''' Returns the output of the filterbank for the given duration. 
``func`` If a function is specified, it should be a function of one or two arguments that will be called on each filtered buffered segment (of shape ``(buffersize, nchannels)`` in order. If the function has one argument, the argument should be buffered segment. If it has two arguments, the second argument is the value returned by the previous application of the function (or 0 for the first application). In this case, the method will return the final value returned by the function. See example below. ``duration=None`` The length of time (in seconds) or number of samples to process. If no ``func`` is specified, the method will return an array of shape ``(duration, nchannels)`` with the filtered outputs. Note that in many cases, this will be too large to fit in memory, in which you will want to process the filtered outputs online, by providing a function ``func`` (see example below). If no duration is specified, the maximum duration of the inputs to the filterbank will be used, or an error raised if they do not have durations (e.g. in the case of :class:`OnlineSound`). ``buffersize=32`` The size of the buffered segments to fetch, as a length of time or number of samples. 32 samples typically gives reasonably good performance. For example, to compute the RMS of each channel in a filterbank, you would do:: def sum_of_squares(input, running_sum_of_squares): return running_sum_of_squares+sum(input**2, axis=0) rms = sqrt(fb.process(sum_of_squares)/nsamples) ''' if duration is None: duration = self.duration if not isinstance(duration, int): duration = int(duration*self.samplerate) if not isinstance(buffersize, int): buffersize = int(buffersize*self.samplerate) self.buffer_init() endpoints = hstack((arange(0, duration, buffersize), duration)) zendpoints = zip(endpoints[:-1], endpoints[1:]) #sizes = diff(endpoints) if func is None: return vstack(tuple(self.buffer_fetch(start, end) for start, end in zendpoints)) else: if func.func_code.co_argcount==1: for start, end in zendpoints: func(self.buffer_fetch(start, end)) else: runningval = 0 for start, end in zendpoints: runningval = func(self.buffer_fetch(start, end), runningval) return runningval def buffer_init(self): Bufferable.buffer_init(self) if isinstance(self.source, Bufferable): self.source.buffer_init() else: for s in self.source: s.buffer_init() self.next_sample = 0 def buffer_apply(self, input): raise NotImplementedError def buffer_fetch_next(self, samples): start = self.next_sample self.next_sample += samples end = start+samples input = self.source.buffer_fetch(start, end) return self.buffer_apply(input) def __add__ (self, other): if isinstance(other, Bufferable): return SumFilterbank((self, other)) else: func = lambda x: other+x return FunctionFilterbank(self, func) __radd__ = __add__ def __sub__ (self, other): if isinstance(other, Bufferable): return SumFilterbank((self, other), (1, -1)) else: func = lambda x: x-other return FunctionFilterbank(self, func) def __rsub__ (self, other): # Note that __rsub__ should return other-self if isinstance(other, Bufferable): return SumFilterbank((self, other), (-1, 1)) else: func = lambda x: other-x return FunctionFilterbank(self, func) def __mul__(self, other): if isinstance(other, Bufferable): func = lambda x, y: x*y return FunctionFilterbank((self, other), func) else: func = lambda x: x*other return FunctionFilterbank(self, func) __rmul__ = __mul__ def __div__(self, other): if isinstance(other, Bufferable): func = lambda x, y: x/y return FunctionFilterbank((self, other), func) else: func = lambda x: 
x/other return FunctionFilterbank(self, func) def __rdiv__(self, other): # Note __rdiv__ returns other/self if isinstance(other, Bufferable): func = lambda x, y: x/y return FunctionFilterbank((other, self), func) else: func = lambda x: other/x return FunctionFilterbank(self, func) class RestructureFilterbank(Filterbank): ''' Filterbank used to restructure channels, including repeating and interleaving. **Standard forms of usage:** Repeat mono source N times:: RestructureFilterbank(source, N) For a stereo source, N copies of the left channel followed by N copies of the right channel:: RestructureFilterbank(source, N) For a stereo source, N copies of the channels tiled as LRLRLR...LR:: RestructureFilterbank(source, numtile=N) For two stereo sources AB and CD, join them together in serial to form the output channels in order ABCD:: RestructureFilterbank((AB, CD)) For two stereo sources AB and CD, join them together interleaved to form the output channels in order ACBD:: RestructureFilterbank((AB, CD), type='interleave') These arguments can also be combined together, for example to AB and CD into output channels AABBCCDDAABBCCDDAABBCCDD:: RestructureFilterbank((AB, CD), 2, 'serial', 3) The three arguments are the number of repeats before joining, the joining type ('serial' or 'interleave') and the number of tilings after joining. See below for details. **Initialise arguments:** ``source`` Input source or list of sources. ``numrepeat=1`` Number of times each channel in each of the input sources is repeated before mixing the source channels. For example, with repeat=2 an input source with channels ``AB`` will be repeated to form ``AABB`` ``type='serial'`` The method for joining the source channels, the options are ``'serial'`` to join the channels in series, or ``'interleave'`` to interleave them. In the case of ``'interleave'``, each source must have the same number of channels. An example of serial, if the input sources are ``abc`` and ``def`` the output would be ``abcdef``. For interleave, the output would be ``adbecf``. ``numtile=1`` The number of times the joined channels are tiled, so if the joined channels are ``ABC`` and ``numtile=3`` the output will be ``ABCABCABC``. ``indexmapping=None`` Instead of specifying the restructuring via ``numrepeat, type, numtile`` you can directly give the mapping of input indices to output indices. So for a single stereo source input, ``indexmapping=[1,0]`` would reverse left and right. Similarly, with two mono sources, ``indexmapping=[1,0]`` would have channel 0 of the output correspond to source 1 and channel 1 of the output corresponding to source 0. This is because the indices are counted in order of channels starting from the first source and continuing to the last. For example, suppose you had two sources, each consisting of a stereo sound, say source 0 was ``AB`` and source 1 was ``CD`` then ``indexmapping=[1, 0, 3, 2]`` would swap the left and right of each source, but leave the order of the sources the same, i.e. the output would be ``BADC``. 
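
    For example, to swap the left and right channels of a single stereo source
    (a minimal sketch, assuming ``sound`` is a stereo :class:`Sound`)::

        fb = RestructureFilterbank(sound, indexmapping=[1, 0])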
''' def __init__(self, source, numrepeat=1, type='serial', numtile=1, indexmapping=None): self._has_been_optimised = False self._reinit(source, numrepeat, type, numtile, indexmapping) def _do_reinit(self): self._reinit(*self._original_init_arguments) if self._has_been_optimised: self._optimisation_target._do_reinit() def _reinit(self, source, numrepeat, type, numtile, indexmapping): self._original_init_arguments = (source, numrepeat, type, numtile, indexmapping) if isinstance(source, Bufferable): source = (source,) if indexmapping is None: nchannels = array([s.nchannels for s in source]) idx = hstack(([0], cumsum(nchannels))) I = [arange(start, stop) for start, stop in zip(idx[:-1], idx[1:])] I = tuple(repeat(i, numrepeat) for i in I) if type=='serial': indexmapping = hstack(I) elif type=='interleave': if len(unique(nchannels))!=1: raise ValueError('For interleaving, all inputs must have an equal number of channels.') I0 = len(I[0]) indexmapping = zeros(I0*len(I), dtype=int) for j, i in enumerate(I): indexmapping[j::len(I)] = i else: raise ValueError('Type must be "serial" or "interleave"') indexmapping = tile(indexmapping, numtile) if not isinstance(indexmapping, ndarray): indexmapping = array(indexmapping, dtype=int) # optimisation to reduce multiple RestructureFilterbanks into a single # one, by collating the sources and reconstructing the indexmapping # from the individual indexmappings if all(isinstance(s, RestructureFilterbank) for s in source): newsource = () newsourcesizes = () for s in source: s._has_been_optimised = True s._optimisation_target = self newsource += s.source inputsourcesize = sum(inpsource.nchannels for inpsource in s.source) newsourcesizes += (inputsourcesize,) newsourcesizes = array(newsourcesizes) newsourceoffsets = hstack((0, cumsum(newsourcesizes))) new_indexmapping = zeros_like(indexmapping) sourcesizes = array(tuple(s.nchannels for s in source)) sourceoffsets = hstack((0, cumsum(sourcesizes))) # gives the index of the source of each element of indexmapping sourceindices = digitize(indexmapping, cumsum(sourcesizes)) for i in xrange(len(indexmapping)): source_index = sourceindices[i] s = source[source_index] relative_index = indexmapping[i]-sourceoffsets[source_index] source_relative_index = s.indexmapping[relative_index] new_index = source_relative_index+newsourceoffsets[source_index] new_indexmapping[i] = new_index source = newsource indexmapping = new_indexmapping self.indexmapping = indexmapping self.nchannels = len(indexmapping) self.samplerate = source[0].samplerate for s in source: if int(s.samplerate)!=int(self.samplerate): raise ValueError('All sources must have the same samplerate.') self._source = source def buffer_fetch_next(self, samples): start = self.next_sample self.next_sample += samples end = start+samples inputs = tuple(s.buffer_fetch(start, end) for s in self.source) input = hstack(inputs) input = input[:, self.indexmapping] return input def change_source(self, source): if not hasattr(self, '_source') or self._source is None: self._source = source return oldsource, numrepeat, type, numtile, indexmapping = self._original_init_arguments self._original_init_arguments = source, numrepeat, type, numtile, indexmapping self._do_reinit() # self._reinit(source, numrepeat, type, numtile, indexmapping) # if self._has_been_optimised: # target = self._optimisation_target # target._reinit(*target._original_init_arguments) class Repeat(RestructureFilterbank): ''' Filterbank that repeats each channel from its input, e.g. 
with 3 repeats channels ABC would map to AAABBBCCC. ''' def __init__(self, source, numrepeat): RestructureFilterbank.__init__(self, source, numrepeat) class Tile(RestructureFilterbank): ''' Filterbank that tiles the channels from its input, e.g. with 3 tiles channels ABC would map to ABCABCABC. ''' def __init__(self, source, numtile): RestructureFilterbank.__init__(self, source, numtile=numtile) class Join(RestructureFilterbank): ''' Filterbank that joins the channels of its inputs in series, e.g. with two input sources with channels AB and CD respectively, the output would have channels ABCD. You can initialise with multiple sources separated by commas, or by passing a list of sources. ''' def __init__(self, *sources): source = [] for s in sources: if isinstance(s, Bufferable): source.append(s) else: source.extend(s) RestructureFilterbank.__init__(self, tuple(source), type='serial') class Interleave(RestructureFilterbank): ''' Filterbank that interleaves the channels of its inputs, e.g. with two input sources with channels AB and CD respectively, the output would have channels ACBD. You can initialise with multiple sources separated by commas, or by passing a list of sources. ''' def __init__(self, *sources): source = [] for s in sources: if isinstance(s, Bufferable): source.append(s) else: source.extend(s) RestructureFilterbank.__init__(self, tuple(source), type='interleave') class FunctionFilterbank(Filterbank): ''' Filterbank that just applies a given function. The function should take as many arguments as there are sources. For example, to half-wave rectify inputs:: FunctionFilterbank(source, lambda x: clip(x, 0, Inf)) The syntax ``lambda x: clip(x, 0, Inf)`` defines a function object that takes a single argument ``x`` and returns ``clip(x, 0, Inf)``. The numpy function ``clip(x, low, high)`` returns the values of ``x`` clipped between ``low`` and ``high`` (so if ``xhigh`` it returns ``high``, otherwise it returns ``x``). The symbol ``Inf`` means infinity, i.e. no clipping of positive values. **Technical details** Note that functions should operate on arrays, in particular on 2D buffered segments, which are arrays of shape ``(bufsize, nchannels)``. Typically, most standard functions from numpy will work element-wise. If you want a filterbank that changes the shape of the input (e.g. changes the number of channels), set the ``nchannels`` keyword argument to the number of output channels. ''' def __init__(self, source, func, nchannels=None,**params): if isinstance(source, Bufferable): source = (source,) Filterbank.__init__(self, source) self.func = func if nchannels is not None: self.nchannels = nchannels self.params = params def buffer_fetch_next(self, samples): start = self.cached_buffer_end end = start+samples inputs = tuple(s.buffer_fetch(start, end) for s in self.source) # print inputs,self.params return self.func(*inputs,**self.params) class SumFilterbank(FunctionFilterbank): ''' Sum filterbanks together with given weight vectors. For example, to take the sum of two filterbanks:: SumFilterbank((fb1, fb2)) To take the difference:: SumFilterbank((fb1, fb2), (1, -1)) ''' def __init__(self, source, weights=None): if weights is None: weights = ones(len(source)) self.weights = weights func = lambda *inputs: sum(input*w for input, w in zip(inputs, weights)) FunctionFilterbank.__init__(self, source, func) class DoNothingFilterbank(Filterbank): ''' Filterbank that does nothing to its input. Useful for removing a set of filters without having to rewrite your code. 
Can also be used for simply writing compound derived classes. For example, if you want a compound Filterbank that does AFilterbank and then BFilterbank, but you want to encapsulate that into a single class, you could do:: class ABFilterbank(DoNothingFilterbank): def __init__(self, source): a = AFilterbank(source) b = BFilterbank(a) DoNothingFilterbank.__init__(self, b) However, a more general way of writing compound filterbanks is to use :class:`CombinedFilterbank`. ''' def buffer_apply(self, input): return input class ControlFilterbank(Filterbank): ''' Filterbank that can be used for controlling behaviour at runtime Typically, this class is used to implement a control path in an auditory model, modifying some filterbank parameters based on the output of other filterbanks (or the same ones). The controller has a set of input filterbanks whose output values are used to modify a set of output filterbanks. The update is done by a user specified function or class which is passed these output values. The controller should be inserted as the last bank in a chain. Initialisation arguments: ``source`` The source filterbank, the values from this are used unmodified as the output of this filterbank. ``inputs`` Either a single filterbank, or sequence of filterbanks which are used as inputs to the ``updater``. ``targets`` The filterbank or sequence of filterbanks that are modified by the updater. ``updater`` The function or class which does the updating, see below. ``max_interval`` If specified, ensures that the updater is called at least as often as this interval (but it may be called more often). Can be specified as a time or a number of samples. **The updater** The ``updater`` argument can be either a function or class instance. If it is a function, it should have a form like:: # A single input def updater(input): ... # Two inputs def updater(input1, input2): ... # Arbitrary number of inputs def updater(*inputs): ... Each argument ``input`` to the function is a numpy array of shape ``(numsamples, numchannels)`` where ``numsamples`` is the number of samples just computed, and ``numchannels`` is the number of channels in the corresponding filterbank. The function is not restricted in what it can do with these inputs. Functions can be used to implement relatively simple controllers, but for more complicated situations you may want to maintain some state variables for example, and in this case you can use a class. The object ``updater`` should be an instance of a class that defines the ``__call__`` method (with the same syntax as above for functions). In addition, you can define a reinitialisation method ``reinit()`` which will be called when the ``buffer_init()`` method is called on the filterbank, although this is entirely optional. 
**Example** The following will do a simple form of gain control, where the gain parameter will drift exponentially towards target_rms/rms with a given time constant:: # This class implements the gain (see Filterbank for details) class GainFilterbank(Filterbank): def __init__(self, source, gain=1.0): Filterbank.__init__(self, source) self.gain = gain def buffer_apply(self, input): return self.gain*input # This is the class for the updater object class GainController(object): def __init__(self, target, target_rms, time_constant): self.target = target self.target_rms = target_rms self.time_constant = time_constant def reinit(self): self.sumsquare = 0 self.numsamples = 0 def __call__(self, input): T = input.shape[0]/self.target.samplerate self.sumsquare += sum(input**2) self.numsamples += input.size rms = sqrt(self.sumsquare/self.numsamples) g = self.target.gain g_tgt = self.target_rms/rms tau = self.time_constant self.target.gain = g_tgt+exp(-T/tau)*(g-g_tgt) And an example of using this with an input ``source``, a target RMS of 0.2 and a time constant of 50 ms, updating every 10 ms:: gain_fb = GainFilterbank(source) updater = GainController(gain_fb, 0.2, 50*ms) control = ControlFilterbank(gain_fb, source, gain_fb, updater, 10*ms) ''' def __init__(self, source, inputs, targets, updater, max_interval=None): Filterbank.__init__(self, source) if not isinstance(inputs, (list, tuple)): inputs = [inputs] if not isinstance(targets, (list, tuple)): targets = [targets] self.inputs = inputs self.updater = updater if max_interval is not None: if not isinstance(max_interval, int): max_interval = int(max_interval*source.samplerate) for x in inputs+targets: x.maximum_buffer_size = max_interval self.maximum_buffer_size = max_interval def buffer_init(self): Filterbank.buffer_init(self) if hasattr(self.updater, 'reinit'): self.updater.reinit() def buffer_fetch_next(self, samples): start = self.next_sample self.next_sample += samples end = start+samples source_input = self.source.buffer_fetch(start, end) input_buffers = [x.buffer_fetch(start, end) for x in self.inputs] self.updater(*input_buffers) return source_input class CombinedFilterbank(Filterbank): ''' Filterbank that encapsulates a chain of filterbanks internally. This class should mostly be used by people writing extensions to Brian hears rather than by users directly. The purpose is to take an existing chain of filterbanks and wrap them up so they appear to the user as a single filterbank which can be used exactly as any other filterbank. In order to do this, derive from this class and in your initialisation follow this pattern:: class RectifiedGammatone(CombinedFilterbank): def __init__(self, source, cf): CombinedFilterbank.__init__(self, source) source = self.get_modified_source() # At this point, insert your chain of filterbanks acting on # the modified source object gfb = Gammatone(source, cf) rectified = FunctionFilterbank(gfb, lambda input: clip(input, 0, Inf)) # Finally, set the output filterbank to be the last in your chain self.set_output(fb) This combination of a :class:`Gammatone` and a rectification via a :class:`FunctionFilterbank` can now be used as a single filterbank, for example:: x = whitenoise(100*ms) fb = RectifiedGammatone(x, [1*kHz, 1.5*kHz]) y = fb.process() **Details** The reason for the ``get_modified_source()`` call is that the source attribute of a filterbank can be changed after creation. 
The modified source provides a buffer (in fact, a :class:`DoNothingFilterbank`) so that the input to the chain of filters defined by the derived class doesn't need to be changed. ''' def __init__(self, source): Filterbank.__init__(self, source) def get_duration(self): if hasattr(self, '_duration'): return self._duration else: return max(Filterbank.get_duration(self), self.output.duration) source = property(fget=lambda self:self._source, fset=lambda self, source:self.change_source(source)) def change_source(self, source): Filterbank.change_source(self, source) if hasattr(self, '_modified_source'): self._modified_source.source = source def get_modified_source(self): self._modified_source = DoNothingFilterbank(self.source) return self._modified_source def set_output(self, output): self.output = output self.nchannels = output.nchannels def buffer_init(self): Filterbank.buffer_init(self) self.output.buffer_init() def buffer_fetch(self, start, end): return self.output.buffer_fetch(start, end) brian-1.3.1/brian/hears/filtering/filterbankgroup.py000066400000000000000000000060041167451777000225560ustar00rootroot00000000000000from brian import StateUpdater, NeuronGroup, Equations, Clock, network_operation __all__ = ['FilterbankGroup'] class FilterbankGroup(NeuronGroup): ''' Allows a Filterbank object to be used as a NeuronGroup Initialised as a standard :class:`NeuronGroup` object, but with two additional arguments at the beginning, and no ``N`` (number of neurons) argument. The number of neurons in the group will be the number of channels in the filterbank. (TODO: add reference to interleave/serial channel stuff here.) ``filterbank`` The Filterbank object to be used by the group. In fact, any Bufferable object can be used. ``targetvar`` The target variable to put the filterbank output into. One additional keyword is available beyond that of :class:`NeuronGroup`: ``buffersize=32`` The size of the buffered segments to fetch each time. The efficiency depends on this in an unpredictable way, larger values mean more time spent in optimised code, but are worse for the cache. In many cases, the default value is a good tradeoff. Values can be given as a number of samples, or a length of time in seconds. Note that if you specify your own :class:`Clock`, it should have 1/dt=samplerate. ''' def __init__(self, filterbank, targetvar, *args, **kwds): self.targetvar = targetvar self.filterbank = filterbank filterbank.buffer_init() # update level keyword kwds['level'] = kwds.get('level', 0)+1 # Sanitize the clock - does it have the right dt value? 
if 'clock' in kwds: if int(1/kwds['clock'].dt)!=int(filterbank.samplerate): raise ValueError('Clock should have 1/dt=samplerate') else: kwds['clock'] = Clock(dt=1/filterbank.samplerate) buffersize = kwds.pop('buffersize', 32) if not isinstance(buffersize, int): buffersize = int(buffersize*self.samplerate) self.buffersize = buffersize self.buffer_pointer = buffersize self.buffer_start = -buffersize NeuronGroup.__init__(self, filterbank.nchannels, *args, **kwds) @network_operation(when='start', clock=self.clock) def apply_filterbank_output(): if self.buffer_pointer>=self.buffersize: self.buffer_pointer = 0 self.buffer_start += self.buffersize self.buffer = self.filterbank.buffer_fetch(self.buffer_start, self.buffer_start+self.buffersize) setattr(self, targetvar, self.buffer[self.buffer_pointer, :]) self.buffer_pointer += 1 self.contained_objects.append(apply_filterbank_output) def reinit(self): NeuronGroup.reinit(self) self.filterbank.buffer_init() self.buffer_pointer = self.buffersize self.buffer_start = -self.buffersize brian-1.3.1/brian/hears/filtering/filterbanklibrary.py000066400000000000000000001035241167451777000230730ustar00rootroot00000000000000from brian import * from scipy import signal, weave, random from operator import isSequenceType from filterbank import Filterbank,RestructureFilterbank from linearfilterbank import * from firfilterbank import * __all__ = ['Cascade', 'Gammatone', 'ApproximateGammatone', 'LogGammachirp', 'LinearGammachirp', 'LinearGaborchirp', 'IIRFilterbank', 'Butterworth', 'AsymmetricCompensation', 'LowPass', 'asymmetric_compensation_coeffs', 'BiQuadratic' ] class Gammatone(LinearFilterbank): ''' Bank of gammatone filters. They are implemented as cascades of four 2nd-order IIR filters (this 8th-order digital filter corresponds to a 4th-order gammatone filter). The approximated impulse response :math:`\\mathrm{IR}` is defined as follow :math:`\\mathrm{IR}(t)=t^3\\exp(-2\\pi b \\mathrm{ERB}(f)t)\\cos(2\\pi f t)` where :math:`\\mathrm{ERB}(f)=24.7+0.108 f` [Hz] is the equivalent rectangular bandwidth of the filter centered at :math:`f`. It comes from Slaney's exact gammatone implementation (Slaney, M., 1993, "An Efficient Implementation of the Patterson-Holdsworth Auditory Filter Bank". Apple Computer Technical Report #35). The code is based on `Slaney's Matlab implementation `__. Initialised with arguments: ``source`` Source of the filterbank. ``cf`` List or array of center frequencies. ``b=1.019`` parameter which determines the bandwidth of the filters (and reciprocally the duration of its impulse response). In particular, the bandwidth = b.ERB(cf), where ERB(cf) is the equivalent bandwidth at frequency ``cf``. The default value of ``b`` to a best fit (Patterson et al., 1992). ``b`` can either be a scalar and will be the same for every channel or an array of the same length as ``cf``. ``erb_order=1``, ``ear_Q=9.26449``, ``min_bw=24.7`` Parameters used to compute the ERB bandwidth. :math:`\\mathrm{ERB} = ((\mathrm{cf}/\mathrm{ear\\_Q})^{\\mathrm{erb}\\_\\mathrm{order}} + \\mathrm{min\\_bw}^{\\mathrm{erb}\\_\\mathrm{order}})^{(1/\\mathrm{erb}\\_\\mathrm{order})}`. Their default values are the ones recommended in Glasberg and Moore, 1990. ``cascade=None`` Specify 1 or 2 to use a cascade of 1 or 2 order 8 or 4 filters instead of 4 2nd order filters. Note that this is more efficient but may induce numerical stability issues. 
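
    For example (a minimal sketch, assuming ``sound`` is a :class:`Sound` object
    and using ``erbspace`` to place the centre frequencies)::

        cf = erbspace(100*Hz, 1*kHz, 50)
        fb = Gammatone(sound, cf)
        output = fb.process()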
''' def __init__(self, source, cf, b=1.019, erb_order=1, ear_Q=9.26449, min_bw=24.7, cascade=None): cf = atleast_1d(cf) self.cf = cf self.samplerate = source.samplerate T = 1/self.samplerate self.b,self.erb_order,self.EarQ,self.min_bw=b,erb_order,ear_Q,min_bw erb = ((cf/ear_Q)**erb_order + min_bw**erb_order)**(1/erb_order) B = b*2*pi*erb # B = 2*pi*b A0 = T A2 = 0 B0 = 1 B1 = -2*cos(2*cf*pi*T)/exp(B*T) B2 = exp(-2*B*T) A11 = -(2*T*cos(2*cf*pi*T)/exp(B*T) + 2*sqrt(3+2**1.5)*T*sin(2*cf*pi*T) / \ exp(B*T))/2 A12=-(2*T*cos(2*cf*pi*T)/exp(B*T)-2*sqrt(3+2**1.5)*T*sin(2*cf*pi*T)/\ exp(B*T))/2 A13=-(2*T*cos(2*cf*pi*T)/exp(B*T)+2*sqrt(3-2**1.5)*T*sin(2*cf*pi*T)/\ exp(B*T))/2 A14=-(2*T*cos(2*cf*pi*T)/exp(B*T)-2*sqrt(3-2**1.5)*T*sin(2*cf*pi*T)/\ exp(B*T))/2 i=1j gain=abs((-2*exp(4*i*cf*pi*T)*T+\ 2*exp(-(B*T)+2*i*cf*pi*T)*T*\ (cos(2*cf*pi*T)-sqrt(3-2**(3./2))*\ sin(2*cf*pi*T)))*\ (-2*exp(4*i*cf*pi*T)*T+\ 2*exp(-(B*T)+2*i*cf*pi*T)*T*\ (cos(2*cf*pi*T)+sqrt(3-2**(3./2))*\ sin(2*cf*pi*T)))*\ (-2*exp(4*i*cf*pi*T)*T+\ 2*exp(-(B*T)+2*i*cf*pi*T)*T*\ (cos(2*cf*pi*T)-\ sqrt(3+2**(3./2))*sin(2*cf*pi*T)))*\ (-2*exp(4*i*cf*pi*T)*T+2*exp(-(B*T)+2*i*cf*pi*T)*T*\ (cos(2*cf*pi*T)+sqrt(3+2**(3./2))*sin(2*cf*pi*T)))/\ (-2/exp(2*B*T)-2*exp(4*i*cf*pi*T)+\ 2*(1+exp(4*i*cf*pi*T))/exp(B*T))**4) allfilts=ones(len(cf)) self.A0, self.A11, self.A12, self.A13, self.A14, self.A2, self.B0, self.B1, self.B2, self.gain=\ A0*allfilts, A11, A12, A13, A14, A2*allfilts, B0*allfilts, B1, B2, gain self.filt_a=dstack((array([ones(len(cf)), B1, B2]).T,)*4) self.filt_b=dstack((array([A0/gain, A11/gain, A2/gain]).T, array([A0*ones(len(cf)), A12, zeros(len(cf))]).T, array([A0*ones(len(cf)), A13, zeros(len(cf))]).T, array([A0*ones(len(cf)), A14, zeros(len(cf))]).T)) LinearFilterbank.__init__(self, source, self.filt_b, self.filt_a) if cascade is not None: self.decascade(cascade) class ApproximateGammatone(LinearFilterbank): r''' Bank of approximate gammatone filters implemented as a cascade of ``order`` IIR gammatone filters. The filter is derived from the sampled version of the complex analog gammatone impulse response :math:`g_{\gamma}(t)=t^{\gamma-1} (\lambda e^{i \eta t})^{\gamma}` where :math:`\gamma` corresponds to ``order``, :math:`\eta` defines the oscillation frequency ``cf``, and :math:`\lambda` defines the bandwidth parameter. The design is based on the Hohmann implementation as described in Hohmann, V., 2002, "Frequency analysis and synthesis using a Gammatone filterbank", Acta Acustica United with Acustica. The code is based on the Matlab gammatone implementation from `Meddis' toolbox `__. Initialised with arguments: ``source`` Source of the filterbank. ``cf`` List or array of center frequencies. ``bandwidth`` List or array of filters bandwidth corresponding, one for each cf. ``order=4`` The number of 1st-order gammatone filters put in cascade, and therefore the order the resulting gammatone filters. 
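A minimal usage sketch (the bandwidths are set with the equivalent rectangular
bandwidth formula used elsewhere in this module; all values are illustrative)::

    sound = whitenoise(100*ms)
    cf = erbspace(100*Hz, 1*kHz, 50)
    bw = 24.7*(4.37e-3*cf/Hz + 1.)*Hz
    fb = ApproximateGammatone(sound, cf, bw, order=4)
    output = fb.process()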
''' def __init__(self, source, cf, bandwidth,order=4): cf = atleast_1d(cf) bandwidth = atleast_1d(bandwidth) self.cf = cf self.samplerate = source.samplerate dt = 1/self.samplerate phi = 2 * pi * bandwidth * dt theta = 2 * pi * cf * dt cos_theta = cos(theta) sin_theta = sin(theta) alpha = -exp(-phi) * cos_theta b0 = ones(len(cf)) b1 = 2 * alpha b2 = exp(-2 * phi) z1 = (1 + alpha * cos_theta) - (alpha * sin_theta) * 1j z2 = (1 + b1 * cos_theta) - (b1 * sin_theta) * 1j z3 = (b2 * cos(2 * theta)) - (b2 * sin(2 * theta)) * 1j tf = (z2 + z3) / z1 a0 = abs(tf) a1 = alpha * a0 # we apply the same filters order times so we just duplicate them in the 3rd axis for the parallel_lfilter_step command self.filt_a = dstack((array([b0, b1, b2]).T,)*order) self.filt_b = dstack((array([a0, a1, zeros(len(cf))]).T,)*order) self.order = order LinearFilterbank.__init__(self,source, self.filt_b, self.filt_a) class LogGammachirp(LinearFilterbank): r''' Bank of gammachirp filters with a logarithmic frequency sweep. The approximated impulse response :math:`\mathrm{IR}` is defined as follows: :math:`\mathrm{IR}(t)=t^3e^{-2\pi b \mathrm{ERB}(f)t}\cos(2\pi (f t +c\cdot\ln(t))` where :math:`\mathrm{ERB}(f)=24.7+0.108 f` [Hz] is the equivalent rectangular bandwidth of the filter centered at :math:`f`. The implementation is a cascade of 4 2nd-order IIR gammatone filters followed by a cascade of ncascades 2nd-order asymmetric compensation filters as introduced in Unoki et al. 2001, "Improvement of an IIR asymmetric compensation gammachirp filter". Initialisation parameters: ``source`` Source sound or filterbank. ``f`` List or array of the sweep ending frequencies (:math:`f_{\mathrm{instantaneous}}=f+c/t`). ``b=1.019`` Parameters which determine the duration of the impulse response. ``b`` can either be a scalar and will be the same for every channel or an array with the same length as ``f``. ``c=1`` The glide slope (or sweep rate) given in Hz/second. The trajectory of the instantaneous frequency towards f is an upchirp when c<0 and a downchirp when c>0. ``c`` can either be a scalar and will be the same for every channel or an array with the same length as ``f``. ``ncascades=4`` Number of times the asymmetric compensation filter is put in cascade. The default value comes from Unoki et al. 2001. ''' def __init__(self, source, f,b=1.019,c=1,ncascades=4): f = atleast_1d(f) self.f = f self.samplerate= source.samplerate self.c=c self.b=b gammatone=Gammatone(source, f,b) self.gammatone_filt_b=gammatone.filt_b self.gammatone_filt_a=gammatone.filt_a ERBw=24.7*(4.37e-3*f+1.) p0=2 p1=1.7818*(1-0.0791*b)*(1-0.1655*abs(c)) p2=0.5689*(1-0.1620*b)*(1-0.0857*abs(c)) p3=0.2523*(1-0.0244*b)*(1+0.0574*abs(c)) p4=1.0724 self.asymmetric_filt_b=zeros((len(f),3, ncascades)) self.asymmetric_filt_a=zeros((len(f),3, ncascades)) self.asymmetric_filt_b,self.asymmetric_filt_a=asymmetric_compensation_coeffs(self.samplerate,f,self.asymmetric_filt_b,self.asymmetric_filt_a,b,c,p0,p1,p2,p3,p4) #concatenate the gammatone filter coefficients so that everything is in cascade in each frequency channel self.filt_b=concatenate([self.gammatone_filt_b, self.asymmetric_filt_b],axis=2) self.filt_a=concatenate([self.gammatone_filt_a, self.asymmetric_filt_a],axis=2) LinearFilterbank.__init__(self, source, self.filt_b,self.filt_a) class LinearGammachirp(FIRFilterbank): r''' Bank of gammachirp filters with linear frequency sweeps and gamma envelope as described in Wagner et al. 
2009, "Auditory responses in the barn owl's nucleus laminaris to clicks: impulse response and signal analysis of neurophonic potential", J. Neurophysiol. The impulse response :math:`\mathrm{IR}` is defined as follow :math:`\mathrm{IR}(t)=t^3e^{-t/\sigma}\cos(2\pi (f t +c/2 t^2)+\phi)` where :math:`\sigma` corresponds to ``time_constant`` and :math:`\phi` to ``phase`` (see definition of parameters). Those filters are implemented as FIR filters using truncated time representations of gammachirp functions as the impulse response. The impulse responses, which need to have the same length for every channel, have a duration of 15 times the biggest time constant. The length of the impulse response is therefore ``15*max(time_constant)*sampling_rate``. The impulse responses are normalized with respect to the transmitted power, i.e. the rms of the filter taps is 1. Initialisation parameters: ``source`` Source sound or filterbank. ``f`` List or array of the sweep starting frequencies (:math:`f_{\mathrm{instantaneous}}=f+ct`). ``time_constant`` Determines the duration of the envelope and consequently the length of the impulse response. ``c=1`` The glide slope (or sweep rate) given in Hz/second. The time-dependent instantaneous frequency is ``f+c*t`` and is therefore going upward when c>0 and downward when c<0. ``c`` can either be a scalar and will be the same for every channel or an array with the same length as ``f``. ``phase=0`` Phase shift of the carrier. Has attributes: ``length_impulse_response`` Number of samples in the impulse responses. ``impulse_response`` Array of shape ``(nchannels, length_impulse_response)`` with each row being an impulse response for the corresponding channel. ''' def __init__(self,source, f, time_constant, c, phase=0): self.f=f=atleast_1d(f) self.c=c=atleast_1d(c) self.phase=phase=atleast_1d(phase) self.time_constant=time_constant=atleast_1d(time_constant) if len(time_constant)==1: time_constant=time_constant*ones(len(f)) if len(c)==1: c=c*ones(len(f)) if len(phase)==1: phase=phase*ones(len(f)) self.samplerate= source.samplerate Tcst_max=max(time_constant) t_start=-Tcst_max*3*second t=arange(t_start,-4*t_start,1./self.samplerate) self.impulse_response=zeros((len(f),len(t))) for ich in xrange(len(f)): env=(t-t_start)**3*exp(-(t-t_start)/time_constant[ich]) self.impulse_response[ich,:]=env*cos(2*pi*(f[ich]*t+c[ich]/2*t**2)+phase[ich]) # self.impulse_response[ich,:]=self.impulse_response[ich,:]/sqrt(sum(self.impulse_response[ich,:]**2)) self.impulse_response[ich,:]=self.impulse_response[ich,:]/sum(abs(self.impulse_response[ich,:])) FIRFilterbank.__init__(self,source, self.impulse_response) class LinearGaborchirp(FIRFilterbank): r''' Bank of gammachirp filters with linear frequency sweeps and gaussian envelope as described in Wagner et al. 2009, "Auditory responses in the barn owl's nucleus laminaris to clicks: impulse response and signal analysis of neurophonic potential", J. Neurophysiol. The impulse response :math:`\mathrm{IR}` is defined as follows: :math:`\mathrm{IR}(t)=e^{-t/2\sigma^2}\cos(2\pi (f t +c/2 t^2)+\phi)`, where :math:`\sigma` corresponds to ``time_constant`` and :math:`\phi` to ``phase`` (see definition of parameters). These filters are implemented as FIR filters using truncated time representations of gammachirp functions as the impulse response. The impulse responses, which need to have the same length for every channel, have a duration of 12 times the biggest time constant. 
The length of the impulse response is therefore ``12*max(time_constant)*sampling_rate``. The envelope is a gaussian function (Gabor filter). The impulse responses are normalized with respect to the transmitted power, i.e. the rms of the filter taps is 1. Initialisation parameters: ``source`` Source sound or filterbank. ``f`` List or array of the sweep starting frequencies (:math:`f_{\mathrm{instantaneous}}=f+c*t`). ``time_constant`` Determines the duration of the envelope and consequently the length of the impluse response. ``c=1`` The glide slope (or sweep rate) given ins Hz/second. The time-dependent instantaneous frequency is ``f+c*t`` and is therefore going upward when c>0 and downward when c<0. ``c`` can either be a scalar and will be the same for every channel or an array with the same length as ``f``. ``phase=0`` Phase shift of the carrier. Has attributes: ``length_impulse_response`` Number of sample in the impulse responses. ``impulse_response`` Array of shape ``(nchannels, length_impulse_response)`` with each row being an impulse response for the corresponding channel. ''' def __init__(self,source, f, time_constant, c, phase=0): self.f=f=atleast_1d(f) self.c=c=atleast_1d(c) self.phase=phase=atleast_1d(phase) self.time_constant=time_constant=atleast_1d(time_constant) if len(time_constant)==1: time_constant=time_constant*ones(len(f)) if len(c)==1: c=c*ones(len(f)) if len(phase)==1: phase=phase*ones(len(f)) self.samplerate = source.samplerate Tcst_max=max(time_constant) t_start=-Tcst*6*second t=arange(t_start,-t_start,1./self.samplerate) self.impulse_response=zeros((len(f),len(t))) for ich in xrange(len(f)): env=exp(-(t/(2*time_constant[ich]))**2) self.impulse_response[ich,:]=env*cos(2*pi*(f[ich]*t+c[ich]/2*t**2)+phase[ich]) self.impulse_response[ich,:]=self.impulse_response[ich,:]/sqrt(sum(self.impulse_response[ich,:]**2)) FIRFilterbank.__init__(self, source, self.impulse_response) class IIRFilterbank(LinearFilterbank): ''' Filterbank of IIR filters. The filters can be low, high, bandstop or bandpass and be of type Elliptic, Butterworth, Chebyshev etc. The ``passband`` and ``stopband`` can be scalars (for low or high pass) or pairs of parameters (for stopband and passband) yielding similar filters for every channel. They can also be arrays of shape ``(1, nchannels)`` for low and high pass or ``(2, nchannels)`` for stopband and passband yielding different filters along channels. This class uses the scipy iirdesign function to generate filter coefficients for every channel. See the documentation for scipy.signal.iirdesign for more details. Initialisation parameters: ``samplerate`` The sample rate in Hz. ``nchannels`` The number of channels in the bank ``passband``, ``stopband`` The edges of the pass and stop bands in Hz. For lowpass and highpass filters, in the case of similar filters for each channel, they are scalars and passbandpassband for a highpass. For a bandpass or bandstop filter, in the case of similar filters for each channel, make passband and stopband a list with two elements, e.g. for a bandpass have ``passband=[200*Hz, 500*Hz]`` and ``stopband=[100*Hz, 600*Hz]``. ``passband`` and ``stopband`` can also be arrays of shape ``(1, nchannels)`` for low and high pass or ``(2, nchannels)`` for stopband and passband yielding different filters along channels. ``gpass`` The maximum loss in the passband in dB. Can be a scalar or an array of length ``nchannels``. ``gstop`` The minimum attenuation in the stopband in dB. Can be a scalar or an array of length ``nchannels``. 
``btype`` One of 'low', 'high', 'bandpass' or 'bandstop'. ``ftype`` The type of IIR filter to design: 'ellip' (elliptic), 'butter' (Butterworth), 'cheby1' (Chebyshev I), 'cheby2' (Chebyshev II), 'bessel' (Bessel). ''' def __init__(self, source, nchannels, passband, stopband, gpass, gstop, btype, ftype): Wpassband = passband.copy() Wstopband = stopband.copy() Wpassband = atleast_1d(Wpassband) Wstopband = atleast_1d(Wstopband) gpass = atleast_1d(gpass) gstop = atleast_1d(gstop) self.samplerate=source.samplerate if Wpassband.shape != Wstopband.shape: raise Exeption('passband and stopband must contain the same number of ent') try: Wpassband=Wpassband/self.samplerate*2+0.0 # wn=1 corresponding to half the sample rate Wstopband=Wstopband/self.samplerate*2+0.0 except DimensionMismatchError: raise DimensionMismatchError('IIRFilterbank passband, stopband parameters must be in Hz') # now design filterbank if btype=='low' or btype=='high': if len(Wpassband)==1: #if there is only one Wn value for all channel just repeat it self.filt_b, self.filt_a = signal.iirdesign(Wpassband, Wstopband, gpass, gstop, ftype=ftype) self.filt_b=kron(ones((nchannels,1)),self.filt_b) self.filt_a=kron(ones((nchannels,1)),self.filt_a) else: #else make nchannels different filters if len(gstop) != nchannels: #if the ripple parameters are scalar make them as long as the number of channels gpass=repeat(gpass,nchannels) if len(gstop) != nchannels: gstop=repeat(gstop,nchannels) order=0 filt_b, filt_a =[1]*nchannels,[1]*nchannels for i in xrange((nchannels)): #generate the different filter coeffcients filt_b[i], filt_a[i] = signal.iirdesign(Wpassband[i], Wstopband[i], gpass[i], gstop[i], ftype=ftype) if len(filt_b[i])>order: #take the highst order of them to be the size of the filter coefficient matrix order=len(filt_b[i]) self.filt_b=zeros((nchannels,order)) self.filt_a=zeros((nchannels,order)) for i in xrange((nchannels)): #fill the coefficient matrix self.filt_b[i,:len(filt_b[i])], self.filt_a[i,:len(filt_a[i])] = filt_b[i],filt_a[i] else: if Wpassband.ndim==1: #if there is only one Wn pair of values for all channel just repeat it self.filt_b, self.filt_a = signal.iirdesign(Wpassband, Wstopband, gpass, gstop, ftype=ftype) self.filt_b=kron(ones((nchannels,1)),self.filt_b) self.filt_a=kron(ones((nchannels,1)),self.filt_a) else: if len(gstop) != nchannels:#if the ripple parameters are scalar make them as long as the number of channels gpass=repeat(gpass,nchannels) if len(gstop) != nchannels: gstop=repeat(gstop,nchannels) order=0 filt_b, filt_a =[1]*nchannels,[1]*nchannels for i in xrange((nchannels)):#take the highst order of them to be the size of the filter coefficient matrix filt_b[i], filt_a[i] = signal.iirdesign(Wpassband[:,i], Wstopband[:,i], gpass[i], gstop[i], ftype=ftype) if len(filt_b[i])>order: order=len(filt_b[i]) self.filt_b=zeros((nchannels,order)) self.filt_a=zeros((nchannels,order)) for i in xrange((nchannels)):#fill the coefficient matrix self.filt_b[i,:len(filt_b[i])], self.filt_a[i,:len(filt_a[i])] = filt_b[i],filt_a[i] self.filt_a=self.filt_a.reshape(self.filt_a.shape[0],self.filt_a.shape[1],1) self.filt_b=self.filt_b.reshape(self.filt_b.shape[0],self.filt_b.shape[1],1) self.nchannels = nchannels self.passband = passband self.stopband = stopband self.gpass = gpass self.gstop = gstop self.ftype= ftype self.order= self.filt_a.shape[1]-1 LinearFilterbank.__init__(self,source, self.filt_b, self.filt_a) class Butterworth(LinearFilterbank): ''' Filterbank of low, high, bandstop or bandpass Butterworth filters. 
The cut-off frequencies or the band frequencies can either be the same for each channel or different along channels. Initialisation parameters: ``samplerate`` Sample rate. ``nchannels`` Number of filters in the bank. ``order`` Order of the filters. ``fc`` Cutoff parameter(s) in Hz. For the case of a lowpass or highpass filterbank, ``fc`` is either a scalar (thus the same value for all of the channels) or an array of length ``nchannels``. For the case of a bandpass or bandstop, ``fc`` is either a pair of scalar defining the bandpass or bandstop (thus the same values for all of the channels) or an array of shape ``(2, nchannels)`` to define a pair for every channel. ``btype`` One of 'low', 'high', 'bandpass' or 'bandstop'. ''' def __init__(self,source, nchannels, order, fc, btype='low'): Wn=fc.copy() Wn=atleast_1d(Wn) #Scalar inputs are converted to 1-dimensional arrays self.samplerate = source.samplerate try: Wn= Wn/self.samplerate *2+0.0 # wn=1 corresponding to half the sample rate except DimensionMismatchError: raise DimensionMismatchError('Wn must be in Hz') if btype=='low' or btype=='high': self.filt_b=zeros((nchannels,order+1)) self.filt_a=zeros((nchannels,order+1)) if len(Wn)==1: #if there is only one Wn value for all channel just repeat it self.filt_b, self.filt_a = signal.butter(order, Wn, btype=btype) self.filt_b=kron(ones((nchannels,1)),self.filt_b) self.filt_a=kron(ones((nchannels,1)),self.filt_a) else: #else make nchannels different filters for i in xrange((nchannels)): self.filt_b[i,:], self.filt_a[i,:] = signal.butter(order, Wn[i], btype=btype) else: self.filt_b=zeros((nchannels,2*order+1)) self.filt_a=zeros((nchannels,2*order+1)) if Wn.ndim==1: #if there is only one Wn pair of values for all channel just repeat it self.filt_b, self.filt_a = signal.butter(order, Wn, btype=btype) self.filt_b=kron(ones((nchannels,1)),self.filt_b) self.filt_a=kron(ones((nchannels,1)),self.filt_a) else: for i in xrange((nchannels)): self.filt_b[i,:], self.filt_a[i,:] = signal.butter(order, Wn[:,i], btype=btype) self.filt_a=self.filt_a.reshape(self.filt_a.shape[0],self.filt_a.shape[1],1) self.filt_b=self.filt_b.reshape(self.filt_b.shape[0],self.filt_b.shape[1],1) self.nchannels = nchannels LinearFilterbank.__init__(self,source, self.filt_b, self.filt_a) class BiQuadratic(LinearFilterbank): ''' Bank of biquadratic bandpass filters The transfer function of the filters are like the ones of all second-order linear filters :math:`H(s)=\frac{Kw_{0}^{2}}{s_{2}+w_{0}/Qs+w_{0}^{2}}` where :math:`w_{0}` is the centre frequency and :math:`Q` the quality factor of the filter The implementation is a 2nd-order IIR filter with a tranfer function being the ratio of two quadratic functions. Initialisation parameters: ``source`` Source sound or filterbank. ``f`` List or array of the centre frequencies. (:math:`w_{0}^{2}/2\pi`) ``Q`` Quality factor of the filters (dimensionless). 
It can be a scalar (the same for every channel) or a list/array.``Q`` defines the bandwidth such that ``BW`` Alternativl ''' def __init__(self, source, cf,Q): cf=cf[0] Q=Q[0] # Q=1 # cf = atleast_1d(cf) self.samplerate= source.samplerate w0 = 2*pi*cf/self.samplerate/second BW = 2./log(2)*arcsinh((1./2/Q)) alpha = sin(w0)*sinh(log(2)/2 * BW * w0/sin(w0) ) b_temp = array([alpha,0,-alpha])/(1 + alpha) a_temp = array([1 + alpha,-2*cos(w0),1 - alpha])/(1 + alpha) print b_temp,a_temp self.filt_b = tile(b_temp.reshape([3,1]),[1,1,1]) self.filt_a = tile(a_temp.reshape([3,1]),[1,1,1]) LinearFilterbank.__init__(self, source, self.filt_b,self.filt_a) class LowPass(LinearFilterbank): ''' Bank of 1st-order lowpass filters The code is based on the code found in the `Meddis toolbox `__. It was implemented here to be used in the DRNL cochlear model implementation. Initialised with arguments: ``source`` Source of the filterbank. ``fc`` Value, list or array (with length = number of channels) of cutoff frequencies. ''' def __init__(self,source,fc): if not isSequenceType(fc): fc = fc*ones(source.nchannels) nchannels=len(fc) self.samplerate= source.samplerate dt=1./self.samplerate self.filt_b=zeros((nchannels, 2, 1)) self.filt_a=zeros((nchannels, 2, 1)) tau=1/(2*pi*fc) self.filt_b[:,0,0]=dt/tau self.filt_b[:,1,0]=0*ones(nchannels) self.filt_a[:,0,0]=1*ones(nchannels) self.filt_a[:,1,0]=-(1-dt/tau) LinearFilterbank.__init__(self,source, self.filt_b, self.filt_a) class Cascade(LinearFilterbank): ''' Cascade of ``n`` times a linear filterbank. Initialised with arguments: ``source`` Source of the new filterbank. ``filterbank`` Filterbank object to be put in cascade ``n`` Number of cascades ''' def __init__(self,source, filterbank,n): b=filterbank.filt_b a=filterbank.filt_a self.samplerate = source.samplerate self.nchannels=filterbank.nchannels self.filt_b=zeros((b.shape[0], b.shape[1],n)) self.filt_a=zeros((a.shape[0], a.shape[1],n)) for i in range((n)): self.filt_b[:,:,i]=b[:,:,0] self.filt_a[:,:,i]=a[:,:,0] LinearFilterbank.__init__(self, source,self.filt_b, self.filt_a) class AsymmetricCompensation(LinearFilterbank): ''' Bank of asymmetric compensation filters. Those filters are meant to be used in cascade with gammatone filters to approximate gammachirp filters (Unoki et al., 2001, Improvement of an IIR asymmetric compensation gammachirp filter, Acoust. Sci. & Tech.). They are implemented a a cascade of low order filters. The code is based on the implementation found in the `AIM-MAT toolbox `__. Initialised with arguments: ``source`` Source of the filterbank. ``f`` List or array of the cut off frequencies. ``b=1.019`` Determines the duration of the impulse response. Can either be a scalar and will be the same for every channel or an array with the same length as ``cf``. ``c=1`` The glide slope when this filter is used to implement a gammachirp. Can either be a scalar and will be the same for every channel or an array with the same length as ``cf``. ``ncascades=4`` The number of time the basic filter is put in cascade. ''' def __init__(self, source, f,b=1.019, c=1,ncascades=4): f = atleast_1d(f) self.f = f self.samplerate = source.samplerate ERBw=24.7*(4.37e-3*f+1.) 
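# The constants p0-p4 below follow the asymmetric compensation filter
# parameterisation of Unoki et al. (2001) cited in the class docstring;
# the same values are used in the LogGammachirp implementation above.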
p0=2 p1=1.7818*(1-0.0791*b)*(1-0.1655*abs(c)) p2=0.5689*(1-0.1620*b)*(1-0.0857*abs(c)) p3=0.2523*(1-0.0244*b)*(1+0.0574*abs(c)) p4=1.0724 self.filt_b=zeros((len(f), 3, ncascades)) self.filt_a=zeros((len(f), 3, ncascades)) for k in arange(ncascades): r=exp(-p1*(p0/p4)**(k)*2*pi*b*ERBw/self.samplerate) #k instead of k-1 because range 0 N-1 Df=(p0*p4)**(k)*p2*c*b*ERBw phi=2*pi*maximum((f+Df), 0)/self.samplerate psy=2*pi*maximum((f-Df), 0)/self.samplerate ap=vstack((ones(r.shape),-2*r*cos(phi), r**2)).T bz=vstack((ones(r.shape),-2*r*cos(psy), r**2)).T fn=f#+ compensation_filter_order* p3 *c *b *ERBw/4; vwr=exp(1j*2*pi*fn/self.samplerate) vwrs=vstack((ones(vwr.shape), vwr, vwr**2)).T ##normilization stuff nrm=abs(sum(vwrs*ap, 1)/sum(vwrs*bz, 1)) bz=bz*tile(nrm,[3,1]).T self.filt_b[:, :, k]=bz self.filt_a[:, :, k]=ap LinearFilterbank.__init__(self, source, self.filt_b, self.filt_a) def asymmetric_compensation_coeffs(samplerate,fr,filt_b,filt_a,b,c,p0,p1,p2,p3,p4): ''' This function is used to generated the coefficient of the asymmetric compensation filter used for the gammachirp implementation. ''' ERBw=24.7*(4.37e-3*fr+1.) nbr_cascade=4 for k in arange(nbr_cascade): r=exp(-p1*(p0/p4)**(k)*2*pi*b*ERBw/samplerate) #k instead of k-1 because range 0 N-1 Dfr=(p0*p4)**(k)*p2*c*b*ERBw phi=2*pi*maximum((fr+Dfr), 0)/samplerate psy=2*pi*maximum((fr-Dfr), 0)/samplerate ap=vstack((ones(r.shape),-2*r*cos(phi), r**2)).T bz=vstack((ones(r.shape),-2*r*cos(psy), r**2)).T vwr=exp(1j*2*pi*fr/samplerate) vwrs=vstack((ones(vwr.shape), vwr, vwr**2)).T ##normilization stuff nrm=abs(sum(vwrs*ap, 1)/sum(vwrs*bz, 1)) bz=bz*tile(nrm,[3,1]).T filt_b[:, :, k]=bz filt_a[:, :, k]=ap return filt_b,filt_a def factorial(n): return prod(arange(1, n+1)) brian-1.3.1/brian/hears/filtering/firfilterbank.py000066400000000000000000000200521167451777000222010ustar00rootroot00000000000000''' FIR filterbank, can be treated as a special case of LinearFilterbank, but an optimisation is possible using buffered output by using FFT based convolution as in HRTF.apply. To do this is slightly tricky because it needs to cache previous inputs. For the moment, we implement it as a special case of LinearFilterbank but later this will change to using the FFT method. ''' from brian import * from filterbank import * from linearfilterbank import * __all__ = ['FIRFilterbank', 'LinearFIRFilterbank', 'FFTFIRFilterbank'] class LinearFIRFilterbank(LinearFilterbank): def __init__(self, source, impulse_response, minimum_buffer_size=None): # if a 1D impulse response is given we apply it to every channel # Note that because we are using LinearFilterbank at the moment, this # means duplicating the impulse response. However, it could be stored # just once when we move to using FFT based convolution and in fact this # will save a lot of computation as the FFT only needs to be computed # once then. 
if len(impulse_response.shape)==1: impulse_response = repeat(reshape(impulse_response, (1, len(impulse_response))), source.nchannels, axis=0) # Automatically duplicate mono input to fit the desired output shape if impulse_response.shape[0]!=source.nchannels: if source.nchannels!=1: raise ValueError('Can only automatically duplicate source channels for mono sources, use RestructureFilterbank.') source = RestructureFilterbank(source, impulse_response.shape[0]) # Implement it as a LinearFilterbank b = reshape(impulse_response, impulse_response.shape+(1,)) a = zeros_like(b) a[:, 0, :] = 1 LinearFilterbank.__init__(self, source, b, a) if minimum_buffer_size is not None: self.minimum_buffer_size = minimum_buffer_size class FFTFIRFilterbank(Filterbank): def __init__(self, source, impulse_response, minimum_buffer_size=None): # if a 1D impulse response is given we apply it to every channel # Note that because we are using LinearFilterbank at the moment, this # means duplicating the impulse response. However, it could be stored # just once when we move to using FFT based convolution and in fact this # will save a lot of computation as the FFT only needs to be computed # once then. if len(impulse_response.shape)==1: impulse_response = repeat(reshape(impulse_response, (1, len(impulse_response))), source.nchannels, axis=0) # Automatically duplicate mono input to fit the desired output shape if impulse_response.shape[0]!=source.nchannels: if source.nchannels!=1: raise ValueError('Can only automatically duplicate source channels for mono sources, use RestructureFilterbank.') source = RestructureFilterbank(source, impulse_response.shape[0]) Filterbank.__init__(self, source) self.input_cache = zeros((impulse_response.shape[1], self.nchannels)) self.impulse_response = impulse_response self.fftcache_nmax = -1 if minimum_buffer_size is None: minimum_buffer_size = 3*impulse_response.shape[1] self.minimum_buffer_size = minimum_buffer_size def buffer_init(self): Filterbank.buffer_init(self) self.input_cache[:] = 0 # This version uses a single FFT/IFFT call, using the axis keyword, but it # doesn't appear to be any more efficient than looping, and uses much more # memory, although my tests weren't exhaustive. 
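# Both versions rely on the standard identity that, for n at least
# len(x)+len(h)-1 (here rounded up to a power of two),
#   ifft(fft(x, n)*fft(h, n)).real
# reproduces the linear convolution of x and h. Caching the previous input
# block and discarding the corresponding head of the output gives an
# overlap-save style scheme for buffered processing.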
# def buffer_apply(self, input): # nmax = max(self.input_cache.shape[0]+input.shape[0], self.impulse_response.shape[1]) # nmax = 2**int(ceil(log2(nmax))) # if self.fftcache_nmax!=nmax: # # impulse response: (nchannels, ir_length) # ir = zeros((self.nchannels, nmax)) # ir[:, :self.impulse_response.shape[1]] = self.impulse_response # # fftcache: (ir_length, nchannels) # self.fftcache = fft(ir, n=nmax, axis=1).T # self.fftcache_nmax = nmax # fullinput = vstack((self.input_cache, input)) # fullinput = vstack((fullinput, zeros((nmax-fullinput.shape[1], self.nchannels)))) # # fullinput: (ir_length, nchannels) # fullinput_fft = fft(fullinput, n=nmax, axis=0) # fulloutput_fft = fullinput_fft*self.fftcache # fulloutput = ifft(fulloutput_fft, n=nmax, axis=0).real # output = fulloutput[self.input_cache.shape[0]:self.input_cache.shape[0]+input.shape[0]] # # update input cache # nic = self.input_cache.shape[0] # ni = input.shape[0] # #print ni, nic # if ni>=nic: # self.input_cache[:, :] = input[-nic:, :] # else: # self.input_cache[:-ni, :] = self.input_cache[ni:, :] # self.input_cache[-ni:, :] = input # return output def buffer_apply(self, input): output = zeros_like(input) nmax = max(self.input_cache.shape[0]+input.shape[0], self.impulse_response.shape[1]) nmax = 2**int(ceil(log2(nmax))) if self.fftcache_nmax!=nmax: self.fftcache = [] for i, (previnput, curinput, ir) in enumerate(zip(self.input_cache.T, input.T, self.impulse_response)): fullinput = hstack((previnput, curinput)) # pad fullinput = hstack((fullinput, zeros(nmax-len(fullinput)))) # apply fft if self.fftcache_nmax!=nmax: # recompute IR fft, first pad, then take fft, then store ir = hstack((ir, zeros(nmax-len(ir)))) ir_fft = fft(ir, n=nmax) self.fftcache.append(ir_fft) else: ir_fft = self.fftcache[i] fullinput_fft = fft(fullinput, n=nmax) curoutput_fft = fullinput_fft*ir_fft curoutput = ifft(curoutput_fft) # unpad curoutput = curoutput[len(previnput):len(previnput)+len(curinput)] output[:, i] = curoutput.real if self.fftcache_nmax!=nmax: self.fftcache_nmax = nmax # update input cache nic = self.input_cache.shape[0] ni = input.shape[0] #print ni, nic if ni>=nic: self.input_cache[:, :] = input[-nic:, :] else: self.input_cache[:-ni, :] = self.input_cache[ni:, :] self.input_cache[-ni:, :] = input return output class FIRFilterbank(Filterbank): ''' Finite impulse response filterbank Initialisation parameters: ``source`` Source sound or filterbank. ``impulse_response`` Either a 1D array providing a single impulse response applied to every input channel, or a 2D array of shape ``(nchannels, ir_length)`` for ``ir_length`` the number of samples in the impulse response. Note that if you are using a multichannel sound ``x`` as a set of impulse responses, the array should be ``impulse_response=array(x.T)``. ``minimum_buffer_size=None`` If specified, gives a minimum size to the buffer. By default, for the FFT convolution based implementation of ``FIRFilterbank``, the minimum buffer size will be ``3*ir_length``. For maximum efficiency with FFTs, ``buffer_size+ir_length`` should be a power of 2 (otherwise there will be some zero padding), and ``buffer_size`` should be as large as possible. 
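A minimal usage sketch (the impulse response below is an illustrative identity
filter applied to a mono sound)::

    sound = whitenoise(100*ms)
    ir = zeros(512)
    ir[0] = 1
    fb = FIRFilterbank(sound, ir)
    output = fb.process()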
''' def __init__(self, source, impulse_response, use_linearfilterbank=False, minimum_buffer_size=None): if use_linearfilterbank: self.__class__ = LinearFIRFilterbank else: self.__class__ = FFTFIRFilterbank self.__init__(source, impulse_response, minimum_buffer_size=minimum_buffer_size) brian-1.3.1/brian/hears/filtering/fractionaldelay.py000066400000000000000000000074661167451777000225360ustar00rootroot00000000000000from brian import * from filterbank import * from firfilterbank import * __all__ = ['FractionalDelay'] class FractionalDelay(FIRFilterbank): ''' Filterbank for applying delays which are fractional multiples of the timestep Initialised with arguments: ``source`` Source sound or filterbank. ``delays`` A list or array of delays to apply (the number of channels in the filterbank will be equal to the length of this). ``filter_length=None`` Use this to explicitly set the length of the impulse response, should be odd. If not specified, it will be automatically determined from the delays. See notes below. ``**args`` Arguments to pass to :class:`FIRFilterbank` (from which this class is derived). **Attributes** .. attribute:: delay_offset The global delay offset. If the specified delay in a given channel is ``delay`` the actual delay will be ``delay_offset+delay``. It is equal to ``(filter_length/2)/source.samplerate``. .. attribute:: filter_length The length of the filter to use. This is automatically determined from the delays. Note that ``delay_offset`` should be larger than the maximum positive or negative delay. The minimum filter length is by default 2048 samples, which allows for good accuracy for signals with power above 20 Hz. For low frequency analysis, longer filters will be necessary. For high frequency analysis, a shorter filter length could be used for a more efficient computation. **Notes** Inducing a delay for a sound that is an integer multiple of the timestep (1/samplerate) can be done simply by offsetting the samples, e.g. ``sound[3:]`` is ``sound`` delayed by ``3/sound.samplerate``. However, for fractional multiples of the timestep, the sound needs to be filtered. The theory and code for this was adapted from `http://www.labbookpages.co.uk/audio/beamforming/fractionalDelay.html `__. The filters induce a delay of ``delay_offset+delay`` where ``delay_offset`` is a positive value larger than the maximum positive or negative delay. This value is available as the attribute ``delay_offset``. 
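A minimal usage sketch (the delays below are illustrative)::

    sound = whitenoise(100*ms)
    fb = FractionalDelay(sound, [0*ms, 0.1*ms, 0.25*ms])
    output = fb.process()
    # channel i is sound delayed by fb.delay_offset + delays[i]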
''' def __init__(self, source, delays, filter_length=None, **args): delays = array(delays) delay_max = amax(abs(delays)) delay_max_int = int(ceil(source.samplerate*delay_max)) if filter_length is None: filter_length = 2*int(delay_max_int*1.25)+1 if filter_length<2048: filter_length = 2048 if filter_length/2<=delay_max_int: raise ValueError('Filter length not long enough for selected delays.') self.delay_offset = (filter_length//2)/source.samplerate self.filter_length = filter_length self.delays = delays irs = [fractional_delay_ir(delay, source.samplerate, filter_length=filter_length) for delay in delays] irs = array(irs) self.impulse_response = irs FIRFilterbank.__init__(self, source, irs, **args) # Adapted from # http://www.labbookpages.co.uk/audio/beamforming/fractionalDelay.html def fractional_delay_ir(delay, samplerate, filter_length=151): delay = delay*samplerate centre_tap = filter_length // 2 t = arange(filter_length) x = t-delay if abs(round(delay)-float(delay))<1e-10: return array(x==centre_tap, dtype=float) sinc = sin(pi*(x-centre_tap))/(pi*(x-centre_tap)) window = 0.54-0.46*cos(2.0*pi*(x+0.5)/filter_length) # Hamming window tap_weight = window*sinc return tap_weight brian-1.3.1/brian/hears/filtering/gpulinearfilterbank.py000066400000000000000000000205501167451777000234120ustar00rootroot00000000000000''' ''' # TODO: support for GPUBufferedArray? from numpy import * import pycuda #import pycuda.autoinit as autoinit import pycuda.driver as drv import pycuda.compiler from pycuda import gpuarray from brian.experimental.cuda.buffering import * import re from filterbank import Filterbank, RestructureFilterbank from gputools import * import gc __all__ = ['LinearFilterbank'] class LinearFilterbank(Filterbank): ''' Generalised parallel linear filterbank This filterbank allows you to construct a chain of linear filters in a bank so that each channel in the bank has its own filters. You pass the (b,a) parameters for the filter in the format specified in the function ``apply_linear_filterbank``. Note that there are additional GPU specific options: ``precision`` Should be single for older GPUs. ``forcesync`` Should be set to true to force a copy of GPU->CPU after each filter step, but not if the output is being used in ongoing GPU computations. By default this is ``True`` for compatibility. ``pagelocked_mem`` Allocate faster pagelocked memory for GPU->CPU copies (doubles copy rate). ``unroll_filterorder`` Whether or not to unroll the loop for the filter order, normally this should be done for small filter orders. By default, it is done if the filter order is less than or equal to 32. 
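A minimal construction sketch (requires a working PyCUDA installation; the
coefficients are taken from a CPU-designed :class:`Gammatone` filterbank
purely for illustration)::

    cf = erbspace(100*Hz, 1*kHz, 50)
    gfb = Gammatone(sound, cf)     # provides filt_b, filt_a of shape (n, m, p)
    fb = LinearFilterbank(sound, gfb.filt_b, gfb.filt_a, precision='single')
    output = fb.process()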
''' def __init__(self, source, b, a, samplerate=None, precision='double', forcesync=True, pagelocked_mem=True, unroll_filterorder=None): # Automatically duplicate mono input to fit the desired output shape if b.shape[0]!=source.nchannels: if source.nchannels!=1: raise ValueError('Can only automatically duplicate source channels for mono sources, use RestructureFilterbank.') source = RestructureFilterbank(source, b.shape[0]) Filterbank.__init__(self, source) if pycuda.context is None: set_gpu_device(0) self.precision=precision if self.precision=='double': self.precision_dtype=float64 else: self.precision_dtype=float32 self.forcesync=forcesync self.pagelocked_mem=pagelocked_mem n, m, p=b.shape self.filt_b=b self.filt_a=a filt_b_gpu=array(b, dtype=self.precision_dtype) filt_a_gpu=array(a, dtype=self.precision_dtype) filt_state=zeros((n, m-1, p), dtype=self.precision_dtype) if pagelocked_mem: filt_y=drv.pagelocked_zeros((n,), dtype=self.precision_dtype) self.pre_x=drv.pagelocked_zeros((n,), dtype=self.precision_dtype) else: filt_y=zeros(n, dtype=self.precision_dtype) self.pre_x=zeros(n, dtype=self.precision_dtype) self.filt_b_gpu=gpuarray.to_gpu(filt_b_gpu.T.flatten()) # transform to Fortran order for better GPU mem self.filt_a_gpu=gpuarray.to_gpu(filt_a_gpu.T.flatten()) # access speeds self.filt_state=gpuarray.to_gpu(filt_state.T.flatten()) self.unroll_filterorder = unroll_filterorder if unroll_filterorder is None: if m<=32: unroll_filterorder = True else: unroll_filterorder = False # TODO: improve code, check memory access patterns, maybe use local memory code=''' #define x(s,i) _x[(s)*n+(i)] #define y(s,i) _y[(s)*n+(i)] #define a(i,j,k) _a[(i)+(j)*n+(k)*n*m] #define b(i,j,k) _b[(i)+(j)*n+(k)*n*m] #define zi(i,j,k) _zi[(i)+(j)*n+(k)*n*(m-1)] __global__ void filt(SCALAR *_b, SCALAR *_a, SCALAR *_x, SCALAR *_zi, SCALAR *_y, int numsamples) { int j = blockIdx.x * blockDim.x + threadIdx.x; if(j>=n) return; for(int s=0; s1: # we need to do this so as not to alter the values in x in the C code below # but if zi.shape[2] is 1 there is only one filter in the chain and the # copy operation at the end of the C code will never happen. x = array(x, copy=True) y = empty_like(x) n, m, p = b.shape n1, m1, p1 = a.shape numsamples = x.shape[0] if n1!=n or m1!=m or p1!=p or x.shape!=(numsamples, n) or zi.shape!=(n, m, p): raise ValueError('Data has wrong shape.') if numsamples>1 and not x.flags['C_CONTIGUOUS']: raise ValueError('Input data must be C_CONTIGUOUS') if not b.flags['F_CONTIGUOUS'] or not a.flags['F_CONTIGUOUS'] or not zi.flags['F_CONTIGUOUS']: raise ValueError('Filter parameters must be F_CONTIGUOUS') code = ''' #define X(s,i) x[(s)*n+(i)] #define Y(s,i) y[(s)*n+(i)] #define A(i,j,k) a[(i)+(j)*n+(k)*n*m] #define B(i,j,k) b[(i)+(j)*n+(k)*n*m] #define Zi(i,j,k) zi[(i)+(j)*n+(k)*n*(m-1)] for(int s=0; s=0 x[ind]=A0*log(x[ind]*B+1.0) ind = x<0 dtemp = (-x[ind])**C tempA = -A0*(dtemp+D)/(3*dtemp+D); x[ind]=tempA*log(abs(x[ind])*B+1.0) return x class TanCarney(CombinedFilterbank): ''' Class implementing the nonlinear auditory filterbank model as described in Tan, G. and Carney, L., "A phenomenological model for the responses of auditory-nerve fibers. II. Nonlinear tuning with a frequency glide", JASA 2003. The model consists of a control path and a signal path. The control path controls both its own bandwidth via a feedback loop and also the bandwidth of the signal path. Initialised with arguments: ``source`` Source of the cochlear model. ``cf`` List or array of center frequencies. 
``update_interval`` Interval in samples controlling how often the band pass filter of the signal pathway is updated. Smaller values are more accurate but increase the computation time. ``param`` Dictionary used to overwrite the default parameters given in the original paper. ''' def __init__(self, source,cf,update_interval,param={}): CombinedFilterbank.__init__(self, source) source = self.get_modified_source() cf = atleast_1d(cf) nbr_cf=len(cf) samplerate=source.samplerate parameters=set_parameters(cf,param) if int(source.samplerate)!=50000: warnings.warn('To use the TanCarney cochlear model the sample rate should be 50kHz') # if not have_scikits_samplerate: # raise ImportError('To use the PMFR cochlear model the sample rate should be 50kHz and scikits.samplerate package is needed for resampling') # #source=source.resample(50*kHz) # warnings.warn('The input to the PMFR cochlear model has been resampled to 50kHz' # ##### Control Path #### # band pass filter control_coef = Control_Coefficients(cf, samplerate) [filt_b,filt_a] = control_coef.return_coefficients(0) BP_control = LinearFilterbank(source,filt_b,filt_a) # first non linearity of control path Acp,Bcp,Ccp=100.,2.5,0.60 func_NL1_control=lambda x:sign(x)*Bcp*log(1.+Acp*abs(x)**Ccp) NL1_control=FunctionFilterbank(BP_control,func_NL1_control) # second non linearity of control path asym,s0,x1,s1=7.,8.,5.,3. shift = 1./(1.+asym) x0 = s0*log((1.0/shift-1)/(1+exp(x1/s1))) func_NL2_control=lambda x:(1.0/(1.0+exp(-(x-x0)/s0)*(1.0+exp(-(x-x1)/s1)))-shift)*parameters['nlgain'] NL2_control=FunctionFilterbank(NL1_control,func_NL2_control) #control low pass filter (its output will be used to control the signal path) gain_lp_con = (2*pi*parameters['fc_LP_control'])**3*1.5 LP_control = LowPass_filter(NL2_control,cf,parameters['fc_LP_control'],gain_lp_con,3) #low pass filter for feedback to control band pass (its output will be used to control the control path) gain_lp_fb = parameters['fc_LP_fb']*2*pi *10 LP_feed_back = LowPass_filter(LP_control,cf,parameters['fc_LP_fb'],gain_lp_fb,1) #### signal path #### # band pass filter signal_coef = Signal_Coefficients(cf, samplerate,parameters) [filt_b,filt_a] = signal_coef.return_coefficients(0) BP_signal = LinearFilterbank(source,filt_b,filt_a) ## Saturation saturation = FunctionFilterbank(BP_signal,saturation_fc,A0=0.1,B=2000,C=1.74,D=6.87e-9) ## low pass IHC ihc = LowPass_IHC(saturation,cf,3800,1,7) ### controlers ### updater1=Filter_Update(BP_control,control_coef) #instantiation of the updater for the control path output1 = ControlFilterbank(ihc, LP_feed_back, BP_control,updater1, update_interval) #controler for the band pass filter of the control path updater2=Filter_Update(BP_signal,signal_coef) #instantiation of the updater for the control path output2 = ControlFilterbank(output1, LP_control, BP_signal,updater2, update_interval) #controler for the band pass filter of the control path # # self.control_cont = updater1.param # self.signal_cont = updater2.param # self.set_output(BP_control) self.set_output(output2) brian-1.3.1/brian/hears/filtering/test.py000066400000000000000000000026751167451777000203510ustar00rootroot00000000000000from time import time #http://amtoolbox.sourceforge.net/doc/filters/gammatone.php from brian import * set_global_preferences(useweave=True) from scipy.io import loadmat,savemat from brian.hears import * #from zilany import * simulation_duration = 1*ms set_default_samplerate(100*kHz) #sound = whitenoise(simulation_duration) 
file="/home/bertrand/Data/MatlabProg/brian_hears/ZilanyCarney-JASAcode-2009/sound.mat" X=loadmat(file,struct_as_record=False) sound = Sound(X['sound'].flatten()) sound.samplerate = 100*kHz #sound = sound.atlevel(10*dB) # level in rms dB SPL #X={} #X['sound'] = sound.__array__() #savemat('/home/bertrand/Data/MatlabProg/brian_hears/ZilanyCarney-JASAcode-2009/sound.mat',X) #sound = Sound(randn(1000)) #plot(sound) #show() #sound.samplerate = 100*kHz cf = array([100*Hz,100*Hz,100*Hz,100*Hz])#erbspace(100*Hz, 1000*Hz, 50) # centre frequencies cf = erbspace(100*Hz, 1000*Hz, 500) # centre frequencies param_drnl = {} #param_drnl['lp_nl_cutoff_m'] = 1.1 zilany_filter=ZILANY(sound, cf,32) #zilany_filter=DRNL(sound, cf) t1=time() drnl = zilany_filter.process() print time()-t1 #drnl =zilany_filter.rsigma X={} X['out_BM'] = drnl[:] #X['out_BM'] = zilany_filter.rsigma savemat('/home/bertrand/Data/MatlabProg/brian_hears/ZilanyCarney-JASAcode-2009/out_BM.mat',X) #figure() subplot(211) ##print drnl[:]R plot(drnl[:]) #imshow(flipud(drnl.T), aspect='auto') subplot(212) #print sound plot(sound) #imshow(flipud(dcgc.T), aspect='auto') show() brian-1.3.1/brian/hears/filtering/test2.py000077500000000000000000000026411167451777000204270ustar00rootroot00000000000000from time import time #http://amtoolbox.sourceforge.net/doc/filters/gammatone.php from brian import * set_global_preferences(useweave=True) from scipy.io import loadmat,savemat from brian.hears import * #from zilany import * simulation_duration = 100*ms set_default_samplerate(50*kHz) sound = whitenoise(simulation_duration) #file="/home/bertrand/Data/MatlabProg/brian_hears/Carney/sound.mat" #X=loadmat(file,struct_as_record=False) #sound = Sound(X['sound'].flatten()) sound.samplerate = 50*kHz sound = sound.atlevel(120*dB) # level in rms dB SPL X={} X['sound'] = sound.__array__() savemat('/home/bertrand/Data/MatlabProg/brian_hears/Carney/sound.mat',X) #sound = Sound(randn(1000)) #plot(sound) #show() #sound.samplerate = 100*kHz cf = array([1000*Hz])#erbspace(100*Hz, 1000*Hz, 50) # centre frequencies #cf = erbspace(100*Hz, 1000*Hz, 500) # centre frequencies param_drnl = {} #param_drnl['lp_nl_cutoff_m'] = 1.1 zilany_filter=TAN(sound, cf,1) #zilany_filter=DRNL(sound, cf) t1=time() drnl = zilany_filter.process() print time()-t1 print drnl.shape #drnl =zilany_filter.control_cont #drnl =zilany_filter.signal_cont X={} X['out_BM'] = drnl[:] #X['out_BM'] = zilany_filter.param savemat('/home/bertrand/Data/MatlabProg/brian_hears/Carney/out_BM.mat',X) #figure() subplot(211) ##print drnl[:]R plot(drnl[:]) #imshow(flipud(drnl.T), aspect='auto') subplot(212) #print sound plot(sound) #imshow(flipud(dcgc.T), aspect='auto') show() brian-1.3.1/brian/hears/filtering/zilany.py000066400000000000000000000365651167451777000207050ustar00rootroot00000000000000from brian import * from filterbank import Filterbank,FunctionFilterbank,ControlFilterbank, CombinedFilterbank from filterbanklibrary import * from linearfilterbank import * import warnings from scipy.io import loadmat,savemat from brian.hears import * try: from scikits.samplerate import resample have_scikits_samplerate = True except (ImportError, ValueError): have_scikits_samplerate = False #print have_scikits_samplerate def set_parameters(cf,param): parameters=dict() parameters['fc_LP_control']=800*Hz parameters['fc_LP_fb']=500*Hz parameters['fp1']=1.0854*cf-106.0034 parameters['ta']=10**(log10(cf)*1.0230 + 0.1607) parameters['tb']=10**(log10(cf)*1.4292 - 1.1550) - 1000 parameters['gain80']=10**(log10(cf)*0.5732 + 1.5220) 
parameters['rgain']=10**( log10(cf)*0.4 + 1.9) parameters['average_control']=0.3357 parameters['zero_r']= array(-10**( log10(cf)*1.5-0.9 )) if param: if not isinstance(param, dict): raise Error('given parameters must be a dict') for key in param.keys(): if not parameters.has_key(key): raise Exception(key + ' is invalid key entry for given parameters') parameters[key] = param[key] parameters['nlgain']= (parameters['gain80'] - parameters['rgain'])/parameters['average_control'] return parameters ## def gain_groupdelay(binwidth,centerfreq,cf,tau): tmpcos = cos(2*pi*(centerfreq-cf)*binwidth) dtmp2 = tau*2.0/binwidth c1LP = (dtmp2-1)/(dtmp2+1) c2LP = 1.0/(dtmp2+1) tmp1 = 1+c1LP*c1LP-2*c1LP*tmpcos tmp2 = 2*c2LP*c2LP*(1+tmpcos) wb_gain = sqrt(tmp1/tmp2) grdelay = floor((0.5-(c1LP*c1LP-c1LP*tmpcos)/(1+c1LP*c1LP-2*c1LP*tmpcos))).astype(int) return wb_gain,grdelay def get_taubm(cf,CAgain,taumax): bwfactor = 0.7; factor = 2.5; ratio = 10**(-CAgain/(20.0*factor)) bmTaumax = taumax/bwfactor; bmTaumin = bmTaumax*ratio; return bmTaumax,bmTaumin,ratio def get_tauwb(cf,CAgain,order): #### ratio = 10**(-CAgain/(20.0*order)) #ratio of TauMin/TauMax according to the gain, order */ ##Q10 = pow(10,0.4708*log10(cf/1e3)+0.5469); */ /* 75th percentile */ Q10 = 10**(0.4708*log10(cf/1e3)+0.4664) #/* 50th percentile */ ##Q10 = pow(10,0.4708*log10(cf/1e3)+0.3934); */ /* 25th percentile */ bw = cf/Q10; taumax = 2.0/(2*pi*bw); taumin = taumax*ratio; return taumax,taumin ### function to initialize the chirp filters class Chirp_Coefficients: def __init__(self,cf,taumax,samplerate,rsigma,fcohc): self.nch=len(cf) self.T = 1./ samplerate self.TWOPI = 2*pi self.sigma0 = 1/taumax self.rsigma = rsigma self.fcohc = fcohc self.ipw = 1.01*cf*self.TWOPI-50 self.ipb = 0.2343*self.TWOPI*cf-1104 self.rpa = pow(10, log10(cf)*0.9 + 0.55)+ 2000 self.pzero = pow(10,log10(cf)*0.7+1.6)+500 self.order_of_pole = 10 self.half_order_pole = self.order_of_pole/2 self.order_of_zero = self.half_order_pole self.fs_bilinear = self.TWOPI*cf/tan(self.TWOPI*cf*self.T/2) self.fs_bilinear =tile(self.fs_bilinear.reshape(self.nch,-1),5) self.rzero = -self.pzero self.CF = self.TWOPI*cf self.nch=len(self.CF) self.filt_a = zeros((len(cf),3,self.half_order_pole), order='F') self.filt_a[:,0,:] = 1 self.filt_b = zeros((len(cf),3,self.half_order_pole), order='F') self.preal = zeros((self.nch,3)) self.pimg = zeros((self.nch,6)) self.preal = self.analog_poles_real(0*ones(self.nch),1*ones(self.nch)) self.pimg = self.analog_poles_img() self.CFmat = tile(self.CF.reshape(self.nch,-1),5) self.rzeromat = tile(self.rzero.reshape(self.nch,-1),5) self.Cinitphase = sum(arctan(self.CFmat/(-self.rzeromat))\ -arctan((self.CFmat-self.pimg[:,[0,2,1,0,1]])/(-self.preal[:,[0,2,1,0,1]]))\ -arctan((self.CFmat+self.pimg[:,[0,2,1,0,1]])/(-self.preal[:,[0,2,1,0,1]])),axis=1) self.CFmat10 = tile(self.CF.reshape(self.nch,-1),10) self.Cgain_norm = prod((self.CFmat10-self.pimg[:,[0,1,2,3,4,5,0,3,1,5]])**2+self.preal[:,[0,1,2,0,2,1,0,0,1,1]]**2,axis=1) self.norm_gain = sqrt(self.Cgain_norm)/sqrt(self.CF**2+self.rzero**2)**5 self.norm_gain= tile(self.norm_gain.reshape(self.nch,-1),3) def return_coefficients(self): self.preal = self.analog_poles_real(self.rsigma,self.fcohc) self.phase = sum(-arctan((self.CFmat-self.pimg[:,[0,2,1,0,1]])/(-self.preal[:,[0,2,1,0,1]]))\ -arctan((self.CFmat+self.pimg[:,[0,2,1,0,1]])/(-self.preal[:,[0,2,1,0,1]])),axis=1) self.rzero = -self.CF/tan((self.Cinitphase-self.phase)/self.order_of_zero) self.rzero = tile(self.rzero.reshape(self.nch,-1),5) iord = [0,2,1,0,1] 
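# Map each analog pole/zero pair to a 2nd-order digital section via the
# bilinear transform; fs_bilinear was pre-warped with tan() in __init__ so
# that the digital response is matched at the centre frequency CF.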
temp = (self.fs_bilinear-self.preal[:,iord])**2+self.pimg[:,iord]**2 self.filt_a[:,0,:] = 1 self.filt_a[:,1,:] = -2*(self.fs_bilinear**2-self.preal[:,iord]**2-self.pimg[:,iord]**2)/temp self.filt_a[:,2,:] = ((self.fs_bilinear+self.preal[:,iord])**2+self.pimg[:,iord]**2)/temp self.filt_b[:,0,:] = (-self.rzero+self.fs_bilinear)/temp self.filt_b[:,1,:] = (-2*self.rzero)/temp self.filt_b[:,2,:] = (-self.rzero-self.fs_bilinear)/temp self.filt_b[:,:,4] = self.norm_gain/4.*self.filt_b[:,:,4] return self.filt_b,self.filt_a def analog_poles_real(self,rsigma,fcohc): self.preal[:,0] = -self.sigma0*fcohc-rsigma #0 self.preal[:,1] = self.preal[:,0] - self.rpa #4 self.preal[:,2] = (self.preal[:,0]+self.preal[:,1])*0.5 #2 return self.preal def analog_poles_img(self): self.pimg[:,0] = self.ipw #0 self.pimg[:,1] = self.pimg[:,0] - self.ipb #4 self.pimg[:,2] = (self.pimg[:,0]+self.pimg[:,1])*0.5 #2 self.pimg[:,3] = -self.pimg[:,0] #1 self.pimg[:,4] = -self.pimg[:,2] #3 self.pimg[:,5] = -self.pimg[:,1] #5 return self.pimg #### controlers ##### #definition of the class updater for the signal path bandpass filter class Filter_Update: def __init__(self, target,c1_coefficients,samplerate,cf,bmTaumax,bmTaumin,cohc,TauWBMax,TauWBMin): self.bmTaumax = bmTaumax self.bmTaumin = bmTaumin self.target=target self.samplerate=samplerate self.cf=atleast_1d(cf) self.cohc = cohc self.TauWBMax = TauWBMax self.TauWBMin = TauWBMin bmplace = 11.9 * log10(0.80 + cf / 456.0) self.centerfreq = 456.0*(pow(10,(bmplace+1.2)/11.9)-0.80) self.c1_coefficients=c1_coefficients self.rsigma=[] def __call__(self,input): tmptauc1 = input[-1,:] tauc1 = self.cohc*(tmptauc1-self.bmTaumin)+self.bmTaumin #signal path update self.c1_coefficients.rsigma = 1/tauc1-1/self.bmTaumax # self.target[0].filt_b,self.target[0].filt_a = self.c1_coefficients.return_coefficients() # #control path update # tauwb = self.TauWBMax+(tauc1-self.bmTaumax)*(self.TauWBMax-self.TauWBMin)/(self.bmTaumax-self.bmTaumin) # [wb_gain,self.grdelay] = gain_groupdelay(1./self.samplerate,self.centerfreq,self.cf,tauwb); # # grd[n] = grdelay # if ((grd[n]+n)taumax out[ix,ind] = taumax[ind] return out class LowPass_filter(LinearFilterbank): def __init__(self,source,cf,fc,gain,order): nch = len(cf) TWOPI = 2*pi self.samplerate = source.samplerate c = 2.0 * self.samplerate c1LP = ( c/Hz - TWOPI*fc ) / ( c/Hz + TWOPI*fc ) c2LP = TWOPI*fc/Hz / (TWOPI*fc + c/Hz) b_temp = array([c2LP,c2LP]) a_temp = array([1,-c1LP]) filt_b = tile(b_temp.reshape([2,1]),[nch,1,order]) filt_a = tile(a_temp.reshape([2,1]),[nch,1,order]) filt_b[:,:,0] = filt_b[:,:,0]*gain LinearFilterbank.__init__(self, source, filt_b, filt_a) class ZILANY(CombinedFilterbank): ''' Class implementing the nonlinear auditory filterbank model as described in Tan, G. and Carney, L., "A phenomenological model for the responses of auditory-nerve fibers. II. Nonlinear tuning with a frequency glide", JASA 2003. The model consists of a control path and a signal path. The control path controls both its own bandwidth via a feedback loop and also the bandwidth of the signal path. Initialised with arguments: ``source`` Source of the cochlear model. ``cf`` List or array of center frequencies. ``update_interval`` Interval in samples controlling how often the band pass filter of the signal pathway is updated. Smaller values are more accurate but increase the computation time. ``param`` Dictionary used to overwrite the default parameters given in the original paper. 
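A construction sketch mirroring the accompanying test script (values are
illustrative; the test script uses a 100 kHz sample rate)::

    set_default_samplerate(100*kHz)
    sound = whitenoise(50*ms)
    cf = erbspace(100*Hz, 1*kHz, 50)
    fb = ZILANY(sound, cf, update_interval=32)
    output = fb.process()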
''' def __init__(self, source,cf,update_interval,param={}): file="/home/bertrand/Data/MatlabProg/brian_hears/ZilanyCarney-JASAcode-2009/wbout.mat" X=loadmat(file,struct_as_record=False) wbout = Sound(X['wbout'].flatten()) wbout.samplerate = 100*kHz CombinedFilterbank.__init__(self, source) # source = self.get_modified_source() cf = atleast_1d(cf) nbr_cf=len(cf) samplerate=source.samplerate parameters=set_parameters(cf,param) # if int(source.samplerate)!=50000: # warnings.warn('To use the PMFR cochlear model the sample rate should be 50kHz') # if not have_scikits_samplerate: # raise ImportError('To use the PMFR cochlear model the sample rate should be 50kHz and scikits.samplerate package is needed for resampling') # #source=source.resample(50*kHz) # warnings.warn('The input to the PMFR cochlear model has been resampled to 50kHz') cohc =1 # ohc scaling factor: 1 is normal OHC function; 0 is complete OHC dysfunction cihc = 1 # i ihc scaling factor: 1 is normal IHC function; 0 is complete IHC dysfunction bmplace = 11.9 * log10(0.80 + cf / 456.0) centerfreq = 456.0*(pow(10,(bmplace+1.2)/11.9)-0.80) CAgain = minimum(60,maximum(15*ones(len(cf)),52/2*(tanh(2.2*log10(cf/600)+0.15)+1))) # Parameters for the control-path wideband filter =======*/ bmorder = 3; Taumax,Taumin = get_tauwb(cf,CAgain,bmorder) taubm = cohc*(Taumax-Taumin)+Taumin; ratiowb = Taumin/Taumax; #====== Parameters for the signal-path C1 filter ======*/ bmTaumax,bmTaumin,ratiobm = get_taubm(cf,CAgain,Taumax) bmTaubm = cohc*(bmTaumax-bmTaumin)+bmTaumin; fcohc = bmTaumax/bmTaubm; # Parameters for the control-path wideband filter =======*/ wborder = 3 TauWBMax = Taumin+0.2*(Taumax-Taumin); TauWBMin = TauWBMax/Taumax*Taumin; tauwb = TauWBMax+(bmTaubm-bmTaumax)*(TauWBMax-TauWBMin)/(bmTaumax-bmTaumin); [wbgain,grdelay] = gain_groupdelay(1./self.samplerate,centerfreq,cf,tauwb) # Nonlinear asymmetry of OHC function and IHC C1 transduction function*/ ohcasym = 7.0 ihcasym = 3.0 # ##### Control Path #### # print tauwb*pi*centerfreq # gt_control = BiQuadratic(source, centerfreq,tauwb*2*pi*centerfreq) gt_control = ApproximateGammatone(source, centerfreq, 1./(2*pi*tauwb), order=3) # gt_control = LinearGammachirp(source, centerfreq,tauwb, c=0) # gt_control = Gammatone(source, centerfreq, b=1./(pi*tauwb)) # def wb_gain(x,cf=array([1000]),tauwb=array([.0003]),TauWBMax=array([.0003]),wborder=3): # print x.shape,cf.shape # out=zeros_like(x) # for ix in xrange(len(x)): # out[ix,:] = (tauwb/TauWBMax)**wborder*x[ix,:]*10e3*maximum(ones(len(cf)),cf/5e3) # return out # gt_control1 = FunctionFilterbank(gt_control,wb_gain,cf=cf,tauwb=tauwb,TauWBMax=TauWBMax,wborder=wborder) max_temp = (tauwb/TauWBMax)**wborder*10e3*maximum(ones(len(cf)),cf/5e3) gt_control1 = FunctionFilterbank(gt_control,lambda x:x*max_temp) # first non linearity of control path NL1_control=FunctionFilterbank(gt_control1,boltzman,asym=ohcasym,s0=12.0,s1=5.0,x1=5.0) # #control low pass filter (its output will be used to control the signal path) LP_control = LowPass_filter(NL1_control,cf,600,1.0,2) # # second non linearity of control path NL2_control=FunctionFilterbank(LP_control,OHC_transduction,taumin=bmTaumin, taumax=bmTaumax, asym=ohcasym) # # #### C1 #### rsigma = zeros(len(cf)) c1_coefficients = Chirp_Coefficients(cf, bmTaumax, samplerate, rsigma,1*ones(len(cf))) [filt_b,filt_a] = c1_coefficients.return_coefficients() C1_filter = LinearFilterbank(source,filt_b,filt_a) C1_IHC = FunctionFilterbank(C1_filter,IHC_transduction,slope = 0.1,asym = ihcasym,sign=1) #### C2 #### c2_coefficients = 
Chirp_Coefficients(cf, bmTaumax, samplerate, 0*ones(len(cf)),1./ratiobm) [filt_b,filt_a] = c2_coefficients.return_coefficients() C2_filter = LinearFilterbank(source,filt_b,filt_a) gain_temp = cf**2/2e4 C2_IHC_pre = FunctionFilterbank(C2_filter,lambda x:x*abs(x)*gain_temp) C2_IHC = FunctionFilterbank(C2_IHC_pre,IHC_transduction,slope = 0.2,asym = 1.0,sign=-1) # C_IHC = C1_IHC + C2_IHC # C_IHC_lp = LowPass_filter(C_IHC,cf,3000,1.0,7) #controlers definition updater=Filter_Update([C1_filter],c1_coefficients,samplerate,cf,bmTaumax,bmTaumin,cohc,TauWBMax,TauWBMin) #instantiation of the updater for the control path output = ControlFilterbank(C1_filter, NL2_control, [C1_filter],updater, update_interval) #controler for the band pass filter of the control path self.set_output(output) #line 354brian-1.3.1/brian/hears/hrtf/000077500000000000000000000000001167451777000157665ustar00rootroot00000000000000brian-1.3.1/brian/hears/hrtf/__init__.py000066400000000000000000000000741167451777000201000ustar00rootroot00000000000000from hrtf import * from ircam import * from itd import * brian-1.3.1/brian/hears/hrtf/hrtf.py000066400000000000000000000244531167451777000173130ustar00rootroot00000000000000from brian import * from ..sounds import Sound from ..filtering import FIRFilterbank from copy import copy __all__ = ['HRTF', 'HRTFSet', 'HRTFDatabase', 'make_coordinates'] class HRTF(object): ''' Head related transfer function. **Attributes** ``impulse_response`` The pair of impulse responses (as stereo :class:`Sound` objects) ``fir`` The impulse responses in a format suitable for using with :class:`FIRFilterbank` (the transpose of ``impulse_response``). ``left``, ``right`` The two HRTFs (mono :class:`Sound` objects) ``samplerate`` The sample rate of the HRTFs. **Methods** .. automethod:: apply .. automethod:: filterbank You can get the number of samples in the impulse response with ``len(hrtf)``. ''' def __init__(self, hrir_l, hrir_r=None): if hrir_r is None: hrir = hrir_l else: hrir = Sound((hrir_l, hrir_r), samplerate=hrir_l.samplerate) self.samplerate = hrir.samplerate self.impulse_response = hrir self.left = hrir.left self.right = hrir.right def apply(self, sound): ''' Returns a stereo :class:`Sound` object formed by applying the pair of HRTFs to the mono ``sound`` input. Equivalently, you can write ``hrtf(sound)`` for ``hrtf`` an :class:`HRTF` object. ''' # Note we use an FFT based method for applying HRTFs that is # mathematically equivalent to using convolution (accurate to 1e-15 # in practice) and around 100x faster. if not sound.nchannels==1: raise ValueError('HRTF can only be applied to mono sounds') if len(unique(array([self.samplerate, sound.samplerate], dtype=int)))>1: raise ValueError('HRTF and sound samplerates do not match.') sound = asarray(sound).flatten() # Pad left/right/sound with zeros of length max(impulse response length) # at the beginning, and at the end so that they are all the same length # which should be a power of 2 for efficiency. The reason to pad at # the beginning is that the first output samples are not guaranteed to # be equal because of the delays in the impulse response, but they # exactly equalise after the length of the impulse response, so we just # zero pad. The reason for padding at the end is so that for the FFT we # can just multiply the arrays, which should have the same shape. 
left = asarray(self.left).flatten() right = asarray(self.right).flatten() ir_nmax = max(len(left), len(right)) nmax = max(ir_nmax, len(sound))+ir_nmax nmax = 2**int(ceil(log2(nmax))) leftpad = hstack((left, zeros(nmax-len(left)))) rightpad = hstack((right, zeros(nmax-len(right)))) soundpad = hstack((zeros(ir_nmax), sound, zeros(nmax-ir_nmax-len(sound)))) # Compute FFTs, multiply and compute IFFT left_fft = fft(leftpad, n=nmax) right_fft = fft(rightpad, n=nmax) sound_fft = fft(soundpad, n=nmax) left_sound_fft = left_fft*sound_fft right_sound_fft = right_fft*sound_fft left_sound = ifft(left_sound_fft).real right_sound = ifft(right_sound_fft).real # finally, we take only the unpadded parts of these left_sound = left_sound[ir_nmax:ir_nmax+len(sound)] right_sound = right_sound[ir_nmax:ir_nmax+len(sound)] return Sound((left_sound, right_sound), samplerate=self.samplerate) __call__ = apply def get_fir(self): return array(self.impulse_response.T, copy=True) fir = property(fget=get_fir) def filterbank(self, source, **kwds): ''' Returns an :class:`FIRFilterbank` object that can be used to apply the HRTF as part of a chain of filterbanks. ''' return FIRFilterbank(source, self.fir, **kwds) def __len__(self): return self.impulse_response.shape[0] def make_coordinates(**kwds): ''' Creates a numpy record array from the keywords passed to the function. Each keyword/value pair should be the name of the coordinate the array of values of that coordinate for each location. Returns a numpy record array. For example:: coords = make_coordinates(azimuth=[0, 30, 60, 0, 30, 60], elevation=[0, 0, 0, 30, 30, 30]) print coords['azimuth'] ''' dtype = [(name, float) for name in kwds.keys()] n = len(kwds.values()[0]) x = zeros(n, dtype=dtype) for name, values in kwds.items(): x[name] = values return x class HRTFSet(object): ''' A collection of HRTFs, typically for a single individual. Normally this object is created automatically by an :class:`HRTFDatabase`. **Attributes** ``hrtf`` A list of ``HRTF`` objects for each index. ``num_indices`` The number of HRTF locations. You can also use ``len(hrtfset)``. ``num_samples`` The sample length of each HRTF. ``fir_serial``, ``fir_interleaved`` The impulse responses in a format suitable for using with :class:`FIRFilterbank`, in serial (LLLLL...RRRRR....) or interleaved (LRLRLR...). **Methods** .. automethod:: subset .. automethod:: filterbank You can access an HRTF by index via ``hrtfset[index]``, or by its coordinates via ``hrtfset(coord1=val1, coord2=val2)``. **Initialisation** ``data`` An array of shape (2, num_indices, num_samples) where data[0,:,:] is the left ear and data[1,:,:] is the right ear, num_indices is the number of HRTFs for each ear, and num_samples is the length of the HRTF. ``samplerate`` The sample rate for the HRTFs (should have units of Hz). ``coordinates`` A record array of length ``num_indices`` giving the coordinates of each HRTF. You can use :func:`make_coordinates` to help with this. 
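    **Example**

    A minimal construction sketch (the impulse-response array below is just
    zeros used as a placeholder, and the 44.1 kHz samplerate is an assumed
    default)::

        coords = make_coordinates(azim=[0, 30, 60], elev=[0, 0, 0])
        data = zeros((2, 3, 512))   # (num_ears, num_indices, num_samples)
        hrtfset = HRTFSet(data, 44.1*kHz, coords)
        hrtf = hrtfset(azim=30, elev=0)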
''' def __init__(self, data, samplerate, coordinates): self.data = data self.samplerate = samplerate self.coordinates = coordinates self.hrtf = [] for i in xrange(self.num_indices): l = Sound(self.data[0, i, :], samplerate=self.samplerate) r = Sound(self.data[1, i, :], samplerate=self.samplerate) self.hrtf.append(HRTF(l, r)) def __getitem__(self, key): return self.hrtf[key] def __call__(self, **kwds): I = ones(self.num_indices, dtype=bool) for key, value in kwds.items(): I = logical_and(I, abs(self.coordinates[key]-value)<1e-10) indices = I.nonzero()[0] if len(indices)==0: raise IndexError('No HRTF exists with those coordinates') if len(indices)>1: raise IndexError('More than one HRTF exists with those coordinates') return self.hrtf[indices[0]] def subset(self, condition): ''' Generates the subset of the set of HRTFs whose coordinates satisfy the ``condition``. This should be one of: a boolean array of length the number of HRTFs in the set, with values of True/False to indicate if the corresponding HRTF should be included or not; an integer array with the indices of the HRTFs to keep; or a function whose argument names are names of the parameters of the coordinate system, e.g. ``condition=lambda azim:azim`__. The database object can be initialised with the following arguments: ``basedir`` The directory where the database has been downloaded and extracted, e.g. ``r'D:\HRTF\IRCAM'``. Multiple directories in a list can be provided as well (e.g IRCAM and IRCAM New). ``compensated=False`` Whether to use the raw or compensated impulse responses. ``samplerate=None`` If specified, you can resample the impulse responses to a different samplerate, otherwise uses the default 44.1 kHz. The coordinates are pairs ``(azim, elev)`` where ``azim`` ranges from 0 to 345 degrees in steps of 15 degrees, and elev ranges from -45 to 90 in steps of 15 degrees. After loading the database, the attribute 'subjects' gives all the subjects number that were detected as installed. **Obtaining the database** The database can be downloaded `here `__. Each subject archive should be extracted to a folder (e.g. IRCAM) with the names of the subject, e.g. IRCAM/IRC_1002, etc. ''' def __init__(self, basedir, compensated=False, samplerate=None): if not isinstance(basedir, (list, tuple)): basedir = [basedir] self.basedir = basedir self.compensated = compensated names = [] for basedir in self.basedir: names += glob(os.path.join(basedir, 'IRC_*')) splitnames = [os.path.split(name) for name in names] p = re.compile('IRC_\d{4,4}') self.subjects = [int(name[4:8]) for base, name in splitnames if not (p.match(name[-8:]) is None)] if samplerate is not None: raise ValueError('Custom samplerate not supported.') self.samplerate = samplerate def load_subject(self, subject, rounddot5 = False): subject = str(subject) if subject[0] == '3': # this is the case only for stuffed animals recordings # IRC_30.. 
samplerate = 192*kHz else: samplerate = 44.1*kHz ok = False k = 0 while k < len(self.basedir) and not ok: try: filename = os.path.join(self.basedir[k], 'IRC_' + subject) if self.compensated: filename = os.path.join(filename, 'COMPENSATED/MAT/HRIR/IRC_' + subject + '_C_HRIR.mat') else: filename = os.path.join(filename, 'RAW/MAT/HRIR/IRC_' + subject + '_R_HRIR.mat') m = loadmat(filename, struct_as_record=True) ok = True except IOError: ok = False k += 1 if not ok: raise IOError("Couldn't find the HRTF files for subject "+str(subject)) if 'l_hrir_S' in m.keys(): # RAW DATA affix = '_hrir_S' else: # COMPENSATED DATA affix = '_eq_hrir_S' l, r = m['l' + affix], m['r' + affix] azim = l['azim_v'][0][0][:, 0] elev = l['elev_v'][0][0][:, 0] if len(azim) == len(elev) and len(azim) == 1: # it is the case with IRCAM_New db # - the coordinates are 1xN instead of Nx1 # - some measures that should be at the same elevation are # at very close but different elevations (7.47 # vs. 7.5). This is annoying for interpolation. Hence I # allow one to round the elevations conv = lambda x : x if rounddot5: conv = lambda x: np.round(2*x)/2 azim = conv(l['azim_v'][0][0][0, :]) elev = l['elev_v'][0][0][0, :] coords = make_coordinates(azim=azim, elev=elev) l = l['content_m'][0][0] r = r['content_m'][0][0] # self.data has shape (num_ears=2, num_indices, hrir_length) data = vstack((reshape(l, (1,) + l.shape), reshape(r, (1,) + r.shape))) hrtfset = HRTFSet(data, samplerate, coords) hrtfset.name = 'IRCAM_'+subject return hrtfset brian-1.3.1/brian/hears/hrtf/itd.py000066400000000000000000000076471167451777000171360ustar00rootroot00000000000000from brian import * from hrtf import * from ..prefs import get_samplerate from ..filtering.fractionaldelay import FractionalDelay from ..sounds import silence __all__ = ['HeadlessDatabase'] speed_of_sound_in_air = 343.2*metre/second class HeadlessDatabase(HRTFDatabase): ''' Database for creating HRTFSet with artificial interaural time-differences Initialisation keywords: ``n``, ``azim_max``, ``diameter`` Specify the ITDs for two ears separated by distance ``diameter`` with no head. ITDs corresponding to ``n`` angles equally spaced between ``-azim_max`` and ``azim_max`` are used. The default diameter is that which gives the maximum ITD as 650 microseconds. The ITDs are computed with the formula ``diameter*sin(azim)/speed_of_sound_in_air``. In this case, the generated :class:`HRTFSet` will have coordinates of ``azim`` and ``itd``. ``itd`` Instead of specifying the keywords above, just give the ITDs directly. In this case, the generated :class:`HRTFSet` will have coordinates of ``itd`` only. ``fractional_itds=False`` Set this to ``True`` to allow ITDs with a fractional multiple of the timestep ``1/samplerate``. Note that the filters used to do this are not perfect and so this will introduce a small amount of numerical error, and so shouldn't be used unless this level of timing precision is required. See :class:`FractionalDelay` for more details. To get the HRTFSet, the simplest thing to do is just:: hrtfset = HeadlessDatabase(13).load_subject() The generated ITDs can be returned using the ``itd`` attribute of the :class:`HeadlessDatabase` object. If ``fractional_itds=False`` then Note that the delays induced in the left and right channels are not symmetric as making them so wastes half the samplerate (if the delay to the left channel is itd/2 and the delay to the right channel is -itd/2). 
Instead, for each channel either the left channel delay is 0 and the right channel delay is -itd (if itd<0) or the left channel delay is itd and the right channel delay is 0 (if itd>0). If ``fractional_itds=True`` then delays in the left and right channels will be symmetric around a global offset of ``delay_offset``. ''' def __init__(self, n=None, azim_max=pi/2, diameter=speed_of_sound_in_air*650*usecond, itd=None, samplerate=None, fractional_itds=False): if itd is None: azim = linspace(-azim_max, azim_max, n) itd = diameter*sin(azim)/speed_of_sound_in_air coords = make_coordinates(azim=azim, itd=itd) else: coords = make_coordinates(itd=itd) self.itd = itd samplerate = self.samplerate = get_samplerate(samplerate) if not fractional_itds: dl = itd.copy() dr = -itd dl[dl<0] = 0 dr[dr<0] = 0 dl = array(rint(dl*samplerate), dtype=int) dr = array(rint(dr*samplerate), dtype=int) idxmax = max(amax(dl), amax(dr)) data = zeros((2, len(itd), idxmax+1)) data[0, arange(len(itd)), dl] = 1 data[1, arange(len(itd)), dr] = 1 else: delays = hstack((itd/2, -itd/2)) fd = FractionalDelay(silence(1*ms, samplerate=samplerate), delays) ir = fd.impulse_response data = zeros((2, len(itd), fd.filter_length)) data[0, :, :] = ir[:len(itd), :] data[1, :, :] = ir[len(itd):, :] self.delay_offset = fd.delay_offset self.hrtfset = HRTFSet(data, samplerate, coords) self.hrtfset.name = 'ITDDatabaseSubject' self.subjects = ['0'] def load_subject(self, subject='0'): return self.hrtfset brian-1.3.1/brian/hears/onlinesounds.py000066400000000000000000000066621167451777000201270ustar00rootroot00000000000000# TODO: update all of this with the new interface/buffering mechanism # TODO: decide on a good interface for online sounds, that is general from brian import * from numpy import * import numpy import wave import array as pyarray import time try: import pygame have_pygame = True except ImportError: have_pygame = False try: from scikits.samplerate import resample have_scikits_samplerate = True except (ImportError, ValueError): have_scikits_samplerate = False from bufferable import Bufferable from sounds import Sound, BaseSound __all__ = ['OnlineSound', 'OnlineWhiteNoise', 'OnlineWhiteNoiseBuffered', 'OnlineWhiteNoiseShifted', ] class OnlineSound(BaseSound): def __init__(self): pass def update(self): pass class OnlineWhiteNoise(OnlineSound): ''' Noise generator which produces one sample at a time online input parameters are the mean mu and the variance sigma default mu=0, sigma=1 ''' def __init__(self,mu=None,sigma=None,tomux=1): if mu==None: self.mu=0 if sigma==None: self.sigma=1 self.tomux=tomux self.nchannels = 1 def buffer_init(self): pass def buffer_fetch(self, start, end): samples = end-start temp = (self.mu+sqrt(self.sigma)*randn(samples))*self.tomux temp = temp.reshape(len(temp),1) return temp class OnlineWhiteNoiseBuffered(OnlineSound): def __init__(self,samplerate,mu,sigma,max_abs_itd): self.samplerate=samplerate self.length_buffer=int(max_abs_itd * self.samplerate) self.mu=mu self.sigma=sigma self.buffer=[0]*(2*self.length_buffer+1) def update(self): self.buffer.pop() self.buffer.insert(0,self.mu+self.sigma*randn(1)) return self.buffer[self.length_buffer] class OnlineWhiteNoiseShifted(OnlineSound): def __init__(self,samplerate,online_white_noise_buffered,shift=lambda:randn(1)*ms,time_interval=-1*ms): #self.shift_applied=[] self.samplerate=samplerate self.interval_in_sample= int(time_interval *self.samplerate) #print self.interval_in_sample self.count=0 self.shift=shift 
self.length_buffer=online_white_noise_buffered.length_buffer self.shift_in_sample=int(shift()* self.samplerate) if abs(self.shift_in_sample) > self.length_buffer: self.shift_in_sample=sign(self.shift_in_sample)*self.length_buffer self.ITDused=[] self.ITDused.append(self.shift_in_sample/self.samplerate) self.reference=online_white_noise_buffered def update(self): if self.count ==self.interval_in_sample: self.shift_in_sample=int(self.shift()* self.samplerate) if abs(self.shift_in_sample) > self.length_buffer: self.shift_in_sample=sign(self.shift_in_sample)*self.length_buffer self.ITDused.append(self.shift_in_sample/self.samplerate) #print self.shift() self.count=0 self.count=self.count+1 # print self.length_buffer # print self.shift_in_sample # print self.length_buffer+1+self.shift_in_sample return self.reference.buffer[self.length_buffer+self.shift_in_sample] brian-1.3.1/brian/hears/prefs.py000066400000000000000000000007141167451777000165160ustar00rootroot00000000000000from brian import * __all__ = ['get_samplerate', 'set_default_samplerate'] default_samplerate = 44.1*kHz def get_samplerate(samplerate): if samplerate is None: return default_samplerate else: return samplerate def set_default_samplerate(samplerate): ''' Sets the default samplerate for Brian hears objects, by default 44.1 kHz. ''' global default_samplerate default_samplerate = samplerate brian-1.3.1/brian/hears/sounds.py000066400000000000000000001325301167451777000167140ustar00rootroot00000000000000from brian import * from numpy import * import numpy import array as pyarray import time import struct try: import pygame have_pygame = True except ImportError: have_pygame = False try: from scikits.samplerate import resample have_scikits_samplerate = True except (ImportError, ValueError): have_scikits_samplerate = False from bufferable import Bufferable from prefs import get_samplerate from db import dB, dB_type, dB_error, gain from scipy.signal import fftconvolve, lfilter from scipy.misc import factorial __all__ = ['BaseSound', 'Sound', 'pinknoise','brownnoise','powerlawnoise', 'whitenoise', 'irns', 'irno', 'tone', 'click', 'clicks', 'silence', 'sequence', 'harmoniccomplex', 'loadsound', 'savesound', 'play', 'vowel' ] _mixer_status = [-1,-1] class BaseSound(Bufferable): ''' Base class for Sound and OnlineSound ''' pass class Sound(BaseSound, numpy.ndarray): ''' Class for working with sounds, including loading/saving, manipulating and playing. For an overview, see :ref:`sounds_overview`. **Initialisation** The following arguments are used to initialise a sound object ``data`` Can be a filename, an array, a function or a sequence (list or tuple). If its a filename, the sound file (WAV or AIFF) will be loaded. If its an array, it should have shape ``(nsamples, nchannels)``. If its a function, it should be a function f(t). If its a sequence, the items in the sequence can be filenames, functions, arrays or Sound objects. The output will be a multi-channel sound with channels the corresponding sound for each element of the sequence. ``samplerate=None`` The samplerate, if necessary, will use the default (for an array or function) or the samplerate of the data (for a filename). ``duration=None`` The duration of the sound, if initialising with a function. **Loading, saving and playing** .. automethod:: load .. automethod:: save .. automethod:: play **Properties** .. autoattribute:: duration .. autoattribute:: nsamples .. autoattribute:: nchannels .. autoattribute:: times .. autoattribute:: left .. autoattribute:: right .. 
automethod:: channel **Generating sounds** All sound generating methods can be used with durations arguments in samples (int) or units (e.g. 500*ms). One can also set the number of channels by setting the keyword argument nchannels to the desired value. Notice that for noise the channels will be generated independantly. .. automethod:: tone .. automethod:: whitenoise .. automethod:: powerlawnoise .. automethod:: brownnoise .. automethod:: pinknoise .. automethod:: silence .. automethod:: click .. automethod:: clicks .. automethod:: harmoniccomplex .. automethod:: vowel **Timing and sequencing** .. automethod:: sequence(*sounds, samplerate=None) .. automethod:: repeat .. automethod:: extended .. automethod:: shifted .. automethod:: resized **Slicing** One can slice sound objects in various ways, for example ``sound[100*ms:200*ms]`` returns the part of the sound between 100 ms and 200 ms (not including the right hand end point). If the sound is less than 200 ms long it will be zero padded. You can also set values using slicing, e.g. ``sound[:50*ms] = 0`` will silence the first 50 ms of the sound. The syntax is the same as usual for Python slicing. In addition, you can select a subset of the channels by doing, for example, ``sound[:, -5:]`` would be the last 5 channels. For time indices, either times or samples can be given, e.g. ``sound[:100]`` gives the first 100 samples. In addition, steps can be used for example to reverse a sound as ``sound[::-1]``. **Arithmetic operations** Standard arithemetical operations and numpy functions work as you would expect with sounds, e.g. ``sound1+sound2``, ``3*sound`` or ``abs(sound)``. **Level** .. autoattribute:: level .. automethod:: atlevel .. autoattribute:: maxlevel .. automethod:: atmaxlevel **Ramping** .. automethod:: ramp .. automethod:: ramped **Plotting** .. automethod:: spectrogram .. 
automethod:: spectrum ''' duration = property(fget=lambda self:len(self) / self.samplerate, doc='The length of the sound in seconds.') nsamples = property(fget=lambda self:len(self), doc='The number of samples in the sound.') times = property(fget=lambda self:arange(len(self), dtype=float) / self.samplerate, doc='An array of times (in seconds) corresponding to each sample.') nchannels = property(fget=lambda self:self.shape[1], doc='The number of channels in the sound.') left = property(fget=lambda self:self.channel(0), doc='The left channel for a stereo sound.') right = property(fget=lambda self:self.channel(1), doc='The right channel for a stereo sound.') @check_units(samplerate=Hz, duration=second) def __new__(cls, data, samplerate=None, duration=None): if isinstance(data, numpy.ndarray): samplerate = get_samplerate(samplerate) # if samplerate is None: # raise ValueError('Must specify samplerate to initialise Sound with array.') if duration is not None: raise ValueError('Cannot specify duration when initialising Sound with array.') x = array(data, dtype=float) elif isinstance(data, str): if duration is not None: raise ValueError('Cannot specify duration when initialising Sound from file.') if samplerate is not None: raise ValueError('Cannot specify samplerate when initialising Sound from a file.') x = Sound.load(data) samplerate = x.samplerate elif callable(data): samplerate = get_samplerate(samplerate) # if samplerate is None: # raise ValueError('Must specify samplerate to initialise Sound with function.') if duration is None: raise ValueError('Must specify duration to initialise Sound with function.') L = int(rint(duration * samplerate)) t = arange(L, dtype=float) / samplerate x = data(t) elif isinstance(data, (list, tuple)): kwds = {} if samplerate is not None: kwds['samplerate'] = samplerate if duration is not None: kwds['duration'] = duration channels = tuple(Sound(c, **kwds) for c in data) x = hstack(channels) samplerate = channels[0].samplerate else: raise TypeError('Cannot initialise Sound with data of class ' + str(data.__class__)) if len(x.shape)==1: x.shape = (len(x), 1) x = x.view(cls) x.samplerate = samplerate x.buffer_init() return x def __array_wrap__(self, obj, context=None): handled = False x = numpy.ndarray.__array_wrap__(self, obj, context) if not hasattr(x, 'samplerate') and hasattr(self, 'samplerate'): x.samplerate = self.samplerate if context is not None: ufunc = context[0] args = context[1] return x def __array_finalize__(self,obj): if obj is None: return self.samplerate = getattr(obj, 'samplerate', None) def buffer_init(self): pass def buffer_fetch(self, start, end): if start<0: raise IndexError('Can only use positive indices in buffer.') samples = end-start X = asarray(self)[start:end, :] if X.shape[0] int(self.samplerate): self = self.resample(other.samplerate) elif int(other.samplerate) < int(self.samplerate): other = other.resample(self.samplerate) if len(self) > len(other): other = other.resized(len(self)) elif len(self) < len(other): self = self.resized(len(other)) return Sound(numpy.ndarray.__add__(self, other), samplerate=self.samplerate) else: x = numpy.ndarray.__add__(self, other) return Sound(x, self.samplerate) __radd__ = __add__ # getslice and setslice need to be implemented for compatibility reasons, # but __getitem__ covers all the functionality so we just use that def __getslice__(self, start, stop): return self.__getitem__(slice(start, stop)) def __setslice__(self, start, stop, seq): return self.__setitem__(slice(start, stop), seq) def 
__getitem__(self,key): channel = slice(None) if isinstance(key, tuple): channel = key[1] key = key[0] if isinstance(key, int): return np.ndarray.__getitem__(self, key) if isinstance(key, float): return np.ndarray.__getitem__(self, round(key*self.samplerate)) sliceattr = [v for v in [key.start, key.stop] if v is not None] slicedims = array([units.have_same_dimensions(flag, second) for flag in sliceattr]) attrisint = array([isinstance(v, int) for v in sliceattr]) s = sum(attrisint) if s!=0 and s!=len(sliceattr): raise ValueError('Slice attributes must be all ints or all times') if s==len(sliceattr): # all ints start = key.start or 0 stop = key.stop or self.shape[0] step = key.step or 1 if start>=0 and stop<=self.shape[0]: return Sound(np.ndarray.__getitem__(self, (key, channel)), self.samplerate) else: startpad = max(-start, 0) endpad = max(stop-self.shape[0], 0) startmid = max(start, 0) endmid = min(stop, self.shape[0]) atstart = zeros((startpad, self.shape[1])) atend = zeros((endpad, self.shape[1])) return Sound(vstack((atstart, asarray(self)[startmid:endmid:step], atend)), self.samplerate) if not slicedims.all(): raise DimensionMismatchError('Slicing', *[units.get_unit(d) for d in sliceattr]) start = key.start or 0*msecond stop = key.stop or self.duration step = key.step or 1 if int(step)!=step: #resampling raise NotImplementedError start = int(rint(start*self.samplerate)) stop = int(rint(stop*self.samplerate)) return self.__getitem__((slice(start,stop,step),channel)) def __setitem__(self,key,value): channel=slice(None) if isinstance(key,tuple): channel=key[1] key=key[0] if isinstance(key,int) or isinstance(key,float): return np.ndarray.__setitem__(self,(key,channel),value) if isinstance(key,Quantity): return np.ndarray.__setitem__(self,(int(rint(key*self.samplerate)),channel),value) sliceattr = [v for v in [key.start, key.step, key.stop] if v is not None] slicedims=array([units.have_same_dimensions(flag,second) for flag in sliceattr]) if not slicedims.any(): # If value is a mono sound its shape will be (N, 1) but the numpy # setitem will have shape (N,) so in this case it's a shape mismatch # so we squeeze the array to make sure this doesn't happen. if isinstance(value,Sound) and channel!=slice(None): value=value.squeeze() return asarray(self).__setitem__((key,channel),value) # print np.ndarray.__getitem__(self, (key, channel)).shape # print key, channel # return np.ndarray.__setitem__(self, (key, channel), value) if not slicedims.all(): raise DimensionMismatchError('Slicing',*[units.get_unit(d) for d in sliceattr]) if key.__getattribute__('step') is not None: # resampling? raise NotImplementedError start = key.start stop = key.stop or self.duration if (start is not None and start<0*ms) or stop > self.duration: raise IndexError('Slice bigger than Sound object') if start is not None: start = int(rint(start*self.samplerate)) #if (stop is not None) and (not stop=self.duration): stop = int(stop*self.samplerate) stop = int(rint(stop*self.samplerate)) return self.__setitem__((slice(start,stop),channel),value) def extended(self, duration): ''' Returns the Sound with length extended by the given duration, which can be the number of samples or a length of time in seconds. ''' duration = get_duration(duration, self.samplerate) return self[:self.nsamples+duration] def resized(self, L): ''' Returns the Sound with length extended (or contracted) to have L samples. 
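        A minimal usage sketch (using a white noise as an arbitrary input)::

            snd = whitenoise(100*ms)
            longer = snd.resized(snd.nsamples + 100)   # zero padded at the end
            shorter = snd.resized(1000)                # truncated to 1000 samples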
''' if L == len(self): return self elif L < len(self): return Sound(self[:L, :], samplerate=self.samplerate) else: padding = zeros((L - len(self), self.nchannels)) return Sound(concatenate((self, padding)), samplerate=self.samplerate) def shifted(self, duration, fractional=False, filter_length=2048): ''' Returns the sound delayed by duration, which can be the number of samples or a length of time in seconds. Normally, only integer numbers of samples will be used, but if ``fractional=True`` then the filtering method from `http://www.labbookpages.co.uk/audio/beamforming/fractionalDelay.html `__ will be used (introducing some small numerical errors). With this method, you can specify the ``filter_length``, larger values are slower but more accurate, especially at higher frequencies. The large default value of 2048 samples provides good accuracy for sounds with frequencies above 20 Hz, but not for lower frequency sounds. If you are restricted to high frequency sounds, a smaller value will be more efficient. Note that if ``fractional=True`` then ``duration`` is assumed to be a time not a number of samples. ''' if not fractional: if not isinstance(duration, int): duration = int(rint(duration*self.samplerate)) if duration>=0: y = vstack((zeros((duration, self.nchannels)), self)) return Sound(y, samplerate=self.samplerate) else: return self[-duration:, :] else: if self.nchannels>1: sounds = [self.channel(i).shifted(duration, fractional=True, filter_length=filter_length) for i in xrange(self.nchannels)] return Sound(array(sounds), samplerate=self.samplerate) # Adapted from # http://www.labbookpages.co.uk/audio/beamforming/fractionalDelay.html delay = duration*self.samplerate if delay>=0: idelay = int(delay) elif delay<0: idelay = -int(-delay) delay -= idelay centre_tap = filter_length // 2 t = arange(filter_length) x = t-delay if abs(round(delay)-delay)<1e-10: tap_weight = array(x==centre_tap, dtype=float) else: sinc = sin(pi*(x-centre_tap))/(pi*(x-centre_tap)) window = 0.54-0.46*cos(2.0*pi*(x+0.5)/filter_length) # Hamming window tap_weight = window*sinc if filter_length<256: y = convolve(tap_weight, self.flatten()) else: y = fftconvolve(tap_weight, self.flatten()) y = y[filter_length/2:-filter_length/2] sound = Sound(y, self.samplerate) sound = sound.shifted(idelay) return sound def repeat(self, n): ''' Repeats the sound n times ''' x = vstack((self,)*n) return Sound(x, samplerate=self.samplerate) ### TODO: test this - I haven't installed scikits.samplerate on windows # it should work, according to the documentation 2D arrays are acceptable # in the format we use fof sounds here @check_units(samplerate=Hz) def resample(self, samplerate, resample_type='sinc_best'): ''' Returns a resampled version of the sound. ''' if not have_scikits_samplerate: raise ImportError('Need scikits.samplerate package for resampling') y = array(resample(self, float(samplerate / self.samplerate), resample_type), dtype=float64) return Sound(y, samplerate=samplerate) def _init_mixer(self): global _mixer_status if _mixer_status==[-1,-1] or _mixer_status[0]!=self.nchannels or _mixer_status != self.samplerate: pygame.mixer.quit() pygame.mixer.init(int(self.samplerate), -16, self.nchannels) _mixer_status=[self.nchannels,self.samplerate] def play(self, normalise=False, sleep=False): ''' Plays the sound (normalised to avoid clipping if required). If sleep=True then the function will wait until the sound has finished playing before returning. 
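        Sample usage (a short test tone, assuming the default samplerate)::

            snd = tone(500*Hz, 200*ms)
            snd.play(normalise=True, sleep=True)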
''' if self.nchannels>2: raise ValueError("Can only play sounds with 1 or 2 channels.") self._init_mixer() if normalise: a = amax(abs(self)) else: a = 1 x = array((2 ** 15 - 1) * clip(self / a, -1, 1), dtype=int16) if self.nchannels==1: x.shape = x.size # Make sure pygame receives an array in C-order x = pygame.sndarray.make_sound(ascontiguousarray(x)) x.play() if sleep: time.sleep(self.duration) def spectrogram(self, low=None, high=None, log_power=True, other = None, **kwds): ''' Plots a spectrogram of the sound Arguments: ``low=None``, ``high=None`` If these are left unspecified, it shows the full spectrogram, otherwise it shows only between ``low`` and ``high`` in Hz. ``log_power=True`` If True the colour represents the log of the power. ``**kwds`` Are passed to Pylab's ``specgram`` command. Returns the values returned by pylab's ``specgram``, namely ``(pxx, freqs, bins, im)`` where ``pxx`` is a 2D array of powers, ``freqs`` is the corresponding frequencies, ``bins`` are the time bins, and ``im`` is the image axis. ''' if self.nchannels>1: raise ValueError('Can only plot spectrograms for mono sounds.') if other is not None: x = self.flatten()-other.flatten() else: x = self.flatten() pxx, freqs, bins, im = specgram(x, Fs=self.samplerate, **kwds) if low is not None or high is not None: restricted = True if low is None: low = 0*Hz if high is None: high = amax(freqs)*Hz I = logical_and(low <= freqs, freqs <= high) I2 = where(I)[0] I2 = [max(min(I2) - 1, 0), min(max(I2) + 1, len(freqs) - 1)] Z = pxx[I2[0]:I2[-1], :] else: restricted = False Z = pxx if log_power: Z[Z < 1e-20] = 1e-20 # no zeros because we take logs Z = 10 * log10(Z) Z = flipud(Z) if restricted: imshow(Z, extent=(0, amax(bins), freqs[I2[0]], freqs[I2[-1]]), aspect='auto') else: imshow(Z, extent=(0, amax(bins), freqs[0], freqs[-1]), aspect='auto') xlabel('Time (s)') ylabel('Frequency (Hz)') return (pxx, freqs, bins, im) def spectrum(self, low=None, high=None, log_power=True, display=False): ''' Returns the spectrum of the sound and optionally plots it. Arguments: ``low``, ``high`` If these are left unspecified, it shows the full spectrum, otherwise it shows only between ``low`` and ``high`` in Hz. ``log_power=True`` If True it returns the log of the power. ``display=False`` Whether to plot the output. Returns ``(Z, freqs, phase)`` where ``Z`` is a 1D array of powers, ``freqs`` is the corresponding frequencies, ``phase`` is the unwrapped phase of spectrum. 
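        Sample usage (restricting the returned spectrum to 20 Hz - 8 kHz,
        with a white noise as an arbitrary input)::

            snd = whitenoise(500*ms)
            Z, freqs, phase = snd.spectrum(low=20*Hz, high=8*kHz, display=False)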
''' if self.nchannels>1: raise ValueError('Can only plot spectrum for mono sounds.') sp = numpy.fft.fft(array(self)) freqs = array(range(len(sp)), dtype=float64) / len(sp) * float64(self.samplerate) pxx = abs(sp) ** 2 phase = unwrap(mod(angle(sp), 2 * pi)) if low is not None or high is not None: restricted = True if low is None: low = 0*Hz if high is None: high = amax(freqs)*Hz I = logical_and(low <= freqs, freqs <= high) I2 = where(I)[0] Z = pxx[I2] freqs = freqs[I2] phase = phase[I2] else: restricted = False Z = pxx if log_power: Z[Z < 1e-20] = 1e-20 # no zeros because we take logs Z = 10 * log10(Z) if display: subplot(211) semilogx(freqs, Z) ticks_freqs = 32000 * 2 ** -array(range(18), dtype=float64) xticks(ticks_freqs, map(str, ticks_freqs)) grid() xlim((freqs[0], freqs[-1])) xlabel('Frequency (Hz)') ylabel('Power (dB/Hz)') if log_power else ylabel('Power') subplot(212) semilogx(freqs, phase) ticks_freqs = 32000 * 2 ** -array(range(18), dtype=float64) xticks(ticks_freqs, map(str, ticks_freqs)) grid() xlim((freqs[0], freqs[-1])) xlabel('Frequency (Hz)') ylabel('Phase (rad)') show() return (Z, freqs, phase) def get_level(self): ''' Returns level in dB SPL (RMS) assuming array is in Pascals. In the case of multi-channel sounds, returns an array of levels for each channel, otherwise returns a float. ''' if self.nchannels==1: rms_value = sqrt(mean((asarray(self)-mean(asarray(self)))**2)) rms_dB = 20.0*log10(rms_value/2e-5) return rms_dB*dB else: return array(tuple(self.channel(i).get_level() for i in xrange(self.nchannels))) def set_level(self, level): ''' Sets level in dB SPL (RMS) assuming array is in Pascals. ``level`` should be a value in dB, or a tuple of levels, one for each channel. ''' rms_dB = self.get_level() if self.nchannels>1: level = array(level) if level.size==1: level = level.repeat(self.nchannels) level = reshape(level, (1, self.nchannels)) rms_dB = reshape(rms_dB, (1, self.nchannels)) else: if not isinstance(level, dB_type): raise dB_error('Must specify level in dB') rms_dB = float(rms_dB) level = float(level) gain = 10**((level-rms_dB)/20.) self *= gain level = property(fget=get_level, fset=set_level, doc=''' Can be used to get or set the level of a sound, which should be in dB. For single channel sounds a value in dB is used, for multiple channel sounds a value in dB can be used for setting the level (all channels will be set to the same level), or a list/tuple/array of levels. It is assumed that the unit of the sound is Pascals. ''') def atlevel(self, level): ''' Returns the sound at the given level in dB SPL (RMS) assuming array is in Pascals. ``level`` should be a value in dB, or a tuple of levels, one for each channel. ''' newsound = self.copy() newsound.level = level return newsound def get_maxlevel(self): return amax(self.level)*dB def set_maxlevel(self, level): self.level += level-self.maxlevel maxlevel = property(fget=get_maxlevel, fset=set_maxlevel, doc=''' Can be used to set or get the maximum level of a sound. For mono sounds, this is the same as the level, but for multichannel sounds it is the maximum level across the channels. Relative level differences will be preserved. The specified level should be a value in dB, and it is assumed that the unit of the sound is Pascals. ''') def atmaxlevel(self, level): ''' Returns the sound with the maximum level across channels set to the given level. Relative level differences will be preserved. The specified level should be a value in dB and it is assumed that the unit of the sound is Pascals. 
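        Sample usage (a sketch with a two-channel sound whose channels differ
        by 5 dB; rescaling sets the louder channel to 60 dB and preserves the
        difference)::

            snd = Sound((tone(500*Hz, 100*ms), tone(500*Hz, 100*ms)))
            snd.level = (70*dB, 65*dB)
            quieter = snd.atmaxlevel(60*dB)   # levels become (60 dB, 55 dB)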
''' newsound = self.copy() newsound.maxlevel = level return newsound def ramp(self, when='onset', duration=10*ms, envelope=None, inplace=True): ''' Adds a ramp on/off to the sound ``when='onset'`` Can take values 'onset', 'offset' or 'both' ``duration=10*ms`` The time over which the ramping happens ``envelope`` A ramping function, if not specified uses ``sin(pi*t/2)**2``. The function should be a function of one variable ``t`` ranging from 0 to 1, and should increase from ``f(0)=0`` to ``f(0)=1``. The reverse is applied for the offset ramp. ``inplace`` Whether to apply ramping to current sound or return a new array. ''' when = when.lower().strip() if envelope is None: envelope = lambda t:sin(pi * t / 2) ** 2 if not isinstance(duration, int): sz = int(rint(duration * self.samplerate)) else: sz = duration multiplier = envelope(reshape(linspace(0.0, 1.0, sz), (sz, 1))) if inplace: target = self else: target = Sound(copy(self), self.samplerate) if when == 'onset' or when == 'both': target[:sz, :] *= multiplier if when == 'offset' or when == 'both': target[target.nsamples-sz:, :] *= multiplier[::-1] return target def ramped(self, when='onset', duration=10*ms, envelope=None): ''' Returns a ramped version of the sound (see :meth:`Sound.ramp`). ''' return self.ramp(when=when, duration=duration, envelope=envelope, inplace=False) def fft(self,n=None): ''' Performs an n-point FFT on the sound object, that is an array of the same size containing the DFT of each channel. n defaults to the number of samples of the sound, but can be changed manually setting the ``n`` keyword argument ''' if n is None: n=self.shape[0] res=zeros(n,self.nchannels) for i in range(self.nchannels): res[:,i]=fft(asarray(self)[:,i].flatten(),n=n) return res @staticmethod def tone(frequency, duration, phase=0, samplerate=None, nchannels=1): ''' Returns a pure tone at frequency for duration, using the default samplerate or the given one. The ``frequency`` and ``phase`` parameters can be single values, in which case multiple channels can be specified with the ``nchannels`` argument, or they can be sequences (lists/tuples/arrays) in which case there is one frequency or phase for each channel. ''' samplerate = get_samplerate(samplerate) duration = get_duration(duration,samplerate) frequency = array(frequency) phase = array(phase) if frequency.size>nchannels and nchannels==1: nchannels = frequency.size if phase.size>nchannels and nchannels==1: nchannels = phase.size if frequency.size==nchannels: frequency.shape = (nchannels, 1) if phase.size==nchannels: phase.shape =(nchannels, 1) t = arange(0, duration, 1)/samplerate t.shape = (t.size, 1) # ensures C-order (in contrast to tile(...).T ) x = sin(phase + 2.0 * pi * frequency * tile(t, (1, nchannels))) return Sound(x, samplerate) @staticmethod def harmoniccomplex(f0, duration, amplitude=1, phase=0, samplerate=None, nchannels=1): ''' Returns a harmonic complex composed of pure tones at integer multiples of the fundamental frequency ``f0``. The ``amplitude`` and ``phase`` keywords can be set to either a single value or an array of values. In the former case the value is set for all harmonics, and harmonics up the the sampling frequency are generated. In the latter each harmonic parameter is set separately, and the number of harmonics generated corresponds to the length of the array. 
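        Sample usage (three harmonics of 200 Hz with decreasing amplitudes;
        values chosen only for illustration)::

            complex_tone = harmoniccomplex(200*Hz, 500*ms,
                                           amplitude=[1.0, 0.5, 0.25])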
''' samplerate=get_samplerate(samplerate) phases = np.array(phase).flatten() amplitudes = np.array(amplitude).flatten() if len(phases)>1 or len(amplitudes)>1: if (len(phases)>1 and len(amplitudes)>1) and (len(phases) != len(amplitudes)): raise ValueError('Please specify the same number of phases and amplitudes') Nharmonics = max(len(phases),len(amplitudes)) else: Nharmonics = int(np.floor( samplerate/(2*f0) ) ) if len(phases) == 1: phases = np.tile(phase, Nharmonics) if len(amplitudes) == 1: amplitudes = np.tile(amplitude, Nharmonics) x = amplitudes[0]*tone(f0, duration, phase = phases[0], samplerate = samplerate, nchannels = nchannels) for i in range(1,Nharmonics): x += amplitudes[i]*tone((i+1)*f0, duration, phase = phases[i], samplerate = samplerate, nchannels = nchannels) return Sound(x,samplerate) @staticmethod def whitenoise(duration, samplerate=None, nchannels=1): ''' Returns a white noise. If the samplerate is not specified, the global default value will be used. ''' samplerate = get_samplerate(samplerate) duration = get_duration(duration,samplerate) x = randn(duration,nchannels) return Sound(x, samplerate) @staticmethod def powerlawnoise(duration, alpha, samplerate=None, nchannels=1,normalise=False): ''' Returns a power-law noise for the given duration. Spectral density per unit of bandwidth scales as 1/(f**alpha). Sample usage:: noise = powerlawnoise(200*ms, 1, samplerate=44100*Hz) Arguments: ``duration`` Duration of the desired output. ``alpha`` Power law exponent. ``samplerate`` Desired output samplerate ''' samplerate = get_samplerate(samplerate) duration = get_duration(duration,samplerate) # Adapted from http://www.eng.ox.ac.uk/samp/software/powernoise/powernoise.m # Little MA et al. (2007), "Exploiting nonlinear recurrence and fractal # scaling properties for voice disorder detection", Biomed Eng Online, 6:23 n=duration n2=floor(n/2) f=array(fftfreq(n,d=1.0/samplerate), dtype=complex) f.shape=(len(f),1) f=tile(f,(1,nchannels)) if n%2==1: z=(randn(n2,nchannels)+1j*randn(n2,nchannels)) a2=1.0/( f[1:(n2+1),:]**(alpha/2.0)) else: z=(randn(n2-1,nchannels)+1j*randn(n2-1,nchannels)) a2=1.0/(f[1:n2,:]**(alpha/2.0)) a2*=z if n%2==1: d=vstack((ones((1,nchannels)),a2, flipud(conj(a2)))) else: d=vstack((ones((1,nchannels)),a2, 1.0/( abs(f[n2])**(alpha/2.0) )* randn(1,nchannels), flipud(conj(a2)))) x=real(ifft(d.flatten())) x.shape=(n,nchannels) if normalise: for i in range(nchannels): #x[:,i]=normalise_rms(x[:,i]) x[:,i] = ((x[:,i] - amin(x[:,i]))/(amax(x[:,i]) - amin(x[:,i])) - 0.5) * 2; return Sound(x,samplerate) @staticmethod def pinknoise(duration, samplerate=None, nchannels=1, normalise=False): ''' Returns pink noise, i.e :func:`powerlawnoise` with alpha=1 ''' return Sound.powerlawnoise(duration,1.0,samplerate=samplerate,normalise=False) @staticmethod def brownnoise(duration, samplerate=None, nchannels=1,normalise=False): ''' Returns brown noise, i.e :func:`powerlawnoise` with alpha=2 ''' return Sound.powerlawnoise(duration,2.0,samplerate=samplerate,normalise=False) @staticmethod def irns(delay, gain, niter, duration, samplerate=None, nchannels=1): ''' Returns an IRN_S noise. The iterated ripple noise is obtained trough a cascade of gain and delay filtering. For more details: see Yost 1996 or chapter 15 in Hartman Sound Signal Sensation. 
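        Sample usage (delay 4 ms, gain 0.8, 16 iterations; the values are
        chosen only for illustration)::

            ripple = irns(4*ms, 0.8, 16, 500*ms)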
''' samplerate = get_samplerate(samplerate) noise=Sound.whitenoise(duration) splrate=noise.samplerate x=array(noise.T)[0] IRNfft=np.fft.fft(x) Nspl,spl_dur=len(IRNfft),float(1.0/splrate) w=2*pi*fftfreq(Nspl,spl_dur) d=float(delay) for k in range(1,niter+1): nchoosek=factorial(niter)/(factorial(niter-k)*factorial(k)) IRNfft+=nchoosek*(gain**k)*IRNfft*exp(-1j*w*k*d) IRNadd = np.fft.ifft(IRNfft) x=real(IRNadd) return Sound(x,samplerate) @staticmethod def irno(delay, gain, niter, duration, samplerate=None, nchannels=1): ''' Returns an IRN_O noise. The iterated ripple noise is obtained many attenuated and delayed version of the original broadband noise. For more details: see Yost 1996 or chapter 15 in Hartman Sound Signal Sensation. ''' samplerate = get_samplerate(samplerate) noise=Sound.whitenoise(duration) splrate=noise.samplerate x=array(noise.T)[0] IRNadd=np.fft.fft(x) Nspl,spl_dur=len(IRNadd),float(1.0/splrate) w=2*pi*fftfreq(Nspl,spl_dur) d=float(delay) for k in range(1,niter+1): IRNadd+=(gain**k)*IRNadd*exp(-1j*w*k*d) IRNadd = np.fft.ifft(IRNadd) x=real(IRNadd) return Sound(x, samplerate) @staticmethod def click(duration, peak=None, samplerate=None, nchannels=1): ''' Returns a click of the given duration. If ``peak`` is not specified, the amplitude will be 1, otherwise ``peak`` refers to the peak dB SPL of the click, according to the formula ``28e-6*10**(peak/20.)``. ''' samplerate = get_samplerate(samplerate) duration = get_duration(duration,samplerate) if peak is not None: if not isinstance(peak, dB_type): raise dB_error('Peak must be given in dB') amplitude = 28e-6*10**(float(peak)/20.) else: amplitude = 1 x = amplitude*ones((duration,nchannels)) return Sound(x, samplerate) @staticmethod def clicks(duration, n, interval, peak=None, samplerate=None, nchannels=1): ''' Returns a series of n clicks (see :func:`click`) separated by interval. ''' oneclick = Sound.click(duration, peak=peak, samplerate=samplerate) return oneclick[:interval].repeat(n) @staticmethod def silence(duration, samplerate=None, nchannels=1): ''' Returns a silent, zero sound for the given duration. Set nchannels to set the number of channels. ''' samplerate = get_samplerate(samplerate) duration = get_duration(duration,samplerate) x=numpy.zeros((duration,nchannels)) return Sound(x, samplerate) @staticmethod def vowel(vowel=None, formants=None, pitch=100*Hz, duration=1*second, samplerate=None, nchannels=1): ''' Returns an artifically created spoken vowel sound (following the source-filter model of speech production) with a given ``pitch``. The vowel can be specified by either providing ``vowel`` as a string ('a', 'i' or 'u') or by setting ``formants`` to a sequence of formant frequencies. The returned sound is normalized to a maximum amplitude of 1. The implementation is based on the MakeVowel function written by Richard O. 
Duda, part of the Auditory Toolbox for Matlab by Malcolm Slaney: http://cobweb.ecn.purdue.edu/~malcolm/interval/1998-010/ ''' samplerate = get_samplerate(samplerate) duration = get_duration(duration, samplerate) if not (vowel or formants): raise ValueError('Need either a vowel or a list of formants') elif (vowel and formants): raise ValueError('Cannot use both vowel and formants') if vowel: if vowel == 'a' or vowel == '/a/': formants = (730.0*Hz, 1090.0*Hz, 2440.0*Hz) elif vowel == 'i' or vowel == '/i/': formants = (270.0*Hz, 2290.0*Hz, 3010.0*Hz) elif vowel == 'u' or vowel == '/u/': formants = (300.0*Hz, 870.0*Hz, 2240.0*Hz) else: raise ValueError('Unknown vowel: "%s"' % (vowel)) points = np.arange(0, duration - 1, samplerate / pitch) indices = np.floor(points).astype(int) y = np.zeros(duration) y[indices] = (indices + 1) - points y[indices + 1] = points - indices # model the sound source (periodic glottal excitation) a = np.exp(-250.*Hz * 2 * np.pi / samplerate) y = lfilter([1],[1, 0, -a * a], y.copy()) # model the filtering by the vocal tract bandwidth = 50.*Hz for f in formants: cft = f / samplerate q = f / bandwidth rho = np.exp(-np.pi * cft / q) theta = 2 * np.pi * cft * np.sqrt(1 - 1/(4.0 * q * q)) a2 = -2 * rho * np.cos(theta) a3 = rho * rho y = lfilter([1 + a2 + a3], [1, a2, a3], y.copy()) #normalize sound data = y / np.max(np.abs(y), axis=0) data.shape = (data.size, 1) return Sound(np.tile(data, (nchannels, 1)), samplerate=samplerate) @staticmethod def sequence(*args, **kwds): ''' Returns the sequence of sounds in the list sounds joined together ''' samplerate = kwds.pop('samplerate', None) if len(kwds): raise TypeError('Unexpected keywords to function sequence()') sounds = [] for arg in args: if isinstance(arg, (list, tuple)): sounds.extend(arg) else: sounds.append(arg) if samplerate is None: samplerate = max(s.samplerate for s in sounds) rates = unique([int(s.samplerate) for s in sounds]) if len(rates)>1: sounds = tuple(s.resample(samplerate) for s in sounds) x = vstack(sounds) return Sound(x, samplerate) def save(self, filename, normalise=False, samplewidth=2): ''' Save the sound as a WAV. If the normalise keyword is set to True, the amplitude of the sound will be normalised to 1. The samplewidth keyword can be 1 or 2 to save the data as 8 or 16 bit samples. 
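        Sample usage (the filename here is only an illustration)::

            snd = tone(1*kHz, 500*ms)
            snd.save('tone.wav', normalise=True, samplewidth=2)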
''' ext = filename.split('.')[-1].lower() if ext=='wav': import wave as sndmodule elif ext=='aiff' or ext=='aifc': import aifc as sndmodule raise NotImplementedError('Can only save as wav soundfiles') else: raise NotImplementedError('Can only save as wav soundfiles') if samplewidth != 1 and samplewidth != 2: raise ValueError('Sample width must be 1 or 2 bytes.') scale = {2:2 ** 15, 1:2 ** 7-1}[samplewidth] if ext=='wav': meanval = {2:0, 1:2**7}[samplewidth] dtype = {2:int16, 1:uint8}[samplewidth] typecode = {2:'h', 1:'B'}[samplewidth] else: meanval = {2:0, 1:2**7}[samplewidth] dtype = {2:int16, 1:uint8}[samplewidth] typecode = {2:'h', 1:'B'}[samplewidth] w = sndmodule.open(filename, 'wb') w.setnchannels(self.nchannels) w.setsampwidth(samplewidth) w.setframerate(int(self.samplerate)) x = array(self,copy=True) am=amax(x) z = zeros(x.shape[0]*self.nchannels, dtype=x.dtype) x.shape=(x.shape[0],self.nchannels) for i in range(self.nchannels): if normalise: x[:,i] /= am x[:,i] = (x[:,i]) * scale + meanval z[i::self.nchannels] = x[::1,i] data = array(z, dtype=dtype) data = pyarray.array(typecode, data) w.writeframes(data.tostring()) w.close() @staticmethod def load(filename): ''' Load the file given by filename and returns a Sound object. Sound file can be either a .wav or a .aif file. ''' ext = filename.split('.')[-1].lower() if ext=='wav': import wave as sndmodule elif ext=='aif' or ext=='aiff': import aifc as sndmodule else: raise NotImplementedError('Can only load aif or wav soundfiles') wav = sndmodule.open(filename, "r") nchannels, sampwidth, framerate, nframes, comptype, compname = wav.getparams() frames = wav.readframes(nframes * nchannels) typecode = {2:'h', 1:'B'}[sampwidth] out = frombuffer(frames, dtype=dtype(typecode)) scale = {2:2 ** 15, 1:2 ** 7-1}[sampwidth] meanval = {2:0, 1:2**7}[sampwidth] data = zeros((nframes, nchannels)) for i in range(nchannels): data[:, i] = out[i::nchannels] data[:, i] /= scale data[:, i] -= meanval return Sound(data, samplerate=framerate*Hz) def __reduce__(self): return (_load_Sound_from_pickle, (asarray(self), float(self.samplerate))) def _load_Sound_from_pickle(arr, samplerate): return Sound(arr, samplerate=samplerate*Hz) def play(*sounds, **kwds): ''' Plays a sound or sequence of sounds. For example:: play(sound) play(sound1, sound2) play([sound1, sound2, sound3]) If ``normalise=True``, the sequence of sounds will be normalised to the maximum range (-1 to 1), and if ``sleep=True`` the function will wait until the sounds have finished playing before returning. 
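    A usage sketch (two arbitrary test sounds played one after the other)::

        play(tone(1*kHz, 200*ms), whitenoise(200*ms),
             normalise=True, sleep=True)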
''' normalise = kwds.pop('normalise', False) sleep = kwds.pop('sleep', False) if len(kwds): raise TypeError('Unexpected keyword arguments to function play()') sound = sequence(*sounds) sound.play(normalise=normalise, sleep=sleep) play.__doc__ = Sound.play.__doc__ def savesound(sound, filename, normalise=False, samplewidth=2): sound.save(filename, normalise=normalise, samplewidth=samplewidth) savesound.__doc__ = Sound.save.__doc__ def get_duration(duration,samplerate): if not isinstance(duration, int): duration = int(rint(duration * samplerate)) return duration whitenoise = Sound.whitenoise powerlawnoise = Sound.powerlawnoise pinknoise = Sound.pinknoise brownnoise = Sound.brownnoise irns = Sound.irns irno = Sound.irno tone = Sound.tone harmoniccomplex = Sound.harmoniccomplex click = Sound.click clicks = Sound.clicks silence = Sound.silence sequence = Sound.sequence vowel = Sound.vowel loadsound = Sound.load brian-1.3.1/brian/inspection.py000066400000000000000000000355451167451777000164620ustar00rootroot00000000000000# ---------------------------------------------------------------------------------- # Copyright ENS, INRIA, CNRS # Contributors: Romain Brette (brette@di.ens.fr) and Dan Goodman (goodman@di.ens.fr) # # Brian is a computer program whose purpose is to simulate models # of biological neural networks. # # This software is governed by the CeCILL license under French law and # abiding by the rules of distribution of free software. You can use, # modify and/ or redistribute the software under the terms of the CeCILL # license as circulated by CEA, CNRS and INRIA at the following URL # "http://www.cecill.info". # # As a counterpart to the access to the source code and rights to copy, # modify and redistribute granted by the license, users are provided only # with a limited warranty and the software's author, the holder of the # economic rights, and the successive licensors have only limited # liability. # # In this respect, the user's attention is drawn to the risks associated # with loading, using, modifying and/or developing or reproducing the # software by the user in light of its specific status of free software, # that may mean that it is complicated to manipulate, and that also # therefore means that it is reserved for developers and experienced # professionals having in-depth computer knowledge. Users are therefore # encouraged to load and test the software's suitability as regards their # requirements in conditions enabling the security of their systems and/or # data to be ensured and, more generally, to use and operate it in the # same conditions as regards security. # # The fact that you are presently reading this means that you have had # knowledge of the CeCILL license and that you accept its terms. # ---------------------------------------------------------------------------------- # ''' Inspection of strings with python statements and models defined by differential equations. 
TODO: some of the module is obsolete ''' __all__ = ['is_affine', 'depends_on', 'Term', 'get_global_term', 'get_var_names', 'check_equations_units', 'fill_vars', 'AffineFunction', 'get_identifiers', 'modified_variables', 'namespace', 'clean_text', 'namespace_replace_quantity_with_pure', 'list_replace_quantity_with_pure'] import numpy from numpy import array from units import * import parser import re import inspect from copy import copy class PureQuantityBase(Quantity): def __init__(self, value): self.dim = value.dim def __div__(self, other): if not isinstance(other, Quantity) and not is_scalar_type(other): return NotImplemented try: return Quantity.__div__(self, other) except ZeroDivisionError: try: odim = other.dim except AttributeError: odim = Dimension() return Quantity.with_dimensions(0, self.dim / odim) __truediv__ = __div__ def __rdiv__(self, other): if not isinstance(other, Quantity) and not is_scalar_type(other): return NotImplemented try: return Quantity.__rdiv__(self, other) except ZeroDivisionError: try: odim = other.dim except AttributeError: odim = Dimension() return Quantity.with_dimensions(0, odim / self.dim) __rtruediv__ = __rdiv__ def __mod__(self, other): if not isinstance(other, Quantity) and not is_scalar_type(other): return NotImplemented try: return Quantity.__mod__(self, other) except ZeroDivisionError: return Quantity.with_dimensions(0, self.dim) def returnpure(meth): def f(*args, **kwds): x = meth(*args, **kwds) if isinstance(x, Quantity): return PureQuantity(x) else: return x return f class PureQuantity(PureQuantityBase): ''' Use this class for unit checking. The idea is that operations should always work if they are dimensionally consistent regardless of the values. The key one is that division by zero does not raise an error but returns a value with the correct dimensions (in fact, it returns 0 with the correct dimensions). This is important for Brian because we do unit checking by substituting zeros into the equations, which sometimes gives a divide by zero error. The way it works is that it derives from Quantity, but for division it wraps a try: except ZeroDivisionError: around the operation, and returns a zero with the correct units if it encounters it. In addition to that, it wraps every method of Quantity so that they return PureQuantity objects instead of Quantity objects (otherwise e.g. a*b would be a Quantity not a PureQuantity even if a, b were PureQuantity). Finally, the *_replace_quantity_with_pure functions are just designed to scan through a dict or list of variables and replace Quantity objects by PureQuantity objects. You need to do this when evaluating some code in a user namespace, for example. ''' for methname in dir(PureQuantityBase): meth = getattr(PureQuantityBase, methname) try: meth2 = getattr(numpy.float64, methname) except AttributeError: meth2 = meth if callable(meth) and meth is not meth2: exec methname + '=returnpure(PureQuantityBase.' 
+ methname + ')' del meth, meth2, methname def namespace_replace_quantity_with_pure(ns): newns = {} for k, v in ns.iteritems(): if isinstance(v, Quantity): v = PureQuantity(v) newns[k] = v return newns def list_replace_quantity_with_pure(L): newL = [] for v in L: if isinstance(v, Quantity): v = PureQuantity(v) newL.append(v) return newL def namespace(expr, level=0, return_unknowns=False): ''' Returns a namespace with the values of identifiers in expr, taking from: * local namespace * global namespace * units ''' # Build the namespace frame = inspect.stack()[level + 1][0] global_namespace, local_namespace = frame.f_globals, frame.f_locals # Find external objects space = {} unknowns = [] for var in get_identifiers(expr): if var in local_namespace: #local space[var] = local_namespace[var] elif var in global_namespace: #global space[var] = global_namespace[var] elif var in globals(): # typically units space[var] = globals()[var] else: unknowns.append(var) if return_unknowns: return space, unknowns else: return space def get_identifiers(expr): ''' Returns the list of identifiers (variables or functions) in the Python expression (string). ''' # cleaner: parser.expr(expr).tolist() then find leaves of the form [1,name] return parser.suite(expr).compile().co_names def clean_text(expr): ''' Cleans a Python expression or statement: * Remove comments (# comment) * Merge multi-line statements (\) * Split at semi-columns (careful: indentation is ignored) ''' # Merge multi-line statements expr = re.sub('\\\s*?\n', ' ', expr) # Remove comments expr = re.sub('#.*', '', expr) # Split at semi-columns expr = re.sub(';', '\n', expr) return expr def modified_variables(expr): ''' Returns the list of variables or functions in expr that are in left-hand sides, e.g.: x+=5 expr can be a multiline statement. Multiline comments are not allowed but multiline statements are. Functions may also be returned as in: do(something) # here do is returned, not something TODO: maybe functions should be removed? TODO: better handling of semi-columns (;) ''' vars = get_identifiers(expr) expr = clean_text(expr) # Find lines that start by an identifier mod_vars = [] for line in expr.splitlines(): s = re.search(r'^\s*(\w+)\b', line) if s and (s.group(1) in vars): mod_vars.append(s.group(1)) return mod_vars def fill_vars(f, keepnamespace=False, *varnames): ''' Returns a function with arguments given by varnames (list or tuple), given that the arguments of f are in varnames. If keepnamespace is True, then the original func_globals dictionary of the function is kept. Purpose: changing the syntax for function calls. Example: f=lambda x:2*x g=fill_vars(f,'y','x') Then g(1,2) returns f(2). This is somehow the inverse of partial (in module functools). N.B.: the order of variables matters. ''' if list(f.func_code.co_varnames) == varnames: return f varstring = varnames[0] for name in varnames[1:]: varstring += ',' + name shortvarstring = f.func_code.co_varnames[0] for name in f.func_code.co_varnames[1:]: shortvarstring += ',' + name if keepnamespace: # Create a unique name fname = 'fill_vars_function' + id(f) f.func_globals[fname] = f return eval('lambda ' + varstring + ': ' + fname + '(' + shortvarstring + ')', f.func_globals) else: return eval('lambda ' + varstring + ': f(' + shortvarstring + ')', {'f':f}) def check_equations_units(eqs, x): ''' Check the units of the differential equations, using the units of x. df_i/dt must have units of x_i / time. 
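    The check works by evaluating each f at the test point x (with the noise
    term xi set to 0*second**-.5) and adding x_i/second; if the two terms are
    not dimensionally consistent, a DimensionMismatchError is raised.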
''' try: for f, x_i in zip(eqs, x): f.func_globals['xi'] = 0 * second ** -.5 # Noise f(*x) + (x_i / second) # Check that the two terms have the same dimension except DimensionMismatchError, inst: raise DimensionMismatchError("The differential equations are not homogeneous!", *inst._dims) def get_var_names(eqs): ''' Get the variable names from the set of equations. Returns a list. N.B.: the order is preserved. ''' names = list(eqs[0].func_code.co_varnames) for eq in eqs: for name in eq.func_code.co_varnames: if not(name in names): names.append(name) return names class AffineFunction(object): ''' An object that can be added and multiplied by a float (or array or int). ''' def __init__(self, a=1., b=0.): ''' Defines an affine function as a*x+b. ''' self.a = a self.b = b def __add__(self, y): if isinstance(y, AffineFunction): return AffineFunction(self.a + y.a, self.b + y.b) else: return AffineFunction(self.a, self.b + array(y)) def __radd__(self, x): if isinstance(x, AffineFunction): return AffineFunction(self.a + x.a, self.b + x.b) else: return AffineFunction(self.a, self.b + array(x)) def __neg__(self): return AffineFunction(-self.a, -self.b) def __sub__(self, y): if isinstance(y, AffineFunction): return AffineFunction(self.a - y.a, self.b - y.b) else: return AffineFunction(self.a, self.b - array(y)) def __rsub__(self, x): if isinstance(x, AffineFunction): return AffineFunction(x.a - self.a, x.b - self.b) else: return AffineFunction(-self.a, array(x) - self.b) def __mul__(self, y): if isinstance(y, float) or isinstance(y, int) or isinstance(y, array): return AffineFunction(self.a * array(y), self.b * array(y)) else: return y.__rmul__(self) def __rmul__(self, x): if isinstance(x, float) or isinstance(x, int) or isinstance(x, array): return AffineFunction(array(x) * self.a, array(x) * self.b) else: return x.__mul__(self) def __div__(self, y): if isinstance(y, float) or isinstance(y, int) or isinstance(y, array): return AffineFunction(self.a / array(y), self.b / array(y)) else: return y.__rdiv__(self) def __repr__(self): return str(self.a) + '*x+' + str(self.b) class Term(object): ''' A variable that can be used to isolate terms in a function, e.g.: f=lambda x:a+3*x f(Term()) --> 3*Term() Idea: a+x(z) = x(z) + a = x(z) a*x(z) = x(z)*a = x(a*z) ''' def __init__(self, x=1.): self.x = x def __add__(self, y): return self def __radd__(self, y): return self def __mul__(self, y): return Term(self.x * y) def __rmul__(self, y): return Term(self.x * y) def __div__(self, y): return Term(self.x / y) def __neg__(self): return Term(-self.x) def __repr__(self): return str(self.x) + '*Term()' def __print__(self): return str(self.x) + '*Term()' def is_affine(f): ''' Tests whether f is an affine function. ''' nargs = f.func_code.co_argcount try: f(*([AffineFunction()]*nargs)) except: return False return True #def is_affine1st(f,x0): # ''' # Tests whether f is affine in its 1st variable. # ''' # nargs=f.func_code.co_argcount # return is_affine(lambda x:f(x,*x0)) def depends_on(f, x, x0): ''' Tests whether f depends on global variable x. N.B.: returns True also if f generates an error (e.g. with undefined variables). x0 is the test value for the variables (tuple). 
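    The test is done by temporarily setting the global variable x to None in
    f.func_globals and evaluating f(*x0): if this raises an exception, f is
    assumed to depend on x. The original namespace is restored afterwards.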
Other idea: use 'xi' in f.func_code.co_names (but not working for nested functions) ''' #return x in f.func_code.co_names # if x in f.func_globals: # oldx=f.func_globals[x] # else: # oldx=None old_func_globals = copy(f.func_globals) x0 = list_replace_quantity_with_pure(x0) f.func_globals.update(namespace_replace_quantity_with_pure(f.func_globals)) result = False f.func_globals[x] = None nargs = f.func_code.co_argcount try: f(*x0) except: result = True # if oldx==None: # del f.func_globals[x] # x was not defined # else: # f.func_globals[x]=oldx # previous value f.func_globals.update(old_func_globals) return result def get_global_term(f, x, x0): ''' Extract the term in global variable x from function x, returns a float. x0 is the test value for the variables (tuple). Example: getterm(lambda x:2*x+5*y,'y',(0,0))) --> 5 ''' old_func_globals = copy(f.func_globals) x0 = list_replace_quantity_with_pure(x0) f.func_globals.update(namespace_replace_quantity_with_pure(f.func_globals)) f.func_globals[x] = Term() result = f(*x0).x f.func_globals.update(old_func_globals) return result brian-1.3.1/brian/library/000077500000000000000000000000001167451777000153655ustar00rootroot00000000000000brian-1.3.1/brian/library/IF.py000066400000000000000000000066341167451777000162460ustar00rootroot00000000000000''' Integrate-and-Fire models. ''' from brian.units import * from brian.stdunits import * from brian.membrane_equations import * from numpy import exp __all__ = ['leaky_IF', 'perfect_IF', 'exp_IF', 'quadratic_IF', 'Brette_Gerstner', 'Izhikevich', \ 'AdaptiveReset', 'aEIF'] __credits__ = dict(author='Romain Brette (brette@di.ens.fr)', date='April 2008') #TODO: specific integration methods """ ****************************************** One-dimensional integrate-and-fire models ****************************************** """ @check_units(tau=second, El=volt) def leaky_IF(tau, El): ''' A leaky integrate-and-fire model (membrane equation). tau dvm/dt = EL - vm ''' return MembraneEquation(tau) + \ Current('Im=El-vm:volt', current_name='Im', El=El) @check_units(tau=second) def perfect_IF(tau): ''' A perfect integrator. tau dvm/dt = ... ''' return MembraneEquation(tau) @check_units(C=farad, gL=siemens, EL=volt, VT=volt, DeltaT=volt) def exp_IF(C, gL, EL, VT, DeltaT): ''' An exponential integrate-and-fire model (membrane equation). ''' return MembraneEquation(C) + \ Current('Im=gL*(EL-vm)+gL*DeltaT*exp((vm-VT)/DeltaT):amp', \ gL=gL, EL=EL, DeltaT=DeltaT, exp=exp, VT=VT) @check_units(C=farad, a=siemens / volt, EL=volt, VT=volt) def quadratic_IF(C, a, EL, VT): ''' Quadratic integrate-and-fire model. C*dvm/dt=a*(vm-EL)*(vm-VT) ''' return MembraneEquation(C) + \ Current('Im=a*(vm-EL)*(vm-VT):amp', \ a=a, EL=EL, exp=exp, VT=VT) """ ****************************************** Two-dimensional integrate-and-fire models ****************************************** """ @check_units(a=1 / second, b=1 / second) def Izhikevich(a=0.02 / ms, b=0.2 / ms): ''' Returns a membrane equation for the Izhikevich model (variables: vm and w). ''' return MembraneEquation(1.) + \ Current(''' Im=(0.04/ms/mV)*vm**2+(5/ms)*vm+140*mV/ms-w : volt/second dw/dt=a*(b*vm-w) : volt/second ''', current_name='Im') @check_units(C=farad, gL=siemens, EL=volt, VT=volt, DeltaT=volt, \ tauw=second, a=siemens) def Brette_Gerstner(C=281 * pF, gL=30 * nS, EL= -70.6 * mV, VT= -50.4 * mV, \ DeltaT=2 * mV, tauw=144 * ms, a=4 * nS): ''' Returns a membrane equation for the Brette-Gerstner model. Default: a regular spiking cortical cell. Brette, R. and W. Gerstner (2005). 
Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. Journal of Neurophysiology 94: 3637-3642. ''' return exp_IF(C, gL, EL, VT, DeltaT) + \ IonicCurrent('dw/dt=(a*(vm-EL)-w)/tauw:amp', \ a=a, EL=EL, tauw=tauw) aEIF = Brette_Gerstner # synonym AdEx = aEIF class AdaptiveReset(object): ''' A two-variable reset: V<-Vr w<-w+b (used in Izhikevich and Brette-Gerstner models) ''' def __init__(self, Vr= -70.6 * mvolt, b=0.0805 * nA): self.Vr = Vr self.b = b def __call__(self, P): ''' Clamps membrane potential at reset value. ''' spikes = P.LS.lastspikes() P.vm_[spikes] = self.Vr P.w_[spikes] += self.b brian-1.3.1/brian/library/__init__.py000066400000000000000000000000021167451777000174660ustar00rootroot00000000000000 brian-1.3.1/brian/library/electrophysiology/000077500000000000000000000000001167451777000211515ustar00rootroot00000000000000brian-1.3.1/brian/library/electrophysiology/__init__.py000066400000000000000000000001541167451777000232620ustar00rootroot00000000000000from electrophysiology_models import * from trace_analysis import * from electrode_compensation import * brian-1.3.1/brian/library/electrophysiology/electrode_compensation.py000066400000000000000000000442711167451777000262600ustar00rootroot00000000000000""" Electrode compensation ---------------------- Two methods are implemented: * Lp parametric compensation * Active Electrode Compensation (AEC) """ from brian.stateupdater import get_linear_equations from brian.log import log_info from brian import second, Mohm, mV, ms, Equations, ohm, volt, second from scipy.optimize import fmin from scipy.signal import lfilter from scipy import linalg from numpy import sqrt, ceil, zeros, eye, poly, dot, hstack, array from scipy import zeros, array, optimize, mean, arange, diff, rand, exp, sum, convolve, eye, linalg, sqrt import time __all__=['Lp_compensate', 'full_kernel', 'full_kernel_from_step', 'electrode_kernel_soma', 'electrode_kernel_dendrite', 'solve_convolution', 'electrode_kernel', 'AEC_compensate'] def compute_filter(A, row=0): """ From a linear discrete-time multidimensional system Y(n+1)=AY(n)+X(n), with X(n) a vector with X(n)[1:]==0, compute a linear 1D filter for simulating the variable indexed by "row". It returns two vectors b, a, that are to be provided to the Scipy function lfilter. """ d = len(A) # compute a: characteristic polynomial of the matrix a = poly(A) # with a[0]=1 # compute b recursively b = zeros(d+1) T = eye(d) b[0] = T[row, 0] for i in range(1, d+1): T = a[i]*eye(d) + dot(A, T) b[i] = T[row, 0] return b, a def simulate(eqs, I, dt, row=0): # export? """ Simulate a system of neuron equations in response to an injected current I with sampling frequency 1/dt, using a Scipy linear filter rather than a Brian integrator (it's more efficient since there are no spikes). The variable indexed by "row" only is simulated. I must not appear in the equations, rather, it will be injected in the *first* variable only. That is to say, if the equations correspond to the differential system dY/dt=MY+I, where only the first component of I is nonzero, then the equations must correspond to the matrix M. For instance, if the equations were dv/dt=(R*I-v)/tau, then the following equation must be given: "dv/dt=-v/tau", and the input current will be "I*R/tau". """ # get M and B such that the system is dY/dy=M(Y-B) # NOTE: it only works if the system is invertible (det(M)!=0) ! M, B = get_linear_equations(eqs) # discretization of the system: Y(n+1)=AY(n), were A=exp(M*dt). 
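    # In terms of the deviation Z = Y - B from the fixed point B, the update
    # over one step is Z(n+1) = A*Z(n) + X(n) with A = expm(M*dt), where the
    # injected current (scaled by dt) enters only the first component of X.
    # compute_filter(A, row) converts this recursion into scalar coefficients
    # (b, a) so that scipy.signal.lfilter can produce the selected state
    # variable directly from the current; B[row] is added back at the end.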
A = linalg.expm(M * dt) # compute the filter b, a = compute_filter(A, row=row) # apply the filter to the injected current y = lfilter(b, a, I*dt) + B[row] return y """ Lp Electrode Compensation ------------------------- From: Rossant et al (?): to be submitted. """ class ElectrodeCompensation (object): """ Lp Electrode Compensation class. ================================ Implements an electrode compensation technique based on linear model fitting of an electrode and a neuron in response to an injected current. The fitness function is the Lp error between the full model response and the raw trace, with p<2 to minimize the bias due to the nonlinear voltage excursions of action potentials. The fitting procedure uses the Scipy fmin optimization function. The model simulation is performed efficiently on the CPU using a linear filter directly applied to the current. This filter is computed dynamically from the electrode and neuron differential equations. """ eqs = """ dV/dt=Re*(-Iinj)/taue : volt dV0/dt=(R*Iinj-V0+Vr)/tau : volt Iinj=(V-V0)/Re : amp """ def __init__(self, I, Vraw, dt, durslice=10*second, p=1.0, *params): """ Class constructor. * I: injected current, 1D vector. * Vraw: raw (uncompensated) voltage trace, 1D vector, same length as I. * dt: sampling period (inverse of the sampling frequency), in second. * durslice=1*second: duration of slices: the fit is performed independently on each slice to capture the possible parameter changes during the recordings. * p=1.0: parameter of the Lp error. In general, p<2, and a smaller value for p (like 0.5) will yield better results, especially if there are a lot of action potentials. * *params: a list of initial parameters for the optimization, in the following order: R, tau, Vr, Re, taue. """ self.I = I self.Vraw = Vraw self.p = p self.dt = dt self.dt_ = float(dt) self.x0 = self.params_to_vector(*params) self.duration = len(I) * dt self.durslice = min(durslice, self.duration) self.slicesteps = int(durslice/dt) self.nslices = int(ceil(len(I)*dt/durslice)) self.islice = 0 self.I_list = [I[self.slicesteps*i:self.slicesteps*(i+1)] for i in range(self.nslices)] self.Vraw_list = [Vraw[self.slicesteps*i:self.slicesteps*(i+1)] for i in range(self.nslices)] def vector_to_params(self, *x): """ Convert a vector of parameters (used for the optimization) to a tuple of actual parameters. """ R,tau,Vr,Re,taue = x R = R*R tau = tau*tau Re = Re*Re taue = taue*taue return R,tau,Vr,Re,taue def params_to_vector(self, *params): """ Opposite of vector_to_params. """ x = params x = [sqrt(params[0]), sqrt(params[1]), params[2], sqrt(params[3]), sqrt(params[4])] return list(x) def get_model_trace(self, row, *x): """ Compute the model response (variable index "row") to the injected current, at a specific slice (stored in self.islice), with model parameters specified with the vector x. """ R, tau, Vr, Re, taue = self.vector_to_params(*x) # put units again R, tau, Vr, Re, taue = R*ohm, tau*second, Vr*volt, Re*ohm, taue*second eqs = Equations(self.eqs) eqs.prepare() self._eqs = eqs y = simulate(eqs, self.I_list[self.islice] * Re/taue, self.dt, row=row) return y def fitness(self, x): """ fitness function provided to the fmin optimization procedure. Simulate the model and compute the Lp error between the model response and the raw trace. 
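        Concretely, for model trace y and raw trace Vraw on the current
        slice, the returned error is e = dt * sum(abs(Vraw - y)**p), as
        computed below.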
""" R, tau, Vr, Re, taue = self.vector_to_params(*x) y = self.get_model_trace(0, *x) e = self.dt_*sum(abs(self.Vraw_list[self.islice]-y)**self.p) return e def compensate_slice(self, x0): """ Compensate on the current slice, by calling fmin on the fitness function. """ fun = lambda x: self.fitness(x) x = fmin(fun, x0, maxiter=10000, maxfun=10000, disp=False) return x def compensate(self): """ Compute compensate_slice for all slices. Use the previous best parameters as initial parameters for the next slice. """ self.params_list = [] self.xlist = [self.x0] t0 = time.clock() for self.islice in range(self.nslices): newx = self.compensate_slice(self.xlist[self.islice]) self.xlist.append(newx) self.params_list.append(self.vector_to_params(*newx)) log_info("electrode_compensation","Slice %d/%d compensated in %.2f seconds" % \ (self.islice+1, self.nslices, time.clock()-t0)) t0 = time.clock() self.xlist = self.xlist[1:] return self.xlist def get_compensated_trace(self): """ Called after the compensation procedure. Compute the full model traces, for all slices, using the best parameters found by the optimization. Returns only the compensated trace, but this function also computes the neuron and electrode responses. """ Vcomp_list = [] Vneuron_list = [] Velec_list = [] for self.islice in range(self.nslices): x = self.xlist[self.islice] V = self.get_model_trace(0, *x) V0 = self.get_model_trace(1, *x) Velec = V-V0 Vneuron_list.append(V0) Velec_list.append(Velec) Vcomp_list.append(self.Vraw_list[self.islice] - Velec) self.Vcomp = hstack(Vcomp_list) self.Vneuron = hstack(Vneuron_list) self.Velec = hstack(Velec_list) return self.Vcomp def Lp_compensate(I, Vraw, dt, slice_duration=1*second, p=1.0, full=False, **initial_params): """ Lp Electrode Compensation. Implements an electrode compensation technique based on linear model fitting of an electrode and a neuron in response to an injected current. The fitness function is the Lp error between the full model response and the raw trace, with p<2 to minimize the bias due to the nonlinear voltage excursions of action potentials. The fitting procedure uses the Scipy fmin optimization function. The model simulation is performed efficiently on the CPU using a linear filter directly applied to the current. This filter is computed dynamically from the electrode and neuron differential equations. Arguments * I: injected current, 1D vector. * Vraw: raw (uncompensated) voltage trace, 1D vector, same length as I. * dt: sampling period (inverse of the sampling frequency), in second. * slice_duration=1*second: duration of slices: the fit is performed independently on each slice to capture the possible parameter changes during the recordings. * p=1.0: parameter of the Lp error. In general, p<2, and a smaller value for p (like 0.5) will yield better results, especially if there are a lot of action potentials. * **initial_params: initial parameters for the optimization: R, tau, Vr, Re, taue. 
* R: neuron resistance, default 100 MOhm * tau: neuron membrane time constant, default 20 ms * Vr: neuron rest potentiel, default -70 mV * Re: electrode resistance, default 50 MOhm * taue: electrode time constant, default 0.5 ms """ R = initial_params.get("R", 100*Mohm) tau = initial_params.get("tau", 20*ms) Vr = initial_params.get("Vr", -70*mV) Re = initial_params.get("Re", 50*Mohm) taue = initial_params.get("taue", .5*ms) comp = ElectrodeCompensation(I, Vraw, dt, slice_duration, p, R, tau, Vr, Re, taue) comp.compensate() Vcomp = comp.get_compensated_trace() params = array(comp.params_list).transpose() if not full: return Vcomp, params else: return dict(Vcompensated=Vcomp, Vneuron=comp.Vneuron, Velectrode=comp.Velec, params=params) ''' Active Electrode Compensation ----------------------------- From: Brette et al (2008). High-resolution intracellular recordings using a real-time computational model of the electrode. Neuron 59(3):379-91. ''' def full_kernel(v, i, ksize, full_output=False): ''' Calculates the full kernel from the recording v and the input current i. The last ksize steps of v should be null. ksize = size of the resulting kernel full_output = returns K,v0 if True (v0 is the resting potential) ''' # Calculate the correlation vector # and the autocorrelation vector vi = zeros(ksize) ii = zeros(ksize) vref = mean(v) # taking as the reference potential simplifies the formulas for k in range(ksize): vi[k] = mean((v[k:] - vref) * i[:len(i) - k]) ii[k] = mean(i[k:] * i[:len(i) - k]) vi -= mean(i) ** 2 K = levinson_durbin(ii, vi) if full_output: v0 = vref - mean(i) * sum(K) return K, v0 else: return K def full_kernel_from_step(V, I): ''' Calculates the full kernel from the response (V) to a step input (I, constant). ''' return diff(V) / I def solve_convolution(K, Km): ''' Solves Ke = K - Km * Ke/Re Linear problem ''' Re = sum(K) - sum(Km) n = len(Km) A = eye(n) * (1 + Km[0] / Re) for k in range(n): for m in range(k): A[k, m] = Km[k - m] / Re return linalg.lstsq(A, K)[0] def electrode_kernel_dendrite(Karg, start_tail, full_output=False): ''' (For dendritic recordings) Extracts the electrode kernel Ke from the raw kernel K by removing the membrane kernel, estimated from the indexes >= start_tail of the raw kernel. full_output = returns Ke,Km if True (otherwise Ke) (Ke=electrode filter, Km=membrane filter) ''' K = Karg.copy() def remove_km(RawK, Km): ''' Solves Ke = RawK - Km * Ke/Re for a dendritic Km. ''' Kel = RawK - Km # DOES NOT CONVERGE!! for _ in range(5): # Iterative solution Kel = RawK - convolve(Km, Kel)[:len(Km)] / sum(Kel) # NB: Re=sum(Kel) increases after every iteration return Kel # Fit of the tail to a dendritic kernel to find the membrane time constant t = arange(len(K)) tail = arange(start_tail, len(K)) Ktail = K[tail] f = lambda params:params[0] * ((tail + 1) ** -.5) * exp(-params[1] ** 2 * (tail + 1)) - Ktail #Rtail=sum(Ktail) #g=lambda tau:sum((tail+1)**(-.5)*exp(-(tail+1)/tau)) #J=lambda tau:sum(((tail+1)**(-.5)*exp(-(tail+1)/tau)/g(tau)-Ktail/Rtail)**2) p, _ = optimize.leastsq(f, array([1., .03])) #p=optimize.fminbound(J,.1,10000.) #p=optimize.golden(J) #print "tau_dend=",p*.1 #Km=(t+1)**(-.5)*exp(-(t+1)/p)*Rtail/g(p) print "tau_dend=", .1 / (p[1] ** 2) Km = p[0] * ((t + 1) ** -.5) * exp(-p[1] ** 2 * (t + 1)) K[tail] = Km[tail] # Find the minimum z = optimize.fminbound(lambda x:sum(solve_convolution(K, x * Km)[tail] ** 2), .5, 1.) 
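    # z scales the fitted membrane kernel Km so that, after removing z*Km
    # from the raw kernel via solve_convolution, the residual electrode
    # kernel is as close to zero as possible over the tail indexes (where
    # only the membrane should contribute).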
Ke = solve_convolution(K, z * Km) if full_output: return Ke[:start_tail], z * Km else: return Ke[:start_tail] def electrode_kernel_soma(Karg, start_tail, full_output=False): ''' (For somatic recordings - alternative method) Extracts the electrode kernel Ke from the raw kernel K by removing the membrane kernel, estimated from the indexes >= start_tail of the raw kernel. full_output = returns Ke,Km if True (otherwise Ke) (Ke=electrode filter, Km=membrane filter) ''' K = Karg.copy() def remove_km(RawK, Km): ''' Solves Ke = RawK - Km * Ke/Re for a dendritic Km. ''' Kel = RawK - Km for _ in range(5): # Iterative solution Kel = RawK - convolve(Km, Kel)[:len(Km)] / sum(Kel) # NB: Re=sum(Kel) increases after every iteration return Kel # Fit of the tail to a somatic kernel to find the membrane time constant t = arange(len(K)) tail = arange(start_tail, len(K)) Ktail = K[tail] f = lambda params:params[0] * exp(-params[1] ** 2 * (tail + 1)) - Ktail p, _ = optimize.leastsq(f, array([1., .3])) Km = p[0] * exp(-p[1] ** 2 * (t + 1)) print "tau_soma=", .1 / (p[1] ** 2) K[tail] = Km[tail] # Find the minimum z = optimize.fminbound(lambda x:sum(solve_convolution(K, x * Km)[tail] ** 2), .5, 1.) Ke = solve_convolution(K, z * Km) print "R=", sum(z * p[0] * exp(-p[1] ** 2 * (arange(1000) + 1))) if full_output: return Ke[:start_tail], z * Km else: return Ke[:start_tail] def electrode_kernel(Karg, start_tail, full_output=False): ''' Extracts the electrode kernel Ke from the raw kernel K by removing the membrane kernel, estimated from the indexes >= start_tail of the raw kernel. full_output = returns Ke,Km if True (otherwise Ke) (Ke=electrode filter, Km=membrane filter) Finds automatically whether to use dendritic or somatic kernel. ''' K = Karg.copy() # Fit of the tail to a somatic kernel to find the membrane time constant t = arange(len(K)) tail = arange(start_tail, len(K)) Ktail = K[tail] f = lambda params:params[0] * exp(-params[1] ** 2 * (tail + 1)) - Ktail p, _ = optimize.leastsq(f, array([1., .3])) Km_soma = p[0] * exp(-p[1] ** 2 * (t + 1)) f = lambda params:params[0] * ((tail + 1) ** -.5) * exp(-params[1] ** 2 * (tail + 1)) - Ktail p, _ = optimize.leastsq(f, array([1., .03])) Km_dend = p[0] * ((t + 1) ** -.5) * exp(-p[1] ** 2 * (t + 1)) if sum((Km_soma[tail] - Ktail) ** 2) < sum((Km_dend[tail] - Ktail) ** 2): print "Somatic kernel" Km = Km_soma else: print "Dendritic kernel" Km = Km_dend K[tail] = Km[tail] # Find the minimum z = optimize.fminbound(lambda x:sum(solve_convolution(K, x * Km)[tail] ** 2), .5, 1.) Ke = solve_convolution(K, z * Km) if full_output: return Ke[:start_tail], z * Km else: return Ke[:start_tail] def AEC_compensate(v, i, ke): ''' Active Electrode Compensation, done offline. v = recorded potential i = injected current ke = electrode kernel Returns the compensated potential. ''' return v - convolve(ke, i)[:-(len(ke) - 1)] def levinson_durbin(a, y): ''' Solves AX=Y where A is a symetrical Toeplitz matrix with coefficients given by the vector a (a = first row = first column of A). ''' b = 0 * a x = 0 * a b[0] = 1. / a[0] x[0] = y[0] * b[0] for i in range(1, len(a)): alpha = sum(a[1:i + 1] * b[:i]) u = 1. / (1 - alpha ** 2) v = -alpha * u tmp = b[i - 1] if i > 1: b[1:i] = v * b[i - 2::-1] + u * b[:i - 1] b[0] = v * tmp b[i] = u * tmp beta = y[i] - sum(a[i:0:-1] * x[:i]) x += beta * b return x brian-1.3.1/brian/library/electrophysiology/electrophysiology_models.py000066400000000000000000000277021167451777000266620ustar00rootroot00000000000000''' Electrophysiology models for Brian. R. 
Brette 2008. Contains: * Electrode models * Current clamp and voltage amplifiers * DCC and SEVC amplifiers (discontinuous) * AEC (Active Electrode Compensation) * Acquisition board ''' from brian.units import amp, check_units, volt, farad from brian.equations import Equations, unique_id from operator import isSequenceType from brian.units import ohm, Mohm from brian.stdunits import pF, ms, nA, mV, nS from brian.neurongroup import NeuronGroup from scipy import zeros, array, optimize, mean, arange, diff, rand, exp, sum, convolve, eye, linalg, sqrt from brian.clock import Clock from electrode_compensation import * __all__ = ['electrode', 'current_clamp', 'voltage_clamp', 'DCC', 'SEVC', 'AcquisitionBoard', 'AEC', 'VC_AEC'] ''' ------------ Electrodes ------------ ''' # TODO: sharp electrode model (cone) # No unit checking of Re and Ce because they can be lists @check_units(v_rec=volt, vm=volt, i_inj=amp, i_cmd=amp) def electrode(Re, Ce, v_el='v_el', vm='vm', i_inj='i_inj', i_cmd='i_cmd'): ''' An intracellular electrode modeled as an RC circuit, or multiple RC circuits in series (if Re, Ce are lists). v_el = electrode (=recording) potential vm = membrane potential i_inj = current entering the membrane i_cmd = electrode command current (None = no injection) Returns an Equations() object. ''' if isSequenceType(Re): if len(Re) != len(Ce) or len(Re) < 2: raise TypeError, "Re and Ce must have the same length" v_mid, i_mid = [], [] for i in range(len(Re) - 1): v_mid.append('v_mid_' + str(i) + unique_id()) i_mid.append('i_mid_' + str(i) + unique_id()) eqs = electrode(Re[0], Ce[0], v_mid[0], vm, i_inj, i_mid[0]) for i in range(1, len(Re) - 1): eqs + electrode(Re[i], Ce[i], v_mid[i], v_mid[i - 1], i_mid[i - 1], i_mid[i]) eqs += electrode(Re[-1], Ce[-1], v_el, v_mid[-1], i_mid[-1], i_cmd) return eqs else: if Ce > 0 * farad: return Equations(''' dvr/dt = ((vm-vr)/Re+ic)/Ce : mV ie = (vr-vm)/Re : nA''', vr=v_el, vm=vm, ic=i_cmd, ie=i_inj, \ Re=Re, Ce=Ce) else: # ideal electrode - pb here return Equations(''' vr = vm+Re*ic : volt ie = ic : amp''', vr=v_el, vm=vm, ic=i_cmd, ie=i_inj) ''' ------------ Amplifiers ------------ ''' def voltage_clamp(vm='vm', v_cmd='v_cmd', i_rec='i_rec', i_inj='i_inj', Re=20 * Mohm, Rs=0 * Mohm, tau_u=1 * ms): ''' Continuous voltage-clamp amplifier + ideal electrode (= input capacitance is already neutralized). vm = membrane potential (or electrode) variable v_cmd = command potential i_inj = injected current (into the neuron) i_rec = recorded current (= -i_inj) Re = electrode resistance Rs = series resistance compensation tau_u = delay of series compensation (for stability) ''' return Equations(''' Irec=-I : amp I=(Vc+U-Vr)/Re : amp dU/dt=(Rs*I-U)/tau : volt ''', Vr=vm, Vc=v_cmd, I=i_inj, Rs=Rs, tau=tau_u, Irec=i_rec, Re=Re) # TODO: Re, Ce as lists def current_clamp(vm='vm', i_inj='i_inj', v_rec='v_rec', i_cmd='i_cmd', Re=80 * Mohm, Ce=4 * pF, bridge=0 * ohm, capa_comp=0 * farad, v_uncomp=None): ''' Continuous current-clamp amplifier + electrode. 
vm = membrane potential (or electrode) variable i_inj = injected current (into the neuron) v_rec = recorded potential i_cmd = command current bridge = bridge resistance compensation capa_comp = capacitance neutralization Re = electrode resistance Ce = electrode capacitance (input capacitance) v_uncomp = uncompensated potential (raw measured potential) ''' if capa_comp != Ce: return Equations(''' Vr=U-R*Ic : volt I=(U-V)/Re : amp dU/dt=(Ic-I)/(Ce-CC) : volt ''', Vr=v_rec, V=vm, I=i_inj, Ic=i_cmd, R=bridge, Ce=Ce, CC=capa_comp, U=v_uncomp, Re=Re) else: return Equations(''' Vr=V+(Re-R)*I : volt I=Ic : amp # not exactly an alias because the units of Ic is unknown ''', Vr=v_rec, V=vm, I=i_inj, Ic=i_cmd, R=bridge, Re=Re) class AcquisitionBoard(NeuronGroup): ''' Digital acquisition board (DSP). Use: board=AcquisitionBoard(P=neuron,V='V',I='I',clock) where P = neuron group V = potential variable name (in P) I = current variable name (in P) clock = acquisition clock Recording: vm=board.record Injecting: board.command=... Injects I, records V. ''' def __init__(self, P, V, I, clock=None): eqs = Equations(''' record : units_record command : units_command ''', units_record=P.unit(V), units_command=P.unit(I)) NeuronGroup.__init__(self, len(P), model=eqs, clock=clock) self._P = P self._V = V self._I = I def update(self): self.record = self._P.state(self._V) # Record self._P.state(self._I)[:] = self.command # Inject class DCC(AcquisitionBoard): ''' Discontinuous current-clamp. Use: board=DCC(P=neuron,V='V',I='I',frequency=2*kHz) where P = neuron group V = potential variable name (in P) I = current variable name (in P) frequency = sampling frequency Recording: vm=board.record Injecting: board.command=I ''' def __init__(self, P, V, I, frequency): self.clock = Clock(dt=1. / (3. * frequency)) AcquisitionBoard.__init__(self, P, V, I, self.clock) self._cycle = 0 def set_frequency(self, frequency): ''' Sets the sampling frequency. ''' self.clock.dt = 1. / (3. * frequency) def update(self): if self._cycle == 0: self.record = self._P.state(self._V) # Record self._P.state(self._I)[:] = 3 * self.command # Inject else: self._P.state(self._I)[:] = 0 #*nA self._cycle = (self._cycle + 1) % 3 class SEVC(DCC): ''' Discontinuous voltage-clamp. Use: board=SEVC(P=neuron,record='V',command='I',frequency=2*kHz,gain=10*nS) where P = neuron group V = potential variable name (in P) I = current variable name (in P) frequency = sampling frequency gain = feedback gain gain2 = control gain (integral controller) Recording: i=board.record Setting the clamp potential: board.command=-20*mV ''' def __init__(self, P, V, I, frequency, gain=100 * nS, gain2=0 * nS / ms): DCC.__init__(self, P, V, I, frequency) self._J = zeros(len(P)) # finer control self._gain = gain self._gain2 = gain2 def update(self): if self._cycle == 0: self._J += self.clock._dt * self._gain2 * (self._P.state(self._V) - self.command) self.record = self._gain * (self._P.state(self._V) - self.command) + self._J self._P.state(self._I)[:] = -3 * self.record # Inject else: self._P.state(self._I)[:] = 0 #*nA self._cycle = (self._cycle + 1) % 3 ''' ------------------------------------- Active Electrode Compensation (AEC) The technique was presented in the following paper: High-resolution intracellular recordings using a real-time computational model of the electrode R. Brette, Z. Piwkowska, C. Monier, M. Rudolph-Lilith, J. Fournier, M. Levy, Y. Fregnac, T. Bal, A. Destexhe Neuron (2008) 59(3):379-91. 
------------------------------------- ''' class AEC(AcquisitionBoard): """ An acquisition board with AEC. Warning: only works with 1 neuron (not N). Use: board=AEC(neuron,'V','I',clock) where P = neuron group V = potential variable name (in P) I = current variable name (in P) clock = acquisition clock Recording: vm=board.V Injecting: board.command(I) """ def __init__(self, P, V, I, clock=None): AcquisitionBoard.__init__(self, P, V, I, clock=clock) self._estimation = False self._compensation = False self.Ke = None self._Vrec = [] self._Irec = [] def start_injection(self, amp=.5 * nA, DC=0 * nA): ''' Start white noise injection for kernel estimation. amp = current amplitude DC = additional DC current ''' self._amp, self._DC = amp, DC self._estimation = True def stop_injection(self): ''' Stop white noise injection. ''' self._estimation = False self.command = 0 #*nA def estimate(self, ksize=150, ktail=50, dendritic=False): ''' Estimate electrode kernel Ke (after injection) ksize = kernel size (in bins) ktail = tail parameter (in bins), indicates the end of the electrode kernel ''' self._ksize = ksize self._ktail = ktail # Calculate Ke vrec = array(self._Vrec) / mV irec = array(self._Irec) / nA K = full_kernel(vrec, irec, self._ksize) self._K = K * Mohm if dendritic: self.Ke = electrode_kernel_dendrite(K, ktail) * Mohm else: self.Ke = electrode_kernel_soma(K, ktail) * Mohm self._Vrec = [] self._Irec = [] def switch_on(self, Ke=None): ''' Switch compensation on, with kernel Ke. (If not given: use last kernel) ''' self._compensation = True self._lastI = zeros(self._ktail) self._posI = 0 def switch_off(self): ''' Switch compensation off. ''' self._compensation = False def update(self): AcquisitionBoard.update(self) if self._estimation: I = 2. * (rand() - .5) * self._amp + self._DC self.command = I # Record self._Vrec.append(self.record[0]) self._Irec.append(I) if self._compensation: # Compensate self._lastI[self._posI] = self.command[0] self._posI = (self._posI - 1) % self._ktail self.record[0] = self.record[0] - sum(self.Ke * self._lastI[range(self._posI, self._ktail) + range(0, self._posI)]) class VC_AEC(AEC): def __init__(self, P, V, I, gain=50 * nS, gain2=0 * nS / ms, clock=None): AEC.__init__(self, P, V, I, clock=clock) self._gain = gain self._gain2 = gain2 self._J = zeros(len(P)) def stop_injection(self): ''' Stop white noise injection. ''' self._estimation = False self.record = 0 #*nA def update(self): V = self._P.state(self._V) # Record self._P.state(self._I)[:] = -self.record # Inject if self._estimation: I = 2. * (rand() - .5) * self._amp + self._DC self.record = -I # Record self._Vrec.append(V[0]) self._Irec.append(I) if self._compensation: # Compensate self._lastI[self._posI] = -self.record[0] self._posI = (self._posI - 1) % self._ktail V[0] = V[0] - sum(self.Ke * self._lastI[range(self._posI, self._ktail) + range(0, self._posI)]) self._J += self.clock._dt * self._gain2 * (self.command - V) self.record = -(self._gain * (self.command - V) + self._J) if __name__ == '__main__': from brian import * taum = 20 * ms gl = 20 * nS Cm = taum * gl eqs = Equations('dv/dt=(-gl*v+i_inj)/Cm : volt') + electrode(50 * Mohm, 10 * pF, vm='v', i_cmd=.5 * nA) neuron = NeuronGroup(1, model=eqs) M = StateMonitor(neuron, 'v_el', record=True) run(100 * ms) plot(M.times / ms, M[0] / mV) show() brian-1.3.1/brian/library/electrophysiology/trace_analysis.py000066400000000000000000000202751167451777000245320ustar00rootroot00000000000000""" Analysis of voltage traces. Mainly about analysis of spike shapes. 
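Typical use: estimate a spike criterion with find_spike_criterion, locate
spikes with spike_peaks / spike_onsets, then characterise them with
spike_shape, spike_duration, slope_threshold, vm_threshold, etc., or strip
them from the trace with spike_mask.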
""" from numpy import * from scipy import optimize from scipy.signal import lfilter __all__ = ['find_spike_criterion', 'spike_peaks', 'spike_onsets', 'find_onset_criterion', 'slope_threshold', 'vm_threshold', 'spike_shape', 'spike_duration', 'reset_potential', 'spike_mask', 'lowpass', 'spike_onsets_dv2', 'spike_onsets_dv3'] def lowpass(x, tau, dt=1.): """ Low-pass filters x(t) with time constant tau. """ a = exp(-dt / tau) return lfilter([1. - a], [1., -a], x) def spike_duration(v, onsets=None, full=False): ''' Average spike duration. Default: time from onset to next minimum. If full is True: * Time from onset to peak * Time from onset down to same value (spike width) * Total duration from onset to next minimum * Standard deviations for these 3 values ''' if onsets is None: onsets = spike_onsets(v) dv = diff(v) total_duration = [] time_to_peak = [] spike_width = [] for i, spike in enumerate(onsets): if i == len(onsets) - 1: next_spike = len(dv) else: next_spike = onsets[i + 1] total_duration.append(((dv[spike:next_spike - 1] <= 0) & (dv[spike + 1:next_spike] > 0)).argmax()) time_to_peak.append((dv[spike:next_spike] <= 0).argmax()) spike_width.append((v[spike + 1:next_spike] <= v[spike]).argmax()) if full: return mean(time_to_peak), mean(spike_width), mean(total_duration), \ std(time_to_peak), std(spike_width), std(total_duration) else: return mean(total_duration) def reset_potential(v, peaks=None, full=False): ''' Average reset potential, calculated as next minimum after spike peak. If full is True, also returns the standard deviation. ''' if peaks is None: peaks = spike_peaks(v) dv = diff(v) reset = [] for i, spike in enumerate(peaks): if i == len(peaks) - 1: next_spike = len(dv) else: next_spike = peaks[i + 1] reset.append(v[spike + ((dv[spike:next_spike - 1] <= 0) & (dv[spike + 1:next_spike] > 0)).argmax() + 1]) if full: return mean(reset), std(reset) else: return mean(reset) def find_spike_criterion(v): ''' This is a rather complex method to determine above which voltage vc one should consider that a spike is produced. We look in phase space (v,dv/dt), at the horizontal axis dv/dt=0. We look for a voltage for which the voltage cannot be still. Algorithm: find the largest interval of voltages for which there is no sign change of dv/dt, and pick the middle. ''' # Rather: find the lowest maximum? dv = diff(v) sign_changes = ((dv[1:] * dv[:-1]) <= 0).nonzero()[0] vc = v[sign_changes + 1] i = argmax(diff(vc)) return .5 * (vc[i] + vc[i + 1]) def spike_peaks(v, vc=None): ''' Returns the indexes of spike peaks. vc is the spike criterion (voltage above which we consider we have a spike) ''' # Possibly: add refractory criterion vc = vc or find_spike_criterion(v) dv = diff(v) spikes = ((v[1:] > vc) & (v[:-1] < vc)).nonzero()[0] peaks = [] if len(spikes) > 0: for i in range(len(spikes) - 1): peaks.append(spikes[i] + (dv[spikes[i]:spikes[i + 1]] <= 0).nonzero()[0][0]) decreasing = (dv[spikes[-1]:] <= 0).nonzero()[0] if len(decreasing) > 0: peaks.append(spikes[-1] + decreasing[0]) else: peaks.append(len(dv)) # last element (maybe should be deleted?) return array(peaks) def spike_onsets(v, criterion=None, vc=None): ''' Returns the indexes of spike onsets. vc is the spike criterion (voltage above which we consider we have a spike). First derivative criterion (dv>criterion). 
''' vc = vc or find_spike_criterion(v) criterion = criterion or find_onset_criterion(v, vc=vc) peaks = spike_peaks(v, vc) dv = diff(v) d2v = diff(dv) previous_i = 0 j = 0 l = [] for i in peaks: # Find last peak of derivative (commented: point where derivative is largest) # inflexion = previous_i + argmax(dv[previous_i:i]) inflexion=where(d2v[previous_i:i-2]*d2v[previous_i+1:i-1]<0)[0][-1]+2+previous_i j += max((dv[j:inflexion] < criterion).nonzero()[0]) + 1 l.append(j) previous_i = i return array(l) def spike_onsets_dv2(v, vc=None): ''' Returns the indexes of spike onsets. vc is the spike criterion (voltage above which we consider we have a spike). Maximum of 2nd derivative. DOESN'T SEEM GOOD ''' vc = vc or find_spike_criterion(v) peaks = spike_peaks(v, vc) d2v = diff(diff(v)) d3v = diff(d2v) # I'm guessing you have to shift v by 1/2 per differentiation j = 0 l = [] previous_i=0 for i in peaks: # Find peak of derivative inflexion=where(d2v[previous_i:i-1]*d2v[previous_i+1:i]<0)[0][-1]+2+previous_i j += max(((d3v[j:inflexion - 1] > 0) & (d3v[j + 1:inflexion] < 0)).nonzero()[0]) # +2? l.append(j) previous_i=i return array(l) def spike_onsets_dv3(v, vc=None): ''' Returns the indexes of spike onsets. vc is the spike criterion (voltage above which we consider we have a spike). Maximum of 3rd derivative. DOESN'T SEEM GOOD ''' vc = vc or find_spike_criterion(v) peaks = spike_peaks(v, vc) dv4 = diff(diff(diff(diff(v)))) j = 0 l = [] for i in peaks: # Find peak of derivative (alternatively: last sign change of d2v, i.e. last local peak) j += max(((dv4[j:i - 1] > 0) & (dv4[j + 1:i] < 0)).nonzero()[0]) + 3 l.append(j) return array(l) def find_onset_criterion(v, guess=0.0001, vc=None): ''' Finds the best criterion on dv/dt to determine spike onsets, based on minimum threshold variability. ''' vc = vc or find_spike_criterion(v) return float(optimize.fmin(lambda x:std(v[spike_onsets(v, x, vc)]), guess, disp=0)) def spike_shape(v, onsets=None, before=100, after=100): ''' Spike shape (before peaks). Aligned on spike onset by default (to align on peaks, just pass onsets=peaks). onsets: spike onset times before: number of timesteps before onset after: number of timesteps after onset ''' if onsets is None: onsets = spike_onsets(v) shape = zeros(after + before) for i in onsets: v0 = v[max(0, i - before):i + after] shape[len(shape) - len(v0):] += v0 return shape / len(onsets) def vm_threshold(v, onsets=None, T=None): ''' Average membrane potential before spike threshold (T steps). ''' if onsets is None: onsets = spike_onsets(v) l = [] for i in onsets: l.append(mean(v[max(0, i - T):i])) return array(l) def slope_threshold(v, onsets=None, T=None): ''' Slope of membrane potential before spike threshold (T steps). Returns all slopes as an array. ''' if onsets is None: onsets = spike_onsets(v) l = [] for i in onsets: v0 = v[max(0, i - T):i] M = len(v0) x = arange(M) - M + 1 slope = sum((v0 - v[i]) * x) / sum(x ** 2) l.append(slope) return array(l) def spike_mask(v, spikes=None, T=None): ''' Returns an array of booleans which are True in spikes. 
spikes: starting points of spikes (default: onsets) T: duration (default: next minimum) Ex: v=v[spike_mask(v)] # only spikes v=v[-spike_mask(v)] # subthreshold trace ''' if spikes is None: spikes = spike_onsets(v) ind = (v == 1e9) if T is None: dv = diff(v) for i, spike in enumerate(spikes): if i == len(spikes) - 1: next_spike = len(dv) else: next_spike = spikes[i + 1] T = ((dv[spike:next_spike - 1] <= 0) & (dv[spike + 1:next_spike] > 0)).argmax() ind[spike:spike + T + 1] = True else: # fixed duration for i in spikes: ind[i:i + T] = True return ind brian-1.3.1/brian/library/ionic_currents.py000066400000000000000000000035561167451777000207760ustar00rootroot00000000000000''' Ionic currents for Brian ''' from brian.units import check_units, siemens, volt from brian.membrane_equations import Current from brian.equations import unique_id @check_units(El=volt) def leak_current(gl, El, current_name=None): ''' Leak current: gl*(El-vm) ''' current_name = current_name or unique_id() return Current('I=gl*(El-vm) : amp', gl=gl, El=El, I=current_name, current_name=current_name) #check_units(EK=volt) def K_current_HH(gmax, EK, current_name=None): ''' Hodkin-Huxley K+ current. Resting potential is 0 mV. ''' current_name = current_name or unique_id() return Current(''' I=gmax*n**4*(EK-vm) : amp dn/dt=alphan*(1-n)-betan*n : 1 alphan=.01*(10*mV-vm)/(exp(1-.1*vm/mV)-1)/mV/ms : Hz betan=.125*exp(-.0125*vm/mV)/ms : Hz ''', gmax=gmax, EK=EK, I=current_name, current_name=current_name) # 2 problems here: # * The current variable is determined thanks to the units, # so it doesn't work if units are off. # * Variables n, alphan and betan may conflict with other variables # (e.g. in other currents). # # Solutions: # * Set current_name explicitly. If None, generate an id or use I_K. # * Anonymise names with unique id, on option. This could be an option in # Current(). #check_units(ENa=volt) def Na_current_HH(gmax, ENa, current_name=None): ''' Hodkin-Huxley Na+ current. 
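    As for the K+ current above, the rate functions use the original
    Hodgkin-Huxley convention in which the resting potential is 0 mV.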
''' current_name = current_name or unique_id() return Current(''' I=gmax*m**3*h*(ENa-vm) : amp dm/dt=alpham*(1-m)-betam*m : 1 dh/dt=alphah*(1-h)-betah*h : 1 alpham=.1*(25*mV-vm)/(exp(2.5-.1*vm/mV)-1)/mV/ms : Hz betam=4*exp(-.0556*vm/mV)/ms : Hz alphah=.07*exp(-.05*vm/mV)/ms : Hz betah=1./(1+exp(3.-.1*vm/mV))/ms : Hz ''', gmax=gmax, ENa=ENa, I=current_name, current_name=current_name) brian-1.3.1/brian/library/modelfitting/000077500000000000000000000000001167451777000200525ustar00rootroot00000000000000brian-1.3.1/brian/library/modelfitting/__init__.py000066400000000000000000000004201167451777000221570ustar00rootroot00000000000000from modelfitting import * from brian.experimental.codegen.integration_schemes import * #__all__ = ['modelfitting', 'print_table', 'get_spikes', 'predict', # 'PSO', 'GA','CMAES', # 'euler_scheme', 'rk2_scheme', 'exp_euler_scheme' # ] brian-1.3.1/brian/library/modelfitting/gpu_modelfitting.py000066400000000000000000000667561167451777000240100ustar00rootroot00000000000000from brian import * from playdoh import * import brian.optimiser as optimiser #import pycuda.autoinit as autoinit import pycuda.driver as drv import pycuda from pycuda.gpuarray import GPUArray from pycuda import gpuarray try: from pycuda.compiler import SourceModule except ImportError: from pycuda.driver import SourceModule from numpy import * from brian.experimental.codegen.integration_schemes import * from brian.experimental.codegen.codegen_gpu import * import re __all__ = ['GPUModelFitting', 'euler_scheme', 'rk2_scheme', 'exp_euler_scheme' ] if drv.get_version() == (2, 0, 0): # cuda version default_precision = 'float' elif drv.get_version() > (2, 0, 0): default_precision = 'double' else: raise Exception, "CUDA 2.0 required" class ModelfittingGPUCodeGenerator(GPUCodeGenerator): def generate(self, eqs, scheme): vartype = self.vartype() code = '' for line in self.scheme(eqs, scheme).split('\n'): line = line.strip() if line: code += ' ' + line + '\n' return code modelfitting_kernel_template = """ __global__ void runsim( int Tstart, int Tend, // Start, end time as integer (t=T*dt) // State variables %SCALAR% *state_vars, // State variables are offset from this double *I_arr, // Input current int *I_arr_offset, // Input current offset (for separate input // currents for each neuron) int *spikecount, // Number of spikes produced by each neuron int *num_coincidences, // Count of coincidences for each neuron int *spiketimes, // Array of all spike times as integers (begin and // end each train with large negative value) int *spiketime_indices, // Pointer into above array for each neuron int *spikedelay_arr, // Integer delay for each spike int *refractory_arr, // Integer refractory times int *next_allowed_spiketime_arr,// Integer time of the next allowed spike (for refractoriness) int onset // Time onset (only count spikes from here onwards) %COINCIDENCE_COUNT_DECLARE_EXTRA_STATE_VARIABLES% ) { const int neuron_index = blockIdx.x * blockDim.x + threadIdx.x; if(neuron_index>=%NUM_NEURONS%) return; %EXTRACT_STATE_VARIABLES% // Load variables at start %LOAD_VARIABLES% int spiketime_index = spiketime_indices[neuron_index]; int last_spike_time = spiketimes[spiketime_index]; int next_spike_time = spiketimes[spiketime_index+1]; %COINCIDENCE_COUNT_INIT% int ncoinc = num_coincidences[neuron_index]; int nspikes = spikecount[neuron_index]; int I_offset = I_arr_offset[neuron_index]; int spikedelay = spikedelay_arr[neuron_index]; const int refractory = refractory_arr[neuron_index]; int next_allowed_spiketime = 
next_allowed_spiketime_arr[neuron_index]; for(int T=Tstart; T=onset); // Reset if(has_spiked||is_refractory) { %RESET% } if(has_spiked) next_allowed_spiketime = T+refractory; // Coincidence counter const int Tspike = T+spikedelay; %COINCIDENCE_COUNT_TEST% if(Tspike>=next_spike_time){ spiketime_index++; last_spike_time = next_spike_time; next_spike_time = spiketimes[spiketime_index+1]; %COINCIDENCE_COUNT_NEXT% } } // Store variables at end %STORE_VARIABLES% %COINCIDENCE_COUNT_STORE_VARIABLES% next_allowed_spiketime_arr[neuron_index] = next_allowed_spiketime; spiketime_indices[neuron_index] = spiketime_index; num_coincidences[neuron_index] = ncoinc; spikecount[neuron_index] = nspikes; } """ coincidence_counting_algorithm_src = { 'inclusive':{ '%COINCIDENCE_COUNT_DECLARE_EXTRA_STATE_VARIABLES%':'', '%COINCIDENCE_COUNT_INIT%':'', '%COINCIDENCE_COUNT_TEST%':''' ncoinc += has_spiked && (((last_spike_time+%DELTA%)>=Tspike) || ((next_spike_time-%DELTA%)<=Tspike)); ''', '%COINCIDENCE_COUNT_NEXT%':'', '%COINCIDENCE_COUNT_STORE_VARIABLES%':'', }, 'exclusive':{ '%COINCIDENCE_COUNT_DECLARE_EXTRA_STATE_VARIABLES%':''', bool *last_spike_allowed_arr, bool *next_spike_allowed_arr ''', '%COINCIDENCE_COUNT_INIT%':''' bool last_spike_allowed = last_spike_allowed_arr[neuron_index]; bool next_spike_allowed = next_spike_allowed_arr[neuron_index]; ''', '%COINCIDENCE_COUNT_TEST%':''' bool near_last_spike = last_spike_time+%DELTA%>=Tspike; bool near_next_spike = next_spike_time-%DELTA%<=Tspike; near_last_spike = near_last_spike && has_spiked; near_next_spike = near_next_spike && has_spiked; ncoinc += (near_last_spike&&last_spike_allowed) || (near_next_spike&&next_spike_allowed); bool near_both_allowed = (near_last_spike&&last_spike_allowed) && (near_next_spike&&next_spike_allowed); last_spike_allowed = last_spike_allowed && !near_last_spike; next_spike_allowed = (next_spike_allowed && !near_next_spike) || near_both_allowed; ''', '%COINCIDENCE_COUNT_NEXT%':''' last_spike_allowed = next_spike_allowed; next_spike_allowed = true; ''', '%COINCIDENCE_COUNT_STORE_VARIABLES%':''' last_spike_allowed_arr[neuron_index] = last_spike_allowed; next_spike_allowed_arr[neuron_index] = next_spike_allowed; ''', }, } def generate_modelfitting_kernel_src(G, eqs, threshold, reset, dt, num_neurons, delta, coincidence_count_algorithm='exclusive', precision=default_precision, scheme=euler_scheme ): eqs.prepare() src = modelfitting_kernel_template # Substitute state variable declarations indexvar = dict((v, k) for k, v in G.var_index.iteritems() if isinstance(k, str) and k!='I') extractions = '\n '.join('%SCALAR% *'+name+'_arr = state_vars+'+str(i*num_neurons)+';' for i, name in indexvar.iteritems()) src = src.replace('%EXTRACT_STATE_VARIABLES%', extractions) # Substitute load variables loadvar_names = eqs._diffeq_names + [] loadvar_names.remove('I') # I is assumed to be a parameter and loaded per time step loadvars = '\n '.join('%SCALAR% ' + name + ' = ' + name + '_arr[neuron_index];' for name in loadvar_names) src = src.replace('%LOAD_VARIABLES%', loadvars) # Substitute save variables savevars = '\n '.join(name + '_arr[neuron_index] = ' + name + ';' for name in loadvar_names) src = src.replace('%STORE_VARIABLES%', savevars) # Substitute threshold src = src.replace('%THRESHOLD%', threshold) # Substitute reset reset = '\n '.join(line.strip() + ';' for line in reset.split('\n') if line.strip()) src = src.replace('%RESET%', reset) # Substitute state update sulines = ModelfittingGPUCodeGenerator(dtype=precision).generate(eqs, scheme) sulines = 
re.sub(r'\bdt\b', '%DT%', sulines) src = src.replace('%STATE_UPDATE%', sulines.strip()) # Substitute coincidence counting algorithm ccalgo = coincidence_counting_algorithm_src[coincidence_count_algorithm] for search, replace in ccalgo.iteritems(): src = src.replace(search, replace) # Substitute dt src = src.replace('%DT%', str(float(dt))) # Substitute SCALAR src = src.replace('%SCALAR%', precision) # Substitute number of neurons src = src.replace('%NUM_NEURONS%', str(num_neurons)) # Substitute delta, the coincidence window half-width src = src.replace('%DELTA%', str(int(rint(delta / dt)))) return src class GPUModelFitting(object): ''' Model fitting class to interface with GPU Initialisation arguments: ``G`` The initialised NeuronGroup to work from. ``eqs`` The equations defining the NeuronGroup. ``I``, ``I_offset`` The current array and offsets (see below). ``spiketimes``, ``spiketimes_offset`` The spike times array and offsets (see below). ``spikedelays``, Array of delays for each neuron. ``refractory``, Array of refractory periods, or a single value. ``delta`` The half-width of the coincidence window. ``precision`` Should be 'float' or 'double' - by default the highest precision your GPU supports. ``coincidence_count_algorithm`` Should be 'inclusive' if multiple predicted spikes can match one target spike, or 'exclusive' (default) if multiple predicted spikes can match only one target spike (earliest spikes are matched first). Methods: ``reinit_vars(I, I_offset, spiketimes, spiketimes_offset, spikedelays)`` Reinitialises all the variables, counters, etc. The state variable values are copied from the NeuronGroup G again, and the variables I, I_offset, etc. are copied from the method arguments. ``launch(duration[, stepsize])`` Runs the kernel on the GPU for simulation time duration. If ``stepsize`` is given, the simulation is broken into pieces of that size. This is useful on Windows because driver limitations mean that individual GPU kernel launches cannot last more than a few seconds without causing a crash. Attributes: ``coincidence_count`` An array of the number of coincidences counted for each neuron. ``spike_count`` An array of the number of spikes counted for each neuron. **Details** The equations for the NeuronGroup can be anything, but they will be solved with the Euler method. One restriction is that there must be a parameter named I which is the time varying input current. (TODO: support for multiple input currents? multiple names?) The current I is passed to the GPU in the following way. Define a single 1D array I which contains all the time varying current arrays one after the other. Now define an array I_offset of integers so that neuron i should see currents: I[I_offset[i]], I[I_offset[i]+1], I[I_offset[i]+2], etc. The experimentally recorded spike times should be passed in a similar way, put all the spike times in a single array and pass an offsets array spiketimes_offset. One difference is that it is essential that each spike train with the spiketimes array should begin with a spike at a large negative time (e.g. -1*second) and end with a spike that is a long time after the duration of the run (e.g. duration+1*second). The GPU uses this to mark the beginning and end of the train rather than storing the number of spikes for each train. 
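    For example (an illustrative sketch only, with made-up spike trains),
    two target trains could be packed as::
    
        train0 = [10*ms, 20*ms]; train1 = [15*ms]
        spiketimes = hstack(([-1*second], train0, [duration+1*second],
                             [-1*second], train1, [duration+1*second]))
        spiketimes_offset = array([0, len(train0)+2])  # start index per neuron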
''' def __init__(self, G, eqs, I, I_offset, spiketimes, spiketimes_offset, spikedelays, refractory, delta, onset=0 * ms, coincidence_count_algorithm='exclusive', precision=default_precision, scheme=euler_scheme ): eqs.prepare() self.precision = precision if precision == 'double': self.mydtype = float64 else: self.mydtype = float32 self.N = N = len(G) self.dt = dt = G.clock.dt self.delta = delta self.onset = onset self.eqs = eqs self.G = G self.coincidence_count_algorithm = coincidence_count_algorithm threshold = G._threshold if threshold.__class__ is Threshold: state = threshold.state if isinstance(state, int): state = eqs._diffeq_names[state] threshold = state + '>' + str(float(threshold.threshold)) elif isinstance(threshold, VariableThreshold): state = threshold.state if isinstance(state, int): state = eqs._diffeq_names[state] threshold = state + '>' + threshold.threshold_state elif isinstance(threshold, StringThreshold): namespace = threshold._namespace expr = threshold._expr all_variables = eqs._eq_names + eqs._diffeq_names + eqs._alias.keys() + ['t'] expr = optimiser.freeze(expr, all_variables, namespace) threshold = expr else: raise ValueError('Threshold must be constant, VariableThreshold or StringThreshold.') self.threshold = threshold reset = G._resetfun if reset.__class__ is Reset: state = reset.state if isinstance(state, int): state = eqs._diffeq_names[state] reset = state + ' = ' + str(float(reset.resetvalue)) elif isinstance(reset, VariableReset): state = reset.state if isinstance(state, int): state = eqs._diffeq_names[state] reset = state + ' = ' + reset.resetvaluestate elif isinstance(reset, StringReset): namespace = reset._namespace expr = reset._expr all_variables = eqs._eq_names + eqs._diffeq_names + eqs._alias.keys() + ['t'] expr = optimiser.freeze(expr, all_variables, namespace) reset = expr self.reset = reset self.kernel_src = generate_modelfitting_kernel_src( self.G, eqs, threshold, reset, dt, N, delta, coincidence_count_algorithm=coincidence_count_algorithm, precision=precision, scheme=scheme) def reinit_vars(self, I, I_offset, spiketimes, spiketimes_offset, spikedelays, refractory): self.kernel_module = SourceModule(self.kernel_src) self.kernel_func = self.kernel_module.get_function('runsim') blocksize = 128 try: blocksize = self.kernel_func.get_attribute(pycuda.driver.function_attribute.MAX_THREADS_PER_BLOCK) except: # above won't work unless CUDA>=2.2 pass self.block = (blocksize, 1, 1) self.grid = (int(ceil(float(self.N) / blocksize)), 1) self.kernel_func_kwds = {'block':self.block, 'grid':self.grid} mydtype = self.mydtype N = self.N eqs = self.eqs statevars_arr = gpuarray.to_gpu(array(self.G._S.flatten(), dtype=mydtype)) self.I = gpuarray.to_gpu(array(I, dtype=mydtype)) self.statevars_arr = statevars_arr self.I_offset = gpuarray.to_gpu(array(I_offset, dtype=int32)) self.spiketimes = gpuarray.to_gpu(array(rint(spiketimes / self.dt), dtype=int32)) self.spiketime_indices = gpuarray.to_gpu(array(spiketimes_offset, dtype=int32)) self.num_coincidences = gpuarray.to_gpu(zeros(N, dtype=int32)) self.spikecount = gpuarray.to_gpu(zeros(N, dtype=int32)) self.spikedelay_arr = gpuarray.to_gpu(array(rint(spikedelays / self.dt), dtype=int32)) if isinstance(refractory, float): refractory = refractory*ones(N) self.refractory_arr = gpuarray.to_gpu(array(rint(refractory / self.dt), dtype=int32)) self.next_allowed_spiketime_arr = gpuarray.to_gpu(-ones(N, dtype=int32)) self.next_spike_allowed_arr = gpuarray.to_gpu(ones(N, dtype=bool)) self.last_spike_allowed_arr = 
gpuarray.to_gpu(zeros(N, dtype=bool)) self.kernel_func_args = [self.statevars_arr, self.I, self.I_offset, self.spikecount, self.num_coincidences, self.spiketimes, self.spiketime_indices, self.spikedelay_arr, self.refractory_arr, self.next_allowed_spiketime_arr, int32(rint(self.onset / self.dt))] if self.coincidence_count_algorithm == 'exclusive': self.kernel_func_args += [self.last_spike_allowed_arr, self.next_spike_allowed_arr] def launch(self, duration, stepsize=1 * second): if stepsize is None: self.kernel_func(int32(0), int32(duration / self.dt), *self.kernel_func_args, **self.kernel_func_kwds) pycuda.context.synchronize() else: stepsize = int(stepsize / self.dt) duration = int(duration / self.dt) for Tstart in xrange(0, duration, stepsize): Tend = Tstart + min(stepsize, duration - Tstart) self.kernel_func(int32(Tstart), int32(Tend), *self.kernel_func_args, **self.kernel_func_kwds) pycuda.context.synchronize() def get_coincidence_count(self): return self.num_coincidences.get() def get_spike_count(self): return self.spikecount.get() coincidence_count = property(fget=lambda self:self.get_coincidence_count()) spike_count = property(fget=lambda self:self.get_spike_count()) if __name__ == '__main__': import time from matplotlib.cm import jet from playdoh import * initialise_cuda() set_gpu_device(0) if 1: set_global_preferences(usecodegenthreshold=False) N = 10000 nvar = 100 delta = 4 * ms doplot = True eqs = ''' dV/dt = (-V+R*I)/tau : volt Vt : volt tau : second R : ohm I : amp ''' eqs += '\n'.join(['var'+str(i)+':0' for i in range(nvar)]) eqs = Equations(eqs) Vr = 0.0 * volt Vt = 1.0 * volt I = loadtxt('../../../dev/ideas/cuda/modelfitting/current.txt') spiketimes = loadtxt('../../../dev/ideas/cuda/modelfitting/spikes.txt') spiketimes -= int(min(spiketimes)) I_offset = zeros(N, dtype=int) spiketimes_offset = zeros(N, dtype=int) G = NeuronGroup(N, eqs, reset='V=0*volt; Vt+=1*volt', threshold='V>1*volt') G.R = rand(N) * 2e9 + 1e9 G.tau = rand(N) * 49 * ms + 1 * ms spikedelays = rand(N) * 5 * ms duration = len(I) * G.clock.dt spiketimes = hstack((-1, spiketimes, float(duration) + 1)) mf = GPUModelFitting(G, eqs, I, I_offset, spiketimes, spiketimes_offset, spikedelays, 0*ms, delta, coincidence_count_algorithm='exclusive') mf.reinit_vars(I, I_offset, spiketimes, spiketimes_offset, spikedelays, 0*ms) print mf.kernel_src start_time = time.time() mf.launch(duration) running_time = time.time() - start_time print 'N:', N print 'Duration:', duration print 'Total running time:', running_time if doplot: spikecount = mf.spike_count num_coincidences = mf.coincidence_count R = G.R tau = G.tau print 'Spike count varies between', spikecount.min(), 'and', spikecount.max() print 'Num coincidences varies between', num_coincidences.min(), 'and', num_coincidences.max() subplot(221) scatter(R, tau, color=jet(spikecount / float(spikecount.max()))) xlabel('R') ylabel('tau') title('Spike count, max = ' + str(spikecount.max())) axis('tight') subplot(222) scatter(R, tau, color=jet(num_coincidences / float(num_coincidences.max()))) xlabel('R') ylabel('tau') title('Num coincidences, max = ' + str(num_coincidences.max())) axis('tight') spikecount -= num_coincidences num_coincidences -= spikecount num_coincidences[num_coincidences < 0] = 0 maxcoinc = num_coincidences.max() num_coincidences = (1. 
* num_coincidences) / maxcoinc subplot(212) scatter(R, tau, color=jet(num_coincidences)) xlabel('R') ylabel('tau') title('Hot = ' + str(maxcoinc) + ' excess coincidences, cool = 0 or less') axis('tight') show() if 0: N = 10000 eqs = Equations(''' dV/dt = (-V+R*I)/tau : volt dVt/dt = -(V-1*volt)/tau_t : volt R : ohm tau : second tau_t : second Vt_delta : volt I : amp ''') threshold = 'V>Vt' reset = ''' V = 0*volt Vt += Vt_delta ''' delta = 4 * ms src, declarations_seq = generate_modelfitting_kernel_src(eqs, threshold, reset, defaultclock.dt, N, delta) print src print print declarations_seq if 0: # test traces #clk = RegularClock(makedefaultclock=True) clk = defaultclock N = 1 duration = 200 * ms delta = 4 * ms Ntarg = 20 randspikes = hstack(([-1 * second], sort(rand(Ntarg) * duration * .9 + duration * 0.05), [duration + 1 * second])) #randspikes = sort(unique(array(randspikes/(2*delta), dtype=int)))*2*delta randspikes = sort(unique(array(randspikes / clk.dt, dtype=int))) * clk.dt + 1e-10 eqs = Equations(''' dV/dt = (-V+I)/(10*ms) : 1 I : 1 ''') threshold = 'V>1' reset = 'V=0' G = NeuronGroup(N, eqs, threshold=threshold, reset=reset, method='Euler', clock=clk) #from brian.experimental.ccodegen import * #su = AutoCompiledNonlinearStateUpdater(eqs, G.clock, freeze=True) #G._state_updater = su #I = 1.1*ones(int(duration/defaultclock.dt)) I = 5.0 * rand(int(duration / defaultclock.dt)) #I = hstack((zeros(100), 10*ones(int(duration/defaultclock.dt)))) #I = hstack((zeros(100), 10*ones(100))*(int(duration/defaultclock.dt)/200)) #I = hstack((zeros(100), 10*exp(-linspace(0,2,100)))*(int(duration/defaultclock.dt)/200)) #G.I = TimedArray(hstack((0, I))) #G.I = TimedArray(I[1:], clock=clk) G.I = TimedArray(I, clock=clk) M = StateMonitor(G, 'V', record=True, when='end', clock=clk) MS = SpikeMonitor(G) cc_ex = CoincidenceCounter(source=G, data=randspikes, delta=delta, coincidence_count_algorithm='exclusive') cc_in = CoincidenceCounter(source=G, data=randspikes, delta=delta, coincidence_count_algorithm='inclusive') # cc2 = CoincidenceCounter(source=G, data=randspikes[1:-1], delta=delta) run(duration) #spiketimes = array([-1*second, duration+1*second]) #spiketimes_offset = zeros(N, dtype=int) #spiketimes = [-1*second]+MS[0]+[duration+1*second] spiketimes = randspikes spiketimes_offset = zeros(N, dtype=int) #I = array([0]+I) I_offset = zeros(N, dtype=int) spikedelays = zeros(N) reinit_default_clock() G.V = 0 mf = GPUModelFitting(G, eqs, I, I_offset, spiketimes, spiketimes_offset, spikedelays, delta, coincidence_count_algorithm='exclusive') allV = [] oldnc = 0 oldsc = 0 allcoinc = [] all_pst = [] all_nst = [] allspike = [] all_nsa = [] all_lsa = [] if 0: for i in xrange(len(M.times)): mf.kernel_func(int32(i), int32(i + 1), *mf.kernel_func_args, **mf.kernel_func_kwds) #autoinit.context.synchronize() pycuda.context.synchronize() allV.append(mf.state_vars['V'].get()) all_pst.append(mf.spiketimes.get()[mf.spiketime_indices.get()]) all_nst.append(mf.spiketimes.get()[mf.spiketime_indices.get() + 1]) all_nsa.append(mf.next_spike_allowed_arr.get()[0]) all_lsa.append(mf.last_spike_allowed_arr.get()[0]) # self.next_spike_allowed_arr = gpuarray.to_gpu(ones(N, dtype=bool)) # self.last_spike_allowed_arr = gpuarray.to_gpu(zeros(N, dtype=bool)) nc = mf.coincidence_count[0] if nc > oldnc: oldnc = nc allcoinc.append(i * clk.dt) sc = mf.spike_count[0] if sc > oldsc: oldsc = sc allspike.append(i * clk.dt) else: mf.launch(duration, stepsize=None) print 'Num target spikes:', len(randspikes) - 2 print 'Predicted spike counts:', 
MS.nspikes, mf.spike_count[0] print 'Coincidences:' print 'GPU', mf.coincidence_count # print 'CPU bis inc', cc_in.coincidences print 'CPU bis exc', cc_ex.coincidences # print 'CPU', cc2.coincidences # for t in randspikes[1:-1]: # plot([t*second-delta, t*second+delta], [0, 0], lw=5, color=(.9,.9,.9)) # plot(randspikes[1:-1], zeros(len(randspikes)-2), '+', ms=15) # plot(M.times, M[0]) if len(allV): # plot(M.times, allV) # plot(allcoinc, zeros(len(allcoinc)), 'o') # figure() plot(M.times, array(all_pst) * clk.dt) plot(M.times, array(all_nst) * clk.dt) plot(randspikes[1:-1], randspikes[1:-1], 'o') plot(allspike, allspike, 'x') plot(allcoinc, allcoinc, '+') plot(M.times, array(all_nsa) * M.times, '--') plot(M.times, array(all_lsa) * M.times, '-.') predicted_spikes = allspike target_spikes = [t * second for t in randspikes] i = 0 truecoinc = [] for pred_t in predicted_spikes: while target_spikes[i] < pred_t + delta: if abs(target_spikes[i] - pred_t) < delta: truecoinc.append((pred_t, target_spikes[i])) i += 1 break i += 1 print 'Truecoinc:', len(truecoinc) for t1, t2 in truecoinc: plot([t1, t2], [t1, t2], ':', color=(0.5, 0, 0), lw=3) # show() brian-1.3.1/brian/library/modelfitting/gputools.py000066400000000000000000000027541167451777000223100ustar00rootroot00000000000000#from ..log import * import atexit __all__ = ['initialise_cuda', 'set_gpu_device', 'close_cuda'] try: import pycuda import pycuda.driver as drv pycuda.context = None pycuda.isinitialised = False MAXGPU = 0 # drv.Device.count() # def initialise_cuda(): global MAXGPU if not pycuda.isinitialised: drv.init() pycuda.isinitialised = True MAXGPU = drv.Device.count() def set_gpu_device(n): """ This function makes pycuda use GPU number n in the system. """ initialise_cuda() #log_debug('brian.hears', "Setting PyCUDA context number %d" % n) try: pycuda.context.detach() except: pass pycuda.context = drv.Device(n).make_context() def close_cuda(): """ Closes the current context. MUST be called at the end of the script. 
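        A typical calling pattern is sketched below (device ``0`` selects the
        first GPU; the surrounding simulation code is elided)::

            from brian.library.modelfitting.gputools import (initialise_cuda,
                set_gpu_device, close_cuda)
            initialise_cuda()
            set_gpu_device(0)
            # ... build and launch GPU kernels ...
            close_cuda()   # also registered with atexit below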
""" if pycuda.context is not None: #log_debug('brian.hears', "Closing current PyCUDA context") try: pycuda.context.pop() pycuda.context = None except: pass atexit.register(close_cuda) except: MAXGPU = 0 def initialise_cuda(): pass def set_gpu_device(n): raise Exception("PyCUDA not available") pass def close_cuda(): pass #if __name__ == '__main__': # close_cuda()brian-1.3.1/brian/library/modelfitting/modelfitting.py000066400000000000000000000615611167451777000231220ustar00rootroot00000000000000#from brian import Equations, NeuronGroup, Clock, CoincidenceCounter, Network, zeros, array, \ # ones, kron, ms, second, concatenate, hstack, sort, nonzero, diff, TimedArray, \ # reshape, sum, log from brian import * from brian.tools.statistics import firing_rate, get_gamma_factor try: from playdoh import * except Exception, e: print e raise ImportError("Playdoh must be installed (https://code.google.com/p/playdoh/)") try: import pycuda from brian.library.modelfitting.gpu_modelfitting import GPUModelFitting can_use_gpu = True except ImportError: can_use_gpu = False from brian.experimental.codegen.integration_schemes import * import sys, cPickle __all__ = ['modelfitting', 'print_table', 'get_spikes', 'predict', 'PSO', 'GA','CMAES', 'MAXCPU', 'MAXGPU', 'debug_level', 'info_level', 'warning_level', 'open_server'] class ModelFitting(Fitness): def initialize(self, **kwds): self.use_gpu = self.unit_type=='GPU' # Gets the key,value pairs in shared_data for key, val in self.shared_data.iteritems(): setattr(self, key, val) # Gets the key,value pairs in **kwds for key, val in kwds.iteritems(): setattr(self, key, val) # log_info(self.model) self.model = cPickle.loads(self.model) # log_info(self.model) # if model is a string if type(self.model) is str: self.model = Equations(self.model) self.total_steps = int(self.duration / self.dt) self.neurons = self.nodesize self.groups = self.groups # Time slicing self.input = self.input[0:self.slices * (len(self.input) / self.slices)] # makes sure that len(input) is a multiple of slices self.duration = len(self.input) * self.dt # duration of the input self.sliced_steps = len(self.input) / self.slices # timesteps per slice self.overlap_steps = int(self.overlap / self.dt) # timesteps during the overlap self.total_steps = self.sliced_steps + self.overlap_steps # total number of timesteps self.sliced_duration = self.overlap + self.duration / self.slices # duration of the vectorized simulation self.N = self.neurons * self.slices # TOTAL number of neurons in this worker self.input = hstack((zeros(self.overlap_steps), self.input)) # add zeros at the beginning because there is no overlap from the previous slice # Prepares data (generates I_offset, spiketimes, spiketimes_offset) self.prepare_data() # Add 'refractory' parameter on the CPU on the CPU only if not self.use_gpu: if self.max_refractory is not None: refractory = 'refractory' self.model.add_param('refractory', second) else: refractory = self.refractory else: if self.max_refractory is not None: refractory = 0*ms else: refractory = self.refractory # Must recompile the Equations : the functions are not transfered after pickling/unpickling self.model.compile_functions() self.group = NeuronGroup(self.N, model=self.model, reset=self.reset, threshold=self.threshold, refractory=refractory, max_refractory = self.max_refractory, method = self.method, clock=Clock(dt=self.dt)) if self.initial_values is not None: for param, value in self.initial_values.iteritems(): self.group.state(param)[:] = value # Injects current in consecutive 
subgroups, where I_offset have the same value # on successive intervals k = -1 for i in hstack((nonzero(diff(self.I_offset))[0], len(self.I_offset) - 1)): I_offset_subgroup_value = self.I_offset[i] I_offset_subgroup_length = i - k sliced_subgroup = self.group.subgroup(I_offset_subgroup_length) input_sliced_values = self.input[I_offset_subgroup_value:I_offset_subgroup_value + self.total_steps] sliced_subgroup.set_var_by_array(self.input_var, TimedArray(input_sliced_values, clock=self.group.clock)) k = i if self.use_gpu: # Select integration scheme according to method if self.method == 'Euler': scheme = euler_scheme elif self.method == 'RK': scheme = rk2_scheme elif self.method == 'exponential_Euler': scheme = exp_euler_scheme else: raise Exception("The numerical integration method is not valid") self.mf = GPUModelFitting(self.group, self.model, self.input, self.I_offset, self.spiketimes, self.spiketimes_offset, zeros(self.neurons), 0*ms, self.delta, precision=self.precision, scheme=scheme) else: self.cc = CoincidenceCounter(self.group, self.spiketimes, self.spiketimes_offset, onset=self.onset, delta=self.delta) def prepare_data(self): """ Generates I_offset, spiketimes, spiketimes_offset from data, and also target_length and target_rates. The neurons are first grouped by time slice : there are group_size*group_count per group/time slice Within each time slice, the neurons are grouped by target train : there are group_size neurons per group/target train """ # Generates I_offset self.I_offset = zeros(self.N, dtype=int) for slice in range(self.slices): self.I_offset[self.neurons * slice:self.neurons * (slice + 1)] = self.sliced_steps * slice # Generates spiketimes, spiketimes_offset, target_length, target_rates i, t = zip(*self.data) i = array(i) t = array(t) alls = [] n = 0 pointers = [] dt = self.dt target_length = [] target_rates = [] model_target = [] group_index = 0 neurons_in_group = self.subpopsize for j in xrange(self.groups): # neurons_in_group = self.groups[j] # number of neurons in the current group and current worker s = sort(t[i == j]) target_length.extend([len(s)] * neurons_in_group) target_rates.extend([firing_rate(s)] * neurons_in_group) for k in xrange(self.slices): # first sliced group : 0...0, second_train...second_train, ... # second sliced group : first_train_second_slice...first_train_second_slice, second_train_second_slice... 
spikeindices = (s >= k * self.sliced_steps * dt) & (s < (k + 1) * self.sliced_steps * dt) # spikes targeted by sliced neuron number k, for target j targeted_spikes = s[spikeindices] - k * self.sliced_steps * dt + self.overlap_steps * dt # targeted spikes in the "local clock" for sliced neuron k targeted_spikes = hstack((-1 * second, targeted_spikes, self.sliced_duration + 1 * second)) model_target.extend([k + group_index * self.slices] * neurons_in_group) alls.append(targeted_spikes) pointers.append(n) n += len(targeted_spikes) group_index += 1 pointers = array(pointers, dtype=int) model_target = array(hstack(model_target), dtype=int) self.spiketimes = hstack(alls) self.spiketimes_offset = pointers[model_target] # [pointers[i] for i in model_target] # Duplicates each target_length value 'group_size' times so that target_length[i] # is the length of the train targeted by neuron i self.target_length = array(target_length, dtype=int) self.target_rates = array(target_rates) def evaluate(self, **param_values): """ Use fitparams['delays'] to take delays into account Use fitparams['refractory'] to take refractory into account """ delays = param_values.pop('delays', zeros(self.neurons)) refractory = param_values.pop('refractory', zeros(self.neurons)) # kron spike delays delays = kron(delays, ones(self.slices)) refractory = kron(refractory, ones(self.slices)) # Sets the parameter values in the NeuronGroup object self.group.reinit() for param, value in param_values.iteritems(): self.group.state(param)[:] = kron(value, ones(self.slices)) # kron param_values if slicing # Reinitializes the model variables if self.initial_values is not None: for param, value in self.initial_values.iteritems(): self.group.state(param)[:] = value if self.use_gpu: # Reinitializes the simulation object self.mf.reinit_vars(self.input, self.I_offset, self.spiketimes, self.spiketimes_offset, delays, refractory) # LAUNCHES the simulation on the GPU self.mf.launch(self.duration, self.stepsize) coincidence_count = self.mf.coincidence_count spike_count = self.mf.spike_count else: # set the refractory period if self.max_refractory is not None: self.group.refractory = refractory self.cc = CoincidenceCounter(self.group, self.spiketimes, self.spiketimes_offset, onset=self.onset, delta=self.delta) # Sets the spike delay values self.cc.spikedelays = delays # Reinitializes the simulation objects self.group.clock.reinit() # self.cc.reinit() net = Network(self.group, self.cc) # LAUNCHES the simulation on the CPU net.run(self.duration) coincidence_count = self.cc.coincidences spike_count = self.cc.model_length coincidence_count = sum(reshape(coincidence_count, (self.slices, -1)), axis=0) spike_count = sum(reshape(spike_count, (self.slices, -1)), axis=0) gamma = get_gamma_factor(coincidence_count, spike_count, self.target_length, self.target_rates, self.delta) return gamma def modelfitting(model=None, reset=None, threshold=None, refractory=0*ms, data=None, input_var='I', input=None, dt=None, popsize=1000, maxiter=10, delta=4*ms, slices=1, overlap=0*second, initial_values=None, stepsize=100 * ms, unit_type=None, total_units=None, cpu=None, gpu=None, precision='double', # set to 'float' or 'double' to specify single or double precision on the GPU machines=[], allocation=None, returninfo=False, scaling=None, algorithm=CMAES, async = None, optparams={}, method='Euler', **params): """ Model fitting function. Fits a spiking neuron model to electrophysiological data (injected current and spikes). 
See also the section :ref:`model-fitting-library` in the user manual. **Arguments** ``model`` An :class:`~brian.Equations` object containing the equations defining the model. ``reset`` A reset value for the membrane potential, or a string containing the reset equations. ``threshold`` A threshold value for the membrane potential, or a string containing the threshold equations. ``refractory`` The refractory period in second. If it's a single value, the same refractory will be used in all the simulations. If it's a list or a tuple, the fitting will also optimize the refractory period (see ``**params`` below). Warning: when using a refractory period, you can't use a custom reset, only a fixed one. ``data`` A list of spike times, or a list of several spike trains as a list of pairs (index, spike time) if the fit must be performed in parallel over several target spike trains. In this case, the modelfitting function returns as many parameters sets as target spike trains. ``input_var='I'`` The variable name used in the equations for the input current. ``input`` A vector of values containing the time-varying signal the neuron responds to (generally an injected current). ``dt`` The time step of the input (the inverse of the sampling frequency). ``**params`` The list of parameters to fit the model with. Each parameter must be set as follows: ``param_name=[bound_min, min, max, bound_max]`` where ``bound_min`` and ``bound_max`` are the boundaries, and ``min`` and ``max`` specify the interval from which the parameter values are uniformly sampled at the beginning of the optimization algorithm. If not using boundaries, set ``param_name=[min, max]``. Also, you can add a fit parameter which is a spike delay for all spikes : add the special parameter ``delays`` in ``**params``, for example ``modelfitting(..., delays=[-10*ms, 10*ms])``. You can also add fit the refractory period by specifying ``modelfitting(..., refractory=[-10*ms, 10*ms])``. ``popsize`` Size of the population (number of particles) per target train used by the optimization algorithm. ``maxiter`` Number of iterations in the optimization algorithm. ``optparams`` Optimization algorithm parameters. It is a dictionary: keys are parameter names, values are parameter values or lists of parameters (one value per group). This argument is specific to the optimization algorithm used. See :class:`PSO`, :class:`GA`, :class:`CMAES`. ``delta=4*ms`` The precision factor delta (a scalar value in second). ``slices=1`` The number of time slices to use. ``overlap=0*ms`` When using several time slices, the overlap between consecutive slices, in seconds. ``initial_values`` A dictionary containing the initial values for the state variables. ``cpu`` The number of CPUs to use in parallel. It is set to the number of CPUs in the machine by default. ``gpu`` The number of GPUs to use in parallel. It is set to the number of GPUs in the machine by default. ``precision`` GPU only: a string set to either ``float`` or ``double`` to specify whether to use single or double precision on the GPU. If it is not specified, it will use the best precision available. ``returninfo=False`` Boolean indicating whether the modelfitting function should return technical information about the optimization. ``scaling=None`` Specify the scaling used for the parameters during the optimization. It can be ``None`` or ``'mapminmax'``. It is ``None`` by default (no scaling), and ``mapminmax`` by default for the CMAES algorithm. ``algorithm=CMAES`` The optimization algorithm. 
It can be :class:`PSO`, :class:`GA` or :class:`CMAES`. ``optparams={}`` Optimization parameters. See ``method='Euler'`` Integration scheme used on the CPU and GPU: ``'Euler'`` (default), ``RK``, or ``exponential_Euler``. See also :ref:`numerical-integration`. ``machines=[]`` A list of machine names to use in parallel. See :ref:`modelfitting-clusters`. **Return values** Return an :class:`OptimizationResult` object with the following attributes: ``best_pos`` Minimizing position found by the algorithm. For array-like fitness functions, it is a single vector if there is one group, or a list of vectors. For keyword-like fitness functions, it is a dictionary where keys are parameter names and values are numeric values. If there are several groups, it is a list of dictionaries. ``best_fit`` The value of the fitness function for the best positions. It is a single value if there is one group, or it is a list if there are several groups. ``info`` A dictionary containing various information about the optimization. Also, the following syntax is possible with an ``OptimizationResult`` instance ``or``. The ``key`` is either an optimizing parameter name for keyword-like fitness functions, or a dimension index for array-like fitness functions. ``or[key]`` it is the best ``key`` parameter found (single value), or the list of the best parameters ``key`` found for all groups. ``or[i]`` where ``i`` is a group index. This object has attributes ``best_pos``, ``best_fit``, ``info`` but only for group ``i``. ``or[i][key]`` where ``i`` is a group index, is the same as ``or[i].best_pos[key]``. For more details on the gamma factor, see `Jolivet et al. 2008, "A benchmark test for a quantitative assessment of simple neuron models", J. Neurosci. Methods `__ (available in PDF `here `__). 
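    **Example usage** (a sketch: ``current.txt`` and ``spikes.txt`` are assumed
    to contain a recorded injected current and the corresponding spike times)::

        from brian import loadtxt, ms, Equations
        from brian.library.modelfitting import modelfitting, print_table

        equations = Equations('''
            dV/dt = (R*I - V)/tau : 1
            I : 1
            R : 1
            tau : second
        ''')
        input = loadtxt('current.txt')
        spikes = loadtxt('spikes.txt')
        results = modelfitting(model=equations, reset=0, threshold=1,
                               data=spikes, input=input, dt=.1*ms,
                               popsize=1000, maxiter=3, delta=4*ms,
                               R=[1.0e9, 9.0e9], tau=[10*ms, 40*ms],
                               refractory=[0*ms, 10*ms])
        print_table(results)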
""" for param in params.keys(): if (param not in model._diffeq_names) and (param != 'delays'): raise Exception("Parameter %s must be defined as a parameter in the model" % param) # Make sure that 'data' is a N*2-array data = array(data) if data.ndim == 1: data = concatenate((zeros((len(data), 1)), data.reshape((-1, 1))), axis=1) # dt must be set if dt is None: raise Exception('dt (sampling frequency of the input) must be set') # default overlap when no time slicing if slices == 1: overlap = 0 * ms # default allocation if cpu is None and gpu is None and unit_type is None: if CANUSEGPU: unit_type = 'GPU' else: unit_type = 'CPU' # check numerical integration method if (gpu>0 or unit_type == 'GPU') and method not in ['Euler', 'RK', 'exponential_Euler']: raise Exception("The method can only be 'Euler', 'RK', or 'exponential_Euler' when using the GPU") if method not in ['Euler', 'RK', 'exponential_Euler', 'linear', 'nonlinear']: raise Exception("The method can only be 'Euler', 'RK', 'exponential_Euler', 'linear', or 'nonlinear'") if (algorithm == CMAES) & (scaling is None): scaling = 'mapminmax' # determines whether optimization over refractoriness or not if type(refractory) is tuple or type(refractory) is list: params['refractory'] = refractory max_refractory = refractory[-1] # refractory = 'refractory' else: max_refractory = None # common values # group_size = particles # Number of particles per target train groups = int(array(data)[:, 0].max() + 1) # number of target trains # N = group_size * group_count # number of neurons duration = len(input) * dt # duration of the input # keyword arguments for Modelfitting initialize kwds = dict( model=cPickle.dumps(model), threshold=threshold, reset=reset, refractory=refractory, max_refractory=max_refractory, input_var=input_var,dt=dt, duration=duration,delta=delta, slices=slices, overlap=overlap, returninfo=returninfo, precision=precision, stepsize=stepsize, method=method, onset=0 * ms) shared_data = dict(input=input, data=data, initial_values=initial_values) if async: r = maximize_async( ModelFitting, shared_data=shared_data, kwds = kwds, groups=groups, popsize=popsize, maxiter=maxiter, optparams=optparams, unit_type = unit_type, total_units = total_units, machines=machines, allocation=allocation, cpu=cpu, gpu=gpu, returninfo=returninfo, codedependencies=[], algorithm=algorithm, scaling=scaling, **params) else: r = maximize( ModelFitting, shared_data=shared_data, kwds = kwds, groups=groups, popsize=popsize, maxiter=maxiter, optparams=optparams, unit_type = unit_type, total_units = total_units, machines=machines, allocation=allocation, cpu=cpu, gpu=gpu, returninfo=returninfo, codedependencies=[], algorithm=algorithm, scaling=scaling, **params) return r def get_spikes(model=None, reset=None, threshold=None, input=None, input_var='I', dt=None, **params): """ Retrieves the spike times corresponding to the best parameters found by the modelfitting function. **Arguments** ``model``, ``reset``, ``threshold``, ``input``, ``input_var``, ``dt`` Same parameters as for the ``modelfitting`` function. ``**params`` The best parameters returned by the ``modelfitting`` function. **Returns** ``spiketimes`` The spike times of the model with the given input and parameters. 
""" duration = len(input) * dt ngroups = len(params[params.keys()[0]]) group = NeuronGroup(N=ngroups, model=model, reset=reset, threshold=threshold, clock=Clock(dt=dt)) group.set_var_by_array(input_var, TimedArray(input, clock=group.clock)) for param, values in params.iteritems(): if (param == 'delays') | (param == 'fitness'): continue group.state(param)[:] = values M = SpikeMonitor(group) net = Network(group, M) net.run(duration) reinit_default_clock() return M.spikes def predict(model=None, reset=None, threshold=None, data=None, delta=4 * ms, input=None, input_var='I', dt=None, **params): """ Predicts the gamma factor of a fitted model with respect to the data with a different input current. **Arguments** ``model``, ``reset``, ``threshold``, ``input_var``, ``dt`` Same parameters as for the ``modelfitting`` function. ``input`` The input current, that can be different from the current used for the fitting procedure. ``data`` The experimental spike times to compute the gamma factor against. They have been obtained with the current ``input``. ``**params`` The best parameters returned by the ``modelfitting`` function. **Returns** ``gamma`` The gamma factor of the model spike trains against the data. If there were several groups in the fitting procedure, it is a vector containing the gamma factor for each group. """ spikes = get_spikes(model=model, reset=reset, threshold=threshold, input=input, input_var=input_var, dt=dt, **params) ngroups = len(params[params.keys()[0]]) gamma = zeros(ngroups) for i in xrange(ngroups): spk = [t for j, t in spikes if j == i] gamma[i] = gamma_factor(spk, data, delta, normalize=True, dt=dt) if len(gamma) == 1: return gamma[0] else: return gamma brian-1.3.1/brian/library/random_processes.py000066400000000000000000000031401167451777000213030ustar00rootroot00000000000000''' Random processes for Brian. ''' from brian.equations import Equations from brian.units import get_unit from scipy.signal import lfilter from numpy import exp, sqrt from numpy.random import randn from brian.stdunits import ms __all__ = ['OrnsteinUhlenbeck', 'white_noise', 'colored_noise'] def OrnsteinUhlenbeck(x, mu, sigma, tau): ''' An Ornstein-Uhlenbeck process. mu = mean sigma = standard deviation tau = time constant x = name of the variable Returns an Equations() object ''' return Equations('dx/dt=(mu-x)*invtau+sigma*((2.*invtau)**.5)*xi : unit', \ x=x, mu=mu, sigma=sigma, invtau=1. / tau, unit=get_unit(mu)) def white_noise(dt, duration): n = int(duration/dt) noise = randn(n) return noise def colored_noise(tau, dt, duration): noise = white_noise(dt, duration) a = [1., -exp(-dt/tau)] b = [1.] fnoise = sqrt(2*dt/tau)*lfilter(b, a, noise) return fnoise if __name__ == '__main__': tau = 10*ms dt = .1*ms duration = 1000*ms noise = white_noise(dt, duration) cnoise = colored_noise(tau, dt, duration) from numpy import linspace t = linspace(0*ms, duration, len(noise)) from pylab import acorr, show, subplot, plot, ylabel, xlim subplot(221) plot(t, noise) ylabel('white noise') subplot(222) acorr(noise, maxlags=200) xlim(-200,200) subplot(223) plot(t, cnoise) ylabel('colored noise') subplot(224) acorr(cnoise, maxlags=200) xlim(-200,200) show()brian-1.3.1/brian/library/synapses.py000066400000000000000000000117051167451777000176100ustar00rootroot00000000000000''' Synapse models for Brian (no plasticity here). 
''' from brian.equations import * from brian.units import check_units, second, amp, siemens from brian.membrane_equations import Current __all__ = ['exp_current', 'alpha_current', 'biexp_current', \ 'exp_conductance', 'alpha_conductance', 'biexp_conductance', \ 'exp_synapse', 'alpha_synapse', 'biexp_synapse'] __credits__ = dict(author='Romain Brette (brette@di.ens.fr)', date='April 2008') # ----------------- # Synaptic currents # ----------------- @check_units(tau=second) def exp_current(input, tau, current_name=None, unit=amp): ''' Exponential synaptic current. input = name of input variable (where presynaptic spikes act). current_name = name of current variable ''' current_name = current_name or unique_id() current = Current() + exp_synapse(input, tau, unit, current_name) current.set_current_name(current_name) return current @check_units(tau=second) def alpha_current(input, tau, current_name=None, unit=amp): ''' Alpha synaptic current. current_name = name of current variable ''' current_name = current_name or unique_id() current = Current() + alpha_synapse(input, tau, unit, current_name) current.set_current_name(current_name) return current @check_units(tau1=second, tau2=second) def biexp_current(input, tau1, tau2, current_name=None, unit=amp): ''' Biexponential synaptic current. current_name = name of current variable ''' current_name = current_name or unique_id() current = Current() + biexp_synapse(input, tau1, tau2, unit, current_name) current.set_current_name(current_name) return current # --------------------- # Synaptic conductances # --------------------- @check_units(tau=second) def exp_conductance(input, E, tau, conductance_name=None, unit=siemens): ''' Exponential synaptic conductance. conductance_name = name of conductance variable E = synaptic reversal potential ''' conductance_name = conductance_name or unique_id() return Current('I=g*(E-vm): amp', I=input + '_current', g=conductance_name, E=E) + \ exp_synapse(input, tau, unit, conductance_name) @check_units(tau=second) def alpha_conductance(input, E, tau, conductance_name=None, unit=siemens): ''' Alpha synaptic conductance. conductance_name = name of conductance variable E = synaptic reversal potential ''' conductance_name = conductance_name or unique_id() return Current('I=g*(E-vm): amp', I=input + '_current', g=conductance_name, E=E) + \ alpha_synapse(input, tau, unit, conductance_name) @check_units(tau1=second, tau2=second) def biexp_conductance(input, E, tau1, tau2, conductance_name=None, unit=siemens): ''' Exponential synaptic conductance. conductance_name = name of conductance variable E = synaptic reversal potential ''' conductance_name = conductance_name or unique_id() return Current('I=g*(E-vm): amp', I=input + '_current', g=conductance_name, E=E) + \ biexp_synapse(input, tau1, tau2, unit, conductance_name) # --------------- # Synaptic inputs # --------------- @check_units(tau=second) def exp_synapse(input, tau, unit, output=None): ''' Exponentially decaying synaptic current/conductance: g(t)=exp(-t/tau) output = output variable name (plugged into the membrane equation). input = input variable name (where spikes are received). ''' if output is None: output = input + '_out' return Equations(''' dx/dt = -x*invtau : unit y=x''', x=output, y=input, unit=unit, invtau=1. / tau) @check_units(tau=second) def alpha_synapse(input, tau, unit, output=None): ''' Alpha synaptic current/conductance: g(t)=(t/tau)*exp(1-t/tau) output = output variable name (plugged into the membrane equation). 
input = input variable name (where spikes are received). The peak is 1 at time t=tau. ''' if output is None: output = input + '_out' return Equations(''' dx/dt = (y-x)*invtau : unit dy/dt = -y*invtau : unit ''', x=output, y=input, unit=unit, invtau=1. / tau) @check_units(tau1=second, tau2=second) def biexp_synapse(input, tau1, tau2, unit, output=None): ''' Biexponential synaptic current/conductance: g(t)=(tau2/(tau2-tau1))*(exp(-t/tau1)-exp(-t/tau2)) output = output variable name (plugged into the membrane equation). input = input variable name (where spikes are received). The peak is 1 at time t=tau1*tau2/(tau2-tau1)*log(tau2/tau1) ''' if output is None: output = input + '_out' invpeak = (tau2 / tau1) ** (tau1 / (tau2 - tau1)) return Equations(''' dx/dt = (invpeak*y-x)*invtau1 : unit dy/dt = -y*invtau2 : unit ''', x=output, y=input, unit=unit, invtau1=1. / tau1, invtau2=1. / tau2, invpeak=invpeak) brian-1.3.1/brian/license.txt000066400000000000000000000034341167451777000161100ustar00rootroot00000000000000---------------------------------------------------------------------------------- Copyright ENS, INRIA, CNRS Contributors: Romain Brette (romain.brette@ens.fr) and Dan Goodman (dan.goodman@ens.fr) Brian is a computer program whose purpose is to simulate models of biological neural networks. This software is governed by the CeCILL license under French law and abiding by the rules of distribution of free software. You can use, modify and/ or redistribute the software under the terms of the CeCILL license as circulated by CEA, CNRS and INRIA at the following URL "http://www.cecill.info". As a counterpart to the access to the source code and rights to copy, modify and redistribute granted by the license, users are provided only with a limited warranty and the software's author, the holder of the economic rights, and the successive licensors have only limited liability. In this respect, the user's attention is drawn to the risks associated with loading, using, modifying and/or developing or reproducing the software by the user in light of its specific status of free software, that may mean that it is complicated to manipulate, and that also therefore means that it is reserved for developers and experienced professionals having in-depth computer knowledge. Users are therefore encouraged to load and test the software's suitability as regards their requirements in conditions enabling the security of their systems and/or data to be ensured and, more generally, to use and operate it in the same conditions as regards security. The fact that you are presently reading this means that you have had knowledge of the CeCILL license and that you accept its terms. ---------------------------------------------------------------------------------- brian-1.3.1/brian/log.py000066400000000000000000000071631167451777000150630ustar00rootroot00000000000000# ---------------------------------------------------------------------------------- # Copyright ENS, INRIA, CNRS # Contributors: Romain Brette (brette@di.ens.fr) and Dan Goodman (goodman@di.ens.fr) # # Brian is a computer program whose purpose is to simulate models # of biological neural networks. # # This software is governed by the CeCILL license under French law and # abiding by the rules of distribution of free software. You can use, # modify and/ or redistribute the software under the terms of the CeCILL # license as circulated by CEA, CNRS and INRIA at the following URL # "http://www.cecill.info". 
# # As a counterpart to the access to the source code and rights to copy, # modify and redistribute granted by the license, users are provided only # with a limited warranty and the software's author, the holder of the # economic rights, and the successive licensors have only limited # liability. # # In this respect, the user's attention is drawn to the risks associated # with loading, using, modifying and/or developing or reproducing the # software by the user in light of its specific status of free software, # that may mean that it is complicated to manipulate, and that also # therefore means that it is reserved for developers and experienced # professionals having in-depth computer knowledge. Users are therefore # encouraged to load and test the software's suitability as regards their # requirements in conditions enabling the security of their systems and/or # data to be ensured and, more generally, to use and operate it in the # same conditions as regards security. # # The fact that you are presently reading this means that you have had # knowledge of the CeCILL license and that you accept its terms. # ---------------------------------------------------------------------------------- # ''' Logging information for Brian ''' import logging import sys __all__ = [ 'log_warn', 'log_info', 'log_debug', 'log_level_error', 'log_level_warn', 'log_level_info', 'log_level_debug' ] console = logging.StreamHandler(sys.stderr) formatter = logging.Formatter('%(name)-18s: %(levelname)-8s %(message)s') console.setFormatter(formatter) logging.getLogger('brian').addHandler(console) #brian_loggers = {} get_log = logging.getLogger #def get_log(logname): # if logname in brian_loggers: # return brian_loggers[logname] # newlog = logging.getLogger(logname) # #newlog.addHandler(console) # brian_loggers[logname] = newlog # return newlog def log_warn(logname, message): get_log(logname).warn(message) def log_info(logname, message): get_log(logname).info(message) def log_debug(logname, message): get_log(logname).debug(message) def log_level_error(): '''Shows log messages only of level ERROR or higher. ''' logging.getLogger('brian').setLevel(logging.ERROR) def log_level_warn(): '''Shows log messages only of level WARNING or higher (including ERROR level). ''' logging.getLogger('brian').setLevel(logging.WARN) def log_level_info(): '''Shows log messages only of level INFO or higher (including WARNING and ERROR levels). ''' logging.getLogger('brian').setLevel(logging.INFO) def log_level_debug(): '''Shows log messages only of level DEBUG or higher (including INFO, WARNING and ERROR levels). ''' logging.getLogger('brian').setLevel(logging.DEBUG) if __name__ == '__main__': #log_level_info() log_warn('brian.fish', 'Warning') log_warn('brian.monkey', 'Ook') log_info('brian.moose', 'Mook') log_debug('brian.goat', 'Bah') brian-1.3.1/brian/magic.py000066400000000000000000000307261167451777000153630ustar00rootroot00000000000000# ---------------------------------------------------------------------------------- # Copyright ENS, INRIA, CNRS # Contributors: Romain Brette (brette@di.ens.fr) and Dan Goodman (goodman@di.ens.fr) # # Brian is a computer program whose purpose is to simulate models # of biological neural networks. # # This software is governed by the CeCILL license under French law and # abiding by the rules of distribution of free software. You can use, # modify and/ or redistribute the software under the terms of the CeCILL # license as circulated by CEA, CNRS and INRIA at the following URL # "http://www.cecill.info". 
# # As a counterpart to the access to the source code and rights to copy, # modify and redistribute granted by the license, users are provided only # with a limited warranty and the software's author, the holder of the # economic rights, and the successive licensors have only limited # liability. # # In this respect, the user's attention is drawn to the risks associated # with loading, using, modifying and/or developing or reproducing the # software by the user in light of its specific status of free software, # that may mean that it is complicated to manipulate, and that also # therefore means that it is reserved for developers and experienced # professionals having in-depth computer knowledge. Users are therefore # encouraged to load and test the software's suitability as regards their # requirements in conditions enabling the security of their systems and/or # data to be ensured and, more generally, to use and operate it in the # same conditions as regards security. # # The fact that you are presently reading this means that you have had # knowledge of the CeCILL license and that you accept its terms. # ---------------------------------------------------------------------------------- # """Magic function tools Use these functions to automagically find objects of a particular class. The way it works is that whenever a new object is created from a class derived from InstanceTracker, it is stored along with the 'frame' it was called from (loosely speaking, the function where the object is defined). When you call one of the functions below, it picks out all the objects of the required type in a frame a specified number of levels before the current one (e.g. in the frame of the calling function). Functions --------- get_instances(instancetype, level=1, all=False) This function finds all instances at the given level in the sequence of calling frames. If all is True then levels will not be used and all instances will be found. find_instances(instancetype, startlevel=1, all=False) This function searches the frames starting from the given startlevel until it finds at least one object of the required type, at which point it will return all objects of that type from that level If all is True then levels will not be used and all instances will be found. find_all_instances(instancetype,startlevel=1, all=False): This function searches the frames starting from the given startlevel, and returns all objects of the required type from all levels. Noe that includeglobals is set to False by default so as not to pick up multiple copies of objects If all is True then levels will not be used and all instances will be found. Variables: instancetype A class (must be derived from InstanceTracker) level The level in the sequence of calling frames. So, for a function f, calling with level=0 will find variables defined in that function f, whereas calling with level=1 will find variables defined in the function which called f. The latter is the default value because magic functions are usually used within Brian functions to find variables defined by the user. Return values: All the functions return a tuple (objects, names) where objects is the list of matching objects, and names is a list of strings giving the objects' names if they are defined. At the moment, the only name returned is the id of the object. Notes: These functions return each object at most once. Classes ------- InstanceTracker Derive your class from this one to automagically keep track of instances of it. 
If you want a subclass of a tracked class not to be tracked, define the method _track_instances to return False. """ __docformat__ = "restructuredtext en" from weakref import * from inspect import * from globalprefs import * __all__ = [ 'get_instances', 'find_instances', 'find_all_instances', 'magic_register', 'magic_return', 'InstanceTracker' ] class ExtendedRef(ref): """A weak reference which also defines an optional id """ def __init__(self, ob, callback=None, **annotations): super(ExtendedRef, self).__init__(ob, callback) self.__id = 0 def set_i_d(self, id): self.__id = id def get_i_d(self): return self.__id class WeakSet(set): """A set of extended references Removes references from the set when they are destroyed.""" def add(self, value, id=0): wr = ExtendedRef(value, self.remove) wr.set_i_d(id) set.add(self, wr) def set_i_d(self, value, id): for _ in self: if _() is value: _.set_i_d(id) return def get_i_d(self, value): for _ in self: if _() is value: return _.get_i_d() def get(self, id=None): if id is None: return [ _() for _ in self if _.get_i_d() != -1 ] else: return [ _() for _ in self if _.get_i_d() == id] class InstanceFollower(object): """Keep track of all instances of classes derived from InstanceTracker The variable __instancesets__ is a dictionary with keys which are class objects, and values which are WeakSets, so __instanceset__[cls] is a weak set tracking all of the instances of class cls (or a subclass). """ __instancesets__ = {} def add(self, value, id=0): for cls in value.__class__.__mro__: # MRO is the Method Resolution Order which contains all the superclasses of a class if cls not in self.__instancesets__: self.__instancesets__[cls] = WeakSet() self.__instancesets__[cls].add(value, id) def set_i_d(self, value, id): for cls in value.__class__.__mro__: # MRO is the Method Resolution Order which contains all the superclasses of a class if cls in self.__instancesets__: self.__instancesets__[cls].set_i_d(value, id) def get_i_d(self, value): for cls in value.__class__.__mro__: if cls in self.__instancesets__: return self.__instancesets__[cls].get_i_d(value) def get(self, cls, id=None): if not cls in self.__instancesets__: return [] return self.__instancesets__[cls].get(id) class InstanceTracker(object): """Base class for all classes whose instances are to be tracked Derive your class from this one to automagically keep track of instances of it. If you want a subclass of a tracked class not to be tracked, define the method _track_instances to return False. 
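    A minimal sketch (``Thing`` is a hypothetical tracked class)::

        from brian.magic import InstanceTracker, get_instances

        class Thing(InstanceTracker):
            pass

        def f():
            x = Thing()
            # level=0: look in this function's own frame
            objs, names = get_instances(Thing, level=0)
            return objs   # contains x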
""" __instancefollower__ = InstanceFollower() # static property of all objects of class derived from InstanceTracker @staticmethod def _track_instances(): return True def set_instance_id(self, idvalue=None, level=1): if idvalue is None: idvalue = id(getouterframes(currentframe())[level + 1][0]) self.__instancefollower__.set_i_d(self, idvalue) def get_instance_id(self): return self.__instancefollower__.get_i_d(self) def __new__(typ, *args, **kw): obj = object.__new__(typ)#, *args, **kw) outer_frame = id(getouterframes(currentframe())[1][0]) # the id is the id of the calling frame if obj._track_instances(): obj.__instancefollower__.add(obj, outer_frame) return obj def magic_register(*args, **kwds): '''Declare that a magically tracked object should be put in a particular frame **Standard usage** If ``A`` is a tracked class (derived from :class:`InstanceTracker`), then the following wouldn't work:: def f(): x = A('x') return x objs = f() print get_instances(A,0)[0] Instead you write:: def f(): x = A('x') magic_register(x) return x objs = f() print get_instances(A,0)[0] **Definition** Call as:: magic_register(...[,level=1]) The ``...`` can be any sequence of tracked objects or containers of tracked objects, and each tracked object will have its instance id (the execution frame in which it was created) set to that of its parent (or to its parent at the given level). This is equivalent to calling:: x.set_instance_id(level=level) For each object ``x`` passed to :func:`magic_register`. ''' level = kwds.get('level', 1) for x in args: if isinstance(x, InstanceTracker): x.set_instance_id(level=level + 1) else: magic_register(*x, **{'level':level + 1}) def magic_return(f): ''' Decorator to ensure that the returned object from a function is recognised by magic functions **Usage example:** :: @magic_return def f(): return PulsePacket(50*ms, 100, 10*ms) **Explanation** Normally, code like the following wouldn't work:: def f(): return PulsePacket(50*ms, 100, 10*ms) pp = f() M = SpikeMonitor(pp) run(100*ms) raster_plot() show() The reason is that the magic function :func:`run()` only recognises objects created in the same execution frame that it is run from. The :func:`magic_return` decorator corrects this, it registers the return value of a function with the magic module. The following code will work as expected:: @magic_return def f(): return PulsePacket(50*ms, 100, 10*ms) pp = f() M = SpikeMonitor(pp) run(100*ms) raster_plot() show() **Technical details** The :func:`magic_return` function uses :func:`magic_register` with the default ``level=1`` on just the object returned by a function. See details for :func:`magic_register`. 
''' def new_f(*args, **kwds): obj = f(*args, **kwds) magic_register(obj) return obj new_f.__name__ = f.__name__ new_f.__doc__ = f.__doc__ return new_f def get_instances(instancetype, level=1, all=False): """Find all instances of a given class at a given level in the stack See documentation for module Brian.magic """ try: instancetype.__instancefollower__ except AttributeError: raise InstanceTrackerError('Cannot track instances of type ', instancetype) target_frame = id(getouterframes(currentframe())[level + 1][0]) if all or not get_global_preference('magic_useframes'): target_frame = None objs = instancetype.__instancefollower__.get(instancetype, target_frame) return (objs, map(str, map(id, objs))) def find_instances(instancetype, startlevel=1, all=False): """Find first instances of a given class in the stack See documentation for module Brian.magic """ # Note that we start from startlevel+1 because startlevel means from the calling function's point of view for level in range(startlevel + 1, len(getouterframes(currentframe()))): objs, names = get_instances(instancetype, level, all=all) if len(objs): return (objs, names) return ([], []) def find_all_instances(instancetype, startlevel=1, all=False): """Find all instances of a given class in the stack See documentation for module Brian.magic """ objs = [] names = [] # Note that we start from startlevel+1 because startlevel means from the calling function's point of view for level in range(startlevel + 1, len(getouterframes(currentframe()))): newobjs, newnames = get_instances(instancetype, level, all=all) objs += newobjs names += newnames return (objs, names) if __name__ == '__main__': print __doc__ brian-1.3.1/brian/membrane_equations.py000066400000000000000000000140471167451777000201570ustar00rootroot00000000000000# ---------------------------------------------------------------------------------- # Copyright ENS, INRIA, CNRS # Contributors: Romain Brette (brette@di.ens.fr) and Dan Goodman (goodman@di.ens.fr) # # Brian is a computer program whose purpose is to simulate models # of biological neural networks. # # This software is governed by the CeCILL license under French law and # abiding by the rules of distribution of free software. You can use, # modify and/ or redistribute the software under the terms of the CeCILL # license as circulated by CEA, CNRS and INRIA at the following URL # "http://www.cecill.info". # # As a counterpart to the access to the source code and rights to copy, # modify and redistribute granted by the license, users are provided only # with a limited warranty and the software's author, the holder of the # economic rights, and the successive licensors have only limited # liability. # # In this respect, the user's attention is drawn to the risks associated # with loading, using, modifying and/or developing or reproducing the # software by the user in light of its specific status of free software, # that may mean that it is complicated to manipulate, and that also # therefore means that it is reserved for developers and experienced # professionals having in-depth computer knowledge. Users are therefore # encouraged to load and test the software's suitability as regards their # requirements in conditions enabling the security of their systems and/or # data to be ensured and, more generally, to use and operate it in the # same conditions as regards security. # # The fact that you are presently reading this means that you have had # knowledge of the CeCILL license and that you accept its terms. 
# ---------------------------------------------------------------------------------- # ''' Membrane equations for Brian models. ''' __all__ = ['Current', 'IonicCurrent', 'InjectedCurrent', 'MembraneEquation'] from equations import * from units import have_same_dimensions, get_unit, second, volt, amp from warnings import warn class Current(Equations): ''' A set of equations defining a current. current_name is the name of the variable that must be added as a membrane current to the membrane equation. ''' def __init__(self, expr='', current_name=None, level=0, surfacic=False, **kwd): Equations.__init__(self, expr, level=level + 1, **kwd) if surfacic: # A surfacic current is multiplied by membrane area in a MembraneEquation self._prefix = '__scurrent_' else: self._prefix = '__current_' # Find which variable is the current if current_name: # Explicitly given self.set_current_name(current_name) elif expr != '': # Guess if len(self._units) == 2: # only one variable (the other one is t) # Only 1 variable: it's the current correct_names = [name for name in self._units.keys() if name != 't'] if len(correct_names) != 1: raise NameError, "The equations do not include time (variable t)" name, = correct_names self.set_current_name(name) else: # Look for variables with dimensions of current: won't work with units off! current_names = [name for name, unit in self._units.iteritems()\ if have_same_dimensions(unit, amp)] if len(current_names) == 1: # only one current self.set_current_name(current_names[0]) else: warn("The current variable could not be found!") def set_current_name(self, name): if name != 't': if name is None: name = unique_id() current_name = self._prefix + name self.add_eq(current_name, name, self._units[name]) # not an alias because read-only def __iadd__(self, other): # Adding a MembraneEquation if isinstance(other, MembraneEquation): return other.__iadd__(self) else: return Equations.__iadd__(self, other) class IonicCurrent(Current): ''' A ionic current; current direction is defined from intracellular to extracellular. ''' def set_current_name(self, name): if name != 't': if name is None: name = unique_id() current_name = self._prefix + name self.add_eq(current_name, '-' + name, self._units[name]) InjectedCurrent = Current class MembraneEquation(Equations): ''' A membrane equation, defined by a capacitance C and a sum of currents. Ex: eq=MembraneEquation(200*pF) eq=MembraneEquation(200*pF,vm='V') No more than one membrane equation allowed per system of equations. 
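    A membrane equation is completed by adding currents, for example (a sketch;
    the leak is written here as an IonicCurrent, which uses the outward sign
    convention and therefore enters the membrane equation with a minus sign)::

        from brian import *

        eqs = MembraneEquation(200*pF)
        eqs += IonicCurrent('Il=gl*(vm-El) : amp', gl=10*nS, El=-70*mV)
        group = NeuronGroup(10, model=eqs, threshold=-50*mV, reset=-70*mV)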
''' def __init__(self, C=None, **kwd): if C is not None: expr = ''' dvm/dt=__membrane_Im/C : volt __membrane_Im=0*unit : unit ''' Equations.__init__(self, expr, C=C, unit=get_unit(C * volt / second), **kwd) else: Equations.__init__(self) def __iadd__(self, other): Equations.__iadd__(self, other) self.set_membrane_current() return self def set_membrane_current(self): current_vars = [name for name in self._eq_names if name.startswith('__current_')] # point current scurrent_vars = [name for name in self._eq_names if name.startswith('__scurrent_')] # surfacic current if scurrent_vars != []: if current_vars != []: self._string['__membrane_Im'] = '+'.join(current_vars) + '+__area*(' + '+'.join(scurrent_vars) + ')' else: self._string['__membrane_Im'] = '__area*(' + '+'.join(scurrent_vars) + ')' self._namespace[name]['__area'] = self.area elif current_vars != []: self._string['__membrane_Im'] = '+'.join(current_vars) brian-1.3.1/brian/monitor.py000066400000000000000000001640351167451777000157730ustar00rootroot00000000000000# ---------------------------------------------------------------------------------- # Copyright ENS, INRIA, CNRS # Contributors: Romain Brette (brette@di.ens.fr) and Dan Goodman (goodman@di.ens.fr) # # Brian is a computer program whose purpose is to simulate models # of biological neural networks. # # This software is governed by the CeCILL license under French law and # abiding by the rules of distribution of free software. You can use, # modify and/ or redistribute the software under the terms of the CeCILL # license as circulated by CEA, CNRS and INRIA at the following URL # "http://www.cecill.info". # # As a counterpart to the access to the source code and rights to copy, # modify and redistribute granted by the license, users are provided only # with a limited warranty and the software's author, the holder of the # economic rights, and the successive licensors have only limited # liability. # # In this respect, the user's attention is drawn to the risks associated # with loading, using, modifying and/or developing or reproducing the # software by the user in light of its specific status of free software, # that may mean that it is complicated to manipulate, and that also # therefore means that it is reserved for developers and experienced # professionals having in-depth computer knowledge. Users are therefore # encouraged to load and test the software's suitability as regards their # requirements in conditions enabling the security of their systems and/or # data to be ensured and, more generally, to use and operate it in the # same conditions as regards security. # # The fact that you are presently reading this means that you have had # knowledge of the CeCILL license and that you accept its terms. # ---------------------------------------------------------------------------------- # ''' Monitors (spikes and state variables). * Tip: Spike monitors should have non significant impact on simulation time if properly coded. 
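A typical recording setup (a sketch; the model and parameter values are
arbitrary)::

    from brian import *

    G = NeuronGroup(10, 'dv/dt = (11*mV - v)/(10*ms) : volt',
                    threshold=10*mV, reset=0*mV)
    M = SpikeMonitor(G)                      # records (neuron, time) pairs
    trace = StateMonitor(G, 'v', record=[0, 1])
    run(100*ms)
    print M.nspikes, M.spiketimes[0]
    plot(trace.times / ms, trace[0] / mV)
    show()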
''' __all__ = ['VanRossumMetric','SpikeMonitor', 'PopulationSpikeCounter', 'SpikeCounter', 'FileSpikeMonitor', 'StateMonitor', 'ISIHistogramMonitor', 'Monitor', 'PopulationRateMonitor', 'StateSpikeMonitor', 'MultiStateMonitor', 'RecentStateMonitor', 'CoincidenceCounter', 'CoincidenceMatrixCounter', 'StateHistogramMonitor'] from units import * from connections import Connection, SparseConnectionVector from numpy import array, zeros, mean, histogram, linspace, tile, digitize, \ copy, ones, rint, exp, arange, convolve, argsort, mod, floor, asarray, \ maximum, Inf, amin, amax, sort, nonzero, setdiff1d, diag, hstack, resize,\ inf, var,tril,empty,float64,array,sum from scipy.spatial.distance import sqeuclidean from itertools import repeat, izip from clock import guess_clock, EventClock, Clock from network import NetworkOperation, network_operation from stdunits import ms, Hz from collections import defaultdict import types from operator import isSequenceType from tools.statistics import firing_rate from neurongroup import NeuronGroup import bisect from base import * from time import time try: import pylab, matplotlib except: pass from globalprefs import * from scipy import weave class Monitor(object): pass class SpikeMonitor(Connection, Monitor): ''' Counts or records spikes from a :class:`NeuronGroup` Initialised as one of:: SpikeMonitor(source(,record=True)) SpikeMonitor(source,function=function) Where: ``source`` A :class:`NeuronGroup` to record from ``record`` ``True`` or ``False`` to record all the spikes or just summary statistics. ``function`` A function ``f(spikes)`` which is passed the array of neuron numbers that have fired called each step, to define custom spike monitoring. Has three attributes: ``nspikes`` The number of recorded spikes ``spikes`` A time ordered list of pairs ``(i,t)`` where neuron ``i`` fired at time ``t``. ``spiketimes`` A dictionary with keys the indices of the neurons, and values an array of the spike times of that neuron. For example, ``t=M.spiketimes[3]`` gives the spike times for neuron 3. For ``M`` a :class:`SpikeMonitor`, you can also write: ``M[i]`` An array of the spike times of neuron ``i``. Notes: :class:`SpikeMonitor` is subclassed from :class:`Connection`. To define a custom monitor, either define a subclass and rewrite the ``propagate`` method, or pass the monitoring function as an argument (``function=myfunction``, with ``def myfunction(spikes):...``) ''' # isn't there a units problem here for delay? def __init__(self, source, record=True, delay=0, function=None): # recordspikes > record? self.source = source # pointer to source group self.target = None self.nspikes = 0 self.spikes = [] self.record = record self.W = None # should we just remove this variable? source.set_max_delay(delay) self.delay = int(delay / source.clock.dt) # Synaptic delay in time bins self._newspikes = True if function != None: self.propagate = function def reinit(self): """ Clears all monitored spikes """ self.nspikes = 0 self.spikes = [] self._newspikes = True #recreate self._spiketimes on next access def propagate(self, spikes): ''' Deals with the spikes. Overload this function to store or process spikes. Default: counts the spikes (variable nspikes) ''' if len(spikes): self._newspikes = True self.nspikes += len(spikes) if self.record: self.spikes += zip(spikes, repeat(self.source.clock.t)) def origin(self, P, Q): ''' Returns the starting coordinate of the given groups in the connection matrix W. 
''' return (P.origin - self.source.origin, 0) def compress(self): pass def __getitem__(self, i): return self.getspiketimes()[i] def getspiketimes(self): if self._newspikes: self._newspikes = False self._spiketimes = {} for i in xrange(len(self.source)): self._spiketimes[i] = [] for i, t in self.spikes: self._spiketimes[i].append(float(t)) for i in xrange(len(self.source)): self._spiketimes[i] = array(self._spiketimes[i]) return self._spiketimes spiketimes = property(fget=getspiketimes) # def getvspikes(self): # if isinstance(self.source, VectorizedNeuronGroup): # N = self.source.neuron_number # overlap = self.source.overlap # duration = self.source.duration # vspikes = [(mod(i,N),(t-overlap)+i/N*(duration-overlap)*second) for (i,t) in self.spikes if t >= overlap] # vspikes.sort(cmp=lambda x,y:2*int(x[1]>y[1])-1) # return vspikes # concatenated_spikes = property(fget=getvspikes) class AutoCorrelogram(SpikeMonitor): ''' Calculates autocorrelograms for the selected neurons (online). Initialised as:: AutoCorrelogram(source,record=[1,2,3], delay=10*ms) where ``delay`` is the size of the autocorrelogram. NOT FINISHED ''' def __init__(self, source, record=True, delay=0): SpikeMonitor.__init__(self, source, record=record, delay=delay) self.reinit() if record is not False: if record is not True and not isinstance(record, int): self.recordindex = dict((i, j) for i, j in zip(self.record, range(len(self.record)))) def reinit(self): if self.record == True: self._autocorrelogram = zeros((len(self.record), len(self.source))) else: self._autocorrelogram = zeros((len(self.record), self.delay)) def propagate(self, spikes): spikes_set = set(spikes) if self.record == True: for i in xrange(self.delay): # Not a brilliant implementation self._autocorrelogram[spikes_set.intersection(self.source.LS[i]), i] += 1 def __getitem__(self, i): # TODO: returns the autocorrelogram of neuron i pass class PopulationSpikeCounter(SpikeMonitor): ''' Counts spikes from a :class:`NeuronGroup` Initialised as:: PopulationSpikeCounter(source) With argument: ``source`` A :class:`NeuronGroup` to record from Has one attribute: ``nspikes`` The number of recorded spikes ''' def __init__(self, source, delay=0): SpikeMonitor.__init__(self, source, record=False, delay=delay) class SpikeCounter(PopulationSpikeCounter): ''' Counts spikes from a :class:`NeuronGroup` Initialised as:: SpikeCounter(source) With argument: ``source`` A :class:`NeuronGroup` to record from Has two attributes: ``nspikes`` The number of recorded spikes ``count`` An array of spike counts for each neuron For a :class:`SpikeCounter` ``M`` you can also write ``M[i]`` for the number of spikes counted for neuron ``i``. ''' def __init__(self, source): PopulationSpikeCounter.__init__(self, source) self.count = zeros(len(source), dtype=int) def __getitem__(self, i): return int(self.count[i]) def propagate(self, spikes): if len(spikes): PopulationSpikeCounter.propagate(self, spikes) self.count[spikes] += 1 def reinit(self): self.count[:] = 0 PopulationSpikeCounter.reinit(self) class StateSpikeMonitor(SpikeMonitor): ''' Counts or records spikes and state variables at spike times from a :class:`NeuronGroup` Initialised as:: StateSpikeMonitor(source, var) Where: ``source`` A :class:`NeuronGroup` to record from ``var`` The variable name or number to record from, or a tuple of variable names or numbers if you want to record multiple variables for each spike. Has two attributes: .. attribute:: nspikes The number of recorded spikes .. 
attribute:: spikes A time ordered list of tuples ``(i,t,v)`` where neuron ``i`` fired at time ``t`` and the specified variable had value ``v``. If you specify multiple variables, each tuple will be of the form ``(i,t,v0,v1,v2,...)`` where the ``vi`` are the values corresponding in order to the variables you specified in the ``var`` keyword. And two methods: .. method:: times(i=None) Returns an array of the spike times for the whole monitored group, or just for neuron ``i`` if specified. .. method:: values(var, i=None) Returns an array of the values of variable ``var`` for the whole monitored group, or just for neuron ``i`` if specified. ''' def __init__(self, source, var): SpikeMonitor.__init__(self, source) if isinstance(var, (str, int)) or not isSequenceType(var): var = (var,) self._varnames = var self._vars = [source.state_(v) for v in var] self._varindex = dict((v, i + 2) for i, v in enumerate(var)) self._units = [source.unit(v) for v in var] def propagate(self, spikes): if len(spikes): self.nspikes += len(spikes) recordedstate = [ [x * u for x in v[spikes]] for v, u in izip(self._vars, self._units) ] self.spikes += zip(spikes, repeat(self.source.clock.t), *recordedstate) def __getitem__(self, i): return NotImplemented # don't use the version from SpikeMonitor def times(self, i=None): '''Returns the spike times (of neuron ``i`` if specified)''' if i is not None: return array([x[1] for x in self.spikes if x[0] == i]) else: return array([x[1] for x in self.spikes]) def values(self, var, i=None): '''Returns the recorded values of ``var`` (for spikes from neuron ``i`` if specified)''' v = self._varindex[var] if i is not None: return array([x[v] for x in self.spikes if x[0] == i]) else: return array([x[v] for x in self.spikes]) class HistogramMonitorBase(SpikeMonitor): pass class ISIHistogramMonitor(HistogramMonitorBase): ''' Records the interspike interval histograms of a group. Initialised as:: ISIHistogramMonitor(source, bins) ``source`` The source group to record from. ``bins`` The lower bounds for each bin, so that e.g. ``bins = [0*ms, 10*ms, 20*ms]`` would correspond to bins with intervals 0-10ms, 10-20ms and 20+ms. Has properties: ``bins`` The ``bins`` array passed at initialisation. ``count`` An array of length ``len(bins)`` counting how many ISIs were in each bin. This object can be passed directly to the plotting function :func:`hist_plot`. ''' def __init__(self, source, bins, delay=0): SpikeMonitor.__init__(self, source, delay) self.bins = array(bins) self.reinit() def reinit(self): super(ISIHistogramMonitor, self).reinit() self.count = zeros(len(self.bins)) self.LS = 1000 * second * ones(len(self.source)) def propagate(self, spikes): if len(spikes): super(ISIHistogramMonitor, self).propagate(spikes) isi = self.source.clock.t - self.LS[spikes] self.LS[spikes] = self.source.clock.t # all this nonsense is necessary to deal with the fact that # numpy changed the semantics of histogram in 1.2.0 or thereabouts try: h, a = histogram(isi, self.bins, new=True) except TypeError: h, a = histogram(isi, self.bins) if len(h) == len(self.count): self.count += h else: self.count[:-1] += h self.count[-1] += len(isi) - sum(h) class FileSpikeMonitor(SpikeMonitor): """Records spikes to a file Initialised as:: FileSpikeMonitor(source, filename[, record=False]) Does everything that a :class:`SpikeMonitor` does except also records the spikes to the named file. 
note that spikes are recorded as an ASCII file of lines each of the form: ``i, t`` Where ``i`` is the neuron that fired, and ``t`` is the time in seconds. Has one additional method: ``close_file()`` Closes the file manually (will happen automatically when the program ends). """ def __init__(self, source, filename, record=False, delay=0): super(FileSpikeMonitor, self).__init__(source, record, delay) self.filename = filename self.f = open(filename, 'w') def reinit(self): self.close_file() self.f = open(self.filename, 'w') def propagate(self, spikes): if len(spikes): super(FileSpikeMonitor, self).propagate(spikes) for i in spikes: self.f.write(str(i) + ", " + str(float(self.source.clock.t)) + "\n") def close_file(self): self.f.close() class PopulationRateMonitor(SpikeMonitor): ''' Monitors and stores the (time-varying) population rate Initialised as:: PopulationRateMonitor(source,bin) Records the average activity of the group for every bin. Properties: ``rate``, ``rate_`` An array of the rates in Hz. ``times``, ``times_`` The times of the bins. ``bin`` The duration of a bin (in second). ''' times = property(fget=lambda self:array(self._times)) times_ = times rate = property(fget=lambda self:array(self._rate)) rate_ = rate def __init__(self, source, bin=None): SpikeMonitor.__init__(self, source) if bin: self._bin = int(bin / source.clock.dt) else: self._bin = 1 # bin size in number self._rate = [] self._times = [] self._curstep = 0 self._clock = source.clock self._factor = 1. / float(self._bin * source.clock.dt * len(source)) def reinit(self): SpikeMonitor.reinit(self) self._rate = [] self._times = [] self._curstep = 0 def propagate(self, spikes): if self._curstep == 0: self._rate.append(0.) self._times.append(self._clock._t) # +.5*bin? self._curstep = self._bin self._rate[-1] += len(spikes) * self._factor self._curstep -= 1 def smooth_rate(self, width=None, filter='gaussian'): """ Returns a smoothed version of the vector of rates, convolving the rates with a filter (gaussian or flat) with the given width. """ if width is None: # automatic with Shinomoto's algorithms if filter=='flat': """ (Experimental) If width is not given and the filter is flat, then the bin size is automatically chosen using Shimazaki and Shinomoto's method: Shimazaki and Shinomoto, A method for selecting the bin size of a time histogram Neural Computation 19(6), 1503-1527, 2007 http://dx.doi.org/10.1162/neco.2007.19.6.1503 """ # Shinomoto's method to find the optimal bin size. Adapted from: # Shimazaki and Shinomoto, A method for selecting the bin size of a time histogram # Neural Computation 19(6), 1503-1527, 2007 # http://dx.doi.org/10.1162/neco.2007.19.6.1503 counts=array(self._rate)/self._factor best_value=inf for nbins in range(2,500): # possible number of bins (maybe a less brutal optimization?) 
binsize=len(counts)/nbins x=resize(counts,(len(counts)/binsize,binsize)) #x.reshape((x.size,1))[len(counts):]=0 # unnecessary because smaller x=x.sum(1) # x is the histogram with nbins bins K=mean(x) # average number of spikes per recording bin value=(2*K-var(x))/binsize**2 if value= self.onset: self.model_length[spiking_neurons] += 1 T_spiking = array(rint((self.source.clock.t + self.spikedelays[spiking_neurons]) / dt), dtype=int) remaining_neurons = spiking_neurons remaining_T_spiking = T_spiking while True: remaining_indices, = (remaining_T_spiking > self.next_spike_time[remaining_neurons]).nonzero() if len(remaining_indices): indices = remaining_neurons[remaining_indices] self.target_length[indices] += 1 self.spiketime_index[indices] += 1 self.last_spike_time[indices] = self.next_spike_time[indices] self.next_spike_time[indices] = array(rint(self.data[self.spiketime_index[indices] + 1] / dt), dtype=int) if self.coincidence_count_algorithm == 'exclusive': self.last_spike_allowed[indices] = self.next_spike_allowed[indices] self.next_spike_allowed[indices] = True remaining_neurons = remaining_neurons[remaining_indices] remaining_T_spiking = remaining_T_spiking[remaining_indices] else: break # Updates coincidences count near_last_spike = self.last_spike_time[spiking_neurons] + self.delta >= T_spiking near_next_spike = self.next_spike_time[spiking_neurons] - self.delta <= T_spiking last_spike_allowed = self.last_spike_allowed[spiking_neurons] next_spike_allowed = self.next_spike_allowed[spiking_neurons] I = (near_last_spike & last_spike_allowed) | (near_next_spike & next_spike_allowed) if self.source.clock.t >= self.onset: self.coincidences[spiking_neurons[I]] += 1 if self.coincidence_count_algorithm == 'exclusive': near_both_allowed = (near_last_spike & last_spike_allowed) & (near_next_spike & next_spike_allowed) self.last_spike_allowed[spiking_neurons] = last_spike_allowed & -near_last_spike self.next_spike_allowed[spiking_neurons] = (next_spike_allowed & -near_next_spike) | near_both_allowed class VanRossumMetric(StateMonitor): """ van Rossum spike train metric. From M. van Rossum (2001): A novel spike distance (Neural Computation). Compute the van Rossum distance between every spike train from the source population. Arguments: ``source`` The group to compute the distances for. ``tau`` Time constant of the kernel (low pass filter). Has one attribute: ``distance`` A square symmetric matrix containing the distances. 
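Usage sketch (illustrative; the group definition is a placeholder)::

    G = NeuronGroup(...)                   # any spiking group
    metric = VanRossumMetric(G, tau=4*ms)
    run(1*second)
    D = metric.distance                    # N x N matrix of pairwise distances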
""" def __init__(self, source, tau=2*ms): self.dt = source.clock.dt self.source = source self.nbr_neurons = len(source) self.tau=tau eqs=""" dv/dt=(-v)/tau: volt """ kernel=NeuronGroup(self.nbr_neurons,model=eqs) C = Connection(source, kernel, 'v') C.connect_one_to_one(source,kernel) StateMonitor.__init__(self,kernel, 'v', record=True) self.contained_objects=[kernel,C] #self.distance_matrix=zeros((self.nbr_neurons,self.nbr_neurons)) def reinit(self): StateMonitor.reinit(self) # def define(self): # @network_operation(clock=EventClock(dt=self.dt)) # def get_distance_online(): # tt=time() # for neuron_idx1 in range(self.nbr_neurons): # for neuron_idx2 in range((neuron_idx1+1)): # self.distance_matrix[neuron_idx1,neuron_idx2]=self.distance_matrix[neuron_idx1,neuron_idx2]+self.dt/self.tau*abs(self[neuron_idx1][-1]-self[neuron_idx2][-1])**2 # print time()-tt # self.contained_objects.append(get_distance_online) def get_distance(self): if get_global_preference('useweave'): _cpp_compiler=get_global_preference('weavecompiler') _extra_compile_args=['-O3'] if _cpp_compiler=='gcc': _extra_compile_args+=get_global_preference('gcc_options') # ['-march=native', '-ffast-math'] nbr_neurons=int(self.nbr_neurons) distance_matrix=zeros((nbr_neurons,nbr_neurons),dtype=float64) nbr_time_step=int(len(self[0])) dt=float(self.dt) traces=self.values tau=float(self.tau) code=''' for(int k1=0;k1200: for neuron_idx1 in xrange(self.nbr_neurons): vidx1 = values[neuron_idx1] for neuron_idx2 in xrange((neuron_idx1+1)): self.distance_matrix[neuron_idx1,neuron_idx2]=self.dt/self.tau*sum((vidx1-values[neuron_idx2])**2) else: for neuron_idx1 in xrange(self.nbr_neurons): Vi = values[neuron_idx1].reshape((1, nbr_time_step)) Vj = values.reshape((self.nbr_neurons, nbr_time_step)) self.distance_matrix[neuron_idx1, :] = (self.dt/self.tau)*sum((Vi-Vj)**2, axis=1) #print time()-tt return tril(self.distance_matrix,k=0)+tril(self.distance_matrix,k=0).T distance = property(fget=get_distance) class CoincidenceMatrixCounter(SpikeMonitor): """ Coincidence counter matrix class. Counts the number of coincidences between the spikes of the neurons in the network (model spikes). This yields a matrix with the coincidence counts between every pair of neurons in the network Initialised as:: cc = CoincidenceCounter(source, delta = 4*ms) with the following arguments: ``source`` A :class:`NeuronGroup` object which neurons are being monitored. ``delta=4*ms`` The half-width of the time window for the coincidence counting algorithm. ``onset`` A scalar value in seconds giving the start of the counting: no coincidences are counted before ``onset``. Has three attributes: ``coincidences`` The matrix containg the number of coincidences between each neuron of the :class:`NeuronGroup`. ``coincidences[i,j]`` is the number of coincidences between neuron i and j. ``model_length`` The number of spikes for each neuron. ``model_length[i]`` is the spike count for neuron i. 
""" def __init__(self, source, onset=None, delta=4 * ms): source.set_max_delay(0) self.source = source self.delay = 0 if onset is None: onset = 0 * ms self.onset = onset self.N = len(source) dt = self.source.clock.dt self.delta = array(rint(delta / dt), dtype=int) self.reinit() def reinit(self): dt = self.source.clock.dt # does not seem to be used # Number of spikes for each neuron self.model_length = zeros(self.N, dtype='int') self.target_length = zeros(self.N, dtype='int') self._coincidences = zeros((self.N, self.N), dtype='int') self.last_spike_time = -100 * ones(self.N, dtype='int') def get_coincidences(self): M = array(self._coincidences, dtype=float) M -= diag(M.diagonal() / 2) return M coincidences = property(fget=get_coincidences) def propagate(self, spiking_neurons): dt = self.source.clock.dt spiking_neurons = array(spiking_neurons) if len(spiking_neurons): if self.source.clock.t >= self.onset: self.model_length[spiking_neurons] += 1 tint = array(rint(self.source.clock.t / dt), dtype=int) self.last_spike_time[spiking_neurons] = tint I, = (abs(self.last_spike_time - tint) <= self.delta).nonzero() if self.source.clock.t >= self.onset: for ispike in spiking_neurons: #self.coincidences[ispike,setdiff1d(I,ispike)]+=1 self._coincidences[ispike, I[ispike <= I]] += 1 self._coincidences[I[ispike <= I], ispike] += 1 # if self.source.clock.t == self.onset: # self.coincidences=(self.coincidences+self.coincidences.T)/2 class StateHistogramMonitor(NetworkOperation, Monitor): ''' Records the histogram of a state variable from a :class:`NeuronGroup`. Initialise as:: StateHistogramMonitor(P,varname,range(,period=1*ms)(,nbins=20)) Where: ``P`` The group to be recorded from ``varname`` The state variable name or number to be recorded ``range`` The minimum and maximum values for the state variable. A 2-tuple of floats. ``period`` When to record. ``nbins`` Number of bins for the histogram. The :class:`StateHistogramMonitor` object has the following properties: ``mean`` The mean value of the state variable for every neuron in the group ``var`` The unbiased estimate of the variances, as in ``mean`` ``std`` The square root of ``var``, as in ``mean`` ``hist`` A 2D array of the histogram values of all the neurons, each row is a single neuron's histogram. ``bins`` A 1D array of the bin centers used to compute the histogram ``bin_edges`` A 1D array of the bin edges used to compute the histogram In addition, if :class:`M`` is a :class:`StateHistogramMonitor` object, you write:: M[i] for the histogram of neuron ``i``. 
''' def __init__(self, group, varname, range, period=1 * ms, nbins=20): self.clock = Clock(period) NetworkOperation.__init__(self, None, clock=self.clock) self.group = group self.len = len(group) self.varname = varname self.nbins = nbins self.n = 0 self.bin_edges = linspace(range[0], range[1], self.nbins + 1) self._hist = zeros((self.len, self.nbins + 2)) self.sum = zeros(self.len) self.sum2 = zeros(self.len) def __call__(self): x = self.group.state_(self.varname) self.sum += x self.sum2 += x ** 2 inds = digitize(x, self.bin_edges) for i in xrange(self.len): self._hist[i, inds[i]] += 1 self.n += 1 def __getitem__(self, i): return self.hist[i, :] hist = property(fget=lambda self:self._hist[:, 1:-1] / self.n) bins = property(fget=lambda self:(self.bin_edges[:-1] + self.bin_edges[1:]) / 2) mean = property(fget=lambda self:self.sum / self.n) var = property(fget=lambda self:(self.sum2 / self.n - self.mean ** 2)) std = property(fget=lambda self:self.var ** .5) brian-1.3.1/brian/network.py000066400000000000000000001151201167451777000157640ustar00rootroot00000000000000# ---------------------------------------------------------------------------------- # Copyright ENS, INRIA, CNRS # Contributors: Romain Brette (brette@di.ens.fr) and Dan Goodman (goodman@di.ens.fr) # # Brian is a computer program whose purpose is to simulate models # of biological neural networks. # # This software is governed by the CeCILL license under French law and # abiding by the rules of distribution of free software. You can use, # modify and/ or redistribute the software under the terms of the CeCILL # license as circulated by CEA, CNRS and INRIA at the following URL # "http://www.cecill.info". # # As a counterpart to the access to the source code and rights to copy, # modify and redistribute granted by the license, users are provided only # with a limited warranty and the software's author, the holder of the # economic rights, and the successive licensors have only limited # liability. # # In this respect, the user's attention is drawn to the risks associated # with loading, using, modifying and/or developing or reproducing the # software by the user in light of its specific status of free software, # that may mean that it is complicated to manipulate, and that also # therefore means that it is reserved for developers and experienced # professionals having in-depth computer knowledge. Users are therefore # encouraged to load and test the software's suitability as regards their # requirements in conditions enabling the security of their systems and/or # data to be ensured and, more generally, to use and operate it in the # same conditions as regards security. # # The fact that you are presently reading this means that you have had # knowledge of the CeCILL license and that you accept its terms. 
# ---------------------------------------------------------------------------------- # ''' Network class ''' __all__ = ['Network', 'MagicNetwork', 'NetworkOperation', 'network_operation', 'run', 'reinit', 'stop', 'clear', 'forget', 'recall'] from Queue import Queue from connections import * from neurongroup import NeuronGroup from clock import guess_clock, Clock import magic from inspect import * from operator import isSequenceType import types from itertools import chain from collections import defaultdict import copy from base import * from units import second import time from utils.progressreporting import * from globalprefs import * import gc import heapq globally_stopped = False class Network(object): ''' Contains simulation objects and runs simulations **Initialised as:** :: Network(...) with ``...`` any collection of objects that should be added to the :class:`Network`. You can also pass lists of objects, lists of lists of objects, etc. Objects that need to passed to the :class:`Network` object are: * :class:`NeuronGroup` and anything derived from it such as :class:`PoissonGroup`. * :class:`Connection` and anything derived from it. * Any monitor such as :class:`SpikeMonitor` or :class:`StateMonitor`. * Any network operation defined with the :func:`network_operation` decorator. Models, equations, etc. do not need to be passed to the :class:`Network` object. The most important method is the ``run(duration)`` method which runs the simulation for the given length of time (see below for details about what happens when you do this). **Example usage:** :: G = NeuronGroup(...) C = Connection(...) net = Network(G,C) net.run(1*second) **Methods** ``add(...)`` Add additional objects after initialisation, works the same way as initialisation. ``run(duration[, report[, report_period]])`` Runs the network for the given duration. See below for details about what happens when you do this. See documentation for :func:`run` for an explanation of the ``report`` and ``report_period`` keywords. ``reinit(states=True)`` Reinitialises the network, runs each object's ``reinit()`` and each clock's ``reinit()`` method (resetting them to 0). If ``states=False`` then it will not reinitialise the :class:`NeuronGroup` state variables. ``stop()`` Can be called from a :func:`network_operation` for example to stop the network from running. ``__len__()`` Returns the number of neurons in the network. ``__call__(obj)`` Similar to ``add``, but you can only pass one object and that object is returned. You would only need this in obscure circumstances where objects needed to be added to the network but were either not stored elsewhere or were stored in a way that made them difficult to extract, for example below the NeuronGroup object is only added to the network if certain conditions hold:: net = Network(...) if some_condition: x = net(NeuronGroup(...)) **What happens when you run** For an overview, see the Concepts chapter of the main documentation. When you run the network, the first thing that happens is that it checks if it has been prepared and calls the ``prepare()`` method if not. This just does various housekeeping tasks and optimisations to make the simulation run faster. Also, an update schedule is built at this point (see below). Now the ``update()`` method is repeatedly called until every clock has run for the given length of time. 
After each call of the ``update()`` method, the clock is advanced by one tick, and if multiple clocks are being used, the next clock is determined (this is the clock whose value of ``t`` is minimal amongst all the clocks). For example, if you had two clocks in operation, say ``clock1`` with ``dt=3*ms`` and ``clock2`` with ``dt=5*ms`` then this will happen: 1. ``update()`` for ``clock1``, tick ``clock1`` to ``t=3*ms``, next clock is ``clock2`` with ``t=0*ms``. 2. ``update()`` for ``clock2``, tick ``clock2`` to ``t=5*ms``, next clock is ``clock1`` with ``t=3*ms``. 3. ``update()`` for ``clock1``, tick ``clock1`` to ``t=6*ms``, next clock is ``clock2`` with ``t=5*ms``. 4. ``update()`` for ``clock2``, tick ``clock2`` to ``t=10*ms``, next clock is ``clock1`` with ``t=6*ms``. 5. ``update()`` for ``clock1``, tick ``clock1`` to ``t=9*ms``, next clock is ``clock1`` with ``t=9*ms``. 6. ``update()`` for ``clock1``, tick ``clock1`` to ``t=12*ms``, next clock is ``clock2`` with ``t=10*ms``. etc. The ``update()`` method simply runs each operation in the current clock's update schedule. See below for details on the update schedule. **Update schedules** An update schedule is the sequence of operations that are called for each ``update()`` step. The standard update schedule is: * Network operations with ``when = 'start'`` * Network operations with ``when = 'before_groups'`` * Call ``update()`` method for each :class:`NeuronGroup`, this typically performs an integration time step for the differential equations defining the neuron model. * Network operations with ``when = 'after_groups'`` * Network operations with ``when = 'middle'`` * Network operations with ``when = 'before_connections'`` * Call ``do_propagate()`` method for each :class:`Connection`, this typically adds a value to the target state variable of each neuron that a neuron that has fired is connected to. See Tutorial 2: Connections for a more detailed explanation of this. * Network operations with ``when = 'after_connections'`` * Network operations with ``when = 'before_resets'`` * Call ``reset()`` method for each :class:`NeuronGroup`, typically resets a given state variable to a given reset value for each neuron that fired in this update step. * Network operations with ``when = 'after_resets'`` * Network operations with ``when = 'end'`` There is one predefined alternative schedule, which you can choose by calling the ``update_schedule_groups_resets_connections()`` method before running the network for the first time. As the name suggests, the reset operations are done before connections (and the appropriately named network operations are called relative to this rearrangement). You can also define your own update schedule with the ``set_update_schedule`` method (see that method's API documentation for details). This might be useful for example if you have a sequence of network operations which need to be run in a given order. 
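A short sketch of the two-clock scenario walked through above (illustrative;
the group definitions are placeholders)::

    clock1 = Clock(dt=3*ms)
    clock2 = Clock(dt=5*ms)
    G1 = NeuronGroup(..., clock=clock1)
    G2 = NeuronGroup(..., clock=clock2)
    net = Network(G1, G2)
    net.run(15*ms)    # G1 and G2 updates are interleaved as listed above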
''' operations = property(fget=lambda self:self._all_operations) def __init__(self, *args, **kwds): self.clock = None # Initialized later self.groups = [] self.connections = [] # The following dict keeps a copy of which operations are in which slot self._operations_dict = defaultdict(list) self._all_operations = [] self.update_schedule_standard() self.prepared = False for o in chain(args, kwds.itervalues()): self.add(o) def add(self, *objs): """ Add an object or container of objects to the network """ for obj in objs: if isinstance(obj, NeuronGroup): if obj not in self.groups: self.groups.append(obj) elif isinstance(obj, Connection): if obj not in self.connections: self.connections.append(obj) elif isinstance(obj, NetworkOperation): if obj not in self._all_operations: self._operations_dict[obj.when].append(obj) self._all_operations.append(obj) elif isSequenceType(obj): for o in obj: self.add(o) else: raise TypeError('Only the following types of objects can be added to a network: NeuronGroup, Connection or NetworkOperation') try: gco = obj.contained_objects if gco is not None: self.add(gco) except AttributeError: pass def __call__(self, obj): """ Add an object to the network and return it """ self.add(obj) return obj def reinit(self, states=True): ''' Resets the objects and clocks. If ``states=False`` it will not reinit the state variables. ''' objs = self.groups + self.connections + self.operations if self.clock is not None: objs.append(self.clock) else: guess_clock(None).reinit() if hasattr(self, 'clocks'): objs.extend(self.clocks) for P in objs: if hasattr(P, 'reinit'): if isinstance(P, NeuronGroup): try: P.reinit(states=states) except TypeError: P.reinit() else: P.reinit() def prepare(self): ''' Prepares the network for simulation: + Checks the clocks of the neuron groups + Gather connections with identical subgroups + Compresses the connection matrices for faster simulation Calling this function is not mandatory but speeds up the simulation. ''' # Set the clock if self.same_clocks(): self.set_clock() else: self.set_many_clocks() # Gather connections with identical subgroups # 'subgroups' maps subgroups to connections (initialize with immutable object (not [])!) 
subgroups = dict.fromkeys([(C.source, C.delay) for C in self.connections], None) for C in self.connections: if subgroups[(C.source, C.delay)] == None: subgroups[(C.source, C.delay)] = [C] else: subgroups[(C.source, C.delay)].append(C) self.connections = subgroups.values() cons = self.connections # just for readability for i in range(len(cons)): if len(cons[i]) > 1: # at least 2 connections with the same subgroup cons[i] = MultiConnection(cons[i][0].source, cons[i]) else: cons[i] = cons[i][0] # Compress connections for C in self.connections: C.compress() # Experimental support for new propagation code if get_global_preference('usenewpropagate') and get_global_preference('useweave'): from experimental.new_c_propagate import make_new_connection for C in self.connections: make_new_connection(C) # build operations list for each clock self._build_update_schedule() self.prepared = True def update_schedule_standard(self): self._schedule = ['ops start', 'ops before_groups', 'groups', 'ops after_groups', 'ops middle', 'ops before_connections', 'connections', 'ops after_connections', 'ops before_resets', 'resets', 'ops after_resets', 'ops end' ] self._build_update_schedule() def update_schedule_groups_resets_connections(self): self._schedule = ['ops start', 'ops before_groups', 'groups', 'ops after_groups', 'ops middle', 'ops before_resets', 'resets', 'ops after_resets', 'ops before_connections', 'connections', 'ops after_connections', 'ops end' ] self._build_update_schedule() def set_update_schedule(self, schedule): """ Defines a custom update schedule A custom update schedule is a list of schedule items. Each update step of the network, the schedule items will be run in turn. A schedule item can be defined as a string or tuple. The following string definitions are possible: 'groups' Calls the 'update' function of each group in turn, this is typically the integration step of the simulation. 'connections' Calls the 'do_propagate' function of each connection in turn, this is typically propagating spikes forward (and backward in the case of STDP). 'resets' Calls the 'reset' function of each group in turn. 'ops '+name Calls each operation in turn whose 'when' parameter is set to 'name'. The standard set of 'when' names is start, before_groups, after_groups, before_resets, after_resets, before_connections, after_connections, end, but you can use any you like. If a tuple is provided, it should be of the form: (objset, func, allclocks) with: objset a list of objects to be processed func Either None or a string. In the case of none, each object in objset must be callable and will be called. In the case of a string, obj.func will be called for each obj in objset. allclocks Either True or False. If it's set to True, then the object will be placed in the update schedule of every clock in the network. If False, it will be placed in the update schedule only of the clock obj.clock. """ self._schedule = schedule self._build_update_schedule() def _build_update_schedule(self): ''' Defines what the update step does For each clock we build a list self._update_schedule[id(clock)] of functions which are called at the update step if that clock is active. This is generic and works for single or multiple clocks. See documentation for set_update_schedule for an explanation of the self._schedule object. 
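For example, a custom schedule passed to ``set_update_schedule`` could look
like this (sketch; the ordering shown is just one possibility)::

    net.set_update_schedule(['ops start', 'groups', 'ops middle',
                             'resets', 'connections', 'ops end'])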
''' self._update_schedule = defaultdict(list) if hasattr(self, 'clocks'): clocks = self.clocks else: clocks = [self.clock] clockset = clocks for item in self._schedule: # we define some simple names for common schedule items if isinstance(item, str): if item == 'groups': objset = self.groups objfun = 'update' allclocks = False elif item == 'resets': objset = self.groups objfun = 'reset' allclocks = False elif item == 'connections': objset = self.connections objfun = 'do_propagate' allclocks = False # Connections do not define their own clock, but they should # be updated on the schedule of their source group for obj in objset: obj.clock = obj.source.clock elif len(item) > 4 and item[0:3] == 'ops': # the item is of the forms 'ops when' objset = self._operations_dict[item[4:]] objfun = None allclocks = False else: # we allow the more general form of usage as well objset, objfun, allclocks = item for obj in objset: if objfun is None: f = obj else: f = getattr(obj, objfun) if not allclocks: useclockset = [obj.clock] else: useclockset = clockset for clock in useclockset: self._update_schedule[id(clock)].append(f) def update(self): for f in self._update_schedule[id(self.clock)]: f() """ def update_threaded(self,queue): ''' EXPERIMENTAL (not useful for the moment) Parallel update of the network (using threads). ''' # Update groups: one group = one thread for P in self.groups: queue.put(P) queue.join() # Wait until job is done # The following is done serially # Propagate spikes for C in self.connections: C.propagate(C.source.get_spikes(C.delay)) # Miscellanous operations for op in self.operations: op() """ def run(self, duration, threads=1, report=None, report_period=10 * second): ''' Runs the simulation for the given duration. ''' global globally_stopped self.stopped = False globally_stopped = False if not self.prepared: self.prepare() self.clock.set_duration(duration) try: for c in self.clocks: c.set_duration(duration) except AttributeError: pass if report is not None: start_time = time.time() if not isinstance(report, ProgressReporter): report = ProgressReporter(report, report_period) next_report_time = start_time + float(report_period) else: report_period = report.period next_report_time = report.next_report_time if self.clock.still_running() and not self.stopped and not globally_stopped: not_same_clocks = not self.same_clocks() clk = self.clock while clk.still_running() and not self.stopped and not globally_stopped: if report is not None: cur_time = time.time() if cur_time > next_report_time: next_report_time = cur_time + float(report_period) report.update((self.clock.t - self.clock.start) / duration) self.update() clk.tick() if not_same_clocks: # Find the next clock to update #self.clock = min([(clock.t, id(clock), clock) for clock in self.clocks])[2] clk = self.clock = min(self.clocks) #heapq.heappush(self.clocks, self.clock) #clk = self.clock = heapq.heappop(self.clocks) #clk = self.clock = heapq.heappushpop(self.clocks, self.clock) #if not clk0: clock = groups_and_operations[0].clock return all([obj.clock == clock for obj in groups_and_operations]) else: return True def set_clock(self): ''' Sets the clock and checks that clocks of all groups are synchronized. ''' if self.same_clocks(): groups_and_operations=self.groups + self.operations if len(groups_and_operations)>0: self.clock = groups_and_operations[0].clock else: self.clock = guess_clock() else: raise TypeError, 'Clocks are not synchronized!' # other error type? def set_many_clocks(self): ''' Sets a list of clocks. 
self.clock points to the current clock between considered. ''' self.clocks = list(set([obj.clock for obj in self.groups + self.operations])) #self.clocks.sort(key=lambda c:-c._dt) #heapq.heapify(self.clocks) #self.clock = min([(clock.t, clock) for clock in self.clocks])[1] self.clock = min(self.clocks) #self.clock = heapq.heappop(self.clocks) def __len__(self): ''' Number of neurons in the network ''' n = 0 for P in self.groups: n += len(P) # use compact iterator function? return n def __repr__(self): return 'Network of' + str(len(self)) + 'neurons' # TODO: obscure custom update schedules might still lead to unpicklable Network object def __reduce__(self): # This code might need some explanation: # # The problem with pickling the Network object is that you cannot pickle # 'instance methods', that is a copy of a method of an instance. The # Network object does this because the _update_schedule attribute stores # a copy of all the functions that need to be called each time step, and # these are all instance methods (of NeuronGroup, Reset, etc.). So, we # solve the problem by deleting this attribute at pickling time, and then # rebuilding it at unpickling time. The function unpickle_network defined # below does the unpickling. # # We basically want to make a copy of the current object, and delete the # update schedule from it, and then pickle that. Some weird recursive # stuff happens if you try to do this in the obvious way, so we take the # seemingly mad step of converting the object to a general 'heap' class # (that is, a new-style class with no methods or anything, in this case # the NetworkNoMethods class defined below), do all our operations on # this, store a copy of the actual class of the object (which may not be # Network for derived classes), work with this, and then restore # everything back to the way it was when everything is done. # oldclass = self.__class__ # class may be derived from Network self.__class__ = NetworkNoMethods # stops recursion in copy.copy net = copy.copy(self) # we make a copy because after returning from this function we can't restore the class self.__class__ = oldclass # restore the class of the original, which is now back in its original state net._update_schedule = None # remove the problematic element from the copy return (unpickle_network, (oldclass, net)) # the unpickle_network function called with arguments oldclass, net restores it as it was # This class just used as a general 'heap' class - has no methods but can have attributes class NetworkNoMethods(object): pass def unpickle_network(oldclass, net): # See Network.__reduce__ for an explanation, basically the _update_schedule # cannot be pickled because it contains instance methods, but it can just be # rebuilt. net.__class__ = oldclass net._build_update_schedule() return net class NetworkOperation(magic.InstanceTracker, ObjectContainer): """Callable class for operations that should be called every update step Typically, you should just use the :func:`network_operation` decorator, but if you can't for whatever reason, use this. Note: current implementation only works for functions, not any callable object. **Initialisation:** :: NetworkOperation(function[,clock]) If your function takes an argument, the clock will be passed as that argument. 
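Sketch of direct use (illustrative; the group ``G`` is a placeholder, and
normally the :func:`network_operation` decorator is the simpler route)::

    def report_time(clk):
        print 'Simulated up to', clk.t
    op = NetworkOperation(report_time, clock=Clock(dt=10*ms), when='end')
    net = Network(G, op)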
""" def __init__(self, function=None, clock=None, when='end'): self.clock = guess_clock(clock) self.when = when self.function = function if hasattr(function, 'func_code'): self._has_arg = (self.function.func_code.co_argcount==1) def __call__(self): if not hasattr(self, 'function') or self.function is None: return if self._has_arg: self.function(self.clock) else: self.function() def network_operation(*args, **kwds): """Decorator to make a function into a :class:`NetworkOperation` A :class:`NetworkOperation` is a callable class which is called every time step by the :class:`Network` ``run`` method. Sometimes it is useful to just define a function which is to be run every update step. This decorator can be used to turn a function into a :class:`NetworkOperation` to be added to a :class:`Network` object. **Example usages** Operation doesn't need a clock:: @network_operation def f(): ... Automagically detect clock:: @network_operation def f(clock): ... Specify a clock:: @network_operation(specifiedclock) def f(clock): ... Specify when the network operation is run (default is ``'end'``):: @network_operation(when='start') def f(): ... Then add to a network as follows:: net = Network(f,...) """ # Notes on this decorator: # Normally, a decorator comes in two types, with or without arguments. If # it has no arguments, e.g. # @decorator # def f(): # ... # then the decorator function is defined with an argument, and that # argument is the function f. In this case, the decorator function # returns a new function in place of f. # # However, you can also define: # @decorator(arg) # def f(): # ... # in which case the argument to the decorator function is arg, and the # decorator function returns a 'function factory', that is a callable # object that takes a function as argument and returns a new function. # # It might be clearer just to note that the first form above is equivalent # to: # f = decorator(f) # and the second to: # f = decorator(arg)(f) # # In this case, we're allowing the decorator to be called either with or # without an argument, so we have to look at the arguments and determine # if it's a function argument (in which case we do the first case above), # or if the arguments are arguments to the decorator, in which case we # do the second case above. # # Here, the 'function factory' is the locally defined class # do_network_operation, which is a callable object that takes a function # as argument and returns a NetworkOperation object. class do_network_operation(object): def __init__(self, clock=None, when='end'): self.clock = clock self.when = when def __call__(self, f, level=1): new_network_operation = NetworkOperation(f, self.clock, self.when) # Depending on whether we were called as @network_operation or # @network_operation(...) we need different levels, the level is # 2 in the first case and 1 in the second case (because in the # first case we go originalcaller->network_operation->do_network_operation # and in the second case we go originalcaller->do_network_operation # at the time when this method is called). new_network_operation.set_instance_id(level=level) new_network_operation.__name__ = f.__name__ new_network_operation.__doc__ = f.__doc__ new_network_operation.__dict__.update(f.__dict__) return new_network_operation if len(args) == 1 and callable(args[0]): # We're in case (1), the user has written: # @network_operation # def f(): # ... 
# and the single argument to the decorator is the function f return do_network_operation()(args[0], level=2) else: # We're in case (2), the user has written: # @network_operation(...) # def f(): # ... # and the arguments might be clocks or strings, and may have been # called with or without names, so we check both the variable length # argument list *args, and the keyword dictionary **kwds, falling # back on the default values if nothing is given. clk = None when = 'end' for arg in args: if isinstance(arg, Clock): clk = arg elif isinstance(arg, str): when = arg for key, val in kwds.iteritems(): if key == 'clock': clk = val if key == 'when': when = val return do_network_operation(clock=clk, when=when) #raise TypeError, "Decorator must be used as @network_operation or @network_operation(clock)" class MagicNetwork(Network): ''' Creates a :class:`Network` object from any suitable objects **Initialised as:** :: MagicNetwork() The object returned can then be used just as a regular :class:`Network` object. It works by finding any object in the ''execution frame'' (i.e. in the same function, script or section of module code where the :class:`MagicNetwork` was created) derived from :class:`NeuronGroup`, :class:`Connection` or :class:`NetworkOperation`. **Sample usage:** :: G = NeuronGroup(...) C = Connection(...) @network_operation def f(): ... net = MagicNetwork() Each of the objects ``G``, ``C`` and ``f`` are added to ``net``. **Advanced usage:** :: MagicNetwork([verbose=False[,level=1]]) with arguments: ``verbose`` Set to ``True`` to print out a list of objects that were added to the network, for debugging purposes. ``level`` Where to find objects. ``level=1`` means look for objects where the :class:`MagicNetwork` object was created. The ``level`` argument says how many steps back in the stack to look. ''' def __init__(self, verbose=False, level=1): ''' Set verbose=False to turn off comments. The level variable contains the location of the namespace. ''' (groups, groupnames) = magic.find_instances(NeuronGroup) groups = [g for g in groups if g._owner is g] groupnames = [gn for g, gn in zip(groups, groupnames) if g._owner is g] (connections, connectionnames) = magic.find_instances(Connection) (operations, operationnames) = magic.find_instances(NetworkOperation) if verbose: print "[MagicNetwork] Groups:", groupnames print "[MagicNetwork] Connections:", connectionnames print "[MagicNetwork] Operations:", operationnames # Use set() to discard duplicates Network.__init__(self, list(set(groups)), list(set(connections)), list(set(operations))) def run(duration, threads=1, report=None, report_period=10 * second): ''' Run a network created from any suitable objects that can be found Arguments: ``duration`` the length of time to run the network for. ``report`` How to report progress, the default ``None`` doesn't report the progress. Some standard values for ``report``: ``text``, ``stdout`` Prints progress to the standard output. ``stderr`` Prints progress to the standard error output stderr. ``graphical``, ``tkinter`` Uses the Tkinter module to show a graphical progress bar, this may interfere with any other GUI code you have. Alternatively, you can provide your own callback function by setting ``report`` to be a function ``report(elapsed, complete)`` of two variables ``elapsed``, the amount of time elapsed in seconds, and ``complete`` the proportion of the run duration simulated (between 0 and 1). 
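For example (sketch; ``my_report`` is a user-defined placeholder)::

    def my_report(elapsed, complete):
        print '%3.0f%% done after %.1f s' % (100 * complete, elapsed)

    run(1*second, report=my_report)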
The ``report`` function is guaranteed to be called at the end of the run with ``complete=1.0`` so this can be used as a condition for reporting that the computation is finished. ``report_period`` How often the progress is reported (by default, every 10s). Works by constructing a :class:`MagicNetwork` object from all the suitable objects that could be found (:class:`NeuronGroup`, :class:`Connection`, etc.) and then running that network. Not suitable for repeated runs or situations in which you need precise control. ''' MagicNetwork(verbose=False, level=2).run(duration, threads=threads, report=report, report_period=report_period) def reinit(states=True): ''' Reinitialises any suitable objects that can be found **Usage:** :: reinit(states=True) Works by constructing a :class:`MagicNetwork` object from all the suitable objects that could be found (:class:`NeuronGroup`, :class:`Connection`, etc.) and then calling ``reinit()`` for each of them. Not suitable for repeated runs or situations in which you need precise control. If ``states=False`` then :class:`NeuronGroup` state variables will not be reinitialised. ''' MagicNetwork(verbose=False, level=2).reinit(states=states) def stop(): ''' Globally stops any running network, this is reset the next time a network is run ''' global globally_stopped globally_stopped = True def clear(erase=True, all=False): ''' Clears all Brian objects. Specifically, it stops all existing Brian objects from being collected by :class:`MagicNetwork` (objects created after clearing will still be collected). If ``erase`` is ``True`` then it will also delete all data from these objects. This is useful in, for example, ``ipython`` which stores persistent references to objects in any given session, stopping the data and memory from being freed up. If ``all=True`` then all Brian objects will be cleared. See also :func:`forget`. ''' if all is False: net = MagicNetwork(level=2) objs = net.groups + net.connections + net.operations else: groups, _ = magic.find_instances(NeuronGroup, all=True) connections, _ = magic.find_instances(Connection, all=True) operations, _ = magic.find_instances(NetworkOperation, all=True) objs = groups+connections+operations for o in objs: o.set_instance_id(-1) if erase: for k, v in o.__dict__.iteritems(): object.__setattr__(o, k, None) gc.collect() def forget(*objs): ''' Forgets the list of objects passed Forgetting means that :class:`MagicNetwork` will not pick up these objects, but all data is retained. You can pass objects or lists of objects. Forgotten objects can be recalled with :func:`recall`. See also :func:`clear`. ''' for obj in objs: if isinstance(obj, (NeuronGroup, Connection, NetworkOperation)): obj._forgotten_instance_id = obj.get_instance_id() obj.set_instance_id(-1) elif isSequenceType(obj): for o in obj: forget(o) else: raise TypeError('Only the following types of objects can be forgotten: NeuronGroup, Connection or NetworkOperation') def recall(*objs): ''' Recalls previously forgotten objects See :func:`forget` and :func:`clear`. 
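Sketch (illustrative; ``G`` is a placeholder group)::

    forget(G)        # G is no longer picked up by MagicNetwork / run()
    run(1*second)    # simulates everything except G
    recall(G)        # G will be picked up again by subsequent runs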
''' for obj in objs: if isinstance(obj, (NeuronGroup, Connection, NetworkOperation)): if hasattr(obj, '_forgotten_instance_id'): obj.set_instance_id(obj._forgotten_instance_id) elif isSequenceType(obj): for o in obj: recall(o) else: raise TypeError('Only the following types of objects can be recalled: NeuronGroup, Connection or NetworkOperation') brian-1.3.1/brian/neurongroup.py000066400000000000000000000663371167451777000166750ustar00rootroot00000000000000# ---------------------------------------------------------------------------------- # Copyright ENS, INRIA, CNRS # Contributors: Romain Brette (brette@di.ens.fr) and Dan Goodman (goodman@di.ens.fr) # # Brian is a computer program whose purpose is to simulate models # of biological neural networks. # # This software is governed by the CeCILL license under French law and # abiding by the rules of distribution of free software. You can use, # modify and/ or redistribute the software under the terms of the CeCILL # license as circulated by CEA, CNRS and INRIA at the following URL # "http://www.cecill.info". # # As a counterpart to the access to the source code and rights to copy, # modify and redistribute granted by the license, users are provided only # with a limited warranty and the software's author, the holder of the # economic rights, and the successive licensors have only limited # liability. # # In this respect, the user's attention is drawn to the risks associated # with loading, using, modifying and/or developing or reproducing the # software by the user in light of its specific status of free software, # that may mean that it is complicated to manipulate, and that also # therefore means that it is reserved for developers and experienced # professionals having in-depth computer knowledge. Users are therefore # encouraged to load and test the software's suitability as regards their # requirements in conditions enabling the security of their systems and/or # data to be ensured and, more generally, to use and operate it in the # same conditions as regards security. # # The fact that you are presently reading this means that you have had # knowledge of the CeCILL license and that you accept its terms. # ---------------------------------------------------------------------------------- # ''' Neuron groups ''' __all__ = ['NeuronGroup', 'linked_var'] from numpy import * from scipy import rand, linalg, random from numpy.random import exponential, randint import copy from units import * from threshold import * import bisect from reset import * from clock import * from stateupdater import * from inspection import * from operator import isSequenceType import types from utils.circular import * import magic from itertools import count from equations import * from globalprefs import * import sys from brian_unit_prefs import bup import numpy from base import * from group import * from threshold import select_threshold from collections import defaultdict timedarray = None # ugly hack: import this module when it is needed, can't do it here because of order of imports network = None # ugly hack: import this module when it is needed, can't do it here because of order of imports class TArray(numpy.ndarray): ''' This internal class is just used for when Brian sends an array t to an object. All the elements will be the same in this case, and you can check for isinstance(arr, TArray) to do optimisations based on this. This behaviour may change in the future. 
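Sketch of the kind of shortcut this allows (illustrative; ``f`` stands for
whatever is being evaluated with ``t``)::

    if isinstance(t, TArray):
        value = f(t[0])    # all elements of t are equal: evaluate once
    else:
        value = f(t)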
''' def __new__(subtype, arr): # All numpy.ndarray subclasses need something like this, see # http://www.scipy.org/Subclasses return numpy.array(arr, copy=False).view(subtype) class LinkedVar(object): def __init__(self, source, var=0, func=None, when='start', clock=None): self.source = source self.var = var self.func = func self.when = when self.clock = clock def linked_var(source, var=0, func=None, when='start', clock=None): """ Used for linking one :class:`NeuronGroup` variable to another. Sample usage:: G = NeuronGroup(...) H = NeuronGroup(...) G.V = linked_var(H, 'W') In this scenario, the variable V in group G will always be updated with the values from variable W in group H. The groups G and H must be the same size (although subgroups can be used if they are not the same size). Arguments: ``source`` The group from which values will be taken. ``var`` The state variable of the source group to take values from. ``func`` An additional function of one argument to pass the source variable values through, e.g. ``func=lambda x:clip(x,0,Inf)`` to half rectify the values. ``when`` The time in the main Brian loop at which the copy operation is performed, as explained in :class:`Network`. ``clock`` The update clock for the copy operation, by default it will use the clock of the target group. """ return LinkedVar(source, var, func, when, clock) class NeuronGroup(magic.InstanceTracker, ObjectContainer, Group): """Group of neurons Initialised with arguments: ``N`` The number of neurons in the group. ``model`` An object defining the neuron model. It can be an :class:`Equations` object, a string defining an :class:`Equations` object, a :class:`StateUpdater` object, or a list or tuple of :class:`Equations` and strings. ``threshold=None`` A :class:`Threshold` object, a function, a scalar quantity or a string. If ``threshold`` is a function with one argument, it will be converted to a :class:`SimpleFunThreshold`, otherwise it will be a :class:`FunThreshold`. If ``threshold`` is a scalar, then a constant single valued threshold with that value will be used. In this case, the variable to apply the threshold to will be guessed. If there is only one variable, or if you have a variable named one of ``V``, ``Vm``, ``v`` or ``vm`` it will be used. If ``threshold`` is a string then the appropriate threshold type will be chosen, for example you could do ``threshold='V>10*mV'``. The string must be a one line string. ``reset=None`` A :class:`Reset` object, a function, a scalar quantity or a string. If it's a function, it will be converted to a :class:`FunReset` object. If it's a scalar, then a constant single valued reset with that value will be used. In this case, the variable to apply the reset to will be guessed. If there is only one variable, or if you have a variable named one of ``V``, ``Vm``, ``v`` or ``vm`` it will be used. If ``reset`` is a string it should be a series of expressions which are evaluated for each neuron that is resetting. The series of expressions can be multiline or separated by a semicolon. For example, ``reset=`Vt+=5*mV; V=Vt'``. Statements involving ``if`` constructions will often not work because the code is automatically vectorised. For such constructions, use a function instead of a string. ``refractory=0*ms``, ``min_refractory``, ``max_refractory`` A refractory period, used in combination with the ``reset`` value if it is a scalar. 
For constant resets only, you can specify refractory as an array of length the number of elements in the group, or as a string, giving the name of a state variable in the group. In the case of these variable refractory periods, you should specify ``min_refractory`` (optional) and ``max_refractory`` (required). ``clock`` A clock to use for scheduling this :class:`NeuronGroup`, if omitted the default clock will be used. ``order=1`` The order to use for nonlinear differential equation solvers. TODO: more details. ``implicit=False`` Whether to use an implicit method for solving the differential equations. TODO: more details. ``max_delay=0*ms`` The maximum allowable delay (larger values use more memory). This doesn't usually need to be specified because Connections will update it. ``compile=False`` Whether or not to attempt to compile the differential equation solvers (into Python code). Typically, for best performance, both ``compile`` and ``freeze`` should be set to ``True`` for nonlinear differential equations. ``freeze=False`` If True, parameters are replaced by their values at the time of initialization. ``method=None`` If not None, the integration method is forced. Possible values are linear, nonlinear, Euler, exponential_Euler (overrides implicit and order keywords). ``unit_checking=True`` Set to ``False`` to bypass unit-checking. **Methods** .. method:: subgroup(N) Returns the next sequential subgroup of ``N`` neurons. See the section on subgroups below. .. method:: state(var) Returns the array of values for state variable ``var``, with length the number of neurons in the group. .. method:: rest() Sets the neuron state values at rest for their differential equations. The following usages are also possible for a group ``G``: ``G[i:j]`` Returns the subgroup of neurons from ``i`` to ``j``. ``len(G)`` Returns the number of neurons in ``G``. ``G.x`` For any valid Python variable name ``x`` corresponding to a state variable of the the :class:`NeuronGroup`, this returns the array of values for the state variable ``x``, as for the :meth:`state` method above. Writing ``G.x = arr`` for ``arr`` a :class:`TimedArray` will set the values of variable x to be ``arr(t)`` at time t. See :class:`TimedArraySetter` for details. **Subgroups** A subgroup is a view on a group. It isn't a new group, it's just a convenient way of referring to a subset of the neurons in an already defined group. The subset has to be a continguous set of neurons. They can be overlapping if defined with the slice notation, or consecutive if defined with the :meth:`subgroup` method. Subgroups can themselves be subgrouped. Subgroups can be used in almost all situations exactly as if they were groups, except that they cannot be passed to the :class:`Network` object. **Details** TODO: details of other methods and properties for people wanting to write extensions? """ @check_units(max_delay=second) def __init__(self, N, model=None, threshold=None, reset=NoReset(), init=None, refractory=0 * msecond, level=0, clock=None, order=1, implicit=False, unit_checking=True, max_delay=0 * msecond, compile=False, freeze=False, method=None, max_refractory=None, ):#**args): # any reason why **args was included here? ''' Initializes the group. 
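A minimal construction sketch (the equation, threshold and reset values below are purely illustrative)::

    eqs = Equations('dV/dt = -V/(10*ms) : volt')
    group = NeuronGroup(4000, model=eqs, threshold=10*mV, reset=0*mV,
                        refractory=5*ms)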
''' self._spiking = True # by default, produces spikes if bup.use_units: # one additional frame level induced by the decorator level += 1 # If it is a string, convert to Equations object if isinstance(model, (str, list, tuple)): model = Equations(model, level=level + 1) if isinstance(threshold, str): if isinstance(model, Equations): threshold = select_threshold(threshold, model, level=level + 1) else: threshold = StringThreshold(threshold, level=level + 1) if isinstance(reset, str): if isinstance(model, Equations): reset = select_reset(reset, model, level=level + 1) else: reset = StringReset(reset, level=level + 1) # Clock clock = guess_clock(clock)#not needed with protocol checking self.clock = clock # Initial state self._S0 = init self.staticvars = [] # StateUpdater if isinstance(model, StateUpdater): self._state_updater = model # Update mechanism self._all_units = defaultdict() elif isinstance(model, Equations): self._eqs = model if (init == None) and (model._units == {}): raise AttributeError, "The group must be initialized." self._state_updater, var_names = magic_state_updater(model, clock=clock, order=order, check_units=unit_checking, implicit=implicit, compile=compile, freeze=freeze, method=method) Group.__init__(self, model, N, unit_checking=unit_checking) self._all_units = model._units # Converts S0 from dictionary to tuple if self._S0 == None: # No initialization: 0 with units S0 = {} else: S0 = self._S0.copy() # Fill missing units for key, value in model._units.iteritems(): if not key in S0: S0[key] = 0 * value self._S0 = [0] * len(var_names) for var, i in zip(var_names, count()): self._S0[i] = S0[var] else: raise TypeError, "StateUpdater must be specified at initialization." # TODO: remove temporary unit hack, this makes all state variables dimensionless if no units are specified # What is this?? if self._S0 is None: self._S0 = dict((i, 1.) for i in range(len(self._state_updater))) # Threshold if isinstance(threshold, Threshold): self._threshold = threshold elif type(threshold) == types.FunctionType: if threshold.func_code.co_argcount == 1: self._threshold = SimpleFunThreshold(threshold) else: self._threshold = FunThreshold(threshold) elif is_scalar_type(threshold): # Check unit if self._S0 != None: try: threshold + self._S0[0] except DimensionMismatchError, inst: raise DimensionMismatchError("The threshold does not have correct units.", *inst._dims) self._threshold = Threshold(threshold=threshold) else: # maybe raise an error? 
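# No usable threshold was supplied: fall back to NoThreshold and mark the
# group as non-spiking.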
self._threshold = NoThreshold() self._spiking = False # Initialization of the state matrix if not hasattr(self, '_S'): self._S = zeros((len(self._state_updater), N)) if self._S0 != None: for i in range(len(self._state_updater)): self._S[i, :] = self._S0[i] # Reset and refractory period self._variable_refractory_time = False period_max = 0 if is_scalar_type(reset) or reset.__class__ is Reset: if reset.__class__ is Reset: if isinstance(reset.state, str): numstate = self.get_var_index(reset.state) else: numstate = reset.state reset = reset.resetvalue else: numstate = 0 # Check unit if self._S0 != None: try: reset + self._S0[numstate] except DimensionMismatchError, inst: raise DimensionMismatchError("The reset does not have correct units.", *inst._dims) if isinstance(refractory, float): max_refractory = refractory else: if isinstance(refractory, str): if max_refractory is None: raise ValueError('Must specify max_refractory if using variable refractoriness.') self._refractory_variable = refractory self._refractory_array = None else: max_refractory = amax(refractory) * second self._refractory_variable = None self._refractory_array = refractory self._variable_refractory_time = True # What is this 0.9 ?!! Answer: it's just to check that the refractory period is at least clock.dt otherwise don't bother if max_refractory > 0.9 * clock.dt: # Refractory period - unit checking is done here self._resetfun = Refractoriness(period=max_refractory, resetvalue=reset, state=numstate) period_max = int(max_refractory / clock.dt) + 1 else: # Simple reset self._resetfun = Reset(reset, state=numstate) elif type(reset) == types.FunctionType: self._resetfun = FunReset(reset) if refractory > 0.9 * clock.dt: raise ValueError('Refractoriness for custom reset functions not yet implemented, see http://groups.google.fr/group/briansupport/browse_thread/thread/182aaf1af3499a63?hl=en for some options.') elif hasattr(reset, 'period'): # A reset with refractoriness # TODO: check unit (in Reset()) self._resetfun = reset # reset function period_max = int(reset.period / clock.dt) + 1 else: # No reset? self._resetfun = reset if hasattr(threshold, 'refractory'): # A threshold with refractoriness period_max = max(period_max, threshold.refractory + 1) if max_refractory is None: max_refractory = refractory if max_delay < period_max * clock.dt: max_delay = period_max * clock.dt self._max_delay = 0 self.set_max_delay(max_delay) self._next_allowed_spiketime = -ones(N) self._refractory_time = float(max_refractory) - 0.5 * clock._dt if not self._variable_refractory_time and max_refractory < 0.9 * clock.dt: self._use_next_allowed_spiketime_refractoriness = False else: self._use_next_allowed_spiketime_refractoriness = True self._owner = self # owner (for subgroups) self._subgroup_set = magic.WeakSet() self._origin = 0 # start index from owner if subgroup self._next_subgroup = 0 # start index of next subgroup # ensure that var_index has all the 0,...,N-1 integers as names if not hasattr(self, 'var_index'): self.var_index = {} for i in range(self.num_states()): self.var_index[i] = i # these are here for the weave accelerated version of the threshold # call mechanism. 
self._spikesarray = zeros(N, dtype=int) # various things for optimising self.__t = TArray(zeros(N)) self._var_array = {} for i in range(self.num_states()): self._var_array[i] = self._S[i] for kk, i in self.var_index.iteritems(): sv = self.state_(i) if sv.base is self._S: self._var_array[kk] = sv # todo: should we have a guarantee that var_index exists (even if it just # consists of mappings i->i)? def set_max_delay(self, max_delay): if hasattr(self, '_owner') and self._owner is not self: self._owner.set_max_delay(max_delay) return _max_delay = int(max_delay / self.clock.dt) + 2 # in time bins if _max_delay > self._max_delay: self._max_delay = _max_delay self.LS = SpikeContainer(self._max_delay, useweave=get_global_preference('useweave'), compiler=get_global_preference('weavecompiler')) # Spike storage # update all subgroups if any exist if hasattr(self, '_subgroup_set'): # the first time set_max_delay is called this is false for G in self._owner._subgroup_set.get(): G._max_delay = self._max_delay G.LS = self.LS def rest(self): ''' Sets the variables at rest. ''' self._state_updater.rest(self) def reinit(self, states=True): ''' Resets the variables. ''' if self._owner is self: if states: if self._S0 is not None: for i in range(len(self._state_updater)): self._S[i, :] = self._S0[i] else: self._S[:] = 0 # State matrix self._next_allowed_spiketime[:] = -1 self.LS.reinit() def update(self): ''' Updates the state variables. ''' self._state_updater(self) # update the variables if self._spiking: spikes = self._threshold(self) # get spikes if not isinstance(spikes, numpy.ndarray): spikes = array(spikes, dtype=int) if self._use_next_allowed_spiketime_refractoriness: spikes = spikes[self._next_allowed_spiketime[spikes] <= self.clock._t] if self._variable_refractory_time: if self._refractory_variable is not None: refractime = self.state_(self._refractory_variable) else: refractime = self._refractory_array self._next_allowed_spiketime[spikes] = self.clock._t + refractime[spikes] else: self._next_allowed_spiketime[spikes] = self.clock._t + self._refractory_time self.LS.push(spikes) # Store spikes def get_refractory_indices(self): return (self._next_allowed_spiketime > self.clock._t).nonzero()[0] def get_spikes(self, delay=0): ''' Returns indexes of neurons that spiked at time t-delay*dt. ''' if self._owner == self: # Group # if delay==0: # return self.LS.lastspikes() #return self.LS[delay] # last spikes return self.LS.get_spikes(delay, 0, len(self)) else: # Subgroup return self.LS.get_spikes(delay, self._origin, len(self)) # if delay==0: # ls = self.LS.lastspikes() # else: # ls = self.LS[delay] #ls = self.LS[delay] # spikes = ls-self._origin # return spikes[bisect.bisect_left(spikes,0):\ # bisect.bisect_left(spikes,len(self))] # return ls[bisect.bisect_left(ls,self._origin):\ # bisect.bisect_left(ls,len(self)+self._origin)]-self._origin def reset(self): ''' Resets the neurons. ''' self._resetfun(self) def subgroup(self, N): if self._next_subgroup + N > len(self): raise IndexError, "Subgroup is too large." P = self[self._next_subgroup:self._next_subgroup + N] self._next_subgroup += N; return P def unit(self, name): ''' Returns the unit of variable name ''' if name in self._all_units: return self._all_units[name] elif name in self.staticvars: f = self.staticvars[name] print f.func_code.co_varnames print [(var, self.unit(var)) for var in f.func_code.co_varnames] return get_unit(f(*[1. 
* self.unit(var) for var in f.func_code.co_varnames])) elif name == 't': # time return second else: return get_unit(self._S0[self.get_var_index(name)]) def state_(self, name): if name == 't': self.__t[:] = self.clock._t return self.__t else: return Group.state_(self, name) state = state_ def __getitem__(self, i): if i == -1: return self[self._S.shape[1] - 1:] else: return self[i:i + 1] def __getslice__(self, i, j): ''' Creates subgroup (view). TODO: views for all arrays. ''' Q = copy.copy(self) Q._S = self._S[:, i:j] Q.N = Q._S.shape[1] Q._origin = self._origin + i Q._next_subgroup = 0 self._subgroup_set.add(Q) return Q def same(self, Q): ''' Tests if the two groups (subgroups) are of the same kind, i.e., if they can be added. This is not used at the moment. OBSOLETE ''' # Same class? if self.__class__ != Q.__class__: return False # Check all variables except arrays and a few ones exceptvar = ['owner', 'nextsubgroup', 'origin'] for v, val in self.__dict__.iteritems(): if not(v in Q.__dict__): return False if (not(isinstance(val, ndarray)) and (not v in exceptvar) and (val != Q.__dict__[v])): return False for v in Q.__dict__.iterkeys(): if not(v in self.__dict__): return False return True def __repr__(self): if self._owner == self: return 'Group of ' + str(len(self)) + ' neurons' else: return 'Subgroup of ' + str(len(self)) + ' neurons' def __setattr__(self, name, val): global timedarray if timedarray is None: import timedarray if isinstance(val, timedarray.TimedArray): self.set_var_by_array(name, val) elif isinstance(val, LinkedVar): self.link_var(name, val.source, val.var, val.func, val.when, val.clock) else: Group.__setattr__(self, name, val) def link_var(self, var, source, sourcevar, func=None, when='start', clock=None): global network if network is None: import network if clock is None: clock = self.clock # check that var is not an equation (it really should only be a parameter # but not sure how to make this generic and still work with neurongroups # that aren't defined by Equations objects) if hasattr(self, 'staticvars') and var in self.staticvars: raise ValueError("Cannot set a static variable (equation) with a linked variable.") selfarr = self.state_(var) if hasattr(source, 'staticvars') and sourcevar in source.staticvars: if func is None: func = lambda x: x @network.network_operation(when=when, clock=clock) def update_link_var(): selfarr[:] = func(getattr(source, sourcevar)) else: sourcearr = source.state_(sourcevar) if func is None: @network.network_operation(when=when, clock=clock) def update_link_var(): selfarr[:] = sourcearr else: @network.network_operation(when=when, clock=clock) def update_link_var(): selfarr[:] = func(sourcearr) self._owner.contained_objects.append(update_link_var) def set_var_by_array(self, var, arr, times=None, clock=None, start=None, dt=None): # ugly hack, have to import this here because otherwise the order of imports # is messed up. 
import timedarray timedarray.set_group_var_by_array(self, var, arr, times=times, clock=clock, start=start, dt=dt) brian-1.3.1/brian/new_features.txt000066400000000000000000000270661167451777000171640ustar00rootroot00000000000000New features since 1.3.1 ------------------------ Major features: Minor features: Improvements: Bug fixes: New features since 1.3.0 ------------------------ Major features: Minor features: * New PoissonInput class * New auditory model: TanCarney (brian.hears) * Many more examples from papers * New electrode compensation module (in library.electrophysiology) * New trace analysis module (in library.electrophysiology) * Added new brian.tools.taskfarm.run_tasks function to use multiple CPUs to perform multiple runs of a simulation and save results to a DataManager, with an optional GUI interface. * Added FractionalDelay filterbank to brian.hears, fractional itds to HeadlessDatabase and fractional shifts to Sound.shifted. * Added vowel function to brian.hears for creating artificial vowel sounds * New spike_triggered_average function * Added maxlevel and atmaxlevel to Sound * New IRNS/IRNO noise functions Improvements: * SpikeGeneratorGroup is much faster. * Added RemoteControlClient.set(var, name) to allow sending data to the server from the client (previously you could only receive data from the server but not send it, except in string form). * Monitors do not process empty spike arrays when there have not been any spikes, increases speed for monitored networks with sparse firing (#78) * Various speed optimisations Bug fixes: * Fixed bug with frozen equations and time variable in equations * Fixed bug with loading sounds using Sound('filename.wav') * SpikeMonitor now clears spiketimes correctly on reinit (#75) * MultiConnection now propagates reinit (important for monitors) (#76) * Fixed bug in realtime plotting * Fixed various bugs in Sound * Fixed bugs in STDP * Bow propagates spikes only if spikes exist (#78) New features since 1.2.1 ------------------------ Major features: * Added Brian.hears auditory library Minor features: * Added new brian.tools.datamanager.DataManager, moved from brian.experimental * reinit(states=False) will now not reset NeuronGroup state variables to 0. * modelfitting now has support for refractoriness * New examples in misc: after_potential, non_reliability, reliability, van_rossum_metric, remotecontrolserver, remotecontrolclient * New experimental.neuromorphic package * Van Rossum metric added Improvements: * SpikeGeneratorGroup is faster for large number of events ("gather" option). * Speed improvement for pure Python version of sparse matrix preparation * Speed improvements for spike propagation weave code (50-100% faster). * Clocks have been changed and should now behave more predictably. In addition, you can now specify an order attribute for clocks. * modelfitting is now based on playdoh 0.3 * modelfitting can now use euler/exp.euler or RK2 integration schemes * Loading AER data is much faster * Freezing now uses higher precision (used to only use 12sf now uses 17sf) Bug fixes: * Bug in STDP with small values for wmin/wmax fixed (ticket #63) * Equations/aliases now work correctly in STDP (ticket #56) * Bug in sparse matrices introduced in scipy 0.8.0 fixed * Bug in TimedArray when dt keyword is used now fixed (thanks to Adrien Wohrer for pointing out the bug). 
* Units now work correctly in STDP (ticket #60) * STDP raises an error if operations are reordered (ticket #57) * linked_var works with static vars (equations) (ticket #68) * Changing clock.t during a run won't end the run * Fixed ticket #66 (unit bug) * Fixed ticket #64 (bug with freeze) * Can now run a network with no group * Exception handling now works properly for C version of circiular spike container * ccircular now builds correctly on linux and 64 bit Internal changes: * brian.connection deprecated and replaced by subpackage brian.connections, making the code structure much more straightforward and setting up for future work on code generation, etc. New features since 1.2.0 ------------------------ Major features: * New remote controlling of running Brian scripts via RemoteControlServer and RemoteControlClient. Minor features: * New module tools.io * weight and sparseness can now both be functions in connect_random * New StateHistogramMonitor object * clear now has a new keyword all which allows you to destroy all Brian objects regardless of whether or not they would be found by MagicNetwork. In addition, garbage collection is called after a clear. * New method StateMonitor.insert_spikes to have spikes on voltage traces. Improvements * The sparseness keyword in connect_random can be a function * Added 'wmin' to STDP * You can now access STDP internal variables, e.g. stdp.A_pre, and monitor them by doing e.g. StateMonitor(stdp.pre_group, 'A_pre') * STDP now supports nonlinear equations and parameters * refractory can now be a vector (see docstring for NeuronGroup) for constant resets. * modelfitting now uses playdoh library * C++ compiled code is now much faster thanks to adding -ffast-math switch to gcc, and there is an option which allows you to set your own compiler switches, for example -march=native on gcc 4.2+. * SpikeGeneratorGroup now has a spiketimes attribute to reset the list of spike times. * StateMonitor now caches values in an array, improving speed for M[i] operation and resolving ticket #53 * clear() now does a garbage collection after clearing, making memory buildups less likely on repeated runs Bug fixes * Sparse matrices with some versions of scipy * Weave now works on 64 bit platforms with 64 bit Python * Fixed bug introduced in 1.2.0 where dense DelayConnection structures would not propagate any spikes * Fixed bug where connect* functions on DelayConnection didn't work with subgroups but only with the whole group. * Fixed bug with linked_var from subgroups not working * Fixed bug with adding Equations objects together using a shared base equation (ticket #9 on the trac) * unit_checking=False now works (didn't do anything before) * Fixed bug with using Equations object twice (for two different NeuronGroups) * Fixed unit checking bug and ZeroDivisionError (ticket #38) * Fixed rare problems with spikes being lost due to wrong size of SpikeContainer, it now dynamically adapts to the number of spikes. * Fixed ticket #5, ionic_currents did not work with units off * Fixed ticket #6, Current+MembraneEquation now works * Fixed bug in modelfitting : the fitness was not computed right with CPUs. * Fixed bug in modelfitting with random seeds on Unix systems. * brian.hears.filtering now works correctly on 64 bit systems Removed features * Model has now been removed from Brian (it was deprecated in 1.1). 
New features since 1.1.3 ------------------------ Major features: * Model fitting toolbox (library.modelfitting) Minor features: * New real-time ``refresh=`` options added to plotting functions * Gamma factor in tools.statistics * New RegularClock object * Added brian_sample_run function to test installation in place of nose tests Improvements: * Speed improvements to monitors and plotting functions * Sparse matrix support improved, should work with scipy versions up to 0.7.1 * Various improvements to brian.hears (still experimental though) * Parameters now picklable * Made Equations picklable Bug fixes: * Fixed major bug with subgroups and connections (announced on webpage) * Fixed major bug with multiple clocks (announced on webpage) * No warnings with Python 2.6 * Minor bugfix to TimedArray caused by floating point comparisons * Bugfix: refractory neurons could fire in very extreme circumstances * Fixed bug with DelayConnection not setting max_delay * Fixed bug with STP * Fixed bug with weight=lambda i,j:rand() New examples: * New multiprocessing examples * Added polychronisation example * Added modelfitting examples * Added examples of TimedArray and linked_var * Added examples of using derived classes with Brian * Realtime plotting example New features since 1.1.2 ------------------------ * STDP now works with DelayConnection * Added EventClock * Added RecentStateMonitor * Added colormap option to StateMonitor.plot * Added timed array module, see TimedArray class for details. * Added optional progress reporting to run() * New recall() function (converse to forget()) * Added progress reporting module (brian.utils.progressreporting) * Added SpikeMonitor.spiketimes * Added developer's guide to docs * Early version of brian.hears subpackage for auditory modelling * Various bug fixes New features since 1.1.1 ------------------------ * Standard functions rand() and randn() can now be used in string resets. * New forget() function. 
* Major bugfix for STP New features since 1.1.0 ------------------------ * New statistical function: vector_strength * Bugfix for one line string thresholds/resets New features since 1.0.0 ------------------------ * clear() function added for ipython users * New DelayConnection for heterogeneous delays * Short-term plasticity (Tsodyks-Markram model) * Simplified initialisation of Connection objects * Optional unit checking in NeuronGroup * STDP * New code for Connections, including new 'dynamic' connection matrix type * Reset and threshold can be specified with strings (Python expressions) * Spike train statistics (utils.statistics) * Miscellaneous optimisations * New MultiStateMonitor class * New Group, MultiGroup objects (for convenience of people writing extensions mostly) * Improved contained_objects protocol with ObjectContainer class in brian.base New features since 1.0.0 rc5 ---------------------------- * 2nd order Runge-Kutta method (use order=2) * Quantity arrays are disabled (units only for scalars) * brian_global_config added * UserComputedConnectionMatrix and UserComputedSparseConnectionMatrix * SimpleCustomRefractoriness, CustomRefractoriness New features since 1.0.0 rc4 ---------------------------- * Bugfix of sparse matrix problems * Compiled version of spike propagation (much faster for networks with lots of spikes) * Assorted small improvements New features since 1.0.0 rc3 ---------------------------- * Added StateSpikeMonitor * Changed QuantityArray behaviour to work better with numpy, scipy and pylab New features since 1.0.0 rc2 ---------------------------- * Small bugfixes New features since 1.0.0 rc1 ---------------------------- * Documentation system now much better, using Sphinx, includes cross references, index, etc. * Added VariableReset * Added run_all_tests() * numpywrappers module added, but not in global namespace * Quantity comparison to zero doesn't check units (positivity/negativity) New features since 1.0.0 beta ----------------------------- * Connection: connect_full allows a functional weight argument (like connect_random) ** Short-term plasticity ** In Connection: 'modulation' argument allows modulating weights by a state variable from the source group (see examples). * HomogeneousCorrelatedSpikeTrains: input spike trains with exponential correlations. * Network.stop(): stops the simulation (can be called by a user script) * PopulationRateMonitor: smooth_rate method * Optimisation of Euler code: use compile=True when initialising NeuronGroup * More examples * Pickling now works (saving and loading states) * dot(a,b) now works correctly with qarray's with homogeneous units * Parallel simulations using Parallel Python (independent simulations only) * Example of html inferfaces to Brian scripts using CherryPy * Time dependence in equations (see phase_locking example) * SpikeCounter and PopulationSpikeCounterbrian-1.3.1/brian/optimiser.py000066400000000000000000000102111167451777000163010ustar00rootroot00000000000000# ---------------------------------------------------------------------------------- # Copyright ENS, INRIA, CNRS # Contributors: Romain Brette (brette@di.ens.fr) and Dan Goodman (goodman@di.ens.fr) # # Brian is a computer program whose purpose is to simulate models # of biological neural networks. # # This software is governed by the CeCILL license under French law and # abiding by the rules of distribution of free software. 
You can use, # modify and/ or redistribute the software under the terms of the CeCILL # license as circulated by CEA, CNRS and INRIA at the following URL # "http://www.cecill.info". # # As a counterpart to the access to the source code and rights to copy, # modify and redistribute granted by the license, users are provided only # with a limited warranty and the software's author, the holder of the # economic rights, and the successive licensors have only limited # liability. # # In this respect, the user's attention is drawn to the risks associated # with loading, using, modifying and/or developing or reproducing the # software by the user in light of its specific status of free software, # that may mean that it is complicated to manipulate, and that also # therefore means that it is reserved for developers and experienced # professionals having in-depth computer knowledge. Users are therefore # encouraged to load and test the software's suitability as regards their # requirements in conditions enabling the security of their systems and/or # data to be ensured and, more generally, to use and operate it in the # same conditions as regards security. # # The fact that you are presently reading this means that you have had # knowledge of the CeCILL license and that you accept its terms. # ---------------------------------------------------------------------------------- # ''' Optimizations for Brian. ''' from scipy.weave import blitz import re import warnings import parser from inspection import * from log import * try: import sympy use_sympy = True except: warnings.warn('sympy not installed') use_sympy = False #TODO: also insert a global pref? __all__ = ['freeze', 'simplify_expr', 'symbolic_eval'] def freeze(expr, vars, namespace={}, safe=False): """ Replaces all identifiers in expr by their float value. The variables vars are not changed. If safe is True, freezing fails if one variable is not a quantity """ # Find variables ids = [name for name in get_identifiers(expr) if name not in vars] # Check that they are in the namespaces and find their value value = {} for id in ids: if id in namespace: value[id] = namespace[id] else: log_warn('brian.optimizer.freeze', "Freezing impossible because the value of " + id + " is missing") return None if not isinstance(value[id], (int, float)): # or unit? if safe: log_warn('brian.optimizer.freeze', "Freezing impossible because " + id + " is not a number") return None else: value[id] = id else: value[id] = float(value[id]) # downcast Quantity to float # Substitute for id in ids: if isinstance(value[id], float): strver = repr(value[id]) else: strver = str(value[id]) expr = re.sub("\\b" + id + "\\b", strver, expr) # Clean (changes -- to +) expr = re.sub("--", "+", expr) #print "freezing:",expr #return simplify_expr(expr) return expr def symbolic_eval(expr): """ Evaluates expr as a symbolic expression. """ if not use_sympy: return expr # TODO: not with all symbols # Find all symbols namespace = {} vars = get_identifiers(expr) for var in vars: namespace[var] = sympy.Symbol(var) return eval(expr, namespace) def simplify_expr(expr): ''' Simplifies a string expression for an equation. NB: does not seem to yield any speed up! 
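Illustrative example (requires sympy; the exact output string may vary)::

    simplify_expr('-(-v)')   # typically returns 'v'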
''' return str(symbolic_eval(expr)) #return str(sympy.simplify(symbolic_eval(expr))) brian-1.3.1/brian/plotting.py000066400000000000000000000260011167451777000161320ustar00rootroot00000000000000# ---------------------------------------------------------------------------------- # Copyright ENS, INRIA, CNRS # Contributors: Romain Brette (brette@di.ens.fr) and Dan Goodman (goodman@di.ens.fr) # # Brian is a computer program whose purpose is to simulate models # of biological neural networks. # # This software is governed by the CeCILL license under French law and # abiding by the rules of distribution of free software. You can use, # modify and/ or redistribute the software under the terms of the CeCILL # license as circulated by CEA, CNRS and INRIA at the following URL # "http://www.cecill.info". # # As a counterpart to the access to the source code and rights to copy, # modify and redistribute granted by the license, users are provided only # with a limited warranty and the software's author, the holder of the # economic rights, and the successive licensors have only limited # liability. # # In this respect, the user's attention is drawn to the risks associated # with loading, using, modifying and/or developing or reproducing the # software by the user in light of its specific status of free software, # that may mean that it is complicated to manipulate, and that also # therefore means that it is reserved for developers and experienced # professionals having in-depth computer knowledge. Users are therefore # encouraged to load and test the software's suitability as regards their # requirements in conditions enabling the security of their systems and/or # data to be ensured and, more generally, to use and operate it in the # same conditions as regards security. # # The fact that you are presently reading this means that you have had # knowledge of the CeCILL license and that you accept its terms. # ---------------------------------------------------------------------------------- # """ Plotting routines for Brian Functions: * ``raster_plot(monitors...,options...)`` * ``hist_plot(monitor,options...)`` """ __docformat__ = "restructuredtext en" __all__ = ['plot', 'show', 'figure', 'xlabel', 'ylabel', 'title', 'axis', 'raster_plot', 'raster_plot_spiketimes', 'hist_plot'] try: from pylab import plot, show, figure, xlabel, ylabel, title, axis, xlim import pylab, matplotlib except: plot, show, figure, xlabel, ylabel, title, axis, xlim = (None,)*8 from stdunits import * import magic from connections import * from monitor import * from monitor import HistogramMonitorBase from network import network_operation from clock import EventClock import warnings from log import * from numpy import amax, amin, array, hstack import bisect def _take_options(myopts, givenopts): """Takes options from one dict into another Any key defined in myopts and givenopts will be removed from givenopts and placed in myopts. 
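Illustrative example (the option names here are made up)::

    myopts = {'title': ''}
    givenopts = {'title': 'Raster', 'color': 'k'}
    _take_options(myopts, givenopts)
    # myopts is now {'title': 'Raster'}; givenopts is left with {'color': 'k'}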
""" for k in myopts.keys(): if k in givenopts: myopts[k] = givenopts.pop(k) def raster_plot(*monitors, **plotoptions): """Raster plot of a :class:`SpikeMonitor` **Usage** ``raster_plot(monitor,options...)`` Plots the spike times of the monitor on the x-axis, and the neuron number on the y-axis ``raster_plot(monitor0,monitor1,...,options...)`` Plots the spike times for all the monitors given, with y-axis defined by placing a spike from neuron n of m in monitor i at position i+n/m ``raster_plot(options...)`` Guesses the monitors to plot automagically **Options** Any of PyLab options for the ``plot`` command can be given, as well as: ``showplot=False`` set to ``True`` to run pylab's ``show()`` function ``newfigure=False`` set to ``True`` to create a new figure with pylab's ``figure()`` function ``xlabel`` label for the x-axis ``ylabel`` label for the y-axis ``title`` title for the plot ``showgrouplines=False`` set to ``True`` to show a line between each monitor ``grouplinecol`` colour for group lines ``spacebetweengroups`` value between 0 and 1 to insert a space between each group on the y-axis ``refresh`` Specify how often (in simulation time) you would like the plot to refresh. Note that this will only work if pylab is in interactive mode, to ensure this call the pylab ``ion()`` command. ``showlast`` If you are using the ``refresh`` option above, plots are much quicker if you specify a fixed time window to display (e.g. the last 100ms). ``redraw`` If you are using more than one realtime monitor, only one of them needs to issue a redraw command, therefore set this to ``False`` for all but one of them. Note that with some IDEs, interactive plotting will not work with the default matplotlib backend, try doing something like this at the beginning of your script (before importing brian):: import matplotlib matplotlib.use('WXAgg') You may need to experiment, try WXAgg, GTKAgg, QTAgg, TkAgg. """ if len(monitors) == 0: (monitors, monitornames) = magic.find_instances(SpikeMonitor) if len(monitors): # OPTIONS # Defaults myopts = {"title":"", "xlabel":"Time (ms)", "showplot":False, "showgrouplines":False, \ "spacebetweengroups":0.0, "grouplinecol":"k", 'newfigure':False, 'refresh':None, 'showlast':None, 'redraw':True} if len(monitors) == 1: myopts["ylabel"] = 'Neuron number' else: myopts["ylabel"] = 'Group number' # User options _take_options(myopts, plotoptions) # PLOTTING ROUTINE spacebetween = myopts['spacebetweengroups'] class SecondTupleArray(object): def __init__(self, obj): self.obj = obj def __getitem__(self, i): return float(self.obj[i][1]) def __len__(self): return len(self.obj) def get_plot_coords(tmin=None, tmax=None): allsn = [] allst = [] for i, m in enumerate(monitors): mspikes = m.spikes if tmin is not None and tmax is not None: x = SecondTupleArray(mspikes) imin = bisect.bisect_left(x, tmin) imax = bisect.bisect_right(x, tmax) mspikes = mspikes[imin:imax] if len(mspikes): sn, st = array(mspikes).T else: sn, st = array([]), array([]) st /= ms if len(monitors) == 1: allsn = [sn] else: allsn.append(i + ((1. 
- spacebetween) / float(len(m.source))) * sn) allst.append(st) sn = hstack(allsn) st = hstack(allst) if len(monitors) == 1: nmax = len(monitors[0].source) else: nmax = len(monitors) return st, sn, nmax st, sn, nmax = get_plot_coords() if myopts['newfigure']: pylab.figure() if myopts['refresh'] is None: line, = pylab.plot(st, sn, '.', **plotoptions) else: line, = pylab.plot([], [], '.', **plotoptions) if myopts['refresh'] is not None: pylab.axis(ymin=0, ymax=nmax) if myopts['showlast'] is not None: pylab.axis(xmin= -myopts['showlast'] / ms, xmax=0) ax = pylab.gca() if myopts['showgrouplines']: for i in range(len(monitors)): pylab.axhline(i, color=myopts['grouplinecol']) pylab.axhline(i + (1 - spacebetween), color=myopts['grouplinecol']) pylab.ylabel(myopts['ylabel']) pylab.xlabel(myopts['xlabel']) pylab.title(myopts["title"]) if myopts["showplot"]: pylab.show() if myopts['refresh'] is not None: @network_operation(clock=EventClock(dt=myopts['refresh'])) def refresh_raster_plot(clk): if matplotlib.is_interactive(): if myopts['showlast'] is None: st, sn, nmax = get_plot_coords() line.set_xdata(st) line.set_ydata(sn) ax.set_xlim(0, amax(st)) else: st, sn, nmax = get_plot_coords(clk._t - float(myopts['showlast']), clk._t) ax.set_xlim((clk.t - myopts['showlast']) / ms, clk.t / ms) line.set_xdata(array(st)) line.set_ydata(sn) if myopts['redraw']: pylab.draw() pylab.get_current_fig_manager().canvas.flush_events() monitors[0].contained_objects.append(refresh_raster_plot) def raster_plot_spiketimes(spiketimes): """ Raster plot of a list of spike times """ m = Monitor() m.source = [] m.spikes = spiketimes raster_plot(m) t = array(spiketimes)[:,1] def hist_plot(histmon=None, **plotoptions): """Plot a histogram **Usage** ``hist_plot(histmon,options...)`` Plot the given histogram monitor ``hist_plot(options...)`` Guesses which histogram monitor to use with argument: ``histmon`` is a monitor of histogram type **Notes** Plots only the first n-1 of n bars in the histogram, because the nth bar is for the interval (-,infinity). **Options** Any of PyLab options for bar can be given, as well as: ``showplot=False`` set to ``True`` to run pylab's ``show()`` function ``newfigure=True`` set to ``False`` not to create a new figure with pylab's ``figure()`` function ``xlabel`` label for the x-axis ``ylabel`` label for the y-axis ``title`` title for the plot """ if histmon is None: (histmons, histmonnames) = magic.find_instances(HistogramMonitorBase) if len(histmons) == 0: raise TypeError, "No histogram monitors found." elif len(histmons) > 1: log_info('brian.hist_plot', "Found more than one histogram monitor, using first one.") histmon = histmons[0] # OPTIONS # Defaults myopts = {"title":"", "xlabel":"Time (ms)", "ylabel":"Count", "showplot":False, 'newfigure':True } # User options _take_options(myopts, plotoptions) # PLOTTING ROUTINE if myopts['newfigure']: pylab.figure() pylab.bar(histmon.bins[:-1] / ms, histmon.count[:-1], (histmon.bins[1:] - histmon.bins[:-1]) / ms, **plotoptions) pylab.ylabel(myopts['ylabel']) pylab.xlabel(myopts['xlabel']) pylab.title(myopts["title"]) if myopts["showplot"]: pylab.show() brian-1.3.1/brian/reset.py000066400000000000000000000401731167451777000154220ustar00rootroot00000000000000# ---------------------------------------------------------------------------------- # Copyright ENS, INRIA, CNRS # Contributors: Romain Brette (brette@di.ens.fr) and Dan Goodman (goodman@di.ens.fr) # # Brian is a computer program whose purpose is to simulate models # of biological neural networks. 
# # This software is governed by the CeCILL license under French law and # abiding by the rules of distribution of free software. You can use, # modify and/ or redistribute the software under the terms of the CeCILL # license as circulated by CEA, CNRS and INRIA at the following URL # "http://www.cecill.info". # # As a counterpart to the access to the source code and rights to copy, # modify and redistribute granted by the license, users are provided only # with a limited warranty and the software's author, the holder of the # economic rights, and the successive licensors have only limited # liability. # # In this respect, the user's attention is drawn to the risks associated # with loading, using, modifying and/or developing or reproducing the # software by the user in light of its specific status of free software, # that may mean that it is complicated to manipulate, and that also # therefore means that it is reserved for developers and experienced # professionals having in-depth computer knowledge. Users are therefore # encouraged to load and test the software's suitability as regards their # requirements in conditions enabling the security of their systems and/or # data to be ensured and, more generally, to use and operate it in the # same conditions as regards security. # # The fact that you are presently reading this means that you have had # knowledge of the CeCILL license and that you accept its terms. # ---------------------------------------------------------------------------------- # ''' Reset mechanisms ''' __all__ = ['Reset', 'VariableReset', 'Refractoriness', 'NoReset', 'FunReset', 'CustomRefractoriness', 'SimpleCustomRefractoriness', 'StringReset', 'select_reset'] from numpy import where, zeros from units import * from clock import * import inspect import re import numpy from inspection import * from utils.documentation import flattened_docstring from globalprefs import * from log import * CReset = PythonReset = None def select_reset(expr, eqs, level=0): ''' Automatically selects the appropriate Reset object from a string. Matches the following patterns if expr is a one liner: var_name = const : Reset var_name = var_name : VariableReset others : StringReset ''' # plan: # - strip it and see if it is one line, if not select StringReset # - see if it matches A = B, if not select StringReset # - check if A, B both match diffeq variable names, and if so # select VariableReset # - check that A is a variable name, if not select StringReset # - extract all the identifiers from B, and if none of them are # callable, assume it is a constant, try to eval it and then use # Reset. If not, or if eval fails, use StringReset # This misses the case of e.g. V=10*mV*exp(1) because exp will be # callable, but in general a callable means that it could be # non-constant. 
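# Illustrative outcomes (assuming 'V' and 'Vr' are differential equation variables):
#   'V = -60*mV'              -> Reset
#   'V = Vr'                  -> VariableReset
#   'V = -60*mV; Vt += 2*mV'  -> StringReset (not a single constant assignment)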
global CReset, PythonReset use_codegen = get_global_preference('usecodegen') and get_global_preference('usecodegenreset') use_weave = get_global_preference('useweave') and get_global_preference('usecodegenweave') if use_codegen: if CReset is None: from experimental.codegen.reset import CReset, PythonReset if use_weave: log_warn('brian.reset', 'Using codegen CReset') return CReset(expr, level=level + 1) else: log_warn('brian.reset', 'Using codegen PythonReset') return PythonReset(expr, level=level + 1) expr = expr.strip() if '\n' in expr: return StringReset(expr, level=level + 1) eqs.prepare() ns = namespace(expr, level=level + 1) s = re.search(r'\s*(\w+)\s*=(.+)', expr) if not s: return StringReset(expr, level=level + 1) A = s.group(1) B = s.group(2).strip() if A not in eqs._diffeq_names: return StringReset(expr, level=level + 1) if B in eqs._diffeq_names: return VariableReset(B, A) vars = get_identifiers(B) all_vars = eqs._eq_names + eqs._diffeq_names + eqs._alias.keys() + ['t'] for v in vars: if v not in ns or v in all_vars or callable(ns[v]): return StringReset(expr, level=level + 1) try: val = eval(B, ns) except: return StringReset(expr, level=level + 1) return Reset(val, A) class Reset(object): ''' Resets specified state variable to a fixed value **Initialise as:** :: R = Reset([resetvalue=0*mvolt[, state=0]]) with arguments: ``resetvalue`` The value to reset to. ``state`` The name or number of the state variable to reset. This will reset all of the neurons that have just spiked. The given state variable of the neuron group will be set to value ``resetvalue``. ''' def __init__(self, resetvalue=0 * mvolt, state=0): self.resetvalue = resetvalue self.state = state self.statevectors = {} def __call__(self, P): ''' Clamps membrane potential at reset value. ''' V = self.statevectors.get(id(P), None) if V is None: V = P.state_(self.state) self.statevectors[id(P)] = V V[P.LS.lastspikes()] = self.resetvalue def __repr__(self): return 'Reset ' + str(self.resetvalue) class StringReset(Reset): ''' Reset defined by a string Initialised with arguments: ``expr`` The string expression used to reset. This can include multiple lines or statements separated by a semicolon. For example, ``'V=-70*mV'`` or ``'V=-70*mV; Vt+=10*mV'``. Some standard functions are provided, see below. ``level`` How many levels up in the calling sequence to look for names in the namespace. Usually 0 for user code. Standard functions for expressions: ``rand()`` A uniform random number between 0 and 1. ``randn()`` A Gaussian random number with mean 0 and standard deviation 1. 
For example, these could be used to implement an adaptive model with random reset noise with the following string:: E -= 1*mV V = Vr+rand()*5*mV ''' def __init__(self, expr, level=0): expr = flattened_docstring(expr) self._namespace, unknowns = namespace(expr, level=level + 1, return_unknowns=True) self._prepared = False self._expr = expr class Replacer(object): def __init__(self, func, n): self.n = n self.func = func def __call__(self): return self.func(self.n) self._Replacer = Replacer def __call__(self, P): if not self._prepared: unknowns = [var for var in P.var_index if isinstance(var, str)] expr = self._expr for var in unknowns: expr = re.sub("\\b" + var + "\\b", var + '[_spikes_]', expr) self._code = compile(expr, "StringReset", "exec") self._vars = unknowns self._prepared = True spikes = P.LS.lastspikes() self._namespace['_spikes_'] = spikes self._namespace['rand'] = self._Replacer(numpy.random.rand, len(spikes)) self._namespace['randn'] = self._Replacer(numpy.random.randn, len(spikes)) for var in self._vars: self._namespace[var] = P.state(var) exec self._code in self._namespace def __repr__(self): return "String reset" class VariableReset(Reset): ''' Resets specified state variable to the value of another state variable Initialised with arguments: ``resetvaluestate`` The state variable which contains the value to reset to. ``state`` The name or number of the state variable to reset. This will reset all of the neurons that have just spiked. The given state variable of the neuron group will be set to the value of the state variable ``resetvaluestate``. ''' def __init__(self, resetvaluestate=1, state=0): self.resetvaluestate = resetvaluestate self.state = state self.resetstatevectors = {} self.statevectors = {} def __call__(self, P): ''' Clamps membrane potential at reset value. ''' V = self.statevectors.get(id(P), None) if V is None: V = P.state_(self.state) self.statevectors[id(P)] = V Vr = self.resetstatevectors.get(id(P), None) if Vr is None: Vr = P.state_(self.resetvaluestate) self.resetstatevectors[id(P)] = Vr lastspikes = P.LS.lastspikes() V[lastspikes] = Vr[lastspikes] def __repr__(self): return 'VariableReset(' + str(self.resetvaluestate) + ', ' + str(self.state) + ')' class FunReset(Reset): ''' A reset with a user-defined function. **Initialised as:** :: FunReset(resetfun) with argument: ``resetfun`` A function ``f(G,spikes)`` where ``G`` is the :class:`NeuronGroup` and ``spikes`` is an array of the indexes of the neurons to be reset. ''' def __init__(self, resetfun): self.resetfun = resetfun def __call__(self, P): self.resetfun(P, P.LS.lastspikes()) class Refractoriness(Reset): ''' Holds the state variable at the reset value for a fixed time after a spike. Initialised with arguments: ``resetvalue`` The value to reset and hold to. ``period`` The length of time to hold at the reset value. If using variable refractoriness, this is the maximum period. ``state`` The name or number of the state variable to reset and hold. ''' @check_units(period=second) def __init__(self, resetvalue=0 * mvolt, period=5 * msecond, state=0): self.period = period self.resetvalue = resetvalue self.state = state self._periods = {} # a dictionary mapping group IDs to periods self.statevectors = {} def __call__(self, P): ''' Clamps state variable at reset value. ''' # if we haven't computed the integer period for this group yet. 
# do so now if id(P) in self._periods: period = self._periods[id(P)] else: period = int(self.period / P.clock.dt) + 1 self._periods[id(P)] = period V = self.statevectors.get(id(P), None) if V is None: V = P.state_(self.state) self.statevectors[id(P)] = V neuronindices = P.LS[0:period] if P._variable_refractory_time: neuronindices = neuronindices[P._next_allowed_spiketime[neuronindices] > (P.clock._t - P.clock._dt * 0.25)] V[neuronindices] = self.resetvalue def __repr__(self): return 'Refractory period, ' + str(self.period) class SimpleCustomRefractoriness(Refractoriness): ''' Holds the state variable at the custom reset value for a fixed time after a spike. **Initialised as:** :: SimpleCustomRefractoriness(resetfunc[,period=5*ms[,state=0]]) with arguments: ``resetfun`` The custom reset function ``resetfun(P, spikes)`` for ``P`` a :class:`NeuronGroup` and ``spikes`` a list of neurons that fired spikes. ``period`` The length of time to hold at the reset value. ``state`` The name or number of the state variable to reset and hold, it is your responsibility to check that this corresponds to the custom reset function. The assumption is that ``resetfun(P, spikes)`` will reset the state variable ``state`` on the group ``P`` for the spikes with indices ``spikes``. The values assigned by the custom reset function are stored by this object, and they are clamped at these values for ``period``. This object does not introduce refractoriness for more than the one specified variable ``state`` or for spike indices other than those in the variable ``spikes`` passed to the custom reset function. ''' @check_units(period=second) def __init__(self, resetfun, period=5 * msecond, state=0): self.period = period self.resetfun = resetfun self.state = state self._periods = {} # a dictionary mapping group IDs to periods self.statevectors = {} self.lastresetvalues = {} def __call__(self, P): ''' Clamps state variable at reset value. ''' # if we haven't computed the integer period for this group yet. # do so now if id(P) in self._periods: period = self._periods[id(P)] else: period = int(self.period / P.clock.dt) + 1 self._periods[id(P)] = period V = self.statevectors.get(id(P), None) if V is None: V = P.state_(self.state) self.statevectors[id(P)] = V LRV = self.lastresetvalues.get(id(P), None) if LRV is None: LRV = zeros(len(V)) self.lastresetvalues[id(P)] = LRV lastspikes = P.LS.lastspikes() self.resetfun(P, lastspikes) # call custom reset function LRV[lastspikes] = V[lastspikes] # store a copy of the custom resetted values clampedindices = P.LS[0:period] V[clampedindices] = LRV[clampedindices] # clamp at custom resetted values def __repr__(self): return 'Custom refractory period, ' + str(self.period) class CustomRefractoriness(Refractoriness): ''' Holds the state variable at the custom reset value for a fixed time after a spike. **Initialised as:** :: CustomRefractoriness(resetfunc[,period=5*ms[,refracfunc=resetfunc]]) with arguments: ``resetfunc`` The custom reset function ``resetfunc(P, spikes)`` for ``P`` a :class:`NeuronGroup` and ``spikes`` a list of neurons that fired spikes. ``refracfunc`` The custom refractoriness function ``refracfunc(P, indices)`` for ``P`` a :class:`NeuronGroup` and ``indices`` a list of neurons that are in their refractory periods. In some cases, you can choose not to specify this, and it will use the reset function. ``period`` The length of time to hold at the reset value. 
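A usage sketch (the function bodies and values are illustrative only)::

    def myreset(P, spikes):
        P.V[spikes] = -60*mV

    def myclamp(P, indices):
        P.V[indices] = -60*mV

    refrac = CustomRefractoriness(myreset, period=4*ms, refracfunc=myclamp)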
''' @check_units(period=second) def __init__(self, resetfun, period=5 * msecond, refracfunc=None): self.period = period self.resetfun = resetfun if refracfunc is None: refracfunc = resetfun self.refracfunc = refracfunc self._periods = {} # a dictionary mapping group IDs to periods def __call__(self, P): ''' Clamps state variable at reset value. ''' # if we haven't computed the integer period for this group yet. # do so now if id(P) in self._periods: period = self._periods[id(P)] else: period = int(self.period / P.clock.dt) + 1 self._periods[id(P)] = period lastspikes = P.LS.lastspikes() self.resetfun(P, lastspikes) # call custom reset function clampedindices = P.LS[0:period] self.refracfunc(P, clampedindices) def __repr__(self): return 'Custom refractory period, ' + str(self.period) class NoReset(Reset): ''' Absence of reset mechanism. **Initialised as:** :: NoReset() ''' def __init__(self): pass def __call__(self, P): pass def __repr__(self): return 'No reset' brian-1.3.1/brian/stateupdater.py000066400000000000000000000652131167451777000170070ustar00rootroot00000000000000# ---------------------------------------------------------------------------------- # Copyright ENS, INRIA, CNRS # Contributors: Romain Brette (brette@di.ens.fr) and Dan Goodman (goodman@di.ens.fr) # # Brian is a computer program whose purpose is to simulate models # of biological neural networks. # # This software is governed by the CeCILL license under French law and # abiding by the rules of distribution of free software. You can use, # modify and/ or redistribute the software under the terms of the CeCILL # license as circulated by CEA, CNRS and INRIA at the following URL # "http://www.cecill.info". # # As a counterpart to the access to the source code and rights to copy, # modify and redistribute granted by the license, users are provided only # with a limited warranty and the software's author, the holder of the # economic rights, and the successive licensors have only limited # liability. # # In this respect, the user's attention is drawn to the risks associated # with loading, using, modifying and/or developing or reproducing the # software by the user in light of its specific status of free software, # that may mean that it is complicated to manipulate, and that also # therefore means that it is reserved for developers and experienced # professionals having in-depth computer knowledge. Users are therefore # encouraged to load and test the software's suitability as regards their # requirements in conditions enabling the security of their systems and/or # data to be ensured and, more generally, to use and operate it in the # same conditions as regards security. # # The fact that you are presently reading this means that you have had # knowledge of the CeCILL license and that you accept its terms. 
# ---------------------------------------------------------------------------------- # ''' Neuron StateUpdaters ''' __all__ = ['StateUpdater', 'LinearStateUpdater', 'NonlinearStateUpdater', 'SynapticNoise', 'LazyStateUpdater', 'magic_state_updater', 'FunStateUpdater', 'get_linear_equations'] #from scipy.weave import blitz from numpy import * from scipy import linalg from scipy.linalg import LinAlgError from scipy import weave from scipy.optimize import fsolve import copy from operator import isSequenceType from inspection import * from units import second, mvolt from clock import guess_clock import magic from equations import * from itertools import count from units import Quantity import warnings from log import * from globalprefs import * from experimental.codegen import * CStateUpdater = PythonStateUpdater = None def magic_state_updater(model, clock=None, order=1, implicit=False, compile=False, freeze=False, \ method=None, check_units=True): ''' Examines the set of differential equations in 'model' (Equations object) and returns a StateUpdater object and the list of dynamic variables. For example, the magic_state_updater function can determine if it is linear or nonlinear. Available methods: * None: the method is automatically selected * linear * Euler * RK (Runge-Kutta, second order) * exponential_Euler * nonlinear: automatic selection, but not linear ''' global CStateUpdater, PythonStateUpdater if method == 'exponential_Euler': implicit = True order = 1 elif method == 'Euler': implicit = False order = 1 elif method == 'RK': implicit = False order = 2 elif method == 'linear' or method is None: pass else: raise AttributeError, "Unknown integration method!" # All the first below should go in Equations if not(isinstance(model, Equations)): # a set of equations? raise TypeError, "An Equations object must be passed." model.prepare(check_units=check_units) # check units and other things dynamicvars = model._diffeq_names # Dynamic variables # Identify stochastic equations noiselist = [] for statevar in model._diffeq_names: f = model._function[statevar] x0 = [model._units[var] for var in f.func_code.co_varnames] # init variables if depends_on(f, 'xi', x0): noiselist.append((statevar, get_global_term(f, 'xi', x0))) # s.d. 
of noise f.func_globals['xi'] = 0 * second ** -.5 # better: remove in string use_codegen = get_global_preference('usecodegen') and get_global_preference('usecodegenstateupdate') use_weave = get_global_preference('useweave') and get_global_preference('usecodegenweave') if CStateUpdater is None: from experimental.codegen.stateupdaters import CStateUpdater, PythonStateUpdater # Linearity test # insert this in equations allow_linear = (method is None) or (method == 'linear') if allow_linear and model.is_linear(): log_info('brian.stateupdater', "Linear model: using exact updates") stateupdaterobj = LinearStateUpdater(model, clock=clock) else: # Nonlinear model - check order of the method if implicit: # implicit integration schemes if model.is_conditionally_linear(): log_info('brian.stateupdater', "Using exponential Euler") if not use_codegen: stateupdaterobj = ExponentialEulerStateUpdater(model, clock=clock, compile=compile, freeze=freeze) elif use_weave: stateupdaterobj = CStateUpdater(model, exp_euler_scheme, clock=clock, freeze=freeze) log_warn('brian.stateupdater', 'Using codegen CStateUpdater') else: stateupdaterobj = PythonStateUpdater(model, exp_euler_scheme, clock=clock, freeze=freeze) log_warn('brian.stateupdater', 'Using codegen PythonStateUpdater') else: raise TypeError, "General implicit methods are not implemented yet." else: # explicit method if order == 1: if not use_codegen: stateupdaterobj = NonlinearStateUpdater(model, clock=clock, compile=compile, freeze=freeze) elif use_weave: stateupdaterobj = CStateUpdater(model, euler_scheme, clock=clock, freeze=freeze) log_warn('brian.stateupdater', 'Using codegen CStateUpdater') else: stateupdaterobj = PythonStateUpdater(model, euler_scheme, clock=clock, freeze=freeze) log_warn('brian.stateupdater', 'Using codegen PythonStateUpdater') elif order == 2: if not use_codegen: stateupdaterobj = RK2StateUpdater(model, clock=clock, compile=compile, freeze=freeze) elif use_weave: stateupdaterobj = CStateUpdater(model, rk2_scheme, clock=clock, freeze=freeze) log_warn('brian.stateupdater', 'Using codegen CStateUpdater') else: stateupdaterobj = PythonStateUpdater(model, rk2_scheme, clock=clock, freeze=freeze) log_warn('brian.stateupdater', 'Using codegen PythonStateUpdater') else: raise TypeError, "Methods with order greater than 2 are not implemented yet." # Insert noise for var, sigma in noiselist: # TODO: noise with mu = 0 i = dynamicvars.index(var) stateupdaterobj = SynapticNoise(stateupdaterobj, i, 0 * model._units[var] / second, sigma, clock) return stateupdaterobj, dynamicvars # TODO: StateUpdater should be lazy by default class StateUpdater(object): ''' A callable state update mechanism. By default, a leaky integrate-and-fire model with zero resting potential and unit time constant. Warning: to update the state matrix, use the slice operation, e.g. S[:]=0 (not S=0) otherwise operations are not done in place (a new object is created), so that all views are compromised (the reference to the data changes). ''' def __init__(self, clock=None): ''' Default model: dv/dt=-v ''' if clock == None: self.update_factor = exp(-clock.dt) # The update matrix else: raise TypeError, "A time reference must be passed." def rest(self, P): ''' Sets the variables at rest. P is the neuron group. ''' warnings.warn('Rest is not implemented for this model') def __call__(self, P): ''' Updates the state variables. Careful here: always use the slice operation for affectations. P is the neuron group. 
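# Illustrative aside, not part of the original source, for the warning above
# about updating the state matrix in place: S[:] = 0 writes into the existing
# buffer so views stay valid, whereas S = 0 rebinds the name and breaks them.
from numpy import ones
S = ones((3, 4))
view = S[0, :]        # a view into S
S[:] = 0              # in-place assignment: the view now reads zeros
print view            # -> [ 0.  0.  0.  0.]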
''' P._S[:] *= self.update_factor def __repr__(self): return 'Leaky integrate-and-fire StateUpdater' def __len__(self): ''' Number of state variables ''' return 1 #def get_linear_equations(eqs): # ''' # Returns the matrices M and B for the linear model dX/dt = M(X-B), # where eqs is an Equations object. # ''' # # Otherwise assumes it is given in functional form # n=len(eqs._diffeq_names) # number of state variables # dynamicvars=eqs._diffeq_names # # Calculate B # AB=zeros((n,1)) # d=dict.fromkeys(dynamicvars) # for j in range(n): # d[dynamicvars[j]]=0.*eqs._units[dynamicvars[j]] # for var,i in zip(dynamicvars,count()): # AB[i]=-eqs.apply(var,d) # # Calculate A # M=zeros((n,n)) # for i in range(n): # for j in range(n): # d[dynamicvars[j]]=0.*eqs._units[dynamicvars[j]] # if isinstance(eqs._units[dynamicvars[i]],Quantity): # d[dynamicvars[i]]=Quantity.with_dimensions(1.,eqs._units[dynamicvars[i]].get_dimensions()) # else: # d[dynamicvars[i]]=1. # for var,j in zip(dynamicvars,count()): # M[j,i]=eqs.apply(var,d)+AB[j] # M-=eye(n)*1e-10 # quick dirty fix for problem of constant derivatives; dimension = Hz # B=linalg.lstsq(M,AB)[0] # We use this instead of solve in case M is degenerate # return M,B def get_linear_equations(eqs): ''' Returns the matrices M and B for the linear model dX/dt = M(X-B), where eqs is an Equations object. ''' # Otherwise assumes it is given in functional form n = len(eqs._diffeq_names) # number of state variables dynamicvars = eqs._diffeq_names # Calculate B AB = zeros((n, 1)) d = dict.fromkeys(dynamicvars) for j in range(n): d[dynamicvars[j]] = 0. * eqs._units[dynamicvars[j]] for var, i in zip(dynamicvars, count()): AB[i] = -eqs.apply(var, d) # Calculate A M = zeros((n, n)) for i in range(n): for j in range(n): d[dynamicvars[j]] = 0. * eqs._units[dynamicvars[j]] if isinstance(eqs._units[dynamicvars[i]], Quantity): d[dynamicvars[i]] = Quantity.with_dimensions(1., eqs._units[dynamicvars[i]].get_dimensions()) else: d[dynamicvars[i]] = 1. for var, j in zip(dynamicvars, count()): M[j, i] = eqs.apply(var, d) + AB[j] #M-=eye(n)*1e-10 # quick dirty fix for problem of constant derivatives; dimension = Hz #B=linalg.lstsq(M,AB)[0] # We use this instead of solve in case M is degenerate B = linalg.solve(M, AB) # We use this instead of solve in case M is degenerate return M, B def get_linear_equations_solution_numerically(eqs, dt): # Otherwise assumes it is given in functional form n = len(eqs._diffeq_names) # number of state variables dynamicvars = eqs._diffeq_names # Calculate B AB = zeros((n, 1)) d = dict.fromkeys(dynamicvars) for j in range(n): d[dynamicvars[j]] = 0. * eqs._units[dynamicvars[j]] for var, i in zip(dynamicvars, count()): AB[i] = -eqs.apply(var, d) # Calculate A M = zeros((n, n)) for i in range(n): for j in range(n): d[dynamicvars[j]] = 0. * eqs._units[dynamicvars[j]] if isinstance(eqs._units[dynamicvars[i]], Quantity): d[dynamicvars[i]] = Quantity.with_dimensions(1., eqs._units[dynamicvars[i]].get_dimensions()) else: d[dynamicvars[i]] = 1. for var, j in zip(dynamicvars, count()): M[j, i] = eqs.apply(var, d) + AB[j] #B=linalg.solve(M,AB) numeulersteps = 100 deltat = dt / numeulersteps E = eye(n) + deltat * M C = eye(n) D = zeros((n, 1)) for step in xrange(numeulersteps): C, D = dot(E, C), dot(E, D) - AB * deltat return C, D #return M,B set_global_preferences(useweave_linear_diffeq=False) define_global_preference('useweave_linear_diffeq', 'False', desc=""" Whether to use weave C++ acceleration for the solution of linear differential equations. 
Note that on some platforms, typically older ones, this is faster and on some platforms, typically new ones, this is actually slower. """) class LinearStateUpdater(StateUpdater): ''' A linear model with dynamics dX/dt = M(X-B) or dX/dt = MX. **Initialised as:** :: LinearStateUpdater(M[,B[,clock]]) with arguments: ``M`` Matrix defining the differential equation. ``B`` Optional linear term in the differential equation. ``clock`` Optional clock. Computes an update matrix A=exp(M dt) for the linear system, and performs the update step. TODO: more mathematical details? ''' #TODO: sparse linear models (e.g. cable equations) def __init__(self, M, B=None, clock=None): ''' Initialize a linear model with dynamics dX/dt = M(X-B) or dX/dt = MX, where B is a column vector. TODO: more checks TODO: rest ''' self._useaccel = get_global_preference('useweave_linear_diffeq') self._cpp_compiler = get_global_preference('weavecompiler') self._extra_compile_args = ['-O3'] if self._cpp_compiler == 'gcc': self._extra_compile_args += get_global_preference('gcc_options') # ['-march=native', '-ffast-math'] self._useB = False if clock == None: clock = guess_clock() if isinstance(M, ndarray): self.A = linalg.expm(M * clock.dt) self.B = B elif isinstance(M, Equations): try: M, self.B = get_linear_equations(M) self.A = linalg.expm(M * clock.dt) #self.A=array(self.A,single) if self.B is not None: self._C = -dot(self.A, self.B) + self.B #self._C=array(self._C,single) self._useB = True else: self._useB = False except LinAlgError: log_info('brian.stateupdater', 'Solving linear equations numerically') self.A, self._C = get_linear_equations_solution_numerically(M, clock.dt) self.B = NotImplemented # raises error on trying to use this self._useB = True # note the numpy dot command works faster if self.A has C ordering compared # to fortran ordering (although maybe this depends on which implementation # of BLAS you're using). The difference is only significant in small # calculations because making a copy of self.A is usually not serious, its # size is only the number of variables, not the number of neurons. self.A = array(self.A, order='C') if self._useB: self._C = array(self._C, order='C') def rest(self, P): if self._useB: if self.B is NotImplemented: raise NotImplementedError, \ "The resting potential cannot be found because the equations are degenerate " + \ "(most likely because they include a parameter)" P._S[:] = self.B else: P._S[:] = 0 def __call__(self, P): ''' Updates the state variables. Careful here: always use the slice operation for affectations. P is the neuron group. 
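# Minimal sketch, added for illustration with invented values, of the exact
# update described above for dX/dt = M(X - B): compute A = expm(M*dt) once,
# then advance the state with X <- A.(X - B) + B at each time step.
from numpy import array, dot
from scipy.linalg import expm
dt = 0.0001                  # 0.1 ms time step
M = array([[-100.0]])        # 1/s, i.e. a 10 ms time constant
B = array([[0.0]])           # resting value
A = expm(M * dt)             # update matrix, computed once
X = array([[1.0]])           # initial state
X = dot(A, X - B) + B        # one exact integration step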
''' if self._useB: # This could be removed if not self._useaccel: #P._S[:]=dot(self.A,P._S)+self._C P._S[:] = dot(self.A, P._S) #P._S = dot(self.A,P._S) #P._S += self._C add(P._S, self._C, P._S) #P._S[:]=dot(self.A,P._S-self.B)+self.B else: m = len(self) S = P._S n = S.shape[1] #n = len(P) A = self.A c = self._C code = ''' double x[m]; for(int i=0;i0: # additional_deps.append('__pre_deps='+'+'.join(vars_pre)) #if len(vars_post)>0: # additional_deps.append('__post_deps='+'+'.join(vars_post)) additional_deps = ['__pre_deps='+'+'.join(vars_pre), '__post_deps='+'+'.join(vars_post)] separated_equations = separate_equations(eqs_obj, additional_deps) if not len(separated_equations) == 2: raise ValueError('Equations should separate into pre and postsynaptic variables.') sep_pre, sep_post = separated_equations for v in vars_pre: if v in sep_post._diffeq_names: sep_pre, sep_post = sep_post, sep_pre break index_pre = [i for i in range(len(vars)) if vars[i] in vars_pre or vars[i] in sep_pre._diffeq_names] index_post = [i for i in range(len(vars)) if vars[i] in vars_post or vars[i] in sep_post._diffeq_names] vars_pre = array(vars)[index_pre] vars_post = array(vars)[index_post] # Check pre/post consistency shared_vars = set(vars_pre).intersection(vars_post) if shared_vars != set([]): raise Exception, str(list(shared_vars)) + " are both presynaptic and postsynaptic!" # Substitute equations/aliases into pre/post code def substitute_eqs(code): for name in sep_pre._eq_names[-1::-1]+sep_post._eq_names[-1::-1]: # reverse order, as in equations.py if name in sep_pre._eq_names: expr = sep_pre._string[name] else: expr = sep_post._string[name] code = re.sub("\\b" + name + "\\b", '(' + expr + ')', code) return code pre = substitute_eqs(pre) post = substitute_eqs(post) # Create namespaces for pre and post codes pre_namespace = namespace(pre, level=level + 1) post_namespace = namespace(post, level=level + 1) pre_namespace['clip'] = clip post_namespace['clip'] = clip pre_namespace['Inf'] = Inf post_namespace['Inf'] = Inf pre_namespace['enumerate'] = enumerate post_namespace['enumerate'] = enumerate # freeze pre and post (otherwise units will cause problems) all_vars = list(vars_pre) + list(vars_post) + ['w'] pre = '\n'.join(freeze(line.strip(), all_vars, pre_namespace) for line in pre.split('\n')) post = '\n'.join(freeze(line.strip(), all_vars, post_namespace) for line in post.split('\n')) # Neuron groups G_pre = NeuronGroup(len(C.source), model=sep_pre, clock=self.clock) G_post = NeuronGroup(len(C.target), model=sep_post, clock=self.clock) G_pre._S[:] = 0 G_post._S[:] = 0 self.pre_group = G_pre self.post_group = G_post var_group = {} # maps variable name to group for v in vars_pre: var_group[v] = G_pre for v in vars_post: var_group[v] = G_post self.var_group = var_group # Create updaters and monitors if isinstance(C, DelayConnection): G_pre_monitors = {} # these get values put in them later G_post_monitors = {} max_delay = C._max_delay * C.target.clock.dt def gencode(incode, vars, other_vars, wreplacement): num_immediate = num_delayed = 0 reordering_warning = False incode_lines = [line.strip() for line in incode.split('\n')] outcode_immediate = 'for _i in spikes:\n' # delayed variables outcode_delayed = 'for _j, _i in enumerate(spikes):\n' for var in other_vars: outcode_delayed += ' ' + var + '__delayed = ' + var + '__delayed_values_seq[_j]\n' for line in incode_lines: if not line.strip(): continue m = re.search(r'\bw\b\s*[^><=]?=', line) # lines of the form w = ..., w *= ..., etc. 
for var in vars: line = re.sub(r'\b' + var + r'\b', var + '[_i]', line) for var in other_vars: line = re.sub(r'\b' + var + r'\b', var + '__delayed', line) if m: num_delayed += 1 outcode_delayed += ' ' + line + '\n' else: if num_delayed!=0 and not reordering_warning: log_warn('brian.stdp', 'STDP operations are being re-ordered for delay connection, results may be wrong.') reordering_warning = True num_immediate += 1 outcode_immediate += ' ' + line + '\n' outcode_delayed = re.sub(r'\bw\b', wreplacement, outcode_delayed) outcode_delayed += '\n %(w)s = clip(%(w)s, %(min)e, %(max)e)' % {'min':wmin, 'max':wmax, 'w':wreplacement} return (outcode_immediate, outcode_delayed) pre_immediate, pre_delayed = gencode(pre, vars_pre, vars_post, 'w[_i,:]') post_immediate, post_delayed = gencode(post, vars_post, vars_pre, 'w[:,_i]') log_debug('brian.stdp', 'PRE CODE IMMEDIATE:\n'+pre_immediate) log_debug('brian.stdp', 'PRE CODE DELAYED:\n'+pre_delayed) log_debug('brian.stdp', 'POST CODE:\n'+post_immediate+post_delayed) pre_delay_expr = 'max_delay-d' post_delay_expr = 'd' pre_code_immediate = compile(pre_immediate, "Presynaptic code immediate", "exec") pre_code_delayed = compile(pre_delayed, "Presynaptic code delayed", "exec") post_code = compile(post_immediate + post_delayed, "Postsynaptic code", "exec") if delay_pre is not None or delay_post is not None: raise ValueError("Must use delay_pre=delay_post=None for the moment.") max_delay = C._max_delay * C.target.clock.dt # Ensure that the source and target neuron spikes are kept for at least the # DelayConnection's maximum delay C.source.set_max_delay(max_delay) C.target.set_max_delay(max_delay) # create forward and backward Connection objects or SpikeMonitor objects pre_updater_immediate = STDPUpdater(C.source, C, vars=vars_pre, code=pre_code_immediate, namespace=pre_namespace, delay=0 * ms) pre_updater_delayed = DelayedSTDPUpdater(C, reverse=False, delay_expr=pre_delay_expr, max_delay=max_delay, vars=vars_pre, other_vars=vars_post, varmon=G_pre_monitors, othervarmon=G_post_monitors, code=pre_code_delayed, namespace=pre_namespace, delay=max_delay) post_updater = DelayedSTDPUpdater(C, reverse=True, delay_expr=post_delay_expr, max_delay=max_delay, vars=vars_post, other_vars=vars_pre, varmon=G_post_monitors, othervarmon=G_pre_monitors, code=post_code, namespace=post_namespace, delay=0 * ms) updaters = [pre_updater_immediate, pre_updater_delayed, post_updater] self.contained_objects += updaters vars_pre_ind = dict((var, i) for i, var in enumerate(vars_pre)) vars_post_ind = dict((var, i) for i, var in enumerate(vars_post)) self.G_pre_monitors = G_pre_monitors self.G_post_monitors = G_post_monitors self.G_pre_monitors.update(((var, RecentStateMonitor(G_pre, vars_pre_ind[var], duration=(C._max_delay + 1) * C.target.clock.dt, clock=G_pre.clock)) for var in vars_pre)) self.G_post_monitors.update(((var, RecentStateMonitor(G_post, vars_post_ind[var], duration=(C._max_delay + 1) * C.target.clock.dt, clock=G_post.clock)) for var in vars_post)) self.contained_objects += self.G_pre_monitors.values() self.contained_objects += self.G_post_monitors.values() else: # Indent and loop pre = re.compile('^', re.M).sub(' ', pre) post = re.compile('^', re.M).sub(' ', post) pre = 'for _i in spikes:\n' + pre post = 'for _i in spikes:\n' + post # Pre code for var in vars_pre: # presynaptic variables (vectorisation) pre = re.sub(r'\b' + var + r'\b', var + '[_i]', pre) pre = re.sub(r'\bw\b', 'w[_i,:]', pre) # synaptic weight # Post code for var in vars_post: # postsynaptic variables 
(vectorisation) post = re.sub(r'\b' + var + r'\b', var + '[_i]', post) post = re.sub(r'\bw\b', 'w[:,_i]', post) # synaptic weight # Bounds: add one line to pre/post code (clip(w,min,max,w)) # or actual code? (rather than compiled string) if wmax==Inf: pre += '\n w[_i,:]=clip(w[_i,:],%(min)e,Inf)' % {'min':wmin} post += '\n w[:,_i]=clip(w[:,_i],%(min)e,Inf)' % {'min':wmin} else: pre += '\n w[_i,:]=clip(w[_i,:],%(min)e,%(max)e)' % {'min':wmin, 'max':wmax} post += '\n w[:,_i]=clip(w[:,_i],%(min)e,%(max)e)' % {'min':wmin, 'max':wmax} log_debug('brian.stdp', 'PRE CODE:\n'+pre) log_debug('brian.stdp', 'POST CODE:\n'+post) # Compile code pre_code = compile(pre, "Presynaptic code", "exec") post_code = compile(post, "Postsynaptic code", "exec") connection_delay = C.delay * C.source.clock.dt if (delay_pre is None) and (delay_post is None): # same delays as the Connnection C delay_pre = connection_delay delay_post = 0 * ms elif delay_pre is None: delay_pre = connection_delay - delay_post if delay_pre < 0 * ms: raise AttributeError, "Presynaptic delay is too large" elif delay_post is None: delay_post = connection_delay - delay_pre if delay_post < 0 * ms: raise AttributeError, "Postsynaptic delay is too large" # create forward and backward Connection objects or SpikeMonitor objects pre_updater = STDPUpdater(C.source, C, vars=vars_pre, code=pre_code, namespace=pre_namespace, delay=delay_pre) post_updater = STDPUpdater(C.target, C, vars=vars_post, code=post_code, namespace=post_namespace, delay=delay_post) updaters = [pre_updater, post_updater] self.contained_objects += [pre_updater, post_updater] # Put variables in namespaces for i, var in enumerate(vars_pre): for updater in updaters: updater._namespace[var] = G_pre._S[i] for i, var in enumerate(vars_post): for updater in updaters: updater._namespace[var] = G_post._S[i] self.contained_objects += [G_pre, G_post] def __call__(self): pass def __getattr__(self, name): if name == 'var_group': # this seems mad - the reason is that getattr is only called if the thing hasn't # been found using the standard methods of finding attributes, which for var_index # should have worked, this is important because the next line looks for var_index # and if we haven't got a var_index we don't want to get stuck in an infinite # loop raise AttributeError if not hasattr(self, 'var_group'): # only provide lookup of variable names if we have some variable names, i.e. # if the var_index attribute exists raise AttributeError G = self.var_group[name] return G.state_(name) def __setattr__(self, name, val): if not hasattr(self, 'var_group') or name not in self.var_group: object.__setattr__(self, name, val) else: G = self.var_group[name] G.state_(name)[:] = val class ExponentialSTDP(STDP): ''' Exponential STDP. 
Initialised with the following arguments: ``taup``, ``taum``, ``Ap``, ``Am`` Synaptic weight change (relative to the maximum weight wmax):: f(s) = Ap*exp(-s/taup) if s >0 f(s) = Am*exp(s/taum) if s <0 ``interactions`` * 'all': contributions from all pre-post pairs are added * 'nearest': only nearest-neighbour pairs are considered * 'nearest_pre': nearest presynaptic spike, all postsynaptic spikes * 'nearest_post': nearest postsynaptic spike, all presynaptic spikes ``wmin=0`` minimum synaptic weight ``wmax`` maximum synaptic weight ``update`` * 'additive': modifications are additive (independent of synaptic weight) (or "hard bounds") * 'multiplicative': modifications are multiplicative (proportional to w) (or "soft bounds") * 'mixed': depression is multiplicative, potentiation is additive See documentation for :class:`STDP` for more details. ''' def __init__(self, C, taup, taum, Ap, Am, interactions='all', wmin=0, wmax=None, update='additive', delay_pre=None, delay_post=None, clock=None): if wmax is None: raise AttributeError, "You must specify the maximum synaptic weight" wmax = float(wmax) # removes units eqs = Equations(''' dA_pre/dt=-A_pre/taup : 1 dA_post/dt=-A_post/taum : 1''', taup=taup, taum=taum, wmax=wmax) if interactions == 'all': pre = 'A_pre+=Ap' post = 'A_post+=Am' elif interactions == 'nearest': pre = 'A_pre=Ap' post = 'A_post=Am' elif interactions == 'nearest_pre': pre = 'A_pre=Ap' post = 'A_post=+Am' elif interactions == 'nearest_post': pre = 'A_pre+=Ap' post = 'A_post=Am' else: raise AttributeError, "Unknown interaction type " + interactions if update == 'additive': Ap *= wmax Am *= wmax pre += '\nw+=A_post' post += '\nw+=A_pre' elif update == 'multiplicative': if Am < 0: pre += '\nw*=(1+A_post)' else: pre += '\nw+=(wmax-w)*A_post' if Ap < 0: post += '\nw*=(1+A_pre)' else: post += '\nw+=(wmax-w)*A_pre' elif update == 'mixed': if Am < 0 and Ap > 0: Ap *= wmax pre += '\nw*=(1+A_post)' post += '\nw+=A_pre' elif Am > 0 and Ap < 0: Am *= wmax post += '\nw*=(1+A_pre)' pre += '\nw+=A_post' else: if Am > 0: raise AttributeError, "There is no depression in STDP rule" else: raise AttributeError, "There is no potentiation in STDP rule" else: raise AttributeError, "Unknown update type " + update STDP.__init__(self, C, eqs=eqs, pre=pre, post=post, wmin=wmin, wmax=wmax, delay_pre=delay_pre, delay_post=delay_post, clock=clock) if __name__ == '__main__': pass brian-1.3.1/brian/stdunits.py000066400000000000000000000055771167451777000161660ustar00rootroot00000000000000# ---------------------------------------------------------------------------------- # Copyright ENS, INRIA, CNRS # Contributors: Romain Brette (brette@di.ens.fr) and Dan Goodman (goodman@di.ens.fr) # # Brian is a computer program whose purpose is to simulate models # of biological neural networks. # # This software is governed by the CeCILL license under French law and # abiding by the rules of distribution of free software. You can use, # modify and/ or redistribute the software under the terms of the CeCILL # license as circulated by CEA, CNRS and INRIA at the following URL # "http://www.cecill.info". # # As a counterpart to the access to the source code and rights to copy, # modify and redistribute granted by the license, users are provided only # with a limited warranty and the software's author, the holder of the # economic rights, and the successive licensors have only limited # liability. 
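# Illustrative usage sketch, added here and not part of the original stdp.py:
# attaching the ExponentialSTDP rule defined above to a Connection, using the
# constructor signature shown above. All numeric values are invented.
from brian import NeuronGroup, Connection, ExponentialSTDP, ms, mV
G = NeuronGroup(10, model='dv/dt = -v/(10*ms) : volt',
                threshold=10 * mV, reset=0 * mV)
C = Connection(G, G, 'v', weight=0.5 * mV)
stdp = ExponentialSTDP(C, taup=20 * ms, taum=20 * ms, Ap=0.01, Am=-0.012,
                       wmax=1 * mV, interactions='all', update='additive')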
# # In this respect, the user's attention is drawn to the risks associated # with loading, using, modifying and/or developing or reproducing the # software by the user in light of its specific status of free software, # that may mean that it is complicated to manipulate, and that also # therefore means that it is reserved for developers and experienced # professionals having in-depth computer knowledge. Users are therefore # encouraged to load and test the software's suitability as regards their # requirements in conditions enabling the security of their systems and/or # data to be ensured and, more generally, to use and operate it in the # same conditions as regards security. # # The fact that you are presently reading this means that you have had # knowledge of the CeCILL license and that you accept its terms. # ---------------------------------------------------------------------------------- # ######### PHYSICAL UNIT NAMES ##################### #------------------------------------ Dan Goodman - # These are optional shorthand unit names which in # most circumstances shouldn't clash with local names """Optional short unit names This module defines the following short unit names: mV, mA, uA (micro_amp), nA, pA, mF, uF, nF, mS, uS, ms, Hz, kHz, MHz, cm, cm2, cm3, mm, mm2, mm3, um, um2, um3 """ import units as _units mV = _units.mvolt mA = _units.mamp uA = _units.uamp nA = _units.namp pA = _units.pamp pF = _units.pfarad uF = _units.ufarad nF = _units.nfarad nS = _units.nsiemens uS = _units.usiemens ms = _units.msecond Hz = _units.hertz kHz = _units.khertz MHz = _units.Mhertz cm = _units.cmetre cm2 = _units.cmetre2 cm3 = _units.cmetre3 mm = _units.mmetre mm2 = _units.mmetre2 mm3 = _units.mmetre3 um = _units.umetre um2 = _units.umetre2 um3 = _units.umetre3 _units.all_units.extend([mV, mA, uA, nA, pA, pF, uF, nF, nS, uS, ms, Hz, kHz, MHz, cm, cm2, cm3, mm, mm2, mm3, um, um2, um3]) brian-1.3.1/brian/stp.py000066400000000000000000000130741167451777000151060ustar00rootroot00000000000000''' Short-term synaptic plasticity. Implements the short-term plasticity model described in: Markram et al (1998). Differential signaling via the same axon of neocortical pyramidal neurons, PNAS. Synaptic dynamics is described by two variables x and u, which follows the following differential equations:: dx/dt=(1-x)/taud (depression) du/dt=(U-u)/tauf (facilitation) where taud, tauf are time constants and U is a parameter in 0..1. Each presynaptic spike triggers modifications of the variables:: x<-x*(1-u) u<-u+U*(1-u) Synaptic weights are modulated by the product u*x (in 0..1) (before update). ''' # See BEP-1 from network import NetworkOperation from neurongroup import NeuronGroup from monitor import SpikeMonitor from scipy import zeros, exp, isscalar from connections import DelayConnection __all__ = ['STP'] class STPGroup(NeuronGroup): ''' Neuron group forwarding spikes with short term plasticity modulation. ''' def __init__(self, N, clock=None): eqs = ''' ux : 1 x : 1 u : 1 ''' NeuronGroup.__init__(self, N, model=eqs, clock=clock) def update(self): pass class STPUpdater(SpikeMonitor): ''' Event-driven updates of STP variables. ''' def __init__(self, source, P, taud, tauf, U, delay=0): SpikeMonitor.__init__(self, source, record=False, delay=delay) # P is the group with the STP variables N = len(P) self.P = P self.minvtaud = -1. / taud self.minvtauf = -1. 
/ tauf self.U = U self.ux = P.ux self.x = P.x self.u = P.u self.lastt = zeros(N) # last update self.clock = P.clock def propagate(self, spikes): interval = self.clock.t - self.lastt[spikes] self.u[spikes] = self.U + (self.u[spikes] - self.U) * exp(interval * self.minvtauf) tmp = 1 - self.u[spikes] self.x[spikes] = 1 + (self.x[spikes] - 1) * exp(interval * self.minvtaud) self.ux[spikes] = self.u[spikes] * self.x[spikes] self.x[spikes] *= tmp self.u[spikes] += self.U * tmp self.lastt[spikes] = self.clock.t self.P.LS.push(spikes) class STPUpdater2(STPUpdater): ''' STP Updater where U, taud and tauf are vectors ''' def propagate(self, spikes): interval = self.clock.t - self.lastt[spikes] self.u[spikes] = self.U[spikes] + (self.u[spikes] - self.U[spikes]) * exp(interval * self.minvtauf[spikes]) tmp = 1 - self.u[spikes] self.x[spikes] = 1 + (self.x[spikes] - 1) * exp(interval * self.minvtaud[spikes]) self.ux[spikes] = self.u[spikes] * self.x[spikes] self.x[spikes] *= tmp self.u[spikes] += self.U[spikes] * tmp self.lastt[spikes] = self.clock.t self.P.LS.push(spikes) class SynapticDepressionUpdater(SpikeMonitor): ''' Event-driven updates of STP variables. Special case: tauf=0*ms (synaptic depression). dx/dt=(1-x)/taud (depression) x<-x*(1-U) NOT FINISHED ''' def __init__(self, source, P, taud, tauf, U, delay=0): SpikeMonitor.__init__(self, source, record=False, delay=delay) # P is the group with the STP variables N = len(P) self.P = P self.minvtaud = -1. / taud self.U = U self.ux = P.ux self.x = P.x self.lastt = zeros(N) # last update self.clock = P.clock def propagate(self, spikes): interval = self.clock.t - self.lastt[spikes] self.x[spikes] = 1 + (self.x[spikes] - 1) * exp(interval * self.minvtaud) self.ux[spikes] = self.U * self.x[spikes] self.x[spikes] *= 1 - self.U self.lastt[spikes] = self.clock.t self.P.LS.push(spikes) class STP(NetworkOperation): ''' Short-term synaptic plasticity, following the Tsodyks-Markram model. Implements the short-term plasticity model described in Markram et al (1998). Differential signaling via the same axon of neocortical pyramidal neurons, PNAS. Synaptic dynamics is described by two variables x and u, which follow the following differential equations:: dx/dt=(1-x)/taud (depression) du/dt=(U-u)/tauf (facilitation) where taud, tauf are time constants and U is a parameter in 0..1. Each presynaptic spike triggers modifications of the variables:: u<-u+U*(1-u) x<-x*(1-u) Synaptic weights are modulated by the product ``u*x`` (in 0..1) (before update). Reference: * Markram et al (1998). "Differential signaling via the same axon of neocortical pyramidal neurons", PNAS. ''' def __init__(self, C, taud, tauf, U): if isinstance(C, DelayConnection): raise AttributeError, "STP does not handle heterogeneous connections yet." 
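# Illustrative usage sketch, added here and not part of the original stp.py:
# attaching the Tsodyks-Markram short-term plasticity model described above
# to an excitatory Connection. Parameter values are invented for illustration;
# the STP object is then simply included in the network before running.
from brian import PoissonGroup, NeuronGroup, Connection, STP, ms, mV, Hz
source = PoissonGroup(100, rates=20 * Hz)
target = NeuronGroup(1, model='dv/dt = -v/(20*ms) : volt',
                     threshold=10 * mV, reset=0 * mV)
C = Connection(source, target, 'v', weight=0.1 * mV)
stp = STP(C, taud=200 * ms, tauf=20 * ms, U=0.5)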
NetworkOperation.__init__(self, lambda:None, clock=C.source.clock) N = len(C.source) P = STPGroup(N, clock=C.source.clock) P.x = 1 P.u = U P.ux = U if (isscalar(taud) & isscalar(tauf) & isscalar(U)): updater = STPUpdater(C.source, P, taud, tauf, U, delay=C.delay * C.source.clock.dt) else: updater = STPUpdater2(C.source, P, taud, tauf, U, delay=C.delay * C.source.clock.dt) self.contained_objects = [updater] C.source = P C.delay = 0 C._nstate_mod = 0 # modulation of synaptic weights self.vars = P def __call__(self): pass brian-1.3.1/brian/tests/000077500000000000000000000000001167451777000150635ustar00rootroot00000000000000brian-1.3.1/brian/tests/__init__.py000066400000000000000000000013621167451777000171760ustar00rootroot00000000000000from brian import * import brian import os def go(): try: import nose except ImportError: print "Brian testing framework uses the 'nose' package." print 'Brian running from file:', brian.__file__ # For running tests from an IPython shell, use magic_useframes=True, but we # restore the state after running magic_useframes = get_global_preference('magic_useframes') set_global_preferences(magic_useframes=True) nose.config.logging.disable(nose.config.logging.ERROR) basedir, _ = os.path.split(__file__) cwd = os.getcwd() os.chdir(basedir) nose.run() os.chdir(cwd) set_global_preferences(magic_useframes=magic_useframes) if __name__ == '__main__': go() brian-1.3.1/brian/tests/simpletest.py000066400000000000000000000015561167451777000176350ustar00rootroot00000000000000from brian import * __all__ = ['brian_sample_run'] def brian_sample_run(): ''' A simple Brian script for users to test they have installed Brian correctly. ''' reinit_default_clock() clear(True) eqs = ''' dv/dt = (ge+gi-(v+49*mV))/(20*ms) : volt dge/dt = -ge/(5*ms) : volt dgi/dt = -gi/(10*ms) : volt ''' P = NeuronGroup(4000, eqs, threshold= -50 * mV, reset= -60 * mV) P.v = -60 * mV + 10 * mV * rand(len(P)) Pe = P.subgroup(3200) Pi = P.subgroup(800) Ce = Connection(Pe, P, 'ge') Ci = Connection(Pi, P, 'gi') Ce.connect_random(Pe, P, 0.02, weight=1.4 * mV) Ci.connect_random(Pi, P, 0.02, weight= -9 * mV) M = SpikeMonitor(P) run(1000 * ms) raster_plot(M) print 'Brian sample run finished OK!' show() if __name__ == '__main__': brian_sample_run() brian-1.3.1/brian/tests/testcorrectness/000077500000000000000000000000001167451777000203155ustar00rootroot00000000000000brian-1.3.1/brian/tests/testcorrectness/__init__.py000066400000000000000000000001701167451777000224240ustar00rootroot00000000000000''' Created on 4 sept. 2009 @author: goodman ''' #if __name__=='__main__': # import nose # nose.main() brian-1.3.1/brian/tests/testcorrectness/test_epsp.py000066400000000000000000000043671167451777000227070ustar00rootroot00000000000000from brian import * def testepsp(): """Tests whether an alpha function EPSP works algebraically. 
The expected behaviour of the network below is that it should solve the following differential equation: taum dV/dt = -V + x taupsp dx/dt = -x + y taupsp dy/dt = -y V(0) = 0 volt x(0) = 0 volt y(0) = y0 volt This gives the following analytical solution for V (computed with Mathematica): V(t) = (E^(-(t/taum) - t/taupsp)*(-(E^(t/taum)*t*taum) + E^(t/taum)*t*taupsp - E^(t/taum)*taum*taupsp + E^(t/taupsp)*taum*taupsp)*y0)/(taum - taupsp)^2 This doesn't have an analytical solution for the maximum value of V, but the following numerical value was computed with the analytic formula: Vmax = 0.136889 mvolt (accurate to that many sig figs) at time t = 1.69735 ms (accurate to +/- 0.00001ms) The Brian network consists of two neurons, one governed by the differential equations given above, the other fires a single spike at time t=0 and is connected to the first """ reinit_default_clock() clock = Clock(dt=0.1 * ms) expected_vmax = 0.136889 * mvolt expected_vmaxtime = 1.69735 * msecond desired_vmaxaccuracy = 0.001 * mvolt desired_vmaxtimeaccuracy = max(clock.dt, 0.00001 * ms) taum = 10 * ms taupsp = 0.325 * ms y0 = 4.86 * mV P = NeuronGroup(N=1, model=''' dV/dt = (-V+x)*(1./taum) : volt dx/dt = (-x+y)*(1./taupsp) : volt dy/dt = -y*(1./taupsp) : volt ''', threshold=100 * mV, reset=0 * mV) Pinit = SpikeGeneratorGroup(1, [(0, 0 * ms)]) C = Connection(Pinit, P, 'y') C.connect_full(Pinit, P, y0) M = StateMonitor(P, 'V', record=0) run(10 * ms) V = M[0] Vmax = 0 Vi = 0 for i in range(len(V)): if V[i] > Vmax: Vmax = V[i] Vi = i Vmaxtime = M.times[Vi] * second Vmax = Vmax * volt assert abs(Vmax - expected_vmax) < desired_vmaxaccuracy assert abs(Vmaxtime - expected_vmaxtime) < desired_vmaxtimeaccuracy if __name__ == '__main__': testepsp() brian-1.3.1/brian/tests/testcorrectness/test_exponential_current.py000066400000000000000000000034411167451777000260200ustar00rootroot00000000000000from brian import * def testexponentialcurrent(): '''Tests whether an exponential current works as predicted From Tutorial 2b. The scheme we implement is the following diffential equations: | taum dV/dt = -V + ge - gi | taue dge/dt = -ge | taui dgi/dt = -gi An excitatory neuron connects to state ge, and an inhibitory neuron connects to state gi. When an excitatory spike arrives, ge instantaneously increases, then decays exponentially. Consequently, V will initially but continuously rise and then fall. 
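# Illustrative check, added here and not part of the original test files:
# evaluating the closed-form alpha-function EPSP from test_epsp.py above at
# the quoted peak time reproduces the quoted maximum of about 0.136889 mV.
from numpy import exp
taum, taupsp, y0, t = 10e-3, 0.325e-3, 4.86e-3, 1.69735e-3   # seconds / volts
V = y0 / (taum - taupsp) ** 2 * (
        exp(-t / taupsp) * (t * (taupsp - taum) - taum * taupsp)
        + exp(-t / taum) * taum * taupsp)
print V * 1e3    # ~0.136889 (mV)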
Solving these equations, if V(0)=0, ge(0)=g0 corresponding to an excitatory spike arriving at time 0, and gi(0)=0 then: | gi = 0 | ge = g0 exp(-t/taue) | V = (exp(-t/taum) - exp(-t/taue)) taue g0 / (taum-taue) ''' reinit_default_clock() taum = 20 * ms taue = 1 * ms taui = 10 * ms Vt = 10 * mV Vr = 0 * mV spiketimes = [(0, 0 * ms)] G1 = SpikeGeneratorGroup(2, spiketimes) G2 = NeuronGroup(N=1, model=''' dV/dt = (-V+ge-gi)/taum : volt dge/dt = -ge/taue : volt dgi/dt = -gi/taui : volt ''', threshold=Vt, reset=Vr) G2.V = Vr C1 = Connection(G1, G2, 'ge') C2 = Connection(G1, G2, 'gi') C1[0, 0] = 3 * mV C2[1, 0] = 3 * mV Mv = StateMonitor(G2, 'V', record=True) Mge = StateMonitor(G2, 'ge', record=True) Mgi = StateMonitor(G2, 'gi', record=True) run(100 * ms) t = Mv.times Vpredicted = (exp(-t / taum) - exp(-t / taue)) * taue * (3 * mV) / (taum - taue) Vdiff = abs(Vpredicted - Mv[0]) assert max(Vdiff) < 0.00001 if __name__ == '__main__': testexponentialcurrent() brian-1.3.1/brian/tests/testcorrectness/test_from_tutorial1c.py000066400000000000000000000022631167451777000250430ustar00rootroot00000000000000from brian import * def testfromtutorial1c(): '''Tests a behaviour from Tutorial 1c Solving the differential equation gives: V = El + (Vr-El) exp (-t/tau) Setting V=Vt at time t gives: t = tau log( (Vr-El) / (Vt-El) ) If the simulator runs for time T, and fires a spike immediately at the beginning of the run it will then generate n spikes, where: n = [T/t] + 1 If you have m neurons all doing the same thing, you get nm spikes. This calculation with the parameters above gives: t = 48.0 ms n = 21 nm = 840 As predicted. ''' reinit_default_clock() tau = 20 * msecond # membrane time constant Vt = -50 * mvolt # spike threshold Vr = -60 * mvolt # reset value El = -49 * mvolt # resting potential (same as the reset) dV = 'dV/dt = -(V-El)/tau : volt # membrane potential' G = NeuronGroup(N=40, model=dV, threshold=Vt, reset=Vr) G.V = El M = SpikeMonitor(G) run(1 * second) assert M.nspikes == 840 if __name__ == '__main__': testfromtutorial1c() brian-1.3.1/brian/tests/testfeatures/000077500000000000000000000000001167451777000176015ustar00rootroot00000000000000brian-1.3.1/brian/tests/testfeatures/__init__.py000066400000000000000000000000021167451777000217020ustar00rootroot00000000000000 brian-1.3.1/brian/tests/testfeatures/test_connect.py000066400000000000000000000153021167451777000226440ustar00rootroot00000000000000from brian import * def test_delay_connect_with_subgroups(): G = NeuronGroup(4, 'V:1') # test connect method C1 = Connection(G, G, 'V', delay=True, structure='dense') C2 = Connection(G, G, 'V', delay=True, structure='sparse') C3 = Connection(G, G, 'V', delay=True, structure='dynamic') for i, C in enumerate([C1, C2, C3]): C.connect(G[0:2], G[0:2], W=ones((2, 2)), delay=1) C.connect(G[0:2], G[2:4], W=ones((2, 2)), delay=(2.1, 2.2)) C.connect(G[2:4], G[0:2], W=ones((2, 2)), delay=lambda:4) C.connect(G[2:4], G[2:4], W=ones((2, 2)), delay=lambda i, j:i + 10 * j) assert (C.W.todense() == 1).all(), 'Problem with connection C' + str(i + 1) D = C.delayvec.todense() assert (array(D, dtype=int) == array([[1, 1, 2, 2], [1, 1, 2, 2], [4, 4, 0, 10], [4, 4, 1, 11]], dtype=int)).all(), 'Problem with connection C' + str(i + 1) # test connect_random method C1 = Connection(G, G, 'V', delay=True, structure='dense') C2 = Connection(G, G, 'V', delay=True, structure='sparse') C3 = Connection(G, G, 'V', delay=True, structure='dynamic') for i, C in enumerate([C1, C2, C3]): C.connect_random(G[0:2], G[0:2], p=1, weight=1, delay=1) 
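# Illustrative check, added here and not part of the original test files:
# reproducing the interspike-interval arithmetic quoted in
# test_from_tutorial1c.py above.
from math import log, floor
tau, Vr, Vt, El = 20e-3, -60e-3, -50e-3, -49e-3    # seconds / volts
t = tau * log((Vr - El) / (Vt - El))
n = int(floor(1.0 / t)) + 1      # spikes per neuron in a 1 second run
print t                          # ~0.048 s
print n                          # 21
print n * 40                     # 840 spikes in total for 40 neurons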
C.connect_random(G[0:2], G[2:4], p=1, weight=1, delay=(2.1, 2.2)) C.connect_random(G[2:4], G[0:2], p=1, weight=1, delay=lambda:4) C.connect_random(G[2:4], G[2:4], p=1, weight=1, delay=lambda i, j:i + 10 * j) assert (C.W.todense() == 1).all(), 'Problem with connection C' + str(i + 1) D = C.delayvec.todense() assert (array(D, dtype=int) == array([[1, 1, 2, 2], [1, 1, 2, 2], [4, 4, 0, 10], [4, 4, 1, 11]], dtype=int)).all(), 'Problem with connection C' + str(i + 1) # test connect_full method C1 = Connection(G, G, 'V', delay=True, structure='dense') C2 = Connection(G, G, 'V', delay=True, structure='sparse') C3 = Connection(G, G, 'V', delay=True, structure='dynamic') for i, C in enumerate([C1, C2, C3]): C.connect_full(G[0:2], G[0:2], weight=1, delay=1) C.connect_full(G[0:2], G[2:4], weight=1, delay=(2.1, 2.2)) C.connect_full(G[2:4], G[0:2], weight=1, delay=lambda:4) C.connect_full(G[2:4], G[2:4], weight=1, delay=lambda i, j:i + 10 * j) assert (C.W.todense() == 1).all(), 'Problem with connection C' + str(i + 1) D = C.delayvec.todense() assert (array(D, dtype=int) == array([[1, 1, 2, 2], [1, 1, 2, 2], [4, 4, 0, 10], [4, 4, 1, 11]], dtype=int)).all(), 'Problem with connection C' + str(i + 1) # test connect_one_to_one method C1 = Connection(G, G, 'V', delay=True, structure='dense') C2 = Connection(G, G, 'V', delay=True, structure='sparse') C3 = Connection(G, G, 'V', delay=True, structure='dynamic') for i, C in enumerate([C1, C2, C3]): C.connect_one_to_one(G[0:2], G[0:2], weight=1, delay=1) C.connect_one_to_one(G[0:2], G[2:4], weight=1, delay=(2.1, 2.2)) C.connect_one_to_one(G[2:4], G[0:2], weight=1, delay=lambda:4) C.connect_one_to_one(G[2:4], G[2:4], weight=1, delay=lambda i, j:i + 10 * j) D = C.delayvec.todense() assert (array(D, dtype=int) == array([[1, 0, 2, 0], [0, 1, 0, 2], [4, 0, 0, 0], [0, 4, 0, 11]], dtype=int)).all(), 'Problem with connection C' + str(i + 1) def test_connect_with_subgroups(): G = NeuronGroup(4, 'V:1') # test connect method C1 = Connection(G, G, 'V', structure='dense') C2 = Connection(G, G, 'V', structure='sparse') C3 = Connection(G, G, 'V', structure='dynamic') for i, C in enumerate([C1, C2, C3]): C.connect(G[0:2], G[0:2], W=ones((2, 2))) C.connect(G[0:2], G[2:4], W=2 * ones((2, 2))) C.connect(G[2:4], G[0:2], W=4 * ones((2, 2))) C.connect(G[2:4], G[2:4], W=array([[0, 10], [1, 11]])) W = C.W.todense() assert (array(W, dtype=int) == array([[1, 1, 2, 2], [1, 1, 2, 2], [4, 4, 0, 10], [4, 4, 1, 11]], dtype=int)).all(), 'Problem with connection C' + str(i + 1) # test connect_random method C1 = Connection(G, G, 'V', structure='dense') C2 = Connection(G, G, 'V', structure='sparse') C3 = Connection(G, G, 'V', structure='dynamic') for i, C in enumerate([C1, C2, C3]): C.connect_random(G[0:2], G[0:2], p=1, weight=1) C.connect_random(G[2:4], G[2:4], p=1, weight=lambda i, j:i + 10 * j) W = C.W.todense() assert (array(W, dtype=int) == array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 10], [0, 0, 1, 11]], dtype=int)).all(), 'Problem with connection C' + str(i + 1) # test connect_full method C1 = Connection(G, G, 'V', structure='dense') C2 = Connection(G, G, 'V', structure='sparse') C3 = Connection(G, G, 'V', structure='dynamic') for i, C in enumerate([C1, C2, C3]): C.connect_full(G[0:2], G[0:2], weight=1) C.connect_full(G[2:4], G[2:4], weight=lambda i, j:i + 10 * j) W = C.W.todense() assert (array(W, dtype=int) == array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 10], [0, 0, 1, 11]], dtype=int)).all(), 'Problem with connection C' + str(i + 1) # test connect_one_to_one method C1 = Connection(G, G, 
'V', structure='dense') C2 = Connection(G, G, 'V', structure='sparse') C3 = Connection(G, G, 'V', structure='dynamic') for i, C in enumerate([C1, C2, C3]): C.connect_one_to_one(G[0:2], G[0:2], weight=1) W = C.W.todense() assert (array(W, dtype=int) == array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]], dtype=int)).all(), 'Problem with connection C' + str(i + 1) if __name__ == '__main__': test_delay_connect_with_subgroups() test_connect_with_subgroups() brian-1.3.1/brian/tests/testfeatures/test_propagation.py000066400000000000000000000035341167451777000235420ustar00rootroot00000000000000from brian import * def test_structures(): reinit_default_clock() H = NeuronGroup(1, 'V:1\nmod:1', reset=0, threshold=1) H.V = 2 H.mod = 1 G = [NeuronGroup(10, 'V:1', reset=0, threshold=1) for _ in range(12)] M = [SpikeMonitor(g) for g in G] C0 = Connection(H, G[0], 'V', weight=2) C1 = Connection(H, G[1], 'V', weight=2, structure='dense') C2 = Connection(H, G[2], 'V', weight=2, structure='dynamic') C3 = Connection(H, G[3], 'V', weight=2, modulation='mod') C4 = Connection(H, G[4], 'V', weight=2, structure='dense', modulation='mod') C5 = Connection(H, G[5], 'V', weight=2, structure='dynamic', modulation='mod') C6 = Connection(H, G[6], 'V', weight=2, delay=True) C7 = Connection(H, G[7], 'V', weight=2, delay=True, structure='dense') C8 = Connection(H, G[8], 'V', weight=2, delay=True, structure='dynamic') C9 = Connection(H, G[9], 'V', weight=2, delay=True, modulation='mod') C10 = Connection(H, G[10], 'V', weight=2, delay=True, structure='dense', modulation='mod') C11 = Connection(H, G[11], 'V', weight=2, delay=True, structure='dynamic', modulation='mod') for c in [C6, C7, C8, C9, C10, C11]: c.delay[0, :] = arange(10) * defaultclock.dt + defaultclock.dt / 2 run(2 * ms) for k, m in enumerate(M): assert len(m.spikes), 'Problem with connection ' + str(k) i, j = zip(*m.spikes) i = array(i) j = array(j) j = array((j + defaultclock.dt / 2) / defaultclock.dt, dtype=int) assert (i == arange(10)).all(), 'Problem with connection ' + str(k) if k < 6: assert (j == 1).all(), 'Problem with connection ' + str(k) else: assert (j == (1 + arange(10))).all(), 'Problem with connection ' + str(k) + ': j=' + str(j) if __name__ == '__main__': test_structures() brian-1.3.1/brian/tests/testinterface/000077500000000000000000000000001167451777000177235ustar00rootroot00000000000000brian-1.3.1/brian/tests/testinterface/__init__.py000066400000000000000000000000671167451777000220370ustar00rootroot00000000000000''' Created on 4 sept. 2009 @author: goodman ''' brian-1.3.1/brian/tests/testinterface/test_clock.py000066400000000000000000000106501167451777000224310ustar00rootroot00000000000000from brian import * from nose.tools import * def test(): """ The :class:`Clock` object ~~~~~~~~~~~~~~~~~~~~~~~~~ A :class:`Clock` object is initialised via:: c = Clock(dt=0.1*msecond, t=0*msecond, makedefaultclock=False) In particular, the following will work and do the same thing:: c = Clock() c = Clock(t=0*second) c = Clock(dt=0.1*msecond) c = Clock(0.1*msecond,0*second) Setting the ``makedefaultclock=True`` argument sets the newly created clock as the default one. The default clock ~~~~~~~~~~~~~~~~~ The default clock can be found using the :func:`get_default_clock` function, and redefined using the :func:`define_default_clock` function, where the arguments passed to :func:`define_default_clock` are the same as the initialising arguments to the ``Clock(...)`` statement. The default clock can be reinitialised by calling :func:`reinit_default_clock`. 
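# Illustrative sketch, added here and not part of the original test_clock.py:
# using the default-clock helper functions described in the docstring above.
from brian import define_default_clock, get_default_clock, reinit_default_clock, ms
define_default_clock(dt=0.2 * ms)
print get_default_clock().dt       # the new default dt, 0.2 ms
reinit_default_clock()             # resets the default clock's t to 0 s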
A less safe way to access the default clock is to refer directly to the variable :data:`defaultclock`. If the default clock has been redefined, this won't work. The :func:`guess_clock` function ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The function :func:`guess_clock` should return (in this order of priority): * The clock passed as an argument, if one was passed * A clock defined in the calling function, if there was one * The default clock otherwise If more than one clock was defined in the calling function, it should raise a ``TypeError`` exception. """ reinit_default_clock() # check that 'defaultclock' default clock exists and starts at t=0 assert defaultclock.t < 0.001 * msecond # check that default clock exists and starts at t = 0 c = guess_clock() assert c.t < 0.001 * msecond # check that passing no arguments works c = Clock() # check that passing t argument works c = Clock(t=1 * second) assert c.t > 0.9 * second # check that passing dt argument works c = Clock(dt=10 * msecond) assert c.dt > 9 * msecond # check that passing t and dt arguments works c = Clock(t=2 * second, dt=1 * msecond) assert c.t > 1.9 * second assert 0.5 * msecond < c.dt < 2 * msecond # check that making this the default clock works assert get_global_preference('defaultclock').dt < 9 * msecond c = Clock(dt=10 * msecond, makedefaultclock=True) assert get_global_preference('defaultclock').dt > 9 * msecond # check that the other ways of defining a default clock work define_default_clock(dt=3 * msecond) assert 2.9 * msecond < get_global_preference('defaultclock').dt < 3.1 * msecond # check that the get_default_clock function works assert 2.9 * msecond < get_default_clock().dt < 3.1 * msecond # check that passing unnamed arguments in the order dt, t works c = Clock(10 * msecond, 1 * second) assert c.t > 0.9 * second assert c.dt > 9 * msecond # check that reinit() sets t=0 c = Clock(t=1 * second) c.reinit() assert c.t < 0.001 * second # check that tick() works c = Clock(t=1 * second, dt=1 * msecond) for i in range(10): c.tick() assert c.t > 9 * msecond # check that reinit_default_clock works for i in range(10): get_default_clock().tick() reinit_default_clock() assert get_default_clock().t < 0.0001 * second # check that guess_clock passed a clock returns that clock assert 0.6 * second < guess_clock(Clock(t=0.7 * second)).t < 0.8 * second # check that guess_clock passed no clock returns the only clock we've defined in this function so far c = Clock() assert guess_clock() is c # check that if no clock is defined, guess_clock returns the default clock del c assert guess_clock() is get_default_clock() # check that if two or more clocks are defined, guess_clock raises a TypeError c = Clock() d = Clock() assert_raises(TypeError, guess_clock) del d # check that if we have a calling stack, the innermost clock is found c = Clock() def f(): d = Clock() assert guess_clock() is d del d return guess_clock() assert f() is c # cleanup: reset the default clock to its default state set_global_preferences(defaultclock=defaultclock) brian-1.3.1/brian/tests/testinterface/test_connection.py000066400000000000000000000100221167451777000234660ustar00rootroot00000000000000from brian import * from nose.tools import * from brian.utils.approximatecomparisons import is_approx_equal def test(): ''' :class:`Connection` ~~~~~~~~~~~~~~~~~~~ **Initialised as:** :: Connection(source, target[, state=0[, delay=0*ms]]) With arguments: ``source`` The group from which spikes will be propagated. ``target`` The group to which spikes will be propagated. 
``state`` The state variable name or number that spikes will be propagated to in the target group. ``delay`` The delay between a spike being generated at the source and received at the target. At the moment, the mechanism for delays only works for relatively short delays (an error will be generated for delays that are too long), but this is subject to change. The exact behaviour then is not part of the assured interface, although it is very likely that the syntax will not change (or will at least be backwards compatible). **Methods** ``connect_random(P,Q,p[,weight=1])`` Connects each neuron in ``P`` to each neuron in ``Q``. ``connect_full(P,Q[,weight=1])`` Connect every neuron in ``P`` to every neuron in ``Q``. ``connect_one_to_one(P,Q)`` If ``P`` and ``Q`` have the same number of neurons then neuron ``i`` in ``P`` will be connected to neuron ``i`` in ``Q`` with weight 1. Additionally, you can directly access the matrix of weights by writing:: C = Connection(P,Q) print C[i,j] C[i,j] = ... Where here ``i`` is the source neuron and ``j`` is the target neuron. Note: No unit checking is currently done if you use this method, but this is subject to change for future releases. The behaviour when a list of neuron ``spikes`` is received is to add ``W[i,:]`` to the target state variable for each ``i`` in ``spikes``. ''' reinit_default_clock() # test Connection object eqs = ''' da/dt = 0.*hertz : 1. db/dt = 0.*hertz : 1. ''' spikes = [(0, 1 * msecond), (1, 3 * msecond)] G1 = SpikeGeneratorGroup(2, spikes) G2 = NeuronGroup(2, model=eqs, threshold=10., reset=0.) # first test the methods # connect_full C = Connection(G1, G2) C.connect_full(G1, G2, weight=2.) for i in range(2): for j in range(2): assert (is_approx_equal(C[i, j], 2.)) # connect_random C = Connection(G1, G2) C.connect_random(G1, G2, 0.5, weight=2.) # can't assert anything about that # connect_one_to_one C = Connection(G1, G2) C.connect_one_to_one(G1, G2) for i in range(2): for j in range(2): if i == j: assert (is_approx_equal(C[i, j], 1.)) else: assert (is_approx_equal(C[i, j], 0.)) del C # and we will use a specific set of connections in the next part Ca = Connection(G1, G2, 'a') Cb = Connection(G1, G2, 'b') Ca[0, 0] = 1. Ca[0, 1] = 1. Ca[1, 0] = 1. #Ca[1,1]=0 by default #Cb[0,0]=0 by default Cb[0, 1] = 1. Cb[1, 0] = 1. Cb[1, 1] = 1. net = Network(G1, G2, Ca, Cb) net.run(2 * msecond) # after 2 ms, neuron 0 will have fired, so a 0 and 1 should # have increased by 1 to [1,1], and b 1 should have increased # by 1 to 1 assert (is_approx_equal(G2.a[0], 1.)) assert (is_approx_equal(G2.a[1], 1.)) assert (is_approx_equal(G2.b[0], 0.)) assert (is_approx_equal(G2.b[1], 1.)) net.run(2 * msecond) # after 4 ms, neuron 1 will have fired, so a 0 should have # increased by 1 to 2, and b 0 and 1 should have increased # by 1 to [1, 2] assert (is_approx_equal(G2.a[0], 2.)) assert (is_approx_equal(G2.a[1], 1.)) assert (is_approx_equal(G2.b[0], 1.)) assert (is_approx_equal(G2.b[1], 2.)) reinit_default_clock() if __name__ == '__main__': test() brian-1.3.1/brian/tests/testinterface/test_directcontrol.py000066400000000000000000000342451167451777000242170ustar00rootroot00000000000000from brian import * from brian.utils.approximatecomparisons import * from nose.tools import * def test(): """ Spike containers ~~~~~~~~~~~~~~~~ A spike container is either an iterable object or a callable object which returns an iterable object. For example, a list is a spike container, as is a generator, as is a function which returns a list, or a generator function. 
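# Illustrative sketch, added here and not part of the original test files, of
# the propagation rule stated in the Connection docstring above: for each
# source index i that spiked, the weight row W[i, :] is added to the target
# state variable. The toy numbers below are invented.
from numpy import zeros, ones
W = 0.5 * ones((2, 3))        # weights from 2 source neurons to 3 targets
target_state = zeros(3)
for i in [0, 1]:              # indices of source neurons that fired
    target_state += W[i, :]
print target_state            # -> [ 1.  1.  1.]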
:class:`MultipleSpikeGeneratorGroup` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Called as:: MultipleSpikeGeneratorGroup(spiketimes[,clock]) spiketimes is a list of spike containers, one for each neuron in the group. The elements of the spike containers are spike times. Callable spike containers are called when the group is reinitialised. If you provide a generator rather than a callable object, reinitialising the group will not reinitialise the generator. If the containers are numpy arrays units will not be checked (times should be in seconds). So for example, the following will correspond to a group of 2 neurons, where the first fires at times 0ms, 2ms and 5ms, and the second fires at times 1ms and 3ms:: spiketimes = [[0*msecond, 2*msecond, 5*msecond], [1*msecond, 3*msecond]] G = MultipleSpikeGeneratorGroup(spiketimes) You could do the same thing with generator functions (rather perversely in this case):: def st1(): yield 0*msecond yield 2*msecond yield 5*msecond def st2(): yield 1*msecond yield 3*msecond G = MultipleSpikeGeneratorGroup([st1(), st2()]) Note that if two or more spike times fall within the same dt, spikes will stack up and come out one per dt until the stack is exhausted. A warning will be generated if this happens. If a clock is provided, updates of the group will be synchronised with that clock, otherwise the standard clock guessing procedure will be used (see :func:`~brian.clock.guess_clock` in the :mod:`~brian.clock` module). :class:`SpikeGeneratorGroup` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Called as:: SpikeGeneratorGroup(N,spiketimes[,clock]) where N is the number of neurons in the group, and spiketimes is a spike container, whose elements are tuples (i,t) meaning neuron i fires at time t. Pairs (i,t) need to be sorted in time unless spiketimes is a tuple or list. For example:: from math import random def spikefirer(N,lower,upper): nexttime = random.uniform(lower,upper) while True: yield (random.randint(0,N-1),nexttime) nexttime = nexttime + random.uniform(lower,upper) G = SpikeGeneratorGroup(10,uniform_isi(10,0*msecond,10*msecond)) would give a neuron group P with 10 neurons, where a random one of the neurons fires with an interval between spikes which is uniform in (0ms, 10ms). If spiketimes is callable, it will be called when the group is reinitialised. If you provide a generator rather than a callable object, reinitialising the group will not reinitialise the generator. Note that if a neuron fires more than one spike in a given interval dt, additional spikes will be discarded. If a clock is provided, updates of the group will be synchronised with that clock, otherwise the standard clock guessing procedure will be used (see :func:`~brian.clock.guess_clock` in the :mod:`~brian.clock` module). :class:`PulsePacket` ~~~~~~~~~~~~~~~~~~~~ Fires a Gaussian distributed packet of n spikes with given spread, called as:: PulsePacket(t,n,sigma[,clock]) You can change the parameters by calling the method ``generate(t,n,sigma)``. If a clock is provided, updates of the group will be synchronised with that clock, otherwise the standard clock guessing procedure will be used (see :func:`~brian.clock.guess_clock` in the :mod:`~brian.clock` module). 
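# Illustrative usage sketch, added here and not part of the original test: a
# PulsePacket of 100 spikes centred on 10 ms with 1 ms Gaussian spread,
# recorded with a SpikeMonitor in the same style as the helper functions below.
from brian import PulsePacket, SpikeMonitor, Network, ms
P = PulsePacket(10 * ms, 100, 1 * ms)
M = SpikeMonitor(P)
net = Network(P, M)
net.run(20 * ms)
print len(M.spikes)    # typically 100, one spike per neuron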
""" reinit_default_clock() # helper function for running a simple network def mininet(grouptype, *args, **kwds): reinit_default_clock() G = grouptype(*args, **kwds) M = SpikeMonitor(G, True) net = Network(G, M) net.run(5 * msecond) reinit_default_clock() return M.spikes # run mininet twice to see if it handles reinit() correctly def mininet2(grouptype, *args, **kwds): reinit_default_clock() G = grouptype(*args, **kwds) M = SpikeMonitor(G, True) net = Network(G, M) net.run(5 * msecond) spikes1 = M.spikes net.reinit() net.run(5 * msecond) spikes2 = M.spikes reinit_default_clock() return (spikes1, spikes2) # check multiple spike generator group with lists spiketimes = [[0 * msecond, 2 * msecond, 4 * msecond], [1 * msecond, 3 * msecond]] spikes = mininet(MultipleSpikeGeneratorGroup, spiketimes) def test1(spikes): assert len(spikes) == 5 i, t = zip(*spikes) # zip(*...) is the inverse of zip, so i is the ordered list of neurons that fired, and t is the ordered list of times assert i == (0, 1, 0, 1, 0) # check that the order of neuron firings is correct for s1, s2 in enumerate(t): assert is_approx_equal(s1 * msecond, s2) # the firing times are (0,1,2,3,4)ms test1(spikes) # check multiple spike generator group with arrays # NOTE: Units are not checked, array has to be in seconds spiketimes = [array([0.0, 0.002, 0.004]), array([0.001, 0.003])] spikes = mininet(MultipleSpikeGeneratorGroup, spiketimes) test1(spikes) # multiple spike generator group with generator and period def gen1(): yield 0 * msecond def gen2(): yield 1 * msecond spikes = mininet(MultipleSpikeGeneratorGroup, [gen1, gen2], period=2*ms) test1(spikes) # check that given a different clock it works as expected, wrap in a function to stop magic functions from # picking up the clock objects we define here def testwithclock(): spikes = mininet(MultipleSpikeGeneratorGroup, spiketimes, clock=Clock(dt=0.1 * msecond)) test1(spikes) spikes = mininet(MultipleSpikeGeneratorGroup, spiketimes, clock=Clock(dt=2 * msecond)) assert len(spikes) == 5 i, t = zip(*spikes) # zip(*...) 
is the inverse of zip, so i is the ordered list of neurons that fired, and t is the ordered list of times for s1, s2 in zip([0, 2, 2, 4, 4], t): assert is_approx_equal(s1 * msecond, s2) # the firing times are (0,2,2,4,4)ms testwithclock() # check multiple spike generator group with generators def st1(): yield 0 * msecond yield 2 * msecond yield 4 * msecond def st2(): yield 1 * msecond yield 3 * msecond spikes = mininet(MultipleSpikeGeneratorGroup, [st1(), st2()]) test1(spikes) # check reinit spikes1, spikes2 = mininet2(MultipleSpikeGeneratorGroup, [st1, st2]) test1(spikes1) test1(spikes2) # spike generator with list spiketimes = [(0, 0 * msecond), (1, 1 * msecond), (0, 2 * msecond), (1, 3 * msecond), (0, 4 * msecond) ] spikes = mininet(SpikeGeneratorGroup, 2, spiketimes) test1(spikes) # spike generator with list (already sorted so pass sort=False) spiketimes = [(0, 0 * msecond), (1, 1 * msecond), (0, 2 * msecond), (1, 3 * msecond), (0, 4 * msecond) ] spikes = mininet(SpikeGeneratorGroup, 2, spiketimes, sort=False) test1(spikes) # spike generator with unsorted (inversely sorted) list (sort=True is default) spiketimes = [(0, 4 * msecond), (1, 3 * msecond), (0, 2 * msecond), (1, 1 * msecond), (0, 0 * msecond) ] spikes = mininet(SpikeGeneratorGroup, 2, spiketimes) test1(spikes) # check that it works with a clock def testwithclock(): spikes = mininet(SpikeGeneratorGroup, 2, spiketimes, clock=Clock(dt=0.1 * msecond)) test1(spikes) spikes = mininet(SpikeGeneratorGroup, 2, spiketimes, clock=Clock(dt=2 * msecond)) assert len(spikes) == 5 i, t = zip(*spikes) # zip(*...) is the inverse of zip, so i is the ordered list of neurons that fired, and t is the ordered list of times for s1, s2 in zip([0, 2, 2, 4, 4], t): assert is_approx_equal(s1 * msecond, s2) # the firing times are (0,2,2,4,4)ms testwithclock() # spike generator with a function returning a list def return_spikes(): return [(0, 0 * msecond), (1, 1 * msecond), (0, 2 * msecond), (1, 3 * msecond), (0, 4 * msecond) ] spikes = mininet(SpikeGeneratorGroup, 2, return_spikes) test1(spikes) # spike generator with a list of spikes with simultaneous spikes across neurons spiketimes = [(0, 0 * msecond), (1, 0 * msecond), (0, 2 * msecond), (1, 2 * msecond), (0, 4 * msecond), (1, 4 * msecond) ] def test2(spikes): assert len(spikes) == 6 #check both neurons spiked at the correct times for neuron in [0, 1]: for s1, s2 in zip([0, 2, 4], [t for i, t in spikes if i==neuron]): assert is_approx_equal(s1 * msecond, s2) spikes = mininet(SpikeGeneratorGroup, 2, spiketimes) test2(spikes) # same but using the gather=True option spikes = mininet(SpikeGeneratorGroup, 2, spiketimes, gather=True) test2(spikes) # same but with index arrays instead of single neuron indices spiketimes = [([0, 1], 0 * msecond), ([0, 1], 2 * msecond), ([0, 1], 4 * msecond)] spikes = mininet(SpikeGeneratorGroup, 2, spiketimes) test2(spikes) # spike generator with single indices and index arrays of varying length spiketimes = [([0, 1], 0 * msecond), (0, 1 * msecond), (1, 2 * msecond), ([0], 3 * msecond) ] spikes = mininet(SpikeGeneratorGroup, 2, spiketimes) def test3(spikes): assert len(spikes) == 5 #check both neurons spiked at the correct times for s1, s2 in zip([0, 1, 3], [t for i, t in spikes if i==0]): assert is_approx_equal(s1 * msecond, s2) for s1, s2 in zip([0, 2], [t for i, t in spikes if i==1]): assert is_approx_equal(s1 * msecond, s2) test3(spikes) # spike generator with an array of (non-simultaneous) spikes # NOTE: For an array, the times have to be in seconds and sorted spiketimes 
= array([[0, 0.0], [1, 0.001], [0, 0.002], [1, 0.003], [0, 0.004]]) spikes = mininet(SpikeGeneratorGroup, 2, spiketimes) test1(spikes) # spike generator with an array of (simultaneous) spikes spiketimes = array([[0, 0.0], [1, 0.0], [0, 0.002], [1, 0.002], [0, 0.004], [1, 0.004]]) spikes = mininet(SpikeGeneratorGroup, 2, spiketimes) test2(spikes) # spike generator with an array of (simultaneous) spikes, using gather=True spiketimes = array([[0, 0.0], [1, 0.0], [0, 0.002], [1, 0.002], [0, 0.004], [1, 0.004]]) spikes = mininet(SpikeGeneratorGroup, 2, spiketimes, gather=True) test2(spikes) # test the handling of an empty initialization and direct setting of spiketimes def test_attribute_setting(): reinit_default_clock() G = SpikeGeneratorGroup(2, []) M = SpikeMonitor(G, True) net = Network(G, M) net.run(5 * msecond) assert len(M.spikes) == 0 reinit_default_clock() net.reinit() G.spiketimes = [(0, 0 * msecond), (1, 1 * msecond), (0, 2 * msecond), (1, 3 * msecond), (0, 4 * msecond)] net.run(5 * msecond) test1(M.spikes) test_attribute_setting() # tests a subtle difficulty when setting spiketimes and using a subgroup def test_attribute_setting_subgroup(): reinit_default_clock() G = SpikeGeneratorGroup(2, []) subG = G.subgroup(2) M = SpikeMonitor(subG, True) G.spiketimes = [(0, 0 * msecond), (1, 1 * msecond), (0, 2 * msecond), (1, 3 * msecond), (0, 4 * msecond)] G.spiketimes = [(0, 0 * msecond), (1, 1 * msecond), (0, 2 * msecond), (1, 3 * msecond), (0, 4 * msecond)] net = Network(G, M) net.run(5 * msecond) test1(M.spikes) test_attribute_setting_subgroup() # spike generator with generator def sg(): yield (0, 0 * msecond) yield (1, 1 * msecond) yield (0, 2 * msecond) yield (1, 3 * msecond) yield (0, 4 * msecond) spikes = mininet(SpikeGeneratorGroup, 2, sg()) test1(spikes) # spike generator reinit spikes1, spikes2 = mininet2(SpikeGeneratorGroup, 2, sg) test1(spikes1) test1(spikes2) # spike generator group with generator and period def gen(): yield (0, 0 * msecond) yield (1, 1 * msecond) spikes = mininet(SpikeGeneratorGroup, 2, gen, period=2*ms) test1(spikes) # spike generator group with list and period spiketimes = [(0, 0 * msecond), (1, 1 * msecond)] spikes = mininet(SpikeGeneratorGroup, 2, spiketimes, period=2*ms) test1(spikes) # pulse packet with 0 spread spikes = mininet(PulsePacket, 2.5 * msecond, 10, 0 * msecond) assert len(spikes) == 10 i, t = zip(*spikes) for s in t: assert is_approx_equal(2.5 * msecond, s) # do not attempt to verify the behaviour of PulsePacket here, this is # an interface test only def test_poissoninput(): eqs = Equations("dv/dt=(1-v)/(1*second) : 1") group = NeuronGroup(N=1, model=eqs, reset=0, threshold=1) input = PoissonInput(group, N = 10, rate=50 * Hz, weight = .11, state='v') m = SpikeCounter(group) net = Network(group, input, m) net.run(500 * ms) #only checks that there some spikes assert (m.nspikes >= 1) if __name__ == '__main__': test() test_poissoninput() brian-1.3.1/brian/tests/testinterface/test_equations.py000066400000000000000000000052731167451777000233530ustar00rootroot00000000000000from brian import * from nose.tools import * def test(): ''' Equations module. 
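A minimal sketch of the kind of string these tests exercise (the keyword form
passes ``tau`` explicitly rather than taking it from the calling namespace)::

    eqs = Equations('dv/dt = -v/tau : volt', tau=10*ms)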
''' reinit_default_clock() tau = 2 * ms eqs = Equations(''' x=v**2 : volt**2 y=x # a comment dv/dt=(y-v)/tau : volt z : amp''') # Parsing and building assert eqs._eq_names == ['x', 'y'] # An alias is also a static equation assert eqs._diffeq_names == ['v', 'z'] # A parameter is a differential equation dz/dt=0 assert eqs._diffeq_names_nonzero == ['v'] assert eqs._alias == {'y':'x'} assert eqs._units == {'t':second, 'x':volt ** 2, 'z':amp, 'y':volt ** 2, 'v': volt} assert eqs._string == {'y': 'x', 'x': 'v**2', 'z': '0*amp/second', 'v': '(y-v)/tau'} assert eqs._namespace['v']['tau'] == 2 * ms assert 'tau' not in eqs._namespace['x'] # Name substitutions assert Equations('dx/dt=-x/(2*ms):1', x='y')._diffeq_names == ['y'] # Explicit namespace eqs2 = Equations('dx/dt=-x/tau:volt', tau=1 * ms) assert eqs2._namespace['x']['tau'] == 1 * ms assert eqs2._namespace['x'] == {'tau':1 * ms, 'volt':volt} # Find membrane potential assert eqs.get_Vm() == 'v' assert Equations('v=x**2 : 1').get_Vm() == None # must be a differential equation assert Equations('dx/dt=1/(2*ms) : 1').get_Vm() == None assert Equations('dvm/dt=1/(2*ms) : 1').get_Vm() == 'vm' assert Equations('''dvm/dt=1/(2*ms) : 1 dv/dt=1/(2*ms) : 1 ''').get_Vm() == None # ambiguous # Unit checking eqs = Equations('dv/dt=-v : volt') assert_raises(DimensionMismatchError, eqs.prepare) # Free variables eqs = Equations('dv/dt=(freevar-v)/tau : volt') assert eqs.free_variables() == ['tau', 'freevar'] # Equation types: stochastic, time dependent, linear, conditionally linear eqs = Equations(''' dv/dt=(f(t)-v)/tau : volt dw/dt=-w/tau+3*tau**-.5*xi : 1''') assert eqs.is_stochastic() assert eqs.is_stochastic(var='v') is False assert eqs.is_stochastic(var='w') assert eqs.is_time_dependent() assert eqs.is_time_dependent(var='v') assert eqs.is_time_dependent(var='w') is False assert eqs.is_linear() is False # not linear if time-dependent eqs = Equations('dv/dt=-v/tau : volt') assert eqs.is_linear() eqs.prepare() assert eqs.is_conditionally_linear() eqs = Equations(''' dv/dt=(w**3-v)/tau : 1 dw/dt=(v**2-w)/tau : 1 ''') eqs.prepare() assert eqs.is_conditionally_linear() eqs = Equations(''' dv/dt=(w**3-v**2)/tau : 1 dw/dt=(v**2-w)/tau : 1 ''') eqs.prepare() assert eqs.is_conditionally_linear() is False if __name__ == '__main__': test() brian-1.3.1/brian/tests/testinterface/test_inspection.py000066400000000000000000000007551167451777000235160ustar00rootroot00000000000000from brian import * from brian.inspection import * def test(): ''' Inspection module ''' reinit_default_clock() expr = ''' x_12+=y*12 # comment pour(water,\ "on desk")''' # Check that identifiers are correctly extracted assert (get_identifiers(expr) == ('x_12', 'y', 'pour', 'water')) # Check that modified variables are correctly extracted assert (modified_variables(expr) == ['x_12', 'pour']) if __name__ == '__main__': test() brian-1.3.1/brian/tests/testinterface/test_magic.py000066400000000000000000000156301167451777000224210ustar00rootroot00000000000000from brian import * from nose.tools import * def test(): """ See the main documentation or the API documentation for details of the purpose, main functions and classes of the magic module. Functions ~~~~~~~~~ * :func:`get_instances(instancetype,level=1)` * :func:`find_instances(instancetype,startlevel=1)` * :func:`find_all_instances(instancetype,startlevel=1)` Here instancetype is a class derived from :class:`InstanceTracker`, including :class:`Clock`, :class:`NeuronGroup`, :class:`Connection`, :class:`NetworkOperation`. 
``level`` is an integer greater than 0 that tells the function how far back it should search in the sequence of called functions. ``level=0`` means it should find instances from the function calling one of the magic functions, ``level=1`` means it should find instances from the function calling the function calling one of the magic functions, etc. :func:`get_instances` returns all instances at a specified level. :func:`find_instances` searches increasing levels starting from the given ``startlevel`` until it finds a nonzero number of instances, and then returns those. :func:`find_all_instances` finds all instances from a given level onwards. Classes ~~~~~~~ An object of a class ``cls`` derived from ``InstanceTracker`` will be tracked if ``cls._track_instances()`` returns ``True``. The default behaviour of ``InstanceTracker`` is to always return ``True``, but this method can be redefined if you do not want to track instances of a particular subclass of a tracked class. Note that this method is a static method of a class, and cannot be used to stop particular instances being tracked. Redefine it using something like:: class A(InstanceTracker): pass class B(A): @staticmethod def _track_instances(): return False A warning (technical detail) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The current implementation of instance tracking will return every variable that was created in a given execution frame (i.e. in a given function) that is still alive (i.e. a reference still exists to it somewhere). So for example, the following will not find any instance:: class A(InstanceTracker): pass def f(A): return A() a = f(A) print get_instances(A,level=0) The reason is that the object a was created in the function ``f``, but the :func:`get_instances` statement is run in the main body. Similarly, the following will return two instances rather than one as you might expect:: class A(InstanceTracker): pass def f(A): a = A() return get_instances(A,level=0) insts1 = f(A) isnts2 = f(A) print insts2 The reason is that the first call to ``f`` defines an instance of ``A`` and returns (via :func:`get_instances`) an object which contains a reference to it. The object is still therefore alive. The second call to ``f`` creates a new ``A`` object, but the :func:`get_instances` call returns both ``A`` objects, because they are both still alive (the first is stored in ``insts1``) and both created in the function ``f``. This behaviour is not part of the assured interface of the magic module, and you shouldn't rely on it. 
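Finally, a sketch of :func:`find_instances`, which searches upwards through the
calling frames from ``startlevel`` until it finds a nonzero number of instances
(this mirrors the pattern used in the tests below)::

    class A(InstanceTracker):
        pass
    def creator():
        a = A()
        return searcher()
    def searcher():
        # nothing was created here, so the search moves up a level and finds a
        return find_instances(A, startlevel=0)
    insts = creator()[0]   # the list of instances found, here just the one made in creator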
""" reinit_default_clock() # Define a heirarchy of classes A, B, C, D, E and track instances of them # Do not track instances of D or E class A(InstanceTracker): gval = 0 def __init__(self): self.value = A.gval A.gval += 1 def __repr__(self): return str(self) def __str__(self): return str(self.value) class B(A): pass class C(B): pass class D(C): @staticmethod def _track_instances(): return False class E(D): pass # Create some sample objects of each type a1 = A() # object 0 a2 = A() # object 1 b = B() # object 2 c = C() # object 3 d = D() # object 4 e = E() # object 5 # Find the instances of each type on this level instA, names = get_instances(A, level=0) instB, names = get_instances(B, level=0) instC, names = get_instances(C, level=0) instD, names = get_instances(D, level=0) instE, names = get_instances(E, level=0) # This is the expected behaviour: # instA = [a1, a2, b, c] assert all(o in instA for o in [a1, a2, b, c]) assert all(o not in instA for o in [d, e]) # instB = [b, c] assert all(o in instB for o in [b, c]) assert all(o not in instB for o in [a1, a2, d, e]) # instC = [c] assert c in instC assert all(o not in instC for o in [a1, a2, b, d, e]) # instD = instE = [] assert len(instD) == 0 assert len(instE) == 0 # Check that level=0 and level=1 work as expected def f1(vars, A, B, C, D, E): a3 = A() # object 6 inst_ahere, names = get_instances(A, level=0) # level=0 should refer to definitions inside f inst_abefore, names = get_instances(A, level=1) # level=1 should refer to definitions inside the function calling f # inst_abefore = [a1, a2, b, c] assert all(o in instA for o in vars[0:4]) assert all(o not in instA for o in vars[4:]) # inst_ahere = [a3] assert len(inst_ahere) == 1 and a3 in inst_ahere f1([a1, a2, b, c, d, e], A, B, C, D, E) # Check that nested function calling works as expected def f2(A): a4 = A() # object 7 return [get_instances(A, level) for level in range(2)] inst = f2(A) # inst[0][0] = [a4] assert str(inst[0][0]) == '[7]' # inst[1][0] = [a1,a2,c,b] assert len(inst[1][0]) == 4 and all(o in inst[1][0] for o in [a1, a2, b, c]) def f3(A): a5 = A() # object 9 return [get_instances(A, level) for level in range(2)] def f4(A): a6 = A() # object 8 return f3(A) inst = f4(A) # inst[0][0] = [a5] assert str(inst[0][0]) == '[9]' # inst[1][0] = [a6] assert str(inst[1][0]) == '[8]' # check that find_instances works as expected def f5(A): return f6(A) def f6(A): a = A() # object 10 return f7(A) def f7(A): return find_instances(A, startlevel=0) inst = f5(A)[0][0] assert str(inst) == '10' # check that find_all_instances works as expected def f8(A): a = A() # object 11 return find_all_instances(A, startlevel=0) insts = f8(A)[0] # should be objects 0,1,2,3 and 11 s = map(str, insts) s.sort() assert str(s) == "['0', '1', '11', '2', '3']" if __name__ == '__main__': test() brian-1.3.1/brian/tests/testinterface/test_monitor.py000066400000000000000000000414631167451777000230330ustar00rootroot00000000000000from brian import * from nose.tools import * from brian.utils.approximatecomparisons import is_approx_equal, is_within_absolute_tolerance try: from brian.experimental.cuda.gpu_modelfitting import * import pycuda.autoinit as autoinit use_gpu = True except ImportError: use_gpu = False def test_spikemonitor(): ''' :class:`SpikeMonitor` ~~~~~~~~~~~~~~~~~~~~~ Records spikes from a :class:`NeuronGroup`. 
Initialised as one of:: SpikeMonitor(source(,record=True)) SpikeMonitor(source,function=function) Where: source A :class:`NeuronGroup` to record from record True or False to record all the spikes or just summary statistics. function A function f(spikes) which is passed the array of spikes numbers that have fired called each step, to define custom spike monitoring. Has two attributes: nspikes The number of recorded spikes spikes A time ordered list of pairs (i,t) where neuron i fired at time t. :class:`StateMonitor` ~~~~~~~~~~~~~~~~~~~~~ Records the values of a state variable from a :class:`NeuronGroup`. Initialise as:: StateMonitor(P,varname(,record=False) (,when='end)(,timestep=1)(,clock=clock)) Where: P The group to be recorded from varname The state variable name or number to be recorded record What to record. The default value is False and the monitor will only record summary statistics for the variable. You can choose record=integer to record every value of the neuron with that number, record=list of integers to record every value of each of those neurons, or record=True to record every value of every neuron (although beware that this may use a lot of memory). when When the recording should be made in the :class:`Network` update, possible values are any of the strings: 'start', 'before_groups', 'after_groups', 'before_connections', 'after_connections', 'before_resets', 'after_resets', 'end' (in order of when they are run). timestep A recording will be made each timestep clock updates (so timestep should be an integer). clock A clock for the update schedule, use this if you have specified a clock other than the default one in your network, or to update at a lower frequency than the update cycle. Note though that if the clock here is different from the main clock, the when parameter will not be taken into account, as network updates are done clock by clock. Use the timestep parameter if you need recordings to be made at a precise point in the network update step. The :class:`StateMonitor` object has the following properties: times The times at which recordings were made mean The mean value of the state variable for every neuron in the group (not just the ones specified in the record keyword) var The unbiased estimate of the variances, as in mean std The square root of var, as in mean In addition, if M is a :class:`StateMonitor` object, you write:: M[i] for the recorded values of neuron i (if it was specified with the record keyword). It returns an array object. Others ~~~~~~ The following monitors also exist, but are not part of the assured interface because their syntax is subject to change. See the documentation for each class for more details. 
* :class:`Monitor` (base class) * :class:`ISIHistogramMonitor` * :class:`FileSpikeMonitor` * :class:`PopulationRateMonitor` ''' reinit_default_clock() # test that SpikeMonitor retrieves the spikes generator by SpikeGeneratorGroup spikes = [(0, 3 * ms), (1, 4 * ms), (0, 7 * ms)] G = SpikeGeneratorGroup(2, spikes, clock=defaultclock) M = SpikeMonitor(G) net = Network(G, M) net.run(10 * ms) assert (M.nspikes == 3) for (mi, mt), (i, t) in zip(M.spikes, spikes): assert (mi == i) assert (is_approx_equal(mt, t)) # test that the spiketimes are saved and accessed correctly assert(len(M[0]) == len(M.spiketimes[0]) == 2 and len(M[1]) == len(M.spiketimes[1]) == 1) assert((M.spiketimes[0] == M[0]).all() and (M.spiketimes[1] == M[1]).all()) # test that spiketimes are cleared on reinit M.reinit() assert (M.nspikes == 0) assert (len(M.spikes) == 0) assert (len(M[0]) == 0 and len(M[1]) == 0) assert (len(M.spiketimes[0]) == 0 and len(M.spiketimes[1]) == 0) # test that SpikeMonitor function calling usage does what you'd expect f_spikes = [] def f(spikes): if len(spikes): f_spikes.extend(spikes) G = SpikeGeneratorGroup(2, spikes, clock=defaultclock) M = SpikeMonitor(G, function=f) net = Network(G, M) reinit_default_clock() net.run(10 * ms) assert (f_spikes == [0, 1, 0]) # test that SpikeMonitors in MultiConnection objects do reinitialize # properly G = SpikeGeneratorGroup(2, spikes, clock=defaultclock) G2 = NeuronGroup(1, model='dv/dt = -v / (5*ms) : 1') C = Connection(G, G2, 'v', weight=0) M = SpikeMonitor(G) # Note: Because M and C share the source (G), they are replaced by a # MultiConnection net = Network(G, G2, C, M) net.run(10 * ms) net.reinit() # make sure that the reinit propagates to the SpikeMonitor assert (M.nspikes == 0) assert (len(M.spikes) == 0) assert (len(M[0]) == 0 and len(M[1]) == 0) assert (len(M.spiketimes[0]) == 0 and len(M.spiketimes[1]) == 0) # test interface for StateMonitor object dV = 'dV/dt = 0*Hz : 1.' G = NeuronGroup(3, model=dV, reset=0., threshold=10.) @network_operation(when='start') def f(clock): if clock.t >= 1 * ms: G.V = [1., 2., 3.] 
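    # record V in four ways, matching the ``record`` options described in the
    # docstring above: summary statistics only (M1), a single neuron (M2), a
    # list of neurons (M3) and every neuron (M4)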
M1 = StateMonitor(G, 'V') M2 = StateMonitor(G, 'V', record=0) M3 = StateMonitor(G, 'V', record=[0, 1]) M4 = StateMonitor(G, 'V', record=True) reinit_default_clock() net = Network(G, f, M1, M2, M3, M4) net.run(2 * ms) assert (is_within_absolute_tolerance(M2[0][0], 0.)) assert (is_within_absolute_tolerance(M2[0][-1], 1.)) assert (is_within_absolute_tolerance(M3[1][0], 0.)) assert (is_within_absolute_tolerance(M3[1][-1], 2.)) assert (is_within_absolute_tolerance(M4[2][0], 0.)) assert (is_within_absolute_tolerance(M4[2][-1], 3.)) assert_raises(IndexError, M1.__getitem__, 0) assert_raises(IndexError, M2.__getitem__, 1) assert_raises(IndexError, M3.__getitem__, 2) assert_raises(IndexError, M4.__getitem__, 3) for M in [M3, M4]: assert (is_within_absolute_tolerance(float(max(abs(M.times - M2.times))), float(0 * ms))) assert (is_within_absolute_tolerance(float(max(abs(M.times_ - M2.times_))), 0.)) assert (is_within_absolute_tolerance(float(M2.times[0]), float(0 * ms))) d = diff(M2.times) assert (is_within_absolute_tolerance(max(d), min(d))) assert (is_within_absolute_tolerance(float(max(d)), float(get_default_clock().dt))) # construct unbiased estimator from variances of recorded arrays v = array([ var(M4[0]), var(M4[1]), var(M4[2]) ]) * float(len(M4[0])) / float(len(M4[0]) - 1) m = array([0.5, 1.0, 1.5]) assert (is_within_absolute_tolerance(abs(max(M1.mean - m)), 0.)) assert (is_within_absolute_tolerance(abs(max(M1.var - v)), 0.)) assert (is_within_absolute_tolerance(abs(max(M1.std - v ** 0.5)), 0.)) # test when, timestep, clock for StateMonitor c = Clock(dt=0.1 * ms) cslow = Clock(dt=0.2 * ms) dV = 'dV/dt = 0*Hz : 1.' G = NeuronGroup(1, model=dV, reset=0., threshold=1., clock=c) @network_operation(when='start', clock=c) def f(): G.V = 2. M1 = StateMonitor(G, 'V', record=True, clock=cslow) M2 = StateMonitor(G, 'V', record=True, timestep=2, clock=c) M3 = StateMonitor(G, 'V', record=True, when='before_groups', clock=c) net = Network(G, f, M1, M2, M3, M4) net.run(2 * ms) print M1[0], M3[0] assert (2 * len(M1[0]) == len(M3[0])) assert (len(M1[0]) == len(M2[0])) for i in range(len(M1[0])): assert (is_within_absolute_tolerance(M1[0][i], M2[0][i])) assert (is_within_absolute_tolerance(M1[0][i], 0.)) for x in M3[0]: assert (is_within_absolute_tolerance(x, 2.)) reinit_default_clock() # for next test def test_counter(): ''' Tests the consistency of :class:`SpikeCounter`, :class:`PopulationSpikeCounter` and :class:`SpikeMonitor`. ''' reinit_default_clock() # Test whether all monitors count the same number of spikes spikes = [(0, 3 * ms), (1, 4 * ms), (0, 7 * ms)] G = SpikeGeneratorGroup(2, spikes, clock=defaultclock) M = SpikeMonitor(G) C = SpikeCounter(G) P = PopulationSpikeCounter(G) net = Network(G, M, C, P) net.run(10 * ms) # total number of spikes assert(M.nspikes == C.nspikes == P.nspikes == 3) # spikes per neuron assert(len(M[0]) == C[0] == C.count[0] == 2) assert(len(M[1]) == C[1] == C.count[1] == 1) # check for correct reinit net.reinit() assert(M.nspikes == C.nspikes == P.nspikes == 0) assert(len(M[0]) == C[0] == C.count[0] == 0) assert(len(M[1]) == C[1] == C.count[1] == 0) reinit_default_clock() # for next test def test_coincidencecounter(): """ Simulates an IF model with constant input current and checks the total number of coincidences with prediction. 
""" eqs = """ dV/dt = (-V+R*I)/tau : 1 I : 1 R : 1 tau : second """ reset = 0 threshold = 1 duration = 500 * ms input = 1.2 + .2 * randn(int(duration / defaultclock._dt)) delta = 4 * ms n = 10 def get_data(n): # Generates data from an IF neuron group = NeuronGroup(N=1, model=eqs, reset=reset, threshold=threshold, method='Euler', refractory=3 * delta) group.I = TimedArray(input, start=0 * second, dt=defaultclock.dt) group.R = 1.0 group.tau = 20 * ms M = SpikeMonitor(group) stM = StateMonitor(group, 'V', record=True) net = Network(group, M, stM) net.run(duration) data = M.spikes # train0 = M.spiketimes[0] reinit_default_clock() # trains = [] # for i in range(n): # trains += zip(i*ones(len(train0), dtype='int'), (array(train0) + (delta*c) * rand(len(train0)))) # trains.sort(lambda x,y: (2*int(x[1]>y[1])-1)) # trains = [(i,t*second) for i,t in trains] return data, stM.values#, trains data, data_voltage = get_data(n=n) train0 = [t for i, t in data] group = NeuronGroup(n, eqs, reset=reset, threshold=threshold, method='Euler') group.I = TimedArray(input, start=0 * second, dt=defaultclock.dt) group.R = 1.0 * ones(n) group.tau = 20 * ms * (1 + .1 * (2 * rand(n) - 1)) cc = CoincidenceCounter(source=group, data=([-1 * second] + train0 + [duration + 1 * second]), delta=delta) sm = SpikeMonitor(group) statem = StateMonitor(group, 'V', record=True) net = Network(group, cc, sm, statem) net.run(duration) reinit_default_clock() cpu_voltage = statem.values online_coincidences = cc.coincidences cpu_spike_count = array([len(sm[i]) for i in range(n)]) offline_coincidences = array([gamma_factor(sm[i], train0, delta=delta, normalize=False, dt=defaultclock.dt) for i in range(n)]) if use_gpu: # Compute gamma factor with GPU inp = array(input) I_offset = zeros(n, dtype=int) #spiketimes = array(hstack(([-1*second],train0,[data[-1][1]+1*second]))) spiketimes = array(hstack(([-1 * second], train0, [duration + 1 * second]))) spiketimes_offset = zeros(n, dtype=int) spikedelays = zeros(n) cd = CoincidenceCounter(source=group, data=data, delta=delta) group.V = 0.0 mf = GPUModelFitting(group, Equations(eqs), inp, I_offset, spiketimes, spiketimes_offset, spikedelays, delta) # Normal GPU launch mf.launch(duration) # GPU record of voltage and spikes # allV = [] # oldnc = 0 # oldsc = 0 # allcoinc = [] # all_pst = [] # all_nst = [] # allspike = [] # all_nsa = [] # all_lsa = [] # # for i in xrange(int(duration/defaultclock.dt)): # mf.kernel_func(int32(i), int32(i+1), # *mf.kernel_func_args, **mf.kernel_func_kwds) # autoinit.context.synchronize() # allV.append(mf.state_vars['V'].get()) # all_pst.append(mf.spiketimes.get()[mf.spiketime_indices.get()]) # all_nst.append(mf.spiketimes.get()[mf.spiketime_indices.get()+1]) # all_nsa.append(mf.next_spike_allowed_arr.get()[0]) # all_lsa.append(mf.last_spike_allowed_arr.get()[0]) # # self.next_spike_allowed_arr = gpuarray.to_gpu(ones(N, dtype=bool)) # # self.last_spike_allowed_arr = gpuarray.to_gpu(zeros(N, dtype=bool)) # nc = mf.coincidence_count[0] # if nc>oldnc: # oldnc = nc # allcoinc.append(i*defaultclock.dt) # sc = mf.spike_count[0] # if sc>oldsc: # oldsc = sc # allspike.append(i*defaultclock.dt) # # gpu_voltage = array(allV) cc = mf.coincidence_count gpu_spike_count = mf.spike_count cd._model_length = gpu_spike_count cd._coincidences = cc gpu_coincidences = cc print "Spike count" print "Data", len(data) print "CPU", cpu_spike_count if use_gpu: print "GPU", gpu_spike_count print "max error : %.1f" % max(abs(cpu_spike_count - gpu_spike_count)) print print "Offline" print 
offline_coincidences print print "Online" print online_coincidences print "max error : %.6f" % max(abs(online_coincidences - offline_coincidences)) if use_gpu: print print "GPU" print gpu_coincidences print "max error : %.6f" % max(abs(gpu_coincidences - offline_coincidences)) bad_neuron = nonzero(abs(gpu_coincidences - offline_coincidences) > 1e-10)[0] if len(bad_neuron) > 0: print "Bad neuron", bad_neuron, group.tau[bad_neuron[0]] print print return # plot(linspace(0,duration/second,len(data_voltage[0])), data_voltage[0], 'k', linewidth=.5) # plot(linspace(0,duration/second,len(cpu_voltage[0])), cpu_voltage[0], 'b') # plot(linspace(0,duration/second,len(gpu_voltage[:,0])), gpu_voltage[:,0], 'g') # show() times = linspace(0, duration / second, len(all_pst)) figure() plot(times, array(all_pst) * defaultclock.dt) plot(times, array(all_nst) * defaultclock.dt) plot(train0, train0, 'o') plot(allspike, allspike, 'x') plot(allcoinc, allcoinc, '+') plot(times, array(all_nsa) * times, '--') plot(times, array(all_lsa) * times, '-.') predicted_spikes = allspike target_spikes = train0 + [duration + 1 * second] i = 0 truecoinc = [] for pred_t in predicted_spikes: while target_spikes[i] <= pred_t + delta - 1e-10 * second: if abs(target_spikes[i] - pred_t) <= delta + 1e-10 * second: truecoinc.append((pred_t, target_spikes[i])) i += 1 break i += 1 print 'Truecoinc:', len(truecoinc) for t1, t2 in truecoinc: plot([t1, t2], [t1, t2], ':', color=(0.5, 0, 0), lw=3) show() # assert is_within_absolute_tolerance(online_gamma1,offline_gamma1) # assert is_within_absolute_tolerance(online_gamma2,offline_gamma2) #def test_vectorized_spikemonitor(): # eqs = """ # dV/dt = (-V+I)/tau : 1 # tau : second # I : 1 # """ # N = 30 # taus = 10*ms + 90*ms * rand(N) # duration = 1000*ms # input = 2.0 + 3.0 * rand(int(duration/defaultclock._dt)) # vgroup = VectorizedNeuronGroup(model=eqs, reset=0, threshold=1, # input=input, slices=2, overlap=200*ms, tau=taus) # M = SpikeMonitor(vgroup) # run(vgroup.duration) # raster_plot(M) # show() if __name__ == '__main__': test_spikemonitor() test_counter() # test_coincidencecounter() brian-1.3.1/brian/tests/testinterface/test_neurongroup.py000066400000000000000000000007521167451777000237230ustar00rootroot00000000000000from brian import * from brian.utils.approximatecomparisons import * from nose.tools import * def test(): reinit_default_clock() # PoissonGroup G = PoissonGroup(10, rates=10 * Hz) G1 = G[:5] G2 = G[5:] G1.rate = 5 * Hz G2.rate = 20 * Hz assert(is_approx_equal(G.rate[0], float(5 * Hz))) assert(is_approx_equal(G1.rate[0], float(5 * Hz))) assert(is_approx_equal(G2.rate[0], float(20 * Hz))) if __name__ == '__main__': test() brian-1.3.1/brian/tests/testinterface/test_reset.py000066400000000000000000000065741167451777000224720ustar00rootroot00000000000000from brian import * from nose.tools import * from brian.utils.approximatecomparisons import is_approx_equal def test(): """ :class:`Reset` ~~~~~~~~~~~~~~ Initialised as:: R = Reset(resetvalue=0*mvolt, state=0) After a neuron from a group with this reset fires, it will set the specified state variable to the given value. State variable 0 is customarily the membrane voltage, but this isn't required. :class:`FunReset` ~~~~~~~~~~~~~~~~~ Initialised as:: R = FunReset(resetfun) Where resetfun is a function taking two arguments, the group it is acting on, and the indices of the spikes to be reset. The following is an example reset function:: def f(P,spikeindices): P._S[0,spikeindices]=array([i/10. 
for i in range(len(spikeindices))]) :class:`Refractoriness` ~~~~~~~~~~~~~~~~~~~~~~~ Initialised as:: R = Refractoriness(resetvalue=0*mvolt,period=5*msecond,state=0) After a neuron from a group with this reset fires, the specified state variable of the neuron will be set to the specified resetvalue for the specified period. :class:`NoReset` ~~~~~~~~~~~~~~~~ Initialised as:: R = NoReset() Does nothing. """ reinit_default_clock() # test that reset works as expected # the setup below is that group G starts with state values (1,1,1,1,1,0,0,0,0,0) threshold # value 0.5 (which should be initiated for the first 5 neurons) and reset 0.2 so that the # final state should be (0.2,0.2,0.2,0.2,0.2,0,0,0,0,0) G = NeuronGroup(10, model=LazyStateUpdater(), reset=Reset(0.2), threshold=Threshold(0.5), init=(0.,)) G1 = G.subgroup(5) G2 = G.subgroup(5) G1.state(0)[:] = array([1.] * 5) G2.state(0)[:] = array([0.] * 5) net = Network(G) net.run(1 * msecond) assert (all(G1.state(0) < 0.21) and all(0.19 < G1.state(0)) and all(G2.state(0) < 0.01)) # check that function reset works as expected def f(P, spikeindices): P._S[0, spikeindices] = array([i / 10. for i in range(len(spikeindices))]) P.called_f = True G = NeuronGroup(10, model=LazyStateUpdater(), reset=FunReset(f), threshold=Threshold(2.), init=(3.,)) G.called_f = False net = Network(G) net.run(1 * msecond) assert (G.called_f) for i, v in enumerate(G.state(0)): assert (is_approx_equal(i / 10., v)) # check that refractoriness works as expected # the network below should start at V=15, immediately spike as it is above threshold=1, # then should be clamped at V=-.5 until t=1ms at which point it should quickly evolve # via the DE to a value near 0 (and certainly between -.5 and 0). We test that the # value at t=0.5 is exactly -.5 and the value at t=1.5 is between -0.4 and 0.1 (to # avoid floating point problems) dV = 'dV/dt=-V/(.1*msecond):1.' G = NeuronGroup(1, model=dV, threshold=1., reset=Refractoriness(-.5, 1 * msecond)) G.V = 15. net = Network(G) net.run(0.5 * msecond) for v in G.state(0): assert (is_approx_equal(v, -.5)) net.run(1 * msecond) for v in G.state(0): assert (-0.4 < v < 0.1) get_default_clock().reinit() if __name__ == '__main__': test() brian-1.3.1/brian/tests/testinterface/test_statistics.py000066400000000000000000000004441167451777000235300ustar00rootroot00000000000000from brian import * from nose.tools import * def test(): """ Statistics module """ assert total_correlation([],[]) is NaN assert correlogram([],[]) is NaN assert firing_rate([]) is NaN assert CV([]) is NaN if __name__ == '__main__': test() brian-1.3.1/brian/tests/testinterface/test_threshold.py000066400000000000000000000153251167451777000233360ustar00rootroot00000000000000from brian import * from nose.tools import * from operator import itemgetter from brian.utils.approximatecomparisons import is_approx_equal def test(): """ :class:`Threshold` ~~~~~~~~~~~~~~~~~~ Initialised as ``Threshold(threshold[,state=0])`` Causes a spike whenever the given state variable is above the threshold value. :class:`NoThreshold` ~~~~~~~~~~~~~~~~~~~~ Does nothing, initialised as ``NoThreshold()`` Functional thresholds ~~~~~~~~~~~~~~~~~~~~~ Initialised as:: FunThreshold(thresholdfun) SimpleFunThreshold(thresholdfun[,state=0]) Threshold functions return a boolean array the same size as the number of neurons in the group, where if the returned array is True at index i then neuron i fires. 
The arguments passed to the :class:`FunThreshold` function are the full array of state variables for the group in order. The argument passed to the :class:`SimpleFunThreshold` function is the array of length N corresponding to the given state variable. :class:`VariableThreshold` ~~~~~~~~~~~~~~~~~~~~~~~~~~ Initialised as ``VariableThreshold(threshold_state[,state=0])`` Causes a spike whenever the state variable defined by state is above the state variable defined by threshold_state. :class:`EmpiricalThreshold` ~~~~~~~~~~~~~~~~~~~~~~~~~~~ Initialised as:: EmpiricalThreshold(threshold[,refractory=1*msecond[,state=0[,clock]]]) Causes a spike when the given state variable exceeds the threshold value, but only if there has been no spike within the refractory period. Will use the given clock if specified, otherwise the standard guessing procedure is used. Poisson thresholds ~~~~~~~~~~~~~~~~~~ Initialised as:: PoissonThreshold([state=0]) HomogeneousPoissonThreshold([state=0]) The Poisson process gets the rates from the specified state variable, the homogeneous version uses the rates from the specified variable of the first neuron in the group. """ reinit_default_clock() # test that Threshold works as expected with default state G = NeuronGroup(3, model=LazyStateUpdater(), reset=Reset(0.), threshold=Threshold(1.), init=(0.,)) M = SpikeMonitor(G, True) net = Network(G, M) net.run(1 * msecond) assert (len(M.spikes) == 0) G.state(0)[:] = array([0.5, 1.5, 2.5]) net.run(1 * msecond) i, t = zip(*sorted(M.spikes, key=itemgetter(0))) assert (i == (1, 2)) for s in t: assert (is_approx_equal(s, 1 * msecond)) # test that Threshold works as expected with specified state G = NeuronGroup(3, model=LazyStateUpdater(numstatevariables=2), reset=Reset(0., state=1), threshold=Threshold(1., state=1), init=(0., 0.)) M = SpikeMonitor(G, True) net = Network(G, M) net.run(1 * msecond) assert (len(M.spikes) == 0) net.reinit() G.state(0)[:] = array([0.5, 1.5, 2.5]) net.run(1 * msecond) assert (len(M.spikes) == 0) net.reinit() G.state(1)[:] = array([0.5, 1.5, 2.5]) net.run(1 * msecond) i, t = zip(*sorted(M.spikes, key=itemgetter(0))) assert (i == (1, 2)) for s in t: assert (is_approx_equal(s, 0 * msecond)) # test that VariableThreshold works as expected G = NeuronGroup(3, model=LazyStateUpdater(numstatevariables=3), reset=Reset(0., state=1), threshold=VariableThreshold(2, state=1), init=(0., 0., 0.)) M = SpikeMonitor(G, True) net = Network(G, M) get_default_clock().reinit() G.state(2)[:] = array([1., 2., 3.]) # the thresholds G.state(1)[:] = array([4., 1., 2.]) # the values net.run(1 * msecond) i, t = zip(*sorted(M.spikes, key=itemgetter(0))) assert (i == (0,)) assert (is_approx_equal(t[0], 0 * second)) # test that FunThreshold works as expected def f(S0, S1): return S0 > S1 * S1 G = NeuronGroup(3, model=LazyStateUpdater(numstatevariables=2), reset=Reset(0.), threshold=FunThreshold(f), init=(0., 0.)) G.state(0)[:] = array([2., 3., 10.]) G.state(1)[:] = array([1., 2., 3.]) # the square root of the threshold values M = SpikeMonitor(G, True) net = Network(G, M) get_default_clock().reinit() net.run(1 * msecond) i, t = zip(*sorted(M.spikes, key=itemgetter(0))) assert (i == (0, 2)) for s in t: assert (is_approx_equal(s, 0 * msecond)) # test that SimpleFunThreshold works as expected def f(S): return S > 1. 
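    # (SimpleFunThreshold passes f only the array of values for the chosen state
    #  variable, state 0 by default, and expects back a boolean array with one
    #  entry per neuron)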
G = NeuronGroup(3, model=LazyStateUpdater(), reset=Reset(0.), threshold=SimpleFunThreshold(f), init=(0.,)) G.state(0)[:] = array([0.5, 1.5, 2.5]) M = SpikeMonitor(G, True) net = Network(G, M) get_default_clock().reinit() net.run(1 * msecond) i, t = zip(*sorted(M.spikes, key=itemgetter(0))) assert (i == (1, 2)) for s in t: assert (is_approx_equal(s, 0 * msecond)) # test that EmpiricalThreshold works as expected G = NeuronGroup(1, model=LazyStateUpdater(numstatevariables=2), reset=NoReset(), threshold=EmpiricalThreshold(1., refractory=0.5 * msecond, state=1), init=(0., 2.)) M = SpikeMonitor(G, True) net = Network(G, M) get_default_clock().reinit() net.run(1.6 * msecond) i, t = zip(*sorted(M.spikes, key=itemgetter(1))) assert (i == (0, 0, 0, 0)) for i, s in enumerate(t): assert (is_approx_equal(s, i * 0.5 * msecond)) # test that PoissonThreshold works init = float(1. / get_default_clock().dt) # should cause spiking at every time interval G = NeuronGroup(3, model=LazyStateUpdater(), reset=NoReset(), threshold=PoissonThreshold()) G.state(0)[:] = array([0., init, 0.]) M = SpikeMonitor(G, True) net = Network(G, M) net.run(1 * msecond) assert (len(M.spikes)) i, t = zip(*sorted(M.spikes, key=itemgetter(1))) assert (all(j == 1 for j in i)) # test that HomogeneousPoissonThreshold works init = float(1. / get_default_clock().dt) # should cause spiking at every time interval G = NeuronGroup(3, model=LazyStateUpdater(), reset=NoReset(), threshold=HomogeneousPoissonThreshold()) M = SpikeMonitor(G, True) net = Network(G, M) G.state(0)[:] = array([0., init, 0.]) # should do nothing, because only first neuron is looked at net.run(1 * msecond) assert (len(M.spikes) == 0) G.state(0)[:] = array([init, 0., 0.]) # should do nothing, because only first neuron is looked at net.run(1 * msecond) # we actually cannot make any assertion about the behaviour of this system, other than # that it should run correctly if __name__ == '__main__': test() brian-1.3.1/brian/tests/testinterface/test_units.py000066400000000000000000000141011167451777000224730ustar00rootroot00000000000000from brian import * from nose.tools import * import numpy def test(): """ Names ~~~~~ The following units should exist: metre, kilogram, second, amp, kelvin, mole, candle radian, steradian, hertz, newton, pascal, joule, watt, coulomb, volt, farad, ohm, siemens, weber, tesla, henry, celsius, lumen, lux, becquerel, gray, sievert, katal, gram, gramme In addition, all versions of these units scaled by the following prefixes should exist (in descending order of size): Y, Z, E, P, T, G, M, k, h, da, d, c, m, u, n, p, f, a, z, y And, all of the above units with the suffixes 2 and 3 exist, and refer to the unit to the power of 2 and 3, e.g. ``metre3 = metre**3``. Arithmetic ~~~~~~~~~~ The following operations on :class:`Quantity` objects require that the operands have the same dimensions: +, -, <, <=, >, >=, ==, != The following operations on :class:`Quantity` objects work with any pair of operands: / and * In addition, ``-x`` and ``abs(x)`` will work on any :class:`Quantity` ``x``, and will return values with the same dimension as their argument. The power operation ``x**y`` requires that ``y`` be dimensionless. Casting ~~~~~~~ The three rules that define the casting operations for :class:`Quantity` object are: 1. :class:`Quantity` op :class:`Quantity` = :class:`Quantity`: Performs dimension consistency check if appropriate. 2. Scalar op :class:`Quantity` = :class:`Quantity`: Assumes that the scalar is dimensionless 3. 
other op :class:`Quantity` = other: The :class:`Quantity` object is downcast to a ``float`` Scalar types are 1 dimensional number types, including ``float``, ``int``, etc. but not ``array``. The :class:`Quantity` class is a derived class of ``float``, so many other operations will also downcast to ``float``. For example, ``sin(x)`` where ``x`` is a quantity will return ``sin(float(x))`` without doing any dimension checking. Although see the Brian.unitsafefunctions module for a way round this. It is better to be explicit about casting if you can be. TODO: more details on ``numpy.array``/:class:`Quantity` operations? :func:`check_units` decorator ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The decorator :func:`check_units` can be used to check that the arguments passed to a function have the right units, e.g.:: @check_units(I=amp,V=volt) def find_resistance(I,V): return V/I will work if you try ``find_resistance(1*amp,1*volt)`` but raise an exception if you try ``find_resistance(1,1)`` say. """ reinit_default_clock() # the following units should exist: units_which_should_exist = [ metre, meter, kilogram, second, amp, kelvin, mole, candle, radian, steradian, hertz, newton, pascal, joule, watt, coulomb, volt, farad, ohm, siemens, weber, tesla, henry, celsius, lumen, lux, becquerel, gray, sievert, katal, gram, gramme ] # scaled versions of all these units should exist (we just check farad as an example) some_scaled_units = [ Yfarad, Zfarad, Efarad, Pfarad, Tfarad, Gfarad, Mfarad, kfarad, hfarad, dafarad, dfarad, cfarad, mfarad, ufarad, nfarad, pfarad, ffarad, afarad, zfarad, yfarad ] # and check that the above is in descending order of size import copy sorted_scaled_units = copy.copy(some_scaled_units) sorted_scaled_units.sort(reverse=True) assert some_scaled_units == sorted_scaled_units # some powered units powered_units = [ cmetre2, Yfarad3 ] # check that operations requiring consistent units work with consistent units a = 1 * kilogram b = 2 * kilogram c = [ a + b, a - b, a < b, a <= b, a > b, a >= b, a == b, a != b ] # check that given inconsistent units they raise an exception from operator import add, sub, mul, div, lt, le, gt, ge, eq, ne tryops = [add, sub, lt, le, gt, ge, eq, ne] a = 1 * kilogram b = 1 * second def inconsistent_operation(a, b, op): return op(a, b) for op in tryops: assert_raises(DimensionMismatchError, inconsistent_operation, a, b, op) # check that operations not requiring consistent units work a = 1 * kilogram b = 1 * second c = [ a * b, a / b ] # check that - and abs give results with the same dimensions assert (-a).has_same_dimensions(a) assert abs(a).has_same_dimensions(a) assert - abs(a) < abs(a) # well why not, this should be true # check that pow requires the index to be dimensionless a = (1 * kilogram) ** 0.352 def inconsistent_power(a, b): return a ** b a = 1 * kilogram b = 1 * second assert_raises(DimensionMismatchError, inconsistent_power, a, b) # check casting rule 1 a = 1 * kilogram b = 2 * kilogram for op in [add, sub, mul, div]: assert isinstance(op(a, b), Quantity) # check casting rule 2 a = 1 b = 1 * kilogram for op in [mul, div]: assert isinstance(op(a, b), Quantity) and isinstance(op(b, a), Quantity) for op in [add, sub]: assert_raises(DimensionMismatchError, inconsistent_operation, a, b, op) for op in [add, sub, mul, div]: assert isinstance(op(a, b / b), Quantity) and isinstance(op(b / b, a), Quantity) # check casting rule 3 assert isinstance(numpy.array([1, 2]) * (1 * kilogram), numpy.ndarray) assert isinstance((1 * kilogram) * numpy.array([1, 2]), numpy.ndarray) 
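    # (rule 3 is why plain float-based functions silently drop units, e.g.
    #  math.sin(3*second) just computes sin(3.0); see brian.unitsafefunctions
    #  for versions that check dimensions)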
# check_units decorator @check_units(I=amp, V=volt) def find_resistance(I, V): return V / I R = find_resistance(1 * amp, 1 * volt) assert_raises(DimensionMismatchError, find_resistance, 1, 1) if __name__ == '__main__': test() brian-1.3.1/brian/tests/testinterface/test_unitsafefunctions.py000066400000000000000000000024621167451777000251070ustar00rootroot00000000000000from brian import * from nose.tools import * import numpy def test(): """ Each of the following functions f(x) should use units if they are passed a :class:`Quantity` object or fall back on their numpy versions otherwise. sqrt, log, exp, sin, cos, tan, arcsin, arccos, arctan, sinh, cosh, tanh, arcsinh, arccosh, arctanh """ reinit_default_clock() # check sqrt behaves as expected x = 3 * second z = numpy.array([3., 2.]) assert (have_same_dimensions(sqrt(x), second ** 0.5)) assert (isinstance(sqrt(z), numpy.ndarray)) # check the return types are right for all other functions x = 0.5 * second / second funcs = [ sqrt, log, exp, sin, cos, tan, arcsin, arccos, arctan, sinh, cosh, tanh, arcsinh, arccosh, arctanh ] for f in funcs: assert (isinstance(f(x), Quantity)) assert (isinstance(f(z), numpy.ndarray)) # check that attempting to use these functions on something with units fails funcs = [ log, exp, sin, cos, tan, arcsin, arccos, arctan, sinh, cosh, tanh, arcsinh, arccosh, arctanh ] x = 3 * second for f in funcs: assert_raises(DimensionMismatchError, f, x) if __name__ == '__main__': test() brian-1.3.1/brian/tests/testutils/000077500000000000000000000000001167451777000171235ustar00rootroot00000000000000brian-1.3.1/brian/tests/testutils/__init__.py000066400000000000000000000000021167451777000212240ustar00rootroot00000000000000 brian-1.3.1/brian/tests/testutils/test_sparse.py000066400000000000000000000012531167451777000220320ustar00rootroot00000000000000from brian.connections import SparseMatrix from numpy import zeros, array from nose.tools import * def test_sparse_matrix(): x = SparseMatrix((10, 10)) z = zeros((10, 10), dtype=int) x[0, 4] = 3 x[1, :] = 5 x[2, []] = [] x[3, [0]] = [6] x[4, [0, 1]] = [7, 8] x[5, 2:4] = [1, 2] x[6:8, 4:7] = 1 # same thing with z to compare z[0, 4] = 3 z[1, :] = 5 z[2, []] = [] z[3, [0]] = [6] z[4, [0, 1]] = [7, 8] z[5, 2:4] = [1, 2] z[6:8, 4:7] = 1 # check the values are correct y = array(x.todense(), dtype=int) assert (y==z).all() if __name__ == '__main__': test_sparse_matrix() brian-1.3.1/brian/tests/testutils/test_statistics.py000066400000000000000000000011121167451777000227210ustar00rootroot00000000000000from brian.tools.statistics import * from brian.units import * from brian.stdunits import * from numpy import * from numpy.random import * from nose.tools import * def test_group_correlations(): rate = 10.0 n = 100 poisson = cumsum(exponential(1 / rate, n)) spikes = [(0, t * second) for t in poisson] spikes.extend([(0, t * second + 1 * ms) for t in poisson]) spikes = sort_spikes(spikes) S, tauc = group_correlations(spikes, delta=3 * ms) assert abs(tauc[0] - .001) < .0005 if __name__ == '__main__': test_group_correlations() brian-1.3.1/brian/threshold.py000066400000000000000000000431041167451777000162710ustar00rootroot00000000000000# ---------------------------------------------------------------------------------- # Copyright ENS, INRIA, CNRS # Contributors: Romain Brette (brette@di.ens.fr) and Dan Goodman (goodman@di.ens.fr) # # Brian is a computer program whose purpose is to simulate models # of biological neural networks. 
# # This software is governed by the CeCILL license under French law and # abiding by the rules of distribution of free software. You can use, # modify and/ or redistribute the software under the terms of the CeCILL # license as circulated by CEA, CNRS and INRIA at the following URL # "http://www.cecill.info". # # As a counterpart to the access to the source code and rights to copy, # modify and redistribute granted by the license, users are provided only # with a limited warranty and the software's author, the holder of the # economic rights, and the successive licensors have only limited # liability. # # In this respect, the user's attention is drawn to the risks associated # with loading, using, modifying and/or developing or reproducing the # software by the user in light of its specific status of free software, # that may mean that it is complicated to manipulate, and that also # therefore means that it is reserved for developers and experienced # professionals having in-depth computer knowledge. Users are therefore # encouraged to load and test the software's suitability as regards their # requirements in conditions enabling the security of their systems and/or # data to be ensured and, more generally, to use and operate it in the # same conditions as regards security. # # The fact that you are presently reading this means that you have had # knowledge of the CeCILL license and that you accept its terms. # ---------------------------------------------------------------------------------- # ''' Threshold mechanisms ''' __all__ = ['Threshold', 'FunThreshold', 'VariableThreshold', 'NoThreshold', 'EmpiricalThreshold', 'SimpleFunThreshold', 'PoissonThreshold', 'HomogeneousPoissonThreshold', 'StringThreshold'] from numpy import where, array, zeros, Inf from units import * from itertools import count from clock import guess_clock, get_default_clock, reinit_default_clock from random import sample # Python standard random module (sample is different) from scipy import random from numpy import clip import bisect from scipy import weave from globalprefs import * import warnings from utils.approximatecomparisons import is_approx_equal from log import * import inspect from inspection import * import re import numpy CThreshold = PythonThreshold = None def select_threshold(expr, eqs, level=0): ''' Automatically selects the appropriate Threshold object from a string. Matches the following patterns: var_name > or >= const : Threshold var_name > or >= var_name : VariableThreshold others : StringThreshold ''' global CThreshold, PythonThreshold use_codegen = get_global_preference('usecodegen') and get_global_preference('usecodegenthreshold') use_weave = get_global_preference('useweave') and get_global_preference('usecodegenweave') if use_codegen: if CThreshold is None: from experimental.codegen.threshold import CThreshold, PythonThreshold if use_weave: log_warn('brian.threshold', 'Using codegen CThreshold') return CThreshold(expr, level=level + 1) else: log_warn('brian.threshold', 'Using codegen PythonThreshold') return PythonThreshold(expr, level=level + 1) # plan: # - see if it matches A > B or A >= B, if not select StringThreshold # - check if A, B both match diffeq variable names, and if so # select VariableThreshold # - check that A is a variable name, if not select StringThreshold # - extract all the identifiers from B, and if none of them are # callable, assume it is a constant, try to eval it and then use # Threshold. If not, or if eval fails, use StringThreshold. # This misses the case of e.g. 
V>10*mV*exp(1) because exp will be # callable, but in general a callable means that it could be # non-constant. expr = expr.strip() eqs.prepare() ns = namespace(expr, level=level + 1) s = re.search(r'^\s*(\w+)\s*>=?(.+)', expr) if not s: return StringThreshold(expr, level=level + 1) A = s.group(1) B = s.group(2).strip() if A not in eqs._diffeq_names: return StringThreshold(expr, level=level + 1) if B in eqs._diffeq_names: return VariableThreshold(B, A) try: vars = get_identifiers(B) except SyntaxError: return StringThreshold(expr, level=level + 1) all_vars = eqs._eq_names + eqs._diffeq_names + eqs._alias.keys() + ['t'] for v in vars: if v not in ns or v in all_vars or callable(ns[v]): return StringThreshold(expr, level=level + 1) try: val = eval(B, ns) except: return StringThreshold(expr, level=level + 1) return Threshold(val, A) class Threshold(object): ''' All neurons with a specified state variable above a fixed value fire a spike. **Initialised as:** :: Threshold([threshold=1*mV[,state=0]) with arguments: ``threshold`` The value above which a neuron will fire. ``state`` The state variable which is checked. **Compilation** Note that if the global variable ``useweave`` is set to ``True`` then this function will use a ``C++`` accelerated version which runs approximately 3x faster. ''' def __init__(self, threshold=1 * mvolt, state=0): self.threshold = threshold self.state = state self._useaccel = get_global_preference('useweave') self._cpp_compiler = get_global_preference('weavecompiler') self._extra_compile_args = ['-O3'] if self._cpp_compiler == 'gcc': self._extra_compile_args += get_global_preference('gcc_options') # ['-march=native', '-ffast-math'] def __call__(self, P): ''' Checks the threshold condition and returns spike times. P is the neuron group. Note the accelerated version runs 3x faster. ''' if self._useaccel: spikes = P._spikesarray V = P.state_(self.state) Vt = float(self.threshold) N = int(len(P)) code = """ int numspikes=0; for(int i=0;iVt) spikes(numspikes++) = i; return_val = numspikes; """ # WEAVE NOTE: set the environment variable USER if your username has a space # in it, say set USER=DanGoodman if your username is Dan Goodman, this is # because weave uses this to create file names, but doesn't correctly send these # values to the compiler, causing problems. numspikes = weave.inline(code, ['spikes', 'V', 'Vt', 'N'], compiler=self._cpp_compiler, type_converters=weave.converters.blitz, extra_compile_args=self._extra_compile_args) # WEAVE NOTE: setting verbose=True in the weave.inline function may help in # finding errors. return spikes[0:numspikes] else: return ((P.state_(self.state) > self.threshold).nonzero())[0] def __repr__(self): return 'Threshold mechanism with value=' + str(self.threshold) + " acting on state " + str(self.state) class StringThreshold(Threshold): ''' A threshold specified by a string expression. Initialised with arguments: ``expr`` The expression used to test whether a neuron has fired a spike. Should be a single statement that returns a value. For example, ``'V>50*mV'`` or ``'V>Vt'``. ``level`` How many levels up in the calling sequence to look for names in the namespace. Usually 0 for user code. 
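    **Sample usage** (a sketch; ``eqs`` is assumed to define the variables
    ``V`` and ``Vt`` used in the expression)::

        group = NeuronGroup(100, model=eqs, reset=0*mV,
                            threshold=StringThreshold('V>Vt+5*mV'))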
''' def __init__(self, expr, level=0): self._namespace, unknowns = namespace(expr, level=level + 1, return_unknowns=True) self._vars = unknowns self._expr = expr self._code = compile(expr, "StringThreshold", "eval") class Replacer(object): def __init__(self, func, n): self.n = n self.func = func def __call__(self): return self.func(self.n) self._Replacer = Replacer def __call__(self, P): for var in self._vars: # couldn't we do this just once? self._namespace[var] = P.state(var) self._namespace['rand'] = self._Replacer(numpy.random.rand, len(P)) self._namespace['randn'] = self._Replacer(numpy.random.randn, len(P)) return eval(self._code, self._namespace).nonzero()[0] def __repr__(self): return "String threshold" class NoThreshold(Threshold): ''' No thresholding mechanism. **Initialised as:** :: NoThreshold() ''' def __init__(self): pass def __call__(self, P): return [] def __repr__(self): return "No Threshold" class FunThreshold(Threshold): ''' Threshold mechanism with a user-specified function. **Initialised as:** :: FunThreshold(thresholdfun) where ``thresholdfun`` is a function with one argument, the 2d state value array, where each row is an array of values for one state, of length N for N the number of neurons in the group. For efficiency, data are numpy arrays and there is no unit checking. Note: if you only need to consider one state variable, use the :class:`SimpleFunThreshold` object instead. ''' def __init__(self, thresholdfun): self.thresholdfun = thresholdfun # Threshold function def __call__(self, P): ''' Checks the threshold condition and returns spike times. P is the neuron group. ''' spikes = (self.thresholdfun(*P._S).nonzero())[0] return spikes def __repr__(self): return 'Functional threshold mechanism' class SimpleFunThreshold(FunThreshold): ''' Threshold mechanism with a user-specified function. **Initialised as:** :: FunThreshold(thresholdfun[,state=0]) with arguments: ``thresholdfun`` A function with one argument, the array of values for the specified state variable. For efficiency, this is a numpy array, and there is no unit checking. ``state`` The name or number of the state variable to pass to the threshold function. **Sample usage:** :: FunThreshold(lambda V:V>=Vt,state='V') ''' def __init__(self, thresholdfun, state=0): self.thresholdfun = thresholdfun # Threshold function self.state = state def __call__(self, P): ''' Checks the threshold condition and returns spike times. P is the neuron group. ''' spikes = (self.thresholdfun(P.state_(self.state)).nonzero())[0] #P.LS[spikes]=P.clock.t # Time of last spike (this line should be general) return spikes class VariableThreshold(Threshold): ''' Threshold mechanism where one state variable is compared to another. **Initialised as:** :: VariableThreshold([threshold_state=1[,state=0]]) with arguments: ``threshold_state`` The state holding the lower bound for spiking. ``state`` The state that is checked. If ``x`` is the value of state variable ``threshold_state`` on neuron ``i`` and ``y`` is the value of state variable ``state`` on neuron ``i`` then neuron ``i`` will fire if ``y>x``. Typically, using this class is more time efficient than writing a custom thresholding operation. **Compilation** Note that if the global variable ``useweave`` is set to ``True`` then this function will use a ``C++`` accelerated version. 
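    **Sample usage** (a sketch, assuming the group's equations define both
    ``V`` and ``Vt``)::

        threshold = VariableThreshold('Vt', state='V')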
''' def __init__(self, threshold_state=1, state=0): self.threshold_state = threshold_state # State variable representing the threshold self.state = state self._useaccel = get_global_preference('useweave') self._cpp_compiler = get_global_preference('weavecompiler') self._extra_compile_args = ['-O3'] if self._cpp_compiler == 'gcc': self._extra_compile_args += get_global_preference('gcc_options') # ['-march=native', '-ffast-math'] def __call__(self, P): ''' Checks the threshold condition, resets and returns spike times. P is the neuron group. ''' if self._useaccel: spikes = P._spikesarray V = P.state_(self.state) Vt = P.state_(self.threshold_state) N = int(len(P)) code = """ int numspikes=0; for(int i=0;iVt(i)) spikes(numspikes++) = i; return_val = numspikes; """ numspikes = weave.inline(code, ['spikes', 'V', 'Vt', 'N'], \ compiler=self._cpp_compiler, type_converters=weave.converters.blitz, extra_compile_args=self._extra_compile_args) return spikes[0:numspikes] else: return ((P.state_(self.state) > P.state_(self.threshold_state)).nonzero())[0] def __repr__(self): return 'Variable threshold mechanism' class EmpiricalThreshold(Threshold): ''' Empirical threshold, e.g. for Hodgkin-Huxley models. In empirical models such as the Hodgkin-Huxley method, after a spike neurons are not instantaneously reset, but reset themselves as part of the dynamical equations defining their behaviour. This class can be used to model that. It is a simple threshold mechanism that checks e.g. ``V>=Vt`` but it only does so for neurons that haven't recently fired (giving the dynamical equations time to reset the values naturally). It should be used in conjunction with the :class:`NoReset` object. **Initialised as:** :: EmpiricalThreshold([threshold=1*mV[,refractory=1*ms[,state=0[,clock]]]]) with arguments: ``threshold`` The lower bound for the state variable to induce a spike. ``refractory`` The time to wait after a spike before checking for spikes again. ``state`` The name or number of the state variable to check. ``clock`` If this object is being used for a :class:`NeuronGroup` which doesn't use the default clock, you need to specify its clock here. ''' @check_units(refractory=second) def __init__(self, threshold=1 * mvolt, refractory=1 * msecond, state=0, clock=None): self.threshold = threshold # Threshold value self.state = state clock = guess_clock(clock) self.refractory = int(refractory / clock.dt) # this assumes that if the state stays over the threshold, and say # refractory=5ms the user wants spiking at 0ms 5ms 10ms 15ms etc. if is_approx_equal(self.refractory * clock.dt, refractory) and self.refractory > 0: self.refractory -= 1 def __call__(self, P): ''' Checks the threshold condition, resets and returns spike times. P is the neuron group. ''' #spikes=where((P._S[0,:]>self.Vt) & ((P.LS self.threshold spikescond[P.LS[0:self.refractory]] = False return spikescond.nonzero()[0] #P.LS[spikes]=P.clock.t # Time of last spike (this line should be general) #return spikes def __repr__(self): return 'Empirical threshold with value=' + str(self.threshold) + " acting on state " + str(self.state) class PoissonThreshold(Threshold): ''' Poisson threshold: a spike is produced with some probability S[0]*dt, or S[state]*dt. 
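# Minimal numpy sketch of the test just described: at each time step, every neuron whose
# state variable holds a rate fires with probability rate*dt (the numbers are illustrative).
import numpy as np
rates = np.array([20., 50., 100.])                        # Hz, one entry per neuron
dt = 0.1e-3                                               # time step in seconds
spiking = (np.random.rand(len(rates)) < rates * dt).nonzero()[0]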
''' # TODO: check the state has units in Hz def __init__(self, state=0): self.state = state def __call__(self, P): return (random.rand(len(P)) < P.state_(self.state)[:] * P.clock.dt).nonzero()[0] def __repr__(self): return 'Poisson threshold' class HomogeneousPoissonThreshold(PoissonThreshold): ''' Poisson threshold for spike trains with identical rates. The underlying NeuronGroup has only one state variable. N.B.: "homogeneous" is meant in the spatial (not temporal) sense, the rate may change in time. ''' def __call__(self, P): # N.B.: is "float" necessary? # Other possibility to avoid sorting: use an exponential distribution n = random.poisson(float(len(P) * P.clock.dt * clip(P._S[self.state][0], 0, Inf))) # number of spikes if n > len(P): n = len(P) log_warn('brian.HomogeneousPoissonThreshold', 'HomogeneousPoissonThreshold cannot generate enough spikes.') spikes = sample(xrange(len(P)), n) spikes.sort() # necessary only for subgrouping return spikes brian-1.3.1/brian/timedarray.py000066400000000000000000000321131167451777000164340ustar00rootroot00000000000000from clock import * from network import * import neurongroup from units import second import numpy import warnings try: import pylab except: warnings.warn("Couldn't import pylab.") __all__ = ['TimedArray', 'TimedArraySetter', 'set_group_var_by_array'] class TimedArray(numpy.ndarray): ''' An array where each value has an associated time. Initialisation arguments: ``arr`` The values of the array. The first index is the time index. Any array shape works in principle, but only 1D/2D arrays are supported (other shapes may work, but may not). The idea is to, have the shapes (T,) or (T, N) for T the number of time steps and N the number of neurons. ``times`` A 1D array of times whose length should be the same as the first dimension of ``arr``. Usually it is preferable to specify a clock rather than an array of times, but this doesn't work in the case where the time intervals are not fixed. ``clock`` Specify the times corresponding to array values by a clock. The ``t`` attribute of the clock is the time of the first value in the array, and the time interval is the ``dt`` attribute of the clock. If neither ``times`` nor ``clock`` is specified, a clock will be guessed in the usual way (see :class:`Clock`). ``start, dt`` Rather than specifying a clock, you can specify the start time and time interval explicitly. Technically, this is useful because it doesn't create a :class:`Clock` object which can lead to ambiguity about which clock is the default. If dt is specified and start is not, start is assumed to be 0. Note that if the clock, or start time and dt, of the array should be the default clock values, then you should not specify clock, start or dt (see Technical notes below). Arbitrary slicing of the array is supported, but the clock will only be preserved where the intervals can be guaranteed to be fixed, that is except for the case where lists or numpy arrays are used on the time index. Timed arrays can be called as if they were a function of time if the array times are based on a clock (but not if the array times are arbitrary as the look up costs would be excessive). 
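# Sketch of the clock-based call semantics just described (assumes the usual
# `from brian import *`): ta(t) returns the sample associated with time t.
from numpy import array
ta = TimedArray(array([0., 1., 2., 3.]), dt=1*ms)   # values at t = 0, 1, 2 and 3 ms
print(ta(2*ms))                                     # -> 2.0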
If ``x(t)`` is called where ``times[i]<=t 1: newdt = newtimes[1] - newtimes[0] newclock = Clock(t=newtimes[0] * second, dt=newdt * second) return TimedArray(x, clock=newclock) else: return TimedArray(x, self.times[item]) if isinstance(item, int): return TimedArray(x, self.times[item:item + 1]) if isinstance(item, tuple): item0 = item[0] times = self.times[item0] if isinstance(item0, slice) and self.clock is not None and hasattr(times, '__len__') and len(times) > 1: newdt = times[1] - times[0] newclock = Clock(t=times[0] * second, dt=newdt * second) return TimedArray(x, clock=newclock) if not isinstance(times, numpy.ndarray): times = numpy.array([times]) return TimedArray(x, times) times = self.times[item] if not isinstance(times, numpy.ndarray): times = numpy.array([times]) return TimedArray(x, times) def __getslice__(self, start, end): # Just use __getitem__ for this (it's been deprecated since Python 2.0 # but you need to implement it because the base class does) return self.__getitem__(slice(start, end)) def plot(self, *args, **kwds): if self.size > self.times.size and len(self.shape) == 2: for i in xrange(self.shape[1]): kwds['label'] = str(i) self[:, i].plot(*args, **kwds) else: pylab.plot(self.times, self, *args, **kwds) def __call__(self, t): global tlast if self.clock is None: raise ValueError('Can only call timed arrays if they are based on a clock.') else: if isinstance(t, (list, tuple)): t = numpy.array(t) if isinstance(t, neurongroup.TArray): # In this case, we know that t = ones(N)*t so we just use the first value t = t[0] elif isinstance(t, numpy.ndarray): if len(self.shape) > 2: raise ValueError('Calling TimedArray with array valued t only supported for 1D or 2D TimedArray.') if len(self.shape) == 2 and len(t) != self.shape[1]: raise ValueError('Calling TimedArray with array valued t on 2D TimedArray requires len(t)=arr.shape[1]') #t = numpy.array((t-self._t_init)/self._dt, dtype=int) t = numpy.array(numpy.rint((t - self._t_init) / self._dt), dtype=int) t[t < 0] = 0 t[t >= len(self.times)] = len(self.times) - 1 if len(self.shape) == 1: return numpy.asarray(self)[t] return numpy.asarray(self)[t, numpy.arange(len(t))] t = float(t) ot = t #t = int((t-self._t_init)/self._dt) t = int(numpy.rint((t - self._t_init) / self._dt)) # if t-tlast!=1: # print 'int t:', t, tlast # print 't:', repr(ot) # print 't_init:', repr(self._t_init) # print 't/dt:', repr((ot-self._t_init)/self._dt) # tlast = t if t < 0: t = 0 if t >= len(self.times): t = len(self.times) - 1 return numpy.asarray(self)[t] tlast = 0 class TimedArraySetter(NetworkOperation): ''' Sets NeuronGroup values with a TimedArray. At the beginning of each update step, this object will set the values of a given state variable of a group with the value from the array corresponding to the current simulation time. Initialisation arguments: ``group`` The :class:`NeuronGroup` to which the variable belongs. ``var`` The name or index of the state variable in the group. ``arr`` The array of values used to set the variable in the group. Can be an array or a :class:`TimedArray`. If it is an array, you should specify the ``times`` or ``clock`` arguments, or leave them blank to use the default clock. ``times`` Times corresponding to the array values, see :class:`TimedArray` for more details. ``clock`` The clock for the :class:`NetworkOperation`. If none is specified, use the group's clock. If ``arr`` is not a :class:`TimedArray` then this clock will be used to initialise it too. 
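# Sketch of the setter described here (hypothetical group G with a dimensionless state
# variable x; assumes the usual `from brian import *`): one array value is written into the
# group at the start of every time step. The same thing can be done in a single call with
# set_group_var_by_array, defined further below.
from numpy.random import randn
drive = TimedArray(randn(10000), dt=defaultclock.dt)
setter = TimedArraySetter(G, 'x', drive)   # a NetworkOperation, picked up by run() as usual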
``start, dt`` Can specify these instead of a clock (see :class:`TimedArray` for details). ``when`` The standard :class:`NetworkOperation` ``when`` keyword, although note that the default value is 'start'. ''' def __init__(self, group, var, arr, times=None, clock=None, start=None, dt=None, when='start'): if clock is None: if isinstance(arr, TimedArray): self.clock = clock = arr.clock else: self.clock = clock = group.clock else: self.clock = clock self.when = when self.group = group self.var = var if not isinstance(arr, TimedArray): arr = TimedArray(arr, times=times, clock=clock, start=start, dt=dt) self.arr = arr self.reinit() def __call__(self): if self.arr.clock is None: # in this case, the time intervals need not be fixed so we # have to step through the array until we find the appropriate # one tcur = self.clock._t while True: if self._cur_i == len(self.arr.times) - 1: self.group.state_(self.var)[:] = self.arr[self._cur_i] return ti_now = self.arr.times[self._cur_i] ti_next = self.arr.times[self._cur_i + 1] if ti_next >= tcur: self.group.state_(self.var)[:] = self.arr[self._cur_i] return self._cur_i += 1 else: self.group.state_(self.var)[:] = self.arr(self.clock._t) def reinit(self): if self.arr.clock is None: self._cur_i = 0 def set_group_var_by_array(group, var, arr, times=None, clock=None, start=None, dt=None): ''' Sets NeuronGroup values with a TimedArray. Creates a :class:`TimedArraySetter`, see that class for details. ''' array_setter = TimedArraySetter(group, var, arr, times=times, clock=clock, start=start, dt=dt) group._owner.contained_objects.append(array_setter) brian-1.3.1/brian/tools/000077500000000000000000000000001167451777000150615ustar00rootroot00000000000000brian-1.3.1/brian/tools/__init__.py000066400000000000000000000002331167451777000171700ustar00rootroot00000000000000from io import * from tabulate import * from statistics import * from parameters import * from correlatedspikes import * from remotecontrol import * brian-1.3.1/brian/tools/autodiff.py000066400000000000000000000340441167451777000172410ustar00rootroot00000000000000''' Automatic differentiation. (C) R. Brette 2008 This file is released under the terms of the BSD licence -------------------------------------------------------- Method: forward accumulation by operator overloading. Use:: df=differentiate(f) # where f is a single-variable function y=differentiate(f,x0) # differentiate at x=x0 y=differentiate(f,x0,order=2) # second-order derivative at x=x0 y=differentiate(f,(a,b),order=(1,2)) # d2f/dxdy(a,b) y=differentiate(f,order=(1,2)) # d2f/dxdy a0,a1,a2=taylor_series(f,x0,order=2) # 2nd order Taylor series of f at x0 J=gradient(f,(a,b)) # gradient of f at (a,b) (returns a list) H=hessian(f,(a,b)) # hessian of f at (a,b) (returns a numpy array) TODO: * replace x by x0 * merge HigherDifferentiable in Differentiable? * simplify code * gradient, taylor_series, hessian: x0 should be optional * iadd, etc. (more operators) * NonDifferentiableException and maybe use this on conditions (when cmp=0) * Differential operators: Laplacian... * gradient, taylor_series, hessian: x0 should be optional * test with units, functions Rp->something, alternative differential operators, differential (f:Rp->Rn), complex numbers, arithmetic derivative * remove most dependency with numpy (except Hessian) * implicitly defined functions If several partial derivatives of the same function need to be calculated, it is faster to use the lower level object Differentiable. 
Example:: y=f(Differentiable('x',3),Differentiable('y',2)) will return a Differentiable object with y.val=f(3,2) and y.diff={'x': df/dx(3,2), 'y': df/dy(3,2)} ''' from numpy import exp, sin, cos, log, zeros # todo: look in frame instead? import types __all__ = ['differentiate', 'Differentiable', 'taylor_series', 'gradient', 'hessian'] def differentiate(f, x=None, order=1): ''' Calculate the derivative of f at point x. Higher-order derivatives are possible. Ex.: differentiate(lambda x:3*x*x+2,x=2,order=2) (Returns: 6.0) Partial derivatives: 1) Give a tuple with the order. Ex.: differentiate(lambda x,y:x*y+2*y+1,x=(1,2),order=(0,1)) (Returns: 3.0 (partial derivative d/dy) ) 2) Give a dictionnary with the order: Ex.: differentiate(lambda x,y:x*y+2*y+1,x=(1,2),order={'y':1}) ''' if x is None: return lambda x:differentiate(f, x, order) if (type(order) == types.ListType) or (type(order) == types.TupleType): # several variables # Build the list of arguments n = sum(order) # total order args = [HigherDifferentiable(i, x[i], n) for i in range(len(x))] y = f(*args) for i in xrange(len(x)): for _ in xrange(order[i]): if isinstance(y, Differentiable) and (i in y.diff): y = y.diff[i] else: # constant return 0. return y elif (type(order) == types.DictType): # named variables # Build the list of arguments n = sum(order.itervalues()) vars = list(f.func_code.co_varnames) args = [] args.extend(x) for name in order.iterkeys(): i = vars.index(name) args[i] = HigherDifferentiable(name, x[i], n) y = f(*args) for name, n in order.iteritems(): for j in range(n): if isinstance(y, Differentiable) and (name in y.diff): y = y.diff[name] else: # constant return 0. return y elif order == 0: return f(x) elif order == 1: y = f(Differentiable('x', x)) if isinstance(y, Differentiable) and ('x' in y.diff): return y.diff['x'] else: # constant return 0. else: return differentiate(lambda y:differentiate(f, y, order=order - 1), x) def taylor_series(f, x, order=1): ''' Returns the list of coefficients of the Taylor Series of f at x (scalar). ''' y = f(HigherDifferentiable('x', x, order)) series = [] n = 1. for i in range(order + 1): # Value val = y while isinstance(val, Differentiable): val = val.val series.append(val / n) n *= (i + 1) # Next order if isinstance(y, Differentiable) and ('x' in y.diff): y = y.diff['x'] else: y = 0. return series def gradient(f, x): ''' Gradient of f at x (= tuple). Result = list. ''' args = [Differentiable(i, val=x[i]) for i in range(len(x))] y = f(*args) if isinstance(y, Differentiable): result = [] for i in range(len(x)): if i in y.diff: result.append(y.diff[i]) else: result.append(0.) return result else: # constant return [0.] * len(x) def hessian(f, x): ''' Hessian of f at x (= tuple). Result = array. N.B.: not working with vectors. ''' args = [HigherDifferentiable(i, x[i], 2) for i in range(len(x))] y = f(*args) result = zeros((len(x), len(x))) if isinstance(y, Differentiable): # non-zero Hessian for i in range(len(x)): if i in y.diff and isinstance(y.diff[i], Differentiable): for j in range(len(x)): if j in y.diff[i].diff: result[i, j] = y.diff[i].diff[j] return result class Differentiable(object): ''' A differentiable variable. Implemented operations: +,-,*,[],len,exp,sin,cos,/,abs,sqrt,comparisons,** ''' def __init__(self, name=None, val=0.): ''' Initializes a variable and its derivative. If the name is None, then it is a constant. ''' self.val = val # value if name == None: # constant self.diff = {} # derivative else: self.diff = {name:1.} # or jacobian? 
(eye) def sqrt(self): return self ** .5 def __cmp__(self, x): if isinstance(x, Differentiable): return cmp(self.val, x.val) else: return cmp(self.val, x) def __abs__(self): # NOT WORKING WITH VECTORS if self.val == 0: # TODO: specific exception raise ZeroDivisionError, "x:abs(x) is not differentiable at x=0." elif self.val > 0: return self else: return - self def __add__(self, y): # VECTOR-READY if isinstance(y, Differentiable): zdict = {} zdict.update(self.diff) zdict.update(y.diff) for key in zdict.iterkeys(): if (key in self.diff) and (key in y.diff): zdict[key] = self.diff[key] + y.diff[key] z = Differentiable() z.val = self.val + y.val z.diff = zdict return z else: # Adding a constant z = Differentiable(val=self.val + y) z.diff = self.diff return z def __radd__(self, x): # VECTOR-READY if isinstance(x, Differentiable): zdict = {} zdict.update(self.diff) zdict.update(x.diff) for key in zdict.iterkeys(): if (key in self.diff) and (key in x.diff): zdict[key] = x.diff[key] + self.diff[key] z = Differentiable() z.val = x.val + self.val z.diff = zdict return z else: # Adding a constant z = Differentiable(val=x + self.val) z.diff = self.diff return z def __mul__(self, y): # !Not working with vectors! if isinstance(y, Differentiable): #TODO: with vectors zdict = {} zdict.update(self.diff) zdict.update(y.diff) for key in zdict.iterkeys(): if (key in self.diff) and (key in y.diff): zdict[key] = self.diff[key] * y.val + self.val * y.diff[key] elif (key in self.diff): zdict[key] = self.diff[key] * y.val else: zdict[key] = self.val * y.diff[key] z = Differentiable() z.val = self.val * y.val z.diff = zdict return z else: # Multiplying by a constant z = Differentiable(val=self.val * y) for key in self.diff: z.diff[key] = self.diff[key] * y return z def __rmul__(self, x): if isinstance(x, Differentiable): #TODO: with vectors zdict = {} zdict.update(self.diff) zdict.update(x.diff) for key in zdict.iterkeys(): if (key in self.diff) and (key in x.diff): zdict[key] = x.val * self.diff[key] + x.diff[key] * self.val elif (key in self.diff): zdict[key] = x.val * self.diff[key] else: zdict[key] = x.diff[key] * self.val z = Differentiable() z.val = x.val * self.val z.diff = zdict return z else: # Multiplying by a constant z = Differentiable(val=x * self.val) for key in self.diff: z.diff[key] = x * self.diff[key] return z def __div__(self, x): return self * (x ** -1) def __rdiv__(self, x): return x * (self ** -1) def __sub__(self, y): # VECTOR-READY if isinstance(y, Differentiable): zdict = {} zdict.update(self.diff) zdict.update(y.diff) for key in zdict.iterkeys(): if (key in self.diff) and (key in y.diff): zdict[key] = self.diff[key] - y.diff[key] z = Differentiable() z.val = self.val - y.val z.diff = zdict return z else: # Subtracting a constant z = Differentiable(val=self.val - y) z.diff = self.diff return z def __rsub__(self, x): # VECTOR-READY if isinstance(x, Differentiable): zdict = {} zdict.update(self.diff) zdict.update(x.diff) for key in zdict.iterkeys(): if (key in self.diff) and (key in x.diff): zdict[key] = x.diff[key] - self.diff[key] z = Differentiable() z.val = x.val - self.val z.diff = zdict return z else: # Subtracting a constant z = Differentiable(val=x - self.val) for key in self.diff: z.diff[key] = -self.diff[key] return z def __neg__(self): # VECTOR-READY z = Differentiable(val= -self.val) for key in self.diff: z.diff[key] = -self.diff[key] return z def __pow__(self, x): # NOT WORKING WITH VECTORS if isinstance(x, Differentiable): return exp(x * log(self)) elif x == 0: z = 
Differentiable(val=self.val ** x) for key in self.diff: z.diff[key] = 0 * self.diff[key] return z else: z = Differentiable(val=self.val ** x) for key in self.diff: z.diff[key] = x * self.val ** (x - 1) * self.diff[key] return z def __rpow__(self, x): # NOT WORKING WITH VECTORS return exp(self * log(x)) def __getitem__(self, i): z = Differentiable(val=self.val[i]) z.diff = {} for key in self.diff.iterkeys(): z.diff[key] = self.diff[key][i, :] return z def __len__(self): return len(self.val) def __str__(self): s = str(self.val) for key, value in self.diff.iteritems(): s += ' ; d/d' + key + '=' + str(value) return s def exp(self): # !! Not working for vectors !! zdict = {} zdict.update(self.diff) for key in zdict.iterkeys(): zdict[key] = exp(self.val) * self.diff[key] z = Differentiable(val=exp(self.val)) z.diff = zdict return z def cos(self): # !! Not working for vectors !! zdict = {} zdict.update(self.diff) for key in zdict.iterkeys(): zdict[key] = -sin(self.val) * self.diff[key] z = Differentiable(val=cos(self.val)) z.diff = zdict return z def sin(self): # !! Not working for vectors !! zdict = {} zdict.update(self.diff) for key in zdict.iterkeys(): zdict[key] = cos(self.val) * self.diff[key] z = Differentiable(val=sin(self.val)) z.diff = zdict return z def log(self): # !! Not working for vectors !! zdict = {} zdict.update(self.diff) for key in zdict.iterkeys(): zdict[key] = (1. / (self.val)) * self.diff[key] z = Differentiable(val=log(self.val)) z.diff = zdict return z class HigherDifferentiable(Differentiable): ''' A differentiable variable with order n. ''' def __init__(self, name=None, val=0., order=1): Differentiable.__init__(self, name, val) if order > 1: self.val = HigherDifferentiable(name, val, order - 1) if __name__ == '__main__': print "Derivative:" print "> differentiate(lambda x:3*x*x+2,x=2,order=2)" print differentiate(lambda x:3 * x * x + 2, x=2, order=2) print "> differentiate(lambda x:3*x*x+2,order=2)(2)" print differentiate(lambda x:3 * x * x + 2, order=2)(2) print "Partial derivative:" print "> differentiate(lambda x,y:x*y+2*y+1,x=(1,2),order=(0,1))" print differentiate(lambda x, y:x * y + 2 * y + 1, x=(1, 2), order=(0, 1)) brian-1.3.1/brian/tools/correlatedspikes.py000066400000000000000000000257511167451777000210100ustar00rootroot00000000000000# ---------------------------------------------------------------------------------- # Copyright ENS, INRIA, CNRS # Contributors: Romain Brette (brette@di.ens.fr) and Dan Goodman (goodman@di.ens.fr) # # Brian is a computer program whose purpose is to simulate models # of biological neural networks. # # This software is governed by the CeCILL license under French law and # abiding by the rules of distribution of free software. You can use, # modify and/ or redistribute the software under the terms of the CeCILL # license as circulated by CEA, CNRS and INRIA at the following URL # "http://www.cecill.info". # # As a counterpart to the access to the source code and rights to copy, # modify and redistribute granted by the license, users are provided only # with a limited warranty and the software's author, the holder of the # economic rights, and the successive licensors have only limited # liability. 
# # In this respect, the user's attention is drawn to the risks associated # with loading, using, modifying and/or developing or reproducing the # software by the user in light of its specific status of free software, # that may mean that it is complicated to manipulate, and that also # therefore means that it is reserved for developers and experienced # professionals having in-depth computer knowledge. Users are therefore # encouraged to load and test the software's suitability as regards their # requirements in conditions enabling the security of their systems and/or # data to be ensured and, more generally, to use and operate it in the # same conditions as regards security. # # The fact that you are presently reading this means that you have had # knowledge of the CeCILL license and that you accept its terms. # ---------------------------------------------------------------------------------- # ''' Generation of correlated spike trains. Based on the article: Brette, R. (2009). Generation of correlated spike trains. http://audition.ens.fr/brette/papers/Brette2008NC.html The models for correlated spike trains are from the paper but the implemented algorithms are simple ones. See brian.utils.statistics for correlograms. ''' from ..threshold import PoissonThreshold, HomogeneousPoissonThreshold from ..neurongroup import NeuronGroup from ..equations import Equations from scipy.special import erf from scipy.optimize import newton, fmin_tnc from scipy import * from ..units import check_units, hertz, second from ..utils.circular import SpikeContainer from numpy.random import poisson, binomial, rand, exponential from random import sample __all__ = ['rectified_gaussian', 'inv_rectified_gaussian', 'HomogeneousCorrelatedSpikeTrains', \ 'MixtureHomogeneousCorrelatedSpikeTrains', 'CorrelatedSpikeTrains', 'mixture_process', 'find_mixture'] """ Utility functions """ def rectified_gaussian(mu, sigma): ''' Calculates the mean and standard deviation for a rectified Gaussian distribution. mu, sigma: parameters of the original distribution Returns mur,sigmar: parameters of the rectified distribution ''' a = 1. + erf(mu / (sigma * (2 ** .5))); mur = (sigma / (2. * pi) ** .5) * exp(-0.5 * (mu / sigma) ** 2) + .5 * mu * a sigmar = ((mu - mur) * mur + .5 * sigma ** 2 * a) ** .5 return (mur, sigmar) def inv_rectified_gaussian(mur, sigmar): ''' Inverse of the function rectified_gaussian ''' if sigmar == 0 * sigmar: # for unit consistency return (mur, sigmar) x0 = mur / sigmar ratio = lambda u, v:u / v f = lambda x:ratio(*rectified_gaussian(x, 1.)) - x0 y = newton(f, x0 * 1.1) # Secant method sigma = mur / (exp(-0.5 * y ** 2) / ((2. * pi) ** .5) + .5 * y * (1. + erf(y * (2 ** (-.5))))) mu = y * sigma return (mu, sigma) """ Doubly stochastic processes (Cox processes). Good for long correlation time constants. """ class HomogeneousCorrelatedSpikeTrains(NeuronGroup): ''' Correlated spike trains with identical rates and homogeneous exponential correlations. Uses Cox processes (Ornstein-Uhlenbeck). ''' @check_units(r=hertz, tauc=second) def __init__(self, N, r, c, tauc, clock=None): ''' Initialization: r = rate (Hz) c = total correlation strength (in [0,1]) tauc = correlation time constant (ms) Cross-covariance functions are (c*r/tauc)*exp(-|s|/tauc) ''' self.N = N # Correction of mu and sigma sigmar = (c * r / (2. 
* tauc)) ** .5 mu, sigma = inv_rectified_gaussian(r, sigmar) eq = Equations('drate/dt=(mu-rate)/tauc + sigma*xi/tauc**.5 : Hz', mu=mu, tauc=tauc, sigma=sigma) NeuronGroup.__init__(self, 1, model=eq, threshold=HomogeneousPoissonThreshold(), clock=clock) self._use_next_allowed_spiketime_refractoriness = False self.rate = mu self.LS = SpikeContainer(1) # Spike storage def __len__(self): # We need to redefine this because it is not the size of the state matrix return self.N def __getslice__(self, i, j): Q = NeuronGroup.__getslice__(self, i, j) # Is this correct? Q.N = j - i return Q class MixtureHomogeneousCorrelatedSpikeTrains(NeuronGroup): def __init__(self, N, r, c, cl=None): """ Generates a pool of N homogeneous correlated spike trains with identical rates r and correlation c (0 <= c <= 1). """ NeuronGroup.__init__(self, N, model="v : 1") self.N = N self.r = r self.c = c if cl is None: cl = guessclock() self.clock = cl self.dt = cl.dt # called at each time step, returns the spiking neurons? def update(self): # Generates a source Poisson spike train with rate r/c if rand(1) <= self.r / self.c * self.dt: # If there is a source spike, it is copied to each target train with probability c spikes = nonzero(rand(self.N) <= self.c)[0] else: spikes = [] self.LS.push(spikes) def decompose_correlation_matrix(C, R): ''' Completes the diagonal of C and finds L such that C=LL^T. C is matrix of correlation coefficients with unspecified diagonal. R is the rate vector. C must be symmetric. N.B.: The diagonal of C is modified (with zeros). ''' # 0) Remove diagonal entries and calculate D (var_i(x) is should have magnitude r_i^2) D = R ** 2 C -= diag(diag(C)) # Completion # 1) Calculate D^{-1}C L = dot(diag(1. / D), C) # 2) Find the smallest eigenvalue eigenvals = linalg.eig(L)[0] alpha = -min(eigenvals[isreal(eigenvals)]) # 3) Complete the diagonal with alpha*ri^2 #alpha=alpha+.01; // avoids bad conditioning problems (uncomment if Cholesky fails) C += diag(alpha * D) # 4) Calculate a square root (Cholesky is unstable, use singular value decomposition) #return linalg.cholesky(C,lower=1) U, S, V = linalg.svd(C) return dot(dot(U, sqrt(diag(S))), V.T) class CorrelatedSpikeTrains(NeuronGroup): ''' Correlated spike trains with arbitrary rates and pair-wise exponential correlations. Uses Cox processes (Ornstein-Uhlenbeck). P.rate is the vector of (time-varying) rates. ''' @check_units(tauc=second) def __init__(self, rates, C, tauc, clock=None): ''' Initialization: rates = rates (Hz) C = correlation matrix tauc = correlation time constant (ms) Cross-covariance functions are C[i,j]*exp(-|s|/tauc) ''' eq = Equations(''' rate : Hz dy/dt = -y*(1./tauc)+xi/(.5*tauc)**.5 : 1 ''') NeuronGroup.__init__(self, len(rates), model=eq, threshold=PoissonThreshold(), \ clock=clock) self._R = array(rates) self._L = decompose_correlation_matrix(array(C), self._R) def update(self): # Calculate rates self.rate_ = self._R + dot(self._L, self.y_) NeuronGroup.update(self) row_vector = lambda V:V.reshape(1, len(V)) column_vector = lambda V:V.reshape(len(V), 1) def homogeneous_mixture(R, c): ''' Returns a mixture (nu,P) for a homogeneous synchronization structure: C(i,j)=2*c*(R(i)*R(j))/ ''' pass def find_mixture(R, C, iter=10000): ''' Finds a mixture matrix P and source rate vector nu from the correlation matrix C and the target rates R. Returns nu,P. Gradient descent. TODO: use Scipy optimization algorithms. 
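# Usage sketch for the mixture method (rates and the mixture matrix are illustrative; assumes
# `from brian import *` for SpikeGeneratorGroup). Durations and tauc are given as plain numbers
# in seconds, since mixture_process (defined below) attaches units to the returned spike times.
from numpy import array
nu = array([20., 20., 10.])                 # rates of the source Poisson trains (Hz)
P  = array([[1., 0., 0.5],                  # row i = target train i, column k = probability
            [0., 1., 0.5]])                 #   that a spike of source k is copied to train i
spiketimes = mixture_process(nu, P, tauc=0.005, t=1.)
inputs = SpikeGeneratorGroup(2, spiketimes)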
''' N = len(R) # Steps b = 0.1 / N a = b / N # Initial value: here such that F=0 P = eye(N) R = column_vector(R) nu = row_vector(R) for _ in xrange(iter): # Compute error Q = dot(P, diag(sqrt(nu.flatten()))) C2 = dot(Q, Q.T) E = linalg.norm(C - C2 - diag(diag(C - C2)), 'fro') F = sum(clip(dot(P, nu.T) - R, 0, Inf)) A = -(C - C2 - diag(diag(C - C2))) #print "E=",E,"F=",F,"Etot=",E*a+F*b # Gradient in E dPE = 4 * dot(dot(A, P), diag(nu.flatten())) dNuE = row_vector(2 * diag(dot(dot(P.T, A), P))) # Gradient in F HF = (dot(P, nu.T) - R) > 0 dNuF = dot(HF.T, P) dPF = dot(HF, nu) # One step of gradient descent P = P - a * dPE - b * dPF nu = nu - a * dNuE - b * dNuF # Clipping nu = clip(nu, 0, Inf) P = clip(P, 0, 1) # Now we complete nu = hstack((nu, (R - dot(P, nu.T)).T)) P = hstack((P, eye(N))) print E, F return nu, P def mixture_process(nu, P, tauc, t): ''' Generate correlated spike trains from a mixture process. nu = rates of source spike trains P = mixture matrix tauc = correlation time constant t = duration Returns a list of (neuron_number,spike_time) to be passed to SpikeGeneratorGroup. ''' n = array(poisson(nu * t)) # number of spikes for each source spike train if n.ndim == 0: n = array([n]) # Only non-zero entries: nonzero = n.nonzero()[0] n = n[nonzero] P = array(P.take(nonzero, axis=1)) nik = binomial(n, P) # number of spikes from k in i result = [] for k in xrange(P.shape[1]): spikes = rand(n[k]) * t for i in xrange(P.shape[0]): m = nik[i, k] if m > 0: if tauc > 0: selection = sample(spikes, m) + array(exponential(tauc, m)) else: selection = sample(spikes, m) result.extend(zip([i] * m, selection)) result = [(i,t*second) for i,t in result] return result if __name__ == '__main__': from time import time R = 2 + rand(5) C = rand(5, 5) C = .5 * (C + C.T) t1 = time() nu, P = find_mixture(R, C) t2 = time() print t2 - t1 brian-1.3.1/brian/tools/datamanager.py000066400000000000000000000214021167451777000176760ustar00rootroot00000000000000from uuid import uuid4 from getpass import getuser import os import shelve import platform from glob import glob from multiprocessing import Manager __all__ = ['DataManager'] class LockingSession(object): def __init__(self, dataman, session_filename): self.dataman = dataman self.session_filename = session_filename self.lock = Manager().Lock() def acquire(self): self.lock.acquire() self.session = DataManager.shelf(self.session_filename) def release(self): self.session.close() self.session = None self.lock.release() def __getitem__(self, item): self.acquire() ret = self.session[item] self.release() return ret def __setitem__(self, item, value): self.acquire() self.session[item] = value self.release() class DataManager(object): ''' DataManager is a simple class for managing data produced by multiple runs of a simulation carried out in separate processes or machines. Each process is assigned a unique ID and Python Shelf object to write its data to. Each shelf is a dictionary whose keys must be strings. The DataManager can collate information across multiple shelves using the get(key) method, which returns a dictionary with keys the unique session names, and values the value written in that session (typically only the values will be of interest). If each value is a tuple or list then you can use the get_merged(key) to get a concatenated list. If the data type is more complicated you can use the get(key) method and merge by hand. 
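# Sketch of the workflow described here (directory and key names are illustrative): each
# worker process writes its results into its own uniquely named session shelf, and the
# analysis phase merges them afterwards.
dataman = DataManager('sims')                 # data files live under sims.data/
session = dataman.session()                   # a per-process shelf; no concurrency issues
session['rates'] = [10.2, 11.7]               # keys must be strings; list values can be merged
session.close()
all_rates = dataman.get_merged('rates')       # concatenates the lists from every session file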
The idea is each process generates files with names that do not interfere with each other so that there are no file concurrency issues, and then in the data analysis phase, the data generated separately by each process is merged together. Methods: ``get(key)`` Return dictionary with keys the session names, and values the values stored in that session for the given key. ``get_merged(key)`` Return a single list of the merged lists or tuples if each value for every session is a list or tuple. ``get_matching(match)`` Returns a dictionary with keys the keys matching match and values get(key). If match is a string, a matching key has to start with that string. If match is a function, a key matches if match(key). ``get_merged_matching(match)`` Like get_merged(key) but across all keys that match. ``get_flat_matching(match)`` Returns a straight list of every value session[key] for all sessions and all keys matching match. ``iteritems()`` Returns all ``(key, value)`` pairs, for each Shelf file, as an iterator (useful for large files with too much data to be loaded into memory). ``itervalues()`` Return all values, for each Shelf file, as an iterator. ``items()``, ``values()`` As for ``iteritems`` and ``itervalues`` but returns a list rather than an iterator. ``itemcount()`` Returns the total number of items across all the Shelf files. ``keys()`` A list of all the keys across all sessions. ``session()`` Returns a randomly named session Shelf, multiple processes can write to these without worrying about concurrency issues. ``computer_session()`` Returns a consistently named Shelf specific to that user and computer, only one process can write to it without worrying about concurrency issues. ``locking_session()``, ``locking_computer_session()`` Returns a LockingSession object, a limited proxy to the underlying Shelf which acquires and releases a lock before and after every operation, making it safe for concurrent access. ``session_filenames()`` A list of all the shelf filenames for all sessions. ``make_unique_key()`` Generates a unique key for inserting an element into a session without overwriting data, uses uuid4. Attributes: ``basepath`` The base path for data files. ``computer_name`` A (hopefully) unique identifier for the user and computer, consists of the username and the computer network name. ``computer_session_filename`` The filename of the computer-specific session file. This file should only be accessed by one process at a time, there's no way to protect against concurrent write accesses causing it to be corrupted. ''' def __init__(self, name, datapath=''): # check if directory exists, and if not, make it basepath = os.path.join(datapath, name + '.data') if not os.path.exists(basepath): subpaths = (name + '.data').split('/') curpath = '' for path in subpaths: curpath += path if not os.path.exists(os.path.join(datapath, curpath)): os.mkdir(os.path.join(datapath, curpath)) curpath += '/' self.basepath = basepath self.computer_name = getuser() + '.' + platform.node() self.computer_session_filename = self.session_filename(self.computer_name) def session_name(self): return getuser() + '.' 
+ str(uuid4()) def session_filename(self, session_name=None): if session_name is None: session_name = self.session_name() fname = os.path.normpath(os.path.join(self.basepath, session_name)) return fname @staticmethod def shelf(fname): return shelve.open(fname, protocol=2) def session(self, session_name=None): return self.shelf(self.session_filename(session_name)) def computer_session(self): return self.shelf(self.computer_session_filename) def locking_session(self, session_name=None): return LockingSession(self, self.session_filename(session_name)) def locking_computer_session(self): return LockingSession(self, self.computer_session_filename) def session_filenames(self): return glob(os.path.join(self.basepath, '*')) def get(self, key): allfiles = self.session_filenames() ret = {} for name in allfiles: path, file = os.path.split(name) shelf = shelve.open(name, protocol=2) if key in shelf: ret[file] = shelf[key] return ret def get_merged(self, key): allitems = self.get(key) ret = [] for _, val in allitems.iteritems(): if isinstance(val, (list, tuple)): ret.extend(val) else: raise TypeError('Can only get merged items of list or tuple type, use get() method and merge by hand.') return ret def get_matching_keys(self, match): allkeys = self.keys() matching_keys = [key for key in allkeys if (callable(match) and match(key)) or key.startswith(match)] return matching_keys def get_matching(self, match): ret = {} for key in self.get_matching_keys(match): ret[key] = self.get(key) return ret def get_merged_matching(self, match): ret = [] for key in self.get_matching_keys(match): ret.extend(self.get_merged(key)) return ret def get_flat_matching(self, match): allitems = self.get_matching(match) ret = [] for matching_key, matching_dict in allitems.iteritems(): ret.extend(matching_dict.values()) return ret def keys(self): allkeys = set([]) for name in self.session_filenames(): allkeys.update(set(shelve.open(name, protocol=2).keys())) return list(allkeys) def make_unique_key(self): return str(uuid4()) def iteritems(self): allfiles = self.session_filenames() for name in allfiles: shelf = shelve.open(name, protocol=2) for key, value in shelf.iteritems(): yield key, value def itervalues(self): for key, val in self.iteritems(): yield val def items(self): return list(self.iteritems()) def values(self): return list(self.itervalues()) def itemcount(self): return sum(len(shelve.open(name, protocol=2)) for name in self.session_filenames()) if __name__ == '__main__': d = DataManager('test/testing') #s = d.session() #s['a'] = [7] print d.get_merged('a') #d = DataManager('test') #print d.get_merged('b') #print d.keys() #print d['b'] #s = d.session() #s['b'] = 6 brian-1.3.1/brian/tools/information_theory.py000066400000000000000000000115711167451777000213570ustar00rootroot00000000000000''' Information theory measures. Adapted by R Brette from several papers (see in the code). Uses the ANN wrapper in scikits. 
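# For illustration, a brute-force version of the Kozachenko-Leonenko estimator implemented
# below, without the scikits.ann dependency (O(n^2) pairwise distances, so small samples only;
# assumes no duplicate points, otherwise log(0) appears).
import numpy as np
from scipy.special import gamma, psi

def knn_entropy_bruteforce(X, k=1):
    n, d = X.shape
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)  # all pairwise squared distances
    D2.sort(axis=1)                                           # column 0 is the point itself
    r2 = D2[:, k]                                             # squared distance to k-th neighbour
    unit_ball = np.pi ** (0.5 * d) / gamma(0.5 * d + 1)       # volume of the d-dimensional unit ball
    return 0.5 * d * np.mean(np.log(r2)) + np.log(unit_ball) + psi(n) - psi(k)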
''' import scikits.ann as ann from scipy.special import gamma, psi from scipy.linalg import det, inv from scipy import * __all__ = ['nearest_distances', 'entropy', 'mutual_information', 'entropy_gaussian', 'mutual_information2'] def nearest_distances(X, k=1): ''' X = array(N,M) N = number of points M = number of dimensions returns the squared distance to the kth nearest neighbor for every point in X ''' ktree = ann.kdtree(X) _, d = ktree.knn(X, k + 1) # the first nearest neighbor is itself return d[:, -1] # returns the distance to the kth nearest neighbor def entropy_gaussian(C): ''' Entropy of a gaussian variable with covariance matrix C ''' if isscalar(C): # C is the variance return .5 * (1 + log(2 * pi)) + .5 * log(C) else: n = C.shape[0] # dimension return .5 * n * (1 + log(2 * pi)) + .5 * log(abs(det(C))) def entropy(X, k=1): ''' Returns the entropy of X, given as array X = array(n,dx) where n = number of samples dx = number of dimensions Optionally: k = number of nearest neighbors for density estimation ''' # Distance to kth nearest neighbor r = nearest_distances(X, k) # squared distances n, d = X.shape volume_unit_ball = (pi ** (.5 * d)) / gamma(.5 * d + 1) ''' F. Perez-Cruz, (2008). Estimation of Information Theoretic Measures for Continuous Random Variables. Advances in Neural Information Processing Systems 21 (NIPS). Vancouver (Canada), December. return .5*d*mean(log(r))+log(volume_unit_ball)+log(n-1)-log(k) ''' ''' Kozachenko, L. F. & Leonenko, N. N. 1987 Sample estimate of entropy of a random vector. Probl. Inf. Transm. 23, 95-101. See also: Evans, D. 2008 A computationally efficient estimator for mutual information, Proc. R. Soc. A 464 (2093), 1203-1215. and: Kraskov A, Stogbauer H, Grassberger P. (2004). Estimating mutual information. Phys Rev E 69(6 Pt 2):066138. ''' return .5 * d * mean(log(r)) + log(volume_unit_ball) + psi(n) - psi(k) def mutual_information(X, Y, k=1): ''' Returns the mutual information between X and Y. Each variable is a matrix X = array(n,dx) where n = number of samples dx,dy = number of dimensions Optionally, the following keyword argument can be specified: k = number of nearest neighbors for density estimation ''' return entropy(X) + entropy(Y) - entropy(hstack((X, Y)), k=k) def mutual_information2(X, p): ''' Mutual information of continuous variable X and discrete variable Y taking p values with equal probabilities. X is a matrix (n,d) such that X[(n/p)*i:(n/p)*(i+1),:] are the samples for the same value of Y. Maybe to a principal component analysis? (problem of highly correlated variables) Method: * Empirical average I= * p(X) and p(X|Y) are estimated with Gaussian assumptions ''' X = X / X.max() n, d = X.shape M = mean(X, axis=0).reshape((d, 1)) C = cov(X.T) C += C.max()*.001 * eye(d) # avoids degeneracy (hack!) invC = inv(C) m = int(n / p) # number of samples / value of Y I = 0. for i in range(p): Z = X[m * i:m * (i + 1), :] # same Y MZ = mean(Z, axis=0).reshape((d, 1)) CZ = cov(Z.T) CZ += CZ.max()*.001 * eye(d) # avoids degeneracy (hack!) #try: invCZ = inv(CZ) #except LinAlgError: # singular matrix! # #A=invC-invCZ Ii = 0. 
for j in range(m): x = X[m * i + j, :].reshape((d, 1)) Ii = dot(dot(x.T - M.T, invC), x - M) - dot(dot(x.T - MZ.T, invCZ), x - MZ) #Ii+=dot(dot(x.T,A),x) Ii = .5 * (log(det(C * invCZ)) + 1) + Ii / m I += Ii I = I / p return I[0, 0] if __name__ == '__main__': ''' Testing against correlated Gaussian variables (analytical results are known) ''' from scipy import * # Entropy of a 15-dimensional gaussian variable n = 10000 d = 15 P = randn(d, d) # change of variables C = dot(P, P.T) Y = randn(d, n) X = dot(P, Y) H = entropy_gaussian(C) Hest = entropy(X.T, k=5) print Hest, H # Mutual information between two correlated gaussian variables P = randn(2, 2) C = dot(P, P.T) U = randn(2, n) Z = dot(P, U).T X = Z[:, 0] X = X.reshape(len(X), 1) Y = Z[:, 1] Y = Y.reshape(len(Y), 1) # in bits print mutual_information(X, Y, k=5) / log(2.), \ (entropy_gaussian(C[0, 0]) + entropy_gaussian(C[1, 1]) - entropy_gaussian(C)) / log(2.) brian-1.3.1/brian/tools/io.py000066400000000000000000000020001167451777000160320ustar00rootroot00000000000000''' Input/output utility functions ''' import numpy as np __all__ = ['read_neuron_dat', 'read_atf'] def read_neuron_dat(name): ''' Reads a Neuron vector file (.dat). Returns vector of times, vector of values ''' f = open(name) f.readline(), f.readline() # skip first two lines M = np.loadtxt(f) f.close() return M[:, 0], M[:, 1] def read_atf(name): ''' Reads an Axon ATF file (.atf). Returns vector of times, vector of values ''' f = open(name) f.readline() n = int(f.readline().split()[0]) # skip first two lines for _ in range(n + 1): f.readline() M = np.loadtxt(f) f.close() return M[:, 0], M[:, 1] if __name__ == '__main__': from pylab import * t, v = read_neuron_dat(r"D:\My Dropbox\Neuron\Hu\recordings_Ifluct\I0_05_std_1_tau_10_sampling_100\vs.dat") #t,v=read_atf(r"/home/bertrand/Data/Measurements/Anna/input_file.atf") plot(t[:50000], v[:50000]) show() brian-1.3.1/brian/tools/parameters.py000066400000000000000000000203251167451777000176000ustar00rootroot00000000000000# ---------------------------------------------------------------------------------- # Copyright ENS, INRIA, CNRS # Contributors: Romain Brette (brette@di.ens.fr) and Dan Goodman (goodman@di.ens.fr) # # Brian is a computer program whose purpose is to simulate models # of biological neural networks. # # This software is governed by the CeCILL license under French law and # abiding by the rules of distribution of free software. You can use, # modify and/ or redistribute the software under the terms of the CeCILL # license as circulated by CEA, CNRS and INRIA at the following URL # "http://www.cecill.info". # # As a counterpart to the access to the source code and rights to copy, # modify and redistribute granted by the license, users are provided only # with a limited warranty and the software's author, the holder of the # economic rights, and the successive licensors have only limited # liability. # # In this respect, the user's attention is drawn to the risks associated # with loading, using, modifying and/or developing or reproducing the # software by the user in light of its specific status of free software, # that may mean that it is complicated to manipulate, and that also # therefore means that it is reserved for developers and experienced # professionals having in-depth computer knowledge. 
Users are therefore # encouraged to load and test the software's suitability as regards their # requirements in conditions enabling the security of their systems and/or # data to be ensured and, more generally, to use and operate it in the # same conditions as regards security. # # The fact that you are presently reading this means that you have had # knowledge of the CeCILL license and that you accept its terms. # ---------------------------------------------------------------------------------- # """ Classes and functions for storing and using parameters """ __all__ = ['attribdict', 'Parameters'] from itertools import chain from inspect import * class attribdict(dict): """ Dictionary that can be accessed via keys Note that attributes starting with _ won't be added to the dict but will instead be directly added to the object. Note that an attribdict cannot have new real attributes added to it unless they start with _, because they will be added to the dict instead. To add a real attribute, use object.__setattr__(obj,name,val). """ def __init__(self, **kwds): super(attribdict, self).__init__(**kwds) def __getattr__(self, name): try: return self[name] except KeyError: raise AttributeError def __setattr__(self, name, val): if name in dir(self) or (len(name) and name[0] == '_'): dict.__setattr__(self, name, val) return self[name] = val def __repr__(self): s = 'Attributes:' for k, v in self.iteritems(): s += '\n ' + k + ' = ' + str(v) return s class Parameters(attribdict): """ A storage class for keeping track of parameters Example usage:: p = Parameters( a = 5, b = 6, computed_parameters = ''' c = a + b ''') print p.c p.a = 1 print p.c The first ``print`` statement will give 11, the second gives 7. Details: Call as:: p = Parameters(...) Where the ``...`` consists of a list of keyword / value pairs (like a ``dict``). Keywords must not start with the underscore ``_`` character. Any keyword that starts with ``computed_`` should be a string of valid Python statements that compute new values based on the given ones. Whenever a non-computed value is changed, the computed parameters are recomputed, in alphabetical order of their keyword names (so ``computed_a`` is computed before ``computed_b`` for example). Non-computed values can be accessed and set via ``p.x``, ``p.x=1`` for example, whereas computed values can only be accessed and not set. New parameters can be added after the :class:`Parameters` object is created, including new ``computed_*`` parameters. You can 'derive' a new parameters object from a given one as follows:: p1 = Parameters(x=1) p2 = Parameters(y=2,**p1) print p2.x Note that changing the value of ``x`` in ``p2`` will not change the value of ``x`` in ``p1`` (this is a copy operation). """ def __init__(self, **kwds): super(Parameters, self).__init__(**kwds) self._recompute() def __getattr__(self, name): try: return self[name] except KeyError: try: return self._computed_values[name] except KeyError: raise AttributeError def __setattr__(self, name, val): if hasattr(self, '_computed_values') and name in self._computed_values: raise AttributeError('Cannot set computed value') attribdict.__setattr__(self, name, val) self._recompute() def _recompute(self): cv = dict(self) items = self.items() items.sort() for k, v in items: if k[:9] == 'computed_': v = '\n'.join([line.strip() for line in v.split('\n')]) exec v in cv for k in cv.keys(): if k[:1] == '_': # this is used to get rid of things like __builtins__ etc. 
cv.pop(k) for k in self.iterkeys(): cv.pop(k) object.__setattr__(self, '_computed_values', cv) def ascode(self, name): """ Returns a string which can be executed which gives all the parameters name is the name of the Parameters variable in the local namespace: Usage: P = Parameters(x=1) exec P.ascode('P') print x """ s = '' allvals = dict(**self) allvals.update(self._computed_values) for k in allvals.iterkeys(): if k[:9] != 'computed_': s += k + '=' + name + '.' + k + '\n' return s def get_vars(self, *vars): ''' Returns a tuple of variables given their names vars can be a list of string names, or a single space separated string of names. ''' vars = [v.split(' ') for v in vars] return tuple(getattr(self, v) for v in chain(*vars)) def __repr__(self): s = 'Values:' for k, v in self.iteritems(): if k[:9] != 'computed_': s += '\n ' + k + ' = ' + str(v) if len(self._computed_values): s += '\n_computed values:' for k, v in self._computed_values.iteritems(): s += '\n ' + k + ' = ' + str(v) return s def __call__(self, **kwds): ''' Returns a copy with specified arguments overwritten Sample usage: default_p = Parameters(x=1,y=2) specific_p = default_p(x=3) ''' p = Parameters(**self) # pylint: disable-msg=W0621 for k, v in kwds.iteritems(): setattr(p, k, v) return p def __reduce__(self): return (_load_Parameters_from_pickle, (self.items(),)) def _load_Parameters_from_pickle(items): return Parameters(**dict(items)) if __name__ == "__main__": # turn off warning about attribute defined outside __init__ # pylint: disable-msg=W0201 p = Parameters( a=5, b=6, computed_p1=''' c = a + b ''', computed_p2=''' x = c**2 ''') q = Parameters( d=100, computed_q=''' e = a+d ''', **p ) print p.c p.a = 1 print p.c print p print print q q.a = -50 print print q print try: q.c = 5 except AttributeError, e: print e p.y = 6 p.computed_p3 = ''' w = a*b ''' print p p.a = 2 print p brian-1.3.1/brian/tools/particle_swarm.py000066400000000000000000000111741167451777000204530ustar00rootroot00000000000000from scipy import * from time import time, clock __all__ = ['particle_swarm'] def particle_swarm(X0, fun, iterations, pso_params, min_values=None, max_values=None, group_size=None, verbose=True, return_matrix=None): """ Computes the argument of fun which maximizes it using the Particle Swarm Optimization algorithm. INPUTS X0 is a N*M matrix N is the space dimension M is the number of particles If min_values is set, it is a N-long array such that X >= min_values. If max_values is set, it is a N-long array such that X <= max_values. fun(x0) is a 1*M row vector If group_size is set, it means that particles are grouped within groups of size group_size The objective function is assumed to make a different computation for each group. The PSO algorithm then maximizes f in each group independently. It returns a matrix of size N*M/group_size : each column is the result of the optimization for the corresponding group. 
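# Standalone numpy sketch of one iteration of the update rule this routine applies
# (omega, c1, c2 correspond to pso_params above; sizes and values are illustrative).
import numpy as np
N, M = 5, 20                                     # space dimension, number of particles
omega, c1, c2 = 0.8, 1.8, 1.8
X = 3 * np.random.rand(N, M)                     # particle positions
V = np.zeros((N, M))                             # particle velocities
X_lbest = X.copy()                               # best position seen by each particle
X_gbest = X[:, :1]                               # best position of the group (broadcasts over particles)
R1, R2 = np.random.rand(N, M), np.random.rand(N, M)
V = omega * V + c1 * R1 * (X_lbest - X) + c2 * R2 * (X_gbest - X)
X = X + V                                        # then clip to bounds, re-evaluate fun, update the bests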
OUTPUTS pso(x0, fun) returns argmax fun(x) """ (N, M) = X0.shape if group_size is None: group_size = M group_number = M / group_size if (min_values is None): min_values = -inf * ones(N) if (max_values is None): max_values = inf * ones(N) min_values = tile(min_values.reshape((-1, 1)), (1, M)) max_values = tile(max_values.reshape((-1, 1)), (1, M)) fitness_X = -inf * ones((1, M)) fitness_lbest = fitness_X fitness_gbest = -inf * ones(group_number) omega = pso_params[0] c1 = pso_params[1] c2 = pso_params[2] X0 = maximum(X0, min_values) X0 = minimum(X0, max_values) X = X0 V = zeros((N, M)) X_lbest = X X_gbest = X[:, arange(0, M, group_size)] # N*group_number matrix if return_matrix: fitness_X_matrix = zeros((M, iterations)) time0 = clock() for k in range(iterations): if verbose: print 'Step %d/%d' % (k + 1 , iterations) R1 = rand(N, M) R2 = rand(N, M) X_gbest2 = zeros((N, M)) for j in range(group_number): X_gbest2[:, j * group_size:(j + 1) * group_size] = tile(X_gbest[:, j].reshape(N, 1), (1, group_size)) V = omega * V + c1 * R1 * (X_lbest - X) + c2 * R2 * (X_gbest2 - X) X = X + V X = maximum(X, min_values) X = minimum(X, max_values) time1 = clock() fitness_X = fun(X) time2 = clock() if return_matrix: fitness_X_matrix[:, k] = fitness_X # Local update indices_lbest = nonzero(fitness_X > fitness_lbest)[1] if (len(indices_lbest) > 0): X_lbest[:, indices_lbest] = X[:, indices_lbest] fitness_lbest[:, indices_lbest] = fitness_X[:, indices_lbest] # Global update max_fitness_X = array([max(fitness_X[j * group_size:(j + 1) * group_size]) for j in range(group_number)]) for j in nonzero(max_fitness_X > fitness_gbest)[0]: # groups for which a global best has been reached at this iteration sub_fitness_X = fitness_X[j * group_size:(j + 1) * group_size] index_gbest = nonzero(sub_fitness_X == max_fitness_X[j])[0] if not(isscalar(index_gbest)): index_gbest = index_gbest[0] X_gbest[:, j] = X[:, j * group_size + index_gbest] fitness_gbest[j] = max_fitness_X[j] if verbose: print 'Evaluation time :', '%.2f s' % (time2 - time1) print 'Total elapsed time :', '%.2f s' % (clock() - time0) print 'Fitness: mean = %.3f, max = %.3f, std = %.3f' % (mean(fitness_X), max(fitness_X), std(fitness_X)) # print ' min :', '%.5f' % fitness_gbest.min() # print ' mean :', '%.5f' % fitness_gbest.mean() # print ' max :', '%.5f' % fitness_gbest.max() # print ' std :', '%.5f' % fitness_gbest.std() # for j in range(group_number): # print ' %d :' % (j+1), '%.5f' % fitness_gbest[j] print if return_matrix: return(X_gbest, fitness_gbest, clock() - time0, fitness_X_matrix) else: return (X_gbest, fitness_gbest, clock() - time0) if __name__ == "__main__": group_number = 5 group_size = 10 def fun(x): y = exp(-.7 * sum(x ** 2, axis=0)) return y result = optimize(X0=3 * rand(5, group_size * group_number), fun=fun, iterations=500, pso_params=[.8, 1.8, 1.8], group_size=group_size) # print '-------------------------------' # print '-------------------------------' # print 'Final value : %.10f' % result brian-1.3.1/brian/tools/remotecontrol.py000066400000000000000000000206731167451777000203370ustar00rootroot00000000000000''' Remote control of a Brian process, for example using an IPython shell The process running the simulation calls something like: server = RemoteControlServer() and the IPython shell calls: client = RemoteControlClient() The shell can now execute and evaluate in the server process via: spikes = client.evaluate('M.spikes') i, t = zip(*spikes) plot(t, i, '.') client.execute('stop()') ''' from ..network import NetworkOperation try: import 
multiprocessing from multiprocessing.connection import Listener, Client except ImportError: multiprocessing = None import select import inspect __all__ = ['RemoteControlServer', 'RemoteControlClient'] class RemoteControlServer(NetworkOperation): ''' Allows remote control (via IP) of a running Brian script Initialisation arguments: ``server`` The IP server details, a pair (host, port). If you want to allow control only on the one machine (for example, by an IPython shell), leave this as ``None`` (which defaults to host='localhost', port=2719). To allow remote control, use ('', portnumber). ``authkey`` The authentication key to allow access, change it from 'brian' if you are allowing access from outside (otherwise you allow others to run arbitrary code on your machine). ``clock`` The clock specifying how often to poll for incoming commands. ``global_ns``, ``local_ns``, ``level`` Namespaces in which incoming commands will be executed or evaluated, if you leave them blank it will be the local and global namespace of the frame from which this function was called (if level=1, or from a higher level if you specify a different level here). Once this object has been created, use a :class:`RemoteControlClient` to issue commands. **Example usage** Main simulation code includes a line like this:: server = RemoteControlServer() In an IPython shell you can do something like this:: client = RemoteControlClient() spikes = client.evaluate('M.spikes') i, t = zip(*spikes) plot(t, i, '.') client.execute('stop()') ''' def __init__(self, server=None, authkey='brian', clock=None, global_ns=None, local_ns=None, level=0): if multiprocessing is None: raise ImportError('Cannot import the required multiprocessing module.') NetworkOperation.__init__(self, lambda:None, clock=clock) if server is None: server = ('localhost', 2719) frame = inspect.stack()[level + 1][0] ns_global, ns_local = frame.f_globals, frame.f_locals if global_ns is None: global_ns = frame.f_globals if local_ns is None: local_ns = frame.f_locals self.local_ns = local_ns self.global_ns = global_ns self.listener = Listener(server, authkey=authkey) self.conn = None def __call__(self): if self.conn is None: # This is kind of a hack. The multiprocessing.Listener class doesn't # allow you to tell if an incoming connection has been requested # without accepting that connection, which means if nothing attempts # to connect it will wait forever for something to connect. What # we do here is check if there is any incoming data on the # underlying IP socket used internally by multiprocessing.Listener. socket = self.listener._listener._socket sel, _, _ = select.select([socket], [], [], 0) if len(sel): self.conn = self.listener.accept() if self.conn is None: return conn = self.conn global_ns = self.global_ns local_ns = self.local_ns paused = 1 while conn and paused != 0: if paused >= 0 and not conn.poll(): return try: job = conn.recv() except: self.conn = None break jobtype, jobargs = job if paused == 1: paused = 0 try: result = None if jobtype == 'exec': exec jobargs in global_ns, local_ns elif jobtype == 'eval': result = eval(jobargs, global_ns, local_ns) elif jobtype == 'setvar': varname, varval = jobargs local_ns[varname] = varval elif jobtype == 'pause': paused = -1 elif jobtype == 'go': paused = 0 except Exception, e: # if it raised an exception, we return that exception and the # client can then raise it. 
result = e conn.send(result) class RemoteControlClient(object): ''' Used to remotely control (via IP) a running Brian script Initialisation arguments: ``server`` The IP server details, a pair (host, port). If you want to allow control only on the one machine (for example, by an IPython shell), leave this as ``None`` (which defaults to host='localhost', port=2719). To allow remote control, use ('', portnumber). ``authkey`` The authentication key to allow access, change it from 'brian' if you are allowing access from outside (otherwise you allow others to run arbitrary code on your machine). Use a :class:`RemoteControlServer` on the simulation you want to control. Has the following methods: .. method:: execute(code) Executes the specified code in the server process. If it raises an exception, the server process will catch it and reraise it in the client process. .. method:: evaluate(code) Evaluate the code in the server process and return the result. If it raises an exception, the server process will catch it and reraise it in the client process. .. method:: set(name, value) Sets the variable ``name`` (a string) to the given value (can be an array, etc.). Note that the variable is set in the local namespace, not the global one, and so this cannot be used to modify global namespace variables. To do that, set a local namespace variable and then call :meth:`~RemoteControlClient.execute` with an instruction to change the global namespace variable. .. method:: pause() Temporarily stop the simulation in the server process, continue simulation with the :meth:'go' method. .. method:: go() Continue a simulation that was paused. .. method:: stop() Stop a simulation, equivalent to ``execute('stop()')``. **Example usage** Main simulation code includes a line like this:: server = RemoteControlServer() In an IPython shell you can do something like this:: client = RemoteControlClient() spikes = client.evaluate('M.spikes') i, t = zip(*spikes) plot(t, i, '.') client.execute('stop()') ''' def __init__(self, server=None, authkey='brian'): if multiprocessing is None: raise ImportError('Cannot import the required multiprocessing module.') if server is None: server = ('localhost', 2719) self.client = Client(server, authkey=authkey) def execute(self, code): self.client.send(('exec', code)) result = self.client.recv() if isinstance(result, Exception): raise result def evaluate(self, code): self.client.send(('eval', code)) result = self.client.recv() if isinstance(result, Exception): raise result return result def set(self, name, value): self.client.send(('setvar', (name, value))) result = self.client.recv() if isinstance(result, Exception): raise result def pause(self): self.client.send(('pause', '')) self.client.recv() def go(self): self.client.send(('go', '')) self.client.recv() def stop(self): self.execute('stop()') brian-1.3.1/brian/tools/statistics.py000066400000000000000000000265611167451777000176370ustar00rootroot00000000000000''' Spike statistics ---------------- In all functions below, spikes is a sorted list of spike times ''' from numpy import * from brian.units import check_units, second from brian.stdunits import ms, Hz from operator import itemgetter __all__ = ['firing_rate', 'CV', 'correlogram', 'autocorrelogram', 'CCF', 'ACF', 'CCVF', 'ACVF', 'group_correlations', 'sort_spikes', 'total_correlation', 'vector_strength', 'gamma_factor', 'get_gamma_factor_matrix', 'get_gamma_factor','spike_triggered_average'] # First-order statistics def firing_rate(spikes): ''' Rate of the spike train. 
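    The rate is estimated as (number of spikes - 1) divided by the time
    between the first and last spike. For example (hypothetical values,
    times in seconds)::

        firing_rate([0.0, 0.1, 0.2, 0.3])   # returns 10.0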
''' if spikes==[]: return NaN return (len(spikes) - 1) / (spikes[-1] - spikes[0]) def CV(spikes): ''' Coefficient of variation. ''' if spikes==[]: return NaN ISI = diff(spikes) # interspike intervals return std(ISI) / mean(ISI) # Second-order statistics def correlogram(T1, T2, width=20 * ms, bin=1 * ms, T=None): ''' Returns a cross-correlogram with lag in [-width,width] and given bin size. T is the total duration (optional) and should be greater than the duration of T1 and T2. The result is in Hz (rate of coincidences in each bin). N.B.: units are discarded. TODO: optimise? ''' if (T1==[]) or (T2==[]): # empty spike train return NaN # Remove units width = float(width) T1 = array(T1) T2 = array(T2) i = 0 j = 0 n = int(ceil(width / bin)) # Histogram length l = [] for t in T1: while i < len(T2) and T2[i] < t - width: # other possibility use searchsorted i += 1 while j < len(T2) and T2[j] < t + width: j += 1 l.extend(T2[i:j] - t) H, _ = histogram(l, bins=arange(2 * n + 1) * bin - n * bin) #, new = True) # Divide by time to get rate if T is None: T = max(T1[-1], T2[-1]) - min(T1[0], T2[0]) # Windowing function (triangle) W = zeros(2 * n) W[:n] = T - bin * arange(n - 1, -1, -1) W[n:] = T - bin * arange(n) return H / W def autocorrelogram(T0, width=20 * ms, bin=1 * ms, T=None): ''' Returns an autocorrelogram with lag in [-width,width] and given bin size. T is the total duration (optional) and should be greater than the duration of T1 and T2. The result is in Hz (rate of coincidences in each bin). N.B.: units are discarded. ''' return correlogram(T0, T0, width, bin, T) def CCF(T1, T2, width=20 * ms, bin=1 * ms, T=None): ''' Returns the cross-correlation function with lag in [-width,width] and given bin size. T is the total duration (optional). The result is in Hz**2: CCF(T1,T2)= N.B.: units are discarded. ''' return correlogram(T1, T2, width, bin, T) / bin def ACF(T0, width=20 * ms, bin=1 * ms, T=None): ''' Returns the autocorrelation function with lag in [-width,width] and given bin size. T is the total duration (optional). The result is in Hz**2: ACF(T0)= N.B.: units are discarded. ''' return CCF(T0, T0, width, bin, T) def CCVF(T1, T2, width=20 * ms, bin=1 * ms, T=None): ''' Returns the cross-covariance function with lag in [-width,width] and given bin size. T is the total duration (optional). The result is in Hz**2: CCVF(T1,T2)=- N.B.: units are discarded. ''' return CCF(T1, T2, width, bin, T) - firing_rate(T1) * firing_rate(T2) def ACVF(T0, width=20 * ms, bin=1 * ms, T=None): ''' Returns the autocovariance function with lag in [-width,width] and given bin size. T is the total duration (optional). The result is in Hz**2: ACVF(T0)=-**2 N.B.: units are discarded. ''' return CCVF(T0, T0, width, bin, T) def spike_triggered_average(spikes,stimulus,max_interval,dt,onset=None,display=False): ''' Spike triggered average reverse correlation. spikes is an array containing spike times stimulus is an array containing the stimulus max_interval (second) is the duration of the averaging window dt (second) is the sampling period onset (second) before which the spikes are discarded. 
    Note: it will be at least as long as max_interval
    display (default=False): if True, print the number of spikes processed
    out of the total number

    Returns the spike-triggered average and the corresponding time axis.
    '''
    stimulus = stimulus.flatten()
    if onset is None or onset < max_interval:
        onset = max_interval
    nspikes = len(spikes)
    sta_length = int(max_interval / dt)
    spike_triggered_ensemble = zeros((nspikes, sta_length))
    time_axis = linspace(0 * ms, max_interval, sta_length)
    onset = float(onset)
    for ispike, spike in enumerate(spikes):
        if display == True:
            print 'sta: spike #', ispike, ' out of :', nspikes
        if spike > onset:
            spike = int(spike / dt)
            spike_triggered_ensemble[ispike, :] = stimulus[spike - sta_length:spike]
    return sum(spike_triggered_ensemble, axis=0)[::-1] / (nspikes - 1), time_axis

def total_correlation(T1, T2, width=20 * ms, T=None):
    '''
    Returns the total correlation coefficient with lag in [-width,width].
    T is the total duration (optional).
    The result is a real (typically in [0,1]):
    total_correlation(T1,T2)=int(CCVF(T1,T2))/rate(T1)
    '''
    if (T1 == []) or (T2 == []): # empty spike train
        return NaN
    # Remove units
    width = float(width)
    T1 = array(T1)
    T2 = array(T2)
    # Divide by time to get rate
    if T is None:
        T = max(T1[-1], T2[-1]) - min(T1[0], T2[0])
    i = 0
    j = 0
    x = 0
    for t in T1:
        while i < len(T2) and T2[i] < t - width: # other possibility use searchsorted
            i += 1
        while j < len(T2) and T2[j] < t + width:
            j += 1
        x += sum(1. / (T - abs(T2[i:j] - t))) # counts coincidences with windowing (probabilities)
    return float(x / firing_rate(T1)) - float(firing_rate(T2) * 2 * width)

def sort_spikes(spikes):
    """
    Sorts spikes stored in a (i,t) list by time.
    """
    spikes = sorted(spikes, key=itemgetter(1))
    return spikes

def group_correlations(spikes, delta=None):
    """
    Computes the pairwise correlation strength and timescale of the given
    pool of spike trains.

    spikes is a (i,t) list and must be sorted.
    delta is the length of the time window, 10*ms by default.
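    Returns a pair (S, tauc) of N x N arrays, where S[i, j] is the correlation
    strength and tauc[i, j] the correlation timescale between trains i and j.

    A minimal usage sketch (hypothetical spike data, times in seconds)::

        spikes = sort_spikes([(0, 0.010), (1, 0.012), (1, 0.051), (0, 0.050)])
        S, tauc = group_correlations(spikes, delta=10 * ms)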
""" aspikes = array(spikes) N = aspikes[:, 0].max() + 1 # neuron count T = aspikes[:, 1].max() # total duration spikecount = zeros(N) tauc = zeros((N, N)) S = zeros((N, N)) if delta is None: delta = 10 * ms # size of the window windows = -2 * delta * ones(N) # windows[i] is the end of the window for neuron i = lastspike[i}+delta for i, t in spikes: sources = (t <= windows) # neurons such that (i,t) is a target spike for them if sum(sources) > 0: indices = nonzero(sources)[0] S[indices, i] += 1 delays = t - windows[indices] + delta # print i, t, indices, delays tauc[indices, i] += delays spikecount[i] += 1 windows[i] = t + delta # window update tauc /= S S = S / tile(spikecount.reshape((-1, 1)), (1, N)) # normalize S rates = spikecount / T S = S - tile(rates.reshape((1, -1)), (N, 1)) * delta S[isnan(S)] = 0.0 tauc[isnan(tauc)] = 0.0 return S, tauc # Phase-locking properties def vector_strength(spikes, period): ''' Returns the vector strength of the given train ''' return abs(mean(exp(array(spikes) * 1j * 2 * pi / period))) # Normalize the coincidence count of two spike trains (return the gamma factor) def get_gamma_factor(coincidence_count, model_length, target_length, target_rates, delta): NCoincAvg = 2 * delta * target_length * target_rates norm = .5 * (1 - 2 * delta * target_rates) gamma = (coincidence_count - NCoincAvg) / (norm * (target_length + model_length)) return gamma # Normalize the coincidence matrix between a set of trains (return the gamma factor matrix) def get_gamma_factor_matrix(coincidence_matrix, model_length, target_length, target_rates, delta): target_lengthMAT =tile(target_length,(len(model_length),1)) target_rateMAT =tile(target_rates,(len(model_length),1)) model_lengthMAT =tile(model_length.reshape(-1,1),(1,len(target_length))) NCoincAvg =2 * delta * target_lengthMAT * target_rateMAT norm =.5 * (1 - 2 * delta * target_rateMAT) # print target_rateMAT print coincidence_matrix #print NCoincAvg #print (norm * (target_lengthMAT + model_lengthMAT)) gamma = (coincidence_matrix - NCoincAvg) / (norm * (target_lengthMAT + model_lengthMAT)) gamma=triu(gamma,0)+triu(gamma,1).T return gamma # Gamma factor @check_units(delta=second) def gamma_factor(source, target, delta, normalize=True, dt=None): ''' Returns the gamma precision factor between source and target trains, with precision delta. source and target are lists of spike times. If normalize is True, the function returns the normalized gamma factor (less than 1.0), otherwise it returns the number of coincidences. dt is the precision of the trains, by default it is defaultclock.dt Reference: R. Jolivet et al., 'A benchmark test for a quantitative assessment of simple neuron models', Journal of Neuroscience Methods 169, no. 2 (2008): 417-424. 
''' source = array(source) target = array(target) target_rate = firing_rate(target) * Hz if dt is None: delta_diff = delta else: source = array(rint(source / dt), dtype=int) target = array(rint(target / dt), dtype=int) delta_diff = int(rint(delta / dt)) source_length = len(source) target_length = len(target) if (target_length == 0 or source_length == 0): return 0 if (source_length > 1): bins = .5 * (source[1:] + source[:-1]) indices = digitize(target, bins) diff = abs(target - source[indices]) matched_spikes = (diff <= delta_diff) coincidences = sum(matched_spikes) else: indices = [amin(abs(source - target[i])) <= delta_diff for i in xrange(target_length)] coincidences = sum(indices) # Normalization of the coincidences count # NCoincAvg = 2 * delta * target_length * target_rate # norm = .5*(1 - 2 * target_rate * delta) # gamma = (coincidences - NCoincAvg)/(norm*(source_length + target_length)) # TODO: test this gamma = get_gamma_factor(coincidences, source_length, target_length, target_rate, delta) if normalize: return gamma else: return coincidences if __name__ == '__main__': from brian import * print vector_strength([1.1 * ms, 1 * ms, .9 * ms], 2 * ms) N = 100000 T1 = cumsum(rand(N) * 10 * ms) T2 = cumsum(rand(N) * 10 * ms) duration = T1[N / 2] # Cut so that both spike trains have the same duration T1 = T1[T1 < duration] T2 = T2[T2 < duration] print firing_rate(T1) C = CCVF(T1, T2, bin=1 * ms) print total_correlation(T1, T2) plot(C) show() brian-1.3.1/brian/tools/tabulate.py000066400000000000000000000120621167451777000172350ustar00rootroot00000000000000# ---------------------------------------------------------------------------------- # Copyright ENS, INRIA, CNRS # Contributors: Romain Brette (brette@di.ens.fr) and Dan Goodman (goodman@di.ens.fr) # # Brian is a computer program whose purpose is to simulate models # of biological neural networks. # # This software is governed by the CeCILL license under French law and # abiding by the rules of distribution of free software. You can use, # modify and/ or redistribute the software under the terms of the CeCILL # license as circulated by CEA, CNRS and INRIA at the following URL # "http://www.cecill.info". # # As a counterpart to the access to the source code and rights to copy, # modify and redistribute granted by the license, users are provided only # with a limited warranty and the software's author, the holder of the # economic rights, and the successive licensors have only limited # liability. # # In this respect, the user's attention is drawn to the risks associated # with loading, using, modifying and/or developing or reproducing the # software by the user in light of its specific status of free software, # that may mean that it is complicated to manipulate, and that also # therefore means that it is reserved for developers and experienced # professionals having in-depth computer knowledge. Users are therefore # encouraged to load and test the software's suitability as regards their # requirements in conditions enabling the security of their systems and/or # data to be ensured and, more generally, to use and operate it in the # same conditions as regards security. # # The fact that you are presently reading this means that you have had # knowledge of the CeCILL license and that you accept its terms. # ---------------------------------------------------------------------------------- # ''' Tabulation of numerical functions. 
''' __all__ = ['Tabulate', 'TabulateInterp'] from brian.units import get_unit, Quantity, is_dimensionless from brian.unitsafefunctions import array, arange, zeros from numpy import NaN class Tabulate(object): ''' An object to tabulate a numerical function. Sample use:: g=Tabulate(f,0.,1.,1000) y=g(.5) v=g([.1,.3]) v=g(array([.1,.3])) Arguments of g must lie in [xmin,xmax). An IndexError is raised is arguments are above xmax, but not always when they are below xmin (it can give weird results). ''' def __init__(self, f, xmin, xmax, n): self.xmin = xmin self.xmax = xmax self.dx = (xmax - xmin) / float(n) self.invdx = 1 / self.dx self.unit = get_unit(f(xmin)) # Tabulation at midpoints x = xmin + (.5 + arange(n)) * self.dx try: self.f = f(x) except: # If it fails we try passing the values one by one self.f = zeros(n) * f(xmin) # for the unit for i in xrange(n): self.f[i] = f(x[i]) def __call__(self, x): try: # possible problem if x is an array and an array is wanted return self.f[array((array(x) - self.xmin) * self.invdx, dtype=int)] except IndexError: # out of bounds return NaN * self.unit def __repr__(self): return 'Tabulated function with ' + str(len(self.f)) + ' points' class TabulateInterp(object): ''' An object to tabulate a numerical function with linear interpolation. Sample use:: g=TabulateInterp(f,0.,1.,1000) y=g(.5) v=g([.1,.3]) v=g(array([.1,.3])) Arguments of g must lie in [xmin,xmax). An IndexError is raised is arguments are above xmax, but not always when they are below xmin (it can give weird results). ''' def __init__(self, f, xmin, xmax, n): self.xmin = xmin self.xmax = xmax self.dx = (xmax - xmin) / float(n - 1) self.invdx = 1 / self.dx # Not at midpoints here x = xmin + arange(n) * self.dx self.unit = get_unit(f(xmin)) try: self.f = f(x) except: # If it fails we try passing the values one by one self.f = zeros(n) * f(xmin) # for the unit for i in xrange(n): self.f[i] = f(x[i]) self.f = array(self.f) self.df = (self.f[range(1, n)] - self.f[range(n - 1)]) * float(self.invdx) def __call__(self, x): # the units of x is not checked y = array(x) - self.xmin ind = array(y * self.invdx, dtype=int) try: if is_dimensionless(x): # could be a problem if it is a Quantity with units=1 return self.f[ind] + self.df[ind] * (y - array(ind) * self.dx) else: return array(self.f[ind] + self.df[ind] * (y - array(ind) * self.dx)) * self.unit except IndexError: # out of bounds return NaN * self.unit def __repr__(self): return 'Tabulated function with ' + str(len(self.f)) + ' points (interpolated)' brian-1.3.1/brian/tools/taskfarm.py000066400000000000000000000333531167451777000172520ustar00rootroot00000000000000from datamanager import DataManager import multiprocessing from Queue import Empty as QueueEmpty import Tkinter from brian.utils.progressreporting import make_text_report import inspect import time import os from numpy import ndarray, zeros __all__ = ['run_tasks'] # This is the default task class used if the user provides only a function class FunctionTask(object): def __init__(self, func): self.func = func def __call__(self, *args, **kwds): # If the function has a 'report' argument, we pass it the reporter # function that will have been passed in kwds (see task_compute) fc = self.func.func_code if 'report' in fc.co_varnames[:fc.co_argcount] or fc.co_flags&8: return self.func(*args, **kwds) else: return self.func(*args) def run_tasks(dataman, task, items, gui=True, poolsize=0, initargs=None, initkwds=None, verbose=None, numitems=None): ''' Run a series of tasks using multiple CPUs on a single 
computer. Initialised with arguments: ``dataman`` The :class:`~brian.tools.datamanager.DataManager` object used to store the results in, see below. ``task`` The task function or class (see below). ``items`` A sequence (e.g. list or iterator) of arguments to be passed to the task. ``gui=True`` Whether or not to use a Tkinter based GUI to show progress and terminate the task run. ``poolsize=0`` The number of CPUs to use. If the value is 0, use all available CPUs, if it is -1 use all but one CPU, etc. ``initargs``, ``initkwds`` If ``task`` is a class, these are the initialisation arguments and keywords for the class. ``verbose=None`` Specify True or False to print out every progress message (defaults to False if the GUI is used, or True if not). ``numitems=None`` For iterables (rather than fixed length sequences), if you specify the number of items, an estimate of the time remaining will be given. The task (defined by a function or class, see below) will be called on each item in ``items``, and the results saved to ``dataman``. Results are stored in the format ``(key, val)`` where ``key`` is a unique but meaningless identifier. Results can be retrieved using ``dataman.values()`` or (for large data sets that should be iterated over) ``dataman.itervalues()``. The task can either be a function or a class. If it is a function, it will be called for each item in ``items``. If the items are tuples, the function will be called with those tuples as arguments (e.g. if the item is ``(1,2,3)`` the function will be called as ``task(1, 2, 3)``). If the task is a class, it can have an ``__init__`` method that is called once for each process (each CPU) at the beginning of the task run. If the ``__init__`` method has a ``process_number`` argument, it will be passed an integer value from 0 to ``numprocesses-1`` giving the number of the process (note, this is not the process ID). The class should define a ``__call__`` method that behaves the same as above for ``task`` being a function. In both cases (function or class), if the arguments include a keyword ``report`` then it will be passed a value that can be passed as the ``report`` keyword in Brian's :func:`run` function to give feedback on the simulation as it runs. A ``task`` function can also set ``self.taskname`` as a string that will be displayed on the GUI to give additional information. .. warning:: On Windows, make sure that :func:`run_tasks` is only called from within a block such as:: if __name__=='__main__': run_tasks(...) Otherwise, the program will go into a recursive loop. Note that this class only allows you to run tasks on a single computer, to distribute work over multiple computers, we suggest using `Playdoh `__. 
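    A minimal usage sketch (the task function and data manager name here are
    hypothetical)::

        from brian.tools.datamanager import DataManager
        from brian.tools.taskfarm import run_tasks

        def simulate(x):
            return x * x      # stand-in for a real simulation run

        if __name__ == '__main__':
            dataman = DataManager('example_farm')
            run_tasks(dataman, simulate, range(10))
            print dataman.values()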
''' # User can provide task as a class or a function, if its a function we # we use the default FunctionTask if not inspect.isclass(task): f = task initargs = (task,) task = FunctionTask else: f = task.__call__ fc = f.func_code if 'report' in fc.co_varnames[:fc.co_argcount] or fc.co_flags&8: will_report = True else: will_report = False if numitems is None and isinstance(items, (list, tuple, ndarray)): numitems = len(items) # This will be used to provide process safe access to the data manager # (so that multiple processes do not attempt to write to the session at # the same time) session = dataman.locking_computer_session() if poolsize<=0: numprocesses = poolsize+multiprocessing.cpu_count() elif poolsize>0: numprocesses = poolsize manager = multiprocessing.Manager() # We have to send the process number to the initializer via this silly # queue because of a limitation of multiprocessing process_number_queue = manager.Queue() for n in range(numprocesses): process_number_queue.put(n) # This will be used to send messages about the status of the run, i.e. # percentage complete message_queue = manager.Queue() if initargs is None: initargs = () if initkwds is None: initkwds = {} pool = multiprocessing.Pool(processes=numprocesses, initializer=pool_initializer, initargs=(process_number_queue, message_queue, dataman, session, task, initargs, initkwds)) results = pool.imap_unordered(task_compute, items) # We use this to map process IDs to task number, so that we can show the # information on the GUI in a consistent fashion pid_to_id = dict((pid, i) for i, pid in enumerate([p.pid for p in pool._pool])) start = time.time() stoprunningsim = [False] def terminate_sim(): # We acquire the datamanager session lock so that if a process is in the # middle of writing data, it won't be terminated until its finished, # meaning we can safely terminate the process without worrying about # data loss. session.acquire() pool.terminate() session.release() stoprunningsim[0] = True if gui: if verbose is None: verbose = False controller = GuiTaskController(numprocesses, terminate_sim, verbose=verbose, will_report=will_report) else: if verbose is None: verbose = True controller = TextTaskController(numprocesses, terminate_sim, verbose=verbose) for i in range(numprocesses): controller.update_process(i, 0, 0, 'No task information') i = 0 controller.update_overall(0, numitems) def empty_message_queue(): while not message_queue.empty(): try: pid, taskname, elapsed, complete = message_queue.get_nowait() controller.update_process(pid_to_id[pid], elapsed, complete, taskname) except QueueEmpty: break controller.update() while True: try: # This waits 0.1s for a new result, and otherwise raises a # TimeoutError that allows the GUI to update the percentage # complete nextresult = results.next(0.1) empty_message_queue() i = i+1 elapsed = time.time()-start complete = 0.0 controller.update_overall(i, numitems) except StopIteration: terminate_sim() print 'Finished.' 
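            # All items have been consumed; terminate_sim() above shuts down
            # the worker pool (waiting for any in-progress data writes), so we
            # can leave the result-polling loop.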
break except (KeyboardInterrupt, SystemExit): terminate_sim() print 'Terminated task processes' raise except multiprocessing.TimeoutError: empty_message_queue() if stoprunningsim[0]: print 'Terminated task processes' break controller.destroy() # We store these values as global values, which are initialised by # pool_initializer on each process task_object = None task_dataman = None task_session = None task_message_queue = None def pool_initializer(process_number_queue, message_queue, dataman, session, task, initargs, initkwds): global task_object, task_dataman, task_session, task_message_queue n = process_number_queue.get() init_method = task.__init__ fc = init_method.func_code # Checks if there is a process_number argument explicitly given in the # __init__ method of the task class, the co_flags&8 checks i there is a # **kwds parameter in the definition if 'process_number' in fc.co_varnames[:fc.co_argcount] or fc.co_flags&8: initkwds['process_number'] = n task_object = task(*initargs, **initkwds) task_dataman = dataman task_session = session task_message_queue = message_queue def task_reporter(elapsed, complete): # If the task class defines a task name, we can display it with the # percentage complete if hasattr(task_object, 'taskname'): taskname = task_object.taskname else: taskname = 'No task information' # This queue is used by the main loop in run_tasks task_message_queue.put((os.getpid(), taskname, elapsed, complete)) def task_compute(args): if not isinstance(args, tuple): args = (args,) # We check if the task function has a report argument, and if it does we # pass it task_reporter so that it can integrate with the GUI kwds = {} fc = task_object.__call__.func_code if 'report' in fc.co_varnames[:fc.co_argcount] or fc.co_flags&8: kwds['report'] = task_reporter result = task_object(*args, **kwds) # Save the results, with a unique key, to the locking session of the dataman task_session[task_dataman.make_unique_key()] = result class TaskController(object): def __init__(self, processes, terminator, verbose=True): self.verbose = verbose self.completion = zeros(processes) self.numitems, self.numdone = None, 0 self.start_time = time.time() def update_process(self, i, elapsed, complete, msg): self.completion[i] = complete%1.0 if self.verbose: print 'Process '+str(i)+': '+make_text_report(elapsed, complete)+': '+msg _, msg = self.get_overall_completion() print msg def get_overall_completion(self): complete = 0.0 numitems, numdone = self.numitems, self.numdone elapsed = time.time()-self.start_time if numitems is not None: complete = (numdone+sum(self.completion))/numitems txt = 'Overall, '+str(numdone)+' done' if numitems is not None: txt += ' of '+str(numitems)+': '+make_text_report(elapsed, complete) return complete, txt def update_overall(self, numdone, numitems): self.numdone = numdone self.numitems = numitems def recompute_overall(self): pass def update(self): pass def destroy(self): pass class TextTaskController(TaskController): def update_overall(self, numdone, numitems): TaskController.update_overall(self, numdone, numitems) _, msg = self.get_overall_completion() print msg # task control GUI class GuiTaskController(Tkinter.Tk, TaskController): def __init__(self, processes, terminator, width=600, verbose=False, will_report=True): Tkinter.Tk.__init__(self, None) TaskController.__init__(self, processes, terminator, verbose=verbose) self.parent = None self.grid() button = Tkinter.Button(self, text='Terminate task', command=terminator) button.grid(column=0, row=0) self.pb_width = width 
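        # Build one canvas-based progress bar for the overall progress, plus
        # one per worker process when the tasks report progress (will_report);
        # each bar is drawn initially as a grey rectangle.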
self.progressbars = [] numbars = 1 self.will_report = will_report if will_report: numbars += processes for i in xrange(numbars): can = Tkinter.Canvas(self, width=width, height=30) can.grid(column=0, row=1+i) can.create_rectangle(0, 0, width, 30, fill='#aaaaaa') if i float(other) else: raise DimensionMismatchError("GreaterThan", self.dim, other.dim) elif is_scalar_type(other): if other == 0 or other == 0.: return float(self) > other if numpy.isneginf(other): return True if numpy.isposinf(other): return False if self.is_dimensionless(): return float(self) > other else: raise DimensionMismatchError("GreaterThan", self.dim, Dimension()) else: return NotImplemented #return super(Quantity,self).__gt__(other) def __ge__(self, other): if isinstance(other, Quantity): if self.dim == other.dim: return float(self) >= float(other) else: raise DimensionMismatchError("GreaterThanOrEquals", self.dim, other.dim) elif is_scalar_type(other): if other == 0 or other == 0.: return float(self) >= other if numpy.isneginf(other): return True if numpy.isposinf(other): return False if self.is_dimensionless(): return float(self) >= other else: raise DimensionMismatchError("GreaterThanOrEquals", self.dim, Dimension()) else: return NotImplemented #return super(Quantity,self).__ge__(other) def __eq__(self, other): if isinstance(other, Quantity): if self.dim == other.dim: return float(self) == float(other) else: raise DimensionMismatchError("Equals", self.dim, other.dim) elif is_scalar_type(other): if other == 0 or other == 0. or numpy.isinf(other): return float(self) == other if self.dim.is_dimensionless(): return float(self) == other else: raise DimensionMismatchError("Equals", self.dim, Dimension()) else: return NotImplemented #return super(Quantity,self).__eq__(other) def __ne__(self, other): if isinstance(other, Quantity): if self.dim == other.dim: return float(self) != float(other) else: raise DimensionMismatchError("Equals", self.dim, other.dim) elif is_scalar_type(other): if other == 0 or other == 0. or numpy.isinf(other): return float(self) != other if self.dim.is_dimensionless(): return float(self) != other else: raise DimensionMismatchError("NotEquals", self.dim, Dimension()) else: return NotImplemented #return super(Quantity,self).__ne__(other) #### MAKE QUANTITY PICKABLE #### def __reduce__(self): return (quantity_with_dimensions, (float(self), self.dim)) class Unit(Quantity): ''' A physical unit Normally, you do not need to worry about the implementation of units. They are derived from the :class:`Quantity` object with some additional information (name and string representation). You can define new units which will be used when generating string representations of quantities simply by doing an arithmetical operation with only units, for example:: Nm = newton * metre Note that operations with units are slower than operations with :class:`Quantity` objects, so for efficiency if you do not need the extra information that a :class:`Unit` object carries around, write ``1*second`` in preference to ``second``. ''' # original documentation """A physical unit Basically, a unit is just a quantity with given dimensions, e.g. mvolt = 0.001 with the dimensions of voltage. The units module defines a large number of standard units, and you can also define your own (see below). The unit class also keeps track of various things that were used to define it so as to generate a nice string representation of it. See Representation below. 
Typical usage: x = 3 * mvolt # returns a quantity print x.in_unit(uvolt) # returns 3000 uV Standard units: The units class has the following fundamental units: metre, kilogram, second, amp, kelvin, mole, candle And these additional basic units: radian, steradian, hertz, newton, pascal, joule, watt, coulomb, volt, farad, ohm, siemens, weber, tesla, henry, celsius, lumen, lux, becquerel, gray, sievert, katal And additionally, it includes all scaled versions of these units using the following prefixes Factor Name Prefix ----- ---- ------ 10^24 yotta Y 10^21 zetta Z 10^18 exa E 10^15 peta P 10^12 tera T 10^9 giga G 10^6 mega M 10^3 kilo k 10^2 hecto h 10^1 deka da 1 10^-1 deci d 10^-2 centi c 10^-3 milli m 10^-6 micro u (\mu in SI) 10^-9 nano n 10^-12 pico p 10^-15 femto f 10^-18 atto a 10^-21 zepto z 10^-24 yocto y So for example nohm, ytesla, etc. are all defined. Defining your own: It can be useful to define your own units for printing purposes. So for example, to define the newton metre, you write: Nm = newton * metre Writing: print (1*Nm).in_unit(Nm) will return "1 Nm" because the Unit class generates a new display name of "Nm" from the display names "N" and "m" for newtons and metres automatically (see Representation below). To register this unit for use in the automatic printing of the Quantity.in_best_unit() method, see the documentation for the UnitRegistry class. Construction: The best way to construct a new unit is to use standard units already defined and arithmetic operations, e.g. newton*metre. See the documentation for __init__ and the static methods create(...) and create_scaled_units(...) for more details. If you don't like the automatically generated display name for the unit, use the set_display_name(name) method. Representation: A new unit defined by multiplication, division or taking powers generates a name for the unit automatically, so that for example the name for pfarad/mmetre**2 is "pF/mm^2", etc. If you don't like the automatically generated name, use the set_display_name(name) method. """ __slots__ = ["dim", "scale", "scalefactor", "dispname", "name", "iscompound"] #### CONSTRUCTION #### def __init__(self, value): """Initialises a dimensionless unit """ super(Unit, self).__init__(value) self.dim = Dimension() self.scale = [ "", "", "", "", "", "", "" ] self.scalefactor = "" self.dispname = "" self.iscompound = False def __new__(typ, *args, **kw): obj = super(Unit, typ).__new__(typ, *args, **kw) global automatically_register_units if automatically_register_units: register_new_unit(obj) return obj @staticmethod def create(dim, name="", dispname="", scalefactor="", **keywords): """Creates a new named unit dim -- the dimensions of the unit name -- the full name of the unit, e.g. volt dispname -- the display name, e.g. V scalefactor -- scaling factor, e.g. m for mvolt keywords -- the scaling for each SI dimension, e.g. length="m", mass="-1", etc. """ scale = [ "", "", "", "", "", "", "" ] for k in keywords: scale[_di[k]] = keywords[k] v = 1.0 for s, i in izip(scale, dim._dims): if i: v *= _siprefixes[s] ** i u = Unit(v * _siprefixes[scalefactor]) u.dim = dim u.scale = scale u.scalefactor = scalefactor + "" u.name = name + "" u.dispname = dispname + "" u.iscompound = False return u @staticmethod def create_scaled_unit(baseunit, scalefactor): """Create a scaled unit from a base unit baseunit -- e.g. volt, amp scalefactor -- e.g. 
"m" for mvolt, mamp """ u = Unit(float(baseunit) * _siprefixes[scalefactor]) u.dim = baseunit.dim u.scale = baseunit.scale u.scalefactor = scalefactor u.name = scalefactor + baseunit.name u.dispname = scalefactor + baseunit.dispname u.iscompound = False return u #### METHODS #### def set_name(self, name): """Sets the name for the unit """ self.name = name def set_display_name(self, name): """Sets the display name for the unit """ self.dispname = name #### REPRESENTATION #### def __repr__(self): return self.__str__() def __str__(self): if self.dispname == "": s = self.scalefactor + " " for i in range(7): if self.dim._dims[i]: s += self.scale[i] + _ilabel[i] if self.dim._dims[i] != 1: s += "^" + str(self.dim._dims[i]) s += " " if not len(s): return "1" return s.strip() else: return self.dispname #### ARITHMETIC #### def __mul__(self, other): if isinstance(other, Unit): u = Unit(float(self) * float(other)) u.name = self.name + other.name u.dispname = self.dispname + ' ' + other.dispname u.dim = self.dim * other.dim u.iscompound = True return u else: return super(Unit, self).__mul__(other) def __div__(self, other): if isinstance(other, Unit): u = Unit(float(self) / float(other)) u.name = self.name + 'inv_' + other.name + '_endinv' if other.iscompound: u.dispname = '(' + self.dispname + ')' else: u.dispname = self.dispname u.dispname += '/' if other.iscompound: u.dispname += '(' + other.dispname + ')' else: u.dispname += other.dispname u.dim = self.dim / other.dim u.iscompound = True return u else: return super(Unit, self).__div__(other) def __pow__(self, other): if is_scalar_type(other): u = Unit(float(self) ** other) u.name = self.name + 'pow_' + str(other) + '_endpow' if self.iscompound: u.dispname = '(' + self.dispname + ')' else: u.dispname = self.dispname u.dispname += '^' + str(other) u.dim = self.dim ** other return u else: return super(Unit, self).__mul__(other) automatically_register_units = False #### FUNDAMENTAL UNITS metre = Unit.create(Dimension(m=1), "metre", "m") meter = Unit.create(Dimension(m=1), "meter", "m") kilogram = Unit.create(Dimension(kg=1), "kilogram", "kg") gram = Unit.create_scaled_unit(kilogram, "m") gram.set_name('gram') gram.set_display_name('g') gramme = Unit.create_scaled_unit(kilogram, "m") gramme.set_name('gramme') gramme.set_display_name('g') second = Unit.create(Dimension(s=1), "second", "s") amp = Unit.create(Dimension(A=1), "amp", "A") kelvin = Unit.create(Dimension(K=1), "kelvin", "K") mole = Unit.create(Dimension(mol=1), "mole", "mol") candle = Unit.create(Dimension(candle=1), "candle", "cd") fundamental_units = [ metre, meter, gram, second, amp, kelvin, mole, candle ] #### DERIVED UNITS, from http://physics.nist.gov/cuu/Units/units.html derived_unit_table = \ [\ [ 'radian', 'rad', Dimension() ], \ [ 'steradian', 'sr', Dimension() ], \ [ 'hertz', 'Hz', Dimension(s= -1) ], \ [ 'newton', 'N', Dimension(m=1, kg=1, s= -2) ], \ [ 'pascal', 'Pa', Dimension(m= -1, kg=1, s= -2) ], \ [ 'joule', 'J', Dimension(m=2, kg=1, s= -2) ], \ [ 'watt', 'W', Dimension(m=2, kg=1, s= -3) ], \ [ 'coulomb', 'C', Dimension(s=1, A=1) ], \ [ 'volt', 'V', Dimension(m=2, kg=1, s= -3, A= -1) ], \ [ 'farad', 'F', Dimension(m= -2, kg= -1, s=4, A=2) ], \ [ 'ohm', 'ohm', Dimension(m=2, kg=1, s= -3, A= -2) ], \ [ 'siemens', 'S', Dimension(m= -2, kg= -1, s=3, A=2) ], \ [ 'weber', 'Wb', Dimension(m=2, kg=1, s= -2, A= -1) ], \ [ 'tesla', 'T', Dimension(kg=1, s= -2, A= -1) ], \ [ 'henry', 'H', Dimension(m=2, kg=1, s= -2, A= -2) ], \ [ 'celsius', 'degC', Dimension(K=1) ], \ [ 'lumen', 'lm', 
Dimension(cd=1) ], \ [ 'lux', 'lx', Dimension(m= -2, cd=1) ], \ [ 'becquerel', 'Bq', Dimension(s= -1) ], \ [ 'gray', 'Gy', Dimension(m=2, s= -2) ], \ [ 'sievert', 'Sv', Dimension(m=2, s= -2) ], \ [ 'katal', 'kat', Dimension(s= -1, mol=1) ]\ ] # Pointless list only here so that static analysis in Eclipse works ok # All the values here are overwritten by the code below volt = Unit(1); mvolt = Unit(1); uvolt = Unit(1) namp = Unit(1); mamp = Unit(1); uamp = Unit(1); pamp = Unit(1) ohm = Unit(1); Mohm = Unit(1); kohm = Unit(1) siemens = Unit(1); msiemens = Unit(1); usiemens = Unit(1) hertz = Unit(1); khertz = Unit(1); Mhertz = Unit(1) farad = Unit(1); mfarad = Unit(1); ufarad = Unit(1); nfarad = Unit(1) msecond = Unit(1) # Generate derived unit objects and make a table of base units from these and the fundamental ones base_units = fundamental_units + [gramme, kilogram] # make a copy for _du in derived_unit_table: _u = Unit.create(_du[2], _du[0], _du[1]) exec _du[0] + "=_u" base_units.append(_u) all_units = base_units + [] # Generate scaled units for all base units scaled_units = [] for _bu in base_units: for _k in _siprefixes.keys(): if len(_k): _u = Unit.create_scaled_unit(_bu, _k) exec _k + _bu.name + "=_u" all_units.append(_u) if not _k in ["da", "d", "c", "h"]: scaled_units.append(_u) # Generate 2nd and 3rd powers for all scaled base units powered_units = [] for bu in all_units + []: for i in [2, 3]: u = bu ** i u.name = bu.name + str(i) exec bu.name + str(i) + '=u' all_units.append(u) if not bu.scalefactor in ['da', 'd', 'c', 'h']: powered_units.append(u) # Define additional units # Current list from http://physics.nist.gov/cuu/Units/units.html, far from complete additional_units = [ pascal * second, newton * metre, watt / metre ** 2, joule / kelvin, \ joule / (kilogram * kelvin), joule / kilogram, watt / (metre * kelvin), \ joule / metre ** 3, volt / metre ** 3, coulomb / metre ** 3, coulomb / metre ** 2, \ farad / metre, henry / metre, joule / mole, joule / (mole * kelvin), \ coulomb / kilogram, gray / second, katal / metre ** 3 ] automatically_register_units = True class UnitRegistry(object): """Stores known units for printing in best units All a user needs to do is to use the register_new_unit(u) function. Default registries: The units module defines three registries, the standard units, user units, and additional units. Finding best units is done by first checking standard, then user, then additional. New user units are added by using the register_new_unit(u) function. Standard units includes all the basic non-compound unit names built in to the module, including volt, amp, etc. Additional units defines some compound units like newton metre (Nm) etc. Methods: add(u) - add a new unit __getitem__(x) - get the best unit for quantity x e.g. UnitRegistry ur; ur[3*mvolt] returns mvolt """ def __init__(self): self.objs = [] def add(self, u): """Add a unit to the registry """ self.objs.append(u) def __getitem__(self, x): """Returns the best unit for quantity x The algorithm is to consider the value: m=abs(x/u) for all matching units u. If there is a unit u with a value of m in [1,1000) then we select that unit. Otherwise, we select the first matching unit (which will typically be the unscaled version). 
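        A minimal sketch of direct use (the registry is normally populated
        via register_new_unit rather than by hand)::

            ur = UnitRegistry()
            ur.add(mvolt)
            ur[3 * mvolt]    # returns mvolt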
""" matching = filter(lambda o: have_same_dimensions(o, x), self.objs) if len(matching) == 0: raise KeyError("Unit not found in registry.") floatrep = filter(lambda o: 0.1 <= abs(float(x / o)) < 100, matching) if len(floatrep): return floatrep[0] else: return matching[0] def register_new_unit(u): """Register a new unit for automatic displaying of quantities Example usage: 2.0*farad/metre**2 = 2.0 m^-4 kg^-1 s^4 A^2 register_new_unit(pfarad / mmetre**2) 2.0*farad/metre**2 = 2000000.0 pF/mm^2 """ UserUnitRegister.add(u) standard_unit_register = UnitRegistry() additional_unit_register = UnitRegistry() UserUnitRegister = UnitRegistry() map(standard_unit_register.add, base_units + scaled_units + powered_units) map(additional_unit_register.add, additional_units) def all_registered_units(*regs): """Returns all registered units in the correct order """ if not len(regs): regs = [ standard_unit_register, UserUnitRegister, additional_unit_register] for r in regs: for u in r.objs: yield u def _get_best_unit(x, *regs): """Returns the best unit for quantity x Checks the registries regs, unless none are provided in which case it will check the standard, user and additional unit registers in turn. """ if get_dimensions(x) == Dimension(): return Quantity(1) if len(regs): for r in regs: try: return r[x] except KeyError: pass return Quantity.with_dimensions(1, x.dim) else: return _get_best_unit(x, standard_unit_register, UserUnitRegister, additional_unit_register) def get_unit(x, *regs): ''' Find the most appropriate consistent unit from the unit registries, or just return a Quantity with the same dimensions and value 1 ''' for u in all_registered_units(*regs): if is_equal(float(u), 1) and have_same_dimensions(u, x): return u return Quantity.with_dimensions(1, get_dimensions(x)) def get_unit_fast(x): ''' Return a quantity with value 1 and the same dimensions ''' return Quantity.with_dimensions(1, get_dimensions(x)) #### DECORATORS def check_units(**au): """Decorator to check units of arguments passed to a function **Sample usage:** :: @check_units(I=amp,R=ohm,wibble=metre,result=volt) def getvoltage(I,R,**k): return I*R You don't have to check the units of every variable in the function, and you can define what the units should be for variables that aren't explicitly named in the definition of the function. For example, the code above checks that the variable wibble should be a length, so writing:: getvoltage(1*amp,1*ohm,wibble=1) would fail, but:: getvoltage(1*amp,1*ohm,wibble=1*metre) would pass. String arguments are not checked (e.g. ``getvoltage(wibble='hello')`` would pass). The special name ``result`` is for the return value of the function. An error in the input value raises a :exc:`DimensionMismatchError`, and an error in the return value raises an ``AssertionError`` (because it is a code problem rather than a value problem). **Notes** This decorator will destroy the signature of the original function, and replace it with the signature ``(*args, **kwds)``. Other decorators will do the same thing, and this decorator critically needs to know the signature of the function it is acting on, so it is important that it is the first decorator to act on a function. It cannot be used in combination with another decorator that also needs to know the signature of the function. 
""" def do_check_units(f): @wraps(f) def new_f(*args, **kwds): newkeyset = kwds.copy() arg_names = f.func_code.co_varnames[0:f.func_code.co_argcount] for (n, v) in zip(arg_names, args[0:f.func_code.co_argcount]): newkeyset[n] = v for k in newkeyset.iterkeys(): if (k in au.keys()) and not isinstance(newkeyset[k], str): # string variables are allowed to pass, the presumption is they name another variable if not have_same_dimensions(newkeyset[k], au[k]): raise DimensionMismatchError("Function " + f.__name__ + " variable " + k + " should have dimensions of " + str(au[k]), get_dimensions(newkeyset[k])) result = f(*args, **kwds) if "result" in au: assert have_same_dimensions(result, au["result"]), \ "Function " + f.__name__ + " should return a value with unit " + str(au["result"]) + " but has returned " + str(get_dimensions(result)) return result # new_f.__name__ = f.__name__ # new_f.__doc__ = f.__doc__ # new_f.__dict__.update(f.__dict__) return new_f return do_check_units ## Note: do not normally call this, see note on importing of decorator module at the top of this module #if use_decorator: # old_check_units = check_units # def check_units(**au): # return lambda f : decorator.new_wrapper(old_check_units(**au)(f), f) # check_units.__doc__ = old_check_units.__doc__ def _check_nounits(**au): """Don't bother checking units decorator """ def dont_check_units(f): return f return dont_check_units def scalar_representation(x): if isinstance(x, Unit): return x.name u = get_unit(x) if isinstance(u, Unit): return '(' + repr(float(x)) + '*' + u.name + ')' if isinstance(x, Quantity): return '(Quantity.with_dimensions(' + repr(float(x)) + ',' + repr(x.dim._dims) + '))' return repr(x) # Remove all units if not bup.use_units: for _u in all_units: exec _u.name + "=float(_u)" check_units = _check_nounits def get_dimensions(obj): return Dimension() def is_dimensionless(obj): return True def have_same_dimensions(obj1, obj2): return True def get_unit(x, *regs): return 1. def scalar_representation(x): return '1.0' # Add unit names to __all__ all_unit_names = [u.name for u in all_units] __all__.extend(all_unit_names) if __name__ == "__main__": # # the pattern 'pat' below is a regular expression for all the unit names # # you can use it as an exclusion pattern for the epydoc api docs # base_unit_ids = set([id(_) for _ in base_units]) # l = [_ for _ in __all__ if id(locals()[_]) in base_unit_ids] # anybaseunit = '('+'|'.join(l)+')' # print anybaseunit # prefixes = [_ for _ in _siprefixes.keys() if _] # anyprefix = '('+'|'.join(prefixes)+')' # print anyprefix # pat = anyprefix+'?'+anybaseunit+'[23]?' 
# print pat # import re # for x in __all__: # if not len(re.findall(pat,x)): # print x from numpy import * # shorthand function used for example code below def pE(vname, str): uname = vname if vname == "": uname = "temp" exec(uname + "=" + str) if vname != "": print vname, "=", print str, if locals()[uname] != None: print '=', locals()[uname] else: print return locals()[uname] V = pE("V", "3 * volt") I = pE("I", "2 * amp") a = pE("a", "array([1,2,3])") print R = pE("R", "V/I") pE("", "I*R") print pE("", "a*V") pE("", "V*a") pE("", "a+V") print pE("", "1000*metre") pE("", "1000*mmetre") print pE("", "(2*volt).in_unit(mvolt)") pE("", "(2*volt)/mvolt") print "(2*volt).in_unit(amp) =", try: print (2 * volt).in_unit(amp) except DimensionMismatchError, i: print "DimensionMismatchError:", i print pE("", "have_same_dimensions(1*volt,1*amp*ohm)") pE("", "have_same_dimensions(1*volt*second,1*amp*ohm)") print pE("", "(ufarad/nmetre)**2") print pE("", "2.0*farad/metre**2") pE("", "register_new_unit(pfarad / mmetre**2)") pE("", "2.0*farad/metre**2") print print "Some decorator examples (see code):" print @check_units(I=amp, R=ohm, wibble=metre, result=volt) def getvoltage(I, R, *args, **k): return I * R try: print getvoltage(1 * amp, 2 * ohm, 20) print getvoltage(R=2 * ohm, I=1 * amp, wibble=7 * mmetre) print getvoltage(1 * amp, 2 * ohm * metre) except DimensionMismatchError, inst: print "DME:", inst print pE("", "get_unit(3*msecond)") ################################################### ##### ADDITIONAL INFORMATION #SI DIMENSIONS #------------- #Quantity Unit Symbol #-------- ---- ------ #Length metre m #Mass kilogram kg #Time second s #Electric current ampere A #Temperature kelvin K #Quantity of substance mole mol #Luminosity candle cd # SI UNIT PREFIXES # ---------------- # Factor Name Prefix # ----- ---- ------ # 10^24 yotta Y # 10^21 zetta Z # 10^18 exa E # 10^15 peta P # 10^12 tera T # 10^9 giga G # 10^6 mega M # 10^3 kilo k # 10^2 hecto h # 10^1 deka da # 1 # 10^-1 deci d # 10^-2 centi c # 10^-3 milli m # 10^-6 micro u (\mu in SI) # 10^-9 nano n # 10^-12 pico p # 10^-15 femto f # 10^-18 atto a # 10^-21 zepto z # 10^-24 yocto y brian-1.3.1/brian/unitsafefunctions.py000066400000000000000000000063661167451777000200550ustar00rootroot00000000000000# ---------------------------------------------------------------------------------- # Copyright ENS, INRIA, CNRS # Contributors: Romain Brette (brette@di.ens.fr) and Dan Goodman (goodman@di.ens.fr) # # Brian is a computer program whose purpose is to simulate models # of biological neural networks. # # This software is governed by the CeCILL license under French law and # abiding by the rules of distribution of free software. You can use, # modify and/ or redistribute the software under the terms of the CeCILL # license as circulated by CEA, CNRS and INRIA at the following URL # "http://www.cecill.info". # # As a counterpart to the access to the source code and rights to copy, # modify and redistribute granted by the license, users are provided only # with a limited warranty and the software's author, the holder of the # economic rights, and the successive licensors have only limited # liability. 
# # In this respect, the user's attention is drawn to the risks associated # with loading, using, modifying and/or developing or reproducing the # software by the user in light of its specific status of free software, # that may mean that it is complicated to manipulate, and that also # therefore means that it is reserved for developers and experienced # professionals having in-depth computer knowledge. Users are therefore # encouraged to load and test the software's suitability as regards their # requirements in conditions enabling the security of their systems and/or # data to be ensured and, more generally, to use and operate it in the # same conditions as regards security. # # The fact that you are presently reading this means that you have had # knowledge of the CeCILL license and that you accept its terms. # ---------------------------------------------------------------------------------- # """ Functions which check the dimensions of their arguments, etc. Functions updated to provide Quantity functionality --------------------------------------------------- With any dimensions: * sqrt Dimensionless: * log, exp * sin, cos, tan * arcsin, arccos, arctan * sinh, cosh, tanh * arcsinh, arccosh, arctanh With homogeneous dimensions: * dot """ from brian_unit_prefs import bup from units import * import numpy, math, scipy from numpy import * from numpy.random import * from scipy.integrate import * import inspect __all__ = [] # these functions are the ones that will work with the template immediately below, and # extend the numpy functions to know about Quantity objects quantity_versions = [ 'sqrt', 'log', 'exp', 'sin', 'cos', 'tan', 'arcsin', 'arccos', 'arctan', 'sinh', 'cosh', 'tanh', 'arcsinh', 'arccosh', 'arctanh' ] def make_quantity_version(func): funcname = func.__name__ def f(x): if isinstance(x, Quantity): return getattr(x, funcname)() return func(x) f.__name__ = func.__name__ f.__doc__ = func.__doc__ if hasattr(func, '__dict__'): f.__dict__.update(func.__dict__) return f for name in quantity_versions: if bup.use_units: exec name + '=make_quantity_version(' + name + ')' __all__.append(name) brian-1.3.1/brian/unused/000077500000000000000000000000001167451777000152245ustar00rootroot00000000000000brian-1.3.1/brian/unused/current.py000066400000000000000000000141411167451777000172610ustar00rootroot00000000000000# ---------------------------------------------------------------------------------- # Copyright ENS, INRIA, CNRS # Contributors: Romain Brette (brette@di.ens.fr) and Dan Goodman (goodman@di.ens.fr) # # Brian is a computer program whose purpose is to simulate models # of biological neural networks. # # This software is governed by the CeCILL license under French law and # abiding by the rules of distribution of free software. You can use, # modify and/ or redistribute the software under the terms of the CeCILL # license as circulated by CEA, CNRS and INRIA at the following URL # "http://www.cecill.info". # # As a counterpart to the access to the source code and rights to copy, # modify and redistribute granted by the license, users are provided only # with a limited warranty and the software's author, the holder of the # economic rights, and the successive licensors have only limited # liability. 
# # In this respect, the user's attention is drawn to the risks associated # with loading, using, modifying and/or developing or reproducing the # software by the user in light of its specific status of free software, # that may mean that it is complicated to manipulate, and that also # therefore means that it is reserved for developers and experienced # professionals having in-depth computer knowledge. Users are therefore # encouraged to load and test the software's suitability as regards their # requirements in conditions enabling the security of their systems and/or # data to be ensured and, more generally, to use and operate it in the # same conditions as regards security. # # The fact that you are presently reading this means that you have had # knowledge of the CeCILL license and that you accept its terms. # ---------------------------------------------------------------------------------- # ''' Currents - Objects that can be plugged into membrane equations ''' import types from operator import isSequenceType from stateupdater import StateUpdater from units import * __all__ = ['find_capacitance', 'Current', 'exp_current', 'exp_conductance', \ 'leak_current'] def find_capacitance(model): ''' Tries to find the membrane capacitance from a set of differential equations or a model given as a dictionnary (with typical keys 'model', 'threshold' and 'reset'). ''' if type(model) == types.DictType: if 'Cm' in model: return model['Cm'] if 'C' in model: return model['C'] # Failed: look the equations if 'model' in model: model = model['model'] else: # no clue! raise TypeError, 'Strange model!' if isinstance(model, StateUpdater): if hasattr(model, 'Cm'): return model.Cm if hasattr(model, 'C'): return model.C if isSequenceType(model): model = model[0] # The first equation is the membrane equation if type(model) == types.FunctionType: if 'Cm' in model.func_globals: return model.func_globals['Cm'] if 'C' in model.func_globals: return model.func_globals['C'] # Nothing was found! raise TypeError, 'No capacitance found!' class Current(object): ''' A membrane current. ''' def __init__(self, I=lambda v:0, eqs=[]): ''' I = current function, e.g. I=lambda v,g: g*(v-E) eqs= differential system, e.g. eqs=[lambda v,g:-g] ''' self.I = I self.eqs = eqs def __radd__(self, model): ''' model + self: addition of a current to a membrane equation ''' Cm = find_capacitance(model) # Dictionnary? modeldict = None if type(model) == types.DictType: modeldict = model model = model['model'] # Only one function? if type(model) == types.FunctionType: model = [model] # Assume sequence type n = len(model) # number of equations in model m = len(self.eqs) # number of equations in current print n, m newmodel = [0] * (n + m) # Insert current in membrane equation newmodel[0] = lambda * args: model[0](*(args[0:n])) + \ self.I(args[0], *args[n:n + m]) / Cm # Adjust the number of variables # Tricky bit here for i in range(1, n): md = model[i] newmodel[i] = lambda * args: md(*(args[0:n])) #newmodel[1:n]=[lambda *args: md(*(args[0:n])) for md in model[1:n]] # Add current variables # Adjust the number of variables for i in range(n, n + m): md = self.eqs[i - n] newmodel[i] = lambda * args: md(args[0], *(args[n:n + m])) #newmodel[n:n+m]=[lambda *args: md(args[0],*(args[n:n+m])) for md in self.eqs[n:n+m]] # Dictionnary? 
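        # If the caller passed the model as a dictionary (with 'model',
        # 'threshold', ... keys), store the extended equation list back under
        # its 'model' key; otherwise just return the new list of equations.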
if modeldict != None: modeldict['model'] = newmodel return modeldict else: return newmodel # --------------------- # Synaptic currents # TODO: alpha and biexp # --------------------- @check_units(tau=second) def exp_current(tau): ''' Exponential current. I=g dg=-g/tau ''' return Current(I=lambda v, g:g, eqs=[lambda v, g:-g / tau]) @check_units(tau=second, E=volt) def exp_conductance(tau, E): ''' Exponential conductance. I=g*(E-v) dg=-g/tau ''' return Current(I=lambda v, g:g * (E - v), eqs=[lambda v, g:-g / tau]) # ------------------ # Intrinsic currents # ------------------ @check_units(gl=siemens, El=volt) def leak_current(gl, El): ''' Leak current. I=gl*(El-v) ''' return Current(I=lambda v:gl * (v - El), eqs=[]) # Test if __name__ == '__main__': C = 2. dv = lambda v:-v / C #mymodel={'model':[dv],'Cm':3} m = dv + exp_conductance(10 * msecond, E= -70 * mvolt) + \ exp_conductance(20 * msecond, E= -60 * mvolt) print m print m[0](1. * mvolt, 2., 3.), m[1](1. * volt, 2., 3.), m[2](1. * volt, 2., 3.) brian-1.3.1/brian/utils/000077500000000000000000000000001167451777000150615ustar00rootroot00000000000000brian-1.3.1/brian/utils/__init__.py000066400000000000000000000035351167451777000172000ustar00rootroot00000000000000# ---------------------------------------------------------------------------------- # Copyright ENS, INRIA, CNRS # Contributors: Romain Brette (brette@di.ens.fr) and Dan Goodman (goodman@di.ens.fr) # # Brian is a computer program whose purpose is to simulate models # of biological neural networks. # # This software is governed by the CeCILL license under French law and # abiding by the rules of distribution of free software. You can use, # modify and/ or redistribute the software under the terms of the CeCILL # license as circulated by CEA, CNRS and INRIA at the following URL # "http://www.cecill.info". # # As a counterpart to the access to the source code and rights to copy, # modify and redistribute granted by the license, users are provided only # with a limited warranty and the software's author, the holder of the # economic rights, and the successive licensors have only limited # liability. # # In this respect, the user's attention is drawn to the risks associated # with loading, using, modifying and/or developing or reproducing the # software by the user in light of its specific status of free software, # that may mean that it is complicated to manipulate, and that also # therefore means that it is reserved for developers and experienced # professionals having in-depth computer knowledge. Users are therefore # encouraged to load and test the software's suitability as regards their # requirements in conditions enabling the security of their systems and/or # data to be ensured and, more generally, to use and operate it in the # same conditions as regards security. # # The fact that you are presently reading this means that you have had # knowledge of the CeCILL license and that you accept its terms. # ---------------------------------------------------------------------------------- # brian-1.3.1/brian/utils/approximatecomparisons.py000066400000000000000000000107601167451777000222460ustar00rootroot00000000000000# ---------------------------------------------------------------------------------- # Copyright ENS, INRIA, CNRS # Contributors: Romain Brette (brette@di.ens.fr) and Dan Goodman (goodman@di.ens.fr) # # Brian is a computer program whose purpose is to simulate models # of biological neural networks. 
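# ----------------------------------------------------------------------------------
# A brief sketch (not part of the original source) of what the Current.__radd__
# composition in brian/unused/current.py above builds, using the same plain-function
# model style as that module's own test block. With a one-variable membrane equation
# dv and an exponential conductance carrying one extra state variable g,
#
#     m = dv + exp_conductance(tau, E)
#
# is a list of two functions [f_v, f_g] with
#
#     f_v(v, g) = dv(v) + g*(E - v)/Cm    # the current injected into the membrane equation
#     f_g(v, g) = -g/tau                  # the conductance's own dynamics
#
# where Cm is whatever find_capacitance locates (here, a variable named 'Cm' or 'C'
# in the globals of dv). Mirroring the test at the end of that file, with an
# illustrative (assumed) capacitance value:
#
# >>> C = 200e-12                                        # assumed value, for illustration only
# >>> dv = lambda v: -v / (20 * msecond)
# >>> m = dv + exp_conductance(5 * msecond, E=-70 * mvolt)
# ----------------------------------------------------------------------------------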
# # This software is governed by the CeCILL license under French law and # abiding by the rules of distribution of free software. You can use, # modify and/ or redistribute the software under the terms of the CeCILL # license as circulated by CEA, CNRS and INRIA at the following URL # "http://www.cecill.info". # # As a counterpart to the access to the source code and rights to copy, # modify and redistribute granted by the license, users are provided only # with a limited warranty and the software's author, the holder of the # economic rights, and the successive licensors have only limited # liability. # # In this respect, the user's attention is drawn to the risks associated # with loading, using, modifying and/or developing or reproducing the # software by the user in light of its specific status of free software, # that may mean that it is complicated to manipulate, and that also # therefore means that it is reserved for developers and experienced # professionals having in-depth computer knowledge. Users are therefore # encouraged to load and test the software's suitability as regards their # requirements in conditions enabling the security of their systems and/or # data to be ensured and, more generally, to use and operate it in the # same conditions as regards security. # # The fact that you are presently reading this means that you have had # knowledge of the CeCILL license and that you accept its terms. # ---------------------------------------------------------------------------------- # """Utilities to test whether two floating point numbers are equal Use these functions for very precise tests of equality: -- is_equal(x,y), for x=y -- is_less_than_or_equal(x,y), for x<=y -- is_greater_than_or_equal(x,y), for x>=y Use these functions for less precise tests (for example, if you have done some operations on two variables): -- is_approx_equal(x,y), for x=y -- is_approx_less_than_or_equal(x,y), for x<=y -- is_approx_greater_than_or_equal(x,y), for x>=y The underlying mechanism is that the more precise version tests for equality using machine epsilon precision, that is, x=y if abs(x-y)<abs(x)*epsilon. The less precise mechanism simply uses 10000*epsilon instead of epsilon. Use this function for testing if you want to specify an absolute tolerance: -- is_within_absolute_tolerance(x,y[,absolutetolerance]) The default tolerance is the sqrt of epsilon, or about 1e-8 for a 64 bit float. Note also that you can use the numpy function: -- allclose(a, b, rtol = 1e-5, atol = 1e-8) Where rtol is the relative tolerance, and atol is the absolute tolerance which comes into play when the numbers are very close to zero. Warning: none of these functions can be guaranteed to work in the way you might expect them to. Errors can accumulate to the point where even 10000*epsilon is an inappropriate test for approximate equality. """ import math # This finds the 'machine epsilon' for the current hardware float type, the # smallest value of epsilon so that 1+epsilon>1 epsilon = 1. while 1. + epsilon > 1.: epsilon /= 2 epsilon *= 2.
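# ----------------------------------------------------------------------------------
# A short commented usage sketch (not part of the original source) for the helpers
# defined in this module; the values shown assume IEEE-754 64 bit floats, for which
# the loop above reproduces numpy.finfo(float).eps.
#
# >>> from brian.utils.approximatecomparisons import *
# >>> 0.1 + 0.2 == 0.3                                  # plain == fails because of rounding
# False
# >>> is_approx_equal(0.1 + 0.2, 0.3)                   # relative test using 10000*epsilon
# True
# >>> is_within_absolute_tolerance(1.0, 1.0 + 1e-9)     # default tolerance is sqrt(epsilon) ~= 1.5e-8
# True
# ----------------------------------------------------------------------------------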
# Result for 32 bit float should be: 1.1929093e-7 # For 64 bit float should be: 2.220446049250313e-16 # This value can be used for more approximate testing approxepsilon = epsilon * 10000 # This value is the default tolerance for medium sized numbers (used in the units class) defaultabsolutetolerance = math.sqrt(epsilon) # 1.4901161193847656e-008 on 64 bit system def is_equal(x, y): if x == y: return True return abs(x - y) < abs(x) * epsilon def is_less_than_or_equal(x, y): return x < y or is_equal(x, y) def is_greater_than_or_equal(x, y): return x > y or is_equal(x, y) def is_approx_equal(x, y): if x == y: return True return abs(x - y) < abs(x) * approxepsilon def is_approx_less_than_or_equal(x, y): return x < y or is_approx_equal(x, y) def is_approx_greater_than_or_equal(x, y): return x > y or is_approx_equal(x, y) def is_within_absolute_tolerance(x, y, absolutetolerance=defaultabsolutetolerance): return float(abs(x - y)) < absolutetolerance brian-1.3.1/brian/utils/ccircular/000077500000000000000000000000001167451777000170305ustar00rootroot00000000000000brian-1.3.1/brian/utils/ccircular/__init__.py000066400000000000000000000000021167451777000211310ustar00rootroot00000000000000 brian-1.3.1/brian/utils/ccircular/ccircular.h000066400000000000000000000024201167451777000211460ustar00rootroot00000000000000#ifndef _BRIAN_LIB_H #define _BRIAN_LIB_H #include #include #include #include #include using namespace std; #define neuron_value(group, neuron, state) group->S[neuron+state*(group->num_neurons)] #define BrianException std::runtime_error class CircularVector { private: inline int index(int i); inline int getitem(int i); public: long *X, cursor, n; long *retarray; CircularVector(int n); ~CircularVector(); void reinit(); void advance(int k); int __len__(); int __getitem__(int i); void __setitem__(int i, int x); void __getslice__(long **ret, int *ret_n, int i, int j); void get_conditional(long **ret, int *ret_n, int i, int j, int min, int max, int offset=0); void __setslice__(int i, int j, long *x, int n); string __repr__(); string __str__(); void expand(long n); }; class SpikeContainer { public: CircularVector *S; CircularVector *ind; int remaining_space; SpikeContainer(int m); ~SpikeContainer(); void reinit(); void push(long *x, int n); void lastspikes(long **ret, int *ret_n); void __getitem__(long **ret, int *ret_n, int i); void get_spikes(long **ret, int *ret_n, int delay, int origin, int N); void __getslice__(long **ret, int *ret_n, int i, int j); string __repr__(); string __str__(); }; #endif brian-1.3.1/brian/utils/ccircular/ccircular.i000066400000000000000000000015161167451777000211540ustar00rootroot00000000000000%module ccircular %include "exception.i" %exception { try { $action } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } %allowexception; %{ #define SWIG_FILE_WITH_INIT #include "ccircular.h" %} %include "numpy.i" %init %{ import_array(); %} %apply (double* INPLACE_ARRAY2, int DIM1, int DIM2) {(double *S, int n, int m)}; %apply (double* INPLACE_ARRAY2, int DIM1, int DIM2) {(double *M, int M_n, int M_m)}; %apply (long* INPLACE_ARRAY1, int DIM1) {(long *x, int n)}; %apply (long* INPLACE_ARRAY1, int DIM1) {(long *y, int n)}; %apply (double* INPLACE_ARRAY1, int DIM1) {(double *b, int b_n)}; %apply (double* ARGOUT_ARRAY1, int DIM1) {(double *S_out_flat, int nm)}; %apply (long** ARGOUTVIEW_ARRAY1, int* DIM1 ) {(long **ret, int *ret_n)}; %include "ccircular.h" 
brian-1.3.1/brian/utils/ccircular/ccircular.py000066400000000000000000000136421167451777000213570ustar00rootroot00000000000000# This file was automatically generated by SWIG (http://www.swig.org). # Version 1.3.36 # # Don't modify this file, modify the SWIG interface instead. # This file is compatible with both classic and new-style classes. import _ccircular import new new_instancemethod = new.instancemethod try: _swig_property = property except NameError: pass # Python < 2.2 doesn't have 'property'. def _swig_setattr_nondynamic(self,class_type,name,value,static=1): if (name == "thisown"): return self.this.own(value) if (name == "this"): if type(value).__name__ == 'PySwigObject': self.__dict__[name] = value return method = class_type.__swig_setmethods__.get(name,None) if method: return method(self,value) if (not static) or hasattr(self,name): self.__dict__[name] = value else: raise AttributeError("You cannot add attributes to %s" % self) def _swig_setattr(self,class_type,name,value): return _swig_setattr_nondynamic(self,class_type,name,value,0) def _swig_getattr(self,class_type,name): if (name == "thisown"): return self.this.own() method = class_type.__swig_getmethods__.get(name,None) if method: return method(self) raise AttributeError,name def _swig_repr(self): try: strthis = "proxy of " + self.this.__repr__() except: strthis = "" return "<%s.%s; %s >" % (self.__class__.__module__, self.__class__.__name__, strthis,) import types try: _object = types.ObjectType _newclass = 1 except AttributeError: class _object : pass _newclass = 0 del types class CircularVector(_object): __swig_setmethods__ = {} __setattr__ = lambda self, name, value: _swig_setattr(self, CircularVector, name, value) __swig_getmethods__ = {} __getattr__ = lambda self, name: _swig_getattr(self, CircularVector, name) __swig_setmethods__["X"] = _ccircular.CircularVector_X_set __swig_getmethods__["X"] = _ccircular.CircularVector_X_get if _newclass:X = _swig_property(_ccircular.CircularVector_X_get, _ccircular.CircularVector_X_set) __swig_setmethods__["cursor"] = _ccircular.CircularVector_cursor_set __swig_getmethods__["cursor"] = _ccircular.CircularVector_cursor_get if _newclass:cursor = _swig_property(_ccircular.CircularVector_cursor_get, _ccircular.CircularVector_cursor_set) __swig_setmethods__["n"] = _ccircular.CircularVector_n_set __swig_getmethods__["n"] = _ccircular.CircularVector_n_get if _newclass:n = _swig_property(_ccircular.CircularVector_n_get, _ccircular.CircularVector_n_set) __swig_setmethods__["retarray"] = _ccircular.CircularVector_retarray_set __swig_getmethods__["retarray"] = _ccircular.CircularVector_retarray_get if _newclass:retarray = _swig_property(_ccircular.CircularVector_retarray_get, _ccircular.CircularVector_retarray_set) def __init__(self, *args): this = _ccircular.new_CircularVector(*args) try: self.this.append(this) except: self.this = this __swig_destroy__ = _ccircular.delete_CircularVector __del__ = lambda self : None; def reinit(*args): return _ccircular.CircularVector_reinit(*args) def advance(*args): return _ccircular.CircularVector_advance(*args) def __len__(*args): return _ccircular.CircularVector___len__(*args) def __getitem__(*args): return _ccircular.CircularVector___getitem__(*args) def __setitem__(*args): return _ccircular.CircularVector___setitem__(*args) def __getslice__(*args): return _ccircular.CircularVector___getslice__(*args) def get_conditional(*args): return _ccircular.CircularVector_get_conditional(*args) def __setslice__(*args): return 
_ccircular.CircularVector___setslice__(*args) def __repr__(*args): return _ccircular.CircularVector___repr__(*args) def __str__(*args): return _ccircular.CircularVector___str__(*args) def expand(*args): return _ccircular.CircularVector_expand(*args) CircularVector_swigregister = _ccircular.CircularVector_swigregister CircularVector_swigregister(CircularVector) class SpikeContainer(_object): __swig_setmethods__ = {} __setattr__ = lambda self, name, value: _swig_setattr(self, SpikeContainer, name, value) __swig_getmethods__ = {} __getattr__ = lambda self, name: _swig_getattr(self, SpikeContainer, name) __swig_setmethods__["S"] = _ccircular.SpikeContainer_S_set __swig_getmethods__["S"] = _ccircular.SpikeContainer_S_get if _newclass:S = _swig_property(_ccircular.SpikeContainer_S_get, _ccircular.SpikeContainer_S_set) __swig_setmethods__["ind"] = _ccircular.SpikeContainer_ind_set __swig_getmethods__["ind"] = _ccircular.SpikeContainer_ind_get if _newclass:ind = _swig_property(_ccircular.SpikeContainer_ind_get, _ccircular.SpikeContainer_ind_set) __swig_setmethods__["remaining_space"] = _ccircular.SpikeContainer_remaining_space_set __swig_getmethods__["remaining_space"] = _ccircular.SpikeContainer_remaining_space_get if _newclass:remaining_space = _swig_property(_ccircular.SpikeContainer_remaining_space_get, _ccircular.SpikeContainer_remaining_space_set) def __init__(self, *args): this = _ccircular.new_SpikeContainer(*args) try: self.this.append(this) except: self.this = this __swig_destroy__ = _ccircular.delete_SpikeContainer __del__ = lambda self : None; def reinit(*args): return _ccircular.SpikeContainer_reinit(*args) def push(*args): return _ccircular.SpikeContainer_push(*args) def lastspikes(*args): return _ccircular.SpikeContainer_lastspikes(*args) def __getitem__(*args): return _ccircular.SpikeContainer___getitem__(*args) def get_spikes(*args): return _ccircular.SpikeContainer_get_spikes(*args) def __getslice__(*args): return _ccircular.SpikeContainer___getslice__(*args) def __repr__(*args): return _ccircular.SpikeContainer___repr__(*args) def __str__(*args): return _ccircular.SpikeContainer___str__(*args) SpikeContainer_swigregister = _ccircular.SpikeContainer_swigregister SpikeContainer_swigregister(SpikeContainer) brian-1.3.1/brian/utils/ccircular/ccircular_wrap.cxx000066400000000000000000005025451167451777000225670ustar00rootroot00000000000000/* ---------------------------------------------------------------------------- * This file was automatically generated by SWIG (http://www.swig.org). * Version 1.3.36 * * This file is not intended to be easily readable and contains a number of * coding conventions designed to improve portability and efficiency. Do not make * changes to this file unless you know what you are doing--modify the SWIG * interface file instead. 
* ----------------------------------------------------------------------------- */ #define SWIGPYTHON #define SWIG_PYTHON_DIRECTOR_NO_VTABLE #ifdef __cplusplus template class SwigValueWrapper { T *tt; public: SwigValueWrapper() : tt(0) { } SwigValueWrapper(const SwigValueWrapper& rhs) : tt(new T(*rhs.tt)) { } SwigValueWrapper(const T& t) : tt(new T(t)) { } ~SwigValueWrapper() { delete tt; } SwigValueWrapper& operator=(const T& t) { delete tt; tt = new T(t); return *this; } operator T&() const { return *tt; } T *operator&() { return tt; } private: SwigValueWrapper& operator=(const SwigValueWrapper& rhs); }; template T SwigValueInit() { return T(); } #endif /* ----------------------------------------------------------------------------- * This section contains generic SWIG labels for method/variable * declarations/attributes, and other compiler dependent labels. * ----------------------------------------------------------------------------- */ /* template workaround for compilers that cannot correctly implement the C++ standard */ #ifndef SWIGTEMPLATEDISAMBIGUATOR # if defined(__SUNPRO_CC) && (__SUNPRO_CC <= 0x560) # define SWIGTEMPLATEDISAMBIGUATOR template # elif defined(__HP_aCC) /* Needed even with `aCC -AA' when `aCC -V' reports HP ANSI C++ B3910B A.03.55 */ /* If we find a maximum version that requires this, the test would be __HP_aCC <= 35500 for A.03.55 */ # define SWIGTEMPLATEDISAMBIGUATOR template # else # define SWIGTEMPLATEDISAMBIGUATOR # endif #endif /* inline attribute */ #ifndef SWIGINLINE # if defined(__cplusplus) || (defined(__GNUC__) && !defined(__STRICT_ANSI__)) # define SWIGINLINE inline # else # define SWIGINLINE # endif #endif /* attribute recognised by some compilers to avoid 'unused' warnings */ #ifndef SWIGUNUSED # if defined(__GNUC__) # if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4)) # define SWIGUNUSED __attribute__ ((__unused__)) # else # define SWIGUNUSED # endif # elif defined(__ICC) # define SWIGUNUSED __attribute__ ((__unused__)) # else # define SWIGUNUSED # endif #endif #ifndef SWIG_MSC_UNSUPPRESS_4505 # if defined(_MSC_VER) # pragma warning(disable : 4505) /* unreferenced local function has been removed */ # endif #endif #ifndef SWIGUNUSEDPARM # ifdef __cplusplus # define SWIGUNUSEDPARM(p) # else # define SWIGUNUSEDPARM(p) p SWIGUNUSED # endif #endif /* internal SWIG method */ #ifndef SWIGINTERN # define SWIGINTERN static SWIGUNUSED #endif /* internal inline SWIG method */ #ifndef SWIGINTERNINLINE # define SWIGINTERNINLINE SWIGINTERN SWIGINLINE #endif /* exporting methods */ #if (__GNUC__ >= 4) || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4) # ifndef GCC_HASCLASSVISIBILITY # define GCC_HASCLASSVISIBILITY # endif #endif #ifndef SWIGEXPORT # if defined(_WIN32) || defined(__WIN32__) || defined(__CYGWIN__) # if defined(STATIC_LINKED) # define SWIGEXPORT # else # define SWIGEXPORT __declspec(dllexport) # endif # else # if defined(__GNUC__) && defined(GCC_HASCLASSVISIBILITY) # define SWIGEXPORT __attribute__ ((visibility("default"))) # else # define SWIGEXPORT # endif # endif #endif /* calling conventions for Windows */ #ifndef SWIGSTDCALL # if defined(_WIN32) || defined(__WIN32__) || defined(__CYGWIN__) # define SWIGSTDCALL __stdcall # else # define SWIGSTDCALL # endif #endif /* Deal with Microsoft's attempt at deprecating C standard runtime functions */ #if !defined(SWIG_NO_CRT_SECURE_NO_DEPRECATE) && defined(_MSC_VER) && !defined(_CRT_SECURE_NO_DEPRECATE) # define _CRT_SECURE_NO_DEPRECATE #endif /* Deal with Microsoft's attempt 
at deprecating methods in the standard C++ library */ #if !defined(SWIG_NO_SCL_SECURE_NO_DEPRECATE) && defined(_MSC_VER) && !defined(_SCL_SECURE_NO_DEPRECATE) # define _SCL_SECURE_NO_DEPRECATE #endif /* Python.h has to appear first */ #include /* ----------------------------------------------------------------------------- * swigrun.swg * * This file contains generic CAPI SWIG runtime support for pointer * type checking. * ----------------------------------------------------------------------------- */ /* This should only be incremented when either the layout of swig_type_info changes, or for whatever reason, the runtime changes incompatibly */ #define SWIG_RUNTIME_VERSION "4" /* define SWIG_TYPE_TABLE_NAME as "SWIG_TYPE_TABLE" */ #ifdef SWIG_TYPE_TABLE # define SWIG_QUOTE_STRING(x) #x # define SWIG_EXPAND_AND_QUOTE_STRING(x) SWIG_QUOTE_STRING(x) # define SWIG_TYPE_TABLE_NAME SWIG_EXPAND_AND_QUOTE_STRING(SWIG_TYPE_TABLE) #else # define SWIG_TYPE_TABLE_NAME #endif /* You can use the SWIGRUNTIME and SWIGRUNTIMEINLINE macros for creating a static or dynamic library from the swig runtime code. In 99.9% of the cases, swig just needs to declare them as 'static'. But only do this if is strictly necessary, ie, if you have problems with your compiler or so. */ #ifndef SWIGRUNTIME # define SWIGRUNTIME SWIGINTERN #endif #ifndef SWIGRUNTIMEINLINE # define SWIGRUNTIMEINLINE SWIGRUNTIME SWIGINLINE #endif /* Generic buffer size */ #ifndef SWIG_BUFFER_SIZE # define SWIG_BUFFER_SIZE 1024 #endif /* Flags for pointer conversions */ #define SWIG_POINTER_DISOWN 0x1 #define SWIG_CAST_NEW_MEMORY 0x2 /* Flags for new pointer objects */ #define SWIG_POINTER_OWN 0x1 /* Flags/methods for returning states. The swig conversion methods, as ConvertPtr, return and integer that tells if the conversion was successful or not. And if not, an error code can be returned (see swigerrors.swg for the codes). Use the following macros/flags to set or process the returning states. In old swig versions, you usually write code as: if (SWIG_ConvertPtr(obj,vptr,ty.flags) != -1) { // success code } else { //fail code } Now you can be more explicit as: int res = SWIG_ConvertPtr(obj,vptr,ty.flags); if (SWIG_IsOK(res)) { // success code } else { // fail code } that seems to be the same, but now you can also do Type *ptr; int res = SWIG_ConvertPtr(obj,(void **)(&ptr),ty.flags); if (SWIG_IsOK(res)) { // success code if (SWIG_IsNewObj(res) { ... delete *ptr; } else { ... } } else { // fail code } I.e., now SWIG_ConvertPtr can return new objects and you can identify the case and take care of the deallocation. Of course that requires also to SWIG_ConvertPtr to return new result values, as int SWIG_ConvertPtr(obj, ptr,...) { if () { if () { *ptr = ; return SWIG_NEWOBJ; } else { *ptr = ; return SWIG_OLDOBJ; } } else { return SWIG_BADOBJ; } } Of course, returning the plain '0(success)/-1(fail)' still works, but you can be more explicit by returning SWIG_BADOBJ, SWIG_ERROR or any of the swig errors code. Finally, if the SWIG_CASTRANK_MODE is enabled, the result code allows to return the 'cast rank', for example, if you have this int food(double) int fooi(int); and you call food(1) // cast rank '1' (1 -> 1.0) fooi(1) // cast rank '0' just use the SWIG_AddCast()/SWIG_CheckState() */ #define SWIG_OK (0) #define SWIG_ERROR (-1) #define SWIG_IsOK(r) (r >= 0) #define SWIG_ArgError(r) ((r != SWIG_ERROR) ? 
r : SWIG_TypeError) /* The CastRankLimit says how many bits are used for the cast rank */ #define SWIG_CASTRANKLIMIT (1 << 8) /* The NewMask denotes the object was created (using new/malloc) */ #define SWIG_NEWOBJMASK (SWIG_CASTRANKLIMIT << 1) /* The TmpMask is for in/out typemaps that use temporal objects */ #define SWIG_TMPOBJMASK (SWIG_NEWOBJMASK << 1) /* Simple returning values */ #define SWIG_BADOBJ (SWIG_ERROR) #define SWIG_OLDOBJ (SWIG_OK) #define SWIG_NEWOBJ (SWIG_OK | SWIG_NEWOBJMASK) #define SWIG_TMPOBJ (SWIG_OK | SWIG_TMPOBJMASK) /* Check, add and del mask methods */ #define SWIG_AddNewMask(r) (SWIG_IsOK(r) ? (r | SWIG_NEWOBJMASK) : r) #define SWIG_DelNewMask(r) (SWIG_IsOK(r) ? (r & ~SWIG_NEWOBJMASK) : r) #define SWIG_IsNewObj(r) (SWIG_IsOK(r) && (r & SWIG_NEWOBJMASK)) #define SWIG_AddTmpMask(r) (SWIG_IsOK(r) ? (r | SWIG_TMPOBJMASK) : r) #define SWIG_DelTmpMask(r) (SWIG_IsOK(r) ? (r & ~SWIG_TMPOBJMASK) : r) #define SWIG_IsTmpObj(r) (SWIG_IsOK(r) && (r & SWIG_TMPOBJMASK)) /* Cast-Rank Mode */ #if defined(SWIG_CASTRANK_MODE) # ifndef SWIG_TypeRank # define SWIG_TypeRank unsigned long # endif # ifndef SWIG_MAXCASTRANK /* Default cast allowed */ # define SWIG_MAXCASTRANK (2) # endif # define SWIG_CASTRANKMASK ((SWIG_CASTRANKLIMIT) -1) # define SWIG_CastRank(r) (r & SWIG_CASTRANKMASK) SWIGINTERNINLINE int SWIG_AddCast(int r) { return SWIG_IsOK(r) ? ((SWIG_CastRank(r) < SWIG_MAXCASTRANK) ? (r + 1) : SWIG_ERROR) : r; } SWIGINTERNINLINE int SWIG_CheckState(int r) { return SWIG_IsOK(r) ? SWIG_CastRank(r) + 1 : 0; } #else /* no cast-rank mode */ # define SWIG_AddCast # define SWIG_CheckState(r) (SWIG_IsOK(r) ? 1 : 0) #endif #include #ifdef __cplusplus extern "C" { #endif typedef void *(*swig_converter_func)(void *, int *); typedef struct swig_type_info *(*swig_dycast_func)(void **); /* Structure to store information on one type */ typedef struct swig_type_info { const char *name; /* mangled name of this type */ const char *str; /* human readable name of this type */ swig_dycast_func dcast; /* dynamic cast function down a hierarchy */ struct swig_cast_info *cast; /* linked list of types that can cast into this type */ void *clientdata; /* language specific type data */ int owndata; /* flag if the structure owns the clientdata */ } swig_type_info; /* Structure to store a type and conversion function used for casting */ typedef struct swig_cast_info { swig_type_info *type; /* pointer to type that is equivalent to this type */ swig_converter_func converter; /* function to cast the void pointers */ struct swig_cast_info *next; /* pointer to next cast in linked list */ struct swig_cast_info *prev; /* pointer to the previous cast */ } swig_cast_info; /* Structure used to store module information * Each module generates one structure like this, and the runtime collects * all of these structures and stores them in a circularly linked list.*/ typedef struct swig_module_info { swig_type_info **types; /* Array of pointers to swig_type_info structures that are in this module */ size_t size; /* Number of types in this module */ struct swig_module_info *next; /* Pointer to next element in circularly linked list */ swig_type_info **type_initial; /* Array of initially generated type structures */ swig_cast_info **cast_initial; /* Array of initially generated casting structures */ void *clientdata; /* Language specific module data */ } swig_module_info; /* Compare two type names skipping the space characters, therefore "char*" == "char *" and "Class" == "Class", etc. 
Return 0 when the two name types are equivalent, as in strncmp, but skipping ' '. */ SWIGRUNTIME int SWIG_TypeNameComp(const char *f1, const char *l1, const char *f2, const char *l2) { for (;(f1 != l1) && (f2 != l2); ++f1, ++f2) { while ((*f1 == ' ') && (f1 != l1)) ++f1; while ((*f2 == ' ') && (f2 != l2)) ++f2; if (*f1 != *f2) return (*f1 > *f2) ? 1 : -1; } return (int)((l1 - f1) - (l2 - f2)); } /* Check type equivalence in a name list like ||... Return 0 if not equal, 1 if equal */ SWIGRUNTIME int SWIG_TypeEquiv(const char *nb, const char *tb) { int equiv = 0; const char* te = tb + strlen(tb); const char* ne = nb; while (!equiv && *ne) { for (nb = ne; *ne; ++ne) { if (*ne == '|') break; } equiv = (SWIG_TypeNameComp(nb, ne, tb, te) == 0) ? 1 : 0; if (*ne) ++ne; } return equiv; } /* Check type equivalence in a name list like ||... Return 0 if equal, -1 if nb < tb, 1 if nb > tb */ SWIGRUNTIME int SWIG_TypeCompare(const char *nb, const char *tb) { int equiv = 0; const char* te = tb + strlen(tb); const char* ne = nb; while (!equiv && *ne) { for (nb = ne; *ne; ++ne) { if (*ne == '|') break; } equiv = (SWIG_TypeNameComp(nb, ne, tb, te) == 0) ? 1 : 0; if (*ne) ++ne; } return equiv; } /* think of this as a c++ template<> or a scheme macro */ #define SWIG_TypeCheck_Template(comparison, ty) \ if (ty) { \ swig_cast_info *iter = ty->cast; \ while (iter) { \ if (comparison) { \ if (iter == ty->cast) return iter; \ /* Move iter to the top of the linked list */ \ iter->prev->next = iter->next; \ if (iter->next) \ iter->next->prev = iter->prev; \ iter->next = ty->cast; \ iter->prev = 0; \ if (ty->cast) ty->cast->prev = iter; \ ty->cast = iter; \ return iter; \ } \ iter = iter->next; \ } \ } \ return 0 /* Check the typename */ SWIGRUNTIME swig_cast_info * SWIG_TypeCheck(const char *c, swig_type_info *ty) { SWIG_TypeCheck_Template(strcmp(iter->type->name, c) == 0, ty); } /* Same as previous function, except strcmp is replaced with a pointer comparison */ SWIGRUNTIME swig_cast_info * SWIG_TypeCheckStruct(swig_type_info *from, swig_type_info *into) { SWIG_TypeCheck_Template(iter->type == from, into); } /* Cast a pointer up an inheritance hierarchy */ SWIGRUNTIMEINLINE void * SWIG_TypeCast(swig_cast_info *ty, void *ptr, int *newmemory) { return ((!ty) || (!ty->converter)) ? ptr : (*ty->converter)(ptr, newmemory); } /* Dynamic pointer casting. Down an inheritance hierarchy */ SWIGRUNTIME swig_type_info * SWIG_TypeDynamicCast(swig_type_info *ty, void **ptr) { swig_type_info *lastty = ty; if (!ty || !ty->dcast) return ty; while (ty && (ty->dcast)) { ty = (*ty->dcast)(ptr); if (ty) lastty = ty; } return lastty; } /* Return the name associated with this type */ SWIGRUNTIMEINLINE const char * SWIG_TypeName(const swig_type_info *ty) { return ty->name; } /* Return the pretty name associated with this type, that is an unmangled type name in a form presentable to the user. */ SWIGRUNTIME const char * SWIG_TypePrettyName(const swig_type_info *type) { /* The "str" field contains the equivalent pretty names of the type, separated by vertical-bar characters. We choose to print the last name, as it is often (?) the most specific. 
*/ if (!type) return NULL; if (type->str != NULL) { const char *last_name = type->str; const char *s; for (s = type->str; *s; s++) if (*s == '|') last_name = s+1; return last_name; } else return type->name; } /* Set the clientdata field for a type */ SWIGRUNTIME void SWIG_TypeClientData(swig_type_info *ti, void *clientdata) { swig_cast_info *cast = ti->cast; /* if (ti->clientdata == clientdata) return; */ ti->clientdata = clientdata; while (cast) { if (!cast->converter) { swig_type_info *tc = cast->type; if (!tc->clientdata) { SWIG_TypeClientData(tc, clientdata); } } cast = cast->next; } } SWIGRUNTIME void SWIG_TypeNewClientData(swig_type_info *ti, void *clientdata) { SWIG_TypeClientData(ti, clientdata); ti->owndata = 1; } /* Search for a swig_type_info structure only by mangled name Search is a O(log #types) We start searching at module start, and finish searching when start == end. Note: if start == end at the beginning of the function, we go all the way around the circular list. */ SWIGRUNTIME swig_type_info * SWIG_MangledTypeQueryModule(swig_module_info *start, swig_module_info *end, const char *name) { swig_module_info *iter = start; do { if (iter->size) { register size_t l = 0; register size_t r = iter->size - 1; do { /* since l+r >= 0, we can (>> 1) instead (/ 2) */ register size_t i = (l + r) >> 1; const char *iname = iter->types[i]->name; if (iname) { register int compare = strcmp(name, iname); if (compare == 0) { return iter->types[i]; } else if (compare < 0) { if (i) { r = i - 1; } else { break; } } else if (compare > 0) { l = i + 1; } } else { break; /* should never happen */ } } while (l <= r); } iter = iter->next; } while (iter != end); return 0; } /* Search for a swig_type_info structure for either a mangled name or a human readable name. It first searches the mangled names of the types, which is a O(log #types) If a type is not found it then searches the human readable names, which is O(#types). We start searching at module start, and finish searching when start == end. Note: if start == end at the beginning of the function, we go all the way around the circular list. 
*/ SWIGRUNTIME swig_type_info * SWIG_TypeQueryModule(swig_module_info *start, swig_module_info *end, const char *name) { /* STEP 1: Search the name field using binary search */ swig_type_info *ret = SWIG_MangledTypeQueryModule(start, end, name); if (ret) { return ret; } else { /* STEP 2: If the type hasn't been found, do a complete search of the str field (the human readable name) */ swig_module_info *iter = start; do { register size_t i = 0; for (; i < iter->size; ++i) { if (iter->types[i]->str && (SWIG_TypeEquiv(iter->types[i]->str, name))) return iter->types[i]; } iter = iter->next; } while (iter != end); } /* neither found a match */ return 0; } /* Pack binary data into a string */ SWIGRUNTIME char * SWIG_PackData(char *c, void *ptr, size_t sz) { static const char hex[17] = "0123456789abcdef"; register const unsigned char *u = (unsigned char *) ptr; register const unsigned char *eu = u + sz; for (; u != eu; ++u) { register unsigned char uu = *u; *(c++) = hex[(uu & 0xf0) >> 4]; *(c++) = hex[uu & 0xf]; } return c; } /* Unpack binary data from a string */ SWIGRUNTIME const char * SWIG_UnpackData(const char *c, void *ptr, size_t sz) { register unsigned char *u = (unsigned char *) ptr; register const unsigned char *eu = u + sz; for (; u != eu; ++u) { register char d = *(c++); register unsigned char uu; if ((d >= '0') && (d <= '9')) uu = ((d - '0') << 4); else if ((d >= 'a') && (d <= 'f')) uu = ((d - ('a'-10)) << 4); else return (char *) 0; d = *(c++); if ((d >= '0') && (d <= '9')) uu |= (d - '0'); else if ((d >= 'a') && (d <= 'f')) uu |= (d - ('a'-10)); else return (char *) 0; *u = uu; } return c; } /* Pack 'void *' into a string buffer. */ SWIGRUNTIME char * SWIG_PackVoidPtr(char *buff, void *ptr, const char *name, size_t bsz) { char *r = buff; if ((2*sizeof(void *) + 2) > bsz) return 0; *(r++) = '_'; r = SWIG_PackData(r,&ptr,sizeof(void *)); if (strlen(name) + 1 > (bsz - (r - buff))) return 0; strcpy(r,name); return buff; } SWIGRUNTIME const char * SWIG_UnpackVoidPtr(const char *c, void **ptr, const char *name) { if (*c != '_') { if (strcmp(c,"NULL") == 0) { *ptr = (void *) 0; return name; } else { return 0; } } return SWIG_UnpackData(++c,ptr,sizeof(void *)); } SWIGRUNTIME char * SWIG_PackDataName(char *buff, void *ptr, size_t sz, const char *name, size_t bsz) { char *r = buff; size_t lname = (name ? 
strlen(name) : 0); if ((2*sz + 2 + lname) > bsz) return 0; *(r++) = '_'; r = SWIG_PackData(r,ptr,sz); if (lname) { strncpy(r,name,lname+1); } else { *r = 0; } return buff; } SWIGRUNTIME const char * SWIG_UnpackDataName(const char *c, void *ptr, size_t sz, const char *name) { if (*c != '_') { if (strcmp(c,"NULL") == 0) { memset(ptr,0,sz); return name; } else { return 0; } } return SWIG_UnpackData(++c,ptr,sz); } #ifdef __cplusplus } #endif /* Errors in SWIG */ #define SWIG_UnknownError -1 #define SWIG_IOError -2 #define SWIG_RuntimeError -3 #define SWIG_IndexError -4 #define SWIG_TypeError -5 #define SWIG_DivisionByZero -6 #define SWIG_OverflowError -7 #define SWIG_SyntaxError -8 #define SWIG_ValueError -9 #define SWIG_SystemError -10 #define SWIG_AttributeError -11 #define SWIG_MemoryError -12 #define SWIG_NullReferenceError -13 /* Add PyOS_snprintf for old Pythons */ #if PY_VERSION_HEX < 0x02020000 # if defined(_MSC_VER) || defined(__BORLANDC__) || defined(_WATCOM) # define PyOS_snprintf _snprintf # else # define PyOS_snprintf snprintf # endif #endif /* A crude PyString_FromFormat implementation for old Pythons */ #if PY_VERSION_HEX < 0x02020000 #ifndef SWIG_PYBUFFER_SIZE # define SWIG_PYBUFFER_SIZE 1024 #endif static PyObject * PyString_FromFormat(const char *fmt, ...) { va_list ap; char buf[SWIG_PYBUFFER_SIZE * 2]; int res; va_start(ap, fmt); res = vsnprintf(buf, sizeof(buf), fmt, ap); va_end(ap); return (res < 0 || res >= (int)sizeof(buf)) ? 0 : PyString_FromString(buf); } #endif /* Add PyObject_Del for old Pythons */ #if PY_VERSION_HEX < 0x01060000 # define PyObject_Del(op) PyMem_DEL((op)) #endif #ifndef PyObject_DEL # define PyObject_DEL PyObject_Del #endif /* A crude PyExc_StopIteration exception for old Pythons */ #if PY_VERSION_HEX < 0x02020000 # ifndef PyExc_StopIteration # define PyExc_StopIteration PyExc_RuntimeError # endif # ifndef PyObject_GenericGetAttr # define PyObject_GenericGetAttr 0 # endif #endif /* Py_NotImplemented is defined in 2.1 and up. */ #if PY_VERSION_HEX < 0x02010000 # ifndef Py_NotImplemented # define Py_NotImplemented PyExc_RuntimeError # endif #endif /* A crude PyString_AsStringAndSize implementation for old Pythons */ #if PY_VERSION_HEX < 0x02010000 # ifndef PyString_AsStringAndSize # define PyString_AsStringAndSize(obj, s, len) {*s = PyString_AsString(obj); *len = *s ? strlen(*s) : 0;} # endif #endif /* PySequence_Size for old Pythons */ #if PY_VERSION_HEX < 0x02000000 # ifndef PySequence_Size # define PySequence_Size PySequence_Length # endif #endif /* PyBool_FromLong for old Pythons */ #if PY_VERSION_HEX < 0x02030000 static PyObject *PyBool_FromLong(long ok) { PyObject *result = ok ? 
Py_True : Py_False; Py_INCREF(result); return result; } #endif /* Py_ssize_t for old Pythons */ /* This code is as recommended by: */ /* http://www.python.org/dev/peps/pep-0353/#conversion-guidelines */ #if PY_VERSION_HEX < 0x02050000 && !defined(PY_SSIZE_T_MIN) typedef int Py_ssize_t; # define PY_SSIZE_T_MAX INT_MAX # define PY_SSIZE_T_MIN INT_MIN #endif /* ----------------------------------------------------------------------------- * error manipulation * ----------------------------------------------------------------------------- */ SWIGRUNTIME PyObject* SWIG_Python_ErrorType(int code) { PyObject* type = 0; switch(code) { case SWIG_MemoryError: type = PyExc_MemoryError; break; case SWIG_IOError: type = PyExc_IOError; break; case SWIG_RuntimeError: type = PyExc_RuntimeError; break; case SWIG_IndexError: type = PyExc_IndexError; break; case SWIG_TypeError: type = PyExc_TypeError; break; case SWIG_DivisionByZero: type = PyExc_ZeroDivisionError; break; case SWIG_OverflowError: type = PyExc_OverflowError; break; case SWIG_SyntaxError: type = PyExc_SyntaxError; break; case SWIG_ValueError: type = PyExc_ValueError; break; case SWIG_SystemError: type = PyExc_SystemError; break; case SWIG_AttributeError: type = PyExc_AttributeError; break; default: type = PyExc_RuntimeError; } return type; } SWIGRUNTIME void SWIG_Python_AddErrorMsg(const char* mesg) { PyObject *type = 0; PyObject *value = 0; PyObject *traceback = 0; if (PyErr_Occurred()) PyErr_Fetch(&type, &value, &traceback); if (value) { PyObject *old_str = PyObject_Str(value); PyErr_Clear(); Py_XINCREF(type); PyErr_Format(type, "%s %s", PyString_AsString(old_str), mesg); Py_DECREF(old_str); Py_DECREF(value); } else { PyErr_SetString(PyExc_RuntimeError, mesg); } } #if defined(SWIG_PYTHON_NO_THREADS) # if defined(SWIG_PYTHON_THREADS) # undef SWIG_PYTHON_THREADS # endif #endif #if defined(SWIG_PYTHON_THREADS) /* Threading support is enabled */ # if !defined(SWIG_PYTHON_USE_GIL) && !defined(SWIG_PYTHON_NO_USE_GIL) # if (PY_VERSION_HEX >= 0x02030000) /* For 2.3 or later, use the PyGILState calls */ # define SWIG_PYTHON_USE_GIL # endif # endif # if defined(SWIG_PYTHON_USE_GIL) /* Use PyGILState threads calls */ # ifndef SWIG_PYTHON_INITIALIZE_THREADS # define SWIG_PYTHON_INITIALIZE_THREADS PyEval_InitThreads() # endif # ifdef __cplusplus /* C++ code */ class SWIG_Python_Thread_Block { bool status; PyGILState_STATE state; public: void end() { if (status) { PyGILState_Release(state); status = false;} } SWIG_Python_Thread_Block() : status(true), state(PyGILState_Ensure()) {} ~SWIG_Python_Thread_Block() { end(); } }; class SWIG_Python_Thread_Allow { bool status; PyThreadState *save; public: void end() { if (status) { PyEval_RestoreThread(save); status = false; }} SWIG_Python_Thread_Allow() : status(true), save(PyEval_SaveThread()) {} ~SWIG_Python_Thread_Allow() { end(); } }; # define SWIG_PYTHON_THREAD_BEGIN_BLOCK SWIG_Python_Thread_Block _swig_thread_block # define SWIG_PYTHON_THREAD_END_BLOCK _swig_thread_block.end() # define SWIG_PYTHON_THREAD_BEGIN_ALLOW SWIG_Python_Thread_Allow _swig_thread_allow # define SWIG_PYTHON_THREAD_END_ALLOW _swig_thread_allow.end() # else /* C code */ # define SWIG_PYTHON_THREAD_BEGIN_BLOCK PyGILState_STATE _swig_thread_block = PyGILState_Ensure() # define SWIG_PYTHON_THREAD_END_BLOCK PyGILState_Release(_swig_thread_block) # define SWIG_PYTHON_THREAD_BEGIN_ALLOW PyThreadState *_swig_thread_allow = PyEval_SaveThread() # define SWIG_PYTHON_THREAD_END_ALLOW PyEval_RestoreThread(_swig_thread_allow) # endif # else /* Old 
thread way, not implemented, user must provide it */ # if !defined(SWIG_PYTHON_INITIALIZE_THREADS) # define SWIG_PYTHON_INITIALIZE_THREADS # endif # if !defined(SWIG_PYTHON_THREAD_BEGIN_BLOCK) # define SWIG_PYTHON_THREAD_BEGIN_BLOCK # endif # if !defined(SWIG_PYTHON_THREAD_END_BLOCK) # define SWIG_PYTHON_THREAD_END_BLOCK # endif # if !defined(SWIG_PYTHON_THREAD_BEGIN_ALLOW) # define SWIG_PYTHON_THREAD_BEGIN_ALLOW # endif # if !defined(SWIG_PYTHON_THREAD_END_ALLOW) # define SWIG_PYTHON_THREAD_END_ALLOW # endif # endif #else /* No thread support */ # define SWIG_PYTHON_INITIALIZE_THREADS # define SWIG_PYTHON_THREAD_BEGIN_BLOCK # define SWIG_PYTHON_THREAD_END_BLOCK # define SWIG_PYTHON_THREAD_BEGIN_ALLOW # define SWIG_PYTHON_THREAD_END_ALLOW #endif /* ----------------------------------------------------------------------------- * Python API portion that goes into the runtime * ----------------------------------------------------------------------------- */ #ifdef __cplusplus extern "C" { #if 0 } /* cc-mode */ #endif #endif /* ----------------------------------------------------------------------------- * Constant declarations * ----------------------------------------------------------------------------- */ /* Constant Types */ #define SWIG_PY_POINTER 4 #define SWIG_PY_BINARY 5 /* Constant information structure */ typedef struct swig_const_info { int type; char *name; long lvalue; double dvalue; void *pvalue; swig_type_info **ptype; } swig_const_info; #ifdef __cplusplus #if 0 { /* cc-mode */ #endif } #endif /* ----------------------------------------------------------------------------- * See the LICENSE file for information on copyright, usage and redistribution * of SWIG, and the README file for authors - http://www.swig.org/release.html. * * pyrun.swg * * This file contains the runtime support for Python modules * and includes code for managing global variables and pointer * type checking. 
* * ----------------------------------------------------------------------------- */ /* Common SWIG API */ /* for raw pointers */ #define SWIG_Python_ConvertPtr(obj, pptr, type, flags) SWIG_Python_ConvertPtrAndOwn(obj, pptr, type, flags, 0) #define SWIG_ConvertPtr(obj, pptr, type, flags) SWIG_Python_ConvertPtr(obj, pptr, type, flags) #define SWIG_ConvertPtrAndOwn(obj,pptr,type,flags,own) SWIG_Python_ConvertPtrAndOwn(obj, pptr, type, flags, own) #define SWIG_NewPointerObj(ptr, type, flags) SWIG_Python_NewPointerObj(ptr, type, flags) #define SWIG_CheckImplicit(ty) SWIG_Python_CheckImplicit(ty) #define SWIG_AcquirePtr(ptr, src) SWIG_Python_AcquirePtr(ptr, src) #define swig_owntype int /* for raw packed data */ #define SWIG_ConvertPacked(obj, ptr, sz, ty) SWIG_Python_ConvertPacked(obj, ptr, sz, ty) #define SWIG_NewPackedObj(ptr, sz, type) SWIG_Python_NewPackedObj(ptr, sz, type) /* for class or struct pointers */ #define SWIG_ConvertInstance(obj, pptr, type, flags) SWIG_ConvertPtr(obj, pptr, type, flags) #define SWIG_NewInstanceObj(ptr, type, flags) SWIG_NewPointerObj(ptr, type, flags) /* for C or C++ function pointers */ #define SWIG_ConvertFunctionPtr(obj, pptr, type) SWIG_Python_ConvertFunctionPtr(obj, pptr, type) #define SWIG_NewFunctionPtrObj(ptr, type) SWIG_Python_NewPointerObj(ptr, type, 0) /* for C++ member pointers, ie, member methods */ #define SWIG_ConvertMember(obj, ptr, sz, ty) SWIG_Python_ConvertPacked(obj, ptr, sz, ty) #define SWIG_NewMemberObj(ptr, sz, type) SWIG_Python_NewPackedObj(ptr, sz, type) /* Runtime API */ #define SWIG_GetModule(clientdata) SWIG_Python_GetModule() #define SWIG_SetModule(clientdata, pointer) SWIG_Python_SetModule(pointer) #define SWIG_NewClientData(obj) PySwigClientData_New(obj) #define SWIG_SetErrorObj SWIG_Python_SetErrorObj #define SWIG_SetErrorMsg SWIG_Python_SetErrorMsg #define SWIG_ErrorType(code) SWIG_Python_ErrorType(code) #define SWIG_Error(code, msg) SWIG_Python_SetErrorMsg(SWIG_ErrorType(code), msg) #define SWIG_fail goto fail /* Runtime API implementation */ /* Error manipulation */ SWIGINTERN void SWIG_Python_SetErrorObj(PyObject *errtype, PyObject *obj) { SWIG_PYTHON_THREAD_BEGIN_BLOCK; PyErr_SetObject(errtype, obj); Py_DECREF(obj); SWIG_PYTHON_THREAD_END_BLOCK; } SWIGINTERN void SWIG_Python_SetErrorMsg(PyObject *errtype, const char *msg) { SWIG_PYTHON_THREAD_BEGIN_BLOCK; PyErr_SetString(errtype, (char *) msg); SWIG_PYTHON_THREAD_END_BLOCK; } #define SWIG_Python_Raise(obj, type, desc) SWIG_Python_SetErrorObj(SWIG_Python_ExceptionType(desc), obj) /* Set a constant value */ SWIGINTERN void SWIG_Python_SetConstant(PyObject *d, const char *name, PyObject *obj) { PyDict_SetItemString(d, (char*) name, obj); Py_DECREF(obj); } /* Append a value to the result obj */ SWIGINTERN PyObject* SWIG_Python_AppendOutput(PyObject* result, PyObject* obj) { #if !defined(SWIG_PYTHON_OUTPUT_TUPLE) if (!result) { result = obj; } else if (result == Py_None) { Py_DECREF(result); result = obj; } else { if (!PyList_Check(result)) { PyObject *o2 = result; result = PyList_New(1); PyList_SetItem(result, 0, o2); } PyList_Append(result,obj); Py_DECREF(obj); } return result; #else PyObject* o2; PyObject* o3; if (!result) { result = obj; } else if (result == Py_None) { Py_DECREF(result); result = obj; } else { if (!PyTuple_Check(result)) { o2 = result; result = PyTuple_New(1); PyTuple_SET_ITEM(result, 0, o2); } o3 = PyTuple_New(1); PyTuple_SET_ITEM(o3, 0, obj); o2 = result; result = PySequence_Concat(o2, o3); Py_DECREF(o2); Py_DECREF(o3); } return result; #endif } /* Unpack 
the argument tuple */ SWIGINTERN int SWIG_Python_UnpackTuple(PyObject *args, const char *name, Py_ssize_t min, Py_ssize_t max, PyObject **objs) { if (!args) { if (!min && !max) { return 1; } else { PyErr_Format(PyExc_TypeError, "%s expected %s%d arguments, got none", name, (min == max ? "" : "at least "), (int)min); return 0; } } if (!PyTuple_Check(args)) { PyErr_SetString(PyExc_SystemError, "UnpackTuple() argument list is not a tuple"); return 0; } else { register Py_ssize_t l = PyTuple_GET_SIZE(args); if (l < min) { PyErr_Format(PyExc_TypeError, "%s expected %s%d arguments, got %d", name, (min == max ? "" : "at least "), (int)min, (int)l); return 0; } else if (l > max) { PyErr_Format(PyExc_TypeError, "%s expected %s%d arguments, got %d", name, (min == max ? "" : "at most "), (int)max, (int)l); return 0; } else { register int i; for (i = 0; i < l; ++i) { objs[i] = PyTuple_GET_ITEM(args, i); } for (; l < max; ++l) { objs[l] = 0; } return i + 1; } } } /* A functor is a function object with one single object argument */ #if PY_VERSION_HEX >= 0x02020000 #define SWIG_Python_CallFunctor(functor, obj) PyObject_CallFunctionObjArgs(functor, obj, NULL); #else #define SWIG_Python_CallFunctor(functor, obj) PyObject_CallFunction(functor, "O", obj); #endif /* Helper for static pointer initialization for both C and C++ code, for example static PyObject *SWIG_STATIC_POINTER(MyVar) = NewSomething(...); */ #ifdef __cplusplus #define SWIG_STATIC_POINTER(var) var #else #define SWIG_STATIC_POINTER(var) var = 0; if (!var) var #endif /* ----------------------------------------------------------------------------- * Pointer declarations * ----------------------------------------------------------------------------- */ /* Flags for new pointer objects */ #define SWIG_POINTER_NOSHADOW (SWIG_POINTER_OWN << 1) #define SWIG_POINTER_NEW (SWIG_POINTER_NOSHADOW | SWIG_POINTER_OWN) #define SWIG_POINTER_IMPLICIT_CONV (SWIG_POINTER_DISOWN << 1) #ifdef __cplusplus extern "C" { #if 0 } /* cc-mode */ #endif #endif /* How to access Py_None */ #if defined(_WIN32) || defined(__WIN32__) || defined(__CYGWIN__) # ifndef SWIG_PYTHON_NO_BUILD_NONE # ifndef SWIG_PYTHON_BUILD_NONE # define SWIG_PYTHON_BUILD_NONE # endif # endif #endif #ifdef SWIG_PYTHON_BUILD_NONE # ifdef Py_None # undef Py_None # define Py_None SWIG_Py_None() # endif SWIGRUNTIMEINLINE PyObject * _SWIG_Py_None(void) { PyObject *none = Py_BuildValue((char*)""); Py_DECREF(none); return none; } SWIGRUNTIME PyObject * SWIG_Py_None(void) { static PyObject *SWIG_STATIC_POINTER(none) = _SWIG_Py_None(); return none; } #endif /* The python void return value */ SWIGRUNTIMEINLINE PyObject * SWIG_Py_Void(void) { PyObject *none = Py_None; Py_INCREF(none); return none; } /* PySwigClientData */ typedef struct { PyObject *klass; PyObject *newraw; PyObject *newargs; PyObject *destroy; int delargs; int implicitconv; } PySwigClientData; SWIGRUNTIMEINLINE int SWIG_Python_CheckImplicit(swig_type_info *ty) { PySwigClientData *data = (PySwigClientData *)ty->clientdata; return data ? data->implicitconv : 0; } SWIGRUNTIMEINLINE PyObject * SWIG_Python_ExceptionType(swig_type_info *desc) { PySwigClientData *data = desc ? (PySwigClientData *) desc->clientdata : 0; PyObject *klass = data ? data->klass : 0; return (klass ? 
klass : PyExc_RuntimeError); } SWIGRUNTIME PySwigClientData * PySwigClientData_New(PyObject* obj) { if (!obj) { return 0; } else { PySwigClientData *data = (PySwigClientData *)malloc(sizeof(PySwigClientData)); /* the klass element */ data->klass = obj; Py_INCREF(data->klass); /* the newraw method and newargs arguments used to create a new raw instance */ if (PyClass_Check(obj)) { data->newraw = 0; data->newargs = obj; Py_INCREF(obj); } else { #if (PY_VERSION_HEX < 0x02020000) data->newraw = 0; #else data->newraw = PyObject_GetAttrString(data->klass, (char *)"__new__"); #endif if (data->newraw) { Py_INCREF(data->newraw); data->newargs = PyTuple_New(1); PyTuple_SetItem(data->newargs, 0, obj); } else { data->newargs = obj; } Py_INCREF(data->newargs); } /* the destroy method, aka as the C++ delete method */ data->destroy = PyObject_GetAttrString(data->klass, (char *)"__swig_destroy__"); if (PyErr_Occurred()) { PyErr_Clear(); data->destroy = 0; } if (data->destroy) { int flags; Py_INCREF(data->destroy); flags = PyCFunction_GET_FLAGS(data->destroy); #ifdef METH_O data->delargs = !(flags & (METH_O)); #else data->delargs = 0; #endif } else { data->delargs = 0; } data->implicitconv = 0; return data; } } SWIGRUNTIME void PySwigClientData_Del(PySwigClientData* data) { Py_XDECREF(data->newraw); Py_XDECREF(data->newargs); Py_XDECREF(data->destroy); } /* =============== PySwigObject =====================*/ typedef struct { PyObject_HEAD void *ptr; swig_type_info *ty; int own; PyObject *next; } PySwigObject; SWIGRUNTIME PyObject * PySwigObject_long(PySwigObject *v) { return PyLong_FromVoidPtr(v->ptr); } SWIGRUNTIME PyObject * PySwigObject_format(const char* fmt, PySwigObject *v) { PyObject *res = NULL; PyObject *args = PyTuple_New(1); if (args) { if (PyTuple_SetItem(args, 0, PySwigObject_long(v)) == 0) { PyObject *ofmt = PyString_FromString(fmt); if (ofmt) { res = PyString_Format(ofmt,args); Py_DECREF(ofmt); } Py_DECREF(args); } } return res; } SWIGRUNTIME PyObject * PySwigObject_oct(PySwigObject *v) { return PySwigObject_format("%o",v); } SWIGRUNTIME PyObject * PySwigObject_hex(PySwigObject *v) { return PySwigObject_format("%x",v); } SWIGRUNTIME PyObject * #ifdef METH_NOARGS PySwigObject_repr(PySwigObject *v) #else PySwigObject_repr(PySwigObject *v, PyObject *args) #endif { const char *name = SWIG_TypePrettyName(v->ty); PyObject *hex = PySwigObject_hex(v); PyObject *repr = PyString_FromFormat("", name, PyString_AsString(hex)); Py_DECREF(hex); if (v->next) { #ifdef METH_NOARGS PyObject *nrep = PySwigObject_repr((PySwigObject *)v->next); #else PyObject *nrep = PySwigObject_repr((PySwigObject *)v->next, args); #endif PyString_ConcatAndDel(&repr,nrep); } return repr; } SWIGRUNTIME int PySwigObject_print(PySwigObject *v, FILE *fp, int SWIGUNUSEDPARM(flags)) { #ifdef METH_NOARGS PyObject *repr = PySwigObject_repr(v); #else PyObject *repr = PySwigObject_repr(v, NULL); #endif if (repr) { fputs(PyString_AsString(repr), fp); Py_DECREF(repr); return 0; } else { return 1; } } SWIGRUNTIME PyObject * PySwigObject_str(PySwigObject *v) { char result[SWIG_BUFFER_SIZE]; return SWIG_PackVoidPtr(result, v->ptr, v->ty->name, sizeof(result)) ? PyString_FromString(result) : 0; } SWIGRUNTIME int PySwigObject_compare(PySwigObject *v, PySwigObject *w) { void *i = v->ptr; void *j = w->ptr; return (i < j) ? -1 : ((i > j) ? 
1 : 0); } SWIGRUNTIME PyTypeObject* _PySwigObject_type(void); SWIGRUNTIME PyTypeObject* PySwigObject_type(void) { static PyTypeObject *SWIG_STATIC_POINTER(type) = _PySwigObject_type(); return type; } SWIGRUNTIMEINLINE int PySwigObject_Check(PyObject *op) { return ((op)->ob_type == PySwigObject_type()) || (strcmp((op)->ob_type->tp_name,"PySwigObject") == 0); } SWIGRUNTIME PyObject * PySwigObject_New(void *ptr, swig_type_info *ty, int own); SWIGRUNTIME void PySwigObject_dealloc(PyObject *v) { PySwigObject *sobj = (PySwigObject *) v; PyObject *next = sobj->next; if (sobj->own == SWIG_POINTER_OWN) { swig_type_info *ty = sobj->ty; PySwigClientData *data = ty ? (PySwigClientData *) ty->clientdata : 0; PyObject *destroy = data ? data->destroy : 0; if (destroy) { /* destroy is always a VARARGS method */ PyObject *res; if (data->delargs) { /* we need to create a temporal object to carry the destroy operation */ PyObject *tmp = PySwigObject_New(sobj->ptr, ty, 0); res = SWIG_Python_CallFunctor(destroy, tmp); Py_DECREF(tmp); } else { PyCFunction meth = PyCFunction_GET_FUNCTION(destroy); PyObject *mself = PyCFunction_GET_SELF(destroy); res = ((*meth)(mself, v)); } Py_XDECREF(res); } #if !defined(SWIG_PYTHON_SILENT_MEMLEAK) else { const char *name = SWIG_TypePrettyName(ty); printf("swig/python detected a memory leak of type '%s', no destructor found.\n", (name ? name : "unknown")); } #endif } Py_XDECREF(next); PyObject_DEL(v); } SWIGRUNTIME PyObject* PySwigObject_append(PyObject* v, PyObject* next) { PySwigObject *sobj = (PySwigObject *) v; #ifndef METH_O PyObject *tmp = 0; if (!PyArg_ParseTuple(next,(char *)"O:append", &tmp)) return NULL; next = tmp; #endif if (!PySwigObject_Check(next)) { return NULL; } sobj->next = next; Py_INCREF(next); return SWIG_Py_Void(); } SWIGRUNTIME PyObject* #ifdef METH_NOARGS PySwigObject_next(PyObject* v) #else PySwigObject_next(PyObject* v, PyObject *SWIGUNUSEDPARM(args)) #endif { PySwigObject *sobj = (PySwigObject *) v; if (sobj->next) { Py_INCREF(sobj->next); return sobj->next; } else { return SWIG_Py_Void(); } } SWIGINTERN PyObject* #ifdef METH_NOARGS PySwigObject_disown(PyObject *v) #else PySwigObject_disown(PyObject* v, PyObject *SWIGUNUSEDPARM(args)) #endif { PySwigObject *sobj = (PySwigObject *)v; sobj->own = 0; return SWIG_Py_Void(); } SWIGINTERN PyObject* #ifdef METH_NOARGS PySwigObject_acquire(PyObject *v) #else PySwigObject_acquire(PyObject* v, PyObject *SWIGUNUSEDPARM(args)) #endif { PySwigObject *sobj = (PySwigObject *)v; sobj->own = SWIG_POINTER_OWN; return SWIG_Py_Void(); } SWIGINTERN PyObject* PySwigObject_own(PyObject *v, PyObject *args) { PyObject *val = 0; #if (PY_VERSION_HEX < 0x02020000) if (!PyArg_ParseTuple(args,(char *)"|O:own",&val)) #else if (!PyArg_UnpackTuple(args, (char *)"own", 0, 1, &val)) #endif { return NULL; } else { PySwigObject *sobj = (PySwigObject *)v; PyObject *obj = PyBool_FromLong(sobj->own); if (val) { #ifdef METH_NOARGS if (PyObject_IsTrue(val)) { PySwigObject_acquire(v); } else { PySwigObject_disown(v); } #else if (PyObject_IsTrue(val)) { PySwigObject_acquire(v,args); } else { PySwigObject_disown(v,args); } #endif } return obj; } } #ifdef METH_O static PyMethodDef swigobject_methods[] = { {(char *)"disown", (PyCFunction)PySwigObject_disown, METH_NOARGS, (char *)"releases ownership of the pointer"}, {(char *)"acquire", (PyCFunction)PySwigObject_acquire, METH_NOARGS, (char *)"aquires ownership of the pointer"}, {(char *)"own", (PyCFunction)PySwigObject_own, METH_VARARGS, (char *)"returns/sets ownership of the pointer"}, {(char 
*)"append", (PyCFunction)PySwigObject_append, METH_O, (char *)"appends another 'this' object"}, {(char *)"next", (PyCFunction)PySwigObject_next, METH_NOARGS, (char *)"returns the next 'this' object"}, {(char *)"__repr__",(PyCFunction)PySwigObject_repr, METH_NOARGS, (char *)"returns object representation"}, {0, 0, 0, 0} }; #else static PyMethodDef swigobject_methods[] = { {(char *)"disown", (PyCFunction)PySwigObject_disown, METH_VARARGS, (char *)"releases ownership of the pointer"}, {(char *)"acquire", (PyCFunction)PySwigObject_acquire, METH_VARARGS, (char *)"aquires ownership of the pointer"}, {(char *)"own", (PyCFunction)PySwigObject_own, METH_VARARGS, (char *)"returns/sets ownership of the pointer"}, {(char *)"append", (PyCFunction)PySwigObject_append, METH_VARARGS, (char *)"appends another 'this' object"}, {(char *)"next", (PyCFunction)PySwigObject_next, METH_VARARGS, (char *)"returns the next 'this' object"}, {(char *)"__repr__",(PyCFunction)PySwigObject_repr, METH_VARARGS, (char *)"returns object representation"}, {0, 0, 0, 0} }; #endif #if PY_VERSION_HEX < 0x02020000 SWIGINTERN PyObject * PySwigObject_getattr(PySwigObject *sobj,char *name) { return Py_FindMethod(swigobject_methods, (PyObject *)sobj, name); } #endif SWIGRUNTIME PyTypeObject* _PySwigObject_type(void) { static char swigobject_doc[] = "Swig object carries a C/C++ instance pointer"; static PyNumberMethods PySwigObject_as_number = { (binaryfunc)0, /*nb_add*/ (binaryfunc)0, /*nb_subtract*/ (binaryfunc)0, /*nb_multiply*/ (binaryfunc)0, /*nb_divide*/ (binaryfunc)0, /*nb_remainder*/ (binaryfunc)0, /*nb_divmod*/ (ternaryfunc)0,/*nb_power*/ (unaryfunc)0, /*nb_negative*/ (unaryfunc)0, /*nb_positive*/ (unaryfunc)0, /*nb_absolute*/ (inquiry)0, /*nb_nonzero*/ 0, /*nb_invert*/ 0, /*nb_lshift*/ 0, /*nb_rshift*/ 0, /*nb_and*/ 0, /*nb_xor*/ 0, /*nb_or*/ (coercion)0, /*nb_coerce*/ (unaryfunc)PySwigObject_long, /*nb_int*/ (unaryfunc)PySwigObject_long, /*nb_long*/ (unaryfunc)0, /*nb_float*/ (unaryfunc)PySwigObject_oct, /*nb_oct*/ (unaryfunc)PySwigObject_hex, /*nb_hex*/ #if PY_VERSION_HEX >= 0x02050000 /* 2.5.0 */ 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 /* nb_inplace_add -> nb_index */ #elif PY_VERSION_HEX >= 0x02020000 /* 2.2.0 */ 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 /* nb_inplace_add -> nb_inplace_true_divide */ #elif PY_VERSION_HEX >= 0x02000000 /* 2.0.0 */ 0,0,0,0,0,0,0,0,0,0,0 /* nb_inplace_add -> nb_inplace_or */ #endif }; static PyTypeObject pyswigobject_type; static int type_init = 0; if (!type_init) { const PyTypeObject tmp = { PyObject_HEAD_INIT(NULL) 0, /* ob_size */ (char *)"PySwigObject", /* tp_name */ sizeof(PySwigObject), /* tp_basicsize */ 0, /* tp_itemsize */ (destructor)PySwigObject_dealloc, /* tp_dealloc */ (printfunc)PySwigObject_print, /* tp_print */ #if PY_VERSION_HEX < 0x02020000 (getattrfunc)PySwigObject_getattr, /* tp_getattr */ #else (getattrfunc)0, /* tp_getattr */ #endif (setattrfunc)0, /* tp_setattr */ (cmpfunc)PySwigObject_compare, /* tp_compare */ (reprfunc)PySwigObject_repr, /* tp_repr */ &PySwigObject_as_number, /* tp_as_number */ 0, /* tp_as_sequence */ 0, /* tp_as_mapping */ (hashfunc)0, /* tp_hash */ (ternaryfunc)0, /* tp_call */ (reprfunc)PySwigObject_str, /* tp_str */ PyObject_GenericGetAttr, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ Py_TPFLAGS_DEFAULT, /* tp_flags */ swigobject_doc, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ #if PY_VERSION_HEX >= 0x02020000 0, /* tp_iter */ 0, /* tp_iternext */ swigobject_methods, /* tp_methods */ 0, /* 
tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ 0, /* tp_init */ 0, /* tp_alloc */ 0, /* tp_new */ 0, /* tp_free */ 0, /* tp_is_gc */ 0, /* tp_bases */ 0, /* tp_mro */ 0, /* tp_cache */ 0, /* tp_subclasses */ 0, /* tp_weaklist */ #endif #if PY_VERSION_HEX >= 0x02030000 0, /* tp_del */ #endif #ifdef COUNT_ALLOCS 0,0,0,0 /* tp_alloc -> tp_next */ #endif }; pyswigobject_type = tmp; pyswigobject_type.ob_type = &PyType_Type; type_init = 1; } return &pyswigobject_type; } SWIGRUNTIME PyObject * PySwigObject_New(void *ptr, swig_type_info *ty, int own) { PySwigObject *sobj = PyObject_NEW(PySwigObject, PySwigObject_type()); if (sobj) { sobj->ptr = ptr; sobj->ty = ty; sobj->own = own; sobj->next = 0; } return (PyObject *)sobj; } /* ----------------------------------------------------------------------------- * Implements a simple Swig Packed type, and use it instead of string * ----------------------------------------------------------------------------- */ typedef struct { PyObject_HEAD void *pack; swig_type_info *ty; size_t size; } PySwigPacked; SWIGRUNTIME int PySwigPacked_print(PySwigPacked *v, FILE *fp, int SWIGUNUSEDPARM(flags)) { char result[SWIG_BUFFER_SIZE]; fputs("<Swig Packed ", fp); if (SWIG_PackDataName(result, v->pack, v->size, 0, sizeof(result))) { fputs("at ", fp); fputs(result, fp); } fputs(v->ty->name,fp); fputs(">", fp); return 0; } SWIGRUNTIME PyObject * PySwigPacked_repr(PySwigPacked *v) { char result[SWIG_BUFFER_SIZE]; if (SWIG_PackDataName(result, v->pack, v->size, 0, sizeof(result))) { return PyString_FromFormat("<Swig Packed at %s%s>", result, v->ty->name); } else { return PyString_FromFormat("<Swig Packed %s>", v->ty->name); } } SWIGRUNTIME PyObject * PySwigPacked_str(PySwigPacked *v) { char result[SWIG_BUFFER_SIZE]; if (SWIG_PackDataName(result, v->pack, v->size, 0, sizeof(result))){ return PyString_FromFormat("%s%s", result, v->ty->name); } else { return PyString_FromString(v->ty->name); } } SWIGRUNTIME int PySwigPacked_compare(PySwigPacked *v, PySwigPacked *w) { size_t i = v->size; size_t j = w->size; int s = (i < j) ? -1 : ((i > j) ? 1 : 0); return s ?
s : strncmp((char *)v->pack, (char *)w->pack, 2*v->size); } SWIGRUNTIME PyTypeObject* _PySwigPacked_type(void); SWIGRUNTIME PyTypeObject* PySwigPacked_type(void) { static PyTypeObject *SWIG_STATIC_POINTER(type) = _PySwigPacked_type(); return type; } SWIGRUNTIMEINLINE int PySwigPacked_Check(PyObject *op) { return ((op)->ob_type == _PySwigPacked_type()) || (strcmp((op)->ob_type->tp_name,"PySwigPacked") == 0); } SWIGRUNTIME void PySwigPacked_dealloc(PyObject *v) { if (PySwigPacked_Check(v)) { PySwigPacked *sobj = (PySwigPacked *) v; free(sobj->pack); } PyObject_DEL(v); } SWIGRUNTIME PyTypeObject* _PySwigPacked_type(void) { static char swigpacked_doc[] = "Swig object carries a C/C++ instance pointer"; static PyTypeObject pyswigpacked_type; static int type_init = 0; if (!type_init) { const PyTypeObject tmp = { PyObject_HEAD_INIT(NULL) 0, /* ob_size */ (char *)"PySwigPacked", /* tp_name */ sizeof(PySwigPacked), /* tp_basicsize */ 0, /* tp_itemsize */ (destructor)PySwigPacked_dealloc, /* tp_dealloc */ (printfunc)PySwigPacked_print, /* tp_print */ (getattrfunc)0, /* tp_getattr */ (setattrfunc)0, /* tp_setattr */ (cmpfunc)PySwigPacked_compare, /* tp_compare */ (reprfunc)PySwigPacked_repr, /* tp_repr */ 0, /* tp_as_number */ 0, /* tp_as_sequence */ 0, /* tp_as_mapping */ (hashfunc)0, /* tp_hash */ (ternaryfunc)0, /* tp_call */ (reprfunc)PySwigPacked_str, /* tp_str */ PyObject_GenericGetAttr, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ Py_TPFLAGS_DEFAULT, /* tp_flags */ swigpacked_doc, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ #if PY_VERSION_HEX >= 0x02020000 0, /* tp_iter */ 0, /* tp_iternext */ 0, /* tp_methods */ 0, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ 0, /* tp_init */ 0, /* tp_alloc */ 0, /* tp_new */ 0, /* tp_free */ 0, /* tp_is_gc */ 0, /* tp_bases */ 0, /* tp_mro */ 0, /* tp_cache */ 0, /* tp_subclasses */ 0, /* tp_weaklist */ #endif #if PY_VERSION_HEX >= 0x02030000 0, /* tp_del */ #endif #ifdef COUNT_ALLOCS 0,0,0,0 /* tp_alloc -> tp_next */ #endif }; pyswigpacked_type = tmp; pyswigpacked_type.ob_type = &PyType_Type; type_init = 1; } return &pyswigpacked_type; } SWIGRUNTIME PyObject * PySwigPacked_New(void *ptr, size_t size, swig_type_info *ty) { PySwigPacked *sobj = PyObject_NEW(PySwigPacked, PySwigPacked_type()); if (sobj) { void *pack = malloc(size); if (pack) { memcpy(pack, ptr, size); sobj->pack = pack; sobj->ty = ty; sobj->size = size; } else { PyObject_DEL((PyObject *) sobj); sobj = 0; } } return (PyObject *) sobj; } SWIGRUNTIME swig_type_info * PySwigPacked_UnpackData(PyObject *obj, void *ptr, size_t size) { if (PySwigPacked_Check(obj)) { PySwigPacked *sobj = (PySwigPacked *)obj; if (sobj->size != size) return 0; memcpy(ptr, sobj->pack, size); return sobj->ty; } else { return 0; } } /* ----------------------------------------------------------------------------- * pointers/data manipulation * ----------------------------------------------------------------------------- */ SWIGRUNTIMEINLINE PyObject * _SWIG_This(void) { return PyString_FromString("this"); } SWIGRUNTIME PyObject * SWIG_This(void) { static PyObject *SWIG_STATIC_POINTER(swig_this) = _SWIG_This(); return swig_this; } /* #define SWIG_PYTHON_SLOW_GETSET_THIS */ SWIGRUNTIME PySwigObject * SWIG_Python_GetSwigThis(PyObject *pyobj) { if (PySwigObject_Check(pyobj)) { return (PySwigObject *) pyobj; } else { PyObject *obj = 0; #if 
(!defined(SWIG_PYTHON_SLOW_GETSET_THIS) && (PY_VERSION_HEX >= 0x02030000)) if (PyInstance_Check(pyobj)) { obj = _PyInstance_Lookup(pyobj, SWIG_This()); } else { PyObject **dictptr = _PyObject_GetDictPtr(pyobj); if (dictptr != NULL) { PyObject *dict = *dictptr; obj = dict ? PyDict_GetItem(dict, SWIG_This()) : 0; } else { #ifdef PyWeakref_CheckProxy if (PyWeakref_CheckProxy(pyobj)) { PyObject *wobj = PyWeakref_GET_OBJECT(pyobj); return wobj ? SWIG_Python_GetSwigThis(wobj) : 0; } #endif obj = PyObject_GetAttr(pyobj,SWIG_This()); if (obj) { Py_DECREF(obj); } else { if (PyErr_Occurred()) PyErr_Clear(); return 0; } } } #else obj = PyObject_GetAttr(pyobj,SWIG_This()); if (obj) { Py_DECREF(obj); } else { if (PyErr_Occurred()) PyErr_Clear(); return 0; } #endif if (obj && !PySwigObject_Check(obj)) { /* a PyObject is called 'this', try to get the 'real this' PySwigObject from it */ return SWIG_Python_GetSwigThis(obj); } return (PySwigObject *)obj; } } /* Acquire a pointer value */ SWIGRUNTIME int SWIG_Python_AcquirePtr(PyObject *obj, int own) { if (own == SWIG_POINTER_OWN) { PySwigObject *sobj = SWIG_Python_GetSwigThis(obj); if (sobj) { int oldown = sobj->own; sobj->own = own; return oldown; } } return 0; } /* Convert a pointer value */ SWIGRUNTIME int SWIG_Python_ConvertPtrAndOwn(PyObject *obj, void **ptr, swig_type_info *ty, int flags, int *own) { if (!obj) return SWIG_ERROR; if (obj == Py_None) { if (ptr) *ptr = 0; return SWIG_OK; } else { PySwigObject *sobj = SWIG_Python_GetSwigThis(obj); if (own) *own = 0; while (sobj) { void *vptr = sobj->ptr; if (ty) { swig_type_info *to = sobj->ty; if (to == ty) { /* no type cast needed */ if (ptr) *ptr = vptr; break; } else { swig_cast_info *tc = SWIG_TypeCheck(to->name,ty); if (!tc) { sobj = (PySwigObject *)sobj->next; } else { if (ptr) { int newmemory = 0; *ptr = SWIG_TypeCast(tc,vptr,&newmemory); if (newmemory == SWIG_CAST_NEW_MEMORY) { assert(own); if (own) *own = *own | SWIG_CAST_NEW_MEMORY; } } break; } } } else { if (ptr) *ptr = vptr; break; } } if (sobj) { if (own) *own = *own | sobj->own; if (flags & SWIG_POINTER_DISOWN) { sobj->own = 0; } return SWIG_OK; } else { int res = SWIG_ERROR; if (flags & SWIG_POINTER_IMPLICIT_CONV) { PySwigClientData *data = ty ? (PySwigClientData *) ty->clientdata : 0; if (data && !data->implicitconv) { PyObject *klass = data->klass; if (klass) { PyObject *impconv; data->implicitconv = 1; /* avoid recursion and call 'explicit' constructors*/ impconv = SWIG_Python_CallFunctor(klass, obj); data->implicitconv = 0; if (PyErr_Occurred()) { PyErr_Clear(); impconv = 0; } if (impconv) { PySwigObject *iobj = SWIG_Python_GetSwigThis(impconv); if (iobj) { void *vptr; res = SWIG_Python_ConvertPtrAndOwn((PyObject*)iobj, &vptr, ty, 0, 0); if (SWIG_IsOK(res)) { if (ptr) { *ptr = vptr; /* transfer the ownership to 'ptr' */ iobj->own = 0; res = SWIG_AddCast(res); res = SWIG_AddNewMask(res); } else { res = SWIG_AddCast(res); } } } Py_DECREF(impconv); } } } } return res; } } } /* Convert a function ptr value */ SWIGRUNTIME int SWIG_Python_ConvertFunctionPtr(PyObject *obj, void **ptr, swig_type_info *ty) { if (!PyCFunction_Check(obj)) { return SWIG_ConvertPtr(obj, ptr, ty, 0); } else { void *vptr = 0; /* here we get the method pointer for callbacks */ const char *doc = (((PyCFunctionObject *)obj) -> m_ml -> ml_doc); const char *desc = doc ? strstr(doc, "swig_ptr: ") : 0; if (desc) { desc = ty ? 
SWIG_UnpackVoidPtr(desc + 10, &vptr, ty->name) : 0; if (!desc) return SWIG_ERROR; } if (ty) { swig_cast_info *tc = SWIG_TypeCheck(desc,ty); if (tc) { int newmemory = 0; *ptr = SWIG_TypeCast(tc,vptr,&newmemory); assert(!newmemory); /* newmemory handling not yet implemented */ } else { return SWIG_ERROR; } } else { *ptr = vptr; } return SWIG_OK; } } /* Convert a packed value value */ SWIGRUNTIME int SWIG_Python_ConvertPacked(PyObject *obj, void *ptr, size_t sz, swig_type_info *ty) { swig_type_info *to = PySwigPacked_UnpackData(obj, ptr, sz); if (!to) return SWIG_ERROR; if (ty) { if (to != ty) { /* check type cast? */ swig_cast_info *tc = SWIG_TypeCheck(to->name,ty); if (!tc) return SWIG_ERROR; } } return SWIG_OK; } /* ----------------------------------------------------------------------------- * Create a new pointer object * ----------------------------------------------------------------------------- */ /* Create a new instance object, whitout calling __init__, and set the 'this' attribute. */ SWIGRUNTIME PyObject* SWIG_Python_NewShadowInstance(PySwigClientData *data, PyObject *swig_this) { #if (PY_VERSION_HEX >= 0x02020000) PyObject *inst = 0; PyObject *newraw = data->newraw; if (newraw) { inst = PyObject_Call(newraw, data->newargs, NULL); if (inst) { #if !defined(SWIG_PYTHON_SLOW_GETSET_THIS) PyObject **dictptr = _PyObject_GetDictPtr(inst); if (dictptr != NULL) { PyObject *dict = *dictptr; if (dict == NULL) { dict = PyDict_New(); *dictptr = dict; PyDict_SetItem(dict, SWIG_This(), swig_this); } } #else PyObject *key = SWIG_This(); PyObject_SetAttr(inst, key, swig_this); #endif } } else { PyObject *dict = PyDict_New(); PyDict_SetItem(dict, SWIG_This(), swig_this); inst = PyInstance_NewRaw(data->newargs, dict); Py_DECREF(dict); } return inst; #else #if (PY_VERSION_HEX >= 0x02010000) PyObject *inst; PyObject *dict = PyDict_New(); PyDict_SetItem(dict, SWIG_This(), swig_this); inst = PyInstance_NewRaw(data->newargs, dict); Py_DECREF(dict); return (PyObject *) inst; #else PyInstanceObject *inst = PyObject_NEW(PyInstanceObject, &PyInstance_Type); if (inst == NULL) { return NULL; } inst->in_class = (PyClassObject *)data->newargs; Py_INCREF(inst->in_class); inst->in_dict = PyDict_New(); if (inst->in_dict == NULL) { Py_DECREF(inst); return NULL; } #ifdef Py_TPFLAGS_HAVE_WEAKREFS inst->in_weakreflist = NULL; #endif #ifdef Py_TPFLAGS_GC PyObject_GC_Init(inst); #endif PyDict_SetItem(inst->in_dict, SWIG_This(), swig_this); return (PyObject *) inst; #endif #endif } SWIGRUNTIME void SWIG_Python_SetSwigThis(PyObject *inst, PyObject *swig_this) { PyObject *dict; #if (PY_VERSION_HEX >= 0x02020000) && !defined(SWIG_PYTHON_SLOW_GETSET_THIS) PyObject **dictptr = _PyObject_GetDictPtr(inst); if (dictptr != NULL) { dict = *dictptr; if (dict == NULL) { dict = PyDict_New(); *dictptr = dict; } PyDict_SetItem(dict, SWIG_This(), swig_this); return; } #endif dict = PyObject_GetAttrString(inst, (char*)"__dict__"); PyDict_SetItem(dict, SWIG_This(), swig_this); Py_DECREF(dict); } SWIGINTERN PyObject * SWIG_Python_InitShadowInstance(PyObject *args) { PyObject *obj[2]; if (!SWIG_Python_UnpackTuple(args,(char*)"swiginit", 2, 2, obj)) { return NULL; } else { PySwigObject *sthis = SWIG_Python_GetSwigThis(obj[0]); if (sthis) { PySwigObject_append((PyObject*) sthis, obj[1]); } else { SWIG_Python_SetSwigThis(obj[0], obj[1]); } return SWIG_Py_Void(); } } /* Create a new pointer object */ SWIGRUNTIME PyObject * SWIG_Python_NewPointerObj(void *ptr, swig_type_info *type, int flags) { if (!ptr) { return SWIG_Py_Void(); } else { int 
own = (flags & SWIG_POINTER_OWN) ? SWIG_POINTER_OWN : 0; PyObject *robj = PySwigObject_New(ptr, type, own); PySwigClientData *clientdata = type ? (PySwigClientData *)(type->clientdata) : 0; if (clientdata && !(flags & SWIG_POINTER_NOSHADOW)) { PyObject *inst = SWIG_Python_NewShadowInstance(clientdata, robj); if (inst) { Py_DECREF(robj); robj = inst; } } return robj; } } /* Create a new packed object */ SWIGRUNTIMEINLINE PyObject * SWIG_Python_NewPackedObj(void *ptr, size_t sz, swig_type_info *type) { return ptr ? PySwigPacked_New((void *) ptr, sz, type) : SWIG_Py_Void(); } /* -----------------------------------------------------------------------------* * Get type list * -----------------------------------------------------------------------------*/ #ifdef SWIG_LINK_RUNTIME void *SWIG_ReturnGlobalTypeList(void *); #endif SWIGRUNTIME swig_module_info * SWIG_Python_GetModule(void) { static void *type_pointer = (void *)0; /* first check if module already created */ if (!type_pointer) { #ifdef SWIG_LINK_RUNTIME type_pointer = SWIG_ReturnGlobalTypeList((void *)0); #else type_pointer = PyCObject_Import((char*)"swig_runtime_data" SWIG_RUNTIME_VERSION, (char*)"type_pointer" SWIG_TYPE_TABLE_NAME); if (PyErr_Occurred()) { PyErr_Clear(); type_pointer = (void *)0; } #endif } return (swig_module_info *) type_pointer; } #if PY_MAJOR_VERSION < 2 /* PyModule_AddObject function was introduced in Python 2.0. The following function is copied out of Python/modsupport.c in python version 2.3.4 */ SWIGINTERN int PyModule_AddObject(PyObject *m, char *name, PyObject *o) { PyObject *dict; if (!PyModule_Check(m)) { PyErr_SetString(PyExc_TypeError, "PyModule_AddObject() needs module as first arg"); return SWIG_ERROR; } if (!o) { PyErr_SetString(PyExc_TypeError, "PyModule_AddObject() needs non-NULL value"); return SWIG_ERROR; } dict = PyModule_GetDict(m); if (dict == NULL) { /* Internal error -- modules must have a dict! 
*/ PyErr_Format(PyExc_SystemError, "module '%s' has no __dict__", PyModule_GetName(m)); return SWIG_ERROR; } if (PyDict_SetItemString(dict, name, o)) return SWIG_ERROR; Py_DECREF(o); return SWIG_OK; } #endif SWIGRUNTIME void SWIG_Python_DestroyModule(void *vptr) { swig_module_info *swig_module = (swig_module_info *) vptr; swig_type_info **types = swig_module->types; size_t i; for (i =0; i < swig_module->size; ++i) { swig_type_info *ty = types[i]; if (ty->owndata) { PySwigClientData *data = (PySwigClientData *) ty->clientdata; if (data) PySwigClientData_Del(data); } } Py_DECREF(SWIG_This()); } SWIGRUNTIME void SWIG_Python_SetModule(swig_module_info *swig_module) { static PyMethodDef swig_empty_runtime_method_table[] = { {NULL, NULL, 0, NULL} };/* Sentinel */ PyObject *module = Py_InitModule((char*)"swig_runtime_data" SWIG_RUNTIME_VERSION, swig_empty_runtime_method_table); PyObject *pointer = PyCObject_FromVoidPtr((void *) swig_module, SWIG_Python_DestroyModule); if (pointer && module) { PyModule_AddObject(module, (char*)"type_pointer" SWIG_TYPE_TABLE_NAME, pointer); } else { Py_XDECREF(pointer); } } /* The python cached type query */ SWIGRUNTIME PyObject * SWIG_Python_TypeCache(void) { static PyObject *SWIG_STATIC_POINTER(cache) = PyDict_New(); return cache; } SWIGRUNTIME swig_type_info * SWIG_Python_TypeQuery(const char *type) { PyObject *cache = SWIG_Python_TypeCache(); PyObject *key = PyString_FromString(type); PyObject *obj = PyDict_GetItem(cache, key); swig_type_info *descriptor; if (obj) { descriptor = (swig_type_info *) PyCObject_AsVoidPtr(obj); } else { swig_module_info *swig_module = SWIG_Python_GetModule(); descriptor = SWIG_TypeQueryModule(swig_module, swig_module, type); if (descriptor) { obj = PyCObject_FromVoidPtr(descriptor, NULL); PyDict_SetItem(cache, key, obj); Py_DECREF(obj); } } Py_DECREF(key); return descriptor; } /* For backward compatibility only */ #define SWIG_POINTER_EXCEPTION 0 #define SWIG_arg_fail(arg) SWIG_Python_ArgFail(arg) #define SWIG_MustGetPtr(p, type, argnum, flags) SWIG_Python_MustGetPtr(p, type, argnum, flags) SWIGRUNTIME int SWIG_Python_AddErrMesg(const char* mesg, int infront) { if (PyErr_Occurred()) { PyObject *type = 0; PyObject *value = 0; PyObject *traceback = 0; PyErr_Fetch(&type, &value, &traceback); if (value) { PyObject *old_str = PyObject_Str(value); Py_XINCREF(type); PyErr_Clear(); if (infront) { PyErr_Format(type, "%s %s", mesg, PyString_AsString(old_str)); } else { PyErr_Format(type, "%s %s", PyString_AsString(old_str), mesg); } Py_DECREF(old_str); } return 1; } else { return 0; } } SWIGRUNTIME int SWIG_Python_ArgFail(int argnum) { if (PyErr_Occurred()) { /* add information about failing argument */ char mesg[256]; PyOS_snprintf(mesg, sizeof(mesg), "argument number %d:", argnum); return SWIG_Python_AddErrMesg(mesg, 1); } else { return 0; } } SWIGRUNTIMEINLINE const char * PySwigObject_GetDesc(PyObject *self) { PySwigObject *v = (PySwigObject *)self; swig_type_info *ty = v ? v->ty : 0; return ty ? ty->str : (char*)""; } SWIGRUNTIME void SWIG_Python_TypeError(const char *type, PyObject *obj) { if (type) { #if defined(SWIG_COBJECT_TYPES) if (obj && PySwigObject_Check(obj)) { const char *otype = (const char *) PySwigObject_GetDesc(obj); if (otype) { PyErr_Format(PyExc_TypeError, "a '%s' is expected, 'PySwigObject(%s)' is received", type, otype); return; } } else #endif { const char *otype = (obj ? obj->ob_type->tp_name : 0); if (otype) { PyObject *str = PyObject_Str(obj); const char *cstr = str ? 
PyString_AsString(str) : 0; if (cstr) { PyErr_Format(PyExc_TypeError, "a '%s' is expected, '%s(%s)' is received", type, otype, cstr); } else { PyErr_Format(PyExc_TypeError, "a '%s' is expected, '%s' is received", type, otype); } Py_XDECREF(str); return; } } PyErr_Format(PyExc_TypeError, "a '%s' is expected", type); } else { PyErr_Format(PyExc_TypeError, "unexpected type is received"); } } /* Convert a pointer value, signal an exception on a type mismatch */ SWIGRUNTIME void * SWIG_Python_MustGetPtr(PyObject *obj, swig_type_info *ty, int argnum, int flags) { void *result; if (SWIG_Python_ConvertPtr(obj, &result, ty, flags) == -1) { PyErr_Clear(); if (flags & SWIG_POINTER_EXCEPTION) { SWIG_Python_TypeError(SWIG_TypePrettyName(ty), obj); SWIG_Python_ArgFail(argnum); } } return result; } #ifdef __cplusplus #if 0 { /* cc-mode */ #endif } #endif #define SWIG_exception_fail(code, msg) do { SWIG_Error(code, msg); SWIG_fail; } while(0) #define SWIG_contract_assert(expr, msg) if (!(expr)) { SWIG_Error(SWIG_RuntimeError, msg); SWIG_fail; } else #define SWIG_exception(code, msg) do { SWIG_Error(code, msg); SWIG_fail;; } while(0) /* -------- TYPES TABLE (BEGIN) -------- */ #define SWIGTYPE_p_CircularVector swig_types[0] #define SWIGTYPE_p_SpikeContainer swig_types[1] #define SWIGTYPE_p_char swig_types[2] #define SWIGTYPE_p_int swig_types[3] #define SWIGTYPE_p_long swig_types[4] #define SWIGTYPE_p_p_long swig_types[5] #define SWIGTYPE_p_string swig_types[6] static swig_type_info *swig_types[8]; static swig_module_info swig_module = {swig_types, 7, 0, 0, 0, 0}; #define SWIG_TypeQuery(name) SWIG_TypeQueryModule(&swig_module, &swig_module, name) #define SWIG_MangledTypeQuery(name) SWIG_MangledTypeQueryModule(&swig_module, &swig_module, name) /* -------- TYPES TABLE (END) -------- */ #if (PY_VERSION_HEX <= 0x02000000) # if !defined(SWIG_PYTHON_CLASSIC) # error "This python version requires swig to be run with the '-classic' option" # endif #endif /*----------------------------------------------- @(target):= _ccircular.so ------------------------------------------------*/ #define SWIG_init init_ccircular #define SWIG_name "_ccircular" #define SWIGVERSION 0x010336 #define SWIG_VERSION SWIGVERSION #define SWIG_as_voidptr(a) const_cast< void * >(static_cast< const void * >(a)) #define SWIG_as_voidptrptr(a) ((void)SWIG_as_voidptr(*a),reinterpret_cast< void** >(a)) #include <stdexcept> namespace swig { class PyObject_ptr { protected: PyObject *_obj; public: PyObject_ptr() :_obj(0) { } PyObject_ptr(const PyObject_ptr& item) : _obj(item._obj) { Py_XINCREF(_obj); } PyObject_ptr(PyObject *obj, bool initial_ref = true) :_obj(obj) { if (initial_ref) { Py_XINCREF(_obj); } } PyObject_ptr & operator=(const PyObject_ptr& item) { Py_XINCREF(item._obj); Py_XDECREF(_obj); _obj = item._obj; return *this; } ~PyObject_ptr() { Py_XDECREF(_obj); } operator PyObject *() const { return _obj; } PyObject *operator->() const { return _obj; } }; } namespace swig { struct PyObject_var : PyObject_ptr { PyObject_var(PyObject* obj = 0) : PyObject_ptr(obj, false) { } PyObject_var & operator = (PyObject* obj) { Py_XDECREF(_obj); _obj = obj; return *this; } }; } #define SWIG_FILE_WITH_INIT #include "ccircular.h" #ifndef SWIG_FILE_WITH_INIT # define NO_IMPORT_ARRAY #endif #include "stdio.h" #include <numpy/arrayobject.h> SWIGINTERN int SWIG_AsVal_double (PyObject *obj, double *val) { int res = SWIG_TypeError; if (PyFloat_Check(obj)) { if (val) *val = PyFloat_AsDouble(obj); return SWIG_OK; } else if (PyInt_Check(obj)) { if (val) *val = PyInt_AsLong(obj); return SWIG_OK; } else
if (PyLong_Check(obj)) { double v = PyLong_AsDouble(obj); if (!PyErr_Occurred()) { if (val) *val = v; return SWIG_OK; } else { PyErr_Clear(); } } #ifdef SWIG_PYTHON_CAST_MODE { int dispatch = 0; double d = PyFloat_AsDouble(obj); if (!PyErr_Occurred()) { if (val) *val = d; return SWIG_AddCast(SWIG_OK); } else { PyErr_Clear(); } if (!dispatch) { long v = PyLong_AsLong(obj); if (!PyErr_Occurred()) { if (val) *val = v; return SWIG_AddCast(SWIG_AddCast(SWIG_OK)); } else { PyErr_Clear(); } } } #endif return res; } #include <float.h> #include <math.h> SWIGINTERNINLINE int SWIG_CanCastAsInteger(double *d, double min, double max) { double x = *d; if ((min <= x && x <= max)) { double fx = floor(x); double cx = ceil(x); double rd = ((x - fx) < 0.5) ? fx : cx; /* simple rint */ if ((errno == EDOM) || (errno == ERANGE)) { errno = 0; } else { double summ, reps, diff; if (rd < x) { diff = x - rd; } else if (rd > x) { diff = rd - x; } else { return 1; } summ = rd + x; reps = diff/summ; if (reps < 8*DBL_EPSILON) { *d = rd; return 1; } } } return 0; } SWIGINTERN int SWIG_AsVal_long (PyObject *obj, long* val) { if (PyInt_Check(obj)) { if (val) *val = PyInt_AsLong(obj); return SWIG_OK; } else if (PyLong_Check(obj)) { long v = PyLong_AsLong(obj); if (!PyErr_Occurred()) { if (val) *val = v; return SWIG_OK; } else { PyErr_Clear(); } } #ifdef SWIG_PYTHON_CAST_MODE { int dispatch = 0; long v = PyInt_AsLong(obj); if (!PyErr_Occurred()) { if (val) *val = v; return SWIG_AddCast(SWIG_OK); } else { PyErr_Clear(); } if (!dispatch) { double d; int res = SWIG_AddCast(SWIG_AsVal_double (obj,&d)); if (SWIG_IsOK(res) && SWIG_CanCastAsInteger(&d, LONG_MIN, LONG_MAX)) { if (val) *val = (long)(d); return res; } } } #endif return SWIG_TypeError; } #define SWIG_From_long PyInt_FromLong #include <limits.h> #if !defined(SWIG_NO_LLONG_MAX) # if !defined(LLONG_MAX) && defined(__GNUC__) && defined (__LONG_LONG_MAX__) # define LLONG_MAX __LONG_LONG_MAX__ # define LLONG_MIN (-LLONG_MAX - 1LL) # define ULLONG_MAX (LLONG_MAX * 2ULL + 1ULL) # endif #endif SWIGINTERN int SWIG_AsVal_int (PyObject * obj, int *val) { long v; int res = SWIG_AsVal_long (obj, &v); if (SWIG_IsOK(res)) { if ((v < INT_MIN || v > INT_MAX)) { return SWIG_OverflowError; } else { if (val) *val = static_cast< int >(v); } } return res; } SWIGINTERNINLINE PyObject * SWIG_From_int (int value) { return SWIG_From_long (value); } /* Support older NumPy data type names */ #if NDARRAY_VERSION < 0x01000000 #define NPY_BOOL PyArray_BOOL #define NPY_BYTE PyArray_BYTE #define NPY_UBYTE PyArray_UBYTE #define NPY_SHORT PyArray_SHORT #define NPY_USHORT PyArray_USHORT #define NPY_INT PyArray_INT #define NPY_UINT PyArray_UINT #define NPY_LONG PyArray_LONG #define NPY_ULONG PyArray_ULONG #define NPY_LONGLONG PyArray_LONGLONG #define NPY_ULONGLONG PyArray_ULONGLONG #define NPY_FLOAT PyArray_FLOAT #define NPY_DOUBLE PyArray_DOUBLE #define NPY_LONGDOUBLE PyArray_LONGDOUBLE #define NPY_CFLOAT PyArray_CFLOAT #define NPY_CDOUBLE PyArray_CDOUBLE #define NPY_CLONGDOUBLE PyArray_CLONGDOUBLE #define NPY_OBJECT PyArray_OBJECT #define NPY_STRING PyArray_STRING #define NPY_UNICODE PyArray_UNICODE #define NPY_VOID PyArray_VOID #define NPY_NTYPES PyArray_NTYPES #define NPY_NOTYPE PyArray_NOTYPE #define NPY_CHAR PyArray_CHAR #define NPY_USERDEF PyArray_USERDEF #define npy_intp intp #define NPY_MAX_BYTE MAX_BYTE #define NPY_MIN_BYTE MIN_BYTE #define NPY_MAX_UBYTE MAX_UBYTE #define NPY_MAX_SHORT MAX_SHORT #define NPY_MIN_SHORT MIN_SHORT #define NPY_MAX_USHORT MAX_USHORT #define NPY_MAX_INT MAX_INT #define NPY_MIN_INT MIN_INT #define
NPY_MAX_UINT MAX_UINT #define NPY_MAX_LONG MAX_LONG #define NPY_MIN_LONG MIN_LONG #define NPY_MAX_ULONG MAX_ULONG #define NPY_MAX_LONGLONG MAX_LONGLONG #define NPY_MIN_LONGLONG MIN_LONGLONG #define NPY_MAX_ULONGLONG MAX_ULONGLONG #define NPY_MAX_INTP MAX_INTP #define NPY_MIN_INTP MIN_INTP #define NPY_FARRAY FARRAY #define NPY_F_CONTIGUOUS F_CONTIGUOUS #endif /* Macros to extract array attributes. */ #define is_array(a) ((a) && PyArray_Check((PyArrayObject *)a)) #define array_type(a) (int)(PyArray_TYPE(a)) #define array_numdims(a) (((PyArrayObject *)a)->nd) #define array_dimensions(a) (((PyArrayObject *)a)->dimensions) #define array_size(a,i) (((PyArrayObject *)a)->dimensions[i]) #define array_data(a) (((PyArrayObject *)a)->data) #define array_is_contiguous(a) (PyArray_ISCONTIGUOUS(a)) #define array_is_native(a) (PyArray_ISNOTSWAPPED(a)) #define array_is_fortran(a) (PyArray_ISFORTRAN(a)) /* Given a PyObject, return a string describing its type. */ char* pytype_string(PyObject* py_obj) { if (py_obj == NULL ) return "C NULL value"; if (py_obj == Py_None ) return "Python None" ; if (PyCallable_Check(py_obj)) return "callable" ; if (PyString_Check( py_obj)) return "string" ; if (PyInt_Check( py_obj)) return "int" ; if (PyFloat_Check( py_obj)) return "float" ; if (PyDict_Check( py_obj)) return "dict" ; if (PyList_Check( py_obj)) return "list" ; if (PyTuple_Check( py_obj)) return "tuple" ; if (PyFile_Check( py_obj)) return "file" ; if (PyModule_Check( py_obj)) return "module" ; if (PyInstance_Check(py_obj)) return "instance" ; return "unkown type"; } /* Given a NumPy typecode, return a string describing the type. */ char* typecode_string(int typecode) { static char* type_names[25] = {"bool", "byte", "unsigned byte", "short", "unsigned short", "int", "unsigned int", "long", "unsigned long", "long long", "unsigned long long", "float", "double", "long double", "complex float", "complex double", "complex long double", "object", "string", "unicode", "void", "ntypes", "notype", "char", "unknown"}; return typecode < 24 ? type_names[typecode] : type_names[24]; } /* Make sure input has correct numpy type. Allow character and byte * to match. Also allow int and long to match. This is deprecated. * You should use PyArray_EquivTypenums() instead. */ int type_match(int actual_type, int desired_type) { return PyArray_EquivTypenums(actual_type, desired_type); } /* Given a PyObject pointer, cast it to a PyArrayObject pointer if * legal. If not, set the python error string appropriately and * return NULL. */ PyArrayObject* obj_to_array_no_conversion(PyObject* input, int typecode) { PyArrayObject* ary = NULL; if (is_array(input) && (typecode == NPY_NOTYPE || PyArray_EquivTypenums(array_type(input), typecode))) { ary = (PyArrayObject*) input; } else if is_array(input) { char* desired_type = typecode_string(typecode); char* actual_type = typecode_string(array_type(input)); PyErr_Format(PyExc_TypeError, "Array of type '%s' required. Array of type '%s' given", desired_type, actual_type); ary = NULL; } else { char * desired_type = typecode_string(typecode); char * actual_type = pytype_string(input); PyErr_Format(PyExc_TypeError, "Array of type '%s' required. A '%s' was given", desired_type, actual_type); ary = NULL; } return ary; } /* Convert the given PyObject to a NumPy array with the given * typecode. On success, return a valid PyArrayObject* with the * correct type. On failure, the python error string will be set and * the routine returns NULL. 
*/ PyArrayObject* obj_to_array_allow_conversion(PyObject* input, int typecode, int* is_new_object) { PyArrayObject* ary = NULL; PyObject* py_obj; if (is_array(input) && (typecode == NPY_NOTYPE || PyArray_EquivTypenums(array_type(input),typecode))) { ary = (PyArrayObject*) input; *is_new_object = 0; } else { py_obj = PyArray_FromObject(input, typecode, 0, 0); /* If NULL, PyArray_FromObject will have set python error value.*/ ary = (PyArrayObject*) py_obj; *is_new_object = 1; } return ary; } /* Given a PyArrayObject, check to see if it is contiguous. If so, * return the input pointer and flag it as not a new object. If it is * not contiguous, create a new PyArrayObject using the original data, * flag it as a new object and return the pointer. */ PyArrayObject* make_contiguous(PyArrayObject* ary, int* is_new_object, int min_dims, int max_dims) { PyArrayObject* result; if (array_is_contiguous(ary)) { result = ary; *is_new_object = 0; } else { result = (PyArrayObject*) PyArray_ContiguousFromObject((PyObject*)ary, array_type(ary), min_dims, max_dims); *is_new_object = 1; } return result; } /* Convert a given PyObject to a contiguous PyArrayObject of the * specified type. If the input object is not a contiguous * PyArrayObject, a new one will be created and the new object flag * will be set. */ PyArrayObject* obj_to_array_contiguous_allow_conversion(PyObject* input, int typecode, int* is_new_object) { int is_new1 = 0; int is_new2 = 0; PyArrayObject* ary2; PyArrayObject* ary1 = obj_to_array_allow_conversion(input, typecode, &is_new1); if (ary1) { ary2 = make_contiguous(ary1, &is_new2, 0, 0); if ( is_new1 && is_new2) { Py_DECREF(ary1); } ary1 = ary2; } *is_new_object = is_new1 || is_new2; return ary1; } /* Test whether a python object is contiguous. If array is * contiguous, return 1. Otherwise, set the python error string and * return 0. */ int require_contiguous(PyArrayObject* ary) { int contiguous = 1; if (!array_is_contiguous(ary)) { PyErr_SetString(PyExc_TypeError, "Array must be contiguous. A non-contiguous array was given"); contiguous = 0; } return contiguous; } /* Require that a numpy array is not byte-swapped. If the array is * not byte-swapped, return 1. Otherwise, set the python error string * and return 0. */ int require_native(PyArrayObject* ary) { int native = 1; if (!array_is_native(ary)) { PyErr_SetString(PyExc_TypeError, "Array must have native byteorder. " "A byte-swapped array was given"); native = 0; } return native; } /* Require the given PyArrayObject to have a specified number of * dimensions. If the array has the specified number of dimensions, * return 1. Otherwise, set the python error string and return 0. */ int require_dimensions(PyArrayObject* ary, int exact_dimensions) { int success = 1; if (array_numdims(ary) != exact_dimensions) { PyErr_Format(PyExc_TypeError, "Array must have %d dimensions. Given array has %d dimensions", exact_dimensions, array_numdims(ary)); success = 0; } return success; } /* Require the given PyArrayObject to have one of a list of specified * number of dimensions. If the array has one of the specified number * of dimensions, return 1. Otherwise, set the python error string * and return 0. 
*/ int require_dimensions_n(PyArrayObject* ary, int* exact_dimensions, int n) { int success = 0; int i; char dims_str[255] = ""; char s[255]; for (i = 0; i < n && !success; i++) { if (array_numdims(ary) == exact_dimensions[i]) { success = 1; } } if (!success) { for (i = 0; i < n-1; i++) { sprintf(s, "%d, ", exact_dimensions[i]); strcat(dims_str,s); } sprintf(s, " or %d", exact_dimensions[n-1]); strcat(dims_str,s); PyErr_Format(PyExc_TypeError, "Array must have %s dimensions. Given array has %d dimensions", dims_str, array_numdims(ary)); } return success; } /* Require the given PyArrayObject to have a specified shape. If the * array has the specified shape, return 1. Otherwise, set the python * error string and return 0. */ int require_size(PyArrayObject* ary, npy_intp* size, int n) { int i; int success = 1; int len; char desired_dims[255] = "["; char s[255]; char actual_dims[255] = "["; for(i=0; i < n;i++) { if (size[i] != -1 && size[i] != array_size(ary,i)) { success = 0; } } if (!success) { for (i = 0; i < n; i++) { if (size[i] == -1) { sprintf(s, "*,"); } else { sprintf(s, "%ld,", (long int)size[i]); } strcat(desired_dims,s); } len = strlen(desired_dims); desired_dims[len-1] = ']'; for (i = 0; i < n; i++) { sprintf(s, "%ld,", (long int)array_size(ary,i)); strcat(actual_dims,s); } len = strlen(actual_dims); actual_dims[len-1] = ']'; PyErr_Format(PyExc_TypeError, "Array must have shape of %s. Given array has shape of %s", desired_dims, actual_dims); } return success; } /* Require the given PyArrayObject to to be FORTRAN ordered. If the * the PyArrayObject is already FORTRAN ordered, do nothing. Else, * set the FORTRAN ordering flag and recompute the strides. */ int require_fortran(PyArrayObject* ary) { int success = 1; int nd = array_numdims(ary); int i; if (array_is_fortran(ary)) return success; /* Set the FORTRAN ordered flag */ ary->flags = NPY_FARRAY; /* Recompute the strides */ ary->strides[0] = ary->strides[nd-1]; for (i=1; i < nd; ++i) ary->strides[i] = ary->strides[i-1] * array_size(ary,i-1); return success; } #ifdef __cplusplus extern "C" { #endif SWIGINTERN PyObject *_wrap_CircularVector_X_set(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; CircularVector *arg1 = (CircularVector *) 0 ; long *arg2 = (long *) 0 ; void *argp1 = 0 ; int res1 = 0 ; void *argp2 = 0 ; int res2 = 0 ; PyObject * obj0 = 0 ; PyObject * obj1 = 0 ; if (!PyArg_ParseTuple(args,(char *)"OO:CircularVector_X_set",&obj0,&obj1)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_CircularVector, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "CircularVector_X_set" "', argument " "1"" of type '" "CircularVector *""'"); } arg1 = reinterpret_cast< CircularVector * >(argp1); res2 = SWIG_ConvertPtr(obj1, &argp2,SWIGTYPE_p_long, SWIG_POINTER_DISOWN | 0 ); if (!SWIG_IsOK(res2)) { SWIG_exception_fail(SWIG_ArgError(res2), "in method '" "CircularVector_X_set" "', argument " "2"" of type '" "long *""'"); } arg2 = reinterpret_cast< long * >(argp2); { try { if (arg1) (arg1)->X = arg2; } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_Py_Void(); return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_CircularVector_X_get(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; CircularVector *arg1 = (CircularVector *) 0 ; void *argp1 = 0 ; int res1 = 0 ; PyObject * obj0 = 0 ; long *result = 0 ; if (!PyArg_ParseTuple(args,(char 
*)"O:CircularVector_X_get",&obj0)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_CircularVector, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "CircularVector_X_get" "', argument " "1"" of type '" "CircularVector *""'"); } arg1 = reinterpret_cast< CircularVector * >(argp1); { try { result = (long *) ((arg1)->X); } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_NewPointerObj(SWIG_as_voidptr(result), SWIGTYPE_p_long, 0 | 0 ); return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_CircularVector_cursor_set(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; CircularVector *arg1 = (CircularVector *) 0 ; long arg2 ; void *argp1 = 0 ; int res1 = 0 ; long val2 ; int ecode2 = 0 ; PyObject * obj0 = 0 ; PyObject * obj1 = 0 ; if (!PyArg_ParseTuple(args,(char *)"OO:CircularVector_cursor_set",&obj0,&obj1)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_CircularVector, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "CircularVector_cursor_set" "', argument " "1"" of type '" "CircularVector *""'"); } arg1 = reinterpret_cast< CircularVector * >(argp1); ecode2 = SWIG_AsVal_long(obj1, &val2); if (!SWIG_IsOK(ecode2)) { SWIG_exception_fail(SWIG_ArgError(ecode2), "in method '" "CircularVector_cursor_set" "', argument " "2"" of type '" "long""'"); } arg2 = static_cast< long >(val2); { try { if (arg1) (arg1)->cursor = arg2; } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_Py_Void(); return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_CircularVector_cursor_get(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; CircularVector *arg1 = (CircularVector *) 0 ; void *argp1 = 0 ; int res1 = 0 ; PyObject * obj0 = 0 ; long result; if (!PyArg_ParseTuple(args,(char *)"O:CircularVector_cursor_get",&obj0)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_CircularVector, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "CircularVector_cursor_get" "', argument " "1"" of type '" "CircularVector *""'"); } arg1 = reinterpret_cast< CircularVector * >(argp1); { try { result = (long) ((arg1)->cursor); } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_From_long(static_cast< long >(result)); return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_CircularVector_n_set(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; CircularVector *arg1 = (CircularVector *) 0 ; long arg2 ; void *argp1 = 0 ; int res1 = 0 ; long val2 ; int ecode2 = 0 ; PyObject * obj0 = 0 ; PyObject * obj1 = 0 ; if (!PyArg_ParseTuple(args,(char *)"OO:CircularVector_n_set",&obj0,&obj1)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_CircularVector, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "CircularVector_n_set" "', argument " "1"" of type '" "CircularVector *""'"); } arg1 = reinterpret_cast< CircularVector * >(argp1); ecode2 = SWIG_AsVal_long(obj1, &val2); if (!SWIG_IsOK(ecode2)) { SWIG_exception_fail(SWIG_ArgError(ecode2), "in method '" "CircularVector_n_set" "', argument " "2"" of type '" "long""'"); } arg2 = static_cast< long >(val2); { try { if (arg1) (arg1)->n = arg2; } catch( std::runtime_error &e ) { 
PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_Py_Void(); return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_CircularVector_n_get(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; CircularVector *arg1 = (CircularVector *) 0 ; void *argp1 = 0 ; int res1 = 0 ; PyObject * obj0 = 0 ; long result; if (!PyArg_ParseTuple(args,(char *)"O:CircularVector_n_get",&obj0)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_CircularVector, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "CircularVector_n_get" "', argument " "1"" of type '" "CircularVector *""'"); } arg1 = reinterpret_cast< CircularVector * >(argp1); { try { result = (long) ((arg1)->n); } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_From_long(static_cast< long >(result)); return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_CircularVector_retarray_set(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; CircularVector *arg1 = (CircularVector *) 0 ; long *arg2 = (long *) 0 ; void *argp1 = 0 ; int res1 = 0 ; void *argp2 = 0 ; int res2 = 0 ; PyObject * obj0 = 0 ; PyObject * obj1 = 0 ; if (!PyArg_ParseTuple(args,(char *)"OO:CircularVector_retarray_set",&obj0,&obj1)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_CircularVector, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "CircularVector_retarray_set" "', argument " "1"" of type '" "CircularVector *""'"); } arg1 = reinterpret_cast< CircularVector * >(argp1); res2 = SWIG_ConvertPtr(obj1, &argp2,SWIGTYPE_p_long, SWIG_POINTER_DISOWN | 0 ); if (!SWIG_IsOK(res2)) { SWIG_exception_fail(SWIG_ArgError(res2), "in method '" "CircularVector_retarray_set" "', argument " "2"" of type '" "long *""'"); } arg2 = reinterpret_cast< long * >(argp2); { try { if (arg1) (arg1)->retarray = arg2; } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_Py_Void(); return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_CircularVector_retarray_get(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; CircularVector *arg1 = (CircularVector *) 0 ; void *argp1 = 0 ; int res1 = 0 ; PyObject * obj0 = 0 ; long *result = 0 ; if (!PyArg_ParseTuple(args,(char *)"O:CircularVector_retarray_get",&obj0)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_CircularVector, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "CircularVector_retarray_get" "', argument " "1"" of type '" "CircularVector *""'"); } arg1 = reinterpret_cast< CircularVector * >(argp1); { try { result = (long *) ((arg1)->retarray); } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_NewPointerObj(SWIG_as_voidptr(result), SWIGTYPE_p_long, 0 | 0 ); return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_new_CircularVector(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; int arg1 ; int val1 ; int ecode1 = 0 ; PyObject * obj0 = 0 ; CircularVector *result = 0 ; if (!PyArg_ParseTuple(args,(char *)"O:new_CircularVector",&obj0)) SWIG_fail; ecode1 = SWIG_AsVal_int(obj0, &val1); if (!SWIG_IsOK(ecode1)) { SWIG_exception_fail(SWIG_ArgError(ecode1), "in method '" "new_CircularVector" "', argument " "1"" of type '" 
"int""'"); } arg1 = static_cast< int >(val1); { try { result = (CircularVector *)new CircularVector(arg1); } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_NewPointerObj(SWIG_as_voidptr(result), SWIGTYPE_p_CircularVector, SWIG_POINTER_NEW | 0 ); return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_delete_CircularVector(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; CircularVector *arg1 = (CircularVector *) 0 ; void *argp1 = 0 ; int res1 = 0 ; PyObject * obj0 = 0 ; if (!PyArg_ParseTuple(args,(char *)"O:delete_CircularVector",&obj0)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_CircularVector, SWIG_POINTER_DISOWN | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "delete_CircularVector" "', argument " "1"" of type '" "CircularVector *""'"); } arg1 = reinterpret_cast< CircularVector * >(argp1); { try { delete arg1; } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_Py_Void(); return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_CircularVector_reinit(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; CircularVector *arg1 = (CircularVector *) 0 ; void *argp1 = 0 ; int res1 = 0 ; PyObject * obj0 = 0 ; if (!PyArg_ParseTuple(args,(char *)"O:CircularVector_reinit",&obj0)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_CircularVector, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "CircularVector_reinit" "', argument " "1"" of type '" "CircularVector *""'"); } arg1 = reinterpret_cast< CircularVector * >(argp1); { try { (arg1)->reinit(); } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_Py_Void(); return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_CircularVector_advance(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; CircularVector *arg1 = (CircularVector *) 0 ; int arg2 ; void *argp1 = 0 ; int res1 = 0 ; int val2 ; int ecode2 = 0 ; PyObject * obj0 = 0 ; PyObject * obj1 = 0 ; if (!PyArg_ParseTuple(args,(char *)"OO:CircularVector_advance",&obj0,&obj1)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_CircularVector, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "CircularVector_advance" "', argument " "1"" of type '" "CircularVector *""'"); } arg1 = reinterpret_cast< CircularVector * >(argp1); ecode2 = SWIG_AsVal_int(obj1, &val2); if (!SWIG_IsOK(ecode2)) { SWIG_exception_fail(SWIG_ArgError(ecode2), "in method '" "CircularVector_advance" "', argument " "2"" of type '" "int""'"); } arg2 = static_cast< int >(val2); { try { (arg1)->advance(arg2); } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_Py_Void(); return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_CircularVector___len__(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; CircularVector *arg1 = (CircularVector *) 0 ; void *argp1 = 0 ; int res1 = 0 ; PyObject * obj0 = 0 ; int result; if (!PyArg_ParseTuple(args,(char *)"O:CircularVector___len__",&obj0)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_CircularVector, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "CircularVector___len__" "', 
argument " "1"" of type '" "CircularVector *""'"); } arg1 = reinterpret_cast< CircularVector * >(argp1); { try { result = (int)(arg1)->__len__(); } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_From_int(static_cast< int >(result)); return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_CircularVector___getitem__(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; CircularVector *arg1 = (CircularVector *) 0 ; int arg2 ; void *argp1 = 0 ; int res1 = 0 ; int val2 ; int ecode2 = 0 ; PyObject * obj0 = 0 ; PyObject * obj1 = 0 ; int result; if (!PyArg_ParseTuple(args,(char *)"OO:CircularVector___getitem__",&obj0,&obj1)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_CircularVector, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "CircularVector___getitem__" "', argument " "1"" of type '" "CircularVector *""'"); } arg1 = reinterpret_cast< CircularVector * >(argp1); ecode2 = SWIG_AsVal_int(obj1, &val2); if (!SWIG_IsOK(ecode2)) { SWIG_exception_fail(SWIG_ArgError(ecode2), "in method '" "CircularVector___getitem__" "', argument " "2"" of type '" "int""'"); } arg2 = static_cast< int >(val2); { try { result = (int)(arg1)->__getitem__(arg2); } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_From_int(static_cast< int >(result)); return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_CircularVector___setitem__(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; CircularVector *arg1 = (CircularVector *) 0 ; int arg2 ; int arg3 ; void *argp1 = 0 ; int res1 = 0 ; int val2 ; int ecode2 = 0 ; int val3 ; int ecode3 = 0 ; PyObject * obj0 = 0 ; PyObject * obj1 = 0 ; PyObject * obj2 = 0 ; if (!PyArg_ParseTuple(args,(char *)"OOO:CircularVector___setitem__",&obj0,&obj1,&obj2)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_CircularVector, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "CircularVector___setitem__" "', argument " "1"" of type '" "CircularVector *""'"); } arg1 = reinterpret_cast< CircularVector * >(argp1); ecode2 = SWIG_AsVal_int(obj1, &val2); if (!SWIG_IsOK(ecode2)) { SWIG_exception_fail(SWIG_ArgError(ecode2), "in method '" "CircularVector___setitem__" "', argument " "2"" of type '" "int""'"); } arg2 = static_cast< int >(val2); ecode3 = SWIG_AsVal_int(obj2, &val3); if (!SWIG_IsOK(ecode3)) { SWIG_exception_fail(SWIG_ArgError(ecode3), "in method '" "CircularVector___setitem__" "', argument " "3"" of type '" "int""'"); } arg3 = static_cast< int >(val3); { try { (arg1)->__setitem__(arg2,arg3); } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_Py_Void(); return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_CircularVector___getslice__(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; CircularVector *arg1 = (CircularVector *) 0 ; long **arg2 = (long **) 0 ; int *arg3 = (int *) 0 ; int arg4 ; int arg5 ; void *argp1 = 0 ; int res1 = 0 ; long *data_temp2 ; int dim_temp2 ; int val4 ; int ecode4 = 0 ; int val5 ; int ecode5 = 0 ; PyObject * obj0 = 0 ; PyObject * obj1 = 0 ; PyObject * obj2 = 0 ; { arg2 = &data_temp2; arg3 = &dim_temp2; } if (!PyArg_ParseTuple(args,(char *)"OOO:CircularVector___getslice__",&obj0,&obj1,&obj2)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, 
&argp1,SWIGTYPE_p_CircularVector, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "CircularVector___getslice__" "', argument " "1"" of type '" "CircularVector *""'"); } arg1 = reinterpret_cast< CircularVector * >(argp1); ecode4 = SWIG_AsVal_int(obj1, &val4); if (!SWIG_IsOK(ecode4)) { SWIG_exception_fail(SWIG_ArgError(ecode4), "in method '" "CircularVector___getslice__" "', argument " "4"" of type '" "int""'"); } arg4 = static_cast< int >(val4); ecode5 = SWIG_AsVal_int(obj2, &val5); if (!SWIG_IsOK(ecode5)) { SWIG_exception_fail(SWIG_ArgError(ecode5), "in method '" "CircularVector___getslice__" "', argument " "5"" of type '" "int""'"); } arg5 = static_cast< int >(val5); { try { (arg1)->__getslice__(arg2,arg3,arg4,arg5); } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_Py_Void(); { npy_intp dims[1] = { *arg3 }; PyObject * array = PyArray_SimpleNewFromData(1, dims, NPY_LONG, (void*)(*arg2)); if (!array) SWIG_fail; resultobj = SWIG_Python_AppendOutput(resultobj,array); } return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_CircularVector_get_conditional__SWIG_0(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; CircularVector *arg1 = (CircularVector *) 0 ; long **arg2 = (long **) 0 ; int *arg3 = (int *) 0 ; int arg4 ; int arg5 ; int arg6 ; int arg7 ; int arg8 ; void *argp1 = 0 ; int res1 = 0 ; long *data_temp2 ; int dim_temp2 ; int val4 ; int ecode4 = 0 ; int val5 ; int ecode5 = 0 ; int val6 ; int ecode6 = 0 ; int val7 ; int ecode7 = 0 ; int val8 ; int ecode8 = 0 ; PyObject * obj0 = 0 ; PyObject * obj1 = 0 ; PyObject * obj2 = 0 ; PyObject * obj3 = 0 ; PyObject * obj4 = 0 ; PyObject * obj5 = 0 ; { arg2 = &data_temp2; arg3 = &dim_temp2; } if (!PyArg_ParseTuple(args,(char *)"OOOOOO:CircularVector_get_conditional",&obj0,&obj1,&obj2,&obj3,&obj4,&obj5)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_CircularVector, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "CircularVector_get_conditional" "', argument " "1"" of type '" "CircularVector *""'"); } arg1 = reinterpret_cast< CircularVector * >(argp1); ecode4 = SWIG_AsVal_int(obj1, &val4); if (!SWIG_IsOK(ecode4)) { SWIG_exception_fail(SWIG_ArgError(ecode4), "in method '" "CircularVector_get_conditional" "', argument " "4"" of type '" "int""'"); } arg4 = static_cast< int >(val4); ecode5 = SWIG_AsVal_int(obj2, &val5); if (!SWIG_IsOK(ecode5)) { SWIG_exception_fail(SWIG_ArgError(ecode5), "in method '" "CircularVector_get_conditional" "', argument " "5"" of type '" "int""'"); } arg5 = static_cast< int >(val5); ecode6 = SWIG_AsVal_int(obj3, &val6); if (!SWIG_IsOK(ecode6)) { SWIG_exception_fail(SWIG_ArgError(ecode6), "in method '" "CircularVector_get_conditional" "', argument " "6"" of type '" "int""'"); } arg6 = static_cast< int >(val6); ecode7 = SWIG_AsVal_int(obj4, &val7); if (!SWIG_IsOK(ecode7)) { SWIG_exception_fail(SWIG_ArgError(ecode7), "in method '" "CircularVector_get_conditional" "', argument " "7"" of type '" "int""'"); } arg7 = static_cast< int >(val7); ecode8 = SWIG_AsVal_int(obj5, &val8); if (!SWIG_IsOK(ecode8)) { SWIG_exception_fail(SWIG_ArgError(ecode8), "in method '" "CircularVector_get_conditional" "', argument " "8"" of type '" "int""'"); } arg8 = static_cast< int >(val8); { try { (arg1)->get_conditional(arg2,arg3,arg4,arg5,arg6,arg7,arg8); } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); 
return NULL; } } resultobj = SWIG_Py_Void(); { npy_intp dims[1] = { *arg3 }; PyObject * array = PyArray_SimpleNewFromData(1, dims, NPY_LONG, (void*)(*arg2)); if (!array) SWIG_fail; resultobj = SWIG_Python_AppendOutput(resultobj,array); } return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_CircularVector_get_conditional__SWIG_1(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; CircularVector *arg1 = (CircularVector *) 0 ; long **arg2 = (long **) 0 ; int *arg3 = (int *) 0 ; int arg4 ; int arg5 ; int arg6 ; int arg7 ; void *argp1 = 0 ; int res1 = 0 ; long *data_temp2 ; int dim_temp2 ; int val4 ; int ecode4 = 0 ; int val5 ; int ecode5 = 0 ; int val6 ; int ecode6 = 0 ; int val7 ; int ecode7 = 0 ; PyObject * obj0 = 0 ; PyObject * obj1 = 0 ; PyObject * obj2 = 0 ; PyObject * obj3 = 0 ; PyObject * obj4 = 0 ; { arg2 = &data_temp2; arg3 = &dim_temp2; } if (!PyArg_ParseTuple(args,(char *)"OOOOO:CircularVector_get_conditional",&obj0,&obj1,&obj2,&obj3,&obj4)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_CircularVector, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "CircularVector_get_conditional" "', argument " "1"" of type '" "CircularVector *""'"); } arg1 = reinterpret_cast< CircularVector * >(argp1); ecode4 = SWIG_AsVal_int(obj1, &val4); if (!SWIG_IsOK(ecode4)) { SWIG_exception_fail(SWIG_ArgError(ecode4), "in method '" "CircularVector_get_conditional" "', argument " "4"" of type '" "int""'"); } arg4 = static_cast< int >(val4); ecode5 = SWIG_AsVal_int(obj2, &val5); if (!SWIG_IsOK(ecode5)) { SWIG_exception_fail(SWIG_ArgError(ecode5), "in method '" "CircularVector_get_conditional" "', argument " "5"" of type '" "int""'"); } arg5 = static_cast< int >(val5); ecode6 = SWIG_AsVal_int(obj3, &val6); if (!SWIG_IsOK(ecode6)) { SWIG_exception_fail(SWIG_ArgError(ecode6), "in method '" "CircularVector_get_conditional" "', argument " "6"" of type '" "int""'"); } arg6 = static_cast< int >(val6); ecode7 = SWIG_AsVal_int(obj4, &val7); if (!SWIG_IsOK(ecode7)) { SWIG_exception_fail(SWIG_ArgError(ecode7), "in method '" "CircularVector_get_conditional" "', argument " "7"" of type '" "int""'"); } arg7 = static_cast< int >(val7); { try { (arg1)->get_conditional(arg2,arg3,arg4,arg5,arg6,arg7); } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_Py_Void(); { npy_intp dims[1] = { *arg3 }; PyObject * array = PyArray_SimpleNewFromData(1, dims, NPY_LONG, (void*)(*arg2)); if (!array) SWIG_fail; resultobj = SWIG_Python_AppendOutput(resultobj,array); } return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_CircularVector_get_conditional(PyObject *self, PyObject *args) { int argc; PyObject *argv[7]; int ii; if (!PyTuple_Check(args)) SWIG_fail; argc = (int)PyObject_Length(args); for (ii = 0; (ii < argc) && (ii < 6); ii++) { argv[ii] = PyTuple_GET_ITEM(args,ii); } if (argc == 5) { int _v; void *vptr = 0; int res = SWIG_ConvertPtr(argv[0], &vptr, SWIGTYPE_p_CircularVector, 0); _v = SWIG_CheckState(res); if (_v) { { int res = SWIG_AsVal_int(argv[1], NULL); _v = SWIG_CheckState(res); } if (_v) { { int res = SWIG_AsVal_int(argv[2], NULL); _v = SWIG_CheckState(res); } if (_v) { { int res = SWIG_AsVal_int(argv[3], NULL); _v = SWIG_CheckState(res); } if (_v) { { int res = SWIG_AsVal_int(argv[4], NULL); _v = SWIG_CheckState(res); } if (_v) { return _wrap_CircularVector_get_conditional__SWIG_1(self, args); } } } } } } if (argc == 6) { int _v; void *vptr = 0; int res 
= SWIG_ConvertPtr(argv[0], &vptr, SWIGTYPE_p_CircularVector, 0); _v = SWIG_CheckState(res); if (_v) { { int res = SWIG_AsVal_int(argv[1], NULL); _v = SWIG_CheckState(res); } if (_v) { { int res = SWIG_AsVal_int(argv[2], NULL); _v = SWIG_CheckState(res); } if (_v) { { int res = SWIG_AsVal_int(argv[3], NULL); _v = SWIG_CheckState(res); } if (_v) { { int res = SWIG_AsVal_int(argv[4], NULL); _v = SWIG_CheckState(res); } if (_v) { { int res = SWIG_AsVal_int(argv[5], NULL); _v = SWIG_CheckState(res); } if (_v) { return _wrap_CircularVector_get_conditional__SWIG_0(self, args); } } } } } } } fail: SWIG_SetErrorMsg(PyExc_NotImplementedError,"Wrong number of arguments for overloaded function 'CircularVector_get_conditional'.\n" " Possible C/C++ prototypes are:\n" " get_conditional(CircularVector *,long **,int *,int,int,int,int,int)\n" " get_conditional(CircularVector *,long **,int *,int,int,int,int)\n"); return NULL; } SWIGINTERN PyObject *_wrap_CircularVector___setslice__(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; CircularVector *arg1 = (CircularVector *) 0 ; int arg2 ; int arg3 ; long *arg4 = (long *) 0 ; int arg5 ; void *argp1 = 0 ; int res1 = 0 ; int val2 ; int ecode2 = 0 ; int val3 ; int ecode3 = 0 ; PyArrayObject *array4 = NULL ; int i4 = 1 ; PyObject * obj0 = 0 ; PyObject * obj1 = 0 ; PyObject * obj2 = 0 ; PyObject * obj3 = 0 ; if (!PyArg_ParseTuple(args,(char *)"OOOO:CircularVector___setslice__",&obj0,&obj1,&obj2,&obj3)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_CircularVector, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "CircularVector___setslice__" "', argument " "1"" of type '" "CircularVector *""'"); } arg1 = reinterpret_cast< CircularVector * >(argp1); ecode2 = SWIG_AsVal_int(obj1, &val2); if (!SWIG_IsOK(ecode2)) { SWIG_exception_fail(SWIG_ArgError(ecode2), "in method '" "CircularVector___setslice__" "', argument " "2"" of type '" "int""'"); } arg2 = static_cast< int >(val2); ecode3 = SWIG_AsVal_int(obj2, &val3); if (!SWIG_IsOK(ecode3)) { SWIG_exception_fail(SWIG_ArgError(ecode3), "in method '" "CircularVector___setslice__" "', argument " "3"" of type '" "int""'"); } arg3 = static_cast< int >(val3); { array4 = obj_to_array_no_conversion(obj3, NPY_LONG); if (!array4 || !require_dimensions(array4,1) || !require_contiguous(array4) || !require_native(array4)) SWIG_fail; arg4 = (long*) array_data(array4); arg5 = 1; for (i4=0; i4 < array_numdims(array4); ++i4) arg5 *= array_size(array4,i4); } { try { (arg1)->__setslice__(arg2,arg3,arg4,arg5); } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_Py_Void(); return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_CircularVector___repr__(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; CircularVector *arg1 = (CircularVector *) 0 ; void *argp1 = 0 ; int res1 = 0 ; PyObject * obj0 = 0 ; string result; if (!PyArg_ParseTuple(args,(char *)"O:CircularVector___repr__",&obj0)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_CircularVector, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "CircularVector___repr__" "', argument " "1"" of type '" "CircularVector *""'"); } arg1 = reinterpret_cast< CircularVector * >(argp1); { try { result = (arg1)->__repr__(); } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_NewPointerObj((new 
string(static_cast< const string& >(result))), SWIGTYPE_p_string, SWIG_POINTER_OWN | 0 ); return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_CircularVector___str__(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; CircularVector *arg1 = (CircularVector *) 0 ; void *argp1 = 0 ; int res1 = 0 ; PyObject * obj0 = 0 ; string result; if (!PyArg_ParseTuple(args,(char *)"O:CircularVector___str__",&obj0)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_CircularVector, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "CircularVector___str__" "', argument " "1"" of type '" "CircularVector *""'"); } arg1 = reinterpret_cast< CircularVector * >(argp1); { try { result = (arg1)->__str__(); } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_NewPointerObj((new string(static_cast< const string& >(result))), SWIGTYPE_p_string, SWIG_POINTER_OWN | 0 ); return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_CircularVector_expand(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; CircularVector *arg1 = (CircularVector *) 0 ; long arg2 ; void *argp1 = 0 ; int res1 = 0 ; long val2 ; int ecode2 = 0 ; PyObject * obj0 = 0 ; PyObject * obj1 = 0 ; if (!PyArg_ParseTuple(args,(char *)"OO:CircularVector_expand",&obj0,&obj1)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_CircularVector, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "CircularVector_expand" "', argument " "1"" of type '" "CircularVector *""'"); } arg1 = reinterpret_cast< CircularVector * >(argp1); ecode2 = SWIG_AsVal_long(obj1, &val2); if (!SWIG_IsOK(ecode2)) { SWIG_exception_fail(SWIG_ArgError(ecode2), "in method '" "CircularVector_expand" "', argument " "2"" of type '" "long""'"); } arg2 = static_cast< long >(val2); { try { (arg1)->expand(arg2); } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_Py_Void(); return resultobj; fail: return NULL; } SWIGINTERN PyObject *CircularVector_swigregister(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *obj; if (!PyArg_ParseTuple(args,(char*)"O:swigregister", &obj)) return NULL; SWIG_TypeNewClientData(SWIGTYPE_p_CircularVector, SWIG_NewClientData(obj)); return SWIG_Py_Void(); } SWIGINTERN PyObject *_wrap_SpikeContainer_S_set(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; SpikeContainer *arg1 = (SpikeContainer *) 0 ; CircularVector *arg2 = (CircularVector *) 0 ; void *argp1 = 0 ; int res1 = 0 ; void *argp2 = 0 ; int res2 = 0 ; PyObject * obj0 = 0 ; PyObject * obj1 = 0 ; if (!PyArg_ParseTuple(args,(char *)"OO:SpikeContainer_S_set",&obj0,&obj1)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_SpikeContainer, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "SpikeContainer_S_set" "', argument " "1"" of type '" "SpikeContainer *""'"); } arg1 = reinterpret_cast< SpikeContainer * >(argp1); res2 = SWIG_ConvertPtr(obj1, &argp2,SWIGTYPE_p_CircularVector, SWIG_POINTER_DISOWN | 0 ); if (!SWIG_IsOK(res2)) { SWIG_exception_fail(SWIG_ArgError(res2), "in method '" "SpikeContainer_S_set" "', argument " "2"" of type '" "CircularVector *""'"); } arg2 = reinterpret_cast< CircularVector * >(argp2); { try { if (arg1) (arg1)->S = arg2; } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); 
return NULL; } } resultobj = SWIG_Py_Void(); return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_SpikeContainer_S_get(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; SpikeContainer *arg1 = (SpikeContainer *) 0 ; void *argp1 = 0 ; int res1 = 0 ; PyObject * obj0 = 0 ; CircularVector *result = 0 ; if (!PyArg_ParseTuple(args,(char *)"O:SpikeContainer_S_get",&obj0)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_SpikeContainer, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "SpikeContainer_S_get" "', argument " "1"" of type '" "SpikeContainer *""'"); } arg1 = reinterpret_cast< SpikeContainer * >(argp1); { try { result = (CircularVector *) ((arg1)->S); } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_NewPointerObj(SWIG_as_voidptr(result), SWIGTYPE_p_CircularVector, 0 | 0 ); return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_SpikeContainer_ind_set(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; SpikeContainer *arg1 = (SpikeContainer *) 0 ; CircularVector *arg2 = (CircularVector *) 0 ; void *argp1 = 0 ; int res1 = 0 ; void *argp2 = 0 ; int res2 = 0 ; PyObject * obj0 = 0 ; PyObject * obj1 = 0 ; if (!PyArg_ParseTuple(args,(char *)"OO:SpikeContainer_ind_set",&obj0,&obj1)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_SpikeContainer, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "SpikeContainer_ind_set" "', argument " "1"" of type '" "SpikeContainer *""'"); } arg1 = reinterpret_cast< SpikeContainer * >(argp1); res2 = SWIG_ConvertPtr(obj1, &argp2,SWIGTYPE_p_CircularVector, SWIG_POINTER_DISOWN | 0 ); if (!SWIG_IsOK(res2)) { SWIG_exception_fail(SWIG_ArgError(res2), "in method '" "SpikeContainer_ind_set" "', argument " "2"" of type '" "CircularVector *""'"); } arg2 = reinterpret_cast< CircularVector * >(argp2); { try { if (arg1) (arg1)->ind = arg2; } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_Py_Void(); return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_SpikeContainer_ind_get(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; SpikeContainer *arg1 = (SpikeContainer *) 0 ; void *argp1 = 0 ; int res1 = 0 ; PyObject * obj0 = 0 ; CircularVector *result = 0 ; if (!PyArg_ParseTuple(args,(char *)"O:SpikeContainer_ind_get",&obj0)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_SpikeContainer, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "SpikeContainer_ind_get" "', argument " "1"" of type '" "SpikeContainer *""'"); } arg1 = reinterpret_cast< SpikeContainer * >(argp1); { try { result = (CircularVector *) ((arg1)->ind); } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_NewPointerObj(SWIG_as_voidptr(result), SWIGTYPE_p_CircularVector, 0 | 0 ); return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_SpikeContainer_remaining_space_set(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; SpikeContainer *arg1 = (SpikeContainer *) 0 ; int arg2 ; void *argp1 = 0 ; int res1 = 0 ; int val2 ; int ecode2 = 0 ; PyObject * obj0 = 0 ; PyObject * obj1 = 0 ; if (!PyArg_ParseTuple(args,(char *)"OO:SpikeContainer_remaining_space_set",&obj0,&obj1)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, 
&argp1,SWIGTYPE_p_SpikeContainer, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "SpikeContainer_remaining_space_set" "', argument " "1"" of type '" "SpikeContainer *""'"); } arg1 = reinterpret_cast< SpikeContainer * >(argp1); ecode2 = SWIG_AsVal_int(obj1, &val2); if (!SWIG_IsOK(ecode2)) { SWIG_exception_fail(SWIG_ArgError(ecode2), "in method '" "SpikeContainer_remaining_space_set" "', argument " "2"" of type '" "int""'"); } arg2 = static_cast< int >(val2); { try { if (arg1) (arg1)->remaining_space = arg2; } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_Py_Void(); return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_SpikeContainer_remaining_space_get(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; SpikeContainer *arg1 = (SpikeContainer *) 0 ; void *argp1 = 0 ; int res1 = 0 ; PyObject * obj0 = 0 ; int result; if (!PyArg_ParseTuple(args,(char *)"O:SpikeContainer_remaining_space_get",&obj0)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_SpikeContainer, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "SpikeContainer_remaining_space_get" "', argument " "1"" of type '" "SpikeContainer *""'"); } arg1 = reinterpret_cast< SpikeContainer * >(argp1); { try { result = (int) ((arg1)->remaining_space); } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_From_int(static_cast< int >(result)); return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_new_SpikeContainer(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; int arg1 ; int val1 ; int ecode1 = 0 ; PyObject * obj0 = 0 ; SpikeContainer *result = 0 ; if (!PyArg_ParseTuple(args,(char *)"O:new_SpikeContainer",&obj0)) SWIG_fail; ecode1 = SWIG_AsVal_int(obj0, &val1); if (!SWIG_IsOK(ecode1)) { SWIG_exception_fail(SWIG_ArgError(ecode1), "in method '" "new_SpikeContainer" "', argument " "1"" of type '" "int""'"); } arg1 = static_cast< int >(val1); { try { result = (SpikeContainer *)new SpikeContainer(arg1); } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_NewPointerObj(SWIG_as_voidptr(result), SWIGTYPE_p_SpikeContainer, SWIG_POINTER_NEW | 0 ); return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_delete_SpikeContainer(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; SpikeContainer *arg1 = (SpikeContainer *) 0 ; void *argp1 = 0 ; int res1 = 0 ; PyObject * obj0 = 0 ; if (!PyArg_ParseTuple(args,(char *)"O:delete_SpikeContainer",&obj0)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_SpikeContainer, SWIG_POINTER_DISOWN | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "delete_SpikeContainer" "', argument " "1"" of type '" "SpikeContainer *""'"); } arg1 = reinterpret_cast< SpikeContainer * >(argp1); { try { delete arg1; } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_Py_Void(); return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_SpikeContainer_reinit(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; SpikeContainer *arg1 = (SpikeContainer *) 0 ; void *argp1 = 0 ; int res1 = 0 ; PyObject * obj0 = 0 ; if (!PyArg_ParseTuple(args,(char *)"O:SpikeContainer_reinit",&obj0)) 
SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_SpikeContainer, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "SpikeContainer_reinit" "', argument " "1"" of type '" "SpikeContainer *""'"); } arg1 = reinterpret_cast< SpikeContainer * >(argp1); { try { (arg1)->reinit(); } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_Py_Void(); return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_SpikeContainer_push(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; SpikeContainer *arg1 = (SpikeContainer *) 0 ; long *arg2 = (long *) 0 ; int arg3 ; void *argp1 = 0 ; int res1 = 0 ; PyArrayObject *array2 = NULL ; int i2 = 1 ; PyObject * obj0 = 0 ; PyObject * obj1 = 0 ; if (!PyArg_ParseTuple(args,(char *)"OO:SpikeContainer_push",&obj0,&obj1)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_SpikeContainer, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "SpikeContainer_push" "', argument " "1"" of type '" "SpikeContainer *""'"); } arg1 = reinterpret_cast< SpikeContainer * >(argp1); { array2 = obj_to_array_no_conversion(obj1, NPY_LONG); if (!array2 || !require_dimensions(array2,1) || !require_contiguous(array2) || !require_native(array2)) SWIG_fail; arg2 = (long*) array_data(array2); arg3 = 1; for (i2=0; i2 < array_numdims(array2); ++i2) arg3 *= array_size(array2,i2); } { try { (arg1)->push(arg2,arg3); } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_Py_Void(); return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_SpikeContainer_lastspikes(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; SpikeContainer *arg1 = (SpikeContainer *) 0 ; long **arg2 = (long **) 0 ; int *arg3 = (int *) 0 ; void *argp1 = 0 ; int res1 = 0 ; long *data_temp2 ; int dim_temp2 ; PyObject * obj0 = 0 ; { arg2 = &data_temp2; arg3 = &dim_temp2; } if (!PyArg_ParseTuple(args,(char *)"O:SpikeContainer_lastspikes",&obj0)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_SpikeContainer, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "SpikeContainer_lastspikes" "', argument " "1"" of type '" "SpikeContainer *""'"); } arg1 = reinterpret_cast< SpikeContainer * >(argp1); { try { (arg1)->lastspikes(arg2,arg3); } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_Py_Void(); { npy_intp dims[1] = { *arg3 }; PyObject * array = PyArray_SimpleNewFromData(1, dims, NPY_LONG, (void*)(*arg2)); if (!array) SWIG_fail; resultobj = SWIG_Python_AppendOutput(resultobj,array); } return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_SpikeContainer___getitem__(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; SpikeContainer *arg1 = (SpikeContainer *) 0 ; long **arg2 = (long **) 0 ; int *arg3 = (int *) 0 ; int arg4 ; void *argp1 = 0 ; int res1 = 0 ; long *data_temp2 ; int dim_temp2 ; int val4 ; int ecode4 = 0 ; PyObject * obj0 = 0 ; PyObject * obj1 = 0 ; { arg2 = &data_temp2; arg3 = &dim_temp2; } if (!PyArg_ParseTuple(args,(char *)"OO:SpikeContainer___getitem__",&obj0,&obj1)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_SpikeContainer, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "SpikeContainer___getitem__" "', argument " "1"" of 
type '" "SpikeContainer *""'"); } arg1 = reinterpret_cast< SpikeContainer * >(argp1); ecode4 = SWIG_AsVal_int(obj1, &val4); if (!SWIG_IsOK(ecode4)) { SWIG_exception_fail(SWIG_ArgError(ecode4), "in method '" "SpikeContainer___getitem__" "', argument " "4"" of type '" "int""'"); } arg4 = static_cast< int >(val4); { try { (arg1)->__getitem__(arg2,arg3,arg4); } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_Py_Void(); { npy_intp dims[1] = { *arg3 }; PyObject * array = PyArray_SimpleNewFromData(1, dims, NPY_LONG, (void*)(*arg2)); if (!array) SWIG_fail; resultobj = SWIG_Python_AppendOutput(resultobj,array); } return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_SpikeContainer_get_spikes(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; SpikeContainer *arg1 = (SpikeContainer *) 0 ; long **arg2 = (long **) 0 ; int *arg3 = (int *) 0 ; int arg4 ; int arg5 ; int arg6 ; void *argp1 = 0 ; int res1 = 0 ; long *data_temp2 ; int dim_temp2 ; int val4 ; int ecode4 = 0 ; int val5 ; int ecode5 = 0 ; int val6 ; int ecode6 = 0 ; PyObject * obj0 = 0 ; PyObject * obj1 = 0 ; PyObject * obj2 = 0 ; PyObject * obj3 = 0 ; { arg2 = &data_temp2; arg3 = &dim_temp2; } if (!PyArg_ParseTuple(args,(char *)"OOOO:SpikeContainer_get_spikes",&obj0,&obj1,&obj2,&obj3)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_SpikeContainer, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "SpikeContainer_get_spikes" "', argument " "1"" of type '" "SpikeContainer *""'"); } arg1 = reinterpret_cast< SpikeContainer * >(argp1); ecode4 = SWIG_AsVal_int(obj1, &val4); if (!SWIG_IsOK(ecode4)) { SWIG_exception_fail(SWIG_ArgError(ecode4), "in method '" "SpikeContainer_get_spikes" "', argument " "4"" of type '" "int""'"); } arg4 = static_cast< int >(val4); ecode5 = SWIG_AsVal_int(obj2, &val5); if (!SWIG_IsOK(ecode5)) { SWIG_exception_fail(SWIG_ArgError(ecode5), "in method '" "SpikeContainer_get_spikes" "', argument " "5"" of type '" "int""'"); } arg5 = static_cast< int >(val5); ecode6 = SWIG_AsVal_int(obj3, &val6); if (!SWIG_IsOK(ecode6)) { SWIG_exception_fail(SWIG_ArgError(ecode6), "in method '" "SpikeContainer_get_spikes" "', argument " "6"" of type '" "int""'"); } arg6 = static_cast< int >(val6); { try { (arg1)->get_spikes(arg2,arg3,arg4,arg5,arg6); } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_Py_Void(); { npy_intp dims[1] = { *arg3 }; PyObject * array = PyArray_SimpleNewFromData(1, dims, NPY_LONG, (void*)(*arg2)); if (!array) SWIG_fail; resultobj = SWIG_Python_AppendOutput(resultobj,array); } return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_SpikeContainer___getslice__(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; SpikeContainer *arg1 = (SpikeContainer *) 0 ; long **arg2 = (long **) 0 ; int *arg3 = (int *) 0 ; int arg4 ; int arg5 ; void *argp1 = 0 ; int res1 = 0 ; long *data_temp2 ; int dim_temp2 ; int val4 ; int ecode4 = 0 ; int val5 ; int ecode5 = 0 ; PyObject * obj0 = 0 ; PyObject * obj1 = 0 ; PyObject * obj2 = 0 ; { arg2 = &data_temp2; arg3 = &dim_temp2; } if (!PyArg_ParseTuple(args,(char *)"OOO:SpikeContainer___getslice__",&obj0,&obj1,&obj2)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_SpikeContainer, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "SpikeContainer___getslice__" "', argument " "1"" of 
type '" "SpikeContainer *""'"); } arg1 = reinterpret_cast< SpikeContainer * >(argp1); ecode4 = SWIG_AsVal_int(obj1, &val4); if (!SWIG_IsOK(ecode4)) { SWIG_exception_fail(SWIG_ArgError(ecode4), "in method '" "SpikeContainer___getslice__" "', argument " "4"" of type '" "int""'"); } arg4 = static_cast< int >(val4); ecode5 = SWIG_AsVal_int(obj2, &val5); if (!SWIG_IsOK(ecode5)) { SWIG_exception_fail(SWIG_ArgError(ecode5), "in method '" "SpikeContainer___getslice__" "', argument " "5"" of type '" "int""'"); } arg5 = static_cast< int >(val5); { try { (arg1)->__getslice__(arg2,arg3,arg4,arg5); } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_Py_Void(); { npy_intp dims[1] = { *arg3 }; PyObject * array = PyArray_SimpleNewFromData(1, dims, NPY_LONG, (void*)(*arg2)); if (!array) SWIG_fail; resultobj = SWIG_Python_AppendOutput(resultobj,array); } return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_SpikeContainer___repr__(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; SpikeContainer *arg1 = (SpikeContainer *) 0 ; void *argp1 = 0 ; int res1 = 0 ; PyObject * obj0 = 0 ; string result; if (!PyArg_ParseTuple(args,(char *)"O:SpikeContainer___repr__",&obj0)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_SpikeContainer, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "SpikeContainer___repr__" "', argument " "1"" of type '" "SpikeContainer *""'"); } arg1 = reinterpret_cast< SpikeContainer * >(argp1); { try { result = (arg1)->__repr__(); } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_NewPointerObj((new string(static_cast< const string& >(result))), SWIGTYPE_p_string, SWIG_POINTER_OWN | 0 ); return resultobj; fail: return NULL; } SWIGINTERN PyObject *_wrap_SpikeContainer___str__(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; SpikeContainer *arg1 = (SpikeContainer *) 0 ; void *argp1 = 0 ; int res1 = 0 ; PyObject * obj0 = 0 ; string result; if (!PyArg_ParseTuple(args,(char *)"O:SpikeContainer___str__",&obj0)) SWIG_fail; res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_SpikeContainer, 0 | 0 ); if (!SWIG_IsOK(res1)) { SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "SpikeContainer___str__" "', argument " "1"" of type '" "SpikeContainer *""'"); } arg1 = reinterpret_cast< SpikeContainer * >(argp1); { try { result = (arg1)->__str__(); } catch( std::runtime_error &e ) { PyErr_SetString(PyExc_RuntimeError, const_cast(e.what())); return NULL; } } resultobj = SWIG_NewPointerObj((new string(static_cast< const string& >(result))), SWIGTYPE_p_string, SWIG_POINTER_OWN | 0 ); return resultobj; fail: return NULL; } SWIGINTERN PyObject *SpikeContainer_swigregister(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *obj; if (!PyArg_ParseTuple(args,(char*)"O:swigregister", &obj)) return NULL; SWIG_TypeNewClientData(SWIGTYPE_p_SpikeContainer, SWIG_NewClientData(obj)); return SWIG_Py_Void(); } static PyMethodDef SwigMethods[] = { { (char *)"CircularVector_X_set", _wrap_CircularVector_X_set, METH_VARARGS, NULL}, { (char *)"CircularVector_X_get", _wrap_CircularVector_X_get, METH_VARARGS, NULL}, { (char *)"CircularVector_cursor_set", _wrap_CircularVector_cursor_set, METH_VARARGS, NULL}, { (char *)"CircularVector_cursor_get", _wrap_CircularVector_cursor_get, METH_VARARGS, NULL}, { (char *)"CircularVector_n_set", _wrap_CircularVector_n_set, 
METH_VARARGS, NULL}, { (char *)"CircularVector_n_get", _wrap_CircularVector_n_get, METH_VARARGS, NULL}, { (char *)"CircularVector_retarray_set", _wrap_CircularVector_retarray_set, METH_VARARGS, NULL}, { (char *)"CircularVector_retarray_get", _wrap_CircularVector_retarray_get, METH_VARARGS, NULL}, { (char *)"new_CircularVector", _wrap_new_CircularVector, METH_VARARGS, NULL}, { (char *)"delete_CircularVector", _wrap_delete_CircularVector, METH_VARARGS, NULL}, { (char *)"CircularVector_reinit", _wrap_CircularVector_reinit, METH_VARARGS, NULL}, { (char *)"CircularVector_advance", _wrap_CircularVector_advance, METH_VARARGS, NULL}, { (char *)"CircularVector___len__", _wrap_CircularVector___len__, METH_VARARGS, NULL}, { (char *)"CircularVector___getitem__", _wrap_CircularVector___getitem__, METH_VARARGS, NULL}, { (char *)"CircularVector___setitem__", _wrap_CircularVector___setitem__, METH_VARARGS, NULL}, { (char *)"CircularVector___getslice__", _wrap_CircularVector___getslice__, METH_VARARGS, NULL}, { (char *)"CircularVector_get_conditional", _wrap_CircularVector_get_conditional, METH_VARARGS, NULL}, { (char *)"CircularVector___setslice__", _wrap_CircularVector___setslice__, METH_VARARGS, NULL}, { (char *)"CircularVector___repr__", _wrap_CircularVector___repr__, METH_VARARGS, NULL}, { (char *)"CircularVector___str__", _wrap_CircularVector___str__, METH_VARARGS, NULL}, { (char *)"CircularVector_expand", _wrap_CircularVector_expand, METH_VARARGS, NULL}, { (char *)"CircularVector_swigregister", CircularVector_swigregister, METH_VARARGS, NULL}, { (char *)"SpikeContainer_S_set", _wrap_SpikeContainer_S_set, METH_VARARGS, NULL}, { (char *)"SpikeContainer_S_get", _wrap_SpikeContainer_S_get, METH_VARARGS, NULL}, { (char *)"SpikeContainer_ind_set", _wrap_SpikeContainer_ind_set, METH_VARARGS, NULL}, { (char *)"SpikeContainer_ind_get", _wrap_SpikeContainer_ind_get, METH_VARARGS, NULL}, { (char *)"SpikeContainer_remaining_space_set", _wrap_SpikeContainer_remaining_space_set, METH_VARARGS, NULL}, { (char *)"SpikeContainer_remaining_space_get", _wrap_SpikeContainer_remaining_space_get, METH_VARARGS, NULL}, { (char *)"new_SpikeContainer", _wrap_new_SpikeContainer, METH_VARARGS, NULL}, { (char *)"delete_SpikeContainer", _wrap_delete_SpikeContainer, METH_VARARGS, NULL}, { (char *)"SpikeContainer_reinit", _wrap_SpikeContainer_reinit, METH_VARARGS, NULL}, { (char *)"SpikeContainer_push", _wrap_SpikeContainer_push, METH_VARARGS, NULL}, { (char *)"SpikeContainer_lastspikes", _wrap_SpikeContainer_lastspikes, METH_VARARGS, NULL}, { (char *)"SpikeContainer___getitem__", _wrap_SpikeContainer___getitem__, METH_VARARGS, NULL}, { (char *)"SpikeContainer_get_spikes", _wrap_SpikeContainer_get_spikes, METH_VARARGS, NULL}, { (char *)"SpikeContainer___getslice__", _wrap_SpikeContainer___getslice__, METH_VARARGS, NULL}, { (char *)"SpikeContainer___repr__", _wrap_SpikeContainer___repr__, METH_VARARGS, NULL}, { (char *)"SpikeContainer___str__", _wrap_SpikeContainer___str__, METH_VARARGS, NULL}, { (char *)"SpikeContainer_swigregister", SpikeContainer_swigregister, METH_VARARGS, NULL}, { NULL, NULL, 0, NULL } }; /* -------- TYPE CONVERSION AND EQUIVALENCE RULES (BEGIN) -------- */ static swig_type_info _swigt__p_CircularVector = {"_p_CircularVector", "CircularVector *", 0, 0, (void*)0, 0}; static swig_type_info _swigt__p_SpikeContainer = {"_p_SpikeContainer", "SpikeContainer *", 0, 0, (void*)0, 0}; static swig_type_info _swigt__p_char = {"_p_char", "char *", 0, 0, (void*)0, 0}; static swig_type_info _swigt__p_int = {"_p_int", "int *", 
0, 0, (void*)0, 0}; static swig_type_info _swigt__p_long = {"_p_long", "long *", 0, 0, (void*)0, 0}; static swig_type_info _swigt__p_p_long = {"_p_p_long", "long **", 0, 0, (void*)0, 0}; static swig_type_info _swigt__p_string = {"_p_string", "string *", 0, 0, (void*)0, 0}; static swig_type_info *swig_type_initial[] = { &_swigt__p_CircularVector, &_swigt__p_SpikeContainer, &_swigt__p_char, &_swigt__p_int, &_swigt__p_long, &_swigt__p_p_long, &_swigt__p_string, }; static swig_cast_info _swigc__p_CircularVector[] = { {&_swigt__p_CircularVector, 0, 0, 0},{0, 0, 0, 0}}; static swig_cast_info _swigc__p_SpikeContainer[] = { {&_swigt__p_SpikeContainer, 0, 0, 0},{0, 0, 0, 0}}; static swig_cast_info _swigc__p_char[] = { {&_swigt__p_char, 0, 0, 0},{0, 0, 0, 0}}; static swig_cast_info _swigc__p_int[] = { {&_swigt__p_int, 0, 0, 0},{0, 0, 0, 0}}; static swig_cast_info _swigc__p_long[] = { {&_swigt__p_long, 0, 0, 0},{0, 0, 0, 0}}; static swig_cast_info _swigc__p_p_long[] = { {&_swigt__p_p_long, 0, 0, 0},{0, 0, 0, 0}}; static swig_cast_info _swigc__p_string[] = { {&_swigt__p_string, 0, 0, 0},{0, 0, 0, 0}}; static swig_cast_info *swig_cast_initial[] = { _swigc__p_CircularVector, _swigc__p_SpikeContainer, _swigc__p_char, _swigc__p_int, _swigc__p_long, _swigc__p_p_long, _swigc__p_string, }; /* -------- TYPE CONVERSION AND EQUIVALENCE RULES (END) -------- */ static swig_const_info swig_const_table[] = { {0, 0, 0, 0.0, 0, 0}}; #ifdef __cplusplus } #endif /* ----------------------------------------------------------------------------- * Type initialization: * This problem is tough by the requirement that no dynamic * memory is used. Also, since swig_type_info structures store pointers to * swig_cast_info structures and swig_cast_info structures store pointers back * to swig_type_info structures, we need some lookup code at initialization. * The idea is that swig generates all the structures that are needed. * The runtime then collects these partially filled structures. * The SWIG_InitializeModule function takes these initial arrays out of * swig_module, and does all the lookup, filling in the swig_module.types * array with the correct data and linking the correct swig_cast_info * structures together. * * The generated swig_type_info structures are assigned staticly to an initial * array. We just loop through that array, and handle each type individually. * First we lookup if this type has been already loaded, and if so, use the * loaded structure instead of the generated one. Then we have to fill in the * cast linked list. The cast data is initially stored in something like a * two-dimensional array. Each row corresponds to a type (there are the same * number of rows as there are in the swig_type_initial array). Each entry in * a column is one of the swig_cast_info structures for that type. * The cast_initial array is actually an array of arrays, because each row has * a variable number of columns. So to actually build the cast linked list, * we find the array of casts associated with the type, and loop through it * adding the casts to the list. The one last trick we need to do is making * sure the type pointer in the swig_cast_info struct is correct. * * First off, we lookup the cast->type name to see if it is already loaded. * There are three cases to handle: * 1) If the cast->type has already been loaded AND the type we are adding * casting info to has not been loaded (it is in this module), THEN we * replace the cast->type pointer with the type pointer that has already * been loaded. 
* 2) If BOTH types (the one we are adding casting info to, and the * cast->type) are loaded, THEN the cast info has already been loaded by * the previous module so we just ignore it. * 3) Finally, if cast->type has not already been loaded, then we add that * swig_cast_info to the linked list (because the cast->type) pointer will * be correct. * ----------------------------------------------------------------------------- */ #ifdef __cplusplus extern "C" { #if 0 } /* c-mode */ #endif #endif #if 0 #define SWIGRUNTIME_DEBUG #endif SWIGRUNTIME void SWIG_InitializeModule(void *clientdata) { size_t i; swig_module_info *module_head, *iter; int found, init; clientdata = clientdata; /* check to see if the circular list has been setup, if not, set it up */ if (swig_module.next==0) { /* Initialize the swig_module */ swig_module.type_initial = swig_type_initial; swig_module.cast_initial = swig_cast_initial; swig_module.next = &swig_module; init = 1; } else { init = 0; } /* Try and load any already created modules */ module_head = SWIG_GetModule(clientdata); if (!module_head) { /* This is the first module loaded for this interpreter */ /* so set the swig module into the interpreter */ SWIG_SetModule(clientdata, &swig_module); module_head = &swig_module; } else { /* the interpreter has loaded a SWIG module, but has it loaded this one? */ found=0; iter=module_head; do { if (iter==&swig_module) { found=1; break; } iter=iter->next; } while (iter!= module_head); /* if the is found in the list, then all is done and we may leave */ if (found) return; /* otherwise we must add out module into the list */ swig_module.next = module_head->next; module_head->next = &swig_module; } /* When multiple interpeters are used, a module could have already been initialized in a different interpreter, but not yet have a pointer in this interpreter. In this case, we do not want to continue adding types... 
everything should be set up already */ if (init == 0) return; /* Now work on filling in swig_module.types */ #ifdef SWIGRUNTIME_DEBUG printf("SWIG_InitializeModule: size %d\n", swig_module.size); #endif for (i = 0; i < swig_module.size; ++i) { swig_type_info *type = 0; swig_type_info *ret; swig_cast_info *cast; #ifdef SWIGRUNTIME_DEBUG printf("SWIG_InitializeModule: type %d %s\n", i, swig_module.type_initial[i]->name); #endif /* if there is another module already loaded */ if (swig_module.next != &swig_module) { type = SWIG_MangledTypeQueryModule(swig_module.next, &swig_module, swig_module.type_initial[i]->name); } if (type) { /* Overwrite clientdata field */ #ifdef SWIGRUNTIME_DEBUG printf("SWIG_InitializeModule: found type %s\n", type->name); #endif if (swig_module.type_initial[i]->clientdata) { type->clientdata = swig_module.type_initial[i]->clientdata; #ifdef SWIGRUNTIME_DEBUG printf("SWIG_InitializeModule: found and overwrite type %s \n", type->name); #endif } } else { type = swig_module.type_initial[i]; } /* Insert casting types */ cast = swig_module.cast_initial[i]; while (cast->type) { /* Don't need to add information already in the list */ ret = 0; #ifdef SWIGRUNTIME_DEBUG printf("SWIG_InitializeModule: look cast %s\n", cast->type->name); #endif if (swig_module.next != &swig_module) { ret = SWIG_MangledTypeQueryModule(swig_module.next, &swig_module, cast->type->name); #ifdef SWIGRUNTIME_DEBUG if (ret) printf("SWIG_InitializeModule: found cast %s\n", ret->name); #endif } if (ret) { if (type == swig_module.type_initial[i]) { #ifdef SWIGRUNTIME_DEBUG printf("SWIG_InitializeModule: skip old type %s\n", ret->name); #endif cast->type = ret; ret = 0; } else { /* Check for casting already in the list */ swig_cast_info *ocast = SWIG_TypeCheck(ret->name, type); #ifdef SWIGRUNTIME_DEBUG if (ocast) printf("SWIG_InitializeModule: skip old cast %s\n", ret->name); #endif if (!ocast) ret = 0; } } if (!ret) { #ifdef SWIGRUNTIME_DEBUG printf("SWIG_InitializeModule: adding cast %s\n", cast->type->name); #endif if (type->cast) { type->cast->prev = cast; cast->next = type->cast; } type->cast = cast; } cast++; } /* Set entry in modules->types array equal to the type */ swig_module.types[i] = type; } swig_module.types[i] = 0; #ifdef SWIGRUNTIME_DEBUG printf("**** SWIG_InitializeModule: Cast List ******\n"); for (i = 0; i < swig_module.size; ++i) { int j = 0; swig_cast_info *cast = swig_module.cast_initial[i]; printf("SWIG_InitializeModule: type %d %s\n", i, swig_module.type_initial[i]->name); while (cast->type) { printf("SWIG_InitializeModule: cast type %s\n", cast->type->name); cast++; ++j; } printf("---- Total casts: %d\n",j); } printf("**** SWIG_InitializeModule: Cast List ******\n"); #endif } /* This function will propagate the clientdata field of type to * any new swig_type_info structures that have been added into the list * of equivalent types. It is like calling * SWIG_TypeClientData(type, clientdata) a second time. 
*/ SWIGRUNTIME void SWIG_PropagateClientData(void) { size_t i; swig_cast_info *equiv; static int init_run = 0; if (init_run) return; init_run = 1; for (i = 0; i < swig_module.size; i++) { if (swig_module.types[i]->clientdata) { equiv = swig_module.types[i]->cast; while (equiv) { if (!equiv->converter) { if (equiv->type && !equiv->type->clientdata) SWIG_TypeClientData(equiv->type, swig_module.types[i]->clientdata); } equiv = equiv->next; } } } } #ifdef __cplusplus #if 0 { /* c-mode */ #endif } #endif #ifdef __cplusplus extern "C" { #endif /* Python-specific SWIG API */ #define SWIG_newvarlink() SWIG_Python_newvarlink() #define SWIG_addvarlink(p, name, get_attr, set_attr) SWIG_Python_addvarlink(p, name, get_attr, set_attr) #define SWIG_InstallConstants(d, constants) SWIG_Python_InstallConstants(d, constants) /* ----------------------------------------------------------------------------- * global variable support code. * ----------------------------------------------------------------------------- */ typedef struct swig_globalvar { char *name; /* Name of global variable */ PyObject *(*get_attr)(void); /* Return the current value */ int (*set_attr)(PyObject *); /* Set the value */ struct swig_globalvar *next; } swig_globalvar; typedef struct swig_varlinkobject { PyObject_HEAD swig_globalvar *vars; } swig_varlinkobject; SWIGINTERN PyObject * swig_varlink_repr(swig_varlinkobject *SWIGUNUSEDPARM(v)) { return PyString_FromString(""); } SWIGINTERN PyObject * swig_varlink_str(swig_varlinkobject *v) { PyObject *str = PyString_FromString("("); swig_globalvar *var; for (var = v->vars; var; var=var->next) { PyString_ConcatAndDel(&str,PyString_FromString(var->name)); if (var->next) PyString_ConcatAndDel(&str,PyString_FromString(", ")); } PyString_ConcatAndDel(&str,PyString_FromString(")")); return str; } SWIGINTERN int swig_varlink_print(swig_varlinkobject *v, FILE *fp, int SWIGUNUSEDPARM(flags)) { PyObject *str = swig_varlink_str(v); fprintf(fp,"Swig global variables "); fprintf(fp,"%s\n", PyString_AsString(str)); Py_DECREF(str); return 0; } SWIGINTERN void swig_varlink_dealloc(swig_varlinkobject *v) { swig_globalvar *var = v->vars; while (var) { swig_globalvar *n = var->next; free(var->name); free(var); var = n; } } SWIGINTERN PyObject * swig_varlink_getattr(swig_varlinkobject *v, char *n) { PyObject *res = NULL; swig_globalvar *var = v->vars; while (var) { if (strcmp(var->name,n) == 0) { res = (*var->get_attr)(); break; } var = var->next; } if (res == NULL && !PyErr_Occurred()) { PyErr_SetString(PyExc_NameError,"Unknown C global variable"); } return res; } SWIGINTERN int swig_varlink_setattr(swig_varlinkobject *v, char *n, PyObject *p) { int res = 1; swig_globalvar *var = v->vars; while (var) { if (strcmp(var->name,n) == 0) { res = (*var->set_attr)(p); break; } var = var->next; } if (res == 1 && !PyErr_Occurred()) { PyErr_SetString(PyExc_NameError,"Unknown C global variable"); } return res; } SWIGINTERN PyTypeObject* swig_varlink_type(void) { static char varlink__doc__[] = "Swig var link object"; static PyTypeObject varlink_type; static int type_init = 0; if (!type_init) { const PyTypeObject tmp = { PyObject_HEAD_INIT(NULL) 0, /* Number of items in variable part (ob_size) */ (char *)"swigvarlink", /* Type name (tp_name) */ sizeof(swig_varlinkobject), /* Basic size (tp_basicsize) */ 0, /* Itemsize (tp_itemsize) */ (destructor) swig_varlink_dealloc, /* Deallocator (tp_dealloc) */ (printfunc) swig_varlink_print, /* Print (tp_print) */ (getattrfunc) swig_varlink_getattr, /* get attr (tp_getattr) */ 
(setattrfunc) swig_varlink_setattr, /* Set attr (tp_setattr) */ 0, /* tp_compare */ (reprfunc) swig_varlink_repr, /* tp_repr */ 0, /* tp_as_number */ 0, /* tp_as_sequence */ 0, /* tp_as_mapping */ 0, /* tp_hash */ 0, /* tp_call */ (reprfunc)swig_varlink_str, /* tp_str */ 0, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ 0, /* tp_flags */ varlink__doc__, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ #if PY_VERSION_HEX >= 0x02020000 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, /* tp_iter -> tp_weaklist */ #endif #if PY_VERSION_HEX >= 0x02030000 0, /* tp_del */ #endif #ifdef COUNT_ALLOCS 0,0,0,0 /* tp_alloc -> tp_next */ #endif }; varlink_type = tmp; varlink_type.ob_type = &PyType_Type; type_init = 1; } return &varlink_type; } /* Create a variable linking object for use later */ SWIGINTERN PyObject * SWIG_Python_newvarlink(void) { swig_varlinkobject *result = PyObject_NEW(swig_varlinkobject, swig_varlink_type()); if (result) { result->vars = 0; } return ((PyObject*) result); } SWIGINTERN void SWIG_Python_addvarlink(PyObject *p, char *name, PyObject *(*get_attr)(void), int (*set_attr)(PyObject *p)) { swig_varlinkobject *v = (swig_varlinkobject *) p; swig_globalvar *gv = (swig_globalvar *) malloc(sizeof(swig_globalvar)); if (gv) { size_t size = strlen(name)+1; gv->name = (char *)malloc(size); if (gv->name) { strncpy(gv->name,name,size); gv->get_attr = get_attr; gv->set_attr = set_attr; gv->next = v->vars; } } v->vars = gv; } SWIGINTERN PyObject * SWIG_globals(void) { static PyObject *_SWIG_globals = 0; if (!_SWIG_globals) _SWIG_globals = SWIG_newvarlink(); return _SWIG_globals; } /* ----------------------------------------------------------------------------- * constants/methods manipulation * ----------------------------------------------------------------------------- */ /* Install Constants */ SWIGINTERN void SWIG_Python_InstallConstants(PyObject *d, swig_const_info constants[]) { PyObject *obj = 0; size_t i; for (i = 0; constants[i].type; ++i) { switch(constants[i].type) { case SWIG_PY_POINTER: obj = SWIG_NewPointerObj(constants[i].pvalue, *(constants[i]).ptype,0); break; case SWIG_PY_BINARY: obj = SWIG_NewPackedObj(constants[i].pvalue, constants[i].lvalue, *(constants[i].ptype)); break; default: obj = 0; break; } if (obj) { PyDict_SetItemString(d, constants[i].name, obj); Py_DECREF(obj); } } } /* -----------------------------------------------------------------------------*/ /* Fix SwigMethods to carry the callback ptrs when needed */ /* -----------------------------------------------------------------------------*/ SWIGINTERN void SWIG_Python_FixMethods(PyMethodDef *methods, swig_const_info *const_table, swig_type_info **types, swig_type_info **types_initial) { size_t i; for (i = 0; methods[i].ml_name; ++i) { const char *c = methods[i].ml_doc; if (c && (c = strstr(c, "swig_ptr: "))) { int j; swig_const_info *ci = 0; const char *name = c + 10; for (j = 0; const_table[j].type; ++j) { if (strncmp(const_table[j].name, name, strlen(const_table[j].name)) == 0) { ci = &(const_table[j]); break; } } if (ci) { size_t shift = (ci->ptype) - types; swig_type_info *ty = types_initial[shift]; size_t ldoc = (c - methods[i].ml_doc); size_t lptr = strlen(ty->name)+2*sizeof(void*)+2; char *ndoc = (char*)malloc(ldoc + lptr + 10); if (ndoc) { char *buff = ndoc; void *ptr = (ci->type == SWIG_PY_POINTER) ? 
ci->pvalue : 0; if (ptr) { strncpy(buff, methods[i].ml_doc, ldoc); buff += ldoc; strncpy(buff, "swig_ptr: ", 10); buff += 10; SWIG_PackVoidPtr(buff, ptr, ty->name, lptr); methods[i].ml_doc = ndoc; } } } } } } #ifdef __cplusplus } #endif /* -----------------------------------------------------------------------------* * Partial Init method * -----------------------------------------------------------------------------*/ #ifdef __cplusplus extern "C" #endif SWIGEXPORT void SWIG_init(void) { PyObject *m, *d; /* Fix SwigMethods to carry the callback ptrs when needed */ SWIG_Python_FixMethods(SwigMethods, swig_const_table, swig_types, swig_type_initial); m = Py_InitModule((char *) SWIG_name, SwigMethods); d = PyModule_GetDict(m); SWIG_InitializeModule(0); SWIG_InstallConstants(d,swig_const_table); import_array(); } brian-1.3.1/brian/utils/ccircular/circular.cpp000066400000000000000000000146771167451777000213550ustar00rootroot00000000000000#include "ccircular.h" #include <sstream> #include <string.h> CircularVector::CircularVector(int n) { this->X = NULL; this->retarray = NULL; this->n = n; this->X = new long[n]; // we don't worry about memory errors for the moment... this->retarray = new long[n]; if(!this->X || !this->retarray){ if(this->X) { delete [] this->X; this->X = 0; } if(this->retarray) { delete [] this->retarray; this->retarray = 0; } throw BrianException("Not enough memory in creating CircularVector."); } this->reinit(); } CircularVector::~CircularVector() { if(this->X) delete [] this->X; if(this->retarray) delete [] this->retarray; this->X = NULL; this->retarray = NULL; } void CircularVector::expand(long n) { long orig_n = this->n; this->n += n; n = this->n; long *new_X = new long[n]; long *new_retarray = new long[n]; if(!new_X || !new_retarray){ if(new_X) delete [] new_X; if(new_retarray) delete [] new_retarray; throw BrianException("Not enough memory in expanding CircularVector."); } // newS.X[:S.n-S.cursor] = S.X[S.cursor:] memcpy((void *)new_X, (void *)(this->X+this->cursor), sizeof(long)*(orig_n-this->cursor)); // newS.X[S.n-S.cursor:S.n] = S.X[:S.cursor] memcpy((void *)(new_X+orig_n-this->cursor), (void *)(this->X), sizeof(long)*this->cursor); this->cursor = orig_n; delete [] this->X; this->X = new_X; delete [] this->retarray; this->retarray = new_retarray; } void CircularVector::reinit() { this->cursor = 0; for(int i=0;i<this->n;i++) this->X[i] = 0; } void CircularVector::advance(int k) { this->cursor = this->index(k); } int CircularVector::__len__() { return this->n; } inline int CircularVector::index(int i) { int j = (this->cursor+i)%this->n; if(j<0) j+=this->n; return j; } inline int CircularVector::getitem(int i) { return this->X[this->index(i)]; } int CircularVector::__getitem__(int i) { return this->getitem(i); } void CircularVector::__setitem__(int i, int x) { this->X[this->index(i)] = x; } void CircularVector::__getslice__(long **ret, int *ret_n, int i, int j) { int i0 = this->index(i); int j0 = this->index(j); int n = 0; for(int k=i0;k!=j0;k=(k+1)%this->n) this->retarray[n++] = this->X[k]; *ret = this->retarray; *ret_n = n; } void CircularVector::get_conditional(long **ret, int *ret_n, int i, int j, int min, int max, int offset) { int i0, j0; int lo, mid, hi; int ioff = this->index(i); int joff = this->index(j); int lensearch; if(joff>=ioff) lensearch = joff-ioff; else lensearch = this->n-(ioff-joff); // start with a binary search to the left, note that we use here the bisect_left // algorithm from the standard python library translated into C++ code, and // altered to work with a circular array
with an offset. The lo and hi variables // vary from 0 to lensearch but the probe this->X[...] is offset by ioff, the // position in the X array of the i variable passed to the function. lo = 0; hi = lensearch; while(lo<hi) { mid = (lo+hi)/2; if(this->X[(mid+ioff)%this->n]<min) lo = mid+1; else hi = mid; } i0 = (lo+ioff)%this->n; // then a binary search to the right //lo = 0; // we can use the lo output from the previous step hi = lensearch; while(lo<hi) { mid = (lo+hi)/2; if(this->X[(mid+ioff)%this->n]<max) lo = mid+1; else hi = mid; } j0 = (lo+ioff)%this->n; // then fill in the return array int n = 0; for(int k=i0;k!=j0;k=(k+1)%this->n) this->retarray[n++] = this->X[k]-offset; *ret = this->retarray; *ret_n = n; } void CircularVector::__setslice__(int i, int j, long *x, int n) { if(j>i) { int i0 = this->index(i); int j0 = this->index(j); for(int k=i0,l=0;k!=j0 && l<n;k=(k+1)%this->n,l++) this->X[k] = x[l]; } } string CircularVector::__repr__() { stringstream out; out << "CircularVector("; out << "cursor=" << this->cursor; out << ", X=["; for(int i=0;i<this->n;i++) { if(i) out << " "; out << this->X[i]; } out << "])"; return out.str(); } string CircularVector::__str__() { return this->__repr__(); } SpikeContainer::SpikeContainer(int m) { try{ this->S = NULL; this->ind = NULL; this->S = new CircularVector(2); this->remaining_space = 1; if(m<2) m=2; this->ind = new CircularVector(m+1); } catch(BrianException &e) { if(this->S){ delete this->S; this->S = 0; } if(this->ind){ delete this->ind; this->ind = 0; } throw; } } SpikeContainer::~SpikeContainer() { if(this->S) delete this->S; if(this->ind) delete this->ind; } void SpikeContainer::reinit() { this->S->reinit(); this->ind->reinit(); } void SpikeContainer::push(long *y, int n) { long freed_space = (this->ind->__getitem__(2)-this->ind->__getitem__(1))%this->S->n; if(freed_space<0) freed_space += this->S->n; this->remaining_space += freed_space; while(n>=this->remaining_space){ long orig_cursor = this->S->cursor; long orig_n = this->S->n; this->S->expand(this->S->n); // double size of S for(long i=0; i<this->ind->n; i++){ this->ind->X[i] = (this->ind->X[i]-orig_cursor)%orig_n; if(this->ind->X[i]<0) this->ind->X[i] += orig_n; if(this->ind->X[i]==0) this->ind->X[i] = orig_n; } this->remaining_space += orig_n; } this->S->__setslice__(0, n, y, n); this->S->advance(n); this->ind->advance(1); this->ind->__setitem__(0, this->S->cursor); this->remaining_space -= n; } void SpikeContainer::lastspikes(long **ret, int *ret_n) { this->S->__getslice__(ret, ret_n, this->ind->__getitem__(-1)-this->S->cursor, this->S->n); } void SpikeContainer::__getitem__(long **ret, int *ret_n, int i) { this->S->__getslice__(ret, ret_n, this->ind->__getitem__(-i-1)-this->S->cursor, this->ind->__getitem__(-i)-this->S->cursor+this->S->n); } void SpikeContainer::get_spikes(long **ret, int *ret_n, int delay, int origin, int N) { return this->S->get_conditional(ret, ret_n, this->ind->__getitem__(-delay-1)-this->S->cursor, this->ind->__getitem__(-delay)-this->S->cursor+this->S->n, origin, origin+N, origin); } void SpikeContainer::__getslice__(long **ret, int *ret_n, int i, int j) { return this->S->__getslice__(ret, ret_n, this->ind->__getitem__(-j)-this->S->cursor, this->ind->__getitem__(-i)-this->S->cursor+this->S->n); } string SpikeContainer::__repr__() { stringstream out; out << "SpikeContainer(" << endl; out << " S: "; out << this->S->__repr__() << endl; out << " ind: "; out << this->ind->__repr__(); out << ")"; return out.str(); } string SpikeContainer::__str__() { return this->__repr__(); } brian-1.3.1/brian/utils/ccircular/numpy.i000066400000000000000000001554401167451777000203610ustar00rootroot00000000000000/* -*- C -*- (not really, but good for syntax highlighting) */ #ifdef
SWIGPYTHON %{ #ifndef SWIG_FILE_WITH_INIT # define NO_IMPORT_ARRAY #endif #include "stdio.h" #include %} /**********************************************************************/ %fragment("NumPy_Backward_Compatibility", "header") { /* Support older NumPy data type names */ %#if NDARRAY_VERSION < 0x01000000 %#define NPY_BOOL PyArray_BOOL %#define NPY_BYTE PyArray_BYTE %#define NPY_UBYTE PyArray_UBYTE %#define NPY_SHORT PyArray_SHORT %#define NPY_USHORT PyArray_USHORT %#define NPY_INT PyArray_INT %#define NPY_UINT PyArray_UINT %#define NPY_LONG PyArray_LONG %#define NPY_ULONG PyArray_ULONG %#define NPY_LONGLONG PyArray_LONGLONG %#define NPY_ULONGLONG PyArray_ULONGLONG %#define NPY_FLOAT PyArray_FLOAT %#define NPY_DOUBLE PyArray_DOUBLE %#define NPY_LONGDOUBLE PyArray_LONGDOUBLE %#define NPY_CFLOAT PyArray_CFLOAT %#define NPY_CDOUBLE PyArray_CDOUBLE %#define NPY_CLONGDOUBLE PyArray_CLONGDOUBLE %#define NPY_OBJECT PyArray_OBJECT %#define NPY_STRING PyArray_STRING %#define NPY_UNICODE PyArray_UNICODE %#define NPY_VOID PyArray_VOID %#define NPY_NTYPES PyArray_NTYPES %#define NPY_NOTYPE PyArray_NOTYPE %#define NPY_CHAR PyArray_CHAR %#define NPY_USERDEF PyArray_USERDEF %#define npy_intp intp %#define NPY_MAX_BYTE MAX_BYTE %#define NPY_MIN_BYTE MIN_BYTE %#define NPY_MAX_UBYTE MAX_UBYTE %#define NPY_MAX_SHORT MAX_SHORT %#define NPY_MIN_SHORT MIN_SHORT %#define NPY_MAX_USHORT MAX_USHORT %#define NPY_MAX_INT MAX_INT %#define NPY_MIN_INT MIN_INT %#define NPY_MAX_UINT MAX_UINT %#define NPY_MAX_LONG MAX_LONG %#define NPY_MIN_LONG MIN_LONG %#define NPY_MAX_ULONG MAX_ULONG %#define NPY_MAX_LONGLONG MAX_LONGLONG %#define NPY_MIN_LONGLONG MIN_LONGLONG %#define NPY_MAX_ULONGLONG MAX_ULONGLONG %#define NPY_MAX_INTP MAX_INTP %#define NPY_MIN_INTP MIN_INTP %#define NPY_FARRAY FARRAY %#define NPY_F_CONTIGUOUS F_CONTIGUOUS %#endif } /**********************************************************************/ /* The following code originally appeared in * enthought/kiva/agg/src/numeric.i written by Eric Jones. It was * translated from C++ to C by John Hunter. Bill Spotz has modified * it to fix some minor bugs, upgrade from Numeric to numpy (all * versions), add some comments and functionality, and convert from * direct code insertion to SWIG fragments. */ %fragment("NumPy_Macros", "header") { /* Macros to extract array attributes. */ %#define is_array(a) ((a) && PyArray_Check((PyArrayObject *)a)) %#define array_type(a) (int)(PyArray_TYPE(a)) %#define array_numdims(a) (((PyArrayObject *)a)->nd) %#define array_dimensions(a) (((PyArrayObject *)a)->dimensions) %#define array_size(a,i) (((PyArrayObject *)a)->dimensions[i]) %#define array_data(a) (((PyArrayObject *)a)->data) %#define array_is_contiguous(a) (PyArray_ISCONTIGUOUS(a)) %#define array_is_native(a) (PyArray_ISNOTSWAPPED(a)) %#define array_is_fortran(a) (PyArray_ISFORTRAN(a)) } /**********************************************************************/ %fragment("NumPy_Utilities", "header") { /* Given a PyObject, return a string describing its type. 
*/ char* pytype_string(PyObject* py_obj) { if (py_obj == NULL ) return "C NULL value"; if (py_obj == Py_None ) return "Python None" ; if (PyCallable_Check(py_obj)) return "callable" ; if (PyString_Check( py_obj)) return "string" ; if (PyInt_Check( py_obj)) return "int" ; if (PyFloat_Check( py_obj)) return "float" ; if (PyDict_Check( py_obj)) return "dict" ; if (PyList_Check( py_obj)) return "list" ; if (PyTuple_Check( py_obj)) return "tuple" ; if (PyFile_Check( py_obj)) return "file" ; if (PyModule_Check( py_obj)) return "module" ; if (PyInstance_Check(py_obj)) return "instance" ; return "unkown type"; } /* Given a NumPy typecode, return a string describing the type. */ char* typecode_string(int typecode) { static char* type_names[25] = {"bool", "byte", "unsigned byte", "short", "unsigned short", "int", "unsigned int", "long", "unsigned long", "long long", "unsigned long long", "float", "double", "long double", "complex float", "complex double", "complex long double", "object", "string", "unicode", "void", "ntypes", "notype", "char", "unknown"}; return typecode < 24 ? type_names[typecode] : type_names[24]; } /* Make sure input has correct numpy type. Allow character and byte * to match. Also allow int and long to match. This is deprecated. * You should use PyArray_EquivTypenums() instead. */ int type_match(int actual_type, int desired_type) { return PyArray_EquivTypenums(actual_type, desired_type); } } /**********************************************************************/ %fragment("NumPy_Object_to_Array", "header", fragment="NumPy_Backward_Compatibility", fragment="NumPy_Macros", fragment="NumPy_Utilities") { /* Given a PyObject pointer, cast it to a PyArrayObject pointer if * legal. If not, set the python error string appropriately and * return NULL. */ PyArrayObject* obj_to_array_no_conversion(PyObject* input, int typecode) { PyArrayObject* ary = NULL; if (is_array(input) && (typecode == NPY_NOTYPE || PyArray_EquivTypenums(array_type(input), typecode))) { ary = (PyArrayObject*) input; } else if is_array(input) { char* desired_type = typecode_string(typecode); char* actual_type = typecode_string(array_type(input)); PyErr_Format(PyExc_TypeError, "Array of type '%s' required. Array of type '%s' given", desired_type, actual_type); ary = NULL; } else { char * desired_type = typecode_string(typecode); char * actual_type = pytype_string(input); PyErr_Format(PyExc_TypeError, "Array of type '%s' required. A '%s' was given", desired_type, actual_type); ary = NULL; } return ary; } /* Convert the given PyObject to a NumPy array with the given * typecode. On success, return a valid PyArrayObject* with the * correct type. On failure, the python error string will be set and * the routine returns NULL. */ PyArrayObject* obj_to_array_allow_conversion(PyObject* input, int typecode, int* is_new_object) { PyArrayObject* ary = NULL; PyObject* py_obj; if (is_array(input) && (typecode == NPY_NOTYPE || PyArray_EquivTypenums(array_type(input),typecode))) { ary = (PyArrayObject*) input; *is_new_object = 0; } else { py_obj = PyArray_FromObject(input, typecode, 0, 0); /* If NULL, PyArray_FromObject will have set python error value.*/ ary = (PyArrayObject*) py_obj; *is_new_object = 1; } return ary; } /* Given a PyArrayObject, check to see if it is contiguous. If so, * return the input pointer and flag it as not a new object. If it is * not contiguous, create a new PyArrayObject using the original data, * flag it as a new object and return the pointer. 
*/ PyArrayObject* make_contiguous(PyArrayObject* ary, int* is_new_object, int min_dims, int max_dims) { PyArrayObject* result; if (array_is_contiguous(ary)) { result = ary; *is_new_object = 0; } else { result = (PyArrayObject*) PyArray_ContiguousFromObject((PyObject*)ary, array_type(ary), min_dims, max_dims); *is_new_object = 1; } return result; } /* Convert a given PyObject to a contiguous PyArrayObject of the * specified type. If the input object is not a contiguous * PyArrayObject, a new one will be created and the new object flag * will be set. */ PyArrayObject* obj_to_array_contiguous_allow_conversion(PyObject* input, int typecode, int* is_new_object) { int is_new1 = 0; int is_new2 = 0; PyArrayObject* ary2; PyArrayObject* ary1 = obj_to_array_allow_conversion(input, typecode, &is_new1); if (ary1) { ary2 = make_contiguous(ary1, &is_new2, 0, 0); if ( is_new1 && is_new2) { Py_DECREF(ary1); } ary1 = ary2; } *is_new_object = is_new1 || is_new2; return ary1; } } /**********************************************************************/ %fragment("NumPy_Array_Requirements", "header", fragment="NumPy_Backward_Compatibility", fragment="NumPy_Macros") { /* Test whether a python object is contiguous. If array is * contiguous, return 1. Otherwise, set the python error string and * return 0. */ int require_contiguous(PyArrayObject* ary) { int contiguous = 1; if (!array_is_contiguous(ary)) { PyErr_SetString(PyExc_TypeError, "Array must be contiguous. A non-contiguous array was given"); contiguous = 0; } return contiguous; } /* Require that a numpy array is not byte-swapped. If the array is * not byte-swapped, return 1. Otherwise, set the python error string * and return 0. */ int require_native(PyArrayObject* ary) { int native = 1; if (!array_is_native(ary)) { PyErr_SetString(PyExc_TypeError, "Array must have native byteorder. " "A byte-swapped array was given"); native = 0; } return native; } /* Require the given PyArrayObject to have a specified number of * dimensions. If the array has the specified number of dimensions, * return 1. Otherwise, set the python error string and return 0. */ int require_dimensions(PyArrayObject* ary, int exact_dimensions) { int success = 1; if (array_numdims(ary) != exact_dimensions) { PyErr_Format(PyExc_TypeError, "Array must have %d dimensions. Given array has %d dimensions", exact_dimensions, array_numdims(ary)); success = 0; } return success; } /* Require the given PyArrayObject to have one of a list of specified * number of dimensions. If the array has one of the specified number * of dimensions, return 1. Otherwise, set the python error string * and return 0. */ int require_dimensions_n(PyArrayObject* ary, int* exact_dimensions, int n) { int success = 0; int i; char dims_str[255] = ""; char s[255]; for (i = 0; i < n && !success; i++) { if (array_numdims(ary) == exact_dimensions[i]) { success = 1; } } if (!success) { for (i = 0; i < n-1; i++) { sprintf(s, "%d, ", exact_dimensions[i]); strcat(dims_str,s); } sprintf(s, " or %d", exact_dimensions[n-1]); strcat(dims_str,s); PyErr_Format(PyExc_TypeError, "Array must have %s dimensions. Given array has %d dimensions", dims_str, array_numdims(ary)); } return success; } /* Require the given PyArrayObject to have a specified shape. If the * array has the specified shape, return 1. Otherwise, set the python * error string and return 0. 
*/ int require_size(PyArrayObject* ary, npy_intp* size, int n) { int i; int success = 1; int len; char desired_dims[255] = "["; char s[255]; char actual_dims[255] = "["; for(i=0; i < n;i++) { if (size[i] != -1 && size[i] != array_size(ary,i)) { success = 0; } } if (!success) { for (i = 0; i < n; i++) { if (size[i] == -1) { sprintf(s, "*,"); } else { sprintf(s, "%ld,", (long int)size[i]); } strcat(desired_dims,s); } len = strlen(desired_dims); desired_dims[len-1] = ']'; for (i = 0; i < n; i++) { sprintf(s, "%ld,", (long int)array_size(ary,i)); strcat(actual_dims,s); } len = strlen(actual_dims); actual_dims[len-1] = ']'; PyErr_Format(PyExc_TypeError, "Array must have shape of %s. Given array has shape of %s", desired_dims, actual_dims); } return success; } /* Require the given PyArrayObject to to be FORTRAN ordered. If the * the PyArrayObject is already FORTRAN ordered, do nothing. Else, * set the FORTRAN ordering flag and recompute the strides. */ int require_fortran(PyArrayObject* ary) { int success = 1; int nd = array_numdims(ary); int i; if (array_is_fortran(ary)) return success; /* Set the FORTRAN ordered flag */ ary->flags = NPY_FARRAY; /* Recompute the strides */ ary->strides[0] = ary->strides[nd-1]; for (i=1; i < nd; ++i) ary->strides[i] = ary->strides[i-1] * array_size(ary,i-1); return success; } } /* Combine all NumPy fragments into one for convenience */ %fragment("NumPy_Fragments", "header", fragment="NumPy_Backward_Compatibility", fragment="NumPy_Macros", fragment="NumPy_Utilities", fragment="NumPy_Object_to_Array", fragment="NumPy_Array_Requirements") { } /* End John Hunter translation (with modifications by Bill Spotz) */ /* %numpy_typemaps() macro * * This macro defines a family of 41 typemaps that allow C arguments * of the form * * (DATA_TYPE IN_ARRAY1[ANY]) * (DATA_TYPE* IN_ARRAY1, DIM_TYPE DIM1) * (DIM_TYPE DIM1, DATA_TYPE* IN_ARRAY1) * * (DATA_TYPE IN_ARRAY2[ANY][ANY]) * (DATA_TYPE* IN_ARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) * (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* IN_ARRAY2) * (DATA_TYPE* IN_FARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) * (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* IN_FARRAY2) * * (DATA_TYPE IN_ARRAY3[ANY][ANY][ANY]) * (DATA_TYPE* IN_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) * (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* IN_ARRAY3) * (DATA_TYPE* IN_FARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) * (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* IN_FARRAY3) * * (DATA_TYPE INPLACE_ARRAY1[ANY]) * (DATA_TYPE* INPLACE_ARRAY1, DIM_TYPE DIM1) * (DIM_TYPE DIM1, DATA_TYPE* INPLACE_ARRAY1) * * (DATA_TYPE INPLACE_ARRAY2[ANY][ANY]) * (DATA_TYPE* INPLACE_ARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) * (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* INPLACE_ARRAY2) * (DATA_TYPE* INPLACE_FARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) * (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* INPLACE_FARRAY2) * * (DATA_TYPE INPLACE_ARRAY3[ANY][ANY][ANY]) * (DATA_TYPE* INPLACE_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) * (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* INPLACE_ARRAY3) * (DATA_TYPE* INPLACE_FARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) * (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* INPLACE_FARRAY3) * * (DATA_TYPE ARGOUT_ARRAY1[ANY]) * (DATA_TYPE* ARGOUT_ARRAY1, DIM_TYPE DIM1) * (DIM_TYPE DIM1, DATA_TYPE* ARGOUT_ARRAY1) * * (DATA_TYPE ARGOUT_ARRAY2[ANY][ANY]) * * (DATA_TYPE ARGOUT_ARRAY3[ANY][ANY][ANY]) * * (DATA_TYPE** ARGOUTVIEW_ARRAY1, DIM_TYPE* DIM1) * (DIM_TYPE* DIM1, DATA_TYPE** ARGOUTVIEW_ARRAY1) * * (DATA_TYPE** 
ARGOUTVIEW_ARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2) * (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEW_ARRAY2) * (DATA_TYPE** ARGOUTVIEW_FARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2) * (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEW_FARRAY2) * * (DATA_TYPE** ARGOUTVIEW_ARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3) * (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEW_ARRAY3) * (DATA_TYPE** ARGOUTVIEW_FARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3) * (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEW_FARRAY3) * * where "DATA_TYPE" is any type supported by the NumPy module, and * "DIM_TYPE" is any int-like type suitable for specifying dimensions. * The difference between "ARRAY" typemaps and "FARRAY" typemaps is * that the "FARRAY" typemaps expect FORTRAN ordering of * multidimensional arrays. In python, the dimensions will not need * to be specified (except for the "DATA_TYPE* ARGOUT_ARRAY1" * typemaps). The IN_ARRAYs can be a numpy array or any sequence that * can be converted to a numpy array of the specified type. The * INPLACE_ARRAYs must be numpy arrays of the appropriate type. The * ARGOUT_ARRAYs will be returned as new numpy arrays of the * appropriate type. * * These typemaps can be applied to existing functions using the * %apply directive. For example: * * %apply (double* IN_ARRAY1, int DIM1) {(double* series, int length)}; * double prod(double* series, int length); * * %apply (int DIM1, int DIM2, double* INPLACE_ARRAY2) * {(int rows, int cols, double* matrix )}; * void floor(int rows, int cols, double* matrix, double f); * * %apply (double IN_ARRAY3[ANY][ANY][ANY]) * {(double tensor[2][2][2] )}; * %apply (double ARGOUT_ARRAY3[ANY][ANY][ANY]) * {(double low[2][2][2] )}; * %apply (double ARGOUT_ARRAY3[ANY][ANY][ANY]) * {(double upp[2][2][2] )}; * void luSplit(double tensor[2][2][2], * double low[2][2][2], * double upp[2][2][2] ); * * or directly with * * double prod(double* IN_ARRAY1, int DIM1); * * void floor(int DIM1, int DIM2, double* INPLACE_ARRAY2, double f); * * void luSplit(double IN_ARRAY3[ANY][ANY][ANY], * double ARGOUT_ARRAY3[ANY][ANY][ANY], * double ARGOUT_ARRAY3[ANY][ANY][ANY]); */ %define %numpy_typemaps(DATA_TYPE, DATA_TYPECODE, DIM_TYPE) /************************/ /* Input Array Typemaps */ /************************/ /* Typemap suite for (DATA_TYPE IN_ARRAY1[ANY]) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE IN_ARRAY1[ANY]) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE IN_ARRAY1[ANY]) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[1] = { $1_dim0 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 1) || !require_size(array, size, 1)) SWIG_fail; $1 = ($1_ltype) array_data(array); } %typemap(freearg) (DATA_TYPE IN_ARRAY1[ANY]) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DATA_TYPE* IN_ARRAY1, DIM_TYPE DIM1) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* IN_ARRAY1, DIM_TYPE DIM1) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* IN_ARRAY1, DIM_TYPE DIM1) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[1] = { -1 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 1) || 
!require_size(array, size, 1)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); } %typemap(freearg) (DATA_TYPE* IN_ARRAY1, DIM_TYPE DIM1) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DIM_TYPE DIM1, DATA_TYPE* IN_ARRAY1) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DATA_TYPE* IN_ARRAY1) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DATA_TYPE* IN_ARRAY1) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[1] = {-1}; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 1) || !require_size(array, size, 1)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DATA_TYPE*) array_data(array); } %typemap(freearg) (DIM_TYPE DIM1, DATA_TYPE* IN_ARRAY1) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DATA_TYPE IN_ARRAY2[ANY][ANY]) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE IN_ARRAY2[ANY][ANY]) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE IN_ARRAY2[ANY][ANY]) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[2] = { $1_dim0, $1_dim1 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 2) || !require_size(array, size, 2)) SWIG_fail; $1 = ($1_ltype) array_data(array); } %typemap(freearg) (DATA_TYPE IN_ARRAY2[ANY][ANY]) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DATA_TYPE* IN_ARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* IN_ARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* IN_ARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[2] = { -1, -1 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 2) || !require_size(array, size, 2)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); $3 = (DIM_TYPE) array_size(array,1); } %typemap(freearg) (DATA_TYPE* IN_ARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* IN_ARRAY2) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* IN_ARRAY2) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* IN_ARRAY2) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[2] = { -1, -1 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 2) || !require_size(array, size, 2)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DIM_TYPE) array_size(array,1); $3 = (DATA_TYPE*) array_data(array); } %typemap(freearg) (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* IN_ARRAY2) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DATA_TYPE* IN_FARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) */ 
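/* Note: the FARRAY variants below additionally pass the converted array
 * through require_fortran(), which marks it as Fortran (column-major)
 * ordered and recomputes the strides if necessary. */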
%typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* IN_FARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* IN_FARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[2] = { -1, -1 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 2) || !require_size(array, size, 2) || !require_fortran(array)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); $3 = (DIM_TYPE) array_size(array,1); } %typemap(freearg) (DATA_TYPE* IN_FARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* IN_FARRAY2) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* IN_FARRAY2) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* IN_FARRAY2) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[2] = { -1, -1 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 2) || !require_size(array, size, 2) || !require_fortran(array)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DIM_TYPE) array_size(array,1); $3 = (DATA_TYPE*) array_data(array); } %typemap(freearg) (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* IN_FARRAY2) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DATA_TYPE IN_ARRAY3[ANY][ANY][ANY]) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE IN_ARRAY3[ANY][ANY][ANY]) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE IN_ARRAY3[ANY][ANY][ANY]) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[3] = { $1_dim0, $1_dim1, $1_dim2 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 3) || !require_size(array, size, 3)) SWIG_fail; $1 = ($1_ltype) array_data(array); } %typemap(freearg) (DATA_TYPE IN_ARRAY3[ANY][ANY][ANY]) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DATA_TYPE* IN_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, * DIM_TYPE DIM3) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* IN_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* IN_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[3] = { -1, -1, -1 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 3) || !require_size(array, size, 3)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); $3 = (DIM_TYPE) array_size(array,1); $4 = (DIM_TYPE) array_size(array,2); } %typemap(freearg) (DATA_TYPE* IN_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, * DATA_TYPE* IN_ARRAY3) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, 
fragment="NumPy_Macros") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* IN_ARRAY3) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* IN_ARRAY3) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[3] = { -1, -1, -1 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 3) || !require_size(array, size, 3)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DIM_TYPE) array_size(array,1); $3 = (DIM_TYPE) array_size(array,2); $4 = (DATA_TYPE*) array_data(array); } %typemap(freearg) (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* IN_ARRAY3) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DATA_TYPE* IN_FARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, * DIM_TYPE DIM3) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* IN_FARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* IN_FARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[3] = { -1, -1, -1 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 3) || !require_size(array, size, 3) || !require_fortran(array)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); $3 = (DIM_TYPE) array_size(array,1); $4 = (DIM_TYPE) array_size(array,2); } %typemap(freearg) (DATA_TYPE* IN_FARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, * DATA_TYPE* IN_FARRAY3) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* IN_FARRAY3) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* IN_FARRAY3) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[3] = { -1, -1, -1 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 3) || !require_size(array, size, 3) || !require_fortran(array)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DIM_TYPE) array_size(array,1); $3 = (DIM_TYPE) array_size(array,2); $4 = (DATA_TYPE*) array_data(array); } %typemap(freearg) (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* IN_FARRAY3) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /***************************/ /* In-Place Array Typemaps */ /***************************/ /* Typemap suite for (DATA_TYPE INPLACE_ARRAY1[ANY]) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE INPLACE_ARRAY1[ANY]) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE INPLACE_ARRAY1[ANY]) (PyArrayObject* array=NULL) { npy_intp size[1] = { $1_dim0 }; array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,1) || !require_size(array, size, 1) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = ($1_ltype) array_data(array); } /* Typemap suite for
(DATA_TYPE* INPLACE_ARRAY1, DIM_TYPE DIM1) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* INPLACE_ARRAY1, DIM_TYPE DIM1) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* INPLACE_ARRAY1, DIM_TYPE DIM1) (PyArrayObject* array=NULL, int i=1) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,1) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = 1; for (i=0; i < array_numdims(array); ++i) $2 *= array_size(array,i); } /* Typemap suite for (DIM_TYPE DIM1, DATA_TYPE* INPLACE_ARRAY1) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DATA_TYPE* INPLACE_ARRAY1) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DATA_TYPE* INPLACE_ARRAY1) (PyArrayObject* array=NULL, int i=0) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,1) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = 1; for (i=0; i < array_numdims(array); ++i) $1 *= array_size(array,i); $2 = (DATA_TYPE*) array_data(array); } /* Typemap suite for (DATA_TYPE INPLACE_ARRAY2[ANY][ANY]) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE INPLACE_ARRAY2[ANY][ANY]) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE INPLACE_ARRAY2[ANY][ANY]) (PyArrayObject* array=NULL) { npy_intp size[2] = { $1_dim0, $1_dim1 }; array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,2) || !require_size(array, size, 2) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = ($1_ltype) array_data(array); } /* Typemap suite for (DATA_TYPE* INPLACE_ARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* INPLACE_ARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* INPLACE_ARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) (PyArrayObject* array=NULL) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,2) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); $3 = (DIM_TYPE) array_size(array,1); } /* Typemap suite for (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* INPLACE_ARRAY2) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* INPLACE_ARRAY2) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* INPLACE_ARRAY2) (PyArrayObject* array=NULL) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,2) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DIM_TYPE) array_size(array,1); $3 = (DATA_TYPE*) array_data(array); } /* Typemap suite for (DATA_TYPE* INPLACE_FARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* INPLACE_FARRAY2, DIM_TYPE 
DIM1, DIM_TYPE DIM2) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* INPLACE_FARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) (PyArrayObject* array=NULL) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,2) || !require_contiguous(array) || !require_native(array) || !require_fortran(array)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); $3 = (DIM_TYPE) array_size(array,1); } /* Typemap suite for (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* INPLACE_FARRAY2) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* INPLACE_FARRAY2) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* INPLACE_FARRAY2) (PyArrayObject* array=NULL) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,2) || !require_contiguous(array) || !require_native(array) || !require_fortran(array)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DIM_TYPE) array_size(array,1); $3 = (DATA_TYPE*) array_data(array); } /* Typemap suite for (DATA_TYPE INPLACE_ARRAY3[ANY][ANY][ANY]) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE INPLACE_ARRAY3[ANY][ANY][ANY]) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE INPLACE_ARRAY3[ANY][ANY][ANY]) (PyArrayObject* array=NULL) { npy_intp size[3] = { $1_dim0, $1_dim1, $1_dim2 }; array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,3) || !require_size(array, size, 3) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = ($1_ltype) array_data(array); } /* Typemap suite for (DATA_TYPE* INPLACE_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, * DIM_TYPE DIM3) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* INPLACE_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* INPLACE_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) (PyArrayObject* array=NULL) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,3) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); $3 = (DIM_TYPE) array_size(array,1); $4 = (DIM_TYPE) array_size(array,2); } /* Typemap suite for (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, * DATA_TYPE* INPLACE_ARRAY3) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* INPLACE_ARRAY3) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* INPLACE_ARRAY3) (PyArrayObject* array=NULL) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,3) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DIM_TYPE) array_size(array,1); $3 = (DIM_TYPE) array_size(array,2); $4 = (DATA_TYPE*) array_data(array); } /* Typemap suite for (DATA_TYPE* 
INPLACE_FARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, * DIM_TYPE DIM3) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* INPLACE_FARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* INPLACE_FARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) (PyArrayObject* array=NULL) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,3) || !require_contiguous(array) || !require_native(array) || !require_fortran(array)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); $3 = (DIM_TYPE) array_size(array,1); $4 = (DIM_TYPE) array_size(array,2); } /* Typemap suite for (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, * DATA_TYPE* INPLACE_FARRAY3) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* INPLACE_FARRAY3) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* INPLACE_FARRAY3) (PyArrayObject* array=NULL) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,3) || !require_contiguous(array) || !require_native(array) || !require_fortran(array)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DIM_TYPE) array_size(array,1); $3 = (DIM_TYPE) array_size(array,2); $4 = (DATA_TYPE*) array_data(array); } /*************************/ /* Argout Array Typemaps */ /*************************/ /* Typemap suite for (DATA_TYPE ARGOUT_ARRAY1[ANY]) */ %typemap(in,numinputs=0, fragment="NumPy_Backward_Compatibility,NumPy_Macros") (DATA_TYPE ARGOUT_ARRAY1[ANY]) (PyObject * array = NULL) { npy_intp dims[1] = { $1_dim0 }; array = PyArray_SimpleNew(1, dims, DATA_TYPECODE); if (!array) SWIG_fail; $1 = ($1_ltype) array_data(array); } %typemap(argout) (DATA_TYPE ARGOUT_ARRAY1[ANY]) { $result = SWIG_Python_AppendOutput($result,array$argnum); } /* Typemap suite for (DATA_TYPE* ARGOUT_ARRAY1, DIM_TYPE DIM1) */ %typemap(in,numinputs=1, fragment="NumPy_Fragments") (DATA_TYPE* ARGOUT_ARRAY1, DIM_TYPE DIM1) (PyObject * array = NULL) { npy_intp dims[1]; if (!PyInt_Check($input)) { char* typestring = pytype_string($input); PyErr_Format(PyExc_TypeError, "Int dimension expected. '%s' given.", typestring); SWIG_fail; } $2 = (DIM_TYPE) PyInt_AsLong($input); dims[0] = (npy_intp) $2; array = PyArray_SimpleNew(1, dims, DATA_TYPECODE); if (!array) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); } %typemap(argout) (DATA_TYPE* ARGOUT_ARRAY1, DIM_TYPE DIM1) { $result = SWIG_Python_AppendOutput($result,array$argnum); } /* Typemap suite for (DIM_TYPE DIM1, DATA_TYPE* ARGOUT_ARRAY1) */ %typemap(in,numinputs=1, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DATA_TYPE* ARGOUT_ARRAY1) (PyObject * array = NULL) { npy_intp dims[1]; if (!PyInt_Check($input)) { char* typestring = pytype_string($input); PyErr_Format(PyExc_TypeError, "Int dimension expected. 
'%s' given.", typestring); SWIG_fail; } $1 = (DIM_TYPE) PyInt_AsLong($input); dims[0] = (npy_intp) $1; array = PyArray_SimpleNew(1, dims, DATA_TYPECODE); if (!array) SWIG_fail; $2 = (DATA_TYPE*) array_data(array); } %typemap(argout) (DIM_TYPE DIM1, DATA_TYPE* ARGOUT_ARRAY1) { $result = SWIG_Python_AppendOutput($result,array$argnum); } /* Typemap suite for (DATA_TYPE ARGOUT_ARRAY2[ANY][ANY]) */ %typemap(in,numinputs=0, fragment="NumPy_Backward_Compatibility,NumPy_Macros") (DATA_TYPE ARGOUT_ARRAY2[ANY][ANY]) (PyObject * array = NULL) { npy_intp dims[2] = { $1_dim0, $1_dim1 }; array = PyArray_SimpleNew(2, dims, DATA_TYPECODE); if (!array) SWIG_fail; $1 = ($1_ltype) array_data(array); } %typemap(argout) (DATA_TYPE ARGOUT_ARRAY2[ANY][ANY]) { $result = SWIG_Python_AppendOutput($result,array$argnum); } /* Typemap suite for (DATA_TYPE ARGOUT_ARRAY3[ANY][ANY][ANY]) */ %typemap(in,numinputs=0, fragment="NumPy_Backward_Compatibility,NumPy_Macros") (DATA_TYPE ARGOUT_ARRAY3[ANY][ANY][ANY]) (PyObject * array = NULL) { npy_intp dims[3] = { $1_dim0, $1_dim1, $1_dim2 }; array = PyArray_SimpleNew(3, dims, DATA_TYPECODE); if (!array) SWIG_fail; $1 = ($1_ltype) array_data(array); } %typemap(argout) (DATA_TYPE ARGOUT_ARRAY3[ANY][ANY][ANY]) { $result = SWIG_Python_AppendOutput($result,array$argnum); } /*****************************/ /* Argoutview Array Typemaps */ /*****************************/ /* Typemap suite for (DATA_TYPE** ARGOUTVIEW_ARRAY1, DIM_TYPE* DIM1) */ %typemap(in,numinputs=0) (DATA_TYPE** ARGOUTVIEW_ARRAY1, DIM_TYPE* DIM1 ) (DATA_TYPE* data_temp , DIM_TYPE dim_temp) { $1 = &data_temp; $2 = &dim_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility") (DATA_TYPE** ARGOUTVIEW_ARRAY1, DIM_TYPE* DIM1) { npy_intp dims[1] = { *$2 }; PyObject * array = PyArray_SimpleNewFromData(1, dims, DATA_TYPECODE, (void*)(*$1)); if (!array) SWIG_fail; $result = SWIG_Python_AppendOutput($result,array); } /* Typemap suite for (DIM_TYPE* DIM1, DATA_TYPE** ARGOUTVIEW_ARRAY1) */ %typemap(in,numinputs=0) (DIM_TYPE* DIM1 , DATA_TYPE** ARGOUTVIEW_ARRAY1) (DIM_TYPE dim_temp, DATA_TYPE* data_temp ) { $1 = &dim_temp; $2 = &data_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility") (DIM_TYPE* DIM1, DATA_TYPE** ARGOUTVIEW_ARRAY1) { npy_intp dims[1] = { *$1 }; PyObject * array = PyArray_SimpleNewFromData(1, dims, DATA_TYPECODE, (void*)(*$2)); if (!array) SWIG_fail; $result = SWIG_Python_AppendOutput($result,array); } /* Typemap suite for (DATA_TYPE** ARGOUTVIEW_ARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2) */ %typemap(in,numinputs=0) (DATA_TYPE** ARGOUTVIEW_ARRAY2, DIM_TYPE* DIM1 , DIM_TYPE* DIM2 ) (DATA_TYPE* data_temp , DIM_TYPE dim1_temp, DIM_TYPE dim2_temp) { $1 = &data_temp; $2 = &dim1_temp; $3 = &dim2_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility") (DATA_TYPE** ARGOUTVIEW_ARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2) { npy_intp dims[2] = { *$2, *$3 }; PyObject * array = PyArray_SimpleNewFromData(2, dims, DATA_TYPECODE, (void*)(*$1)); if (!array) SWIG_fail; $result = SWIG_Python_AppendOutput($result,array); } /* Typemap suite for (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEW_ARRAY2) */ %typemap(in,numinputs=0) (DIM_TYPE* DIM1 , DIM_TYPE* DIM2 , DATA_TYPE** ARGOUTVIEW_ARRAY2) (DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DATA_TYPE* data_temp ) { $1 = &dim1_temp; $2 = &dim2_temp; $3 = &data_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility") (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEW_ARRAY2) { npy_intp dims[2] = { *$1, *$2 }; PyObject * array = 
PyArray_SimpleNewFromData(2, dims, DATA_TYPECODE, (void*)(*$3)); if (!array) SWIG_fail; $result = SWIG_Python_AppendOutput($result,array); } /* Typemap suite for (DATA_TYPE** ARGOUTVIEW_FARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2) */ %typemap(in,numinputs=0) (DATA_TYPE** ARGOUTVIEW_FARRAY2, DIM_TYPE* DIM1 , DIM_TYPE* DIM2 ) (DATA_TYPE* data_temp , DIM_TYPE dim1_temp, DIM_TYPE dim2_temp) { $1 = &data_temp; $2 = &dim1_temp; $3 = &dim2_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility,NumPy_Array_Requirements") (DATA_TYPE** ARGOUTVIEW_FARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2) { npy_intp dims[2] = { *$2, *$3 }; PyObject * obj = PyArray_SimpleNewFromData(2, dims, DATA_TYPECODE, (void*)(*$1)); PyArrayObject * array = (PyArrayObject*) obj; if (!array || !require_fortran(array)) SWIG_fail; $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEW_FARRAY2) */ %typemap(in,numinputs=0) (DIM_TYPE* DIM1 , DIM_TYPE* DIM2 , DATA_TYPE** ARGOUTVIEW_FARRAY2) (DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DATA_TYPE* data_temp ) { $1 = &dim1_temp; $2 = &dim2_temp; $3 = &data_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility,NumPy_Array_Requirements") (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEW_FARRAY2) { npy_intp dims[2] = { *$1, *$2 }; PyObject * obj = PyArray_SimpleNewFromData(2, dims, DATA_TYPECODE, (void*)(*$3)); PyArrayObject * array = (PyArrayObject*) obj; if (!array || !require_fortran(array)) SWIG_fail; $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DATA_TYPE** ARGOUTVIEW_ARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3) */ %typemap(in,numinputs=0) (DATA_TYPE** ARGOUTVIEW_ARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3) (DATA_TYPE* data_temp, DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DIM_TYPE dim3_temp) { $1 = &data_temp; $2 = &dim1_temp; $3 = &dim2_temp; $4 = &dim3_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility") (DATA_TYPE** ARGOUTVIEW_ARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3) { npy_intp dims[3] = { *$2, *$3, *$4 }; PyObject * array = PyArray_SimpleNewFromData(3, dims, DATA_TYPECODE, (void*)(*$1)); if (!array) SWIG_fail; $result = SWIG_Python_AppendOutput($result,array); } /* Typemap suite for (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEW_ARRAY3) */ %typemap(in,numinputs=0) (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEW_ARRAY3) (DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DIM_TYPE dim3_temp, DATA_TYPE* data_temp) { $1 = &dim1_temp; $2 = &dim2_temp; $3 = &dim3_temp; $4 = &data_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility") (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEW_ARRAY3) { npy_intp dims[3] = { *$1, *$2, *$3 }; PyObject * array = PyArray_SimpleNewFromData(3, dims, DATA_TYPECODE, (void*)(*$4)); if (!array) SWIG_fail; $result = SWIG_Python_AppendOutput($result,array); } /* Typemap suite for (DATA_TYPE** ARGOUTVIEW_FARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3) */ %typemap(in,numinputs=0) (DATA_TYPE** ARGOUTVIEW_FARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3) (DATA_TYPE* data_temp, DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DIM_TYPE dim3_temp) { $1 = &data_temp; $2 = &dim1_temp; $3 = &dim2_temp; $4 = &dim3_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility,NumPy_Array_Requirements") (DATA_TYPE** ARGOUTVIEW_FARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3) { npy_intp dims[3] = { *$2, *$3, *$4 }; PyObject * obj
= PyArray_SimpleNewFromData(3, dims, DATA_TYPECODE, (void*)(*$1)); PyArrayObject * array = (PyArrayObject*) obj; if (!array || !require_fortran(array)) SWIG_fail; $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEW_FARRAY3) */ %typemap(in,numinputs=0) (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEW_FARRAY3) (DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DIM_TYPE dim3_temp, DATA_TYPE* data_temp) { $1 = &dim1_temp; $2 = &dim2_temp; $3 = &dim3_temp; $4 = &data_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility,NumPy_Array_Requirements") (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEW_FARRAY3) { npy_intp dims[3] = { *$1, *$2, *$3 }; PyObject * obj = PyArray_SimpleNewFromData(3, dims, DATA_TYPECODE, (void*)(*$4)); PyArrayObject * array = (PyArrayObject*) obj; if (!array || !require_fortran(array)) SWIG_fail; $result = SWIG_Python_AppendOutput($result,obj); } %enddef /* %numpy_typemaps() macro */ /* *************************************************************** */ /* Concrete instances of the %numpy_typemaps() macro: Each invocation * below applies all of the typemaps above to the specified data type. */ %numpy_typemaps(signed char , NPY_BYTE , int) %numpy_typemaps(unsigned char , NPY_UBYTE , int) %numpy_typemaps(short , NPY_SHORT , int) %numpy_typemaps(unsigned short , NPY_USHORT , int) %numpy_typemaps(int , NPY_INT , int) %numpy_typemaps(unsigned int , NPY_UINT , int) %numpy_typemaps(long , NPY_LONG , int) %numpy_typemaps(unsigned long , NPY_ULONG , int) %numpy_typemaps(long long , NPY_LONGLONG , int) %numpy_typemaps(unsigned long long, NPY_ULONGLONG, int) %numpy_typemaps(float , NPY_FLOAT , int) %numpy_typemaps(double , NPY_DOUBLE , int) /* *************************************************************** * The following macro expansion does not work, because C++ bool is 4 * bytes and NPY_BOOL is 1 byte * * %numpy_typemaps(bool, NPY_BOOL, int) */ /* *************************************************************** * On my Mac, I get the following warning for this macro expansion: * 'swig/python detected a memory leak of type 'long double *', no destructor found.'
* * %numpy_typemaps(long double, NPY_LONGDOUBLE, int) */ /* *************************************************************** * Swig complains about a syntax error for the following macro * expansions: * * %numpy_typemaps(complex float, NPY_CFLOAT , int) * * %numpy_typemaps(complex double, NPY_CDOUBLE, int) * * %numpy_typemaps(complex long double, NPY_CLONGDOUBLE, int) */ #endif /* SWIGPYTHON */ brian-1.3.1/brian/utils/ccircular/setup.py000066400000000000000000000015321167451777000205430ustar00rootroot00000000000000#!/usr/bin/env python """ setup.py file for C++ version of circular spike container """ from distutils.core import setup, Extension import os, numpy numpy_base_dir = os.path.split(numpy.__file__)[0] numpy_include_dir = os.path.join(numpy_base_dir, 'core/include') # 'c:\\python25\\lib\\site-packages\\numpy\\core\\include' ccircular_module = Extension('_ccircular', sources=['ccircular_wrap.cxx', 'circular.cpp', ], include_dirs=[numpy_include_dir], ) setup (name='ccircular', version='0.1', author="Dan Goodman", description="""C++ version of circular spike container""", ext_modules=[ccircular_module], py_modules=["ccircular"], ) brian-1.3.1/brian/utils/ccircular/swigcompile.bat000066400000000000000000000001131167451777000220350ustar00rootroot00000000000000@echo off swig -python -c++ ccircular.i setup.py build_ext --inplace %* brian-1.3.1/brian/utils/circular.py000066400000000000000000000250551167451777000172460ustar00rootroot00000000000000# ---------------------------------------------------------------------------------- # Copyright ENS, INRIA, CNRS # Contributors: Romain Brette (brette@di.ens.fr) and Dan Goodman (goodman@di.ens.fr) # # Brian is a computer program whose purpose is to simulate models # of biological neural networks. # # This software is governed by the CeCILL license under French law and # abiding by the rules of distribution of free software. You can use, # modify and/ or redistribute the software under the terms of the CeCILL # license as circulated by CEA, CNRS and INRIA at the following URL # "http://www.cecill.info". # # As a counterpart to the access to the source code and rights to copy, # modify and redistribute granted by the license, users are provided only # with a limited warranty and the software's author, the holder of the # economic rights, and the successive licensors have only limited # liability. # # In this respect, the user's attention is drawn to the risks associated # with loading, using, modifying and/or developing or reproducing the # software by the user in light of its specific status of free software, # that may mean that it is complicated to manipulate, and that also # therefore means that it is reserved for developers and experienced # professionals having in-depth computer knowledge. Users are therefore # encouraged to load and test the software's suitability as regards their # requirements in conditions enabling the security of their systems and/or # data to be ensured and, more generally, to use and operate it in the # same conditions as regards security. # # The fact that you are presently reading this means that you have had # knowledge of the CeCILL license and that you accept its terms. # ---------------------------------------------------------------------------------- # ''' Circular arrays Ideas for speed improvements: use put, putmask and take with mode='wrap' and out=... 
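Indexing is circular: V[i] reads the underlying array at position
(V.cursor + i) % len(V), so advancing the cursor re-labels which element is
index 0 without moving any data. A minimal sketch:

    V = CircularVector(5)
    V[0] = 1.0     # written at the current cursor position
    V.advance(1)   # that value is now read back as V[-1]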
''' from numpy import * from scipy import weave import bisect import os import warnings from ..globalprefs import get_global_preference __all__ = ['CircularVector', 'SpikeContainer'] class CircularVector(object): ''' A vector with circular structure. Variables: * X = the data (array of size n) * cursor = current position in the array (where the 0 index is) ''' def __init__(self, n, dtype=float, useweave=False, compiler=None): # pylint: disable-msg=W0621 ''' n is the size of the vector. ''' self.X = zeros(n, dtype=dtype) self.dtype = dtype self.cursor = 0 self.n = n self._useweave = useweave if useweave: self._optimisedreturnarray = zeros(n, dtype=dtype) self._cpp_compiler = compiler self._extra_compile_args = ['-O3'] if self._cpp_compiler == 'gcc': self._extra_compile_args += get_global_preference('gcc_options') # ['-march=native', '-ffast-math'] else: self._cpp_compiler = '' def reinit(self): self.X[:] = zeros(self.n, self.dtype) self.cursor = 0 def advance(self, k): self.cursor = (self.cursor + k) % self.n def __len__(self): return self.n def __getitem__(self, i): ''' V[i] ''' return self.X[(self.cursor + i) % self.n] def __setitem__(self, i, x): ''' V[i]=x ''' self.X[(self.cursor + i) % self.n] = x def __getslice__(self, i, j): n = self.n i0 = (self.cursor + i) % n # pylint: disable-msg=W0621 j0 = (self.cursor + j) % n # pylint: disable-msg=W0621 if j0 >= i0: return self.X[i0:j0] else: #return self.X[range(i0,n)+range(0,j0)] return concatenate((self.X[i0:], self.X[0:j0])) # this version is MUCH faster def get_conditional(self, i, j, min, max, offset=0): """ Returns only those vectors with values between min and max This rather specialised usage of a circular vector is for the benefit of the SpikeContainer class get_spikes method, which is in turn used by the NeuronGroup.get_spikes method. It returns v-offset for those elements v in self[i:j] such that min=min && Xk i: n = self.n i0 = (self.cursor + i) % n # pylint: disable-msg=W0621 j0 = (self.cursor + j) % n if j0 > i0: self.X[i0:j0] = W elif isinstance(W, ndarray): self.X[i0:] = W[0:n - i0] self.X[0:j0] = W[n - i0:n - i0 + j0] def __repr__(self): return repr(hstack((self[0:self.n - 1], self[self.n - 1:self.n]))) def __print__(self): return (hstack((self[0:self.n - 1], self[self.n - 1:self.n]))).__print__() class SpikeContainer(object): ''' An object that stores previous spikes. S[0] is an array of the last spikes (neuron indexes). S[1] is an array with the spikes at time t-dt, etc. S[0:50] contains all spikes in last 50 bins. ''' def __init__(self, m, useweave=False, compiler=None): ''' n = maximum number of spikes stored (not used anymore) m = maximum number of bins stored ''' if m < 2: m = 2 self.S = CircularVector(2, dtype=int, useweave=useweave, compiler=compiler) self.ind = CircularVector(m + 1, dtype=int, useweave=useweave, compiler=compiler) # indexes of bins self.remaining_space = 1 self._useweave = useweave def reinit(self): self.S.reinit() self.ind.reinit() def push(self, spikes): ''' Stores spikes in the array at time dt. 
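If there is not enough space left for the new spikes, the underlying
circular vector is automatically doubled in size (repeatedly if necessary)
before they are stored.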
''' ns = len(spikes) self.remaining_space += (self.ind[2] - self.ind[1]) % self.S.n while ns >= self.remaining_space: # double size of array S = self.S newS = CircularVector(2 * S.n, dtype=int, useweave=S._useweave, compiler=S._cpp_compiler) newS.X[:S.n - S.cursor] = S.X[S.cursor:] newS.X[S.n - S.cursor:S.n] = S.X[:S.cursor] newS.cursor = S.n self.S = newS self.ind.X = (self.ind.X - S.cursor) % S.n self.ind.X[self.ind.X == 0] = S.n self.remaining_space += S.n self.S[0:ns] = spikes self.S.advance(ns) self.ind.advance(1) self.ind[0] = self.S.cursor self.remaining_space -= ns def lastspikes(self): ''' Returns S[0]. ''' return self.S[self.ind[-1] - self.S.cursor:self.S.n] def __getitem__(self, i): ''' S[i]: returns the spikes at time t-i*dt. ''' # NB: this could be optimized return self.S[self.ind[-i - 1] - self.S.cursor:self.ind[-i] - self.S.cursor + self.S.n] # optimised version of the above, but the speed improvement is not very much, might be # better to just wait and write a fully C/C++ version of the whole library def get_spikes(self, delay, origin, N): """ Returns those spikes in self[delay] between origin and origin+N """ return self.S.get_conditional(self.ind[-delay - 1] - self.S.cursor, \ self.ind[-delay] - self.S.cursor + self.S.n, \ origin, origin + N, origin) def __getslice__(self, i, j): return self.S[self.ind[-j] - self.S.cursor:self.ind[-i] - self.S.cursor + self.S.n] def __repr__(self): return "Spike container." def __print__(self): return self.__repr__() try: import ccircular.ccircular as _ccircular class SpikeContainer(_ccircular.SpikeContainer): def __init__(self, m, useweave=False, compiler=None): _ccircular.SpikeContainer.__init__(self, m) #warnings.warn('Using C++ SpikeContainer') except ImportError: pass # I am not sure that class below is useful! class ModInt(object): ''' A number in Z/nZ, i.e., modulo n. Variables: * x = number modulo n * n Implemented: additions and subtraction. N.B.: not a very useful class. ''' def __init__(self, x, n): self.x = x self.n = n def __add__(self, m): if isinstance(m, ModInt): assert self.n == m.n return ModInt((self.x + m.x) % self.n, self.n) else: return ModInt((self.x + m) % self.n, self.n) def __radd__(self, m): if isinstance(m, ModInt): assert self.n == m.n return ModInt((self.x + m.x) % self.n, self.n) else: return ModInt((self.x + m) % self.n, self.n) def __sub__(self, m): if isinstance(m, ModInt): assert self.n == m.n return ModInt((self.x - m.x) % self.n, self.n) else: return ModInt((self.x - m) % self.n, self.n) def __repr__(self): return str(self.x) + ' [' + str(self.n) + ']' def __print__(self): return self.__repr__() brian-1.3.1/brian/utils/documentation.py000066400000000000000000000106711167451777000203110ustar00rootroot00000000000000# ---------------------------------------------------------------------------------- # Copyright ENS, INRIA, CNRS # Contributors: Romain Brette (brette@di.ens.fr) and Dan Goodman (goodman@di.ens.fr) # # Brian is a computer program whose purpose is to simulate models # of biological neural networks. # # This software is governed by the CeCILL license under French law and # abiding by the rules of distribution of free software. You can use, # modify and/ or redistribute the software under the terms of the CeCILL # license as circulated by CEA, CNRS and INRIA at the following URL # "http://www.cecill.info". 
# # As a counterpart to the access to the source code and rights to copy, # modify and redistribute granted by the license, users are provided only # with a limited warranty and the software's author, the holder of the # economic rights, and the successive licensors have only limited # liability. # # In this respect, the user's attention is drawn to the risks associated # with loading, using, modifying and/or developing or reproducing the # software by the user in light of its specific status of free software, # that may mean that it is complicated to manipulate, and that also # therefore means that it is reserved for developers and experienced # professionals having in-depth computer knowledge. Users are therefore # encouraged to load and test the software's suitability as regards their # requirements in conditions enabling the security of their systems and/or # data to be ensured and, more generally, to use and operate it in the # same conditions as regards security. # # The fact that you are presently reading this means that you have had # knowledge of the CeCILL license and that you accept its terms. # ---------------------------------------------------------------------------------- # """ Various utilities for documentation """ def indent_string(s, numtabs=1, spacespertab=4, split=False): """ Indents a given string or list of lines split=True returns the output as a list of lines """ indent = ' ' * (numtabs * spacespertab) if isinstance(s, str): indentedstring = indent + s.replace('\n', '\n' + indent) else: indentedstring = '' for l in s: indentedstring += indent + l + '\n' indentedstring = indentedstring.rstrip() + '\n' if split: return indentedstring.split('\n') return indentedstring def flattened_docstring(docstr, numtabs=0, spacespertab=4, split=False): """ Returns a docstring with the indentation removed according to the Python standard split=True returns the output as a list of lines Changing numtabs adds a custom indentation afterwards """ if isinstance(docstr, str): lines = docstr.split('\n') else: lines = docstr if len(lines) < 2: # nothing to do return docstr flattenedstring = '' # Interpret multiline strings according to the Python docstring standard indentlevel = min(# the smallest number of whitespace characters in the lines of the description map(# the number of whitespaces at the beginning of each string in the lines of the description lambda l:len(l) - len(l.lstrip()), # the number of whitespaces at the beginning of the string filter(# only those lines with some text on are counted lambda l:len(l.strip()) , lines[1:] # ignore the first line ))) if lines[0].strip(): # treat first line differently (probably has nothing on it) flattenedstring += lines[0] + '\n' for l in lines[1:-1]: flattenedstring += l[indentlevel:] + '\n' if lines[-1].strip(): # treat last line differently (probably has nothing on it) flattenedstring += lines[-1][indentlevel:] + '\n' return indent_string(flattenedstring, numtabs=numtabs, spacespertab=spacespertab, split=split) def rest_section(name, level='-', split=False): """ Returns a restructuredtext section heading Looks like: name ---- """ s = name + '\n' + level * len(name) + '\n\n' if split: return s.split('\n') return s if __name__ == '__main__': print rest_section('Test'), print flattened_docstring(flattened_docstring.__doc__), 
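# A minimal usage sketch of the helpers above (illustrative only; the outputs
# indicated are what the current implementations return):
#
#   print rest_section('Usage'),  # 'Usage' underlined with '-', then a blank line
#   print indent_string('x = 1')  # '    x = 1' (numtabs=1, spacespertab=4)
#   flattened_docstring('\n    First line.\n        Detail.\n    ')
#   # returns 'First line.\n    Detail.\n' (common leading indentation removed)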
brian-1.3.1/brian/utils/fastexp/000077500000000000000000000000001167451777000165335ustar00rootroot00000000000000brian-1.3.1/brian/utils/fastexp/__init__.py000066400000000000000000000004751167451777000206520ustar00rootroot00000000000000from numpy import empty, ndarray, exp from fastexp import fastexp as _fastexp def fastexp(x, out=None): if not isinstance(x, ndarray) or len(x.shape) > 1: return exp(x) if out is None: y = empty(x.shape) _fastexp(x, y) return y _fastexp(x, out) return out brian-1.3.1/brian/utils/fastexp/fastexp.cpp000066400000000000000000000001561167451777000207130ustar00rootroot00000000000000#include "fexp.h" void fastexp(double *x, int n, double *y, int m) { for(;n;n--) *y++ = fexp(*x++); } brian-1.3.1/brian/utils/fastexp/fastexp.h000066400000000000000000000000631167451777000203550ustar00rootroot00000000000000void fastexp(double *x, int n, double *y, int m); brian-1.3.1/brian/utils/fastexp/fastexp.i000066400000000000000000000004401167451777000203550ustar00rootroot00000000000000%module fastexp %{ #define SWIG_FILE_WITH_INIT #include "fastexp.h" %} %include "numpy.i" %init %{ import_array(); %} %apply (double* INPLACE_ARRAY1, int DIM1) {(double *x, int n)}; %apply (double* INPLACE_ARRAY1, int DIM1) {(double *y, int m)}; %include "fastexp.h" brian-1.3.1/brian/utils/fastexp/fastexp.py000066400000000000000000000032141167451777000205570ustar00rootroot00000000000000# This file was automatically generated by SWIG (http://www.swig.org). # Version 1.3.36 # # Don't modify this file, modify the SWIG interface instead. # This file is compatible with both classic and new-style classes. import _fastexp import new new_instancemethod = new.instancemethod try: _swig_property = property except NameError: pass # Python < 2.2 doesn't have 'property'. def _swig_setattr_nondynamic(self, class_type, name, value, static=1): if (name == "thisown"): return self.this.own(value) if (name == "this"): if type(value).__name__ == 'PySwigObject': self.__dict__[name] = value return method = class_type.__swig_setmethods__.get(name, None) if method: return method(self, value) if (not static) or hasattr(self, name): self.__dict__[name] = value else: raise AttributeError("You cannot add attributes to %s" % self) def _swig_setattr(self, class_type, name, value): return _swig_setattr_nondynamic(self, class_type, name, value, 0) def _swig_getattr(self, class_type, name): if (name == "thisown"): return self.this.own() method = class_type.__swig_getmethods__.get(name, None) if method: return method(self) raise AttributeError, name def _swig_repr(self): try: strthis = "proxy of " + self.this.__repr__() except: strthis = "" return "<%s.%s; %s >" % (self.__class__.__module__, self.__class__.__name__, strthis,) import types try: _object = types.ObjectType _newclass = 1 except AttributeError: class _object : pass _newclass = 0 del types fastexp = _fastexp.fastexp brian-1.3.1/brian/utils/fastexp/fastexp_wrap.cxx000066400000000000000000003261501167451777000217710ustar00rootroot00000000000000/* ---------------------------------------------------------------------------- * This file was automatically generated by SWIG (http://www.swig.org). * Version 1.3.36 * * This file is not intended to be easily readable and contains a number of * coding conventions designed to improve portability and efficiency. Do not make * changes to this file unless you know what you are doing--modify the SWIG * interface file instead. 
* ----------------------------------------------------------------------------- */ #define SWIGPYTHON #define SWIG_PYTHON_DIRECTOR_NO_VTABLE #ifdef __cplusplus template class SwigValueWrapper { T *tt; public: SwigValueWrapper() : tt(0) { } SwigValueWrapper(const SwigValueWrapper& rhs) : tt(new T(*rhs.tt)) { } SwigValueWrapper(const T& t) : tt(new T(t)) { } ~SwigValueWrapper() { delete tt; } SwigValueWrapper& operator=(const T& t) { delete tt; tt = new T(t); return *this; } operator T&() const { return *tt; } T *operator&() { return tt; } private: SwigValueWrapper& operator=(const SwigValueWrapper& rhs); }; template T SwigValueInit() { return T(); } #endif /* ----------------------------------------------------------------------------- * This section contains generic SWIG labels for method/variable * declarations/attributes, and other compiler dependent labels. * ----------------------------------------------------------------------------- */ /* template workaround for compilers that cannot correctly implement the C++ standard */ #ifndef SWIGTEMPLATEDISAMBIGUATOR # if defined(__SUNPRO_CC) && (__SUNPRO_CC <= 0x560) # define SWIGTEMPLATEDISAMBIGUATOR template # elif defined(__HP_aCC) /* Needed even with `aCC -AA' when `aCC -V' reports HP ANSI C++ B3910B A.03.55 */ /* If we find a maximum version that requires this, the test would be __HP_aCC <= 35500 for A.03.55 */ # define SWIGTEMPLATEDISAMBIGUATOR template # else # define SWIGTEMPLATEDISAMBIGUATOR # endif #endif /* inline attribute */ #ifndef SWIGINLINE # if defined(__cplusplus) || (defined(__GNUC__) && !defined(__STRICT_ANSI__)) # define SWIGINLINE inline # else # define SWIGINLINE # endif #endif /* attribute recognised by some compilers to avoid 'unused' warnings */ #ifndef SWIGUNUSED # if defined(__GNUC__) # if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4)) # define SWIGUNUSED __attribute__ ((__unused__)) # else # define SWIGUNUSED # endif # elif defined(__ICC) # define SWIGUNUSED __attribute__ ((__unused__)) # else # define SWIGUNUSED # endif #endif #ifndef SWIG_MSC_UNSUPPRESS_4505 # if defined(_MSC_VER) # pragma warning(disable : 4505) /* unreferenced local function has been removed */ # endif #endif #ifndef SWIGUNUSEDPARM # ifdef __cplusplus # define SWIGUNUSEDPARM(p) # else # define SWIGUNUSEDPARM(p) p SWIGUNUSED # endif #endif /* internal SWIG method */ #ifndef SWIGINTERN # define SWIGINTERN static SWIGUNUSED #endif /* internal inline SWIG method */ #ifndef SWIGINTERNINLINE # define SWIGINTERNINLINE SWIGINTERN SWIGINLINE #endif /* exporting methods */ #if (__GNUC__ >= 4) || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4) # ifndef GCC_HASCLASSVISIBILITY # define GCC_HASCLASSVISIBILITY # endif #endif #ifndef SWIGEXPORT # if defined(_WIN32) || defined(__WIN32__) || defined(__CYGWIN__) # if defined(STATIC_LINKED) # define SWIGEXPORT # else # define SWIGEXPORT __declspec(dllexport) # endif # else # if defined(__GNUC__) && defined(GCC_HASCLASSVISIBILITY) # define SWIGEXPORT __attribute__ ((visibility("default"))) # else # define SWIGEXPORT # endif # endif #endif /* calling conventions for Windows */ #ifndef SWIGSTDCALL # if defined(_WIN32) || defined(__WIN32__) || defined(__CYGWIN__) # define SWIGSTDCALL __stdcall # else # define SWIGSTDCALL # endif #endif /* Deal with Microsoft's attempt at deprecating C standard runtime functions */ #if !defined(SWIG_NO_CRT_SECURE_NO_DEPRECATE) && defined(_MSC_VER) && !defined(_CRT_SECURE_NO_DEPRECATE) # define _CRT_SECURE_NO_DEPRECATE #endif /* Deal with Microsoft's attempt 
at deprecating methods in the standard C++ library */ #if !defined(SWIG_NO_SCL_SECURE_NO_DEPRECATE) && defined(_MSC_VER) && !defined(_SCL_SECURE_NO_DEPRECATE) # define _SCL_SECURE_NO_DEPRECATE #endif /* Python.h has to appear first */ #include /* ----------------------------------------------------------------------------- * swigrun.swg * * This file contains generic CAPI SWIG runtime support for pointer * type checking. * ----------------------------------------------------------------------------- */ /* This should only be incremented when either the layout of swig_type_info changes, or for whatever reason, the runtime changes incompatibly */ #define SWIG_RUNTIME_VERSION "4" /* define SWIG_TYPE_TABLE_NAME as "SWIG_TYPE_TABLE" */ #ifdef SWIG_TYPE_TABLE # define SWIG_QUOTE_STRING(x) #x # define SWIG_EXPAND_AND_QUOTE_STRING(x) SWIG_QUOTE_STRING(x) # define SWIG_TYPE_TABLE_NAME SWIG_EXPAND_AND_QUOTE_STRING(SWIG_TYPE_TABLE) #else # define SWIG_TYPE_TABLE_NAME #endif /* You can use the SWIGRUNTIME and SWIGRUNTIMEINLINE macros for creating a static or dynamic library from the swig runtime code. In 99.9% of the cases, swig just needs to declare them as 'static'. But only do this if is strictly necessary, ie, if you have problems with your compiler or so. */ #ifndef SWIGRUNTIME # define SWIGRUNTIME SWIGINTERN #endif #ifndef SWIGRUNTIMEINLINE # define SWIGRUNTIMEINLINE SWIGRUNTIME SWIGINLINE #endif /* Generic buffer size */ #ifndef SWIG_BUFFER_SIZE # define SWIG_BUFFER_SIZE 1024 #endif /* Flags for pointer conversions */ #define SWIG_POINTER_DISOWN 0x1 #define SWIG_CAST_NEW_MEMORY 0x2 /* Flags for new pointer objects */ #define SWIG_POINTER_OWN 0x1 /* Flags/methods for returning states. The swig conversion methods, as ConvertPtr, return and integer that tells if the conversion was successful or not. And if not, an error code can be returned (see swigerrors.swg for the codes). Use the following macros/flags to set or process the returning states. In old swig versions, you usually write code as: if (SWIG_ConvertPtr(obj,vptr,ty.flags) != -1) { // success code } else { //fail code } Now you can be more explicit as: int res = SWIG_ConvertPtr(obj,vptr,ty.flags); if (SWIG_IsOK(res)) { // success code } else { // fail code } that seems to be the same, but now you can also do Type *ptr; int res = SWIG_ConvertPtr(obj,(void **)(&ptr),ty.flags); if (SWIG_IsOK(res)) { // success code if (SWIG_IsNewObj(res) { ... delete *ptr; } else { ... } } else { // fail code } I.e., now SWIG_ConvertPtr can return new objects and you can identify the case and take care of the deallocation. Of course that requires also to SWIG_ConvertPtr to return new result values, as int SWIG_ConvertPtr(obj, ptr,...) { if () { if () { *ptr = ; return SWIG_NEWOBJ; } else { *ptr = ; return SWIG_OLDOBJ; } } else { return SWIG_BADOBJ; } } Of course, returning the plain '0(success)/-1(fail)' still works, but you can be more explicit by returning SWIG_BADOBJ, SWIG_ERROR or any of the swig errors code. Finally, if the SWIG_CASTRANK_MODE is enabled, the result code allows to return the 'cast rank', for example, if you have this int food(double) int fooi(int); and you call food(1) // cast rank '1' (1 -> 1.0) fooi(1) // cast rank '0' just use the SWIG_AddCast()/SWIG_CheckState() */ #define SWIG_OK (0) #define SWIG_ERROR (-1) #define SWIG_IsOK(r) (r >= 0) #define SWIG_ArgError(r) ((r != SWIG_ERROR) ? 
r : SWIG_TypeError) /* The CastRankLimit says how many bits are used for the cast rank */ #define SWIG_CASTRANKLIMIT (1 << 8) /* The NewMask denotes the object was created (using new/malloc) */ #define SWIG_NEWOBJMASK (SWIG_CASTRANKLIMIT << 1) /* The TmpMask is for in/out typemaps that use temporal objects */ #define SWIG_TMPOBJMASK (SWIG_NEWOBJMASK << 1) /* Simple returning values */ #define SWIG_BADOBJ (SWIG_ERROR) #define SWIG_OLDOBJ (SWIG_OK) #define SWIG_NEWOBJ (SWIG_OK | SWIG_NEWOBJMASK) #define SWIG_TMPOBJ (SWIG_OK | SWIG_TMPOBJMASK) /* Check, add and del mask methods */ #define SWIG_AddNewMask(r) (SWIG_IsOK(r) ? (r | SWIG_NEWOBJMASK) : r) #define SWIG_DelNewMask(r) (SWIG_IsOK(r) ? (r & ~SWIG_NEWOBJMASK) : r) #define SWIG_IsNewObj(r) (SWIG_IsOK(r) && (r & SWIG_NEWOBJMASK)) #define SWIG_AddTmpMask(r) (SWIG_IsOK(r) ? (r | SWIG_TMPOBJMASK) : r) #define SWIG_DelTmpMask(r) (SWIG_IsOK(r) ? (r & ~SWIG_TMPOBJMASK) : r) #define SWIG_IsTmpObj(r) (SWIG_IsOK(r) && (r & SWIG_TMPOBJMASK)) /* Cast-Rank Mode */ #if defined(SWIG_CASTRANK_MODE) # ifndef SWIG_TypeRank # define SWIG_TypeRank unsigned long # endif # ifndef SWIG_MAXCASTRANK /* Default cast allowed */ # define SWIG_MAXCASTRANK (2) # endif # define SWIG_CASTRANKMASK ((SWIG_CASTRANKLIMIT) -1) # define SWIG_CastRank(r) (r & SWIG_CASTRANKMASK) SWIGINTERNINLINE int SWIG_AddCast(int r) { return SWIG_IsOK(r) ? ((SWIG_CastRank(r) < SWIG_MAXCASTRANK) ? (r + 1) : SWIG_ERROR) : r; } SWIGINTERNINLINE int SWIG_CheckState(int r) { return SWIG_IsOK(r) ? SWIG_CastRank(r) + 1 : 0; } #else /* no cast-rank mode */ # define SWIG_AddCast # define SWIG_CheckState(r) (SWIG_IsOK(r) ? 1 : 0) #endif #include #ifdef __cplusplus extern "C" { #endif typedef void *(*swig_converter_func)(void *, int *); typedef struct swig_type_info *(*swig_dycast_func)(void **); /* Structure to store information on one type */ typedef struct swig_type_info { const char *name; /* mangled name of this type */ const char *str; /* human readable name of this type */ swig_dycast_func dcast; /* dynamic cast function down a hierarchy */ struct swig_cast_info *cast; /* linked list of types that can cast into this type */ void *clientdata; /* language specific type data */ int owndata; /* flag if the structure owns the clientdata */ } swig_type_info; /* Structure to store a type and conversion function used for casting */ typedef struct swig_cast_info { swig_type_info *type; /* pointer to type that is equivalent to this type */ swig_converter_func converter; /* function to cast the void pointers */ struct swig_cast_info *next; /* pointer to next cast in linked list */ struct swig_cast_info *prev; /* pointer to the previous cast */ } swig_cast_info; /* Structure used to store module information * Each module generates one structure like this, and the runtime collects * all of these structures and stores them in a circularly linked list.*/ typedef struct swig_module_info { swig_type_info **types; /* Array of pointers to swig_type_info structures that are in this module */ size_t size; /* Number of types in this module */ struct swig_module_info *next; /* Pointer to next element in circularly linked list */ swig_type_info **type_initial; /* Array of initially generated type structures */ swig_cast_info **cast_initial; /* Array of initially generated casting structures */ void *clientdata; /* Language specific module data */ } swig_module_info; /* Compare two type names skipping the space characters, therefore "char*" == "char *" and "Class" == "Class", etc. 
Return 0 when the two name types are equivalent, as in strncmp, but skipping ' '. */ SWIGRUNTIME int SWIG_TypeNameComp(const char *f1, const char *l1, const char *f2, const char *l2) { for (;(f1 != l1) && (f2 != l2); ++f1, ++f2) { while ((*f1 == ' ') && (f1 != l1)) ++f1; while ((*f2 == ' ') && (f2 != l2)) ++f2; if (*f1 != *f2) return (*f1 > *f2) ? 1 : -1; } return (int)((l1 - f1) - (l2 - f2)); } /* Check type equivalence in a name list like ||... Return 0 if not equal, 1 if equal */ SWIGRUNTIME int SWIG_TypeEquiv(const char *nb, const char *tb) { int equiv = 0; const char* te = tb + strlen(tb); const char* ne = nb; while (!equiv && *ne) { for (nb = ne; *ne; ++ne) { if (*ne == '|') break; } equiv = (SWIG_TypeNameComp(nb, ne, tb, te) == 0) ? 1 : 0; if (*ne) ++ne; } return equiv; } /* Check type equivalence in a name list like ||... Return 0 if equal, -1 if nb < tb, 1 if nb > tb */ SWIGRUNTIME int SWIG_TypeCompare(const char *nb, const char *tb) { int equiv = 0; const char* te = tb + strlen(tb); const char* ne = nb; while (!equiv && *ne) { for (nb = ne; *ne; ++ne) { if (*ne == '|') break; } equiv = (SWIG_TypeNameComp(nb, ne, tb, te) == 0) ? 1 : 0; if (*ne) ++ne; } return equiv; } /* think of this as a c++ template<> or a scheme macro */ #define SWIG_TypeCheck_Template(comparison, ty) \ if (ty) { \ swig_cast_info *iter = ty->cast; \ while (iter) { \ if (comparison) { \ if (iter == ty->cast) return iter; \ /* Move iter to the top of the linked list */ \ iter->prev->next = iter->next; \ if (iter->next) \ iter->next->prev = iter->prev; \ iter->next = ty->cast; \ iter->prev = 0; \ if (ty->cast) ty->cast->prev = iter; \ ty->cast = iter; \ return iter; \ } \ iter = iter->next; \ } \ } \ return 0 /* Check the typename */ SWIGRUNTIME swig_cast_info * SWIG_TypeCheck(const char *c, swig_type_info *ty) { SWIG_TypeCheck_Template(strcmp(iter->type->name, c) == 0, ty); } /* Same as previous function, except strcmp is replaced with a pointer comparison */ SWIGRUNTIME swig_cast_info * SWIG_TypeCheckStruct(swig_type_info *from, swig_type_info *into) { SWIG_TypeCheck_Template(iter->type == from, into); } /* Cast a pointer up an inheritance hierarchy */ SWIGRUNTIMEINLINE void * SWIG_TypeCast(swig_cast_info *ty, void *ptr, int *newmemory) { return ((!ty) || (!ty->converter)) ? ptr : (*ty->converter)(ptr, newmemory); } /* Dynamic pointer casting. Down an inheritance hierarchy */ SWIGRUNTIME swig_type_info * SWIG_TypeDynamicCast(swig_type_info *ty, void **ptr) { swig_type_info *lastty = ty; if (!ty || !ty->dcast) return ty; while (ty && (ty->dcast)) { ty = (*ty->dcast)(ptr); if (ty) lastty = ty; } return lastty; } /* Return the name associated with this type */ SWIGRUNTIMEINLINE const char * SWIG_TypeName(const swig_type_info *ty) { return ty->name; } /* Return the pretty name associated with this type, that is an unmangled type name in a form presentable to the user. */ SWIGRUNTIME const char * SWIG_TypePrettyName(const swig_type_info *type) { /* The "str" field contains the equivalent pretty names of the type, separated by vertical-bar characters. We choose to print the last name, as it is often (?) the most specific. 
*/ if (!type) return NULL; if (type->str != NULL) { const char *last_name = type->str; const char *s; for (s = type->str; *s; s++) if (*s == '|') last_name = s+1; return last_name; } else return type->name; } /* Set the clientdata field for a type */ SWIGRUNTIME void SWIG_TypeClientData(swig_type_info *ti, void *clientdata) { swig_cast_info *cast = ti->cast; /* if (ti->clientdata == clientdata) return; */ ti->clientdata = clientdata; while (cast) { if (!cast->converter) { swig_type_info *tc = cast->type; if (!tc->clientdata) { SWIG_TypeClientData(tc, clientdata); } } cast = cast->next; } } SWIGRUNTIME void SWIG_TypeNewClientData(swig_type_info *ti, void *clientdata) { SWIG_TypeClientData(ti, clientdata); ti->owndata = 1; } /* Search for a swig_type_info structure only by mangled name Search is a O(log #types) We start searching at module start, and finish searching when start == end. Note: if start == end at the beginning of the function, we go all the way around the circular list. */ SWIGRUNTIME swig_type_info * SWIG_MangledTypeQueryModule(swig_module_info *start, swig_module_info *end, const char *name) { swig_module_info *iter = start; do { if (iter->size) { register size_t l = 0; register size_t r = iter->size - 1; do { /* since l+r >= 0, we can (>> 1) instead (/ 2) */ register size_t i = (l + r) >> 1; const char *iname = iter->types[i]->name; if (iname) { register int compare = strcmp(name, iname); if (compare == 0) { return iter->types[i]; } else if (compare < 0) { if (i) { r = i - 1; } else { break; } } else if (compare > 0) { l = i + 1; } } else { break; /* should never happen */ } } while (l <= r); } iter = iter->next; } while (iter != end); return 0; } /* Search for a swig_type_info structure for either a mangled name or a human readable name. It first searches the mangled names of the types, which is a O(log #types) If a type is not found it then searches the human readable names, which is O(#types). We start searching at module start, and finish searching when start == end. Note: if start == end at the beginning of the function, we go all the way around the circular list. 
*/ SWIGRUNTIME swig_type_info * SWIG_TypeQueryModule(swig_module_info *start, swig_module_info *end, const char *name) { /* STEP 1: Search the name field using binary search */ swig_type_info *ret = SWIG_MangledTypeQueryModule(start, end, name); if (ret) { return ret; } else { /* STEP 2: If the type hasn't been found, do a complete search of the str field (the human readable name) */ swig_module_info *iter = start; do { register size_t i = 0; for (; i < iter->size; ++i) { if (iter->types[i]->str && (SWIG_TypeEquiv(iter->types[i]->str, name))) return iter->types[i]; } iter = iter->next; } while (iter != end); } /* neither found a match */ return 0; } /* Pack binary data into a string */ SWIGRUNTIME char * SWIG_PackData(char *c, void *ptr, size_t sz) { static const char hex[17] = "0123456789abcdef"; register const unsigned char *u = (unsigned char *) ptr; register const unsigned char *eu = u + sz; for (; u != eu; ++u) { register unsigned char uu = *u; *(c++) = hex[(uu & 0xf0) >> 4]; *(c++) = hex[uu & 0xf]; } return c; } /* Unpack binary data from a string */ SWIGRUNTIME const char * SWIG_UnpackData(const char *c, void *ptr, size_t sz) { register unsigned char *u = (unsigned char *) ptr; register const unsigned char *eu = u + sz; for (; u != eu; ++u) { register char d = *(c++); register unsigned char uu; if ((d >= '0') && (d <= '9')) uu = ((d - '0') << 4); else if ((d >= 'a') && (d <= 'f')) uu = ((d - ('a'-10)) << 4); else return (char *) 0; d = *(c++); if ((d >= '0') && (d <= '9')) uu |= (d - '0'); else if ((d >= 'a') && (d <= 'f')) uu |= (d - ('a'-10)); else return (char *) 0; *u = uu; } return c; } /* Pack 'void *' into a string buffer. */ SWIGRUNTIME char * SWIG_PackVoidPtr(char *buff, void *ptr, const char *name, size_t bsz) { char *r = buff; if ((2*sizeof(void *) + 2) > bsz) return 0; *(r++) = '_'; r = SWIG_PackData(r,&ptr,sizeof(void *)); if (strlen(name) + 1 > (bsz - (r - buff))) return 0; strcpy(r,name); return buff; } SWIGRUNTIME const char * SWIG_UnpackVoidPtr(const char *c, void **ptr, const char *name) { if (*c != '_') { if (strcmp(c,"NULL") == 0) { *ptr = (void *) 0; return name; } else { return 0; } } return SWIG_UnpackData(++c,ptr,sizeof(void *)); } SWIGRUNTIME char * SWIG_PackDataName(char *buff, void *ptr, size_t sz, const char *name, size_t bsz) { char *r = buff; size_t lname = (name ? 
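/* ---------------------------------------------------------------------------
 * Illustrative sketch (not part of the generated wrapper): the packed-data
 * helpers above serialise raw memory into a printable string of the form
 * "_<hex bytes><type name>".  A minimal round trip through a pointer value,
 * with hypothetical variable names, could look like this:
 *
 *   char buff[SWIG_BUFFER_SIZE];
 *   double x = 1.0;
 *   void *in = &x, *out = 0;
 *   if (SWIG_PackVoidPtr(buff, in, "p_double", sizeof(buff)) &&
 *       SWIG_UnpackVoidPtr(buff, &out, "p_double")) {
 *     // here out == in; buff held "_<hex of &x>p_double"
 *   }
 *
 * PySwigObject_str further below uses this helper to print pointer values,
 * while PySwigPacked uses the sized variants SWIG_PackDataName and
 * SWIG_UnpackDataName.
 * ------------------------------------------------------------------------ */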
strlen(name) : 0); if ((2*sz + 2 + lname) > bsz) return 0; *(r++) = '_'; r = SWIG_PackData(r,ptr,sz); if (lname) { strncpy(r,name,lname+1); } else { *r = 0; } return buff; } SWIGRUNTIME const char * SWIG_UnpackDataName(const char *c, void *ptr, size_t sz, const char *name) { if (*c != '_') { if (strcmp(c,"NULL") == 0) { memset(ptr,0,sz); return name; } else { return 0; } } return SWIG_UnpackData(++c,ptr,sz); } #ifdef __cplusplus } #endif /* Errors in SWIG */ #define SWIG_UnknownError -1 #define SWIG_IOError -2 #define SWIG_RuntimeError -3 #define SWIG_IndexError -4 #define SWIG_TypeError -5 #define SWIG_DivisionByZero -6 #define SWIG_OverflowError -7 #define SWIG_SyntaxError -8 #define SWIG_ValueError -9 #define SWIG_SystemError -10 #define SWIG_AttributeError -11 #define SWIG_MemoryError -12 #define SWIG_NullReferenceError -13 /* Add PyOS_snprintf for old Pythons */ #if PY_VERSION_HEX < 0x02020000 # if defined(_MSC_VER) || defined(__BORLANDC__) || defined(_WATCOM) # define PyOS_snprintf _snprintf # else # define PyOS_snprintf snprintf # endif #endif /* A crude PyString_FromFormat implementation for old Pythons */ #if PY_VERSION_HEX < 0x02020000 #ifndef SWIG_PYBUFFER_SIZE # define SWIG_PYBUFFER_SIZE 1024 #endif static PyObject * PyString_FromFormat(const char *fmt, ...) { va_list ap; char buf[SWIG_PYBUFFER_SIZE * 2]; int res; va_start(ap, fmt); res = vsnprintf(buf, sizeof(buf), fmt, ap); va_end(ap); return (res < 0 || res >= (int)sizeof(buf)) ? 0 : PyString_FromString(buf); } #endif /* Add PyObject_Del for old Pythons */ #if PY_VERSION_HEX < 0x01060000 # define PyObject_Del(op) PyMem_DEL((op)) #endif #ifndef PyObject_DEL # define PyObject_DEL PyObject_Del #endif /* A crude PyExc_StopIteration exception for old Pythons */ #if PY_VERSION_HEX < 0x02020000 # ifndef PyExc_StopIteration # define PyExc_StopIteration PyExc_RuntimeError # endif # ifndef PyObject_GenericGetAttr # define PyObject_GenericGetAttr 0 # endif #endif /* Py_NotImplemented is defined in 2.1 and up. */ #if PY_VERSION_HEX < 0x02010000 # ifndef Py_NotImplemented # define Py_NotImplemented PyExc_RuntimeError # endif #endif /* A crude PyString_AsStringAndSize implementation for old Pythons */ #if PY_VERSION_HEX < 0x02010000 # ifndef PyString_AsStringAndSize # define PyString_AsStringAndSize(obj, s, len) {*s = PyString_AsString(obj); *len = *s ? strlen(*s) : 0;} # endif #endif /* PySequence_Size for old Pythons */ #if PY_VERSION_HEX < 0x02000000 # ifndef PySequence_Size # define PySequence_Size PySequence_Length # endif #endif /* PyBool_FromLong for old Pythons */ #if PY_VERSION_HEX < 0x02030000 static PyObject *PyBool_FromLong(long ok) { PyObject *result = ok ? 
Py_True : Py_False; Py_INCREF(result); return result; } #endif /* Py_ssize_t for old Pythons */ /* This code is as recommended by: */ /* http://www.python.org/dev/peps/pep-0353/#conversion-guidelines */ #if PY_VERSION_HEX < 0x02050000 && !defined(PY_SSIZE_T_MIN) typedef int Py_ssize_t; # define PY_SSIZE_T_MAX INT_MAX # define PY_SSIZE_T_MIN INT_MIN #endif /* ----------------------------------------------------------------------------- * error manipulation * ----------------------------------------------------------------------------- */ SWIGRUNTIME PyObject* SWIG_Python_ErrorType(int code) { PyObject* type = 0; switch(code) { case SWIG_MemoryError: type = PyExc_MemoryError; break; case SWIG_IOError: type = PyExc_IOError; break; case SWIG_RuntimeError: type = PyExc_RuntimeError; break; case SWIG_IndexError: type = PyExc_IndexError; break; case SWIG_TypeError: type = PyExc_TypeError; break; case SWIG_DivisionByZero: type = PyExc_ZeroDivisionError; break; case SWIG_OverflowError: type = PyExc_OverflowError; break; case SWIG_SyntaxError: type = PyExc_SyntaxError; break; case SWIG_ValueError: type = PyExc_ValueError; break; case SWIG_SystemError: type = PyExc_SystemError; break; case SWIG_AttributeError: type = PyExc_AttributeError; break; default: type = PyExc_RuntimeError; } return type; } SWIGRUNTIME void SWIG_Python_AddErrorMsg(const char* mesg) { PyObject *type = 0; PyObject *value = 0; PyObject *traceback = 0; if (PyErr_Occurred()) PyErr_Fetch(&type, &value, &traceback); if (value) { PyObject *old_str = PyObject_Str(value); PyErr_Clear(); Py_XINCREF(type); PyErr_Format(type, "%s %s", PyString_AsString(old_str), mesg); Py_DECREF(old_str); Py_DECREF(value); } else { PyErr_SetString(PyExc_RuntimeError, mesg); } } #if defined(SWIG_PYTHON_NO_THREADS) # if defined(SWIG_PYTHON_THREADS) # undef SWIG_PYTHON_THREADS # endif #endif #if defined(SWIG_PYTHON_THREADS) /* Threading support is enabled */ # if !defined(SWIG_PYTHON_USE_GIL) && !defined(SWIG_PYTHON_NO_USE_GIL) # if (PY_VERSION_HEX >= 0x02030000) /* For 2.3 or later, use the PyGILState calls */ # define SWIG_PYTHON_USE_GIL # endif # endif # if defined(SWIG_PYTHON_USE_GIL) /* Use PyGILState threads calls */ # ifndef SWIG_PYTHON_INITIALIZE_THREADS # define SWIG_PYTHON_INITIALIZE_THREADS PyEval_InitThreads() # endif # ifdef __cplusplus /* C++ code */ class SWIG_Python_Thread_Block { bool status; PyGILState_STATE state; public: void end() { if (status) { PyGILState_Release(state); status = false;} } SWIG_Python_Thread_Block() : status(true), state(PyGILState_Ensure()) {} ~SWIG_Python_Thread_Block() { end(); } }; class SWIG_Python_Thread_Allow { bool status; PyThreadState *save; public: void end() { if (status) { PyEval_RestoreThread(save); status = false; }} SWIG_Python_Thread_Allow() : status(true), save(PyEval_SaveThread()) {} ~SWIG_Python_Thread_Allow() { end(); } }; # define SWIG_PYTHON_THREAD_BEGIN_BLOCK SWIG_Python_Thread_Block _swig_thread_block # define SWIG_PYTHON_THREAD_END_BLOCK _swig_thread_block.end() # define SWIG_PYTHON_THREAD_BEGIN_ALLOW SWIG_Python_Thread_Allow _swig_thread_allow # define SWIG_PYTHON_THREAD_END_ALLOW _swig_thread_allow.end() # else /* C code */ # define SWIG_PYTHON_THREAD_BEGIN_BLOCK PyGILState_STATE _swig_thread_block = PyGILState_Ensure() # define SWIG_PYTHON_THREAD_END_BLOCK PyGILState_Release(_swig_thread_block) # define SWIG_PYTHON_THREAD_BEGIN_ALLOW PyThreadState *_swig_thread_allow = PyEval_SaveThread() # define SWIG_PYTHON_THREAD_END_ALLOW PyEval_RestoreThread(_swig_thread_allow) # endif # else /* Old 
thread way, not implemented, user must provide it */ # if !defined(SWIG_PYTHON_INITIALIZE_THREADS) # define SWIG_PYTHON_INITIALIZE_THREADS # endif # if !defined(SWIG_PYTHON_THREAD_BEGIN_BLOCK) # define SWIG_PYTHON_THREAD_BEGIN_BLOCK # endif # if !defined(SWIG_PYTHON_THREAD_END_BLOCK) # define SWIG_PYTHON_THREAD_END_BLOCK # endif # if !defined(SWIG_PYTHON_THREAD_BEGIN_ALLOW) # define SWIG_PYTHON_THREAD_BEGIN_ALLOW # endif # if !defined(SWIG_PYTHON_THREAD_END_ALLOW) # define SWIG_PYTHON_THREAD_END_ALLOW # endif # endif #else /* No thread support */ # define SWIG_PYTHON_INITIALIZE_THREADS # define SWIG_PYTHON_THREAD_BEGIN_BLOCK # define SWIG_PYTHON_THREAD_END_BLOCK # define SWIG_PYTHON_THREAD_BEGIN_ALLOW # define SWIG_PYTHON_THREAD_END_ALLOW #endif /* ----------------------------------------------------------------------------- * Python API portion that goes into the runtime * ----------------------------------------------------------------------------- */ #ifdef __cplusplus extern "C" { #if 0 } /* cc-mode */ #endif #endif /* ----------------------------------------------------------------------------- * Constant declarations * ----------------------------------------------------------------------------- */ /* Constant Types */ #define SWIG_PY_POINTER 4 #define SWIG_PY_BINARY 5 /* Constant information structure */ typedef struct swig_const_info { int type; char *name; long lvalue; double dvalue; void *pvalue; swig_type_info **ptype; } swig_const_info; #ifdef __cplusplus #if 0 { /* cc-mode */ #endif } #endif /* ----------------------------------------------------------------------------- * See the LICENSE file for information on copyright, usage and redistribution * of SWIG, and the README file for authors - http://www.swig.org/release.html. * * pyrun.swg * * This file contains the runtime support for Python modules * and includes code for managing global variables and pointer * type checking. 
* * ----------------------------------------------------------------------------- */ /* Common SWIG API */ /* for raw pointers */ #define SWIG_Python_ConvertPtr(obj, pptr, type, flags) SWIG_Python_ConvertPtrAndOwn(obj, pptr, type, flags, 0) #define SWIG_ConvertPtr(obj, pptr, type, flags) SWIG_Python_ConvertPtr(obj, pptr, type, flags) #define SWIG_ConvertPtrAndOwn(obj,pptr,type,flags,own) SWIG_Python_ConvertPtrAndOwn(obj, pptr, type, flags, own) #define SWIG_NewPointerObj(ptr, type, flags) SWIG_Python_NewPointerObj(ptr, type, flags) #define SWIG_CheckImplicit(ty) SWIG_Python_CheckImplicit(ty) #define SWIG_AcquirePtr(ptr, src) SWIG_Python_AcquirePtr(ptr, src) #define swig_owntype int /* for raw packed data */ #define SWIG_ConvertPacked(obj, ptr, sz, ty) SWIG_Python_ConvertPacked(obj, ptr, sz, ty) #define SWIG_NewPackedObj(ptr, sz, type) SWIG_Python_NewPackedObj(ptr, sz, type) /* for class or struct pointers */ #define SWIG_ConvertInstance(obj, pptr, type, flags) SWIG_ConvertPtr(obj, pptr, type, flags) #define SWIG_NewInstanceObj(ptr, type, flags) SWIG_NewPointerObj(ptr, type, flags) /* for C or C++ function pointers */ #define SWIG_ConvertFunctionPtr(obj, pptr, type) SWIG_Python_ConvertFunctionPtr(obj, pptr, type) #define SWIG_NewFunctionPtrObj(ptr, type) SWIG_Python_NewPointerObj(ptr, type, 0) /* for C++ member pointers, ie, member methods */ #define SWIG_ConvertMember(obj, ptr, sz, ty) SWIG_Python_ConvertPacked(obj, ptr, sz, ty) #define SWIG_NewMemberObj(ptr, sz, type) SWIG_Python_NewPackedObj(ptr, sz, type) /* Runtime API */ #define SWIG_GetModule(clientdata) SWIG_Python_GetModule() #define SWIG_SetModule(clientdata, pointer) SWIG_Python_SetModule(pointer) #define SWIG_NewClientData(obj) PySwigClientData_New(obj) #define SWIG_SetErrorObj SWIG_Python_SetErrorObj #define SWIG_SetErrorMsg SWIG_Python_SetErrorMsg #define SWIG_ErrorType(code) SWIG_Python_ErrorType(code) #define SWIG_Error(code, msg) SWIG_Python_SetErrorMsg(SWIG_ErrorType(code), msg) #define SWIG_fail goto fail /* Runtime API implementation */ /* Error manipulation */ SWIGINTERN void SWIG_Python_SetErrorObj(PyObject *errtype, PyObject *obj) { SWIG_PYTHON_THREAD_BEGIN_BLOCK; PyErr_SetObject(errtype, obj); Py_DECREF(obj); SWIG_PYTHON_THREAD_END_BLOCK; } SWIGINTERN void SWIG_Python_SetErrorMsg(PyObject *errtype, const char *msg) { SWIG_PYTHON_THREAD_BEGIN_BLOCK; PyErr_SetString(errtype, (char *) msg); SWIG_PYTHON_THREAD_END_BLOCK; } #define SWIG_Python_Raise(obj, type, desc) SWIG_Python_SetErrorObj(SWIG_Python_ExceptionType(desc), obj) /* Set a constant value */ SWIGINTERN void SWIG_Python_SetConstant(PyObject *d, const char *name, PyObject *obj) { PyDict_SetItemString(d, (char*) name, obj); Py_DECREF(obj); } /* Append a value to the result obj */ SWIGINTERN PyObject* SWIG_Python_AppendOutput(PyObject* result, PyObject* obj) { #if !defined(SWIG_PYTHON_OUTPUT_TUPLE) if (!result) { result = obj; } else if (result == Py_None) { Py_DECREF(result); result = obj; } else { if (!PyList_Check(result)) { PyObject *o2 = result; result = PyList_New(1); PyList_SetItem(result, 0, o2); } PyList_Append(result,obj); Py_DECREF(obj); } return result; #else PyObject* o2; PyObject* o3; if (!result) { result = obj; } else if (result == Py_None) { Py_DECREF(result); result = obj; } else { if (!PyTuple_Check(result)) { o2 = result; result = PyTuple_New(1); PyTuple_SET_ITEM(result, 0, o2); } o3 = PyTuple_New(1); PyTuple_SET_ITEM(o3, 0, obj); o2 = result; result = PySequence_Concat(o2, o3); Py_DECREF(o2); Py_DECREF(o3); } return result; #endif } /* Unpack 
the argument tuple */ SWIGINTERN int SWIG_Python_UnpackTuple(PyObject *args, const char *name, Py_ssize_t min, Py_ssize_t max, PyObject **objs) { if (!args) { if (!min && !max) { return 1; } else { PyErr_Format(PyExc_TypeError, "%s expected %s%d arguments, got none", name, (min == max ? "" : "at least "), (int)min); return 0; } } if (!PyTuple_Check(args)) { PyErr_SetString(PyExc_SystemError, "UnpackTuple() argument list is not a tuple"); return 0; } else { register Py_ssize_t l = PyTuple_GET_SIZE(args); if (l < min) { PyErr_Format(PyExc_TypeError, "%s expected %s%d arguments, got %d", name, (min == max ? "" : "at least "), (int)min, (int)l); return 0; } else if (l > max) { PyErr_Format(PyExc_TypeError, "%s expected %s%d arguments, got %d", name, (min == max ? "" : "at most "), (int)max, (int)l); return 0; } else { register int i; for (i = 0; i < l; ++i) { objs[i] = PyTuple_GET_ITEM(args, i); } for (; l < max; ++l) { objs[l] = 0; } return i + 1; } } } /* A functor is a function object with one single object argument */ #if PY_VERSION_HEX >= 0x02020000 #define SWIG_Python_CallFunctor(functor, obj) PyObject_CallFunctionObjArgs(functor, obj, NULL); #else #define SWIG_Python_CallFunctor(functor, obj) PyObject_CallFunction(functor, "O", obj); #endif /* Helper for static pointer initialization for both C and C++ code, for example static PyObject *SWIG_STATIC_POINTER(MyVar) = NewSomething(...); */ #ifdef __cplusplus #define SWIG_STATIC_POINTER(var) var #else #define SWIG_STATIC_POINTER(var) var = 0; if (!var) var #endif /* ----------------------------------------------------------------------------- * Pointer declarations * ----------------------------------------------------------------------------- */ /* Flags for new pointer objects */ #define SWIG_POINTER_NOSHADOW (SWIG_POINTER_OWN << 1) #define SWIG_POINTER_NEW (SWIG_POINTER_NOSHADOW | SWIG_POINTER_OWN) #define SWIG_POINTER_IMPLICIT_CONV (SWIG_POINTER_DISOWN << 1) #ifdef __cplusplus extern "C" { #if 0 } /* cc-mode */ #endif #endif /* How to access Py_None */ #if defined(_WIN32) || defined(__WIN32__) || defined(__CYGWIN__) # ifndef SWIG_PYTHON_NO_BUILD_NONE # ifndef SWIG_PYTHON_BUILD_NONE # define SWIG_PYTHON_BUILD_NONE # endif # endif #endif #ifdef SWIG_PYTHON_BUILD_NONE # ifdef Py_None # undef Py_None # define Py_None SWIG_Py_None() # endif SWIGRUNTIMEINLINE PyObject * _SWIG_Py_None(void) { PyObject *none = Py_BuildValue((char*)""); Py_DECREF(none); return none; } SWIGRUNTIME PyObject * SWIG_Py_None(void) { static PyObject *SWIG_STATIC_POINTER(none) = _SWIG_Py_None(); return none; } #endif /* The python void return value */ SWIGRUNTIMEINLINE PyObject * SWIG_Py_Void(void) { PyObject *none = Py_None; Py_INCREF(none); return none; } /* PySwigClientData */ typedef struct { PyObject *klass; PyObject *newraw; PyObject *newargs; PyObject *destroy; int delargs; int implicitconv; } PySwigClientData; SWIGRUNTIMEINLINE int SWIG_Python_CheckImplicit(swig_type_info *ty) { PySwigClientData *data = (PySwigClientData *)ty->clientdata; return data ? data->implicitconv : 0; } SWIGRUNTIMEINLINE PyObject * SWIG_Python_ExceptionType(swig_type_info *desc) { PySwigClientData *data = desc ? (PySwigClientData *) desc->clientdata : 0; PyObject *klass = data ? data->klass : 0; return (klass ? 
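/* ---------------------------------------------------------------------------
 * Illustrative sketch (not part of the generated wrapper): generated method
 * wrappers drive the helpers above in a fixed pattern -- unpack the argument
 * tuple, convert each argument, and jump to a shared 'fail' label on error.
 * The wrapper name and argument count below are hypothetical:
 *
 *   static PyObject *_wrap_identity(PyObject *SWIGUNUSEDPARM(self), PyObject *args) {
 *     PyObject *argv[1];
 *     if (!SWIG_Python_UnpackTuple(args, (char *)"identity", 1, 1, argv)) SWIG_fail;
 *     Py_INCREF(argv[0]);
 *     return argv[0];            // success: hand the object back unchanged
 *   fail:
 *     return NULL;               // UnpackTuple has already set an exception
 *   }
 *
 * Wrappers report their own failures with SWIG_Error(code, msg), which routes
 * through SWIG_Python_ErrorType to pick the matching Python exception class.
 * ------------------------------------------------------------------------ */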
klass : PyExc_RuntimeError); } SWIGRUNTIME PySwigClientData * PySwigClientData_New(PyObject* obj) { if (!obj) { return 0; } else { PySwigClientData *data = (PySwigClientData *)malloc(sizeof(PySwigClientData)); /* the klass element */ data->klass = obj; Py_INCREF(data->klass); /* the newraw method and newargs arguments used to create a new raw instance */ if (PyClass_Check(obj)) { data->newraw = 0; data->newargs = obj; Py_INCREF(obj); } else { #if (PY_VERSION_HEX < 0x02020000) data->newraw = 0; #else data->newraw = PyObject_GetAttrString(data->klass, (char *)"__new__"); #endif if (data->newraw) { Py_INCREF(data->newraw); data->newargs = PyTuple_New(1); PyTuple_SetItem(data->newargs, 0, obj); } else { data->newargs = obj; } Py_INCREF(data->newargs); } /* the destroy method, aka as the C++ delete method */ data->destroy = PyObject_GetAttrString(data->klass, (char *)"__swig_destroy__"); if (PyErr_Occurred()) { PyErr_Clear(); data->destroy = 0; } if (data->destroy) { int flags; Py_INCREF(data->destroy); flags = PyCFunction_GET_FLAGS(data->destroy); #ifdef METH_O data->delargs = !(flags & (METH_O)); #else data->delargs = 0; #endif } else { data->delargs = 0; } data->implicitconv = 0; return data; } } SWIGRUNTIME void PySwigClientData_Del(PySwigClientData* data) { Py_XDECREF(data->newraw); Py_XDECREF(data->newargs); Py_XDECREF(data->destroy); } /* =============== PySwigObject =====================*/ typedef struct { PyObject_HEAD void *ptr; swig_type_info *ty; int own; PyObject *next; } PySwigObject; SWIGRUNTIME PyObject * PySwigObject_long(PySwigObject *v) { return PyLong_FromVoidPtr(v->ptr); } SWIGRUNTIME PyObject * PySwigObject_format(const char* fmt, PySwigObject *v) { PyObject *res = NULL; PyObject *args = PyTuple_New(1); if (args) { if (PyTuple_SetItem(args, 0, PySwigObject_long(v)) == 0) { PyObject *ofmt = PyString_FromString(fmt); if (ofmt) { res = PyString_Format(ofmt,args); Py_DECREF(ofmt); } Py_DECREF(args); } } return res; } SWIGRUNTIME PyObject * PySwigObject_oct(PySwigObject *v) { return PySwigObject_format("%o",v); } SWIGRUNTIME PyObject * PySwigObject_hex(PySwigObject *v) { return PySwigObject_format("%x",v); } SWIGRUNTIME PyObject * #ifdef METH_NOARGS PySwigObject_repr(PySwigObject *v) #else PySwigObject_repr(PySwigObject *v, PyObject *args) #endif { const char *name = SWIG_TypePrettyName(v->ty); PyObject *hex = PySwigObject_hex(v); PyObject *repr = PyString_FromFormat("", name, PyString_AsString(hex)); Py_DECREF(hex); if (v->next) { #ifdef METH_NOARGS PyObject *nrep = PySwigObject_repr((PySwigObject *)v->next); #else PyObject *nrep = PySwigObject_repr((PySwigObject *)v->next, args); #endif PyString_ConcatAndDel(&repr,nrep); } return repr; } SWIGRUNTIME int PySwigObject_print(PySwigObject *v, FILE *fp, int SWIGUNUSEDPARM(flags)) { #ifdef METH_NOARGS PyObject *repr = PySwigObject_repr(v); #else PyObject *repr = PySwigObject_repr(v, NULL); #endif if (repr) { fputs(PyString_AsString(repr), fp); Py_DECREF(repr); return 0; } else { return 1; } } SWIGRUNTIME PyObject * PySwigObject_str(PySwigObject *v) { char result[SWIG_BUFFER_SIZE]; return SWIG_PackVoidPtr(result, v->ptr, v->ty->name, sizeof(result)) ? PyString_FromString(result) : 0; } SWIGRUNTIME int PySwigObject_compare(PySwigObject *v, PySwigObject *w) { void *i = v->ptr; void *j = w->ptr; return (i < j) ? -1 : ((i > j) ? 
1 : 0); } SWIGRUNTIME PyTypeObject* _PySwigObject_type(void); SWIGRUNTIME PyTypeObject* PySwigObject_type(void) { static PyTypeObject *SWIG_STATIC_POINTER(type) = _PySwigObject_type(); return type; } SWIGRUNTIMEINLINE int PySwigObject_Check(PyObject *op) { return ((op)->ob_type == PySwigObject_type()) || (strcmp((op)->ob_type->tp_name,"PySwigObject") == 0); } SWIGRUNTIME PyObject * PySwigObject_New(void *ptr, swig_type_info *ty, int own); SWIGRUNTIME void PySwigObject_dealloc(PyObject *v) { PySwigObject *sobj = (PySwigObject *) v; PyObject *next = sobj->next; if (sobj->own == SWIG_POINTER_OWN) { swig_type_info *ty = sobj->ty; PySwigClientData *data = ty ? (PySwigClientData *) ty->clientdata : 0; PyObject *destroy = data ? data->destroy : 0; if (destroy) { /* destroy is always a VARARGS method */ PyObject *res; if (data->delargs) { /* we need to create a temporal object to carry the destroy operation */ PyObject *tmp = PySwigObject_New(sobj->ptr, ty, 0); res = SWIG_Python_CallFunctor(destroy, tmp); Py_DECREF(tmp); } else { PyCFunction meth = PyCFunction_GET_FUNCTION(destroy); PyObject *mself = PyCFunction_GET_SELF(destroy); res = ((*meth)(mself, v)); } Py_XDECREF(res); } #if !defined(SWIG_PYTHON_SILENT_MEMLEAK) else { const char *name = SWIG_TypePrettyName(ty); printf("swig/python detected a memory leak of type '%s', no destructor found.\n", (name ? name : "unknown")); } #endif } Py_XDECREF(next); PyObject_DEL(v); } SWIGRUNTIME PyObject* PySwigObject_append(PyObject* v, PyObject* next) { PySwigObject *sobj = (PySwigObject *) v; #ifndef METH_O PyObject *tmp = 0; if (!PyArg_ParseTuple(next,(char *)"O:append", &tmp)) return NULL; next = tmp; #endif if (!PySwigObject_Check(next)) { return NULL; } sobj->next = next; Py_INCREF(next); return SWIG_Py_Void(); } SWIGRUNTIME PyObject* #ifdef METH_NOARGS PySwigObject_next(PyObject* v) #else PySwigObject_next(PyObject* v, PyObject *SWIGUNUSEDPARM(args)) #endif { PySwigObject *sobj = (PySwigObject *) v; if (sobj->next) { Py_INCREF(sobj->next); return sobj->next; } else { return SWIG_Py_Void(); } } SWIGINTERN PyObject* #ifdef METH_NOARGS PySwigObject_disown(PyObject *v) #else PySwigObject_disown(PyObject* v, PyObject *SWIGUNUSEDPARM(args)) #endif { PySwigObject *sobj = (PySwigObject *)v; sobj->own = 0; return SWIG_Py_Void(); } SWIGINTERN PyObject* #ifdef METH_NOARGS PySwigObject_acquire(PyObject *v) #else PySwigObject_acquire(PyObject* v, PyObject *SWIGUNUSEDPARM(args)) #endif { PySwigObject *sobj = (PySwigObject *)v; sobj->own = SWIG_POINTER_OWN; return SWIG_Py_Void(); } SWIGINTERN PyObject* PySwigObject_own(PyObject *v, PyObject *args) { PyObject *val = 0; #if (PY_VERSION_HEX < 0x02020000) if (!PyArg_ParseTuple(args,(char *)"|O:own",&val)) #else if (!PyArg_UnpackTuple(args, (char *)"own", 0, 1, &val)) #endif { return NULL; } else { PySwigObject *sobj = (PySwigObject *)v; PyObject *obj = PyBool_FromLong(sobj->own); if (val) { #ifdef METH_NOARGS if (PyObject_IsTrue(val)) { PySwigObject_acquire(v); } else { PySwigObject_disown(v); } #else if (PyObject_IsTrue(val)) { PySwigObject_acquire(v,args); } else { PySwigObject_disown(v,args); } #endif } return obj; } } #ifdef METH_O static PyMethodDef swigobject_methods[] = { {(char *)"disown", (PyCFunction)PySwigObject_disown, METH_NOARGS, (char *)"releases ownership of the pointer"}, {(char *)"acquire", (PyCFunction)PySwigObject_acquire, METH_NOARGS, (char *)"aquires ownership of the pointer"}, {(char *)"own", (PyCFunction)PySwigObject_own, METH_VARARGS, (char *)"returns/sets ownership of the pointer"}, {(char 
*)"append", (PyCFunction)PySwigObject_append, METH_O, (char *)"appends another 'this' object"}, {(char *)"next", (PyCFunction)PySwigObject_next, METH_NOARGS, (char *)"returns the next 'this' object"}, {(char *)"__repr__",(PyCFunction)PySwigObject_repr, METH_NOARGS, (char *)"returns object representation"}, {0, 0, 0, 0} }; #else static PyMethodDef swigobject_methods[] = { {(char *)"disown", (PyCFunction)PySwigObject_disown, METH_VARARGS, (char *)"releases ownership of the pointer"}, {(char *)"acquire", (PyCFunction)PySwigObject_acquire, METH_VARARGS, (char *)"aquires ownership of the pointer"}, {(char *)"own", (PyCFunction)PySwigObject_own, METH_VARARGS, (char *)"returns/sets ownership of the pointer"}, {(char *)"append", (PyCFunction)PySwigObject_append, METH_VARARGS, (char *)"appends another 'this' object"}, {(char *)"next", (PyCFunction)PySwigObject_next, METH_VARARGS, (char *)"returns the next 'this' object"}, {(char *)"__repr__",(PyCFunction)PySwigObject_repr, METH_VARARGS, (char *)"returns object representation"}, {0, 0, 0, 0} }; #endif #if PY_VERSION_HEX < 0x02020000 SWIGINTERN PyObject * PySwigObject_getattr(PySwigObject *sobj,char *name) { return Py_FindMethod(swigobject_methods, (PyObject *)sobj, name); } #endif SWIGRUNTIME PyTypeObject* _PySwigObject_type(void) { static char swigobject_doc[] = "Swig object carries a C/C++ instance pointer"; static PyNumberMethods PySwigObject_as_number = { (binaryfunc)0, /*nb_add*/ (binaryfunc)0, /*nb_subtract*/ (binaryfunc)0, /*nb_multiply*/ (binaryfunc)0, /*nb_divide*/ (binaryfunc)0, /*nb_remainder*/ (binaryfunc)0, /*nb_divmod*/ (ternaryfunc)0,/*nb_power*/ (unaryfunc)0, /*nb_negative*/ (unaryfunc)0, /*nb_positive*/ (unaryfunc)0, /*nb_absolute*/ (inquiry)0, /*nb_nonzero*/ 0, /*nb_invert*/ 0, /*nb_lshift*/ 0, /*nb_rshift*/ 0, /*nb_and*/ 0, /*nb_xor*/ 0, /*nb_or*/ (coercion)0, /*nb_coerce*/ (unaryfunc)PySwigObject_long, /*nb_int*/ (unaryfunc)PySwigObject_long, /*nb_long*/ (unaryfunc)0, /*nb_float*/ (unaryfunc)PySwigObject_oct, /*nb_oct*/ (unaryfunc)PySwigObject_hex, /*nb_hex*/ #if PY_VERSION_HEX >= 0x02050000 /* 2.5.0 */ 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 /* nb_inplace_add -> nb_index */ #elif PY_VERSION_HEX >= 0x02020000 /* 2.2.0 */ 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 /* nb_inplace_add -> nb_inplace_true_divide */ #elif PY_VERSION_HEX >= 0x02000000 /* 2.0.0 */ 0,0,0,0,0,0,0,0,0,0,0 /* nb_inplace_add -> nb_inplace_or */ #endif }; static PyTypeObject pyswigobject_type; static int type_init = 0; if (!type_init) { const PyTypeObject tmp = { PyObject_HEAD_INIT(NULL) 0, /* ob_size */ (char *)"PySwigObject", /* tp_name */ sizeof(PySwigObject), /* tp_basicsize */ 0, /* tp_itemsize */ (destructor)PySwigObject_dealloc, /* tp_dealloc */ (printfunc)PySwigObject_print, /* tp_print */ #if PY_VERSION_HEX < 0x02020000 (getattrfunc)PySwigObject_getattr, /* tp_getattr */ #else (getattrfunc)0, /* tp_getattr */ #endif (setattrfunc)0, /* tp_setattr */ (cmpfunc)PySwigObject_compare, /* tp_compare */ (reprfunc)PySwigObject_repr, /* tp_repr */ &PySwigObject_as_number, /* tp_as_number */ 0, /* tp_as_sequence */ 0, /* tp_as_mapping */ (hashfunc)0, /* tp_hash */ (ternaryfunc)0, /* tp_call */ (reprfunc)PySwigObject_str, /* tp_str */ PyObject_GenericGetAttr, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ Py_TPFLAGS_DEFAULT, /* tp_flags */ swigobject_doc, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ #if PY_VERSION_HEX >= 0x02020000 0, /* tp_iter */ 0, /* tp_iternext */ swigobject_methods, /* tp_methods */ 0, /* 
tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ 0, /* tp_init */ 0, /* tp_alloc */ 0, /* tp_new */ 0, /* tp_free */ 0, /* tp_is_gc */ 0, /* tp_bases */ 0, /* tp_mro */ 0, /* tp_cache */ 0, /* tp_subclasses */ 0, /* tp_weaklist */ #endif #if PY_VERSION_HEX >= 0x02030000 0, /* tp_del */ #endif #ifdef COUNT_ALLOCS 0,0,0,0 /* tp_alloc -> tp_next */ #endif }; pyswigobject_type = tmp; pyswigobject_type.ob_type = &PyType_Type; type_init = 1; } return &pyswigobject_type; } SWIGRUNTIME PyObject * PySwigObject_New(void *ptr, swig_type_info *ty, int own) { PySwigObject *sobj = PyObject_NEW(PySwigObject, PySwigObject_type()); if (sobj) { sobj->ptr = ptr; sobj->ty = ty; sobj->own = own; sobj->next = 0; } return (PyObject *)sobj; } /* ----------------------------------------------------------------------------- * Implements a simple Swig Packed type, and use it instead of string * ----------------------------------------------------------------------------- */ typedef struct { PyObject_HEAD void *pack; swig_type_info *ty; size_t size; } PySwigPacked; SWIGRUNTIME int PySwigPacked_print(PySwigPacked *v, FILE *fp, int SWIGUNUSEDPARM(flags)) { char result[SWIG_BUFFER_SIZE]; fputs("pack, v->size, 0, sizeof(result))) { fputs("at ", fp); fputs(result, fp); } fputs(v->ty->name,fp); fputs(">", fp); return 0; } SWIGRUNTIME PyObject * PySwigPacked_repr(PySwigPacked *v) { char result[SWIG_BUFFER_SIZE]; if (SWIG_PackDataName(result, v->pack, v->size, 0, sizeof(result))) { return PyString_FromFormat("", result, v->ty->name); } else { return PyString_FromFormat("", v->ty->name); } } SWIGRUNTIME PyObject * PySwigPacked_str(PySwigPacked *v) { char result[SWIG_BUFFER_SIZE]; if (SWIG_PackDataName(result, v->pack, v->size, 0, sizeof(result))){ return PyString_FromFormat("%s%s", result, v->ty->name); } else { return PyString_FromString(v->ty->name); } } SWIGRUNTIME int PySwigPacked_compare(PySwigPacked *v, PySwigPacked *w) { size_t i = v->size; size_t j = w->size; int s = (i < j) ? -1 : ((i > j) ? 1 : 0); return s ? 
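/* ---------------------------------------------------------------------------
 * Illustrative sketch (not part of the generated wrapper): given a C pointer
 * `p` and its type descriptor `ty` (a swig_type_info from the module's type
 * table), the runtime boxes it as a PySwigObject:
 *
 *   PyObject *obj = PySwigObject_New(p, ty, SWIG_POINTER_OWN);
 *   if (obj && PySwigObject_Check(obj)) {
 *     int owned = ((PySwigObject *)obj)->own;   // SWIG_POINTER_OWN here
 *   }
 *
 * Because the object owns `p`, PySwigObject_dealloc will look up the shadow
 * class's __swig_destroy__ method and call it when the wrapper is collected.
 * From Python the same object exposes disown()/acquire()/own() to transfer
 * that ownership, and append()/next to chain several 'this' pointers on one
 * instance (see swigobject_methods above).
 * ------------------------------------------------------------------------ */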
s : strncmp((char *)v->pack, (char *)w->pack, 2*v->size); } SWIGRUNTIME PyTypeObject* _PySwigPacked_type(void); SWIGRUNTIME PyTypeObject* PySwigPacked_type(void) { static PyTypeObject *SWIG_STATIC_POINTER(type) = _PySwigPacked_type(); return type; } SWIGRUNTIMEINLINE int PySwigPacked_Check(PyObject *op) { return ((op)->ob_type == _PySwigPacked_type()) || (strcmp((op)->ob_type->tp_name,"PySwigPacked") == 0); } SWIGRUNTIME void PySwigPacked_dealloc(PyObject *v) { if (PySwigPacked_Check(v)) { PySwigPacked *sobj = (PySwigPacked *) v; free(sobj->pack); } PyObject_DEL(v); } SWIGRUNTIME PyTypeObject* _PySwigPacked_type(void) { static char swigpacked_doc[] = "Swig object carries a C/C++ instance pointer"; static PyTypeObject pyswigpacked_type; static int type_init = 0; if (!type_init) { const PyTypeObject tmp = { PyObject_HEAD_INIT(NULL) 0, /* ob_size */ (char *)"PySwigPacked", /* tp_name */ sizeof(PySwigPacked), /* tp_basicsize */ 0, /* tp_itemsize */ (destructor)PySwigPacked_dealloc, /* tp_dealloc */ (printfunc)PySwigPacked_print, /* tp_print */ (getattrfunc)0, /* tp_getattr */ (setattrfunc)0, /* tp_setattr */ (cmpfunc)PySwigPacked_compare, /* tp_compare */ (reprfunc)PySwigPacked_repr, /* tp_repr */ 0, /* tp_as_number */ 0, /* tp_as_sequence */ 0, /* tp_as_mapping */ (hashfunc)0, /* tp_hash */ (ternaryfunc)0, /* tp_call */ (reprfunc)PySwigPacked_str, /* tp_str */ PyObject_GenericGetAttr, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ Py_TPFLAGS_DEFAULT, /* tp_flags */ swigpacked_doc, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ #if PY_VERSION_HEX >= 0x02020000 0, /* tp_iter */ 0, /* tp_iternext */ 0, /* tp_methods */ 0, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ 0, /* tp_init */ 0, /* tp_alloc */ 0, /* tp_new */ 0, /* tp_free */ 0, /* tp_is_gc */ 0, /* tp_bases */ 0, /* tp_mro */ 0, /* tp_cache */ 0, /* tp_subclasses */ 0, /* tp_weaklist */ #endif #if PY_VERSION_HEX >= 0x02030000 0, /* tp_del */ #endif #ifdef COUNT_ALLOCS 0,0,0,0 /* tp_alloc -> tp_next */ #endif }; pyswigpacked_type = tmp; pyswigpacked_type.ob_type = &PyType_Type; type_init = 1; } return &pyswigpacked_type; } SWIGRUNTIME PyObject * PySwigPacked_New(void *ptr, size_t size, swig_type_info *ty) { PySwigPacked *sobj = PyObject_NEW(PySwigPacked, PySwigPacked_type()); if (sobj) { void *pack = malloc(size); if (pack) { memcpy(pack, ptr, size); sobj->pack = pack; sobj->ty = ty; sobj->size = size; } else { PyObject_DEL((PyObject *) sobj); sobj = 0; } } return (PyObject *) sobj; } SWIGRUNTIME swig_type_info * PySwigPacked_UnpackData(PyObject *obj, void *ptr, size_t size) { if (PySwigPacked_Check(obj)) { PySwigPacked *sobj = (PySwigPacked *)obj; if (sobj->size != size) return 0; memcpy(ptr, sobj->pack, size); return sobj->ty; } else { return 0; } } /* ----------------------------------------------------------------------------- * pointers/data manipulation * ----------------------------------------------------------------------------- */ SWIGRUNTIMEINLINE PyObject * _SWIG_This(void) { return PyString_FromString("this"); } SWIGRUNTIME PyObject * SWIG_This(void) { static PyObject *SWIG_STATIC_POINTER(swig_this) = _SWIG_This(); return swig_this; } /* #define SWIG_PYTHON_SLOW_GETSET_THIS */ SWIGRUNTIME PySwigObject * SWIG_Python_GetSwigThis(PyObject *pyobj) { if (PySwigObject_Check(pyobj)) { return (PySwigObject *) pyobj; } else { PyObject *obj = 0; #if 
(!defined(SWIG_PYTHON_SLOW_GETSET_THIS) && (PY_VERSION_HEX >= 0x02030000)) if (PyInstance_Check(pyobj)) { obj = _PyInstance_Lookup(pyobj, SWIG_This()); } else { PyObject **dictptr = _PyObject_GetDictPtr(pyobj); if (dictptr != NULL) { PyObject *dict = *dictptr; obj = dict ? PyDict_GetItem(dict, SWIG_This()) : 0; } else { #ifdef PyWeakref_CheckProxy if (PyWeakref_CheckProxy(pyobj)) { PyObject *wobj = PyWeakref_GET_OBJECT(pyobj); return wobj ? SWIG_Python_GetSwigThis(wobj) : 0; } #endif obj = PyObject_GetAttr(pyobj,SWIG_This()); if (obj) { Py_DECREF(obj); } else { if (PyErr_Occurred()) PyErr_Clear(); return 0; } } } #else obj = PyObject_GetAttr(pyobj,SWIG_This()); if (obj) { Py_DECREF(obj); } else { if (PyErr_Occurred()) PyErr_Clear(); return 0; } #endif if (obj && !PySwigObject_Check(obj)) { /* a PyObject is called 'this', try to get the 'real this' PySwigObject from it */ return SWIG_Python_GetSwigThis(obj); } return (PySwigObject *)obj; } } /* Acquire a pointer value */ SWIGRUNTIME int SWIG_Python_AcquirePtr(PyObject *obj, int own) { if (own == SWIG_POINTER_OWN) { PySwigObject *sobj = SWIG_Python_GetSwigThis(obj); if (sobj) { int oldown = sobj->own; sobj->own = own; return oldown; } } return 0; } /* Convert a pointer value */ SWIGRUNTIME int SWIG_Python_ConvertPtrAndOwn(PyObject *obj, void **ptr, swig_type_info *ty, int flags, int *own) { if (!obj) return SWIG_ERROR; if (obj == Py_None) { if (ptr) *ptr = 0; return SWIG_OK; } else { PySwigObject *sobj = SWIG_Python_GetSwigThis(obj); if (own) *own = 0; while (sobj) { void *vptr = sobj->ptr; if (ty) { swig_type_info *to = sobj->ty; if (to == ty) { /* no type cast needed */ if (ptr) *ptr = vptr; break; } else { swig_cast_info *tc = SWIG_TypeCheck(to->name,ty); if (!tc) { sobj = (PySwigObject *)sobj->next; } else { if (ptr) { int newmemory = 0; *ptr = SWIG_TypeCast(tc,vptr,&newmemory); if (newmemory == SWIG_CAST_NEW_MEMORY) { assert(own); if (own) *own = *own | SWIG_CAST_NEW_MEMORY; } } break; } } } else { if (ptr) *ptr = vptr; break; } } if (sobj) { if (own) *own = *own | sobj->own; if (flags & SWIG_POINTER_DISOWN) { sobj->own = 0; } return SWIG_OK; } else { int res = SWIG_ERROR; if (flags & SWIG_POINTER_IMPLICIT_CONV) { PySwigClientData *data = ty ? (PySwigClientData *) ty->clientdata : 0; if (data && !data->implicitconv) { PyObject *klass = data->klass; if (klass) { PyObject *impconv; data->implicitconv = 1; /* avoid recursion and call 'explicit' constructors*/ impconv = SWIG_Python_CallFunctor(klass, obj); data->implicitconv = 0; if (PyErr_Occurred()) { PyErr_Clear(); impconv = 0; } if (impconv) { PySwigObject *iobj = SWIG_Python_GetSwigThis(impconv); if (iobj) { void *vptr; res = SWIG_Python_ConvertPtrAndOwn((PyObject*)iobj, &vptr, ty, 0, 0); if (SWIG_IsOK(res)) { if (ptr) { *ptr = vptr; /* transfer the ownership to 'ptr' */ iobj->own = 0; res = SWIG_AddCast(res); res = SWIG_AddNewMask(res); } else { res = SWIG_AddCast(res); } } } Py_DECREF(impconv); } } } } return res; } } } /* Convert a function ptr value */ SWIGRUNTIME int SWIG_Python_ConvertFunctionPtr(PyObject *obj, void **ptr, swig_type_info *ty) { if (!PyCFunction_Check(obj)) { return SWIG_ConvertPtr(obj, ptr, ty, 0); } else { void *vptr = 0; /* here we get the method pointer for callbacks */ const char *doc = (((PyCFunctionObject *)obj) -> m_ml -> ml_doc); const char *desc = doc ? strstr(doc, "swig_ptr: ") : 0; if (desc) { desc = ty ? 
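/* ---------------------------------------------------------------------------
 * Illustrative sketch (not part of the generated wrapper): a typical argument
 * conversion in a generated wrapper builds on SWIG_ConvertPtr and the result
 * codes defined earlier.  `Foo` and the error message are hypothetical:
 *
 *   void *argp = 0;
 *   int res = SWIG_ConvertPtr(obj, &argp, ty, 0);
 *   if (!SWIG_IsOK(res)) {
 *     SWIG_Error(SWIG_ArgError(res), "argument 1 is not a wrapped Foo");
 *   } else {
 *     Foo *foo = (Foo *)argp;   // already cast up the hierarchy by SWIG_TypeCast
 *   }
 *
 * Passing SWIG_POINTER_DISOWN in the flags transfers ownership away from the
 * Python object, and SWIG_POINTER_IMPLICIT_CONV lets the runtime try the
 * shadow class constructor when `obj` is not already a wrapped pointer.
 * ------------------------------------------------------------------------ */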
SWIG_UnpackVoidPtr(desc + 10, &vptr, ty->name) : 0; if (!desc) return SWIG_ERROR; } if (ty) { swig_cast_info *tc = SWIG_TypeCheck(desc,ty); if (tc) { int newmemory = 0; *ptr = SWIG_TypeCast(tc,vptr,&newmemory); assert(!newmemory); /* newmemory handling not yet implemented */ } else { return SWIG_ERROR; } } else { *ptr = vptr; } return SWIG_OK; } } /* Convert a packed value value */ SWIGRUNTIME int SWIG_Python_ConvertPacked(PyObject *obj, void *ptr, size_t sz, swig_type_info *ty) { swig_type_info *to = PySwigPacked_UnpackData(obj, ptr, sz); if (!to) return SWIG_ERROR; if (ty) { if (to != ty) { /* check type cast? */ swig_cast_info *tc = SWIG_TypeCheck(to->name,ty); if (!tc) return SWIG_ERROR; } } return SWIG_OK; } /* ----------------------------------------------------------------------------- * Create a new pointer object * ----------------------------------------------------------------------------- */ /* Create a new instance object, whitout calling __init__, and set the 'this' attribute. */ SWIGRUNTIME PyObject* SWIG_Python_NewShadowInstance(PySwigClientData *data, PyObject *swig_this) { #if (PY_VERSION_HEX >= 0x02020000) PyObject *inst = 0; PyObject *newraw = data->newraw; if (newraw) { inst = PyObject_Call(newraw, data->newargs, NULL); if (inst) { #if !defined(SWIG_PYTHON_SLOW_GETSET_THIS) PyObject **dictptr = _PyObject_GetDictPtr(inst); if (dictptr != NULL) { PyObject *dict = *dictptr; if (dict == NULL) { dict = PyDict_New(); *dictptr = dict; PyDict_SetItem(dict, SWIG_This(), swig_this); } } #else PyObject *key = SWIG_This(); PyObject_SetAttr(inst, key, swig_this); #endif } } else { PyObject *dict = PyDict_New(); PyDict_SetItem(dict, SWIG_This(), swig_this); inst = PyInstance_NewRaw(data->newargs, dict); Py_DECREF(dict); } return inst; #else #if (PY_VERSION_HEX >= 0x02010000) PyObject *inst; PyObject *dict = PyDict_New(); PyDict_SetItem(dict, SWIG_This(), swig_this); inst = PyInstance_NewRaw(data->newargs, dict); Py_DECREF(dict); return (PyObject *) inst; #else PyInstanceObject *inst = PyObject_NEW(PyInstanceObject, &PyInstance_Type); if (inst == NULL) { return NULL; } inst->in_class = (PyClassObject *)data->newargs; Py_INCREF(inst->in_class); inst->in_dict = PyDict_New(); if (inst->in_dict == NULL) { Py_DECREF(inst); return NULL; } #ifdef Py_TPFLAGS_HAVE_WEAKREFS inst->in_weakreflist = NULL; #endif #ifdef Py_TPFLAGS_GC PyObject_GC_Init(inst); #endif PyDict_SetItem(inst->in_dict, SWIG_This(), swig_this); return (PyObject *) inst; #endif #endif } SWIGRUNTIME void SWIG_Python_SetSwigThis(PyObject *inst, PyObject *swig_this) { PyObject *dict; #if (PY_VERSION_HEX >= 0x02020000) && !defined(SWIG_PYTHON_SLOW_GETSET_THIS) PyObject **dictptr = _PyObject_GetDictPtr(inst); if (dictptr != NULL) { dict = *dictptr; if (dict == NULL) { dict = PyDict_New(); *dictptr = dict; } PyDict_SetItem(dict, SWIG_This(), swig_this); return; } #endif dict = PyObject_GetAttrString(inst, (char*)"__dict__"); PyDict_SetItem(dict, SWIG_This(), swig_this); Py_DECREF(dict); } SWIGINTERN PyObject * SWIG_Python_InitShadowInstance(PyObject *args) { PyObject *obj[2]; if (!SWIG_Python_UnpackTuple(args,(char*)"swiginit", 2, 2, obj)) { return NULL; } else { PySwigObject *sthis = SWIG_Python_GetSwigThis(obj[0]); if (sthis) { PySwigObject_append((PyObject*) sthis, obj[1]); } else { SWIG_Python_SetSwigThis(obj[0], obj[1]); } return SWIG_Py_Void(); } } /* Create a new pointer object */ SWIGRUNTIME PyObject * SWIG_Python_NewPointerObj(void *ptr, swig_type_info *type, int flags) { if (!ptr) { return SWIG_Py_Void(); } else { int 
own = (flags & SWIG_POINTER_OWN) ? SWIG_POINTER_OWN : 0; PyObject *robj = PySwigObject_New(ptr, type, own); PySwigClientData *clientdata = type ? (PySwigClientData *)(type->clientdata) : 0; if (clientdata && !(flags & SWIG_POINTER_NOSHADOW)) { PyObject *inst = SWIG_Python_NewShadowInstance(clientdata, robj); if (inst) { Py_DECREF(robj); robj = inst; } } return robj; } } /* Create a new packed object */ SWIGRUNTIMEINLINE PyObject * SWIG_Python_NewPackedObj(void *ptr, size_t sz, swig_type_info *type) { return ptr ? PySwigPacked_New((void *) ptr, sz, type) : SWIG_Py_Void(); } /* -----------------------------------------------------------------------------* * Get type list * -----------------------------------------------------------------------------*/ #ifdef SWIG_LINK_RUNTIME void *SWIG_ReturnGlobalTypeList(void *); #endif SWIGRUNTIME swig_module_info * SWIG_Python_GetModule(void) { static void *type_pointer = (void *)0; /* first check if module already created */ if (!type_pointer) { #ifdef SWIG_LINK_RUNTIME type_pointer = SWIG_ReturnGlobalTypeList((void *)0); #else type_pointer = PyCObject_Import((char*)"swig_runtime_data" SWIG_RUNTIME_VERSION, (char*)"type_pointer" SWIG_TYPE_TABLE_NAME); if (PyErr_Occurred()) { PyErr_Clear(); type_pointer = (void *)0; } #endif } return (swig_module_info *) type_pointer; } #if PY_MAJOR_VERSION < 2 /* PyModule_AddObject function was introduced in Python 2.0. The following function is copied out of Python/modsupport.c in python version 2.3.4 */ SWIGINTERN int PyModule_AddObject(PyObject *m, char *name, PyObject *o) { PyObject *dict; if (!PyModule_Check(m)) { PyErr_SetString(PyExc_TypeError, "PyModule_AddObject() needs module as first arg"); return SWIG_ERROR; } if (!o) { PyErr_SetString(PyExc_TypeError, "PyModule_AddObject() needs non-NULL value"); return SWIG_ERROR; } dict = PyModule_GetDict(m); if (dict == NULL) { /* Internal error -- modules must have a dict! 
*/ PyErr_Format(PyExc_SystemError, "module '%s' has no __dict__", PyModule_GetName(m)); return SWIG_ERROR; } if (PyDict_SetItemString(dict, name, o)) return SWIG_ERROR; Py_DECREF(o); return SWIG_OK; } #endif SWIGRUNTIME void SWIG_Python_DestroyModule(void *vptr) { swig_module_info *swig_module = (swig_module_info *) vptr; swig_type_info **types = swig_module->types; size_t i; for (i =0; i < swig_module->size; ++i) { swig_type_info *ty = types[i]; if (ty->owndata) { PySwigClientData *data = (PySwigClientData *) ty->clientdata; if (data) PySwigClientData_Del(data); } } Py_DECREF(SWIG_This()); } SWIGRUNTIME void SWIG_Python_SetModule(swig_module_info *swig_module) { static PyMethodDef swig_empty_runtime_method_table[] = { {NULL, NULL, 0, NULL} };/* Sentinel */ PyObject *module = Py_InitModule((char*)"swig_runtime_data" SWIG_RUNTIME_VERSION, swig_empty_runtime_method_table); PyObject *pointer = PyCObject_FromVoidPtr((void *) swig_module, SWIG_Python_DestroyModule); if (pointer && module) { PyModule_AddObject(module, (char*)"type_pointer" SWIG_TYPE_TABLE_NAME, pointer); } else { Py_XDECREF(pointer); } } /* The python cached type query */ SWIGRUNTIME PyObject * SWIG_Python_TypeCache(void) { static PyObject *SWIG_STATIC_POINTER(cache) = PyDict_New(); return cache; } SWIGRUNTIME swig_type_info * SWIG_Python_TypeQuery(const char *type) { PyObject *cache = SWIG_Python_TypeCache(); PyObject *key = PyString_FromString(type); PyObject *obj = PyDict_GetItem(cache, key); swig_type_info *descriptor; if (obj) { descriptor = (swig_type_info *) PyCObject_AsVoidPtr(obj); } else { swig_module_info *swig_module = SWIG_Python_GetModule(); descriptor = SWIG_TypeQueryModule(swig_module, swig_module, type); if (descriptor) { obj = PyCObject_FromVoidPtr(descriptor, NULL); PyDict_SetItem(cache, key, obj); Py_DECREF(obj); } } Py_DECREF(key); return descriptor; } /* For backward compatibility only */ #define SWIG_POINTER_EXCEPTION 0 #define SWIG_arg_fail(arg) SWIG_Python_ArgFail(arg) #define SWIG_MustGetPtr(p, type, argnum, flags) SWIG_Python_MustGetPtr(p, type, argnum, flags) SWIGRUNTIME int SWIG_Python_AddErrMesg(const char* mesg, int infront) { if (PyErr_Occurred()) { PyObject *type = 0; PyObject *value = 0; PyObject *traceback = 0; PyErr_Fetch(&type, &value, &traceback); if (value) { PyObject *old_str = PyObject_Str(value); Py_XINCREF(type); PyErr_Clear(); if (infront) { PyErr_Format(type, "%s %s", mesg, PyString_AsString(old_str)); } else { PyErr_Format(type, "%s %s", PyString_AsString(old_str), mesg); } Py_DECREF(old_str); } return 1; } else { return 0; } } SWIGRUNTIME int SWIG_Python_ArgFail(int argnum) { if (PyErr_Occurred()) { /* add information about failing argument */ char mesg[256]; PyOS_snprintf(mesg, sizeof(mesg), "argument number %d:", argnum); return SWIG_Python_AddErrMesg(mesg, 1); } else { return 0; } } SWIGRUNTIMEINLINE const char * PySwigObject_GetDesc(PyObject *self) { PySwigObject *v = (PySwigObject *)self; swig_type_info *ty = v ? v->ty : 0; return ty ? ty->str : (char*)""; } SWIGRUNTIME void SWIG_Python_TypeError(const char *type, PyObject *obj) { if (type) { #if defined(SWIG_COBJECT_TYPES) if (obj && PySwigObject_Check(obj)) { const char *otype = (const char *) PySwigObject_GetDesc(obj); if (otype) { PyErr_Format(PyExc_TypeError, "a '%s' is expected, 'PySwigObject(%s)' is received", type, otype); return; } } else #endif { const char *otype = (obj ? obj->ob_type->tp_name : 0); if (otype) { PyObject *str = PyObject_Str(obj); const char *cstr = str ? 
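/* ---------------------------------------------------------------------------
 * Illustrative sketch (not part of the generated wrapper): code outside the
 * generated wrappers can resolve a descriptor at runtime through the cached
 * query above and then convert with it.  The type string is hypothetical and
 * must match an entry in the module's type table:
 *
 *   swig_type_info *ty = SWIG_Python_TypeQuery("double *");
 *   if (ty) {
 *     void *argp = 0;
 *     if (SWIG_IsOK(SWIG_ConvertPtr(obj, &argp, ty, 0))) {
 *       double *value = (double *)argp;
 *     }
 *   }
 *
 * The first lookup walks the circular module list via SWIG_TypeQueryModule;
 * later lookups for the same string come from the PyDict cache returned by
 * SWIG_Python_TypeCache.
 * ------------------------------------------------------------------------ */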
PyString_AsString(str) : 0; if (cstr) { PyErr_Format(PyExc_TypeError, "a '%s' is expected, '%s(%s)' is received", type, otype, cstr); } else { PyErr_Format(PyExc_TypeError, "a '%s' is expected, '%s' is received", type, otype); } Py_XDECREF(str); return; } } PyErr_Format(PyExc_TypeError, "a '%s' is expected", type); } else { PyErr_Format(PyExc_TypeError, "unexpected type is received"); } } /* Convert a pointer value, signal an exception on a type mismatch */ SWIGRUNTIME void * SWIG_Python_MustGetPtr(PyObject *obj, swig_type_info *ty, int argnum, int flags) { void *result; if (SWIG_Python_ConvertPtr(obj, &result, ty, flags) == -1) { PyErr_Clear(); if (flags & SWIG_POINTER_EXCEPTION) { SWIG_Python_TypeError(SWIG_TypePrettyName(ty), obj); SWIG_Python_ArgFail(argnum); } } return result; } #ifdef __cplusplus #if 0 { /* cc-mode */ #endif } #endif #define SWIG_exception_fail(code, msg) do { SWIG_Error(code, msg); SWIG_fail; } while(0) #define SWIG_contract_assert(expr, msg) if (!(expr)) { SWIG_Error(SWIG_RuntimeError, msg); SWIG_fail; } else /* -------- TYPES TABLE (BEGIN) -------- */ #define SWIGTYPE_p_char swig_types[0] static swig_type_info *swig_types[2]; static swig_module_info swig_module = {swig_types, 1, 0, 0, 0, 0}; #define SWIG_TypeQuery(name) SWIG_TypeQueryModule(&swig_module, &swig_module, name) #define SWIG_MangledTypeQuery(name) SWIG_MangledTypeQueryModule(&swig_module, &swig_module, name) /* -------- TYPES TABLE (END) -------- */ #if (PY_VERSION_HEX <= 0x02000000) # if !defined(SWIG_PYTHON_CLASSIC) # error "This python version requires swig to be run with the '-classic' option" # endif #endif /*----------------------------------------------- @(target):= _fastexp.so ------------------------------------------------*/ #define SWIG_init init_fastexp #define SWIG_name "_fastexp" #define SWIGVERSION 0x010336 #define SWIG_VERSION SWIGVERSION #define SWIG_as_voidptr(a) const_cast< void * >(static_cast< const void * >(a)) #define SWIG_as_voidptrptr(a) ((void)SWIG_as_voidptr(*a),reinterpret_cast< void** >(a)) #include namespace swig { class PyObject_ptr { protected: PyObject *_obj; public: PyObject_ptr() :_obj(0) { } PyObject_ptr(const PyObject_ptr& item) : _obj(item._obj) { Py_XINCREF(_obj); } PyObject_ptr(PyObject *obj, bool initial_ref = true) :_obj(obj) { if (initial_ref) { Py_XINCREF(_obj); } } PyObject_ptr & operator=(const PyObject_ptr& item) { Py_XINCREF(item._obj); Py_XDECREF(_obj); _obj = item._obj; return *this; } ~PyObject_ptr() { Py_XDECREF(_obj); } operator PyObject *() const { return _obj; } PyObject *operator->() const { return _obj; } }; } namespace swig { struct PyObject_var : PyObject_ptr { PyObject_var(PyObject* obj = 0) : PyObject_ptr(obj, false) { } PyObject_var & operator = (PyObject* obj) { Py_XDECREF(_obj); _obj = obj; return *this; } }; } #define SWIG_FILE_WITH_INIT #include "fastexp.h" #ifndef SWIG_FILE_WITH_INIT # define NO_IMPORT_ARRAY #endif #include "stdio.h" #include /* Support older NumPy data type names */ #if NDARRAY_VERSION < 0x01000000 #define NPY_BOOL PyArray_BOOL #define NPY_BYTE PyArray_BYTE #define NPY_UBYTE PyArray_UBYTE #define NPY_SHORT PyArray_SHORT #define NPY_USHORT PyArray_USHORT #define NPY_INT PyArray_INT #define NPY_UINT PyArray_UINT #define NPY_LONG PyArray_LONG #define NPY_ULONG PyArray_ULONG #define NPY_LONGLONG PyArray_LONGLONG #define NPY_ULONGLONG PyArray_ULONGLONG #define NPY_FLOAT PyArray_FLOAT #define NPY_DOUBLE PyArray_DOUBLE #define NPY_LONGDOUBLE PyArray_LONGDOUBLE #define NPY_CFLOAT PyArray_CFLOAT #define NPY_CDOUBLE 
PyArray_CDOUBLE #define NPY_CLONGDOUBLE PyArray_CLONGDOUBLE #define NPY_OBJECT PyArray_OBJECT #define NPY_STRING PyArray_STRING #define NPY_UNICODE PyArray_UNICODE #define NPY_VOID PyArray_VOID #define NPY_NTYPES PyArray_NTYPES #define NPY_NOTYPE PyArray_NOTYPE #define NPY_CHAR PyArray_CHAR #define NPY_USERDEF PyArray_USERDEF #define npy_intp intp #define NPY_MAX_BYTE MAX_BYTE #define NPY_MIN_BYTE MIN_BYTE #define NPY_MAX_UBYTE MAX_UBYTE #define NPY_MAX_SHORT MAX_SHORT #define NPY_MIN_SHORT MIN_SHORT #define NPY_MAX_USHORT MAX_USHORT #define NPY_MAX_INT MAX_INT #define NPY_MIN_INT MIN_INT #define NPY_MAX_UINT MAX_UINT #define NPY_MAX_LONG MAX_LONG #define NPY_MIN_LONG MIN_LONG #define NPY_MAX_ULONG MAX_ULONG #define NPY_MAX_LONGLONG MAX_LONGLONG #define NPY_MIN_LONGLONG MIN_LONGLONG #define NPY_MAX_ULONGLONG MAX_ULONGLONG #define NPY_MAX_INTP MAX_INTP #define NPY_MIN_INTP MIN_INTP #define NPY_FARRAY FARRAY #define NPY_F_CONTIGUOUS F_CONTIGUOUS #endif /* Macros to extract array attributes. */ #define is_array(a) ((a) && PyArray_Check((PyArrayObject *)a)) #define array_type(a) (int)(PyArray_TYPE(a)) #define array_numdims(a) (((PyArrayObject *)a)->nd) #define array_dimensions(a) (((PyArrayObject *)a)->dimensions) #define array_size(a,i) (((PyArrayObject *)a)->dimensions[i]) #define array_data(a) (((PyArrayObject *)a)->data) #define array_is_contiguous(a) (PyArray_ISCONTIGUOUS(a)) #define array_is_native(a) (PyArray_ISNOTSWAPPED(a)) #define array_is_fortran(a) (PyArray_ISFORTRAN(a)) /* Given a PyObject, return a string describing its type. */ char* pytype_string(PyObject* py_obj) { if (py_obj == NULL ) return "C NULL value"; if (py_obj == Py_None ) return "Python None" ; if (PyCallable_Check(py_obj)) return "callable" ; if (PyString_Check( py_obj)) return "string" ; if (PyInt_Check( py_obj)) return "int" ; if (PyFloat_Check( py_obj)) return "float" ; if (PyDict_Check( py_obj)) return "dict" ; if (PyList_Check( py_obj)) return "list" ; if (PyTuple_Check( py_obj)) return "tuple" ; if (PyFile_Check( py_obj)) return "file" ; if (PyModule_Check( py_obj)) return "module" ; if (PyInstance_Check(py_obj)) return "instance" ; return "unkown type"; } /* Given a NumPy typecode, return a string describing the type. */ char* typecode_string(int typecode) { static char* type_names[25] = {"bool", "byte", "unsigned byte", "short", "unsigned short", "int", "unsigned int", "long", "unsigned long", "long long", "unsigned long long", "float", "double", "long double", "complex float", "complex double", "complex long double", "object", "string", "unicode", "void", "ntypes", "notype", "char", "unknown"}; return typecode < 24 ? type_names[typecode] : type_names[24]; } /* Make sure input has correct numpy type. Allow character and byte * to match. Also allow int and long to match. This is deprecated. * You should use PyArray_EquivTypenums() instead. */ int type_match(int actual_type, int desired_type) { return PyArray_EquivTypenums(actual_type, desired_type); } /* Given a PyObject pointer, cast it to a PyArrayObject pointer if * legal. If not, set the python error string appropriately and * return NULL. 
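 *
 * A rough usage sketch (hypothetical caller; it simply mirrors what the
 * generated _wrap_fastexp wrapper further below does, and the names obj,
 * ary and n are illustrative only).  On failure the helper has already
 * set the Python error, so the caller just bails out:
 *
 *   PyArrayObject* ary = obj_to_array_no_conversion(obj, NPY_DOUBLE);
 *   if (!ary || !require_dimensions(ary, 1) ||
 *       !require_contiguous(ary) || !require_native(ary)) return NULL;
 *   double* data = (double*) array_data(ary);   // borrowed data pointer
 *   int     n    = (int)     array_size(ary, 0);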
*/ PyArrayObject* obj_to_array_no_conversion(PyObject* input, int typecode) { PyArrayObject* ary = NULL; if (is_array(input) && (typecode == NPY_NOTYPE || PyArray_EquivTypenums(array_type(input), typecode))) { ary = (PyArrayObject*) input; } else if is_array(input) { char* desired_type = typecode_string(typecode); char* actual_type = typecode_string(array_type(input)); PyErr_Format(PyExc_TypeError, "Array of type '%s' required. Array of type '%s' given", desired_type, actual_type); ary = NULL; } else { char * desired_type = typecode_string(typecode); char * actual_type = pytype_string(input); PyErr_Format(PyExc_TypeError, "Array of type '%s' required. A '%s' was given", desired_type, actual_type); ary = NULL; } return ary; } /* Convert the given PyObject to a NumPy array with the given * typecode. On success, return a valid PyArrayObject* with the * correct type. On failure, the python error string will be set and * the routine returns NULL. */ PyArrayObject* obj_to_array_allow_conversion(PyObject* input, int typecode, int* is_new_object) { PyArrayObject* ary = NULL; PyObject* py_obj; if (is_array(input) && (typecode == NPY_NOTYPE || PyArray_EquivTypenums(array_type(input),typecode))) { ary = (PyArrayObject*) input; *is_new_object = 0; } else { py_obj = PyArray_FromObject(input, typecode, 0, 0); /* If NULL, PyArray_FromObject will have set python error value.*/ ary = (PyArrayObject*) py_obj; *is_new_object = 1; } return ary; } /* Given a PyArrayObject, check to see if it is contiguous. If so, * return the input pointer and flag it as not a new object. If it is * not contiguous, create a new PyArrayObject using the original data, * flag it as a new object and return the pointer. */ PyArrayObject* make_contiguous(PyArrayObject* ary, int* is_new_object, int min_dims, int max_dims) { PyArrayObject* result; if (array_is_contiguous(ary)) { result = ary; *is_new_object = 0; } else { result = (PyArrayObject*) PyArray_ContiguousFromObject((PyObject*)ary, array_type(ary), min_dims, max_dims); *is_new_object = 1; } return result; } /* Convert a given PyObject to a contiguous PyArrayObject of the * specified type. If the input object is not a contiguous * PyArrayObject, a new one will be created and the new object flag * will be set. */ PyArrayObject* obj_to_array_contiguous_allow_conversion(PyObject* input, int typecode, int* is_new_object) { int is_new1 = 0; int is_new2 = 0; PyArrayObject* ary2; PyArrayObject* ary1 = obj_to_array_allow_conversion(input, typecode, &is_new1); if (ary1) { ary2 = make_contiguous(ary1, &is_new2, 0, 0); if ( is_new1 && is_new2) { Py_DECREF(ary1); } ary1 = ary2; } *is_new_object = is_new1 || is_new2; return ary1; } /* Test whether a python object is contiguous. If array is * contiguous, return 1. Otherwise, set the python error string and * return 0. */ int require_contiguous(PyArrayObject* ary) { int contiguous = 1; if (!array_is_contiguous(ary)) { PyErr_SetString(PyExc_TypeError, "Array must be contiguous. A non-contiguous array was given"); contiguous = 0; } return contiguous; } /* Require that a numpy array is not byte-swapped. If the array is * not byte-swapped, return 1. Otherwise, set the python error string * and return 0. */ int require_native(PyArrayObject* ary) { int native = 1; if (!array_is_native(ary)) { PyErr_SetString(PyExc_TypeError, "Array must have native byteorder. " "A byte-swapped array was given"); native = 0; } return native; } /* Require the given PyArrayObject to have a specified number of * dimensions. 
If the array has the specified number of dimensions, * return 1. Otherwise, set the python error string and return 0. */ int require_dimensions(PyArrayObject* ary, int exact_dimensions) { int success = 1; if (array_numdims(ary) != exact_dimensions) { PyErr_Format(PyExc_TypeError, "Array must have %d dimensions. Given array has %d dimensions", exact_dimensions, array_numdims(ary)); success = 0; } return success; } /* Require the given PyArrayObject to have one of a list of specified * number of dimensions. If the array has one of the specified number * of dimensions, return 1. Otherwise, set the python error string * and return 0. */ int require_dimensions_n(PyArrayObject* ary, int* exact_dimensions, int n) { int success = 0; int i; char dims_str[255] = ""; char s[255]; for (i = 0; i < n && !success; i++) { if (array_numdims(ary) == exact_dimensions[i]) { success = 1; } } if (!success) { for (i = 0; i < n-1; i++) { sprintf(s, "%d, ", exact_dimensions[i]); strcat(dims_str,s); } sprintf(s, " or %d", exact_dimensions[n-1]); strcat(dims_str,s); PyErr_Format(PyExc_TypeError, "Array must have %s dimensions. Given array has %d dimensions", dims_str, array_numdims(ary)); } return success; } /* Require the given PyArrayObject to have a specified shape. If the * array has the specified shape, return 1. Otherwise, set the python * error string and return 0. */ int require_size(PyArrayObject* ary, npy_intp* size, int n) { int i; int success = 1; int len; char desired_dims[255] = "["; char s[255]; char actual_dims[255] = "["; for(i=0; i < n;i++) { if (size[i] != -1 && size[i] != array_size(ary,i)) { success = 0; } } if (!success) { for (i = 0; i < n; i++) { if (size[i] == -1) { sprintf(s, "*,"); } else { sprintf(s, "%ld,", (long int)size[i]); } strcat(desired_dims,s); } len = strlen(desired_dims); desired_dims[len-1] = ']'; for (i = 0; i < n; i++) { sprintf(s, "%ld,", (long int)array_size(ary,i)); strcat(actual_dims,s); } len = strlen(actual_dims); actual_dims[len-1] = ']'; PyErr_Format(PyExc_TypeError, "Array must have shape of %s. Given array has shape of %s", desired_dims, actual_dims); } return success; } /* Require the given PyArrayObject to to be FORTRAN ordered. If the * the PyArrayObject is already FORTRAN ordered, do nothing. Else, * set the FORTRAN ordering flag and recompute the strides. 
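 *
 * A minimal sketch of how this check is chained with the others
 * (hypothetical caller; the *_FARRAY* typemaps in numpy.i emit the
 * equivalent chain automatically):
 *
 *   if (!ary || !require_dimensions(ary, 2) || !require_fortran(ary))
 *       return NULL;   // the require_* helpers set the Python error
 *   double* data = (double*) array_data(ary);   // ary is now flagged
 *                                               // as FORTRAN ordered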
*/ int require_fortran(PyArrayObject* ary) { int success = 1; int nd = array_numdims(ary); int i; if (array_is_fortran(ary)) return success; /* Set the FORTRAN ordered flag */ ary->flags = NPY_FARRAY; /* Recompute the strides */ ary->strides[0] = ary->strides[nd-1]; for (i=1; i < nd; ++i) ary->strides[i] = ary->strides[i-1] * array_size(ary,i-1); return success; } #ifdef __cplusplus extern "C" { #endif SWIGINTERN PyObject *_wrap_fastexp(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { PyObject *resultobj = 0; double *arg1 = (double *) 0 ; int arg2 ; double *arg3 = (double *) 0 ; int arg4 ; PyArrayObject *array1 = NULL ; int i1 = 1 ; PyArrayObject *array3 = NULL ; int i3 = 1 ; PyObject * obj0 = 0 ; PyObject * obj1 = 0 ; if (!PyArg_ParseTuple(args,(char *)"OO:fastexp",&obj0,&obj1)) SWIG_fail; { array1 = obj_to_array_no_conversion(obj0, NPY_DOUBLE); if (!array1 || !require_dimensions(array1,1) || !require_contiguous(array1) || !require_native(array1)) SWIG_fail; arg1 = (double*) array_data(array1); arg2 = 1; for (i1=0; i1 < array_numdims(array1); ++i1) arg2 *= array_size(array1,i1); } { array3 = obj_to_array_no_conversion(obj1, NPY_DOUBLE); if (!array3 || !require_dimensions(array3,1) || !require_contiguous(array3) || !require_native(array3)) SWIG_fail; arg3 = (double*) array_data(array3); arg4 = 1; for (i3=0; i3 < array_numdims(array3); ++i3) arg4 *= array_size(array3,i3); } fastexp(arg1,arg2,arg3,arg4); resultobj = SWIG_Py_Void(); return resultobj; fail: return NULL; } static PyMethodDef SwigMethods[] = { { (char *)"fastexp", _wrap_fastexp, METH_VARARGS, NULL}, { NULL, NULL, 0, NULL } }; /* -------- TYPE CONVERSION AND EQUIVALENCE RULES (BEGIN) -------- */ static swig_type_info _swigt__p_char = {"_p_char", "char *", 0, 0, (void*)0, 0}; static swig_type_info *swig_type_initial[] = { &_swigt__p_char, }; static swig_cast_info _swigc__p_char[] = { {&_swigt__p_char, 0, 0, 0},{0, 0, 0, 0}}; static swig_cast_info *swig_cast_initial[] = { _swigc__p_char, }; /* -------- TYPE CONVERSION AND EQUIVALENCE RULES (END) -------- */ static swig_const_info swig_const_table[] = { {0, 0, 0, 0.0, 0, 0}}; #ifdef __cplusplus } #endif /* ----------------------------------------------------------------------------- * Type initialization: * This problem is tough by the requirement that no dynamic * memory is used. Also, since swig_type_info structures store pointers to * swig_cast_info structures and swig_cast_info structures store pointers back * to swig_type_info structures, we need some lookup code at initialization. * The idea is that swig generates all the structures that are needed. * The runtime then collects these partially filled structures. * The SWIG_InitializeModule function takes these initial arrays out of * swig_module, and does all the lookup, filling in the swig_module.types * array with the correct data and linking the correct swig_cast_info * structures together. * * The generated swig_type_info structures are assigned staticly to an initial * array. We just loop through that array, and handle each type individually. * First we lookup if this type has been already loaded, and if so, use the * loaded structure instead of the generated one. Then we have to fill in the * cast linked list. The cast data is initially stored in something like a * two-dimensional array. Each row corresponds to a type (there are the same * number of rows as there are in the swig_type_initial array). Each entry in * a column is one of the swig_cast_info structures for that type. 
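 * (For instance, in this generated module the tables are as small as they
 *  get -- a single row, as declared above:
 *      static swig_type_info *swig_type_initial[] = { &_swigt__p_char, };
 *      static swig_cast_info  _swigc__p_char[]    = { {&_swigt__p_char, 0, 0, 0}, {0, 0, 0, 0} };
 *      static swig_cast_info *swig_cast_initial[] = { _swigc__p_char, };
 *  so the "two-dimensional array" has one row whose columns are the
 *  entries of _swigc__p_char, terminated by an all-zero sentinel.)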
* The cast_initial array is actually an array of arrays, because each row has * a variable number of columns. So to actually build the cast linked list, * we find the array of casts associated with the type, and loop through it * adding the casts to the list. The one last trick we need to do is making * sure the type pointer in the swig_cast_info struct is correct. * * First off, we lookup the cast->type name to see if it is already loaded. * There are three cases to handle: * 1) If the cast->type has already been loaded AND the type we are adding * casting info to has not been loaded (it is in this module), THEN we * replace the cast->type pointer with the type pointer that has already * been loaded. * 2) If BOTH types (the one we are adding casting info to, and the * cast->type) are loaded, THEN the cast info has already been loaded by * the previous module so we just ignore it. * 3) Finally, if cast->type has not already been loaded, then we add that * swig_cast_info to the linked list (because the cast->type) pointer will * be correct. * ----------------------------------------------------------------------------- */ #ifdef __cplusplus extern "C" { #if 0 } /* c-mode */ #endif #endif #if 0 #define SWIGRUNTIME_DEBUG #endif SWIGRUNTIME void SWIG_InitializeModule(void *clientdata) { size_t i; swig_module_info *module_head, *iter; int found, init; clientdata = clientdata; /* check to see if the circular list has been setup, if not, set it up */ if (swig_module.next==0) { /* Initialize the swig_module */ swig_module.type_initial = swig_type_initial; swig_module.cast_initial = swig_cast_initial; swig_module.next = &swig_module; init = 1; } else { init = 0; } /* Try and load any already created modules */ module_head = SWIG_GetModule(clientdata); if (!module_head) { /* This is the first module loaded for this interpreter */ /* so set the swig module into the interpreter */ SWIG_SetModule(clientdata, &swig_module); module_head = &swig_module; } else { /* the interpreter has loaded a SWIG module, but has it loaded this one? */ found=0; iter=module_head; do { if (iter==&swig_module) { found=1; break; } iter=iter->next; } while (iter!= module_head); /* if the is found in the list, then all is done and we may leave */ if (found) return; /* otherwise we must add out module into the list */ swig_module.next = module_head->next; module_head->next = &swig_module; } /* When multiple interpeters are used, a module could have already been initialized in a different interpreter, but not yet have a pointer in this interpreter. In this case, we do not want to continue adding types... 
everything should be set up already */ if (init == 0) return; /* Now work on filling in swig_module.types */ #ifdef SWIGRUNTIME_DEBUG printf("SWIG_InitializeModule: size %d\n", swig_module.size); #endif for (i = 0; i < swig_module.size; ++i) { swig_type_info *type = 0; swig_type_info *ret; swig_cast_info *cast; #ifdef SWIGRUNTIME_DEBUG printf("SWIG_InitializeModule: type %d %s\n", i, swig_module.type_initial[i]->name); #endif /* if there is another module already loaded */ if (swig_module.next != &swig_module) { type = SWIG_MangledTypeQueryModule(swig_module.next, &swig_module, swig_module.type_initial[i]->name); } if (type) { /* Overwrite clientdata field */ #ifdef SWIGRUNTIME_DEBUG printf("SWIG_InitializeModule: found type %s\n", type->name); #endif if (swig_module.type_initial[i]->clientdata) { type->clientdata = swig_module.type_initial[i]->clientdata; #ifdef SWIGRUNTIME_DEBUG printf("SWIG_InitializeModule: found and overwrite type %s \n", type->name); #endif } } else { type = swig_module.type_initial[i]; } /* Insert casting types */ cast = swig_module.cast_initial[i]; while (cast->type) { /* Don't need to add information already in the list */ ret = 0; #ifdef SWIGRUNTIME_DEBUG printf("SWIG_InitializeModule: look cast %s\n", cast->type->name); #endif if (swig_module.next != &swig_module) { ret = SWIG_MangledTypeQueryModule(swig_module.next, &swig_module, cast->type->name); #ifdef SWIGRUNTIME_DEBUG if (ret) printf("SWIG_InitializeModule: found cast %s\n", ret->name); #endif } if (ret) { if (type == swig_module.type_initial[i]) { #ifdef SWIGRUNTIME_DEBUG printf("SWIG_InitializeModule: skip old type %s\n", ret->name); #endif cast->type = ret; ret = 0; } else { /* Check for casting already in the list */ swig_cast_info *ocast = SWIG_TypeCheck(ret->name, type); #ifdef SWIGRUNTIME_DEBUG if (ocast) printf("SWIG_InitializeModule: skip old cast %s\n", ret->name); #endif if (!ocast) ret = 0; } } if (!ret) { #ifdef SWIGRUNTIME_DEBUG printf("SWIG_InitializeModule: adding cast %s\n", cast->type->name); #endif if (type->cast) { type->cast->prev = cast; cast->next = type->cast; } type->cast = cast; } cast++; } /* Set entry in modules->types array equal to the type */ swig_module.types[i] = type; } swig_module.types[i] = 0; #ifdef SWIGRUNTIME_DEBUG printf("**** SWIG_InitializeModule: Cast List ******\n"); for (i = 0; i < swig_module.size; ++i) { int j = 0; swig_cast_info *cast = swig_module.cast_initial[i]; printf("SWIG_InitializeModule: type %d %s\n", i, swig_module.type_initial[i]->name); while (cast->type) { printf("SWIG_InitializeModule: cast type %s\n", cast->type->name); cast++; ++j; } printf("---- Total casts: %d\n",j); } printf("**** SWIG_InitializeModule: Cast List ******\n"); #endif } /* This function will propagate the clientdata field of type to * any new swig_type_info structures that have been added into the list * of equivalent types. It is like calling * SWIG_TypeClientData(type, clientdata) a second time. 
*/ SWIGRUNTIME void SWIG_PropagateClientData(void) { size_t i; swig_cast_info *equiv; static int init_run = 0; if (init_run) return; init_run = 1; for (i = 0; i < swig_module.size; i++) { if (swig_module.types[i]->clientdata) { equiv = swig_module.types[i]->cast; while (equiv) { if (!equiv->converter) { if (equiv->type && !equiv->type->clientdata) SWIG_TypeClientData(equiv->type, swig_module.types[i]->clientdata); } equiv = equiv->next; } } } } #ifdef __cplusplus #if 0 { /* c-mode */ #endif } #endif #ifdef __cplusplus extern "C" { #endif /* Python-specific SWIG API */ #define SWIG_newvarlink() SWIG_Python_newvarlink() #define SWIG_addvarlink(p, name, get_attr, set_attr) SWIG_Python_addvarlink(p, name, get_attr, set_attr) #define SWIG_InstallConstants(d, constants) SWIG_Python_InstallConstants(d, constants) /* ----------------------------------------------------------------------------- * global variable support code. * ----------------------------------------------------------------------------- */ typedef struct swig_globalvar { char *name; /* Name of global variable */ PyObject *(*get_attr)(void); /* Return the current value */ int (*set_attr)(PyObject *); /* Set the value */ struct swig_globalvar *next; } swig_globalvar; typedef struct swig_varlinkobject { PyObject_HEAD swig_globalvar *vars; } swig_varlinkobject; SWIGINTERN PyObject * swig_varlink_repr(swig_varlinkobject *SWIGUNUSEDPARM(v)) { return PyString_FromString(""); } SWIGINTERN PyObject * swig_varlink_str(swig_varlinkobject *v) { PyObject *str = PyString_FromString("("); swig_globalvar *var; for (var = v->vars; var; var=var->next) { PyString_ConcatAndDel(&str,PyString_FromString(var->name)); if (var->next) PyString_ConcatAndDel(&str,PyString_FromString(", ")); } PyString_ConcatAndDel(&str,PyString_FromString(")")); return str; } SWIGINTERN int swig_varlink_print(swig_varlinkobject *v, FILE *fp, int SWIGUNUSEDPARM(flags)) { PyObject *str = swig_varlink_str(v); fprintf(fp,"Swig global variables "); fprintf(fp,"%s\n", PyString_AsString(str)); Py_DECREF(str); return 0; } SWIGINTERN void swig_varlink_dealloc(swig_varlinkobject *v) { swig_globalvar *var = v->vars; while (var) { swig_globalvar *n = var->next; free(var->name); free(var); var = n; } } SWIGINTERN PyObject * swig_varlink_getattr(swig_varlinkobject *v, char *n) { PyObject *res = NULL; swig_globalvar *var = v->vars; while (var) { if (strcmp(var->name,n) == 0) { res = (*var->get_attr)(); break; } var = var->next; } if (res == NULL && !PyErr_Occurred()) { PyErr_SetString(PyExc_NameError,"Unknown C global variable"); } return res; } SWIGINTERN int swig_varlink_setattr(swig_varlinkobject *v, char *n, PyObject *p) { int res = 1; swig_globalvar *var = v->vars; while (var) { if (strcmp(var->name,n) == 0) { res = (*var->set_attr)(p); break; } var = var->next; } if (res == 1 && !PyErr_Occurred()) { PyErr_SetString(PyExc_NameError,"Unknown C global variable"); } return res; } SWIGINTERN PyTypeObject* swig_varlink_type(void) { static char varlink__doc__[] = "Swig var link object"; static PyTypeObject varlink_type; static int type_init = 0; if (!type_init) { const PyTypeObject tmp = { PyObject_HEAD_INIT(NULL) 0, /* Number of items in variable part (ob_size) */ (char *)"swigvarlink", /* Type name (tp_name) */ sizeof(swig_varlinkobject), /* Basic size (tp_basicsize) */ 0, /* Itemsize (tp_itemsize) */ (destructor) swig_varlink_dealloc, /* Deallocator (tp_dealloc) */ (printfunc) swig_varlink_print, /* Print (tp_print) */ (getattrfunc) swig_varlink_getattr, /* get attr (tp_getattr) */ 
(setattrfunc) swig_varlink_setattr, /* Set attr (tp_setattr) */ 0, /* tp_compare */ (reprfunc) swig_varlink_repr, /* tp_repr */ 0, /* tp_as_number */ 0, /* tp_as_sequence */ 0, /* tp_as_mapping */ 0, /* tp_hash */ 0, /* tp_call */ (reprfunc)swig_varlink_str, /* tp_str */ 0, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ 0, /* tp_flags */ varlink__doc__, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ #if PY_VERSION_HEX >= 0x02020000 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, /* tp_iter -> tp_weaklist */ #endif #if PY_VERSION_HEX >= 0x02030000 0, /* tp_del */ #endif #ifdef COUNT_ALLOCS 0,0,0,0 /* tp_alloc -> tp_next */ #endif }; varlink_type = tmp; varlink_type.ob_type = &PyType_Type; type_init = 1; } return &varlink_type; } /* Create a variable linking object for use later */ SWIGINTERN PyObject * SWIG_Python_newvarlink(void) { swig_varlinkobject *result = PyObject_NEW(swig_varlinkobject, swig_varlink_type()); if (result) { result->vars = 0; } return ((PyObject*) result); } SWIGINTERN void SWIG_Python_addvarlink(PyObject *p, char *name, PyObject *(*get_attr)(void), int (*set_attr)(PyObject *p)) { swig_varlinkobject *v = (swig_varlinkobject *) p; swig_globalvar *gv = (swig_globalvar *) malloc(sizeof(swig_globalvar)); if (gv) { size_t size = strlen(name)+1; gv->name = (char *)malloc(size); if (gv->name) { strncpy(gv->name,name,size); gv->get_attr = get_attr; gv->set_attr = set_attr; gv->next = v->vars; } } v->vars = gv; } SWIGINTERN PyObject * SWIG_globals(void) { static PyObject *_SWIG_globals = 0; if (!_SWIG_globals) _SWIG_globals = SWIG_newvarlink(); return _SWIG_globals; } /* ----------------------------------------------------------------------------- * constants/methods manipulation * ----------------------------------------------------------------------------- */ /* Install Constants */ SWIGINTERN void SWIG_Python_InstallConstants(PyObject *d, swig_const_info constants[]) { PyObject *obj = 0; size_t i; for (i = 0; constants[i].type; ++i) { switch(constants[i].type) { case SWIG_PY_POINTER: obj = SWIG_NewPointerObj(constants[i].pvalue, *(constants[i]).ptype,0); break; case SWIG_PY_BINARY: obj = SWIG_NewPackedObj(constants[i].pvalue, constants[i].lvalue, *(constants[i].ptype)); break; default: obj = 0; break; } if (obj) { PyDict_SetItemString(d, constants[i].name, obj); Py_DECREF(obj); } } } /* -----------------------------------------------------------------------------*/ /* Fix SwigMethods to carry the callback ptrs when needed */ /* -----------------------------------------------------------------------------*/ SWIGINTERN void SWIG_Python_FixMethods(PyMethodDef *methods, swig_const_info *const_table, swig_type_info **types, swig_type_info **types_initial) { size_t i; for (i = 0; methods[i].ml_name; ++i) { const char *c = methods[i].ml_doc; if (c && (c = strstr(c, "swig_ptr: "))) { int j; swig_const_info *ci = 0; const char *name = c + 10; for (j = 0; const_table[j].type; ++j) { if (strncmp(const_table[j].name, name, strlen(const_table[j].name)) == 0) { ci = &(const_table[j]); break; } } if (ci) { size_t shift = (ci->ptype) - types; swig_type_info *ty = types_initial[shift]; size_t ldoc = (c - methods[i].ml_doc); size_t lptr = strlen(ty->name)+2*sizeof(void*)+2; char *ndoc = (char*)malloc(ldoc + lptr + 10); if (ndoc) { char *buff = ndoc; void *ptr = (ci->type == SWIG_PY_POINTER) ? 
ci->pvalue : 0; if (ptr) { strncpy(buff, methods[i].ml_doc, ldoc); buff += ldoc; strncpy(buff, "swig_ptr: ", 10); buff += 10; SWIG_PackVoidPtr(buff, ptr, ty->name, lptr); methods[i].ml_doc = ndoc; } } } } } } #ifdef __cplusplus } #endif /* -----------------------------------------------------------------------------* * Partial Init method * -----------------------------------------------------------------------------*/ #ifdef __cplusplus extern "C" #endif SWIGEXPORT void SWIG_init(void) { PyObject *m, *d; /* Fix SwigMethods to carry the callback ptrs when needed */ SWIG_Python_FixMethods(SwigMethods, swig_const_table, swig_types, swig_type_initial); m = Py_InitModule((char *) SWIG_name, SwigMethods); d = PyModule_GetDict(m); SWIG_InitializeModule(0); SWIG_InstallConstants(d,swig_const_table); import_array(); } brian-1.3.1/brian/utils/fastexp/fexp.c000066400000000000000000000047531167451777000176520ustar00rootroot00000000000000/* -------------------------------------------------------------- Fastest exponent function with 11-bit precision. Home page: www.imach.uran.ru/exp Copyright 2001-2002 by Dr. Raul N.Shakirov, IMach of RAS(UB), Phillip S. Pang, Ph.D. Biochemistry and Molecular Biophysics. Columbia University. NYC. All Rights Reserved. Permission has been granted to copy, distribute and modify software in any context without fee, including a commercial application, provided that the aforesaid copyright statement is present here as well as exhaustive description of changes. THE SOFTWARE IS DISTRIBUTED "AS IS". NO WARRANTY OF ANY KIND IS EXPRESSED OR IMPLIED. YOU USE AT YOUR OWN RISK. THE AUTHOR WILL NOT BE LIABLE FOR DATA LOSS, DAMAGES, LOSS OF PROFITS OR ANY OTHER KIND OF LOSS WHILE USING OR MISUSING THIS SOFTWARE. Method: A Fast, Compact Approximation of the Exponential Function Technical Report IDSIA-07-98 to appear in Neural Computation 11(4) Nicol N. Schraudolph ftp://ftp.idsia.ch/pub/nic/exp.ps.gz -------------------------------------------------------------- */ #include "fexp.h" /* Header file for fexp() function */ #ifdef __cplusplus extern "C" { #endif/*__cplusplus*/ /* -------------------------------------------------------------- Structure and constants for fexp() function and macros. -------------------------------------------------------------- */ union eco _eco; const double _eco_m = (1048576L/0.693147180559945309417232121458177); const double _eco_a = (1072693248L - 60801L); /* -------------------------------------------------------------- Name: fexp Purpose: 11-bit precision exponent for Intel x86. Usage: fexp (arg) Domain: Same as for standard exp() function (approximately -709 <= arg <= 709). Result: Approximate exp of arg, if within domain, otherwise undefined. -------------------------------------------------------------- */ inline double fexp (double arg) { #ifdef _MSC_VER #pragma warning(push) #pragma warning(disable: 4410) __asm fld _eco_m __asm fmul arg __asm fadd _eco_a __asm fistp _eco.n.j return _eco.d; #pragma warning(pop) #else /*_MSC_VER*/ return RFEXP (arg); #endif/*_MSC_VER*/ } #ifdef __cplusplus } #endif/*__cplusplus*/ brian-1.3.1/brian/utils/fastexp/fexp.h000066400000000000000000000105371167451777000176540ustar00rootroot00000000000000/* -------------------------------------------------------------- Header file for exponent function with 11-bit precision. Home page: www.imach.uran.ru/exp Copyright 2001-2002 by Dr. Raul N.Shakirov, IMach of RAS(UB), Phillip S. Pang, Ph.D. Biochemistry and Molecular Biophysics. Columbia University. NYC. All Rights Reserved. 
Permission has been granted to copy, distribute and modify software in any context without fee, including a commercial application, provided that the aforesaid copyright statement is present here as well as exhaustive description of changes. THE SOFTWARE IS DISTRIBUTED "AS IS". NO WARRANTY OF ANY KIND IS EXPRESSED OR IMPLIED. YOU USE AT YOUR OWN RISK. THE AUTHOR WILL NOT BE LIABLE FOR DATA LOSS, DAMAGES, LOSS OF PROFITS OR ANY OTHER KIND OF LOSS WHILE USING OR MISUSING THIS SOFTWARE. Method: A Fast, Compact Approximation of the Exponential Function Technical Report IDSIA-07-98 to appear in Neural Computation 11(4) Nicol N. Schraudolph ftp://ftp.idsia.ch/pub/nic/exp.ps.gz -------------------------------------------------------------- */ #ifndef FEXP_H #define FEXP_H #ifdef __cplusplus extern "C" { #endif/*__cplusplus*/ /* -------------------------------------------------------------- Structure and constants for fexp() function and macros. -------------------------------------------------------------- */ union eco { double d; struct {long i, j;} n; }; extern union eco _eco; extern const double _eco_m; extern const double _eco_a; /* -------------------------------------------------------------- Name: fexp Purpose: 11-bit precision exponent for Intel x86. Usage: fexp (arg) Domain: Same as for standard exp() function (approximately -709 <= arg <= 709). Result: Approximate exp of arg, if within domain, otherwise undefined. -------------------------------------------------------------- */ extern double fexp (double arg); /* -------------------------------------------------------------- Name: RFEXP Purpose: ANSI compatible 11-bit precision exponent for machines, which support IEEE-754 and store least significant bit of integers first. Usage: RFEXP (arg) Domain: Same as for standard exp() function (approximately -709 <= arg <= 709). Result: Approximate exp of arg if within domain, otherwise undefined. -------------------------------------------------------------- */ #define RFEXP(v) (_eco.n.j = (long)(_eco_m * (v) + _eco_a), _eco.d) /* -------------------------------------------------------------- Name: LFEXP Purpose: ANSI compatible 11-bit precision exponent for machines, which support IEEE-754 and store the highest significant bit of integers first. Usage: LFEXP (arg) Domain: Same as for standard exp() function (approximately -709 <= arg <= 709). Result: Approximate exp of arg if within domain, otherwise undefined. -------------------------------------------------------------- */ #define LFEXP(v) (_eco.n.i = (long)(_eco_m * (v) + _eco_a), _eco.d) /* -------------------------------------------------------------- Name: USE_RFEXP Purpose: Check if machine supports IEEE-754 and stores the least significant bit of integers first. Usage: USE_RFEXP() Result: Non-zero value if yes, 0 if no. -------------------------------------------------------------- */ #define USE_RFEXP (eco.d = 1.0, (eco.n.j - 1072693248L || eco.n.i == 0)) /* -------------------------------------------------------------- Name: USE_LFEXP Purpose: Check if machine supports IEEE-754 and stores the highest significant bit of integers first. Usage: USE_LFEXP() Result: Non-zero value if yes, 0 if no. 
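   --------------------------------------------------------------
   How the approximation works (a rough sketch following the
   Schraudolph report cited above; the numbers are simply restated
   from the constants defined in fexp.c, nothing new is derived):

   For an IEEE-754 double whose low 32-bit word is zero, the value
   encoded by the high 32-bit word  i  is approximately

       2 ^ (i / 2^20  -  1023)

   (the approximation comes from treating the mantissa bits inside
   i linearly).  Writing

       i = (2^20 / ln 2) * y  +  (1023 * 2^20  -  60801)

   into the high word therefore gives approximately 2^(y / ln 2),
   i.e. e^y.  That is what RFEXP, LFEXP and fexp() do:  _eco_m is
   2^20 / ln 2,  _eco_a is 1023*2^20 - 60801 = 1072693248 - 60801,
   and the (long) result is written into _eco.n.j (the high word on
   machines that store the least significant bit first, used by
   RFEXP) or _eco.n.i (the high word on the opposite byte order,
   used by LFEXP).  The 60801 offset is the correction constant
   from the report that reduces the overall error; about 11 bits of
   precision remain, matching the "11-bit precision" statement
   above.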
-------------------------------------------------------------- */ #define USE_LFEXP (eco.d = 1.0, (eco.n.i == 1072693248L || eco.n.j == 0)) #ifdef __cplusplus } #endif/*__cplusplus*/ #endif/*FEXP_H*/ brian-1.3.1/brian/utils/fastexp/numpy.i000066400000000000000000001554401167451777000200660ustar00rootroot00000000000000/* -*- C -*- (not really, but good for syntax highlighting) */ #ifdef SWIGPYTHON %{ #ifndef SWIG_FILE_WITH_INIT # define NO_IMPORT_ARRAY #endif #include "stdio.h" #include %} /**********************************************************************/ %fragment("NumPy_Backward_Compatibility", "header") { /* Support older NumPy data type names */ %#if NDARRAY_VERSION < 0x01000000 %#define NPY_BOOL PyArray_BOOL %#define NPY_BYTE PyArray_BYTE %#define NPY_UBYTE PyArray_UBYTE %#define NPY_SHORT PyArray_SHORT %#define NPY_USHORT PyArray_USHORT %#define NPY_INT PyArray_INT %#define NPY_UINT PyArray_UINT %#define NPY_LONG PyArray_LONG %#define NPY_ULONG PyArray_ULONG %#define NPY_LONGLONG PyArray_LONGLONG %#define NPY_ULONGLONG PyArray_ULONGLONG %#define NPY_FLOAT PyArray_FLOAT %#define NPY_DOUBLE PyArray_DOUBLE %#define NPY_LONGDOUBLE PyArray_LONGDOUBLE %#define NPY_CFLOAT PyArray_CFLOAT %#define NPY_CDOUBLE PyArray_CDOUBLE %#define NPY_CLONGDOUBLE PyArray_CLONGDOUBLE %#define NPY_OBJECT PyArray_OBJECT %#define NPY_STRING PyArray_STRING %#define NPY_UNICODE PyArray_UNICODE %#define NPY_VOID PyArray_VOID %#define NPY_NTYPES PyArray_NTYPES %#define NPY_NOTYPE PyArray_NOTYPE %#define NPY_CHAR PyArray_CHAR %#define NPY_USERDEF PyArray_USERDEF %#define npy_intp intp %#define NPY_MAX_BYTE MAX_BYTE %#define NPY_MIN_BYTE MIN_BYTE %#define NPY_MAX_UBYTE MAX_UBYTE %#define NPY_MAX_SHORT MAX_SHORT %#define NPY_MIN_SHORT MIN_SHORT %#define NPY_MAX_USHORT MAX_USHORT %#define NPY_MAX_INT MAX_INT %#define NPY_MIN_INT MIN_INT %#define NPY_MAX_UINT MAX_UINT %#define NPY_MAX_LONG MAX_LONG %#define NPY_MIN_LONG MIN_LONG %#define NPY_MAX_ULONG MAX_ULONG %#define NPY_MAX_LONGLONG MAX_LONGLONG %#define NPY_MIN_LONGLONG MIN_LONGLONG %#define NPY_MAX_ULONGLONG MAX_ULONGLONG %#define NPY_MAX_INTP MAX_INTP %#define NPY_MIN_INTP MIN_INTP %#define NPY_FARRAY FARRAY %#define NPY_F_CONTIGUOUS F_CONTIGUOUS %#endif } /**********************************************************************/ /* The following code originally appeared in * enthought/kiva/agg/src/numeric.i written by Eric Jones. It was * translated from C++ to C by John Hunter. Bill Spotz has modified * it to fix some minor bugs, upgrade from Numeric to numpy (all * versions), add some comments and functionality, and convert from * direct code insertion to SWIG fragments. */ %fragment("NumPy_Macros", "header") { /* Macros to extract array attributes. */ %#define is_array(a) ((a) && PyArray_Check((PyArrayObject *)a)) %#define array_type(a) (int)(PyArray_TYPE(a)) %#define array_numdims(a) (((PyArrayObject *)a)->nd) %#define array_dimensions(a) (((PyArrayObject *)a)->dimensions) %#define array_size(a,i) (((PyArrayObject *)a)->dimensions[i]) %#define array_data(a) (((PyArrayObject *)a)->data) %#define array_is_contiguous(a) (PyArray_ISCONTIGUOUS(a)) %#define array_is_native(a) (PyArray_ISNOTSWAPPED(a)) %#define array_is_fortran(a) (PyArray_ISFORTRAN(a)) } /**********************************************************************/ %fragment("NumPy_Utilities", "header") { /* Given a PyObject, return a string describing its type. 
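     A quick illustration (some_list_obj is a hypothetical name; the return
     values are the literals from the chain of checks below, shown here for
     the error messages they feed):

       pytype_string(Py_None)        ->  "Python None"
       pytype_string(some_list_obj)  ->  "list"

     e.g. obj_to_array_no_conversion() further below uses the second form
     to report "Array of type 'double' required. A 'list' was given".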
*/ char* pytype_string(PyObject* py_obj) { if (py_obj == NULL ) return "C NULL value"; if (py_obj == Py_None ) return "Python None" ; if (PyCallable_Check(py_obj)) return "callable" ; if (PyString_Check( py_obj)) return "string" ; if (PyInt_Check( py_obj)) return "int" ; if (PyFloat_Check( py_obj)) return "float" ; if (PyDict_Check( py_obj)) return "dict" ; if (PyList_Check( py_obj)) return "list" ; if (PyTuple_Check( py_obj)) return "tuple" ; if (PyFile_Check( py_obj)) return "file" ; if (PyModule_Check( py_obj)) return "module" ; if (PyInstance_Check(py_obj)) return "instance" ; return "unkown type"; } /* Given a NumPy typecode, return a string describing the type. */ char* typecode_string(int typecode) { static char* type_names[25] = {"bool", "byte", "unsigned byte", "short", "unsigned short", "int", "unsigned int", "long", "unsigned long", "long long", "unsigned long long", "float", "double", "long double", "complex float", "complex double", "complex long double", "object", "string", "unicode", "void", "ntypes", "notype", "char", "unknown"}; return typecode < 24 ? type_names[typecode] : type_names[24]; } /* Make sure input has correct numpy type. Allow character and byte * to match. Also allow int and long to match. This is deprecated. * You should use PyArray_EquivTypenums() instead. */ int type_match(int actual_type, int desired_type) { return PyArray_EquivTypenums(actual_type, desired_type); } } /**********************************************************************/ %fragment("NumPy_Object_to_Array", "header", fragment="NumPy_Backward_Compatibility", fragment="NumPy_Macros", fragment="NumPy_Utilities") { /* Given a PyObject pointer, cast it to a PyArrayObject pointer if * legal. If not, set the python error string appropriately and * return NULL. */ PyArrayObject* obj_to_array_no_conversion(PyObject* input, int typecode) { PyArrayObject* ary = NULL; if (is_array(input) && (typecode == NPY_NOTYPE || PyArray_EquivTypenums(array_type(input), typecode))) { ary = (PyArrayObject*) input; } else if is_array(input) { char* desired_type = typecode_string(typecode); char* actual_type = typecode_string(array_type(input)); PyErr_Format(PyExc_TypeError, "Array of type '%s' required. Array of type '%s' given", desired_type, actual_type); ary = NULL; } else { char * desired_type = typecode_string(typecode); char * actual_type = pytype_string(input); PyErr_Format(PyExc_TypeError, "Array of type '%s' required. A '%s' was given", desired_type, actual_type); ary = NULL; } return ary; } /* Convert the given PyObject to a NumPy array with the given * typecode. On success, return a valid PyArrayObject* with the * correct type. On failure, the python error string will be set and * the routine returns NULL. */ PyArrayObject* obj_to_array_allow_conversion(PyObject* input, int typecode, int* is_new_object) { PyArrayObject* ary = NULL; PyObject* py_obj; if (is_array(input) && (typecode == NPY_NOTYPE || PyArray_EquivTypenums(array_type(input),typecode))) { ary = (PyArrayObject*) input; *is_new_object = 0; } else { py_obj = PyArray_FromObject(input, typecode, 0, 0); /* If NULL, PyArray_FromObject will have set python error value.*/ ary = (PyArrayObject*) py_obj; *is_new_object = 1; } return ary; } /* Given a PyArrayObject, check to see if it is contiguous. If so, * return the input pointer and flag it as not a new object. If it is * not contiguous, create a new PyArrayObject using the original data, * flag it as a new object and return the pointer. 
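 *
 * A rough sketch of the ownership pattern the is_new_object flag implies
 * (hypothetical caller; the generated "in"/"freearg" typemaps below do
 * exactly this with $argnum-suffixed variables):
 *
 *   int is_new_object = 0;
 *   PyArrayObject* ary =
 *     obj_to_array_contiguous_allow_conversion(obj, NPY_DOUBLE,
 *                                              &is_new_object);
 *   if (!ary) return NULL;                   // Python error already set
 *   ... use array_data(ary) ...
 *   if (is_new_object) { Py_DECREF(ary); }   // drop the temporary copy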
*/ PyArrayObject* make_contiguous(PyArrayObject* ary, int* is_new_object, int min_dims, int max_dims) { PyArrayObject* result; if (array_is_contiguous(ary)) { result = ary; *is_new_object = 0; } else { result = (PyArrayObject*) PyArray_ContiguousFromObject((PyObject*)ary, array_type(ary), min_dims, max_dims); *is_new_object = 1; } return result; } /* Convert a given PyObject to a contiguous PyArrayObject of the * specified type. If the input object is not a contiguous * PyArrayObject, a new one will be created and the new object flag * will be set. */ PyArrayObject* obj_to_array_contiguous_allow_conversion(PyObject* input, int typecode, int* is_new_object) { int is_new1 = 0; int is_new2 = 0; PyArrayObject* ary2; PyArrayObject* ary1 = obj_to_array_allow_conversion(input, typecode, &is_new1); if (ary1) { ary2 = make_contiguous(ary1, &is_new2, 0, 0); if ( is_new1 && is_new2) { Py_DECREF(ary1); } ary1 = ary2; } *is_new_object = is_new1 || is_new2; return ary1; } } /**********************************************************************/ %fragment("NumPy_Array_Requirements", "header", fragment="NumPy_Backward_Compatibility", fragment="NumPy_Macros") { /* Test whether a python object is contiguous. If array is * contiguous, return 1. Otherwise, set the python error string and * return 0. */ int require_contiguous(PyArrayObject* ary) { int contiguous = 1; if (!array_is_contiguous(ary)) { PyErr_SetString(PyExc_TypeError, "Array must be contiguous. A non-contiguous array was given"); contiguous = 0; } return contiguous; } /* Require that a numpy array is not byte-swapped. If the array is * not byte-swapped, return 1. Otherwise, set the python error string * and return 0. */ int require_native(PyArrayObject* ary) { int native = 1; if (!array_is_native(ary)) { PyErr_SetString(PyExc_TypeError, "Array must have native byteorder. " "A byte-swapped array was given"); native = 0; } return native; } /* Require the given PyArrayObject to have a specified number of * dimensions. If the array has the specified number of dimensions, * return 1. Otherwise, set the python error string and return 0. */ int require_dimensions(PyArrayObject* ary, int exact_dimensions) { int success = 1; if (array_numdims(ary) != exact_dimensions) { PyErr_Format(PyExc_TypeError, "Array must have %d dimensions. Given array has %d dimensions", exact_dimensions, array_numdims(ary)); success = 0; } return success; } /* Require the given PyArrayObject to have one of a list of specified * number of dimensions. If the array has one of the specified number * of dimensions, return 1. Otherwise, set the python error string * and return 0. */ int require_dimensions_n(PyArrayObject* ary, int* exact_dimensions, int n) { int success = 0; int i; char dims_str[255] = ""; char s[255]; for (i = 0; i < n && !success; i++) { if (array_numdims(ary) == exact_dimensions[i]) { success = 1; } } if (!success) { for (i = 0; i < n-1; i++) { sprintf(s, "%d, ", exact_dimensions[i]); strcat(dims_str,s); } sprintf(s, " or %d", exact_dimensions[n-1]); strcat(dims_str,s); PyErr_Format(PyExc_TypeError, "Array must have %s dimensions. Given array has %d dimensions", dims_str, array_numdims(ary)); } return success; } /* Require the given PyArrayObject to have a specified shape. If the * array has the specified shape, return 1. Otherwise, set the python * error string and return 0. 
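 *
 * A small sketch of the size argument (hypothetical shapes; a -1 entry is
 * a wildcard that matches any extent, which is how the dynamically sized
 * typemaps below call this check):
 *
 *   npy_intp size[2] = { 3, -1 };               // require shape (3, any)
 *   if (!require_size(ary, size, 2)) return NULL;
 *   // a failing check reports, for example:
 *   //   Array must have shape of [3,*]. Given array has shape of [4,7]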
*/ int require_size(PyArrayObject* ary, npy_intp* size, int n) { int i; int success = 1; int len; char desired_dims[255] = "["; char s[255]; char actual_dims[255] = "["; for(i=0; i < n;i++) { if (size[i] != -1 && size[i] != array_size(ary,i)) { success = 0; } } if (!success) { for (i = 0; i < n; i++) { if (size[i] == -1) { sprintf(s, "*,"); } else { sprintf(s, "%ld,", (long int)size[i]); } strcat(desired_dims,s); } len = strlen(desired_dims); desired_dims[len-1] = ']'; for (i = 0; i < n; i++) { sprintf(s, "%ld,", (long int)array_size(ary,i)); strcat(actual_dims,s); } len = strlen(actual_dims); actual_dims[len-1] = ']'; PyErr_Format(PyExc_TypeError, "Array must have shape of %s. Given array has shape of %s", desired_dims, actual_dims); } return success; } /* Require the given PyArrayObject to to be FORTRAN ordered. If the * the PyArrayObject is already FORTRAN ordered, do nothing. Else, * set the FORTRAN ordering flag and recompute the strides. */ int require_fortran(PyArrayObject* ary) { int success = 1; int nd = array_numdims(ary); int i; if (array_is_fortran(ary)) return success; /* Set the FORTRAN ordered flag */ ary->flags = NPY_FARRAY; /* Recompute the strides */ ary->strides[0] = ary->strides[nd-1]; for (i=1; i < nd; ++i) ary->strides[i] = ary->strides[i-1] * array_size(ary,i-1); return success; } } /* Combine all NumPy fragments into one for convenience */ %fragment("NumPy_Fragments", "header", fragment="NumPy_Backward_Compatibility", fragment="NumPy_Macros", fragment="NumPy_Utilities", fragment="NumPy_Object_to_Array", fragment="NumPy_Array_Requirements") { } /* End John Hunter translation (with modifications by Bill Spotz) */ /* %numpy_typemaps() macro * * This macro defines a family of 41 typemaps that allow C arguments * of the form * * (DATA_TYPE IN_ARRAY1[ANY]) * (DATA_TYPE* IN_ARRAY1, DIM_TYPE DIM1) * (DIM_TYPE DIM1, DATA_TYPE* IN_ARRAY1) * * (DATA_TYPE IN_ARRAY2[ANY][ANY]) * (DATA_TYPE* IN_ARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) * (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* IN_ARRAY2) * (DATA_TYPE* IN_FARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) * (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* IN_FARRAY2) * * (DATA_TYPE IN_ARRAY3[ANY][ANY][ANY]) * (DATA_TYPE* IN_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) * (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* IN_ARRAY3) * (DATA_TYPE* IN_FARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) * (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* IN_FARRAY3) * * (DATA_TYPE INPLACE_ARRAY1[ANY]) * (DATA_TYPE* INPLACE_ARRAY1, DIM_TYPE DIM1) * (DIM_TYPE DIM1, DATA_TYPE* INPLACE_ARRAY1) * * (DATA_TYPE INPLACE_ARRAY2[ANY][ANY]) * (DATA_TYPE* INPLACE_ARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) * (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* INPLACE_ARRAY2) * (DATA_TYPE* INPLACE_FARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) * (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* INPLACE_FARRAY2) * * (DATA_TYPE INPLACE_ARRAY3[ANY][ANY][ANY]) * (DATA_TYPE* INPLACE_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) * (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* INPLACE_ARRAY3) * (DATA_TYPE* INPLACE_FARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) * (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* INPLACE_FARRAY3) * * (DATA_TYPE ARGOUT_ARRAY1[ANY]) * (DATA_TYPE* ARGOUT_ARRAY1, DIM_TYPE DIM1) * (DIM_TYPE DIM1, DATA_TYPE* ARGOUT_ARRAY1) * * (DATA_TYPE ARGOUT_ARRAY2[ANY][ANY]) * * (DATA_TYPE ARGOUT_ARRAY3[ANY][ANY][ANY]) * * (DATA_TYPE** ARGOUTVIEW_ARRAY1, DIM_TYPE* DIM1) * (DIM_TYPE* DIM1, DATA_TYPE** ARGOUTVIEW_ARRAY1) * * (DATA_TYPE** 
ARGOUTVIEW_ARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2) * (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEW_ARRAY2) * (DATA_TYPE** ARGOUTVIEW_FARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2) * (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEW_FARRAY2) * * (DATA_TYPE** ARGOUTVIEW_ARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3) * (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEW_ARRAY3) * (DATA_TYPE** ARGOUTVIEW_FARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3) * (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEW_FARRAY3) * * where "DATA_TYPE" is any type supported by the NumPy module, and * "DIM_TYPE" is any int-like type suitable for specifying dimensions. * The difference between "ARRAY" typemaps and "FARRAY" typemaps is * that the "FARRAY" typemaps expect FORTRAN ordering of * multidimensional arrays. In python, the dimensions will not need * to be specified (except for the "DATA_TYPE* ARGOUT_ARRAY1" * typemaps). The IN_ARRAYs can be a numpy array or any sequence that * can be converted to a numpy array of the specified type. The * INPLACE_ARRAYs must be numpy arrays of the appropriate type. The * ARGOUT_ARRAYs will be returned as new numpy arrays of the * appropriate type. * * These typemaps can be applied to existing functions using the * %apply directive. For example: * * %apply (double* IN_ARRAY1, int DIM1) {(double* series, int length)}; * double prod(double* series, int length); * * %apply (int DIM1, int DIM2, double* INPLACE_ARRAY2) * {(int rows, int cols, double* matrix )}; * void floor(int rows, int cols, double* matrix, double f); * * %apply (double IN_ARRAY3[ANY][ANY][ANY]) * {(double tensor[2][2][2] )}; * %apply (double ARGOUT_ARRAY3[ANY][ANY][ANY]) * {(double low[2][2][2] )}; * %apply (double ARGOUT_ARRAY3[ANY][ANY][ANY]) * {(double upp[2][2][2] )}; * void luSplit(double tensor[2][2][2], * double low[2][2][2], * double upp[2][2][2] ); * * or directly with * * double prod(double* IN_ARRAY1, int DIM1); * * void floor(int DIM1, int DIM2, double* INPLACE_ARRAY2, double f); * * void luSplit(double IN_ARRAY3[ANY][ANY][ANY], * double ARGOUT_ARRAY3[ANY][ANY][ANY], * double ARGOUT_ARRAY3[ANY][ANY][ANY]); */ %define %numpy_typemaps(DATA_TYPE, DATA_TYPECODE, DIM_TYPE) /************************/ /* Input Array Typemaps */ /************************/ /* Typemap suite for (DATA_TYPE IN_ARRAY1[ANY]) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE IN_ARRAY1[ANY]) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE IN_ARRAY1[ANY]) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[1] = { $1_dim0 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 1) || !require_size(array, size, 1)) SWIG_fail; $1 = ($1_ltype) array_data(array); } %typemap(freearg) (DATA_TYPE IN_ARRAY1[ANY]) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DATA_TYPE* IN_ARRAY1, DIM_TYPE DIM1) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* IN_ARRAY1, DIM_TYPE DIM1) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* IN_ARRAY1, DIM_TYPE DIM1) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[1] = { -1 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 1) || 
!require_size(array, size, 1)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); } %typemap(freearg) (DATA_TYPE* IN_ARRAY1, DIM_TYPE DIM1) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DIM_TYPE DIM1, DATA_TYPE* IN_ARRAY1) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DATA_TYPE* IN_ARRAY1) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DATA_TYPE* IN_ARRAY1) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[1] = {-1}; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 1) || !require_size(array, size, 1)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DATA_TYPE*) array_data(array); } %typemap(freearg) (DIM_TYPE DIM1, DATA_TYPE* IN_ARRAY1) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DATA_TYPE IN_ARRAY2[ANY][ANY]) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE IN_ARRAY2[ANY][ANY]) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE IN_ARRAY2[ANY][ANY]) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[2] = { $1_dim0, $1_dim1 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 2) || !require_size(array, size, 2)) SWIG_fail; $1 = ($1_ltype) array_data(array); } %typemap(freearg) (DATA_TYPE IN_ARRAY2[ANY][ANY]) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DATA_TYPE* IN_ARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* IN_ARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* IN_ARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[2] = { -1, -1 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 2) || !require_size(array, size, 2)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); $3 = (DIM_TYPE) array_size(array,1); } %typemap(freearg) (DATA_TYPE* IN_ARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* IN_ARRAY2) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* IN_ARRAY2) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* IN_ARRAY2) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[2] = { -1, -1 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 2) || !require_size(array, size, 2)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DIM_TYPE) array_size(array,1); $3 = (DATA_TYPE*) array_data(array); } %typemap(freearg) (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* IN_ARRAY2) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DATA_TYPE* IN_FARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) */ 
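/* A usage sketch for this suite (the function and argument names below are
 * made up for illustration; the %apply pattern itself follows the examples
 * in the header comment above):
 *
 *   %apply (double* IN_FARRAY2, int DIM1, int DIM2)
 *          {(double* matrix, int rows, int cols)};
 *   double trace(double* matrix, int rows, int cols);
 *
 * The wrapped trace() then accepts any Python sequence convertible to a
 * 2-d double array; require_fortran() flags the array as FORTRAN ordered
 * and recomputes its strides before the data pointer is handed to C. */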
%typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* IN_FARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* IN_FARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[2] = { -1, -1 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 2) || !require_size(array, size, 2) || !require_fortran(array)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); $3 = (DIM_TYPE) array_size(array,1); } %typemap(freearg) (DATA_TYPE* IN_FARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* IN_FARRAY2) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* IN_FARRAY2) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* IN_FARRAY2) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[2] = { -1, -1 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 2) || !require_size(array, size, 2) || !require_fortran(array)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DIM_TYPE) array_size(array,1); $3 = (DATA_TYPE*) array_data(array); } %typemap(freearg) (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* IN_FARRAY2) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DATA_TYPE IN_ARRAY3[ANY][ANY][ANY]) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE IN_ARRAY3[ANY][ANY][ANY]) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE IN_ARRAY3[ANY][ANY][ANY]) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[3] = { $1_dim0, $1_dim1, $1_dim2 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 3) || !require_size(array, size, 3)) SWIG_fail; $1 = ($1_ltype) array_data(array); } %typemap(freearg) (DATA_TYPE IN_ARRAY3[ANY][ANY][ANY]) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DATA_TYPE* IN_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, * DIM_TYPE DIM3) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* IN_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* IN_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[3] = { -1, -1, -1 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 3) || !require_size(array, size, 3)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); $3 = (DIM_TYPE) array_size(array,1); $4 = (DIM_TYPE) array_size(array,2); } %typemap(freearg) (DATA_TYPE* IN_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, * DATA_TYPE* IN_ARRAY3) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, 
fragment="NumPy_Macros") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* IN_ARRAY3) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* IN_ARRAY3) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[3] = { -1, -1, -1 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 3) || !require_size(array, size, 3)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DIM_TYPE) array_size(array,1); $3 = (DIM_TYPE) array_size(array,2); $4 = (DATA_TYPE*) array_data(array); } %typemap(freearg) (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* IN_ARRAY3) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DATA_TYPE* IN_FARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, * DIM_TYPE DIM3) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* IN_FARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* IN_FARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[3] = { -1, -1, -1 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 3) || !require_size(array, size, 3) || !require_fortran(array)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); $3 = (DIM_TYPE) array_size(array,1); $4 = (DIM_TYPE) array_size(array,2); } %typemap(freearg) (DATA_TYPE* IN_FARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /* Typemap suite for (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, * DATA_TYPE* IN_FARRAY3) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* IN_FARRAY3) { $1 = is_array($input) || PySequence_Check($input); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* IN_FARRAY3) (PyArrayObject* array=NULL, int is_new_object=0) { npy_intp size[3] = { -1, -1, -1 }; array = obj_to_array_contiguous_allow_conversion($input, DATA_TYPECODE, &is_new_object); if (!array || !require_dimensions(array, 3) || !require_size(array, size, 3) || !require_fortran(array)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DIM_TYPE) array_size(array,1); $3 = (DIM_TYPE) array_size(array,2); $4 = (DATA_TYPE*) array_data(array); } %typemap(freearg) (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* IN_FARRAY3) { if (is_new_object$argnum && array$argnum) { Py_DECREF(array$argnum); } } /***************************/ /* In-Place Array Typemaps */ /***************************/ /* Typemap suite for (DATA_TYPE INPLACE_ARRAY1[ANY]) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE INPLACE_ARRAY1[ANY]) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE INPLACE_ARRAY1[ANY]) (PyArrayObject* array=NULL) { npy_intp size[1] = { $1_dim0 }; array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,1) || !require_size(array, size, 1) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = ($1_ltype) array_data(array); } /* Typemap suite for 
(DATA_TYPE* INPLACE_ARRAY1, DIM_TYPE DIM1) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* INPLACE_ARRAY1, DIM_TYPE DIM1) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* INPLACE_ARRAY1, DIM_TYPE DIM1) (PyArrayObject* array=NULL, int i=1) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,1) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = 1; for (i=0; i < array_numdims(array); ++i) $2 *= array_size(array,i); } /* Typemap suite for (DIM_TYPE DIM1, DATA_TYPE* INPLACE_ARRAY1) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DATA_TYPE* INPLACE_ARRAY1) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DATA_TYPE* INPLACE_ARRAY1) (PyArrayObject* array=NULL, int i=0) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,1) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = 1; for (i=0; i < array_numdims(array); ++i) $1 *= array_size(array,i); $2 = (DATA_TYPE*) array_data(array); } /* Typemap suite for (DATA_TYPE INPLACE_ARRAY2[ANY][ANY]) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE INPLACE_ARRAY2[ANY][ANY]) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE INPLACE_ARRAY2[ANY][ANY]) (PyArrayObject* array=NULL) { npy_intp size[2] = { $1_dim0, $1_dim1 }; array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,2) || !require_size(array, size, 2) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = ($1_ltype) array_data(array); } /* Typemap suite for (DATA_TYPE* INPLACE_ARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* INPLACE_ARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* INPLACE_ARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) (PyArrayObject* array=NULL) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,2) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); $3 = (DIM_TYPE) array_size(array,1); } /* Typemap suite for (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* INPLACE_ARRAY2) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* INPLACE_ARRAY2) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* INPLACE_ARRAY2) (PyArrayObject* array=NULL) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,2) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DIM_TYPE) array_size(array,1); $3 = (DATA_TYPE*) array_data(array); } /* Typemap suite for (DATA_TYPE* INPLACE_FARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* INPLACE_FARRAY2, DIM_TYPE 
DIM1, DIM_TYPE DIM2) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* INPLACE_FARRAY2, DIM_TYPE DIM1, DIM_TYPE DIM2) (PyArrayObject* array=NULL) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,2) || !require_contiguous(array) || !require_native(array) || !require_fortran(array)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); $3 = (DIM_TYPE) array_size(array,1); } /* Typemap suite for (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* INPLACE_FARRAY2) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* INPLACE_FARRAY2) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DIM_TYPE DIM2, DATA_TYPE* INPLACE_FARRAY2) (PyArrayObject* array=NULL) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,2) || !require_contiguous(array) || !require_native(array) || !require_fortran(array)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DIM_TYPE) array_size(array,1); $3 = (DATA_TYPE*) array_data(array); } /* Typemap suite for (DATA_TYPE INPLACE_ARRAY3[ANY][ANY][ANY]) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE INPLACE_ARRAY3[ANY][ANY][ANY]) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE INPLACE_ARRAY3[ANY][ANY][ANY]) (PyArrayObject* array=NULL) { npy_intp size[3] = { $1_dim0, $1_dim1, $1_dim2 }; array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,3) || !require_size(array, size, 3) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = ($1_ltype) array_data(array); } /* Typemap suite for (DATA_TYPE* INPLACE_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, * DIM_TYPE DIM3) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* INPLACE_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* INPLACE_ARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) (PyArrayObject* array=NULL) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,3) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); $3 = (DIM_TYPE) array_size(array,1); $4 = (DIM_TYPE) array_size(array,2); } /* Typemap suite for (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, * DATA_TYPE* INPLACE_ARRAY3) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* INPLACE_ARRAY3) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* INPLACE_ARRAY3) (PyArrayObject* array=NULL) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,3) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DIM_TYPE) array_size(array,1); $3 = (DIM_TYPE) array_size(array,2); $4 = (DATA_TYPE*) array_data(array); } /* Typemap suite for (DATA_TYPE* 
INPLACE_FARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, * DIM_TYPE DIM3) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DATA_TYPE* INPLACE_FARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DATA_TYPE* INPLACE_FARRAY3, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3) (PyArrayObject* array=NULL) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,3) || !require_contiguous(array) || !require_native(array) || !require_fortran(array)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = (DIM_TYPE) array_size(array,0); $3 = (DIM_TYPE) array_size(array,1); $4 = (DIM_TYPE) array_size(array,2); } /* Typemap suite for (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, * DATA_TYPE* INPLACE_FARRAY3) */ %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* INPLACE_FARRAY3) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DATA_TYPE* INPLACE_FARRAY3) (PyArrayObject* array=NULL) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,3) || !require_contiguous(array) || !require_native(array) || !require_fortran(array)) SWIG_fail; $1 = (DIM_TYPE) array_size(array,0); $2 = (DIM_TYPE) array_size(array,1); $3 = (DIM_TYPE) array_size(array,2); $4 = (DATA_TYPE*) array_data(array); } /*************************/ /* Argout Array Typemaps */ /*************************/ /* Typemap suite for (DATA_TYPE ARGOUT_ARRAY1[ANY]) */ %typemap(in,numinputs=0, fragment="NumPy_Backward_Compatibility,NumPy_Macros") (DATA_TYPE ARGOUT_ARRAY1[ANY]) (PyObject * array = NULL) { npy_intp dims[1] = { $1_dim0 }; array = PyArray_SimpleNew(1, dims, DATA_TYPECODE); if (!array) SWIG_fail; $1 = ($1_ltype) array_data(array); } %typemap(argout) (DATA_TYPE ARGOUT_ARRAY1[ANY]) { $result = SWIG_Python_AppendOutput($result,array$argnum); } /* Typemap suite for (DATA_TYPE* ARGOUT_ARRAY1, DIM_TYPE DIM1) */ %typemap(in,numinputs=1, fragment="NumPy_Fragments") (DATA_TYPE* ARGOUT_ARRAY1, DIM_TYPE DIM1) (PyObject * array = NULL) { npy_intp dims[1]; if (!PyInt_Check($input)) { char* typestring = pytype_string($input); PyErr_Format(PyExc_TypeError, "Int dimension expected. '%s' given.", typestring); SWIG_fail; } $2 = (DIM_TYPE) PyInt_AsLong($input); dims[0] = (npy_intp) $2; array = PyArray_SimpleNew(1, dims, DATA_TYPECODE); if (!array) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); } %typemap(argout) (DATA_TYPE* ARGOUT_ARRAY1, DIM_TYPE DIM1) { $result = SWIG_Python_AppendOutput($result,array$argnum); } /* Typemap suite for (DIM_TYPE DIM1, DATA_TYPE* ARGOUT_ARRAY1) */ %typemap(in,numinputs=1, fragment="NumPy_Fragments") (DIM_TYPE DIM1, DATA_TYPE* ARGOUT_ARRAY1) (PyObject * array = NULL) { npy_intp dims[1]; if (!PyInt_Check($input)) { char* typestring = pytype_string($input); PyErr_Format(PyExc_TypeError, "Int dimension expected. 
'%s' given.", typestring); SWIG_fail; } $1 = (DIM_TYPE) PyInt_AsLong($input); dims[0] = (npy_intp) $1; array = PyArray_SimpleNew(1, dims, DATA_TYPECODE); if (!array) SWIG_fail; $2 = (DATA_TYPE*) array_data(array); } %typemap(argout) (DIM_TYPE DIM1, DATA_TYPE* ARGOUT_ARRAY1) { $result = SWIG_Python_AppendOutput($result,array$argnum); } /* Typemap suite for (DATA_TYPE ARGOUT_ARRAY2[ANY][ANY]) */ %typemap(in,numinputs=0, fragment="NumPy_Backward_Compatibility,NumPy_Macros") (DATA_TYPE ARGOUT_ARRAY2[ANY][ANY]) (PyObject * array = NULL) { npy_intp dims[2] = { $1_dim0, $1_dim1 }; array = PyArray_SimpleNew(2, dims, DATA_TYPECODE); if (!array) SWIG_fail; $1 = ($1_ltype) array_data(array); } %typemap(argout) (DATA_TYPE ARGOUT_ARRAY2[ANY][ANY]) { $result = SWIG_Python_AppendOutput($result,array$argnum); } /* Typemap suite for (DATA_TYPE ARGOUT_ARRAY3[ANY][ANY][ANY]) */ %typemap(in,numinputs=0, fragment="NumPy_Backward_Compatibility,NumPy_Macros") (DATA_TYPE ARGOUT_ARRAY3[ANY][ANY][ANY]) (PyObject * array = NULL) { npy_intp dims[3] = { $1_dim0, $1_dim1, $1_dim2 }; array = PyArray_SimpleNew(3, dims, DATA_TYPECODE); if (!array) SWIG_fail; $1 = ($1_ltype) array_data(array); } %typemap(argout) (DATA_TYPE ARGOUT_ARRAY3[ANY][ANY][ANY]) { $result = SWIG_Python_AppendOutput($result,array$argnum); } /*****************************/ /* Argoutview Array Typemaps */ /*****************************/ /* Typemap suite for (DATA_TYPE** ARGOUTVIEW_ARRAY1, DIM_TYPE* DIM1) */ %typemap(in,numinputs=0) (DATA_TYPE** ARGOUTVIEW_ARRAY1, DIM_TYPE* DIM1 ) (DATA_TYPE* data_temp , DIM_TYPE dim_temp) { $1 = &data_temp; $2 = &dim_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility") (DATA_TYPE** ARGOUTVIEW_ARRAY1, DIM_TYPE* DIM1) { npy_intp dims[1] = { *$2 }; PyObject * array = PyArray_SimpleNewFromData(1, dims, DATA_TYPECODE, (void*)(*$1)); if (!array) SWIG_fail; $result = SWIG_Python_AppendOutput($result,array); } /* Typemap suite for (DIM_TYPE* DIM1, DATA_TYPE** ARGOUTVIEW_ARRAY1) */ %typemap(in,numinputs=0) (DIM_TYPE* DIM1 , DATA_TYPE** ARGOUTVIEW_ARRAY1) (DIM_TYPE dim_temp, DATA_TYPE* data_temp ) { $1 = &dim_temp; $2 = &data_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility") (DIM_TYPE* DIM1, DATA_TYPE** ARGOUTVIEW_ARRAY1) { npy_intp dims[1] = { *$1 }; PyObject * array = PyArray_SimpleNewFromData(1, dims, DATA_TYPECODE, (void*)(*$2)); if (!array) SWIG_fail; $result = SWIG_Python_AppendOutput($result,array); } /* Typemap suite for (DATA_TYPE** ARGOUTVIEW_ARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2) */ %typemap(in,numinputs=0) (DATA_TYPE** ARGOUTVIEW_ARRAY2, DIM_TYPE* DIM1 , DIM_TYPE* DIM2 ) (DATA_TYPE* data_temp , DIM_TYPE dim1_temp, DIM_TYPE dim2_temp) { $1 = &data_temp; $2 = &dim1_temp; $3 = &dim2_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility") (DATA_TYPE** ARGOUTVIEW_ARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2) { npy_intp dims[2] = { *$2, *$3 }; PyObject * array = PyArray_SimpleNewFromData(2, dims, DATA_TYPECODE, (void*)(*$1)); if (!array) SWIG_fail; $result = SWIG_Python_AppendOutput($result,array); } /* Typemap suite for (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEW_ARRAY2) */ %typemap(in,numinputs=0) (DIM_TYPE* DIM1 , DIM_TYPE* DIM2 , DATA_TYPE** ARGOUTVIEW_ARRAY2) (DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DATA_TYPE* data_temp ) { $1 = &dim1_temp; $2 = &dim2_temp; $3 = &data_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility") (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEW_ARRAY2) { npy_intp dims[2] = { *$1, *$2 }; PyObject * array = 
PyArray_SimpleNewFromData(2, dims, DATA_TYPECODE, (void*)(*$3)); if (!array) SWIG_fail; $result = SWIG_Python_AppendOutput($result,array); } /* Typemap suite for (DATA_TYPE** ARGOUTVIEW_FARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2) */ %typemap(in,numinputs=0) (DATA_TYPE** ARGOUTVIEW_FARRAY2, DIM_TYPE* DIM1 , DIM_TYPE* DIM2 ) (DATA_TYPE* data_temp , DIM_TYPE dim1_temp, DIM_TYPE dim2_temp) { $1 = &data_temp; $2 = &dim1_temp; $3 = &dim2_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility,NumPy_Array_Requirements") (DATA_TYPE** ARGOUTVIEW_FARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2) { npy_intp dims[2] = { *$2, *$3 }; PyObject * obj = PyArray_SimpleNewFromData(2, dims, DATA_TYPECODE, (void*)(*$1)); PyArrayObject * array = (PyArrayObject*) obj; if (!array || !require_fortran(array)) SWIG_fail; $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEW_FARRAY2) */ %typemap(in,numinputs=0) (DIM_TYPE* DIM1 , DIM_TYPE* DIM2 , DATA_TYPE** ARGOUTVIEW_FARRAY2) (DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DATA_TYPE* data_temp ) { $1 = &dim1_temp; $2 = &dim2_temp; $3 = &data_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility,NumPy_Array_Requirements") (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEW_FARRAY2) { npy_intp dims[2] = { *$1, *$2 }; PyObject * obj = PyArray_SimpleNewFromData(2, dims, DATA_TYPECODE, (void*)(*$3)); PyArrayObject * array = (PyArrayObject*) obj; if (!array || !require_fortran(array)) SWIG_fail; $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DATA_TYPE** ARGOUTVIEW_ARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3) */ %typemap(in,numinputs=0) (DATA_TYPE** ARGOUTVIEW_ARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3) (DATA_TYPE* data_temp, DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DIM_TYPE dim3_temp) { $1 = &data_temp; $2 = &dim1_temp; $3 = &dim2_temp; $4 = &dim3_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility") (DATA_TYPE** ARGOUTVIEW_ARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3) { npy_intp dims[3] = { *$2, *$3, *$4 }; PyObject * array = PyArray_SimpleNewFromData(3, dims, DATA_TYPECODE, (void*)(*$1)); if (!array) SWIG_fail; $result = SWIG_Python_AppendOutput($result,array); } /* Typemap suite for (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEW_ARRAY3) */ %typemap(in,numinputs=0) (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEW_ARRAY3) (DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DIM_TYPE dim3_temp, DATA_TYPE* data_temp) { $1 = &dim1_temp; $2 = &dim2_temp; $3 = &dim3_temp; $4 = &data_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility") (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEW_ARRAY3) { npy_intp dims[3] = { *$1, *$2, *$3 }; PyObject * array = PyArray_SimpleNewFromData(3, dims, DATA_TYPECODE, (void*)(*$4)); if (!array) SWIG_fail; $result = SWIG_Python_AppendOutput($result,array); } /* Typemap suite for (DATA_TYPE** ARGOUTVIEW_FARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3) */ %typemap(in,numinputs=0) (DATA_TYPE** ARGOUTVIEW_FARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3) (DATA_TYPE* data_temp, DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DIM_TYPE dim3_temp) { $1 = &data_temp; $2 = &dim1_temp; $3 = &dim2_temp; $4 = &dim3_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility,NumPy_Array_Requirements") (DATA_TYPE** ARGOUTVIEW_FARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3) { npy_intp dims[3] = { *$2, *$3, *$4 }; PyObject * obj 
= PyArray_SimpleNewFromData(3, dims, DATA_TYPECODE, (void*)(*$1)); PyArrayObject * array = (PyArrayObject*) obj; if (!array || !require_fortran(array)) SWIG_fail; $result = SWIG_Python_AppendOutput($result,obj); } /* Typemap suite for (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEW_FARRAY3) */ %typemap(in,numinputs=0) (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEW_FARRAY3) (DIM_TYPE dim1_temp, DIM_TYPE dim2_temp, DIM_TYPE dim3_temp, DATA_TYPE* data_temp) { $1 = &dim1_temp; $2 = &dim2_temp; $3 = &dim3_temp; $4 = &data_temp; } %typemap(argout, fragment="NumPy_Backward_Compatibility,NumPy_Array_Requirements") (DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEW_FARRAY3) { npy_intp dims[3] = { *$1, *$2, *$3 }; PyObject * obj = PyArray_SimpleNewFromData(3, dims, DATA_TYPECODE, (void*)(*$4)); PyArrayObject * array = (PyArrayObject*) obj; if (!array || !require_fortran(array)) SWIG_fail; $result = SWIG_Python_AppendOutput($result,obj); } %enddef /* %numpy_typemaps() macro */ /* *************************************************************** */ /* Concrete instances of the %numpy_typemaps() macro: Each invocation * below applies all of the typemaps above to the specified data type. */ %numpy_typemaps(signed char , NPY_BYTE , int) %numpy_typemaps(unsigned char , NPY_UBYTE , int) %numpy_typemaps(short , NPY_SHORT , int) %numpy_typemaps(unsigned short , NPY_USHORT , int) %numpy_typemaps(int , NPY_INT , int) %numpy_typemaps(unsigned int , NPY_UINT , int) %numpy_typemaps(long , NPY_LONG , int) %numpy_typemaps(unsigned long , NPY_ULONG , int) %numpy_typemaps(long long , NPY_LONGLONG , int) %numpy_typemaps(unsigned long long, NPY_ULONGLONG, int) %numpy_typemaps(float , NPY_FLOAT , int) %numpy_typemaps(double , NPY_DOUBLE , int) /* *************************************************************** * The following macro expansion does not work, because C++ bool is 4 * bytes and NPY_BOOL is 1 byte * * %numpy_typemaps(bool, NPY_BOOL, int) */ /* *************************************************************** * On my Mac, I get the following warning for this macro expansion: * 'swig/python detected a memory leak of type 'long double *', no destructor found.' 
* * %numpy_typemaps(long double, NPY_LONGDOUBLE, int) */ /* *************************************************************** * Swig complains about a syntax error for the following macro * expansions: * * %numpy_typemaps(complex float, NPY_CFLOAT , int) * * %numpy_typemaps(complex double, NPY_CDOUBLE, int) * * %numpy_typemaps(complex long double, NPY_CLONGDOUBLE, int) */ #endif /* SWIGPYTHON */ brian-1.3.1/brian/utils/fastexp/setup.py000066400000000000000000000014531167451777000202500ustar00rootroot00000000000000#!/usr/bin/env python """ setup.py file for SWIG example """ from distutils.core import setup, Extension import os, numpy numpy_base_dir = os.path.split(numpy.__file__)[0] numpy_include_dir = os.path.join(numpy_base_dir, 'core/include') fastexp_module = Extension('_fastexp', sources=['fastexp_wrap.cxx', 'fastexp.cpp', 'fexp.c'], include_dirs=[numpy_include_dir], extra_compile_args=['-O3'] ) setup (name='fastexp', version='0.1', author="Dan Goodman", description="""Simple swig example from docs""", ext_modules=[fastexp_module], py_modules=["fastexp"], ) brian-1.3.1/brian/utils/fastexp/swigcompile.bat000066400000000000000000000001211167451777000215370ustar00rootroot00000000000000@echo off swig -python -c++ fastexp.i setup.py build_ext --inplace -c mingw32 brian-1.3.1/brian/utils/fastexp/testfastexp.py000066400000000000000000000005461167451777000214640ustar00rootroot00000000000000import time from numpy import * from numpy.random import * from fastexp import * x = array([1, 2, 3], dtype=float) y = exp(x) z = zeros(len(x)) fastexp(x, z) print y print z x = randn(10000000) y = empty(x.shape) t = time.time() z = exp(x) print "exp", time.time() - t t = time.time() fastexp(x, y) print "fastexp", time.time() - t brian-1.3.1/brian/utils/fastexp/testfastexp2.py000066400000000000000000000001711167451777000215400ustar00rootroot00000000000000from numpy import * from numpy.random import * from __init__ import * x = randn(3) print exp(x) print fastexp(x) brian-1.3.1/brian/utils/progressreporting.py000066400000000000000000000153031167451777000212330ustar00rootroot00000000000000import sys, time __all__ = ['ProgressReporter'] def time_rep(t): ''' Textual representation of time t given in seconds ''' t = int(t) if t < 60: return str(t) + 's' secs = t % 60 mins = t // 60 if mins < 60: return str(mins) + 'm ' + str(secs) + 's' mins = mins % 60 hours = t // (60 * 60) if hours < 24: return str(hours) + 'h ' + str(mins) + 'm ' + str(secs) + 's' hours = hours % 24 days = t // (60 * 60 * 24) return str(days) + 'd ' + str(hours) + 'h ' + str(mins) + 'm ' + str(secs) + 's' def make_text_report(elapsed, complete): s = str(int(100 * complete)) + '% complete, ' s += time_rep(elapsed) + ' elapsed' if complete > .001: remtime = elapsed / complete - elapsed s += ', approximately ' + time_rep(remtime) + ' remaining.' else: s += '.' return s def build_text_reporter(output_stream): def text_report(elapsed, complete): s = make_text_report(elapsed, complete) + '\n' output_stream.write(s) output_stream.flush() return text_report class ProgressReporter(object): ''' Standard text and graphical progress reports Initialised with arguments: ``report`` Can be one of the following strings: ``'print'``, ``'text'``, ``'stdout'`` Reports progress to standard console. ``'stderr'`` Reports progress to error console. ``'graphical'``, ``'tkinter'`` A simple graphical progress bar using Tkinter. 
Alternatively, it can be any output stream in which case text reports will be sent to it, or a custom callback function ``report(elapsed, complete)`` taking arguments ``elapsed`` the amount of time that has passed and ``complete`` the fraction of the computation finished. ``period`` How often reports should be generated in seconds. Methods: .. method:: start() Call at the beginning of a task to start timing it. .. method:: finish() Call at the end of a task to finish timing it. Note that with the Tkinter class, if you do not call this it will stop the Python script from finishing, stopping memory from being freed up. .. method:: update(complete) Call with the fraction of the task (or subtask if ``subtask()`` has been called) completed, between 0 and 1. .. method:: subtask(complete, tasksize) After calling ``subtask(complete, tasksize)``, subsequent calls to update will report progress between a fraction ``complete`` and ``complete+tasksize`` of the total task. ``complete`` represents the amount of the total task completed at the beginning of the task, and ``tasksize`` the size of the subtask as a proportion of the whole task. .. method:: equal_subtask(tasknum, numtasks) If a task can be divided into ``numtasks`` equally sized subtasks, you can use this method instead of ``subtask``, where ``tasknum`` is the number of the subtask about to start. ''' def __init__(self, report, period=10.0): self.period = float(period) self.report = get_reporter(report) self.start() # just in case the user forgets to call start() def start(self): self.start_time = time.time() self.next_report_time = self.start_time + self.period self.subtask_complete = 0.0 self.subtask_size = 1.0 def finish(self): self.update(1) def subtask(self, complete, tasksize): self.subtask_complete = complete self.subtask_size = tasksize def equal_subtask(self, tasknum, numtasks): self.subtask(float(tasknum) / float(numtasks), 1. 
/ numtasks) def update(self, complete): cur_time = time.time() totalcomplete = self.subtask_complete + complete * self.subtask_size if cur_time > self.next_report_time or totalcomplete == 1.0 or totalcomplete == 1: self.next_report_time = cur_time + self.period elapsed = time.time() - self.start_time self.report(elapsed, totalcomplete) def get_reporter(report): if report == 'print' or report == 'text' or report == 'stdout': report = build_text_reporter(sys.stdout) elif report == 'stderr': report = build_text_reporter(sys.stderr) elif hasattr(report, 'write') and hasattr(report, 'flush'): report = build_text_reporter(report) elif report == 'graphical' or report == 'tkinter': import Tkinter class ProgressBar(object): ''' Adapted from: http://code.activestate.com/recipes/492230/ ''' # Create Progress Bar def __init__(self, width, height): self.__root = Tkinter.Tk() self.__root.resizable(False, False) self.__root.title('Progress Bar') self.__canvas = Tkinter.Canvas(self.__root, width=width, height=height) self.__canvas.grid() self.__width = width self.__height = height # Open Progress Bar def open(self): self.__root.deiconify() # Close Progress Bar def close(self): self.__root.withdraw() # Update Progress Bar def update(self, ratio, newtitle=None): self.__canvas.delete(Tkinter.ALL) self.__canvas.create_rectangle(0, 0, self.__width * ratio, \ self.__height, fill='blue') if newtitle is not None: self.__root.title(newtitle) self.__root.update() pb = ProgressBar(500, 20) pb.closed = False def report(elapsed, complete): try: if complete == 1.0 and pb.closed == False: pb.close() pb.closed = True else: pb.update(complete, make_text_report(elapsed, complete)) except Tkinter.TclError: # exception handling in the case that the user shuts the window pass return report if __name__ == '__main__': import time report = ProgressReporter('graphical', 0.5) report.start() time.sleep(1) report.update(1. / 3) time.sleep(1) report.update(2. / 3) time.sleep(1) report.update(3. / 3) time.sleep(1) brian-1.3.1/brian/utils/separate_equations.py000066400000000000000000000117031167451777000213310ustar00rootroot00000000000000''' Code to separate Equations into independent subsets. This can be used in STDP to allow for nonlinear equations to be given. Also, a modified version of this code could probably be used to define independent equations for use in compound solvers. For example, you could split into separate independent sets and have one solver for each set. In some circumstances, this might be more efficient (e.g. a linear and nonlinear component). It may need to be modified so that the dependencies are directional, i.e. X depending on Y doesn't mean Y depends on X (which you want in the case of the STDP separation), that way you could have a linear solver for a subset and a nonlinear solver that depends on some of the linear variables. In this case you would probably look for minimal subgraphs which are allowed to have incoming edges but no outgoing edges, construct solvers for them, and then construct a compound solver for the rest. ''' if __name__ == '__main__': from brian.equations import Equations from brian.inspection import get_identifiers else: from ..equations import Equations from ..inspection import get_identifiers from collections import defaultdict from copy import copy __all__ = ['separate_equations'] def next_independent_subgraph(G): ''' Returns an independent subgraph of G and removes that subgraph from G. 
Here G is a graph represented as a dictionary, so that G[node] is a set of all the neighbours of node. The caller is responsible for constructing a well formed G which has all the appropriate nodes, and where edges in the graph are both ways so that b in G[a] <=> a in G[b]. ''' # We start from just one (randomly selected) node and iteratively add # elements to the subgraph from the set of neighbours of the subgraph # until there are no more neighbours to be added, and then we return. # This algorithm may not be efficient but we're likely only dealing with # fairly small graphs anyway. nodes = dict([G.popitem()]) found_new = True while found_new: found_new = False for node, targets in nodes.items(): for target in targets: if target in G: # if target not in G, we assume it has already been added nodes[target] = G.pop(target) found_new = True else: # this should be true if the caller has constructed G correctly assert target in nodes return nodes def separate_equations(eqs, additional_dependencies=[]): eqs.prepare() all_vars = eqs._string.keys() # Construct a list of dependency sets, where each variable in each # dependency set induces a dependency on each other variable in each # dependency set depsets = [] for var in all_vars: ids = set(get_identifiers(eqs._string[var])) ids = ids.intersection(all_vars) ids.add(var) depsets.append(ids) for expr in additional_dependencies: ids = set(get_identifiers(expr)) ids = ids.intersection(all_vars) depsets.append(ids) # Construct a graph deps which indicates what variable depends on which # other variables (or is depended on by other variables). deps = defaultdict(set) for ids in depsets: for id1 in ids: for id2 in ids: deps[id1].add(id2) # Extract all the independent subgraphs ind_graphs = [] while len(deps): ind_graphs.append(set(next_independent_subgraph(deps).keys())) if len(ind_graphs) == 1: return [eqs] # Finally, we construct an Equations object for each of the subgraphs ind_eqs = [] for G in ind_graphs: neweqs = Equations() for var in G: if var in eqs._eq_names: neweqs.add_eq(var, eqs._string[var], eqs._units[var], local_namespace=eqs._namespace[var]) elif var in eqs._diffeq_names: nonzero = var in eqs._diffeq_names_nonzero neweqs.add_diffeq(var, eqs._string[var], eqs._units[var], local_namespace=eqs._namespace[var], nonzero=nonzero) elif var in eqs._alias.keys(): neweqs.add_alias(var, eqs._string[var].strip()) else: assert False ind_eqs.append(neweqs) return ind_eqs if __name__ == '__main__': from brian import * # T = second # eqs = Equations(''' # da/dt = y/T : 1 # db/dt = x/T : 1 # c : 1 # x = c # y = x*x : 1 # dd/dt = e/T : 1 # de/dt = d/T : 1 # df/dt = e/T : 1 # ''') tau_pre = tau_post = 10 * ms eqs = Equations(""" dA_pre/dt = -A_pre/tau_pre : 1 dA_post/dt = -A_post/tau_post : 1 """) eqs_sep = separate_equations(eqs) for e in eqs_sep: print '******************' print e brian-1.3.1/brian/utils/sparse_patch/000077500000000000000000000000001167451777000175355ustar00rootroot00000000000000brian-1.3.1/brian/utils/sparse_patch/0_6_0.py000066400000000000000000000047671167451777000207300ustar00rootroot00000000000000from scipy import sparse import numpy from itertools import izip from operator import isSequenceType import bisect from numpy import ndarray import itertools class lil_matrix(sparse.lil_matrix): # Unfortunately we still need to implement this because although scipy 0.7.0 # now supports X[a:b,c:d] for sparse X it is unbelievably slow (shabby code # on their part). def __setitem__(self, index, W): """ Speed-up if x is a sparse matrix. 
TODO: checks (first remove the data). TODO: once we've got this working in all cases, should we submit to scipy? """ try: i, j = index except (ValueError, TypeError): raise IndexError, "invalid index" if isinstance(i, slice) and isinstance(j, slice) and\ (i.step is None) and (j.step is None) and\ (isinstance(W, sparse.lil_matrix) or isinstance(W, numpy.ndarray)): rows = self.rows[i] datas = self.data[i] j0 = j.start if isinstance(W, sparse.lil_matrix): for row, data, rowW, dataW in izip(rows, datas, W.rows, W.data): jj = bisect.bisect(row, j0) # Find the insertion point row[jj:jj] = [j0 + k for k in rowW] data[jj:jj] = dataW elif isinstance(W, ndarray): nq = W.shape[1] for row, data, rowW in izip(rows, datas, W): jj = bisect.bisect(row, j0) # Find the insertion point row[jj:jj] = range(j0, j0 + nq) data[jj:jj] = rowW elif isinstance(i, int) and isinstance(j, (list, tuple, numpy.ndarray)): row = dict(izip(self.rows[i], self.data[i])) try: row.update(dict(izip(j, W))) except TypeError: row.update(dict(izip(j, itertools.repeat(W)))) items = row.items() items.sort() row, data = izip(*items) self.rows[i] = list(row) self.data[i] = list(data) elif isinstance(i, slice) and isinstance(j, int) and isSequenceType(W): # This corrects a bug in scipy sparse matrix as of version 0.7.0, but # it is not efficient! for w, k in izip(W, xrange(*i.indices(self.shape[0]))): sparse.lil_matrix.__setitem__(self, (k, j), w) else: sparse.lil_matrix.__setitem__(self, index, W) brian-1.3.1/brian/utils/sparse_patch/0_7_0.py000066400000000000000000000041721167451777000207170ustar00rootroot00000000000000from scipy import sparse import numpy from itertools import izip from operator import isSequenceType import bisect from numpy import ndarray import itertools class lil_matrix(sparse.lil_matrix): # Unfortunately we still need to implement this because although scipy 0.7.0 # now supports X[a:b,c:d] for sparse X it is unbelievably slow (shabby code # on their part). def __setitem__(self, index, W): """ Speed-up if x is a sparse matrix. TODO: checks (first remove the data). TODO: once we've got this working in all cases, should we submit to scipy? """ try: i, j = index except (ValueError, TypeError): raise IndexError, "invalid index" if isinstance(i, slice) and isinstance(j, slice) and\ (i.step is None) and (j.step is None) and\ (isinstance(W, sparse.lil_matrix) or isinstance(W, numpy.ndarray)): rows = self.rows[i] datas = self.data[i] j0 = j.start if isinstance(W, sparse.lil_matrix): for row, data, rowW, dataW in izip(rows, datas, W.rows, W.data): jj = bisect.bisect(row, j0) # Find the insertion point row[jj:jj] = [j0 + k for k in rowW] data[jj:jj] = dataW elif isinstance(W, ndarray): nq = W.shape[1] for row, data, rowW in izip(rows, datas, W): jj = bisect.bisect(row, j0) # Find the insertion point row[jj:jj] = range(j0, j0 + nq) data[jj:jj] = rowW elif isinstance(i, slice) and isinstance(j, int) and isSequenceType(W): # This corrects a bug in scipy sparse matrix as of version 0.7.0, but # it is not efficient! 
for w, k in izip(W, xrange(*i.indices(self.shape[0]))): sparse.lil_matrix.__setitem__(self, (k, j), w) elif isinstance(i, int) and isSequenceType(j) and len(j) == 0: return else: sparse.lil_matrix.__setitem__(self, index, W) brian-1.3.1/brian/utils/sparse_patch/0_7_1.py000066400000000000000000000061231167451777000207160ustar00rootroot00000000000000from scipy import sparse import numpy import itertools from itertools import izip, repeat from operator import isSequenceType, isNumberType import bisect from numpy import ndarray class lil_matrix(sparse.lil_matrix): # Unfortunately we still need to implement this because although scipy 0.7.0 # now supports X[a:b,c:d] for sparse X it is unbelievably slow (shabby code # on their part). def __setitem__(self, index, W): """ Speed-up if x is a sparse matrix. TODO: checks (first remove the data). TODO: once we've got this working in all cases, should we submit to scipy? """ try: i, j = index except (ValueError, TypeError): raise IndexError, "invalid index" if isinstance(i, slice) and isinstance(j, slice) and\ (i.step is None) and (j.step is None) and\ (isinstance(W, sparse.lil_matrix) or isinstance(W, numpy.ndarray)): rows = self.rows[i] datas = self.data[i] j0 = j.start if isinstance(W, sparse.lil_matrix): for row, data, rowW, dataW in izip(rows, datas, W.rows, W.data): jj = bisect.bisect(row, j0) # Find the insertion point row[jj:jj] = [j0 + k for k in rowW] data[jj:jj] = dataW elif isinstance(W, ndarray): nq = W.shape[1] for row, data, rowW in izip(rows, datas, W): jj = bisect.bisect(row, j0) # Find the insertion point row[jj:jj] = range(j0, j0 + nq) data[jj:jj] = rowW elif isinstance(i, int) and isinstance(j, (list, tuple, numpy.ndarray)): if len(j) == 0: return row = dict(izip(self.rows[i], self.data[i])) try: row.update(dict(izip(j, W))) except TypeError: row.update(dict(izip(j, itertools.repeat(W)))) items = row.items() items.sort() row, data = izip(*items) self.rows[i] = list(row) self.data[i] = list(data) elif isinstance(i, slice) and isinstance(j, int) and isSequenceType(W): # This corrects a bug in scipy sparse matrix as of version 0.7.0, but # it is not efficient! for w, k in izip(W, xrange(*i.indices(self.shape[0]))): sparse.lil_matrix.__setitem__(self, (k, j), w) elif isinstance(i, int) and isinstance(j, slice) and (isNumberType(W) and not isSequenceType(W)): # this fixes a bug in scipy 0.7.1 sparse.lil_matrix.__setitem__(self, index, [W] * len(xrange(*j.indices(self.shape[1])))) elif isinstance(i, slice) and isinstance(j, slice) and isNumberType(W): n = len(xrange(*i.indices(self.shape[0]))) m = len(xrange(*j.indices(self.shape[1]))) sparse.lil_matrix.__setitem__(self, index, W * numpy.ones((n, m))) else: sparse.lil_matrix.__setitem__(self, index, W) brian-1.3.1/brian/utils/sparse_patch/__init__.py000066400000000000000000000024111167451777000216440ustar00rootroot00000000000000''' This package maintains multiple patches for scipy.sparse.lil_matrix for the various different versions of scipy. It chooses dynamically at run time which version to use. 
''' import scipy, warnings, os, imp import scipy.sparse __all__ = ['lil_matrix', 'spmatrix'] default = '0_7_1' table = { '0.9.0': '0_7_1', '0.8.0': '0_7_1', '0.7.2': '0_7_1', '0.7.1': '0_7_1', '0.7.1rc3': '0_7_1', '0.7.1rc2': '0_7_1', '0.7.1rc1': '0_7_1', '0.7.0': '0_7_0', '0.7.0rc2': '0_7_0', '0.7.0b1': '0_7_0', '0.6.0': '0_6_0', } if scipy.__version__ in table: modulename = table[scipy.__version__] else: for i in xrange(1, len(scipy.__version__)): v = scipy.__version__[:-i] if v in table: modulename = table[v] break else: modulename = default warnings.warn("Couldn't find matching sparse matrix patch for scipy version %s, but in most cases this shouldn't be a problem." % scipy.__version__) module = __import__(modulename, globals(), locals(), [], -1) lil_matrix = module.lil_matrix spmatrix = scipy.sparse.spmatrix if __name__ == '__main__': x = lil_matrix((5, 5)) x[2, 3] = 5 print x.todense() brian-1.3.1/brian_no_units.py000066400000000000000000000001641167451777000162120ustar00rootroot00000000000000from brian_unit_prefs import turn_off_units import warnings turn_off_units() warnings.warn("Turning off units") brian-1.3.1/brian_no_units_no_warnings.py000066400000000000000000000001111167451777000206060ustar00rootroot00000000000000from brian_unit_prefs import turn_off_units turn_off_units(warn=False) brian-1.3.1/brian_unit_prefs.py000066400000000000000000000012131167451777000165260ustar00rootroot00000000000000"""This module is used by the package Brian for turning units on and off. For the user, you can ignore this unless you want to turn units off, in which case you simply put at the top of your program (before importing anything from brian): from brian_unit_prefs import turn_off_units turn_off_units() Or to turn units off and not have a warning printed: from brian_unit_prefs import turn_off_units turn_off_units(warn=False) """ class _unit_prefs(): pass bup = _unit_prefs() bup.use_units = True bup.warn_about_units = True def turn_off_units(warn=True): bup.use_units = False bup.warn_about_units = warn brian-1.3.1/dev/000077500000000000000000000000001167451777000134045ustar00rootroot00000000000000brian-1.3.1/dev/BEPs/000077500000000000000000000000001167451777000141755ustar00rootroot00000000000000brian-1.3.1/dev/BEPs/BEP-0000066400000000000000000000005241167451777000146640ustar00rootroot00000000000000BEP-0 Brian Enhanced Proposals (BEPs) propose new features with implementation specifications. See http://www.python.org/dev/peps/ and particularly http://www.python.org/dev/peps/pep-0001/ for explanations and format guidelines for enhancement proposals. Closed BEPs go to the closed folder and can serve for the documentation. brian-1.3.1/dev/BEPs/BEP-10-Error handling000066400000000000000000000053541167451777000176070ustar00rootroot00000000000000BEP-10: Error handling Abstract: Errors raised by Brian scripts are often hard to understand because the error is typically raised in a subroutine of a Brian module that the user doesn't know about. The idea would be to have clearer error messages and error catching mechanisms that would relate the error and its context to the Brian script. I am just listing a few ideas here. Defining a new exception BrianError and subclasses ================================================== For example, we could imagine that each major class would have its exception type, and the exception would be called with the instance of the class (self) and other runtime information. 
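As a rough, purely illustrative sketch (BrianError and EquationsError as written here are hypothetical names, not classes that currently exist in Brian), the base exception could store the offending object together with any extra runtime information and fold them into the printed message:

    # Sketch only -- illustrative names, not part of the current Brian codebase
    class BrianError(Exception):
        """Base class for Brian errors, carrying the object and the context
        in which the error occurred."""
        def __init__(self, message, obj=None, **context):
            Exception.__init__(self, message)
            self.obj = obj          # the Brian object (e.g. a NeuronGroup) that raised the error
            self.context = context  # any other runtime information
        def __str__(self):
            s = Exception.__str__(self)
            if self.obj is not None:
                s += '\n  raised by: %r' % (self.obj,)
            for name, value in self.context.items():
                s += '\n  %s = %r' % (name, value)
            return s

    class EquationsError(BrianError):
        """One possible subclass for a major class (here, Equations)."""
        pass

    # How it might look at the point of use:
    try:
        raise EquationsError('inconsistent units', equation='dv/dt = -v/tau : volt')
    except BrianError as e:
        print e

The subclasses would mainly exist so that user scripts can catch errors from one part of Brian selectively; the message formatting would be shared through the base class.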
A separate module could gather a number of functions to make the message nicer and to do some introspection into the traceback, the frames and the user script (using the get_source function from the inspect module). These could be methods of a base Exception class BrianError. [Dan adds: agreed, seems like a good idea.] Nested exceptions ================= In many cases, the function that raises the exception does not know the context, such as the user-defined differential equations, etc. One could imagine to use a specific exception to pass information to the calling function, e.g.: def a_specific_method(self): ... raise SpecificException(self,other_information) def global_function(): try: eqs.a_specific_method() except SpecificException as e: useful_info=e.other_information raise WhatEverError("I got one!",useful_info) # or WhatEverError(e,...) [Dan adds: we had this in the past, or something very similar, and it didn't work very well, it basically made it more difficult to understand what was going on. We should have something like this, but only in those cases where an explicit error catching mechanism fails I think. Rather than improve the error reporting (which is what this tries to do), it would be much better to improve Brian's code so that it understands errors and deals with them more appropriately. At the moment we basically just hope there won't be any... Romain: yes I didn't mean something like the bug handler, "global-function" here does not embed the simulation, it could just be any method. There is at least one example currently where the units mismatch error is caught by the Equations object which changes it into a message about the differential equations being non-homogeneous] Real cases ========== We should list here a number of real cases that we would like to handle. 1. times = property(fget=lambda self:QuantityArray(self._times)) AttributeError: 'StateMonitor' object has no attribute '_times' Most likely: the statemonitor was initialised without setting the record keyword. brian-1.3.1/dev/BEPs/BEP-12-Scheduled events.txt000066400000000000000000000130461167451777000207530ustar00rootroot00000000000000BEP-12: Scheduled events Abstract: Objects should be able to schedule simple future events in large numbers. For example, heterogeneous axonal delays could be implemented in this way by allowing a connection to schedule 'add' events in the future. It could also be used for combining STDP with heterogeneous delays, and several other things. It can be done in particular cases with pure Python, but there are probably significant speed advantages to using C code here. Scheduled events ================ Consider the following scheme for a heterogeneous axonal delay connection. Each time a spike arrives at neuron i, the set of target neurons j with corresponding delays d_ij are found. For each target neuron the event V_j+=W_ij is scheduled at time t+d_ij. This mechanism can be made generally useful for other things by allowing more complicated events than just +=. The problem is speed and vectorisation. Computations that involve O(number of synaptic events) Python overheads are too slow. However, at least in specific cases it is possible to reduce the Python overheads to O(number of spikes) as in the case of connections at the moment. The trick is to use a dynamic cylindrical array implementing an event queue. Consider the event x+=y for some x, y. 
At each time t, there should be a linear array I of indices and then we simply evaluate the code X[I]+=Y[I] for X, Y the arrays where the x, y are found. In fact, this doesn't work exactly as stated if I has repeats (see the section below). The cylindrical array is a 2D array with two indices, a time index and an event number index. The time index is circular. Each time the event queue at a particular time is evaluated, the event queue at that time is emptied and can be reused later on (this is done by simply setting the number of events in that queue to zero, where the number of events is a circular array). Pushing events onto the queue also needs to be vectorised. This problem can also be solved using pure Python in a generic way, although a C version is likely much more efficient (see below). Issues ------ *How fast could the pure Python version be?* If the pure Python code were significantly slower and Brian came to rely on this scheduled event stuff it would necessitate a move to built distributions or requiring users to have a compiler. This may not be as bad as it sounds because a built distribution could be produced for Windows and maybe even Mac users, and for Linux most users will have a compiler installed on their system already. *Can it be made generic?* A difficulty is that it seems hard to make this scheme generic so that users can write their own events, which would be necessary for STDP, for example. The example of implementing x+=y below illustrates the difficulties. Generation of C code gives one approach, but ideally we don't want to rely on users having to have a C compiler on their system. Technical details and sample implementation ------------------------------------------- To evaluate x+=y in a vectorised way, we would like to write X[I]+=Y[I], but unfortunately this doesn't work with numpy if there are repeats in I. There is a way around this:: J = unique(I) K = digitize(I, J)-1 b = bincount(K) X[J] += b*Y[J] Unfortunately, this is slower than it needs to be because unique(I) involves a sort operation and is therefore O(n log(n)) rather than just O(n), for n=len(I). It is also not generic because it relies on the properties of addition (repeated addition -> multiplication). It could, however, be made generic for any numpy ufunc by using the reduceat method of ufuncs. The equivalent C code for the above is just:: for(int i=0; i