============
Installation
============

Use of PyNN_ requires that you have Python (version 2.5, 2.6 or 2.7) and a recent version of the numpy_ package installed, plus at least one of the supported simulators: NEURON, NEST, PCSIM or Brian.

Installing PyNN
===============

The easiest way to get PyNN is to download the latest source distribution from the `PyNN download page`_, then run the setup script, e.g.::

    $ tar xzf PyNN-0.7.4.tar.gz
    $ cd PyNN-0.7.4
    $ python setup.py install

This will install it to your Python ``site-packages`` directory, and may require root privileges. If you wish to install it elsewhere, use the ``--prefix`` or ``--home`` option, and set the ``PYTHONPATH`` environment variable accordingly (see the NEURON installation section below for an example). We assume you have already installed the simulator(s) you wish to use it with. If this is not the case, see below for installation instructions.

Test it using something like the following::

    >>> from pyNN.nest import *
    >>> setup()
    >>> end()

(This assumes you have NEST installed.)

With NEURON as the simulator, make sure you install NEURON *before* you install PyNN. The PyNN installation will then compile PyNN-specific membrane mechanisms, which are loaded when importing the ``neuron`` module::

    >>> from pyNN.neuron import *
    NEURON -- Release 7.1 (359:7f113b76a94b) 2009-10-26
    Duke, Yale, and the BlueBrain Project -- Copyright 1984-2008
    See http://www.neuron.yale.edu/credits.html
    loading membrane mechanisms from /home/andrew/dev/pyNN/neuron/nmodl/i686/.libs/libnrnmech.so
    Additional mechanisms from files
     adexp.mod alphaisyn.mod alphasyn.mod expisyn.mod gsfa_grr.mod hh_traub.mod
     netstim2.mod refrac.mod reset.mod stdwa_guetig.mod stdwa_softlimits.mod
     stdwa_songabbott.mod stdwa_symm.mod tmgsyn.mod tmisyn.mod vecstim.mod

If your model relies on other NMODL mechanisms, call the ``load_mechanisms()`` function with the path to the directory containing the ``.mod`` files.
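For example, if your compiled mechanisms live in a directory named ``./my_mechanisms`` (a hypothetical path, used here only for illustration; compile the ``.mod`` files there first with ``nrnivmodl``), you would load them with::

    >>> from pyNN.neuron import load_mechanisms
    >>> load_mechanisms("./my_mechanisms")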
Installing NEURON
=================

Download the sources for the latest release of NEURON, in ``.tar.gz`` format, from the NEURON web site. Also download Interviews from the same location. Compile Interviews and NEURON according to the instructions given there, except that when you run ``configure``, add the options ``--with-nrnpython`` and, optionally, ``--with-mpi``, i.e.::

    $ ./configure --prefix=`pwd` --with-nrnpython --with-mpi
    $ make
    $ make install

Make sure that you add the Interviews and NEURON ``bin`` directories to your path. Test that the Python support has been enabled by running::

    $ nrniv -python
    NEURON -- Release 7.1 (359:7f113b76a94b) 2009-10-26
    Duke, Yale, and the BlueBrain Project -- Copyright 1984-2008
    See http://www.neuron.yale.edu/credits.html
    >>> import hoc
    >>> import nrn

Now you can compile and install NEURON as a Python package::

    $ cd src/nrnpython
    # python setup.py install

Note that the latter step requires running as root and will install to your global Python ``site-packages`` directory. If you would prefer to install it locally, replace the last step with::

    $ python setup.py install --prefix=~

which will install into ``~/lib/python2.5/site-packages/`` (or the equivalent for other Python versions; for more fine-grained control over where the package is installed, see the distutils_ documentation), then add this directory to your ``PYTHONPATH``::

    $ export PYTHONPATH=$PYTHONPATH:~/lib/python2.5/site-packages

Now test everything worked::

    $ python
    >>> import neuron
    NEURON -- Release 7.1 (359:7f113b76a94b) 2009-10-26
    Duke, Yale, and the BlueBrain Project -- Copyright 1984-2008
    See http://www.neuron.yale.edu/credits.html
    >>>

If you run into problems, check out the `NEURON Forum`_.

Installing NEST and PyNEST
==========================

NEST can be downloaded from the NEST web site. Installation instructions are available in the file INSTALL, which you can find in the source package, or on the NEST web site. On Linux, most Unix variants and Mac OS X, installation is usually as simple as::

    $ ./configure --with-mpi
    $ make
    # make install

This will install PyNEST to your Python ``site-packages`` directory, and may require root privileges. If you wish to install it elsewhere, see the full installation instructions. Make sure you have the GNU Scientific Library (GSL) installed, otherwise some PyNN standard models (e.g. ``IF_cond_exp``) will not be available.

Now try it out::

    $ cd ~
    $ python
    >>> import nest
    -- N E S T 2 beta --
    Neural Simulation Tool
    Copyright 1995-2009 The NEST Initiative
    Version 2.0-0-rc2 Jan 27 2011 10:59:20
    ...
    >>> nest.Models()
    ['ac_generator', 'aeif_cond_alpha', 'aeif_cond_exp', 'cont_delay_synapse',
     'correlation_detector', 'dc_generator', 'hh_cond_exp_traub', 'hh_psc_alpha',
     'ht_neuron', 'ht_synapse', 'iaf_cond_alpha', 'iaf_cond_alpha_mc',
     'iaf_cond_exp', 'iaf_cond_exp_sfa_rr', 'iaf_neuron', 'iaf_psc_alpha',
     'iaf_psc_alpha_canon', 'iaf_psc_alpha_presc', 'iaf_psc_delta',
     'iaf_psc_delta_canon', 'iaf_psc_exp', 'iaf_psc_exp_ps', 'iaf_tum_2000',
     'mat2_psc_exp', 'mip_generator', 'multimeter', 'noise_generator',
     'parrot_neuron', 'parrot_neuron_ps', 'poisson_generator',
     'poisson_generator_ps', 'pulsepacket_generator', 'sli_neuron',
     'smp_generator', 'spike_detector', 'spike_generator', 'static_synapse',
     'static_synapse_hom_wd', 'stdp_dopamine_synapse', 'stdp_pl_synapse_hom',
     'stdp_synapse', 'stdp_synapse_hom', 'step_current_generator', 'subnet',
     'topology_layer_3d', 'topology_layer_free', 'topology_layer_grid',
     'tsodyks_synapse', 'voltmeter', 'volume_transmitter']

Check that ``'aeif_cond_alpha'`` is in the list of models. If it is not, you may need to install a newer version of the `GNU Scientific Library`_ and then recompile NEST.
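You can also test for the model programmatically; this one-liner (a sketch, reusing the ``nest`` module imported above) should print ``True`` if NEST was built with GSL support::

    >>> 'aeif_cond_alpha' in nest.Models()
    True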
Installing PCSIM
================

PCSIM_ is available to download from the PCSIM web site. For compilation instructions, see the files in the ``HowTos`` directory.

Installing Brian
================

Instructions for downloading and installing Brian_ are available from the Brian web site.

.. _PyNN: http://neuralensemble.org/PyNN
.. _numpy: http://numpy.scipy.org/
.. _PCSIM: http://www.lsm.tugraz.at/pcsim/
.. _Brian: http://www.briansimulator.org/
.. _`PyNN download page`: https://neuralensemble.org/trac/PyNN/wiki/Download
.. _`distutils`: http://docs.python.org/inst/inst.html
.. _`GNU Scientific Library`: http://www.gnu.org/software/gsl/
.. _`NEURON Forum`: http://www.neuron.yale.edu/phpBB/index.php

====================
Network descriptions
====================

The ``describe()`` method returns a human-readable description of a network component. Such descriptions are useful for checking that the network you have constructed is really the network you intended to construct (i.e. for finding bugs), and as a form of literate programming. For example, given a population (``sim`` is the chosen simulator module and ``cell_parameters`` a previously-defined parameter dictionary)::

    >>> p1 = sim.Population(100, sim.IF_cond_exp, cell_parameters,
    ...                     structure=space.Grid2D(dx=50.0, dy=50.0),
    ...                     label="excitatory neurons")

passing ``template=None`` returns the raw data used to build the description, as a dictionary::

    >>> p1.describe(template=None)
    {'cell_parameters': {'cm': 1.0,
                         'e_rev_E': 0.0,
                         'e_rev_I': -70.0,
                         'i_offset': 0.0,
                         'tau_m': 20.0,
                         'tau_refrac': 0.10000000000000001,
                         'tau_syn_E': 5.0,
                         'tau_syn_I': 5.0,
                         'v_reset': -65.0,
                         'v_rest': -65.0,
                         'v_thresh': -50.0},
     'celltype': {'name': 'IF_cond_exp',
                  'parameters': {'C_m': 1000.0,
                                 'E_L': -65.0,
                                 'E_ex': 0.0,
                                 'E_in': -70.0,
                                 'I_e': 0.0,
                                 'V_reset': -65.0,
                                 'V_th': -50.0,
                                 'g_L': 50.0,
                                 't_ref': 0.10000000000000001,
                                 'tau_syn_ex': 5.0,
                                 'tau_syn_in': 5.0}},
     'first_id': 1,
     'label': 'excitatory neurons',
     'last_id': 100,
     'local_first_id': 1,
     'size': 100,
     'size_local': 100,
     'structure': {'name': 'Grid2D',
                   'parameters': {'aspect_ratio': 1.0,
                                  'dx': 50.0,
                                  'dy': 50.0,
                                  'fill_order': 'sequential',
                                  'x0': 0.0,
                                  'y0': 0.0}}}

The default template produces a formatted description::

    >>> print p1.describe()
    Population "excitatory neurons"
        Structure   : Grid2D
            fill_order: sequential
            dx: 50.0
            dy: 50.0
            aspect_ratio: 1.0
            y0: 0.0
            x0: 0.0
        Local cells : 100
        Cell type   : IF_cond_exp
        ID range    : 1-100
        First cell on this node:
            ID: 1
            tau_refrac: 0.1
            tau_m: 20.0
            e_rev_E: 0.0
            i_offset: 0.0
            cm: 1.0
            e_rev_I: -70.0
            v_thresh: -50.0
            tau_syn_E: 5.0
            v_rest: -65.0
            tau_syn_I: 5.0
            v_reset: -65.0

Using ``engine='string'`` gives simpler, less processed output, based on Python's built-in string templating::

    >>> print(p1.describe(engine='string'))
    Population "excitatory neurons"
        Structure   : {'name': 'Grid2D', 'parameters': {'fill_order': 'sequential', 'dx': 50.0, 'dy': 50.0, 'aspect_ratio': 1.0, 'y0': 0.0, 'x0': 0.0}}
        Local cells : 100
        Cell type   : {'name': 'IF_cond_exp', 'parameters': {'E_in': -70.0, 'E_ex': 0.0, 'V_th': -50.0, 'I_e': 0.0, 'C_m': 1000.0, 'tau_syn_ex': 5.0, 'g_L': 50.0, 'V_reset': -65.0, 'tau_syn_in': 5.0, 'E_L': -65.0, 't_ref': 0.10000000000000001}}.name
        ID range    : 1-100
        First cell on this node:
            ID: 1
            {'tau_refrac': 0.10000000000000001, 'tau_m': 20.0, 'e_rev_E': 0.0, 'i_offset': 0.0, 'cm': 1.0, 'e_rev_I': -70.0, 'v_thresh': -50.0, 'tau_syn_E': 5.0, 'v_rest': -65.0, 'tau_syn_I': 5.0, 'v_reset': -65.0}

Custom templates may be supplied, using either the Jinja2 or the Cheetah templating engine::

    >>> print p1.describe('Population "{{label}}" consists of {{size}} {{celltype.name}} neurons, arranged in a {{structure.name}} structure with grid spacing ({{structure.parameters.dx}},{{structure.parameters.dy}})', engine='jinja2')
    Population "excitatory neurons" consists of 100 IF_cond_exp neurons, arranged in a Grid2D structure with grid spacing (50.0,50.0)

    >>> print p1.describe('Population "$label" consists of $size $celltype.name neurons, arranged in a $structure.name structure with grid spacing ($structure.parameters.dx,$structure.parameters.dy)', engine='cheetah')
===========================
Populations and Projections
===========================

While it is entirely possible to create very large networks using only the ``create()``, ``connect()``, ``set()`` and ``record()`` functions, this involves writing a lot of repetitive code, which is the same or similar for every model: iterating over lists of cells and connections, creating common projection patterns, recording from all or a subset of neurons... This sort of 'book-keeping' code takes time to write and obscures the essential principles of the simulation script with details. Of course, the more code that has to be written, the more bugs and errors will be introduced, and, if the code is only used for a single project, not all the bugs may be found.

For these reasons, PyNN provides the ``Population`` object, representing a group of neurons all of the same type (although possibly with cell-to-cell variations in the values of parameters), and the ``Projection`` object, representing the set of connections between two ``Population``\s. All the book-keeping code is contained within the object classes, which also provide functions ('methods') to perform commonly-used tasks, such as recording from a fixed number of cells within the population, chosen at random.

By using the ``Population`` and ``Projection`` classes, less code needs to be written to create a given simulation, which means fewer bugs and easier-to-understand scripts. Furthermore, because the code for the classes is used in many different projects, bugs are found more reliably, and the internal implementation of the classes can be optimized for performance. Of particular importance is that iterations over large numbers of cells or connections can be done in fast compiled code (within the simulator engines) rather than in comparatively slow Python code. The sketch below contrasts the two styles.
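A rough sketch of the contrast (the weight value is arbitrary, and ``setup()`` is assumed to have been called already)::

    # low-level API: explicit lists of cells, wired up by hand
    inputs = create(SpikeSourceArray, n=100)
    cells = create(IF_curr_exp, n=100)
    connect(inputs, cells, weight=0.5)  # all pairs, since p=1 by default

    # high-level API: the same network expressed as two objects
    input_population = Population(100, SpikeSourceArray)
    output_population = Population(100, IF_curr_exp)
    prj = Projection(input_population, output_population,
                     AllToAllConnector(weights=0.5))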
File "/usr/lib/python/site-packages/pyNN/nest1.py", line 457, in __getitem__ id = self.cell[addr] IndexError: index (999) out of range (0<=index<=10) in dimension 0 as will giving the wrong number of dimensions in the address. It is equally possible to define the address as a tuple, and then pass the tuple within the square brackets, e.g.:: >>> p1[5,5] 5348024557502519L >>> address = (5,5) >>> p1[address] 5348024557502519L Neuron addresses are used in setting parameter values, and in specifying which neurons to record from. They may also be used together with the low-level ``connect()``, ``set()``, and ``record()`` functions. To obtain an address given the id, use ``locate()``, e.g.:: >>> p3[2,2,0] 50L >>> p3.locate(50L) (2, 2, 0) To access the 'i'th neuron in a Population, use the ``index()`` method, e.g.,:: >>> p3.index(0) 0L >>> p3.index(59) 59L >>> p3.index(60) Traceback (most recent call last): File "", line 1, in File "/home/andrew/dev/pyNN/neuron/__init__.py", line 759, in index return self.fullgidlist[n] IndexError: index out of bounds Conversely, if you have an ID, and you wish to obtain its index, use the ``id_to_index()`` method:: >>> p3.id_to_index(0L) 0 >>> p3.id_to_index(59L) 59 Accessing groups of neurons =========================== To access several neurons at once, use slice notation, e.g., to access all the neurons in the first row of a 2D Population, use:: >>> p1[0,:] array([5348024557502464, 5348024557502465, 5348024557502466, 5348024557502467, 5348024557502468, 5348024557502469, 5348024557502470, 5348024557502471, 5348024557502472, 5348024557502473], dtype=object) Setting parameter values ======================== Setting the same value for the entire population ------------------------------------------------ To set a parameter for all neurons in the population to the same value, use the ``set()`` method, e.g.:: >>> p1.set("tau_m", 20.0) >>> p1.set({'tau_m':20, 'v_rest':-65}) The first form can be used for setting a single parameter, the second form for setting multiple parameters at once. Setting random values --------------------- To set a parameter to values drawn from a random distribution, use the ``rset()`` method with a ``RandomDistribution`` object from the ``pyNN.random`` module (see the chapter on random numbers for more details). The following example sets the threshold potential of each neuron to a value drawn from a uniform distribution between -55 mV and -50 mV:: >>> from pyNN.random import RandomDistribution >>> vthresh_distr = RandomDistribution(distribution='uniform', parameters=[-55,-50]) >>> p1.rset('v_thresh', vthresh_distr) Note that positional arguments can also be used. The following produces the same result as the above:: >>> vthresh_distr = RandomDistribution('uniform', [-55,-50]) Setting values according to an array ------------------------------------ The most efficient way to set different (but non-random) values for different neurons is to use the ``tset()`` (for *topographic* set) method. 
The following example injects a current of 0.1 nA into the first column of neurons in the population::

    >>> import numpy
    >>> current_input = numpy.zeros(p1.dim)
    >>> current_input[:,0] = 0.1
    >>> p1.tset('i_offset', current_input)

Setting parameter values for individual neurons
-----------------------------------------------

To set the parameters of an individual neuron, you can use the low-level ``set()`` function::

    >>> set(p1[0,3], 'tau_m', 12.0)

or you can just set the relevant attribute of the ``ID`` object::

    >>> p1[0,4].tau_m = 12.0

Setting initial values
======================

To set the initial values of state variables such as the membrane potential, use the ``initialize()`` method::

    >>> p1.initialize('v', -65.0)

To initialize different neurons to different random values, pass a ``RandomDistribution`` object instead of a float::

    >>> vinit_distr = RandomDistribution(distribution='uniform', parameters=[-70,-60])
    >>> p1.initialize('v', vinit_distr)

Iterating over all the neurons in a population
==============================================

To iterate over all the cells in a population, returning the neuron ids, use::

    >>> for id in p1: #doctest: +ELLIPSIS
    ...     print id, id.tau_m
    ...
    5348024557502464 19.999999553
    5348024557502465 19.999999553
    5348024557502466 19.999999553
    5348024557502467 12.0000001043
    5348024557502468 12.0000001043
    5348024557502469 19.999999553
    ...

The ``Population.ids()`` method produces the same result. To iterate over cells but return neuron addresses, use the ``addresses()`` method::

    >>> for addr in p1.addresses(): #doctest: +ELLIPSIS
    ...     print addr
    ...
    (0, 0)
    (0, 1)
    (0, 2)
    ...
    (0, 9)
    (1, 0)
    (1, 1)
    (1, 2)
    ...

Injecting current
=================

As for individual cells, time-varying currents may be injected into all the cells of a Population using either the ``inject_into()`` method of the ``CurrentSource`` or the ``inject()`` method of the ``Population``::

    >>> times = numpy.arange(0.0, 100.0, 1.0)
    >>> amplitudes = 0.1*numpy.sin(times*numpy.pi/100.0)
    >>> sine_wave = StepCurrentSource(times, amplitudes)
    >>> p1.inject(sine_wave)
    >>> sine_wave.inject_into(p3)
    >>> sine_wave.inject_into(p2)
    Traceback (most recent call last):
      File "", line 1, in
        sine_wave.inject_into(p2)
      File "/home/andrew/dev/pyNN/neuron/electrodes.py", line 67, in inject_into
        raise TypeError("Can't inject current into a spike source.")
    TypeError: Can't inject current into a spike source.

Recording
=========

Recording spike times is done with the method ``record()``. Recording membrane potential is done with the method ``record_v()``. Recording synaptic conductances is done with the method ``record_gsyn()``. All three methods have identical argument lists. Some examples::

    >>> p1.record()                              # record from all neurons in the population
    >>> p1.record(10)                            # record from 10 neurons chosen at random
    >>> p1.record([p1[0,0], p1[0,1], p1[0,2]])   # record from specific neurons

Writing the recorded values to file is done with a second triple of methods, ``printSpikes()``, ``print_v()`` and ``print_gsyn()``, e.g.::

    >>> run(1.0) # if we don't run the simulation, there will be no data to write
    1.0
    >>> p1.printSpikes("spikefile.dat")

By default, the output files are post-processed to reformat them from the native simulator format to a common format that is the same for all simulator engines. This facilitates comparisons across simulators, but of course carries some performance penalty. To get output in the native format of the simulator, add ``compatible_output=False`` to the argument list.
When running a distributed simulation, each node records only those neurons that it simulates. By default, at the end of the simulation all nodes send their recorded data to the master node so that all values are written to a single output file. Again, there is a performance penalty for this, so if you wish each node to write its own file, add ``gather=False`` to the argument list.

If you wish to obtain the recorded data within the simulation script, for plotting or further analysis, there is a further triple of methods, ``getSpikes()``, ``get_v()`` and ``get_gsyn()``. Again, there is a ``gather`` option for distributed simulations.

Position in space
=================

The positions of individual neurons in a population can be accessed using their ``position`` attribute, e.g.::

    >>> p1[1,0].position = (0.0, 0.1, 0.2)
    >>> p1[1,0].position
    array([ 0. ,  0.1,  0.2])

To obtain the positions of all neurons at once (as a numpy array), use the ``positions`` attribute of the ``Population`` object, e.g.::

    >>> p1.positions #doctest: +ELLIPSIS,+NORMALIZE_WHITESPACE
    array([[...]])

To find the neuron that is closest to a particular point in space, use the ``nearest()`` method::

    >>> p1.nearest((4.5, 7.8, 3.3))
    5348024557502512L
    >>> p1[p1.locate(5348024557502512L)].position
    array([ 4.,  8.,  0.])

Statistics
==========

Often, the exact spike times and exact membrane potential traces are not required, only statistical measures. PyNN currently only provides one such measure, the mean number of spikes per neuron, e.g.::

    >>> p1.meanSpikeCount()
    0.0

More such statistical measures are planned for future releases.

Getting information about a ``Population``
==========================================

A summary of the state of a ``Population`` may be obtained with the ``describe()`` method::

    >>> print p3.describe() #doctest: +NORMALIZE_WHITESPACE
    ------- Population description -------
    Population called Column 1 is made of 60 cells [60 being local]
    -> Cells are arranged on a 3D grid of size (3, 4, 5)
    -> Celltype is IF_cond_alpha
    -> ID range is 0-59
    -> Cell Parameters used for first cell on this node are:
        | tau_refrac : 1.50000001304
        | tau_m      : 9.99999977648
        | e_rev_E    : 0.0
        | i_offset   : 0.0
        | cm         : 0.999999971718
        | e_rev_I    : -70.000000298
        | v_init     : -64.9999976158
        | v_thresh   : -54.999999702
        | tau_syn_E  : 0.300000014249
        | v_rest     : -64.9999976158
        | tau_syn_I  : 0.500000023749
        | v_reset    : -64.9999976158
    --- End of Population description ----

Connecting two ``Population``\s with a ``Projection``
=====================================================

A ``Projection`` object is a container for all the synaptic connections between neurons in two ``Population``\s, together with methods for setting synaptic weights and delays. A ``Projection`` is created by specifying a pre-synaptic ``Population``, a post-synaptic ``Population`` and a ``Connector`` object, which determines the algorithm used to wire up the neurons, e.g.::

    >>> prj2_1 = Projection(p2, p1, method=AllToAllConnector())

This connects ``p2`` (pre-synaptic) to ``p1`` (post-synaptic), using an ``AllToAllConnector`` object, which connects every neuron in the pre-synaptic population to every neuron in the post-synaptic population.
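The number of connections actually created can be checked with ``len()``; here 100 × 100 = 10000, since the connector is all-to-all (``len()`` on a ``Projection`` is used again below when setting weights)::

    >>> len(prj2_1)
    10000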
The currently available ``Connector`` classes are explained below. It is fairly straightforward for a user to write a new ``Connector`` class if they wish to use a connection algorithm not already available in PyNN.

All-to-all connections
----------------------

The ``AllToAllConnector`` constructor has one optional argument, ``allow_self_connections``, for use when connecting a ``Population`` to itself. By default it is ``True``, but if a neuron should not connect to itself, set it to ``False``, e.g.::

    >>> prj1_1 = Projection(p1, p1, AllToAllConnector(allow_self_connections=False))

One-to-one connections
----------------------

Use of the ``OneToOneConnector`` requires that the pre- and post-synaptic populations have the same dimensions, e.g.::

    >>> prj1_1a = Projection(p1, p1, OneToOneConnector())

Trying to connect two ``Population``\s with different dimensions will raise an Exception, e.g.::

    >>> invalid_prj = Projection(p2, p3, OneToOneConnector()) #doctest: +IGNORE_EXCEPTION_DETAIL
    Traceback (most recent call last):
      File "", line 1, in
        invalid_prj = Projection(p2, p3, OneToOneConnector())
      File "/home/andrew/dev/pyNN/neuron/__init__.py", line 220, in __init__
        method.connect(self)
      File "/home/andrew/dev/pyNN/connectors.py", line 281, in connect
        raise common.InvalidDimensionsError("OneToOneConnector does not support presynaptic and postsynaptic Populations of different sizes.")
    InvalidDimensionsError: OneToOneConnector does not support presynaptic and postsynaptic Populations of different sizes.

Connecting neurons with a fixed probability
-------------------------------------------

With the ``FixedProbabilityConnector`` method, each possible connection between all pre-synaptic neurons and all post-synaptic neurons is created with probability ``p_connect``, e.g.::

    >>> prj2_3 = Projection(p2, p3, FixedProbabilityConnector(p_connect=0.2))

The constructor also accepts an ``allow_self_connections`` parameter, as above.

Connecting neurons with a distance-dependent probability
--------------------------------------------------------

For each pair of pre-post cells, the connection probability depends on distance. If positions in space have been specified using the ``positions()`` method of the ``Population`` class or the ``position`` attributes of individual neurons, these positions are used to calculate distances. If not, the neuron addresses, i.e. the array coordinates, are used.

The constructor requires a string ``d_expression``, which should be the right-hand side of a valid Python expression for probability (i.e. returning a value between 0 and 1), involving ``d``, e.g.::

    >>> prj1_1b = Projection(p1, p1, DistanceDependentProbabilityConnector("exp(-abs(d))"))
    >>> prj3_3 = Projection(p3, p3, DistanceDependentProbabilityConnector("d<3"))

The first example connects neurons with an exponentially-decaying probability. The second example connects each neuron to all its neighbours within a range of 3 units (distance is in µm if positions have been specified, in array coordinate distance otherwise). Note that boolean values ``True`` and ``False`` are automatically converted to numerical values ``1.0`` and ``0.0``.

The calculation of distance may be controlled by a number of further arguments. By default, the 3D distance between cell positions is used, but the ``axes`` argument may be used to change this, e.g.::

    >>> connector = DistanceDependentProbabilityConnector("exp(-abs(d))", axes='xy')

will ignore the z-coordinate when calculating distance. Similarly, the origins of the coordinate systems of the two Populations and the relative scale of the two coordinate systems may be controlled using the ``offset`` and ``scale_factor`` arguments.
This is useful when connecting brain regions that have very different sizes but that have a topographic mapping between them, e.g. retina to LGN to V1.

In more abstract models, it is often useful to be able to avoid edge effects by specifying periodic boundary conditions, e.g.::

    >>> connector = DistanceDependentProbabilityConnector("exp(-abs(d))", periodic_boundaries=(500, 500, 0))

calculates distance on the surface of a torus of circumference 500 µm (wrap-around in the x- and y-dimensions but not z).

Divergent/fan-out connections
-----------------------------

The ``FixedNumberPostConnector`` connects each pre-synaptic neuron to exactly ``n`` post-synaptic neurons chosen at random::

    >>> prj2_1a = Projection(p2, p1, FixedNumberPostConnector(n=30))

As a refinement to this, the number of post-synaptic neurons can be chosen at random from a ``RandomDistribution`` object, e.g.::

    >>> distr_npost = RandomDistribution(distribution='binomial', parameters=[100,0.3])
    >>> prj2_1b = Projection(p2, p1, FixedNumberPostConnector(n=distr_npost))

Convergent/fan-in connections
-----------------------------

The ``FixedNumberPreConnector`` has the same arguments as ``FixedNumberPostConnector``, but of course it connects each *post*-synaptic neuron to ``n`` *pre*-synaptic neurons, e.g.::

    >>> prj2_1c = Projection(p2, p1, FixedNumberPreConnector(5))
    >>> distr_npre = RandomDistribution(distribution='poisson', parameters=[5])
    >>> prj2_1d = Projection(p2, p1, FixedNumberPreConnector(distr_npre))

Writing and reading connection patterns to/from a file
------------------------------------------------------

Connection patterns can be written to a file using ``saveConnections()``, e.g.::

    >>> prj1_1a.saveConnections("prj1_1a.conn")

These files can then be read back in to create a new ``Projection`` object using a ``FromFileConnector`` object, e.g.::

    >>> prj1_1c = Projection(p1, p1, FromFileConnector("prj1_1a.conn"))

Specifying a list of connections
--------------------------------

Specific connection patterns not covered by the methods above can be obtained by specifying an explicit list of pre-synaptic and post-synaptic neuron addresses, with weights and delays. (Note that the weights and delays should be optional, but currently are not.) Example::

    >>> conn_list = [
    ...     ((0,0), (0,0,0), 0.0, 0.1),
    ...     ((0,0), (0,0,1), 0.0, 0.1),
    ...     ((0,0), (0,0,2), 0.0, 0.1),
    ...     ((0,1), (1,3,0), 0.0, 0.1)
    ...     ]
    >>> prj1_3d = Projection(p1, p3, FromListConnector(conn_list))

User-defined connection algorithms
----------------------------------

If you wish to use a specific connection/wiring algorithm not covered by the PyNN built-in ones, the simplest option is to construct a list of connections and use the ``FromListConnector`` class. By looking at the code for the built-in ``Connector``\s, it should also be quite straightforward to write your own ``Connector`` class.

Setting synaptic weights and delays
===================================

Synaptic weights and delays may be set either when creating the ``Projection``, as arguments to the ``Connector`` object, or afterwards using the ``setWeights()`` and ``setDelays()`` methods of the ``Projection``. All ``Connector`` objects accept ``weights`` and ``delays`` arguments to their constructors.
Some examples. To set all weights to the same value::

    >>> connector = AllToAllConnector(weights=0.7)
    >>> prj1_3e = Projection(p1, p3, connector)

To set delays to random values taken from a specific distribution::

    >>> delay_distr = RandomDistribution(distribution='gamma', parameters=[5,0.5])
    >>> connector = FixedNumberPostConnector(n=20, delays=delay_distr)
    >>> prj2_1e = Projection(p2, p1, connector)

To set individual weights and delays to specific values::

    >>> weights = numpy.arange(1.1, 2.0, 0.9/p1.size)
    >>> delays = 2*weights
    >>> connector = OneToOneConnector(weights=weights, delays=delays)
    >>> prj1_1d = Projection(p1, p1, connector)

After creating the ``Projection``, to set the weights of all synaptic connections in a ``Projection`` to a single value, use the ``setWeights()`` method::

    >>> prj1_1.setWeights(0.2)

(Note: synaptic weights in PyNN are in nA for current-based synapses and µS for conductance-based synapses.)

To set different weights to different values, use ``setWeights()`` with a list or 1D numpy array argument, where the length of the list/array is equal to the number of synapses, e.g.::

    >>> weight_list = 0.1*numpy.ones(len(prj2_1))
    >>> weight_list[0:5] = 0.2
    >>> prj2_1.setWeights(weight_list)

To set weights to random values, use the ``randomizeWeights()`` method::

    >>> weight_distr = RandomDistribution(distribution='gamma', parameters=[1,0.1])
    >>> prj1_1.randomizeWeights(weight_distr)

Setting delays works similarly::

    >>> prj1_1.setDelays(0.6)
    >>> delay_list = 0.3*numpy.ones(len(prj2_1))
    >>> delay_list[0:5] = 0.4
    >>> prj2_1.setDelays(delay_list)
    >>> delay_distr = RandomDistribution(distribution='gamma', parameters=[2,0.2], boundaries=[get_min_delay(),1e12])
    >>> prj1_1.randomizeDelays(delay_distr)

Accessing weights and delays
============================

To get the weights of all connections in the ``Projection``, use the ``getWeights()`` method. Two formats are available: ``'list'`` returns a list of length equal to the number of connections in the projection, ``'array'`` returns a 2D weight array (with NaN for non-existent connections)::

    >>> prj2_1.getWeights(format='list')[3:7]
    [0.20000000000000001, 0.20000000000000001, 0.10000000000000001, 0.10000000000000001]
    >>> prj2_1.getWeights(format='array')[:3,:3]
    array([[ 0.2,  0.2,  0.2],
           [ 0.1,  0.1,  0.1],
           [ 0.1,  0.1,  0.1]])

``getDelays()`` is analogous. ``printWeights()`` writes the weights to a file.

Access to the weights and delays of individual connections is by the ``connections`` attribute, e.g.::

    >>> prj2_1.connections[0].weight
    0.2
    >>> prj2_1.connections[10].weight
    0.1

The ``weightHistogram()`` method returns a histogram of the synaptic weights, with bins determined by the ``min``, ``max`` and ``nbins`` arguments passed to the method.
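Since ``getWeights()`` returns ordinary Python/numpy data, equivalent analyses can also be performed directly with numpy; a sketch (the number of bins is arbitrary)::

    >>> counts, bin_edges = numpy.histogram(prj2_1.getWeights(format='list'), bins=10)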
Getting information about a ``Projection``
==========================================

As for ``Population``, a summary of the state of a ``Projection`` may be obtained with the ``describe()`` method::

    >>> print prj2_1.describe()
    ------- Projection description -------
    Projection 'Input Population→population0' from 'Input Population' [100 cells] to 'population0' [100 cells]
        | Connector : AllToAllConnector
        | Weights : 0.0
        | Delays : 0.1
        | Plasticity : None
        | Num. connections : 10000
    ---- End of Projection description -----

Synaptic plasticity
===================

So far we have discussed only the case where the synaptic weight is fixed. Dynamic synapses (short-term and long-term plasticity) are discussed in the next chapter.

Examples
========

There are several examples of networks built with the high-level API in the ``examples`` directory of the source distribution.

=============================
Running simulations with PyNN
=============================

Importing PyNN
==============

The simulator used by a PyNN script is determined by which module is imported from the PyNN package, e.g.::

    >>> from pyNN.neuron import * #doctest: +SKIP
    >>> from pyNN.nest import *   #doctest: +SKIP
    >>> from pyNN.pcsim import *  #doctest: +SKIP
    >>> from pyNN.brian import *  #doctest: +SKIP

After this line, all PyNN code is independent of the simulator used, although it is possible to include simulator-specific code in the script as well (if simulator-independence is not important to you, or if you are in the process of porting simulator-specific code to pure PyNN code).
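A common pattern that exploits this independence is to choose the backend on the command line; a sketch (defaulting to "nest" is an arbitrary choice, made here only for illustration)::

    import sys
    simulator_name = sys.argv[1] if len(sys.argv) > 1 else "nest"  # "neuron", "nest", "pcsim" or "brian"
    exec("from pyNN.%s import *" % simulator_name)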
Initialising the simulator
==========================

Before using any other functions or classes from PyNN, the user must call the ``setup()`` function::

    >>> setup()

``setup()`` takes various optional arguments: setting the simulation timestep (there is currently no support in the API for variable timestep methods, although native simulator code can be used to select this option where the simulator supports it) and setting the minimum and maximum synaptic delays, e.g.::

    >>> setup(timestep=0.1, min_delay=0.1, max_delay=10.0)

In previous versions, ``setup()`` took a ``debug`` argument for configuring logging. To allow more flexibility, configuration of logging must now be done separately. There is a convenience function in the ``pyNN.utility`` module to simplify this::

    >>> from pyNN.utility import init_logging
    >>> init_logging("logfile", debug=True)

or you can configure the Python ``logging`` module directly.
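For example, a minimal direct configuration using only the standard library (the file name is arbitrary)::

    >>> import logging
    >>> logging.basicConfig(filename="pynn_run.log", level=logging.DEBUG)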
Creating neurons
================

Neurons are created using the ``Population`` class, which represents a group of neurons all of the same type (i.e. the same model, although different parameterisations of the same model can be used to model different biological neuron types), e.g.::

    >>> p1 = Population(100, IF_curr_alpha, structure=space.Grid2D())

This creates 100 integrate-and-fire neurons with default parameters, distributed on a square grid. ``IF_curr_alpha`` is a particular class of IF neuron with alpha-function shaped synaptic currents, which will work with any PyNN simulation engine, whether NEURON, NEST, PCSIM or Brian. ``IF_curr_alpha`` is a so-called 'standard cell', implemented as a Python class. For more information, see the section StandardCells.

Since we didn't specify any parameters for the neuron model, the neurons we created above have default parameter values, stored in the ``default_parameters`` attribute of the standard cell class, e.g.::

    >>> IF_curr_alpha.default_parameters #doctest: +NORMALIZE_WHITESPACE
    {'tau_refrac': 0.10000000000000001, 'v_thresh': -50.0, 'tau_m': 20.0,
     'tau_syn_E': 0.5, 'v_rest': -65.0, 'cm': 1.0, 'v_reset': -65.0,
     'tau_syn_I': 0.5, 'i_offset': 0.0}

To use different parameter values, use the ``cellparams`` argument, e.g.::

    >>> p2 = Population(20, IF_curr_alpha, cellparams={'tau_m': 15.0, 'cm': 0.9})

If you try to set a non-existent parameter, or pass an invalid value, PyNN will raise an Exception, e.g.::

    >>> p2a = Population(20, IF_curr_alpha, cellparams={'foo': 15.0})
    Traceback (most recent call last):
      File "", line 1, in
      File "/home/andrew/dev/pyNN/neuron/__init__.py", line 113, in __init__
        common.Population.__init__(self, size, cellclass, cellparams, structure, label)
      File "/home/andrew/dev/pyNN/common.py", line 879, in __init__
        self.celltype = cellclass(cellparams)
      File "/home/andrew/dev/pyNN/neuron/standardmodels/cells.py", line 35, in __init__
        cells.IF_curr_alpha.__init__(self, parameters) # checks supplied parameters and adds default
      File "/home/andrew/dev/pyNN/standardmodels/__init__.py", line 60, in __init__
        models.BaseModelType.__init__(self, parameters)
      File "/home/andrew/dev/pyNN/models.py", line 16, in __init__
        self.parameters = self.__class__.checkParameters(parameters, with_defaults=True)
      File "/home/andrew/dev/pyNN/models.py", line 67, in checkParameters
        raise errors.NonExistentParameterError(k, cls, cls.default_parameters.keys())
    NonExistentParameterError: foo (valid parameters for are: cm, i_offset, tau_m, tau_refrac, tau_syn_E, tau_syn_I, v_reset, v_rest, v_thresh)

    >>> p2b = Population(20, IF_curr_alpha, cellparams={'tau_m': 'bar'})
    Traceback (most recent call last):
      File "", line 1, in
      File "/home/andrew/dev/pyNN/neuron/__init__.py", line 113, in __init__
        common.Population.__init__(self, size, cellclass, cellparams, structure, label)
      File "/home/andrew/dev/pyNN/common.py", line 879, in __init__
        self.celltype = cellclass(cellparams)
      File "/home/andrew/dev/pyNN/neuron/standardmodels/cells.py", line 35, in __init__
        cells.IF_curr_alpha.__init__(self, parameters) # checks supplied parameters and adds default
      File "/home/andrew/dev/pyNN/standardmodels/__init__.py", line 60, in __init__
        models.BaseModelType.__init__(self, parameters)
      File "/home/andrew/dev/pyNN/models.py", line 16, in __init__
        self.parameters = self.__class__.checkParameters(parameters, with_defaults=True)
      File "/home/andrew/dev/pyNN/models.py", line 57, in checkParameters
        raise errors.InvalidParameterValueError(err_msg)
    InvalidParameterValueError: For tau_m in IF_curr_alpha, expected , got (bar)

You can also give your population a label::

    >>> p3 = Population(100, SpikeSourceArray, label="Input Population")

This illustrates all the possible arguments of the ``Population`` constructor, with argument names. It creates a 3x4x5 array of ``IF_cond_alpha`` neurons, all with a spike threshold set to -55 mV and membrane time constant set to 10 ms::

    >>> p4 = Population(size=60, cellclass=IF_cond_alpha,
    ...                 cellparams={'v_thresh': -55.0, 'tau_m': 10.0, 'tau_refrac': 1.5},
    ...                 structure=space.Grid3D(3./4, 3./5), label="Column 1")

Since creating neurons on a grid is very common, the grid dimensions can be specified in place of the size, without having to create a structure object, e.g.::

    >>> p4a = Population((3,4,5), IF_cond_alpha)
    >>> assert p4.size == p4a.size == 60
    >>> assert p4.structure == p4a.structure, "%s != %s" % (p4.structure, p4a.structure)

The above examples all use PyNN standard cell models. It is also possible to use simulator-specific models by defining a `NativeCellClass`, e.g. for NEST::

    >>> p5 = Population(20, native_cell_type('iaf_neuron'), cellparams={'tau_m': 15.0, 'C_m': 0.001}) #doctest: +SKIP

This example will work with NEST but not with NEURON, PCSIM or Brian.

Setting parameter values
========================

As well as specifying the parameter values for the neuron models when you create a ``Population``, you can also set or change the values for an existing ``Population``.

Setting the same value for the entire population
------------------------------------------------

To set a parameter for all neurons in the population to the same value, use the ``set()`` method, e.g.::

    >>> p1.set("tau_m", 20.0)
    >>> p1.set({'tau_m':20, 'v_rest':-65})

The first form can be used for setting a single parameter, the second form for setting multiple parameters at once.

Setting the parameters of individual neurons
--------------------------------------------

To address individual neurons in a population, use ``[]`` notation, e.g.::

    >>> p1[0]
    1
    >>> p1[99]
    100
    >>> p2[17]
    118
    >>> p3[44]
    246

The return values are ``ID`` objects, which behave in most cases as integers, but also allow accessing the values of the cell parameters. The value within the square brackets is referred to as a neuron's *index*, which always runs from 0 to ``size``-1 for a given population, while the return value is its *id*. Trying to address a non-existent neuron will raise an Exception::

    >>> p1[999]
    Traceback (most recent call last):
      File "", line 1, in
      File "/home/andrew/dev/pyNN/common.py", line 394, in __getitem__
        return self.all_cells[index]
    IndexError: index out of bounds

To obtain an index given the id, use ``id_to_index()``, e.g.::

    >>> p3[49]
    170
    >>> p3.id_to_index(170)
    49

The ``ID`` object allows direct access to the parameters of individual neurons, e.g.::

    >>> p4[0].tau_m
    10.0
    >>> p4[0].tau_m = 15
    >>> p4[0].tau_m
    15.0

To change several parameters at once for a single neuron, use the ``set_parameters()`` method of the neuron ID, e.g.::

    >>> p4[1].set_parameters(tau_m=10.0, cm=0.5)
    >>> p4[1].tau_m
    10.0
    >>> p4[1].cm
    0.5

Setting the parameters of a subset of neurons
---------------------------------------------

To access several neurons at once, use slice notation, e.g., to access the first five neurons in a population, use::

    >>> p2[:5]

A ``PopulationView`` holds references to a subset of neurons in a ``Population``, which means that any changes in the view are also reflected in the real population (and vice versa), e.g.::

    >>> view = p2[:5]
    >>> view.set('tau_m', 11.0)
    >>> p2.get('tau_m')
    [11.0, 11.0, 11.0, 11.0, 11.0, 15.0, 15.0, 15.0, 15.0, 15.0, 15.0, 15.0, 15.0, 15.0, 15.0, 15.0, 15.0, 15.0, 15.0, 15.0]

``PopulationView`` objects behave in most ways as real ``Population`` objects; notably, they can be used in a ``Projection`` (see below) and combined with other ``Population`` or ``PopulationView`` objects to create an ``Assembly``.
Setting random values
---------------------

To set a parameter to values drawn from a random distribution, use the ``rset()`` method with a ``RandomDistribution`` object from the ``pyNN.random`` module (see the chapter on random numbers for more details). The following example sets the threshold potential of each neuron to a value drawn from a uniform distribution between -55 mV and -50 mV::

    >>> from pyNN.random import RandomDistribution
    >>> vthresh_distr = RandomDistribution(distribution='uniform', parameters=[-55,-50])
    >>> p1.rset('v_thresh', vthresh_distr)

Note that positional arguments can also be used. The following produces the same result as the above::

    >>> vthresh_distr = RandomDistribution('uniform', [-55,-50])

Setting values according to an array
------------------------------------

The most efficient way to set different (but non-random) values for different neurons is to use the ``tset()`` (for *topographic* set) method. The following example injects a current of 0.1 nA into the first 20 neurons in the population::

    >>> import numpy
    >>> current_input = numpy.zeros(p1.size)
    >>> current_input[:20] = 0.1
    >>> p1.tset('i_offset', current_input)

Setting initial values
======================

To set the initial values of state variables such as the membrane potential, use the ``initialize()`` method::

    >>> p1.initialize('v', -65.0)

To initialize different neurons to different random values, pass a ``RandomDistribution`` object instead of a float::

    >>> vinit_distr = RandomDistribution(distribution='uniform', parameters=[-70,-60])
    >>> p1.initialize('v', vinit_distr)

Iterating over all the neurons in a population
==============================================

To iterate over all the cells in a population, returning the neuron ids, use::

    >>> for id in p2: #doctest: +ELLIPSIS
    ...     print id, id.tau_m
    ...
    100 11.0
    101 11.0
    102 11.0
    103 11.0
    104 11.0
    105 15.0
    106 15.0
    ...

Injecting current
=================

Static or time-varying currents may be injected into the cells of a Population using either the ``inject_into()`` method of the ``CurrentSource`` or the ``inject()`` method of the ``Population``::

    >>> pulse = DCSource(amplitude=0.5, start=20.0, stop=80.0)
    >>> pulse.inject_into(p1[3:7])
    >>> p4.inject(pulse)

    >>> times = numpy.arange(0.0, 100.0, 1.0)
    >>> amplitudes = 0.1*numpy.sin(times*numpy.pi/100.0)
    >>> sine_wave = StepCurrentSource(times, amplitudes)
    >>> p1[(6,11,27)].inject(sine_wave)
    >>> sine_wave.inject_into(p5) #doctest: +SKIP
    >>> sine_wave.inject_into(p3)
    Traceback (most recent call last):
      File "", line 1, in
        sine_wave.inject_into(p3)
      File "/usr/lib/python/site-packages/pyNN/neuron/electrodes.py", line 67, in inject_into
        raise TypeError("Can't inject current into a spike source.")
    TypeError: Can't inject current into a spike source.

Recording
=========

Recording spike times is done with the method ``record()``. Recording membrane potential is done with the method ``record_v()``. Recording synaptic conductances is done with the method ``record_gsyn()``. All three methods have identical argument lists. Some examples::

    >>> p1.record()                   # record from all neurons in the population
    >>> p1.sample(10).record()        # record from 10 neurons chosen at random
    >>> p1[[0,1,2]].record()          # record from specific neurons

Running a simulation
====================

The ``run()`` function runs the simulation for a given number of milliseconds, e.g.::

    >>> run(1000.0)

Accessing recorded data and writing to file
===========================================

Writing recorded values to file is done with a second triple of methods, ``printSpikes()``, ``print_v()`` and ``print_gsyn()``, e.g.::

    >>> p1.printSpikes("spikefile.dat")

By default, the output files are post-processed to reformat them from the native simulator format to a common format that is the same for all simulator engines. The beginning of a typical spike file looks like::

    # dt = 0.1
    # n = 1000
    0.0 2
    0.3 5
    0.4 3
    0.9 2
    1.0 1
    ...

The beginning of a typical membrane potential file looks like::

    # dt = 0.1
    # n = 1000
    -65.0 0
    -64.9 0
    -64.7 0
    -64.5 0
    ...

Both file types begin with header lines giving the timestep (there is currently no support for variable-time step recording) and the number of data points in the file. Each line of the spike file then gives the occurrence time of a spike (in ms) and the id of the neuron in which it was recorded. Each line of the membrane potential file gives the membrane potential (in mV) followed by the id of the neuron in which it was recorded. In both cases, whether the file is sorted by cell id or by time depends on the simulator: it is not standardised.

Having a standard format facilitates comparisons across simulators, but of course carries some performance penalty. To get output in the native format of the simulator, add ``compatible_output=False`` to the argument list.
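Since the common format is plain text with ``#``-prefixed header lines, the recorded data can be loaded back with standard tools; a sketch using numpy (assuming the spike file written above)::

    >>> import numpy
    >>> data = numpy.loadtxt("spikefile.dat")  # '#' header lines are skipped automatically
    >>> spike_times = data[:, 0]               # in ms
    >>> cell_ids = data[:, 1].astype(int)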
When running a distributed simulation, each node records only those neurons that it simulates. By default, at the end of the simulation all nodes send their recorded data to the master node so that all values are written to a single output file. Again, there is a performance penalty for this, so if you wish each node to write its own file, add ``gather=False`` to the argument list.

It is possible to save data in various different formats. The default (if you pass a filename) is a text file, but you can also save in various binary formats. To save in HDF5 format (this requires the PyTables package to be installed), for example::

    >>> from pyNN.recording.files import HDF5ArrayFile
    >>> h5file = HDF5ArrayFile("spikefile.h5", "w")
    >>> p1.printSpikes(h5file)
    >>> h5file.close()

If you wish to obtain the recorded data within the simulation script, for plotting or further analysis, there is a further triple of methods, ``getSpikes()``, ``get_v()`` and ``get_gsyn()``. Again, there is a ``gather`` option for distributed simulations.

Statistics of recorded data
---------------------------

Often, the exact spike times and exact membrane potential traces are not required, only statistical measures. PyNN currently only provides one such measure, the mean number of spikes per neuron, e.g.::

    >>> p1.meanSpikeCount()
    0.01

More such statistical measures are planned for future releases. See also the ``get_spike_counts()`` method of ``Population``, which returns the number of spikes recorded for each neuron.

Repeating a simulation
======================

If you wish to reset network time to zero to run a new simulation with the same network (with different parameter values, perhaps), use the ``reset()`` function. Note that this does not change the network structure, nor the choice of which neurons to record (from previous ``record()`` calls), as sketched below.
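A sketch of this workflow (the parameter change is arbitrary; ``run()`` returns the current simulation time)::

    >>> run(100.0)
    100.0
    >>> reset()
    >>> p1.set("tau_m", 18.0)  # try a different parameter value
    >>> run(100.0)             # simulate again, starting from t = 0
    100.0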
Position in space
=================

The positions of neurons in space are usually determined by passing in a ``Structure`` object when creating a ``Population`` (see above). Some examples::

    >>> s1 = space.Line()
    >>> s1.generate_positions(3)
    array([[ 0.,  1.,  2.],
           [ 0.,  0.,  0.],
           [ 0.,  0.,  0.]])
    >>> s2 = space.Grid2D(aspect_ratio=2.0, dx=3.0, dy=7.0)
    >>> s2.generate_positions(8)
    array([[ 0.,  0.,  3.,  3.,  6.,  6.,  9.,  9.],
           [ 0.,  7.,  0.,  7.,  0.,  7.,  0.,  7.],
           [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.]])
    >>> s3 = space.RandomStructure(boundary=space.Sphere(radius=100.0))
    >>> s3.generate_positions(5)
    array([[ 15.7103484 ,   7.56979681, -26.39920966, -81.56024563, -27.30566837],
           [ 62.74380383, -26.29395986,  33.23787658, -16.46650874, -79.4064587 ],
           [  5.66080048, -70.5696085 , -42.68101409, -55.20377865, -24.22675542]])
    >>> s4 = space.RandomStructure(boundary=space.Cuboid(30,40,50))
    >>> s4.generate_positions(5)
    array([[ -2.69455996,   9.04858685,  -4.61756624,   4.53035932,  -4.80972742],
           [ 12.946409  ,  11.31629902,   5.52137332,   5.49659371,  -6.80003331],
           [ -9.36346794, -23.62104418,  -6.2160148 , -15.16040818,  -9.66371093]])

The positions of individual neurons in a population can be accessed using their ``position`` attribute, e.g.::

    >>> p1[99].position = (0.0, 9.0, 9.0)
    >>> p1[99].position
    array([ 0.,  9.,  9.])

Positions are always in 3D, and may be given as integers or floating-point values, and as tuples or as numpy arrays. No specific coordinate system or scale of units is assumed, although many parts of PyNN do assume a Euclidean coordinate system.

To obtain the positions of all neurons at once (as a numpy array), use the ``positions`` attribute of the ``Population`` object, e.g.::

    >>> p1.positions #doctest: +ELLIPSIS,+NORMALIZE_WHITESPACE
    array([[...]])

To find the neuron that is closest to a particular point in space, use the ``nearest()`` method::

    >>> p1.nearest((4.5, 7.8, 3.3))
    49
    >>> p1[p1.id_to_index(49)].position
    array([ 4.,  8.,  0.])

Getting information about a ``Population``
==========================================

A summary of the state of a ``Population`` may be obtained with the ``describe()`` method::

    >>> print p4.describe() #doctest: +NORMALIZE_WHITESPACE
    Population "Column 1"
        Structure   : Grid3D
            aspect_ratios: (0.75, 0.59999999999999998)
            fill_order: sequential
            dz: 1.0
            dx: 1.0
            dy: 1.0
            y0: 0.0
            x0: 0.0
            z0: 0
        Local cells : 60
        Cell type   : IF_cond_alpha
        ID range    : 221-280
        First cell on this node:
            ID: 221
            tau_refrac: 1.5
            tau_m: 15.0
            e_rev_E: 0.0
            i_offset: 0.0
            cm: 1.0
            e_rev_I: -70.0
            v_thresh: -55.0
            tau_syn_E: 0.3
            v_rest: -65.0
            tau_syn_I: 0.5
            v_reset: -65.0

The output format can be customized by passing a Jinja_ or Cheetah_ template::

    >>> print p4.describe(template="Population of {{size}} {{celltype.name}} cells",
    ...                   engine='jinja2')
    Population of 60 IF_cond_alpha cells

where ``template`` can be a string or a filename.
Connecting neurons
==================

A ``Projection`` object is a container for all the synaptic connections between neurons in two ``Population``\s (or ``PopulationView``\s, or ``Assembly``\s (see below)), together with methods for setting synaptic weights, delays and other properties. A ``Projection`` is created by specifying a pre-synaptic ``Population``, a post-synaptic ``Population`` and a ``Connector`` object, which determines the algorithm used to wire up the neurons, e.g.::

    >>> prj2_1 = Projection(p2, p1, method=AllToAllConnector(), target='excitatory')

This connects ``p2`` (pre-synaptic) to ``p1`` (post-synaptic) with excitatory synapses, using an ``AllToAllConnector`` object, which connects every neuron in the pre-synaptic population to every neuron in the post-synaptic population. The currently available ``Connector`` classes are explained below. It is fairly straightforward for a user to write a new ``Connector`` class if they wish to use a connection algorithm not already available in PyNN.

Note that the attribute ``synapse_types`` of all standard-cell classes contains a list of the possible values of ``target`` for that cell type.

All-to-all connections
----------------------

The ``AllToAllConnector`` constructor has one optional argument, ``allow_self_connections``, for use when connecting a ``Population`` to itself. By default it is ``True``, but if a neuron should not connect to itself, set it to ``False``, e.g.::

    >>> prj1_1 = Projection(p1, p1, AllToAllConnector(allow_self_connections=False))

One-to-one connections
----------------------

Use of the ``OneToOneConnector`` requires that the pre- and post-synaptic populations have the same size, e.g.::

    >>> prj1_1a = Projection(p1, p1, OneToOneConnector())

Trying to connect two ``Population``\s with different sizes will raise an Exception, e.g.::

    >>> invalid_prj = Projection(p2, p3, OneToOneConnector()) #doctest: +IGNORE_EXCEPTION_DETAIL
    Traceback (most recent call last):
      File "", line 1, in
        invalid_prj = Projection(p2, p3, OneToOneConnector())
      File "/usr/lib/python/site-packages/pyNN/neuron/__init__.py", line 220, in __init__
        method.connect(self)
      File "/usr/lib/python/site-packages/pyNN/connectors.py", line 281, in connect
        raise common.InvalidDimensionsError("OneToOneConnector does not support presynaptic and postsynaptic Populations of different sizes.")
    InvalidDimensionsError: OneToOneConnector does not support presynaptic and postsynaptic Populations of different sizes.

Connecting neurons with a fixed probability
-------------------------------------------

With the ``FixedProbabilityConnector`` method, each possible connection between all pre-synaptic neurons and all post-synaptic neurons is created with probability ``p_connect``, e.g.::

    >>> prj3_2 = Projection(p3, p2, FixedProbabilityConnector(p_connect=0.2))

The constructor also accepts an ``allow_self_connections`` parameter, as above.

Connecting neurons with a distance-dependent probability
--------------------------------------------------------

For each pair of pre-post cells, the connection probability depends on distance (see above for how to specify neuron positions in space). The constructor requires a string ``d_expression``, which should be the right-hand side of a valid Python expression for probability (i.e. returning a value between 0 and 1), involving ``d``, e.g.::

    >>> prj1_1b = Projection(p1, p1, DistanceDependentProbabilityConnector("exp(-d)"))
    >>> prj2_2 = Projection(p2, p2, DistanceDependentProbabilityConnector("d<3"))

The first example connects neurons with an exponentially-decaying probability. The second example connects each neuron to all its neighbours within a range of 3 units (typically interpreted as µm, but this is up to the individual user).
Note that boolean values ``True`` and ``False`` are automatically converted to numerical values ``1.0`` and ``0.0``.

The calculation of distance may be controlled by specifying a ``Space`` object. By default, the 3D distance between cell positions is used, but the ``axes`` argument may be used to change this, e.g.::

    >>> connector = DistanceDependentProbabilityConnector("exp(-abs(d))", space=Space(axes='xy'))

will ignore the z-coordinate when calculating distance. Similarly, the origins of the coordinate systems of the two ``Population``\s and the relative scale of the two coordinate systems may be controlled using the ``offset`` and ``scale_factor`` arguments to the ``Space`` constructor. This is useful when connecting brain regions that have very different sizes but that have a topographic mapping between them, e.g. retina to LGN to V1.

In more abstract models, it is often useful to be able to avoid edge effects by specifying periodic boundary conditions, e.g.::

    >>> connector = DistanceDependentProbabilityConnector("exp(-abs(d))", space=Space(periodic_boundaries=((0,500), (0,500), None)))

calculates distance on the surface of a torus of circumference 500 µm (wrap-around in the x- and y-dimensions but not z).

Divergent/fan-out connections
-----------------------------

The ``FixedNumberPostConnector`` connects each pre-synaptic neuron to exactly ``n`` post-synaptic neurons chosen at random::

    >>> prj2_1a = Projection(p2, p1, FixedNumberPostConnector(n=30))

As a refinement to this, the number of post-synaptic neurons can be chosen at random from a ``RandomDistribution`` object, e.g.::

    >>> distr_npost = RandomDistribution(distribution='binomial', parameters=[100,0.3])
    >>> prj2_1b = Projection(p2, p1, FixedNumberPostConnector(n=distr_npost))

Convergent/fan-in connections
-----------------------------

The ``FixedNumberPreConnector`` has the same arguments as ``FixedNumberPostConnector``, but of course it connects each *post*-synaptic neuron to ``n`` *pre*-synaptic neurons, e.g.::

    >>> prj2_1c = Projection(p2, p1, FixedNumberPreConnector(5))
    >>> distr_npre = RandomDistribution(distribution='poisson', parameters=[5])
    >>> prj2_1d = Projection(p2, p1, FixedNumberPreConnector(distr_npre))

Using the Connection Set Algebra
--------------------------------

The Connection Set Algebra (Djurfeldt, 2010) is a sophisticated system that allows elaborate connectivity patterns to be constructed using a concise syntax. Using the CSA requires the Python ``csa`` module to be installed (see Installation). The details of constructing a connection set are beyond the scope of this manual. We give here a simple example::

    >>> import csa
    >>> cs = csa.full - csa.oneToOne
    >>> prj2_1e = Projection(p2, p1, CSAConnector(cs))

Writing and reading connection patterns to/from a file
------------------------------------------------------

Connection patterns can be written to a file using ``saveConnections()``, e.g.::

    >>> prj1_1a.saveConnections("prj1_1a.conn")

These files can then be read back in to create a new ``Projection`` object using a ``FromFileConnector`` object, e.g.::

    >>> prj1_1c = Projection(p1, p1, FromFileConnector("prj1_1a.conn"))

Specifying a list of connections
--------------------------------

Specific connection patterns not covered by the methods above can be obtained by specifying an explicit list of pre-synaptic and post-synaptic neuron indices, with weights and delays. (Note that the weights and delays should be optional, but currently are not.) Example::
Specifying a list of connections
--------------------------------

Specific connection patterns not covered by the methods above can be obtained by specifying an explicit list of pre-synaptic and post-synaptic neuron indices, with weights and delays. (Note that the weights and delays should be optional, but currently are not). Example::

    >>> conn_list = [
    ...     (0, 0, 0.0, 0.1),
    ...     (0, 1, 0.0, 0.1),
    ...     (0, 2, 0.0, 0.1),
    ...     (1, 5, 0.0, 0.1)
    ...     ]
    >>> prj1_2d = Projection(p1, p2, FromListConnector(conn_list))

User-defined connection algorithms
----------------------------------

If you wish to use a specific connection/wiring algorithm not covered by the PyNN built-in ones, the options include: constructing a list of connections and using the ``FromListConnector`` class; using the CSA (see above); writing your own ``Connector`` class (by looking at the code for the built-in ``Connector``\s, this should be quite straightforward).

Setting synaptic weights and delays
===================================

Synaptic weights and delays may be set either when creating the ``Projection``, as arguments to the ``Connector`` object, or afterwards using the ``setWeights()`` and ``setDelays()`` methods of ``Projection``. All ``Connector`` objects (except ``CSAConnector``) accept ``weights`` and ``delays`` arguments to their constructors. Some examples:

To set all weights to the same value::

    >>> connector = AllToAllConnector(weights=0.7)
    >>> prj1_2e = Projection(p1, p2, connector)

To set delays to random values taken from a specific distribution::

    >>> delay_distr = RandomDistribution(distribution='gamma', parameters=[5,0.5])
    >>> connector = FixedProbabilityConnector(p_connect=0.5, delays=delay_distr)
    >>> prj2_1e = Projection(p2, p1, connector)

To set individual weights and delays to specific values::

    >>> weights = numpy.arange(1.1, 2.0, 0.9/p1.size)
    >>> delays = 2*weights
    >>> connector = OneToOneConnector(weights=weights, delays=delays)
    >>> prj1_1d = Projection(p1, p1, connector)

After creating the ``Projection``, to set the weights of all synaptic connections in a ``Projection`` to a single value, use the ``setWeights()`` method::

    >>> prj1_1.setWeights(0.2)

[Note: synaptic weights in PyNN are in nA for current-based synapses and µS for conductance-based synapses].

To set the weights to different values for different connections, use ``setWeights()`` with a list or 1D numpy array argument, where the length of the list/array is equal to the number of synapses, e.g.::

    >>> weight_list = 0.1*numpy.ones(len(prj2_1))
    >>> weight_list[0:5] = 0.2
    >>> prj2_1.setWeights(weight_list)

To set weights to random values, use the ``randomizeWeights()`` method::

    >>> weight_distr = RandomDistribution(distribution='gamma', parameters=[1,0.1])
    >>> prj1_1.randomizeWeights(weight_distr)

Setting delays works similarly::

    >>> prj1_1.setDelays(0.6)
    >>> delay_list = 0.3*numpy.ones(len(prj2_1))
    >>> delay_list[0:5] = 0.4
    >>> prj2_1.setDelays(delay_list)
    >>> delay_distr = RandomDistribution(distribution='gamma', parameters=[2,0.2], boundaries=[get_min_delay(),1e12])
    >>> prj1_1.randomizeDelays(delay_distr)

It is also possible to access the attributes of individual connections using the ``connections`` attribute of a projection::

    >>> for c in prj1_1.connections[:5]:
    ...     c.weight *= 2

In general, though, this is less efficient than using list- or array-based access.

For the ``CSAConnector``, weights and delays may be specified as part of the connection set - see the `CSA documentation`_ for details.

Accessing weights and delays
============================

To get the weights of all connections in the ``Projection``, use the ``getWeights()`` method. Two formats are available.
``'list'`` returns a list of length equal to the number of connections in the projection, while ``'array'`` returns a 2D weight array (with NaN for non-existent connections)::

    >>> prj2_1.getWeights(format='list')[3:7]
    [0.20000000000000001, 0.20000000000000001, 0.10000000000000001, 0.10000000000000001]
    >>> prj2_1.getWeights(format='array')[:3,:3]
    array([[ 0.2,  0.2,  0.2],
           [ 0.1,  0.1,  0.1],
           [ 0.1,  0.1,  0.1]])

``getDelays()`` is analogous. ``printWeights()`` writes the weights to a file.

Access to the weights and delays of individual connections is by the ``connections`` attribute, e.g.::

    >>> prj2_1.connections[0].weight
    0.2
    >>> prj2_1.connections[10].weight
    0.1

The ``weightHistogram()`` method returns a histogram of the synaptic weights, with bins determined by the ``min``, ``max`` and ``nbins`` arguments passed to the method.

Getting information about a ``Projection``
==========================================

As for ``Population``, a summary of the state of a ``Projection`` may be obtained with the ``describe()`` method::

    >>> print prj2_1.describe()
    Projection "population1→population0" from "population1" (20 cells) to "population0" (100 cells)
    Target : excitatory
    Connector : AllToAllConnector
        allow_self_connections : True
    Weights : 0.0
    Delays : 0.1
    Plasticity : None
    Total connections : 2000
    Local connections : 2000

Synaptic plasticity
===================

So far we have discussed only the case where the synaptic weight is fixed. Dynamic synapses (short-term and long-term plasticity) are discussed in the next chapter.

Higher-level structures
=======================

As we noted above, the neurons in a ``Population`` all have the same type. To simplify working with many different populations, individual ``Population`` and ``PopulationView`` objects may be aggregated in an ``Assembly``, e.g.::

    >>> assembly = Assembly(p1, p2[10:15], p3, p4)

An assembly behaves in many ways like a ``Population``, e.g. setting and retrieving parameters, specifying which neurons to record from, etc. It can also be specified as the source or target of a ``Projection``. In this case, all the neurons in the component populations are treated as identical for the purposes of the connection algorithm (note that if the synapse type is specified (with the ``target`` attribute), an Exception will be raised if not all component neuron types possess that synapse type).

You can also create an assembly simply by adding multiple different ``Population`` or ``PopulationView`` objects together::

    >>> another_assembly = p3 + p4

Individual populations within an Assembly may be accessed via their labels, e.g.::

    >>> assembly.get_population("Input Population")
    >>> assembly.get_population("Column 1")

Iterating over an assembly returns individual IDs, ordered by population. Similarly, the ``size`` attribute of an ``Assembly`` gives the total number of neurons it contains. To iterate over or count populations, use the ``populations`` attribute::

    >>> for p in assembly.populations:
    ...     print p.label, p.size, p.celltype
    population0 100 IF_curr_alpha
    view of population1 with mask slice(10, 15, None) 5 IF_curr_alpha
    Input Population 100 SpikeSourceArray
    Column 1 60 IF_cond_alpha

Finishing up
============

Just as a simulation must be begun with a call to ``setup()``, it must be ended with a call to ``end()``.

Examples
========

There are several example scripts in the ``examples`` directory of the source distribution.
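To give a feel for how the pieces described in this chapter fit together, here is a minimal, self-contained sketch of a complete simulation (assuming the ``pyNN.nest`` backend is installed; any other backend module could be substituted, and the sizes, rate, weight and filename are arbitrary)::

    import pyNN.nest as sim

    sim.setup()
    # a population of Poisson spike sources driving a population of neurons
    source = sim.Population(100, sim.SpikeSourcePoisson, {'rate': 20.0})
    cells = sim.Population(10, sim.IF_cond_exp)
    prj = sim.Projection(source, cells,
                         sim.FixedProbabilityConnector(p_connect=0.5, weights=0.005),
                         target='excitatory')
    cells.record()                            # record spikes from all neurons
    sim.run(1000.0)
    cells.printSpikes("output_spikes.dat")    # hypothetical output filename
    sim.end()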
.. _`CSA documentation`: http://software.incf.org/software/csa/
.. _`jinja`: http://jinja.pocoo.org/
.. _`cheetah`: http://www.cheetahtemplate.org/

PyNN-0.7.4/doc/introduction.txt0000644000175000017500000000675611736323051017176 0ustar andrewandrew00000000000000

============
Introduction
============

PyNN (pronounced 'pine') is a simulator-independent language for building neuronal network models. In other words, you can write the code for a model once, using the PyNN API and the Python_ programming language, and then run it without modification on any simulator that PyNN supports (currently NEURON_, NEST_, PCSIM_ and Brian_).

The PyNN API aims to support modelling at a high level of abstraction (populations of neurons, layers, columns and the connections between them) while still allowing access to the details of individual neurons and synapses when required. PyNN provides a library of standard neuron, synapse and synaptic plasticity models, which have been verified to work the same on the different supported simulators. PyNN also provides a set of commonly-used connectivity algorithms (e.g. all-to-all, random, distance-dependent, small-world) but makes it easy to provide your own connectivity in a simulator-independent way, either using the Connection Set Algebra `(Djurfeldt, 2010)`_ or by writing your own Python code.

Even if you don't wish to run simulations on multiple simulators, you may benefit from writing your simulation code using PyNN's powerful, high-level interface. In this case, you can use any neuron or synapse model supported by your simulator, and are not restricted to the standard models.

It is straightforward to port an existing model from a Python-supporting simulator to PyNN, since this can be done incrementally, replacing one piece of simulator-specific code at a time with the PyNN equivalent, and testing that the model behaviour is unchanged at each step. To support porting, PyNN also provides a low-level, procedural API (functions ``create()``, ``connect()``, ``record()``, etc.) as a wrapper around the main object-oriented API (classes ``Population``, ``Projection``, ``Assembly``, etc.), for when this better matches the original model.

PyNN is a work in progress, but is already being used for several large-scale simulation projects. Download the current stable release of the API (0.7), or get the development version from the `Subversion repository`_.

Licence
-------

The code is released under the CeCILL_ licence. (This is equivalent to and compatible with the GPL).

Citing PyNN
-----------

If you publish work using or mentioning PyNN, we would appreciate it if you would cite the following paper:

Davison AP, Brüderle D, Eppler JM, Kremkow J, Muller E, Pecevski DA, Perrinet L and Yger P (2009) PyNN: a common interface for neuronal network simulators. Front. Neuroinform. 2:11, doi:10.3389/neuro.11.011.2008.

Questions/Bugs/Enhancements
---------------------------

If you find a bug in PyNN, or wish to request a new feature, please go to the `PyNN website`_, click on "New Ticket", and fill in the form.

If you have questions or comments about PyNN, please post a message on the `NeuralEnsemble Google group`_.

.. _Python: http://www.python.org/
.. _CeCILL: http://www.cecill.info/
.. _NEURON: http://www.neuron.yale.edu/neuron/
.. _NEST: http://www.nest-initiative.org/?page=Software
.. _PCSIM: http://sourceforge.net/projects/pcsim/
.. _Brian: http://briansimulator.org/
.. _`Subversion repository`: https://neuralensemble.org/svn/PyNN/trunk
.. _`PyNN website`: http://neuralensemble.org/PyNN/
.. _`NeuralEnsemble Google group`: http://groups.google.com/group/neuralensemble
.. _`(Djurfeldt, 2010)`: http://software.incf.org/software/csa/

PyNN-0.7.4/doc/parallel.txt0000644000175000017500000001061611736323051016237 0ustar andrewandrew00000000000000

============================
Running parallel simulations
============================

Where the underlying simulator supports distributed simulations, in which the computations are spread over multiple processors using MPI (this is the case for NEURON, NEST and PCSIM), PyNN also supports this. To run a distributed simulation on eight nodes::

    $ mpirun -np 8 -machinefile ~/mpi_hosts python myscript.py

Depending on the implementation of MPI you have, ``mpirun`` could be replaced by ``mpiexec`` or another command, and the options may also be somewhat different. For NEURON only, you can also run distributed simulations using ``nrniv`` instead of the ``python`` executable::

    $ mpirun -np 8 -machinefile ~/mpi_hosts nrniv -python -mpi myscript.py

Additional requirements
-----------------------

First, make sure you have compiled the simulators you wish to use with MPI enabled. There is usually a configure flag called something like "``--with-mpi``" to do this, but see the installation documentation for each simulator for details.

If you wish to use the default "gather" feature (see below), which automatically gathers output data from all the nodes to the master node (the one on which you launched the simulation), you will need to install the ``mpi4py`` module (see ``_ for downloads and documentation). Installation is usually very straightforward, although, if you have more than one MPI implementation installed on your system (e.g. OpenMPI and MPICH2), you must be sure to build ``mpi4py`` with the same MPI implementation that you used to build the simulator.

Code modifications
------------------

In most cases, no modifications to your code should be necessary to run in parallel. PyNN, and the simulator, take care of distributing the computations between nodes. Furthermore, the default settings should give results that are independent of the number of processors used, even when using random numbers.

Gathering data to the master node
---------------------------------

The various methods of the ``Population`` and ``Projection`` classes that deal with accessing recorded data or writing it to disk, such as ``getSpikes()``, ``print_v()``, etc., have an optional argument ``gather``, which is ``True`` by default. If ``gather`` is ``True``, then data generated on other nodes is sent to the master node. This means that, for example, ``printSpikes()`` will create only a single file, on the filesystem of the master node. If ``gather`` is ``False``, each node will write a file on its local filesystem. This option is often desirable if you wish to do distributed post-processing of the data. (Don't worry, by the way, if you are using a shared filesystem such as NFS. If ``gather`` is ``False`` then the node ID is appended to the filename, so there is no chance of conflict between the different nodes).
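For example (a sketch, assuming a previously recorded ``Population`` ``p``; the filenames are arbitrary), each node can write its own file for later distributed post-processing, or all data can be gathered into a single file on the master node::

    >>> p.printSpikes("spikes_local.dat", gather=False)  # one file per node, node ID appended
    >>> p.printSpikes("spikes_all.dat", gather=True)     # single file on the master node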
Random number generators
------------------------

In general, we expect that our results should not depend on the number of processors used to produce them. If our simulations use random numbers in setting-up or running the network, this means that each object that uses random numbers should receive the same sequence independent of which node it is on or how many nodes there are. PyNN achieves this by ensuring the generator seed is the same on all nodes, and then generating as many random numbers as would be used in the single-processor case and throwing away those that are not needed. This obviously has a potential impact on performance, and so it is possible to turn it off by passing ``parallel_safe=False`` as an argument when creating the random number generator, e.g.::

    >>> import pyNN.neuron as sim
    >>> np = sim.num_processes()
    >>> node = sim.rank()
    >>> from pyNN.random import NumpyRNG
    >>> rng = NumpyRNG(seed=249856, parallel_safe=False, rank=node, num_processes=np)

Now, PyNN will ensure the seed is different on each node, and will generate only as many numbers as are actually needed on each node.

Note that the above applies only to the random number generators provided by the PyNN ``random`` module, not to the native RNGs used internally by each simulator. This means that, for example, you should prefer ``SpikeSourceArray`` (for which you can generate Poisson spike times using a parallel-safe RNG) to ``SpikeSourcePoisson``, which uses the simulator's internal RNG, if you care about being independent of the number of processors.

PyNN-0.7.4/doc/standardmodels.txt0000644000175000017500000002226411736323051017451 0ustar andrewandrew00000000000000

===============
Standard models
===============

Standard models are neuron models that are available in at least two of the simulation engines supported by PyNN. PyNN performs automatic translation of parameter names, types and units. Only a handful of models are currently available, but the list will be expanded in future releases. To obtain a list of all the standard models available in a given simulator, use the ``list_standard_models()`` function, e.g.:

.. code-block:: python

    >>> from pyNN import neuron
    >>> neuron.list_standard_models()
    ['IF_cond_alpha', 'IF_curr_exp', 'IF_cond_exp', 'EIF_cond_exp_isfa_ista',
     'SpikeSourceArray', 'HH_cond_exp', 'IF_cond_exp_gsfa_grr',
     'IF_facets_hardware1', 'SpikeSourcePoisson', 'EIF_cond_alpha_isfa_ista',
     'IF_curr_alpha']

Neurons
=======

IF_curr_alpha
-------------

Leaky integrate and fire model with fixed threshold and alpha-function-shaped post-synaptic current.

Availability: NEST, NEURON, PCSIM, Brian

============== ============= ===== ===================================================
Name           Default value Units Description
============== ============= ===== ===================================================
``v_rest``     -65.0         mV    Resting membrane potential
``cm``         1.0           nF    Capacity of the membrane
``tau_m``      20.0          ms    Membrane time constant
``tau_refrac`` 0.0           ms    Duration of refractory period
``tau_syn_E``  5.0           ms    Rise time of the excitatory synaptic alpha function
``tau_syn_I``  5.0           ms    Rise time of the inhibitory synaptic alpha function
``i_offset``   0.0           nA    Offset current
``v_reset``    -65.0         mV    Reset potential after a spike
``v_thresh``   -50.0         mV    Spike threshold
============== ============= ===== ===================================================
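For example (a sketch, assuming ``setup()`` has been called and the relevant names imported from a backend module; the values are arbitrary), any of the defaults in the table above can be overridden at creation time by passing a parameter dictionary::

    >>> p = Population(10, IF_curr_alpha, {'tau_m': 15.0, 'v_thresh': -55.0})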
IF_curr_exp
-----------

Leaky integrate and fire model with fixed threshold and decaying-exponential post-synaptic current. (Separate synaptic currents for excitatory and inhibitory synapses.)

Availability: NEST, NEURON, PCSIM, Brian

============== ============= ===== =========================================
Name           Default value Units Description
============== ============= ===== =========================================
``v_rest``     -65.0         mV    Resting membrane potential
``cm``         1.0           nF    Capacity of the membrane
``tau_m``      20.0          ms    Membrane time constant
``tau_refrac`` 0.0           ms    Duration of refractory period
``tau_syn_E``  5.0           ms    Decay time of excitatory synaptic current
``tau_syn_I``  5.0           ms    Decay time of inhibitory synaptic current
``i_offset``   0.0           nA    Offset current
``v_reset``    -65.0         mV    Reset potential after a spike
``v_thresh``   -50.0         mV    Spike threshold
============== ============= ===== =========================================

IF_cond_alpha
-------------

Leaky integrate and fire model with fixed threshold and alpha-function-shaped post-synaptic conductance.

Availability: NEST, NEURON, PCSIM, Brian

============== ============= ===== ===================================================
Name           Default value Units Description
============== ============= ===== ===================================================
``v_rest``     -65.0         mV    Resting membrane potential
``cm``         1.0           nF    Capacity of the membrane
``tau_m``      20.0          ms    Membrane time constant
``tau_refrac`` 0.0           ms    Duration of refractory period
``tau_syn_E``  5.0           ms    Rise time of the excitatory synaptic alpha function
``tau_syn_I``  5.0           ms    Rise time of the inhibitory synaptic alpha function
``e_rev_E``    0.0           mV    Reversal potential for excitatory input
``e_rev_I``    -70.0         mV    Reversal potential for inhibitory input
``v_thresh``   -50.0         mV    Spike threshold
``v_reset``    -65.0         mV    Reset potential after a spike
``i_offset``   0.0           nA    Offset current
============== ============= ===== ===================================================

IF_cond_exp
-----------

Leaky integrate and fire model with fixed threshold and decaying-exponential post-synaptic conductance.

Availability: NEST, NEURON, PCSIM, Brian

============== ============= ===== ===================================================
Name           Default value Units Description
============== ============= ===== ===================================================
``v_rest``     -65.0         mV    Resting membrane potential
``cm``         1.0           nF    Capacity of the membrane
``tau_m``      20.0          ms    Membrane time constant
``tau_refrac`` 0.0           ms    Duration of refractory period
``tau_syn_E``  5.0           ms    Decay time of the excitatory synaptic conductance
``tau_syn_I``  5.0           ms    Decay time of the inhibitory synaptic conductance
``e_rev_E``    0.0           mV    Reversal potential for excitatory input
``e_rev_I``    -70.0         mV    Reversal potential for inhibitory input
``v_thresh``   -50.0         mV    Spike threshold
``v_reset``    -65.0         mV    Reset potential after a spike
``i_offset``   0.0           nA    Offset current
============== ============= ===== ===================================================
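As with the current-based models, a parameter dictionary can override the defaults above. A sketch (the values are arbitrary; parameter values can be read back with ``get()``)::

    >>> p = Population(5, IF_cond_exp, {'e_rev_I': -75.0, 'tau_syn_I': 10.0})
    >>> p.get('e_rev_I')
    [-75.0, -75.0, -75.0, -75.0, -75.0]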
HH_cond_exp
-----------

Single-compartment Hodgkin-Huxley-type neuron with transient sodium and delayed-rectifier potassium currents using the ion channel models from Traub.

Availability: NEST, NEURON, Brian

============== ============= ===== ===================================================
Name           Default value Units Description
============== ============= ===== ===================================================
``gbar_Na``    20.0          uS
``gbar_K``     6.0           uS
``g_leak``     0.01          uS
``cm``         0.2           nF
``v_offset``   -63.0         mV
``e_rev_Na``   50.0          mV
``e_rev_K``    -90.0         mV
``e_rev_leak`` -65.0         mV
``e_rev_E``    0.0           mV
``e_rev_I``    -80.0         mV
``tau_syn_E``  0.2           ms
``tau_syn_I``  2.0           ms
``i_offset``   0.0           nA
============== ============= ===== ===================================================

EIF_cond_alpha_isfa_ista
------------------------

Adaptive exponential integrate and fire neuron according to Brette R and Gerstner W (2005) Adaptive Exponential Integrate-and-Fire Model as an Effective Description of Neuronal Activity. J Neurophysiol 94:3637-3642

Availability: NEST, NEURON, Brian

============== ============= ===== ==================================================================
Name           Default value Units Description
============== ============= ===== ==================================================================
``cm``         0.281         nF    Capacity of the membrane
``tau_refrac`` 0.0           ms    Duration of refractory period
``v_spike``    0.0           mV    Spike detection threshold
``v_reset``    -70.6         mV    Reset value for membrane potential after a spike
``v_rest``     -70.6         mV    Resting membrane potential (Leak reversal potential)
``tau_m``      9.3667        ms    Membrane time constant
``i_offset``   0.0           nA    Offset current
``a``          4.0           nS    Subthreshold adaptation conductance
``b``          0.0805        nA    Spike-triggered adaptation
``delta_T``    2.0           mV    Slope factor
``tau_w``      144.0         ms    Adaptation time constant
``v_thresh``   -50.4         mV    Spike initiation threshold
``e_rev_E``    0.0           mV    Excitatory reversal potential
``tau_syn_E``  5.0           ms    Rise time of excitatory synaptic conductance (alpha function)
``e_rev_I``    -80.0         mV    Inhibitory reversal potential
``tau_syn_I``  5.0           ms    Rise time of the inhibitory synaptic conductance (alpha function)
============== ============= ===== ==================================================================

Spike sources
=============

SpikeSourcePoisson
------------------

Spike source, generating spikes according to a Poisson process.

Availability: NEST, NEURON, PCSIM, Brian

============ ============= ====== ==========================
Name         Default value Units  Description
============ ============= ====== ==========================
``rate``     0.0           s^-1   Mean spike frequency
``start``    0.0           ms     Start time
``duration`` 10^9          ms     Duration of spike sequence
============ ============= ====== ==========================

SpikeSourceArray
----------------

Spike source generating spikes at the times given in the ``spike_times`` array.

Availability: NEST, NEURON, PCSIM, Brian

=============== ============= ====== ===========================================
Name            Default value Units  Description
=============== ============= ====== ===========================================
``spike_times`` ``[]``        ms     list or numpy array containing spike times
=============== ============= ====== ===========================================
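A sketch of how these spike sources are typically used (the rate, start time and spike times are arbitrary)::

    >>> noise = Population(20, SpikeSourcePoisson, {'rate': 50.0, 'start': 100.0})
    >>> stim = Population(1, SpikeSourceArray, {'spike_times': [10.0, 20.0, 30.0]})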
PyNN-0.7.4/doc/developers_guide.txt0000644000175000017500000002057511736323051017775 0ustar andrewandrew00000000000000

=================
Developers' guide
=================

This guide contains information about contributing to PyNN development, and aims to explain the overall architecture and some of the internal details of the PyNN codebase. Discussions about PyNN take place in the `NeuralEnsemble Google Group`_.

Contributing to PyNN
====================

If you find a bug or would like to add a new feature to PyNN, the first thing to do is to create a ticket for it at http://neuralensemble.org/trac/PyNN/newticket. You do not need an account, but it is better if you do have one since then we can see who reported it, and you have a better chance of avoiding the spam filter.

If you know how to fix the bug, please attach a patch to the ticket, and please also provide a unit test that fails with the original code and passes when your patch is applied. If you would like commit rights for the Subversion repository, please contact us.

If you do have commit rights, please make sure *all* the tests pass before you make a commit (see below).

Code style
==========

We try to stay fairly close to PEP8_. Please note in particular:

- indentation of four spaces, no tabs
- single space around most operators, but no space around the '=' sign when used to indicate a keyword argument or a default parameter value.
- some function/method names in PyNN use ``mixedCase``, but these will gradually be deprecated and replaced with ``lower_case_with_underscores``. Any new functions or methods should use the latter.
- we currently target versions 2.5 to 2.7. The main consequence of this is that ``except Exception`` can't use the ``as`` statement, since this is not supported in 2.5.

Testing
=======

Running the PyNN test suite requires the nose_ and mock_ packages, and optionally the coverage_ package. To run the entire test suite, in the ``test`` subdirectory of the source tree::

    $ nosetests

To see how well the codebase is covered by the tests, run::

    $ nosetests --with-coverage --cover-package=pyNN --cover-erase

There are currently two sorts of tests: unit tests, which aim to exercise small pieces of code such as individual functions and methods, and system tests, which aim to test that all the pieces of the system work together as expected.

If you add a new feature to PyNN, you should write both unit and system tests.

Unit tests should where necessary make use of mock/fake/stub/dummy objects to isolate the component under test as well as possible. Except when testing a specific simulator interface, unit tests should be able to run without a simulator installed.

System tests should be written so that they can run with any of the simulators. The suggested way to do this is to write test functions in a separate file that take a simulator module as an argument, and then call these functions from ``test_neuron.py``, ``test_nest.py``, etc.

The ``test/unsorted`` directory contains a number of old tests that are either no longer useful or have not yet been adapted to the nose framework. These are not part of the test suite, but we are gradually adapting those tests that are useful and deleting the others.

Structure of the codebase
=========================

PyNN is both an API for simulator-independent model descriptions and an implementation of that API for a number of simulators. If you wish to add PyNN support for your own simulator, you are welcome to add it as part of the main PyNN codebase, or you can maintain it separately. The advantage of the former is that we can help maintain it, and keep it up to date as the API evolves.

A PyNN-compliant interface is not required to use any of the code from the ``pyNN`` package; it can implement the API entirely independently.
However, by basing an interface on the "common" implementation you can save yourself a lot of work, since once you implement a small number of low-level functions and classes, you get the rest of the API for free.

The common implementation
-------------------------

Recording
~~~~~~~~~

The ``recording`` module provides a base class ``Recorder`` that exposes methods ``record()``, ``get()``, ``write()`` and ``count()``. Each simulator using the common implementation then subclasses this base class, and must implement at least the methods ``_record()``, ``_get()`` and ``_local_count()``. Each ``Recorder`` instance records only a single variable, whose name is passed in the constructor.

By default, PyNN scales recorded data to the standard PyNN units (mV for voltage, etc.), reorders columns if necessary, and adds initial values to the beginning of the recording if the simulator does not record the value at time 0. In this way, the structure of the output data is harmonized between simulators. For large datasets, this can be very time-consuming, and so this restructuring can be turned off by setting the ``compatible_output`` flag to ``False``.

.. TODO: discuss output file formats.
.. TODO: discuss gathering with MPI

The NEST interface
------------------

The NEURON interface
--------------------

The PCSIM interface
-------------------

The Brian interface
-------------------

Adding a new simulator interface
================================

The quickest way to add an interface for a new simulator is to implement the "internal API", described below. Each simulator interface is implemented as a sub-package within the ``pyNN`` package. The suggested layout for this sub-package is as follows::

    |\_   __init__.py
    |\_   cells.py
    |\_   connectors.py
    |\_   electrodes.py
    |\_   recording.py
    |\_   simulator.py
     \_   synapses.py

The only two files that are *required* are ``__init__.py`` and ``simulator.py``: the contents of all the other modules being imported into ``__init__.py``.

[Maybe just provide a template, rather than discussing the whole thing]

``__init__``:

- ``list_standard_models()`` [surely this could be in common?]
- ``setup()``
- ``end()``, ``run()``, ``set()``, ``reset()``, ``initialize()``, ``get_X()``, ``num_processes()``, ``rank()`` may be imported from ``common`` provided ``common.simulator`` is set to the local ``simulator`` module.
- ``create`` -> ``common.build_create(Population)``
- ``connect = common.build_connect(Projection, FixedProbabilityConnector)``
- ``record = common.build_record('spikes', simulator)``
- ``record_v = common.build_record('v', simulator)``
- ``record_gsyn = common.build_record('gsyn', simulator)``

A walk through the lifecycle of a simulation
============================================

Import phase
------------

[What happens on import]

Setup phase
-----------

[What happens when calling setup()]

Creating neurons
----------------

On creating a Population...

- create default structure, if none specified
- create StandardCellType instance (if using standard cells)
- check and translate parameters, translated parameters stored in parameters attribute
- create recorders
- create neurons, determine local_mask

Finally, we set initial values for all neurons' state variables, e.g. membrane potential. The user may set these values later with a call to the initialize() method, but in case they don't we set them here to default values. Defaults are set on a model-by-model basis: each StandardCellType subclass has a dictionary attribute called default_initial_values.
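For illustration only, a sketch of how a standard cell class declares these defaults (the class name and the values here are invented, not an actual PyNN cell type)::

    class MyCellType(StandardCellType):
        """Hypothetical standard cell, for illustration."""
        default_parameters = {'tau_m': 20.0, 'v_rest': -65.0}
        default_initial_values = {'v': -65.0}  # initial membrane potential, in mV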
[For now, these must be numeric values. It would be nice to allow them to be the names of parameters, allowing the initial membrane potential to be set to the resting membrane potential, for example]. This of course causes a problem - not yet resolved - for non-standard cells.

These initial values are immediately passed through to the simulator. We set initial values using the initialize() method, which in turn updates the initial_values attribute - we do not modify initial_values directly: probably it should be read-only.

Creating connectors
-------------------

Composing synaptic plasticity models
------------------------------------

Connecting neurons
------------------

Instrumenting the network
-------------------------

Running a simulation
--------------------

Retrieving/saving recorded data
-------------------------------

Finishing up, or resetting for a new run
----------------------------------------

.. _`NeuralEnsemble Google Group`: http://groups.google.com/group/neuralensemble
.. _PEP8: http://www.python.org/dev/peps/pep-0008/
.. _nose: http://somethingaboutorange.com/mrl/projects/nose/
.. _mock: http://www.voidspace.org.uk/python/mock/
.. _coverage: http://nedbatchelder.com/code/coverage/

PyNN-0.7.4/doc/dynamicsynapses.txt0000644000175000017500000001016611736323051017655 0ustar andrewandrew00000000000000

================
Dynamic synapses
================

The default type of synaptic connection in PyNN is static, with fixed synaptic weights. To model dynamic synapses, for which the synaptic weight (and possibly other properties, such as rise-time) varies depending on the recent history of post- and/or pre-synaptic activity, we use the same idea as for neurons, of standardized, named models that have the same interface and behaviour across simulators, even if the underlying implementation may be very different.

Where the approach for dynamic synapses differs from that for neurons is that we attempt a greater degree of compositionality, i.e. we decompose models into a number of components, for example for short-term and long-term dynamics, or for the timing-dependence and the weight-dependence of STDP rules, that can then be composed in different ways. The advantage of this is that if we have ``n`` different models for component *A* and ``m`` models for component *B*, then we require only ``n + m`` models rather than ``n x m``, which has advantages in terms of code simplicity and shorter model names. The disadvantage is that not all combinations may exist, if the underlying simulator implements composite models rather than using components itself: in this situation, PyNN checks whether a given composite model ``AB`` exists for a given simulator and raises an Exception if it does not. The composite approach may be extended to neuron models in future versions of the PyNN interface depending on the experience with composite synapse models.

To set the model of synapse dynamics to use for the connections of a given ``Projection``, we pass a ``SynapseDynamics`` object as the ``synapse_dynamics`` keyword argument to the ``Projection`` constructor. The ``SynapseDynamics`` object is simply a container that has attributes ``fast``, which, if set, is an instance of a subclass of the abstract class ``ShortTermPlasticityMechanism``, and ``slow``, which is an instance of a subclass of the abstract class ``STDPMechanism``.

Only a single subclass of ``ShortTermPlasticityMechanism`` is currently available in PyNN: ``TsodyksMarkramMechanism``.
``STDPMechanism`` objects are further decomposed into components for the timing-dependence, weight-dependence and post-synaptic voltage-dependence of the mechanism. An example of defining a ``Projection`` with depressing synapses, but no long-term plasticity:: >>> pre = post = Population(50, IF_cond_exp) >>> params = {'U': 0.5, 'tau_rec': 100.0, 'tau_facil': 0.0} >>> depressing_syn = SynapseDynamics(fast=TsodyksMarkramMechanism(**params)) >>> prj = Projection(pre, post, AllToAllConnector(), ... synapse_dynamics = depressing_syn) and one with long-term plasticity, using a spike-pair rule and with additive weight updates (i.e. the weight change is independent of the current weight value):: >>> stdp_model = STDPMechanism( ... timing_dependence=SpikePairRule(tau_plus=20.0, tau_minus=20.0), ... weight_dependence=AdditiveWeightDependence(w_min=0, w_max=0.02, ... A_plus=0.01, A_minus=0.012) ... ) >>> prj2 = Projection(pre, post, FixedProbabilityConnector(p_connect=0.1), ... synapse_dynamics=SynapseDynamics(slow=stdp_model)) Just as with synaptic weights and delays for static synapses, the parameters of dynamic synapses can be obtained and set with the ``getSynapseDynamics()``, ``setSynapseDynamics()`` and ``randomizeSynapseDynamics()`` methods of the ``Projection`` class:: >>> prj.setSynapseDynamics('tau_rec', 50.0) >>> prj.getSynapseDynamics('tau_rec')[:5] [50.0, 50.0, 50.0, 50.0, 50.0] >>> from pyNN.random import RandomDistribution >>> distr = RandomDistribution('normal', [0.02, 0.05]) >>> prj2.randomizeSynapseDynamics('w_max', distr) >>> prj2.getSynapseDynamics('w_max')[:5] [-0.056605932509016577, 0.063197908706714212, 0.034940801886916589, 0.010755581262934901, 0.011700727992415299] There are a number of examples of networks using synaptic plasticity in the ``examples`` directory of the source distribution.PyNN-0.7.4/doc/lowlevelapi.txt0000644000175000017500000001465511736323051016775 0ustar andrewandrew00000000000000================== The procedural API ================== The procedural API (also previously known as the "low-level" API) gives a view of a model that is centred more on individual neurons and connections, rather than on populations of neurons and projections between them. This viewpoint may be more suited for small networks, or when porting a model from a simulator that has a neuron-centred viewpoint. It should be noted, however, that this API is just a wrapper around the main, object-oriented API. For example, the ``create()`` function returns a ``Population`` object, it is just that it is then treated as a list of neurons rather than as a single object. Creating neurons ================ Neurons are created with the ``create()`` function. To create a single integrate-and-fire neuron, type:: >>> create(IF_curr_alpha) To create many neurons at once, add the ``n`` argument, e.g.:: >>> cells = create(IF_curr_alpha, n=10) Although the return value is actually a ``Population`` object, it can be treated in most ways like a list, accessing individual items:: >>> print cells[0].tau_m 20.0 or a range of items:: >>> for cell in cells[5:8]: ... cell.cm = 0.5 or adding two lists together:: >>> more_cells = create(IF_cond_exp, n=5) >>> all_cells = cells + more_cells >>> some_cells = cells[5:] + more_cells[2:4] (note that ``all_cells`` and ``some_cells`` here are ``Assembly`` objects). 
Connecting neurons ================== Lists of neurons (``Population``s or ``Assembly``s) can be connected using the ``connect()`` function, e.g.:: >>> spike_source = create(SpikeSourceArray, cellparams={'spike_times': [10.0, 20.0, 30.0]}) >>> connect(spike_source, cells) In this case we connect a spike-generating mechanism (``SpikeSourceArray`` is a 'standard cell' model that emits spikes at times specified by the ``spike_times`` parameter) to each cell in the list ``cells``, i.e. we create 10 connections at once. The ``connect()`` function behaves similarly to creating a ``Projection``, but without the flexibility to choose the connection algorithm, or to have plastic synapses. If the source or target is a list of cells (a ``Population``, etc.), each source (presynaptic) cell is connected to every target (postsynaptic) cell with probability given by the optional argument `p`, which defaults to 1, e.g.:: >>> source_list = cells >>> target_list = more_cells >>> connect(source_list, target_list, p=0.5) When specifying connections as above, default values are given to the synaptic weight and delay. These values are seldom very useful, and it is better to specify the ``weight`` and ``delay`` arguments of ``connect()``, e.g.:: >>> connect(source_list, target_list, weight=1.5, delay=0.5) Weights are specified in nA for 'current-based' synapses or µS for 'conductance-based' synapses. Delays are in ms. For current-based synapses, weights should be negative for inhibitory synapses. For conductance-based synapses, weights should always be positive, since the effect of a synapse is determined by its reversal potential. If the neuron model has more than one synapse mechanism, or more than one synaptic location, the particular synapse to which the connection should be made is specified with the ``synapse_type`` argument, e.g.:: >>> connect(source_list, target_list, weight=1.5, delay=0.5, synapse_type='inhibitory') (the attribute ``synapse_types`` of all standard cell objects contains a list of the synapse types for that cell type). Setting neuron parameters ========================= There are many ways to change the parameters for individual neurons and post-synaptic mechanisms after creation of the neuron. 
To change a single parameter of a single neuron, just set the relevant attribute of the neuron ID object, e.g.:: >>> cells = create(IF_curr_exp, cellparams={'v_thresh': -50.0}, n=10) >>> cells[0].tau_m 20.0 >>> cells[0].tau_m = 15 >>> cells[0].tau_m 15.0 To change several parameters at once for a single neuron, use the ``set_parameters()`` method of the neuron ID, e.g.:: >>> cells[1].set_parameters(tau_m=10.0, cm=0.5) >>> cells[1].tau_m 10.0 >>> cells[1].cm 0.5 To change parameters for several cells at once, use the ``set()`` function, e.g.:: >>> set(cells[0:5], param='v_thresh', val=-55.0) >>> print cells[0].v_thresh -55.0 >>> print cells[5].v_thresh -50.0 Individual parameters can be set using the ``param`` and ``val`` arguments, as above, or multiple parameters can be set at once by passing a dictionary of name:value pairs as the ``param`` argument, with ``val`` empty, e.g.:: >>> set(cells, param={'tau_refrac': 2.0, 'tau_syn_E': 5.0}) Injecting current ================= If you want to inject a current (which may be time varying), create a ``CurrentSource`` object and connect it to the neuron either using the ``inject()`` method of the neuron ID or using the ``inject_into()`` method of the ``CurrentSource``:: >>> pulse = DCSource(amplitude=0.5, start=20.0, stop=80.0) >>> steps = StepCurrentSource(times=[0.0, 50.0, 100.0], amplitudes=[0.1, 0.2, 0.3]) >>> pulse.inject_into(cells[3:7]) >>> cells[9].inject(steps) Recording spikes and membrane potential ======================================= To record action potentials use the ``record()`` function, to record membrane potential use the ``record_v()`` function and to record synaptic conductances use the ``record_gsyn()`` function. The arguments for all three functions are a cell id (or a ``Population``, ``Assembly`` or ``PopulationView``), and a filename, e.g.:: >>> record(cell, "spikes.dat") >>> record_v(cells, "Vm.dat") In some cases it is more efficient to write files in the simulator's native format, rather than the standard PyNN format. In this case, use the ``compatible_output=False`` argument to the ``end()`` function. For recording to binary (e.g. HDF5) rather than text files, see the chapter on file formats. Running a simulation ==================== The ``run()`` function runs the simulation for a given number of milliseconds, e.g.:: >>> run(1000.0) Finishing up ============ Just as a simulation must be begun with a call to ``setup()``, it must be ended with a call to ``end()``. For the procedural API, this is also the point at which data gets written to file. Examples ======== There are several example scripts in the ``examples`` directory of the source distribution. PyNN-0.7.4/doc/nonstandardmodels.txt0000644000175000017500000001376511736323051020172 0ustar andrewandrew00000000000000================================================= Using non-standard/native cell and synapse models ================================================= Standard models are neuron/synapse models that are available in at least two of the simulation engines supported by PyNN. Non-standard models, then, work only with a single simulator. We also call these "native" models. With native models, we lose full simulator-independence. However, for people who work mainly with a single simulator, native models allow them to use PyNN's high-level API for building and instrumenting networks but with a broader range of neuron and synapse models. This is also often a useful intermediate step in converting a native network model to PyNN. 
Using native NEST models
------------------------

To use a NEST neuron model with PyNN, we wrap the NEST model with a PyNN ``NativeCellType`` class, e.g.::

    >>> from pyNN.nest import native_cell_type, Population, run, setup
    >>> setup()
    >>> ht_neuron = native_cell_type('ht_neuron')
    >>> poisson = native_cell_type('poisson_generator')
    >>> p1 = Population(10, ht_neuron, {'Tau_m': 20.0})
    >>> p2 = Population(1, poisson, {'rate': 200.0})

We can now initialize state variables, set/get parameter values, and record from these neurons as from standard cells::

    >>> p1.get('Tau_m')
    [20.0, 20.0, 20.0, 20.0, 20.0, 20.0, 20.0, 20.0, 20.0, 20.0]
    >>> p1.get('Tau_theta')
    [2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0]
    >>> p1.get('C_m')
    NonExistentParameterError: C_m (valid parameters for ht_neuron are: AMPA_E_rev, AMPA_Tau_1, AMPA_Tau_2, AMPA_g_peak, E_K, E_Na, GABA_A_E_rev, GABA_A_Tau_1, GABA_A_Tau_2, GABA_A_g_peak, GABA_B_E_rev, GABA_B_Tau_1, GABA_B_Tau_2, GABA_B_g_peak, KNa_E_rev, KNa_g_peak, NMDA_E_rev, NMDA_Sact, NMDA_Tau_1, NMDA_Tau_2, NMDA_Vact, NMDA_g_peak, NaP_E_rev, NaP_g_peak, T_E_rev, T_g_peak, Tau_m, Tau_spike, Tau_theta, Theta_eq, g_KL, g_NaL, h_E_rev, h_g_peak, spike_duration)
    >>> p1.initialize('V_m', -70.0)
    >>> p1.initialize('Theta', -50.0)

Note that the API for recording is somewhat clumsy for native models, and will be improved in the next PyNN release::

    >>> p1._record('V_m')
    >>> run(250.0)
    >>> id, t, v = p1.recorders['V_m'].get().T

To connect populations of native cells, you need to know the available synaptic receptor types::

    >>> ht_neuron.synapse_types
    {'AMPA': 1, 'GABA_A': 3, 'GABA_B': 4, 'NMDA': 2}
    >>> from pyNN.nest import Projection, AllToAllConnector
    >>> connector = AllToAllConnector()
    >>> prj_ampa = Projection(p2, p1, connector, target='AMPA')
    >>> prj_nmda = Projection(p2, p1, connector, target='NMDA')

To use a NEST STDP model with PyNN, we use the ``NativeSynapseDynamics`` class::

    >>> from pyNN.nest import NativeSynapseDynamics
    >>> stdp = NativeSynapseDynamics("stdp_synapse", {'Wmax': 50.0, 'lambda': 0.015})
    >>> prj_plastic = Projection(p1, p1, connector, target='AMPA', synapse_dynamics=stdp)

Using native NEURON models
--------------------------

A native NEURON cell model is described using a Python class (which may wrap a Hoc template). For this class to work with PyNN, there are a small number of requirements:

- the ``__init__()`` method should take just ``**parameters`` as its argument.

- instances should have attributes:

  - ``source``: a reference to the membrane potential which will be monitored for spike emission, e.g. ``self.soma(0.5)._ref_v``
  - ``source_section``: the Hoc ``Section`` in which ``source`` is located.
  - ``parameter_names``: a tuple of the names of attributes/properties of the class that correspond to parameters of the model.
  - ``traces``: an empty dict, used for recording.
  - ``recording_time``: should be ``False`` initially.

- there must be a ``memb_init()`` method, taking no arguments.
Here is an example, which uses the nrnutils_ package for conciseness::

    from nrnutils import Mechanism, Section

    class SimpleNeuron(object):

        def __init__(self, **parameters):
            hh = Mechanism('hh', gl=parameters['g_leak'], el=-65,
                           gnabar=parameters['gnabar'], gkbar=parameters['gkbar'])
            self.soma = Section(L=30, diam=30, mechanisms=[hh])
            self.soma.add_synapse('ampa', 'Exp2Syn', e=0.0, tau1=0.1, tau2=5.0)
            # needed for PyNN
            self.source_section = self.soma
            self.source = self.soma(0.5)._ref_v
            self.parameter_names = ('g_leak', 'gnabar', 'gkbar')
            self.traces = {}
            self.recording_time = False

        def _set_gnabar(self, value):
            for seg in self.soma:
                seg.hh.gnabar = value
        def _get_gnabar(self):
            return self.soma(0.5).hh.gnabar
        gnabar = property(fget=_get_gnabar, fset=_set_gnabar)

        # ... gkbar and g_leak properties defined similarly

        def memb_init(self):
            for seg in self.soma:
                seg.v = self.v_init

For each cell model, you must also define a cell type::

    class SimpleNeuronType(NativeCellType):
        default_parameters = {'g_leak': 0.0002, 'gkbar': 0.036, 'gnabar': 0.12}
        default_initial_values = {'v': -65.0}
        recordable = ['soma(0.5).v', 'soma(0.5).ina']
        model = SimpleNeuron

The requirement to explicitly list all variables you might wish to record in the ``recordable`` attribute is a temporary inconvenience, which will be removed in a future version.

It is now straightforward to use this cell type in PyNN:

    >>> from pyNN.neuron import setup, run, Population, Projection, AllToAllConnector
    >>> setup()
    >>> p1 = Population(10, SimpleNeuronType, {'g_leak': 0.0003})
    >>> p1._record('soma(0.5).ina')
    >>> prj = Projection(p1, p1, AllToAllConnector(), target='soma.ampa')
    >>> run(100.0)
    >>> id, t, ina = p1.recorders['soma(0.5).ina'].get().T

.. _nrnutils: http://pypi.python.org/pypi/nrnutils/

PyNN-0.7.4/doc/fileformats.txt0000644000175000017500000000376011736323051016760 0ustar andrewandrew00000000000000

============
File formats
============

PyNN supports writing datafiles in both text and binary formats. PyNN comes with several built-in formats, but it is very easy to define your own.

The default format is text-based. If we assume that you have run a simulation, and have recorded spikes, membrane potential and/or synaptic conductances for the neurons in a ``Population`` ``p``, then you can write the recorded data to a file in text format simply by specifying the filename::

    >>> p.printSpikes("my_spike_data.dat")
    >>> p.print_v("my_Vm_data.dat")

(the file extension can be anything you like). If you would like to write the data in a binary format, you must first create a PyNN ``File`` object::

    >>> from pyNN.recording.files import NumpyBinaryFile, HDF5ArrayFile
    >>> spike_file = NumpyBinaryFile("my_spike_data.npz", "w")
    >>> p.printSpikes(spike_file)
    >>> spike_file.close()
    >>> vm_file = HDF5ArrayFile("my_Vm_data.h5", "w")
    >>> p.print_v(vm_file)
    >>> vm_file.close()

Note that we do not currently take advantage of the ability of HDF5 or NumPy binary files to contain multiple data sets.

In addition to ``NumpyBinaryFile`` and ``HDF5ArrayFile`` (which requires PyTables to be installed), the ``recording.files`` module also contains ``PickleFile`` and ``StandardTextFile``.
The file contents can then be accessed using NumPy, PyTables or the standard ``pickle/cPickle`` module, or by creating a PyNN ``File`` object in read mode::

    >>> spike_file = NumpyBinaryFile("my_spike_data.npz", "r")
    >>> metadata = spike_file.get_metadata()
    >>> spikes = spike_file.read()

Defining your own file formats
==============================

If you wish to define your own file format, it is straightforward to create a new PyNN-compatible ``File`` class by subclassing ``recording.files.BaseFile``: the only requirement is that the class should implement a method ``write(data, metadata)``, where ``data`` will be a NumPy array and ``metadata`` will be a dictionary.
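A sketch of such a subclass (the class is hypothetical and not part of PyNN; we also assume the filename is available on the instance - check ``BaseFile`` for the actual attribute before relying on this)::

    from pyNN.recording.files import BaseFile

    class CSVFile(BaseFile):
        """Hypothetical File class writing comma-separated values."""

        def write(self, data, metadata):
            # data is a NumPy array, metadata a dictionary
            f = open(self.name, 'w')  # assumes the filename is stored as self.name
            for key, value in metadata.items():
                f.write("# %s = %s\n" % (key, value))
            for row in data:
                f.write(",".join(str(x) for x in row) + "\n")
            f.close()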
PyNN-0.7.4/doc/space.txt0000644000175000017500000000676111736323051015542 0ustar andrewandrew00000000000000

========================================================
Representing spatial structure and calculating distances
========================================================

The ``space`` module contains classes for specifying the locations of neurons in space and for calculating the distances between them.

Neuron positions can be defined either manually, using the ``positions`` attribute of a ``Population``, or using a ``Structure`` instance which is passed to the ``Population`` constructor.

A number of different structures are available in ``space``. It is simple to define your own ``Structure`` sub-class if you need something that is not already provided.

The simplest structure is a grid, whether 1D, 2D or 3D, e.g.::

    >>> from pyNN.space import *
    >>> line = Line(dx=100.0, x0=0.0, y=200.0, z=500.0)
    >>> line.generate_positions(7)
    array([[   0.,  100.,  200.,  300.,  400.,  500.,  600.],
           [ 200.,  200.,  200.,  200.,  200.,  200.,  200.],
           [ 500.,  500.,  500.,  500.,  500.,  500.,  500.]])
    >>> grid = Grid2D(aspect_ratio=3, dx=10.0, dy=25.0, z=-3.0)
    >>> grid.generate_positions(3)
    array([[  0.,  10.,  20.],
           [  0.,   0.,   0.],
           [ -3.,  -3.,  -3.]])
    >>> grid.generate_positions(12)
    array([[  0.,   0.,  10.,  10.,  20.,  20.,  30.,  30.,  40.,  40.,  50.,  50.],
           [  0.,  25.,   0.,  25.,   0.,  25.,   0.,  25.,   0.,  25.,   0.,  25.],
           [ -3.,  -3.,  -3.,  -3.,  -3.,  -3.,  -3.,  -3.,  -3.,  -3.,  -3.,  -3.]])

Here we have specified an *x*:*y* ratio of 3, so if we ask the grid to generate positions for 3 neurons, we get a 3x1 grid, 12 neurons a 6x2 grid, 27 neurons 9x3, etc.

By default, grid positions are filled sequentially, iterating first over the *z* dimension, then *y*, then *x*, but we can also fill the grid randomly::

    >>> rgrid = Grid2D(aspect_ratio=1, dx=10.0, dy=10.0, fill_order='random')
    >>> rgrid.generate_positions(9)
    array([[ 20.,  20.,  20.,   0.,  10.,   0.,  10.,   0.,  10.],
           [ 10.,   0.,  20.,  10.,   0.,  20.,  10.,   0.,  20.],
           [  0.,   0.,   0.,   0.,   0.,   0.,   0.,   0.,   0.]])

The ``space`` module also provides the ``RandomStructure`` class, which distributes neurons randomly and uniformly within a given volume::

    >>> glomerulus = RandomStructure(boundary=Sphere(radius=200.0))
    >>> glomerulus.generate_positions(5)
    array([[  4.81853231e+01,  -2.49317729e+01,   1.08294461e+02,   1.72125819e-01,  -1.25552649e+02],
           [  3.96588073e+01,   1.75426143e+02,   3.19290169e+01,   1.65050459e+02,  -1.32092198e+00],
           [ -1.00801053e+02,  -8.51701627e+01,  -1.39804442e+02,  -4.97765369e+01,   3.94241050e+01]])

The volume classes currently available are ``Sphere`` and ``Cuboid``.

Defining your own ``Structure`` classes is straightforward: just inherit from ``BaseStructure`` and implement a ``generate_positions(n)`` method::

    class MyStructure(BaseStructure):
        parameter_names = ("spam", "eggs")

        def __init__(self, spam=3, eggs=1):
            ...

        def generate_positions(self, n):
            ...
            # must return a 3xn numpy array

To define your own ``Shape`` class for use with ``RandomStructure``, subclass ``Shape`` and implement a ``sample(n, rng)`` method::

    class Tetrahedron(Shape):

        def __init__(self, side_length):
            ...

        def sample(self, n, rng):
            ...
            # return a nx3 numpy array.

Note that rotation of structures is currently missing, but will be implemented in the next release.

PyNN-0.7.4/changelog0000644000175000017500000011672211737600041015012 0ustar andrewandrew00000000000000

=====================
Release 0.7.3 (r1124)
=====================

* Some fixes to the `CSAConnector` class
* the Brian backend now includes the value at t=0 in recorded data (see ticket:225)
* start times for `CurrentSources` in the NEST backend are now corrected for the connection delay
* start times for `CurrentSources` in the Brian backend are now correct (there was something funny happening with clocks before).

====================
Release 0.7.2 (r995)
====================

Fixed a bug whereby the `connect()` function didn't work with single IDs (see ticket:195)

====================
Release 0.7.1 (r958)
====================

The main reason for this release is to add copyright statements, without which the validity of the CeCILL licence could be questioned. There are also some minor bug fixes.

====================
Release 0.7.0 (r929)
====================

This release sees a major extension of the API with the addition of the `PopulationView` and `Assembly` classes, which aim to make building large, structured networks much simpler and cleaner. A `PopulationView` allows a sub-set of the neurons from a `Population` to be encapsulated in an object. We call it a "view", rather than a "sub-population", to emphasize the fact that the neurons are not copied: they are the same neurons as in the parent `Population`, and any operations on either view or parent (setting parameter values, recording, etc.) will be reflected in the other. An `Assembly` is a list of `Population` and/or `PopulationView` objects, enabling multiple cell types to be encapsulated in a single object. `PopulationView` and `Assembly` objects behave in most ways like `Population`: you can record them, connect them using a `Projection`, you can have views of views...

The "low-level API" (rechristened "procedural API") has been reimplemented in terms of `Population` and `Projection`. For example, `create()` now returns a `Population` object rather than a list of IDs, and `connect()` returns a `Projection` object. This change should be almost invisible to the user, since `Population` now behaves very much like a list of IDs (can be sliced, joined, etc.).

There has been a major change to cell addressing: Populations now always store cells in a one-dimensional array, which means cells no longer have an address but just an index. To specify the spatial structure of a Population, pass a Structure object to the constructor, e.g. `p = Population((12,10), IF_cond_exp)` is now `p = Population(120, IF_cond_exp, structure=Grid2D(1.2))`, although the former syntax still works, for backwards compatibility. The reasons for doing this are: (i) we can now have more interesting structures than just grids; (ii) efficiency (less juggling addresses, flattening); (iii) simplicity (less juggling addresses, less code).

The API for setting initial values has changed: this is now done via the `initialize()` function or the `Population.initialize()` method, rather than by having `v_init` and similar parameters for cell models.
Other API changes:

- simplification of the `record_X()` methods.
- enhanced `describe()` methods: can now use Jinja2 or Cheetah templating engines to produce much nicer, better formatted network descriptions.
- connections and neuron positions can now be saved to various binary formats as well as to text files.
- added some new connectors: `SmallWorldConnector` and `CSAConnector` (CSA = Connection Set Algebra)
- native neuron and synapse models are now supported using a NativeModelType subclass, rather than specified as strings. This simplifies the code internally and increases the range of PyNN functionality that can be used with native models (e.g. you can now record any variable from a native NEST or NEURON model). For NEST, there is a class factory ``native_cell_type()``, for NEURON the NativeModelType subclasses have to be written by hand.

Backend changes:

- the NEST backend has been updated to work with NEST version 2.0.0.
- the Brian backend has seen extensive work on performance and on bringing it to feature parity with the other backends.

Details:

* Where `Population.initial_values` contains arrays, these arrays now consistently contain only enough values for local cells. Before, there was some inconsistency about how this was handled. Still need more tests to be sure it's really working as expected.
* Allow override of default_maxstep for NEURON backend as setup parameter. This is for the case that the user wants to add network connections across nodes after simulation start time.
* Discovered that when using NEST with mpi4py, you must `import nest` first and let it do the MPI initialization. The only time this seems to be a problem with PyNN is if a user imports `pyNN.random` before `pyNN.nest`. It would be nice to handle this more gracefully, but for now I've just added a test that NEST and mpi4py agree on the rank, and a hopefully useful error message.
* Added a new `setup()` option for `pyNN.nest`: `recording_precision`. By default, `recording_precision` is 3 for on-grid and 15 for off-grid.
* Partially fixed the `pyNN.nest` implementation of `TsodyksMarkramMechanism` (cf ticket:172). The 'tsodyks_synapse' model has a 'tau_psc' parameter, which should be set to the same value as the decay time constant of the post-synaptic current (which is a parameter of the neuron model). I consider this only a partial fix, because if 'tau_syn_E' or 'tau_syn_I' is changed after the creation of the Projection, 'tau_psc' will not be updated to match (unlike in the `pyNN.neuron` implementation). I'm also not sure how well it will work with native neuron models.
* reverted `pyNN.nest` to reading/resetting the current time from the kernel rather than keeping track of it within PyNN. NEST warns that this is dangerous, but all the tests pass, so let's wait and see.
* In `HH_cond_exp`, conductances are now in µS, as for all other conductances in PyNN, instead of nS.
* NEURON now supports Tsodyks-Markram synapses for current-based exponential synapses (before it was only for conductance-based).
* NEURON backend now supports the IF_cond_exp_gsfa_grr model.
* Simplification of the record_X() methods. With the addition of the `PopulationView` class, the selection logic implemented by the `record_from` and `rng` arguments duplicated that in `Population.__getitem__()` and `Population.sample()`, and so these arguments have been removed, and the `record_X()` methods now record all neurons within a `Population`, `PopulationView` or `Assembly`.
  Examples of syntax changes:

    `pop.record_v([pop[0], pop[17]])` --> `pop[(0, 17)].record_v()`
    `pop.record(10, rng=rng)`         --> `pop.sample(10, rng).record()`

* Added a `sample()` method to `Population`, which returns a `PopulationView`
  of a random sample of the neurons in the parent population.
* Added the EIF_cond_exp/alpha_isfa/ista and HH_cond_exp standard models in
  Brian.
* Added a `gather` option to the `Population.get()` method.
* `brian.setup()` now accepts a number of additional arguments in
  `extra_params`. For example, extra_params={'useweave': True} will lead to
  inline C++ code generation.
* Wrote a first draft of a developers' guide.
* Considerably extended the `core.LazyArray` class, as a basis for a possible
  rewrite of the `connectors` module.
* The `random` module now uses `mpi4py` to determine the MPI rank and
  num_processes, rather than receiving these as arguments to the RNG
  constructor (see ticket:164).
* Many fixes and performance enhancements for the `brian` module, which now
  supports synaptic plasticity.
* Made the `describe()` method of `Population`, `Projection`, etc. much more
  powerful by adding a simple plugin-like structure for templating engines.
  Jinja2 and Cheetah are currently available, with fallback to
  string.Template.
* No more GSL warning every time! Just raise an Exception if we attempt to use
  GSLRNG and pygsl is not available.
* Added some more flexibility to init_logging: logfile=None -> stderr, format
  includes size & rank, user can override log-level.
* NEST __init__.py changed to query NEST for filling NEST_SYNAPSE_TYPES.
  Allows _S connectors, if available, for use with inh_gamma_generator.
* Started to move synapse dynamics related stuff out of Projection and into
  the synapse dynamics-related classes, where it belongs.
* Added an `Assembly` class to the API. An `Assembly` is a list of
  `Population` and/or `PopulationView` objects, enabling multiple cell types
  to be encapsulated in a single object. An `Assembly` object is intended to
  behave in most ways like `Population`: you can record them, connect them
  using a `Projection`...
* Added a new "spike_precision" option to `nest.setup()` (see
  http://neuralensemble.org/trac/PyNN/wiki/SimulatorSpecificOptions)
* Updated the NEST backend to work with version 2.0.0.
* Rewrote the test suite, making a much cleaner distinction between unit
  tests, which now make heavy use of mock objects to better-isolate
  components, and system tests. The test suite now runs with nose
  (http://somethingaboutorange.com/mrl/projects/nose), in order to facilitate
  continuous integration testing.
* The "low-level API" (rechristened "procedural API") has been reimplemented
  in terms of `Population` and `Projection`, with the aim of improved
  maintainability. For example, `create()` now returns a `Population` object
  rather than a list of IDs, and `connect()` returns a `Projection` object.
  With the addition of `PopulationView` and `Assembly`, `Population` now
  behaves much more like a list of IDs (can be sliced, joined, etc.), so this
  should have minimal impact on existing simulation scripts.
* Changed the format of connection files, as written by `saveConnections()`
  and read by `FromFileConnector`: files no longer contain the population
  label. Connections can now also be written to NumpyBinaryFile or PickleFile
  objects, instead of just text files. Same for `Population.save_positions()`.
* Added CSAConnector, which wraps the Connection Set Algebra for use by PyNN.
  Requires the csa package: http://pypi.python.org/pypi/csa/
* Added a `PopulationView` class to the API. This allows a sub-set of the
  neurons from a `Population` to be encapsulated in an object. We call it a
  "view", rather than a "sub-population", to emphasize the fact that the
  neurons are not copied: they are the same neurons as in the parent
  `Population`, and any operations on either view or parent (setting parameter
  values, recording, etc.) will be reflected in the other. `PopulationView`
  objects are intended to behave in most ways like `Population`: you can
  record them, connect them using a `Projection`, you can have views of
  views...
* A major change to cell addressing: Populations now always store cells in a
  one-dimensional array, which means cells no longer have an address but just
  an index. To specify the spatial structure of a Population, pass a Structure
  object to the constructor, e.g. `p = Population((12,10), IF_cond_exp)` is
  now `p = Population(120, IF_cond_exp, structure=Grid2D(1.2))` although the
  former syntax still works, for backwards compatibility. The reasons for
  doing this are: (i) we can now have more interesting structures than just
  grids; (ii) efficiency (less juggling addresses, flattening);
  (iii) simplicity (less juggling addresses, less code - this opens the way to
  a much easier implementation of sub-populations).
* Removed the `v_init` parameter from all cell models. Setting initial values
  of state variables is now done via the `initialize()` function or the
  `Population.initialize()` method.
* Enhanced the distance expressions by allowing expressions such as
  (d[0] < 0.1) & (d[1] < 0.2). Complex forms can therefore now be drawn, such
  as squares, ellipses, and so on.
* Added an `n_connections` flag to the DistanceDependentProbabilityConnector
  in order to be able to constrain the total number of connections. This can
  be useful for normalization.
* Added a simple SmallWorldConnector. Cells are connected within a certain
  degree d. Then, all the connections are rewired with a probability given by
  a rewiring parameter, and new targets are uniformly selected among all the
  possible targets.
* Added a method to save cell positions to file.
* Added a progress bar to connectors. A `verbose` flag now controls the
  display of a progress bar indicating the percentage of connections
  established.
* New implementation of the connector classes, with much improved performance
  and scaling with MPI, and extension of distance-dependent weights and delays
  to all connectors. In addition, a `safe` flag has been added to all
  connectors: on by default, a user can turn it off to avoid tests on weights
  and delays.
* Added the ability to set the `atol` and `rtol` parameters of NEURON's cvode
  solver in the `extra_params` argument of `setup()` (patch from Johannes
  Partzsch).
* Made `nest`'s handling of the refractory period consistent with the other
  backends. Made the default refractory period 0.1 ms rather than 0.0 ms,
  since NEST appears not to handle a zero refractory period.
* Moved standard model (cells and synapses) machinery, the `Space` class, and
  `Error` classes out of `common` into their own modules.

====================
Release 0.6.0 (r710)
====================

There have been three major changes to the API in this version.

1. Spikes, membrane potential and synaptic conductances can now be saved to
   file in various binary formats. To do this, pass a PyNN `File` object to
   `Population.print_X()`, instead of a filename.
   There are various types of PyNN `File` objects, defined in the
   `recording.files` module, e.g., `StandardTextFile`, `PickleFile`,
   `NumpyBinaryFile`, `HDF5ArrayFile`.

2. Added a `reset()` function and made the behaviour of `setup()` consistent
   across simulators. `reset()` sets the simulation time to zero and sets
   membrane potentials to their initial values, but does not change the
   network structure. `setup()` destroys any previously defined network.

3. The possibility of expressing distance-dependent weights and delays was
   extended to the `AllToAllConnector` and `FixedProbabilityConnector`
   classes. To reduce the number of arguments to the constructors, the
   arguments affecting the spatial topology (periodic boundary conditions,
   etc.) were moved to a new `Space` class, so that only a single `Space`
   instance need be passed to the `Connector` constructor.

Details:

* Switched to using the point process-based AdExp mechanism in NEURON.
* Factored out most of the commonality between the `Recorder` classes of each
  backend into a parent class `recording.Recorder`, and tidied up the
  `recording` module.
* Can now pass a PyNN File object to `Population.print_X()`, instead of a
  filename. There are various types of PyNN File objects, defined in the
  `recording.files` module, e.g., `StandardTextFile`, `PickleFile`,
  `NumpyBinaryFile`, `HDF5ArrayFile`.
* Added an attribute `conductance_based` to `StandardCellType`, to make the
  determination of synapse type for a given cell more robust.
* PyNN now uses a named logger, which makes it easier to control logging
  levels when using PyNN within a larger application.
* Implemented gather for `Projection.saveConnections()`.
* Added a test script (test_mpi.py) to check whether serial and distributed
  simulations give the same results.
* Added a `size()` method to `Projection`, to give the total number of
  connections across all nodes (unlike `__len__()`, which gives only the
  connections on the local node).
* Sped up record() by a huge factor (from 10 s for 12000 cells to less than
  0.1 s) by removing an unnecessary conditional path (since all IDs now have
  an attribute "local").
* `synapse_type` is now passed to the `ConnectionManager` constructor, not to
  the `connect()` method, since (a) it is fixed for a given connection
  manager, (b) it is needed in other methods than just `connect()`; fixed
  weight unit conversion in the `brian` module.
* Updated connection handling in the `nest` module to work with NEST version
  1.9.8498. It will no longer work with previous NEST versions.
* The `neuron` back-end now supports having both static and Tsodyks-Markram
  synapses on the same neuron (previously, the T-M synapses replaced the
  static synapses) - in agreement with `nest` and common sense. Thanks to
  Bartosz Telenczuk for reporting this.
* Added a `compatible_output` mode for the `saveConnections()` method. True
  by default, it allows connections to be reloaded from a file. If False,
  then the raw connections are stored, which makes for easier postprocessing.
* Added an `ACSource` current source to the `nest` module.
* Fixed Hoc build directory problem in setup.py - see ticket:147
* `Population.get_v()` and the other "`get`" methods now return cell indices
  (starting from 0) rather than cell IDs. This behaviour now matches that of
  `Population.print_v()`, etc. See ticket:119 if you think this is a bad idea.
* Implemented `reset()` and made the behaviour of `setup()` consistent across
  simulators. Almost all unit tests now pass.
* Moved the base `Connector` class from `common` to `connectors`.
  Put the `distances` function inside a `Space` class, to allow more
  convenient specification of topology parameters. Extended the possibility of
  expressing distance-dependent weights and delays to the `AllToAllConnector`
  and `FixedProbabilityConnector`.
* `Projection.setWeights()` and `setDelays()` now accept a 2D array argument
  (ref ticket:136), to be symmetric with `getWeights()` and `getDelays()`. For
  distributed simulations, each node only takes the values it needs from the
  array.
* `FixedProbabilityConnector` is now more strict, and checks that `p_connect`
  is less than 1 (see ticket:148). This makes no difference to the behaviour,
  but could act as a check for errors in user code.
* Fixed a problem with changing the `SpikeSourcePoisson` rate during a
  simulation (see ticket:152)

====================
Release 0.5.0 (r652)
====================

There have been rather few changes to the API in this version, which has
focused instead on improving the simulator interfaces and on an internal
code-reorganization which aims to make PyNN easier to test, maintain and
extend.

Principal API changes:

* Removed the 'string' connection methods from the `Projection` constructor.
  The `method` argument now *must* be a `Connector` object, not a string.
* Can now record synaptic conductances.
* Can now access weights and delays of individual connections one-at-a-time
  within a `Projection` through `Connection` objects.
* Added an interface for injecting arbitrary time-varying currents into cells.
* Added `get_v()` and `get_gsyn()` methods to the `Population` class, enabling
  membrane potential and synaptic conductances to be read directly into
  memory, rather than saved to file (a short sketch appears below).

Improvements to simulator back-ends:

* Implemented an interface for the Brian simulator.
* Re-implemented the interface to NEURON, to use the new functionality in
  v7.0.
* Removed support for version 1 of NEST. The module for NEST v2 is now simply
  called `pyNN.nest`.
* The PCSIM implementation is now more complete, and more compatible with the
  other back-ends.
* Behind-the-scenes refactoring to implement the API in terms of a small
  number of low-level, simulator-specific operations. This reduces redundancy
  between simulator modules, and makes it easier to extend PyNN, since if new
  functionality uses the low-level operations, it only needs to be written
  once, not once for each simulator.

Details:

* Renamed `nest2` to `nest`.
* Random number generators are now "parallel_safe" by default.
* Added documentation on running parallel simulations.
* Trying a new method of getting the last data points out of NEST: we always
  simulate to `t+dt`. This should be fine until we implement the possibility
  of accessing Vm directly from the `ID` object (see ticket:35). In the
  meantime, hopefully the NEST guys will change this behaviour.
* `gather=True` now works for all modules, even without a shared filesystem
  (requires `mpi4py`).
* Added an `allow_update_on_post` option to the NEURON weight adjuster
  mechanisms. This is set to 0 (false) for consistency with NEST, which means
  that weight updates are accumulated and are applied only on a pre-synaptic
  spike, although I'm not yet sure (a) what the correct behaviour really is,
  (b) what PCSIM and Brian do.
* The `pcsim` module is now implemented in terms of the common implementation,
  using just `net.add()` and `net.connect()`. I have just commented out the
  old code (using `CuboidGridObjectPopulation`s and `ConnectionsProjection`s)
  rather than deleting it, as it will probably be mostly re-used when
  optimizing later.
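  A minimal sketch of the in-memory data access described under "Principal API
  changes" above; the column layout noted in the comment is an assumption, not
  something specified in these notes:

  {{{
  pop.record_v()
  run(1000.0)
  vm = pop.get_v()    # a numpy array, rather than a file on disk
  # assumed layout: one row per sample, columns (cell index, time, v)
  }}}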
* `Population.__getitem__()` now accepts slices (see ticket:21).
* NEST does not record values at t=0 or t=simtime so, for compatibility with
  the other simulators, we now add these values manually to the
  array/datafile.
* Fixed the `SpikeSourcePoisson` model in the `neuron` module so it has a
  really fixed duration, not just an 'on average' fixed duration.
* Created a new Exception type, `RecordingError`, for when we try to record
  membrane potential from a `SpikeSource`.
* Renamed the `param_dict` argument of `create()` to `cellparams`, for
  consistency with `Population.__init__()`.
* Created default implementations of nearly all functions and classes in
  `common`, some of which depend on the simulator package having a `simulator`
  module that defines certain 'primitive' capabilities.
* Created default implementations of all `Connector` classes in `connectors`,
  which depend on the `Projection` having a `ConnectionManager` which in turn
  has a `connect()` method implementing divergent connect.
* Added a `ConnectionManager` class to the `simulator` module in the `nest2`,
  `neuron` and `brian` packages. This allows (i) a common way of managing
  connections for both the `connect()` function and the `Projection` class,
  (ii) a common specification of `Connector` algorithms in terms of method
  calls on a `ConnectionManager` instance.
* Added `weights_iterator()` and `delays_iterator()` to the base `Connector`
  class, to make them available to all simulator modules.
* Moved `Connector` base classes from `common` into a new module,
  `connectors`.
* Moved standard dynamic synapse base classes from `common` into a new module,
  `synapses`.
* Moved standard cell base classes from `common` into a new module, `cells`.
* Moved the `Timer` class from `common` to `utility`.
* `Population` attributes `all_cells` and `local_cells` are now an official
  part of the API (`cell` is kept as an alias for `local_cells` for now, since
  it was widely used, if not officially part of the API, in 0.4).
* Removed the 'string' connection methods from the `Projection` constructor.
  The `method` argument now *must* be a `Connector` object, not a string.
* Standard cell types now know what things can be recorded from them
  (`recordable` attribute).
* Added a new Exception, `NothingToWriteError`. Calling `printSpikes()`, etc.,
  when you have not recorded anything raises an Exception, since this is quite
  likely a mistake, but it needs to be a specific Exception type so it can be
  handled without inadvertently catching all other errors that are likely to
  arise during writing to file.
* Moved old tests to the examples folder.
* Added Padraig Gleeson's modifications to neuroml.py.
* Small change to the setup of rng_seeds: re-added the possibility of giving a
  seed for an RNG that then draws seeds for the simulation. This way, one does
  not have to provide the exact number of seeds needed, just one.
* Added `save_population()` and `load_population()` functions to `utility`.
* `test/explore_space.py` is now working. The test script saves results to a
  NeuroTools datastore, which the `explore_space` script can later retrieve.
  Only plotting does not work in distributed mode, due to the lack of an X
  display.
* STDP testing improved. `nest`, `pcsim` and `neuron` now give pretty similar
  results for `SpikePairRule` with `AdditiveWeightDependence`. I think some of
  the remaining differences are due to sampling the weights every millisecond,
  rather than at the beginning of each PSP.
  We need to develop an API for recording weights in PyNN (PCSIM and NEURON
  can record the weights directly, rather than by sampling; not sure about
  NEST). However, I suspect there are some fundamental differences in the
  algorithms (notably *when* weight changes get applied) that may make it
  impossible to completely reconcile the results.
* Moved the `MultiSim` class from `test/multisim.py` into the `utility`
  module.
* The `pcsim` module now supports dynamic synapses with both conductance-based
  and current-based synapses.
* `Projection.saveConnections()`, `printWeights()` and `weightHistogram()` now
  work in the `pcsim` module.
* `pcsim` `Connector`s now handle arrays of weights/delays.
* Implemented `FromListConnector` and `FromFileConnector` for `pcsim`.
* Changed tau_ref for the hardware neuron model from 0.4 ms to 1.0 ms. The old
  estimate was distorted by a hardware error.
* Fixed a bug whereby spikes from a `Population` of `SpikeSourceArray`s were
  not recorded if they were set after creation of the population.
* Optimisations to improve the building times of large networks in `nest2`.
* Can now record synaptic conductances.
* Added an interface for the Brian simulator.
* When you try to write data to a file, any existing file of the same name is
  first renamed by appending '_old' to the filename.
* Modified the RandomDistribution object to allow the specification of
  boundaries, and of how numbers drawn outside those boundaries are dealt
  with: numbers may be clipped to the min/max values, or redrawn until they
  fall within the min/max values (a short sketch is given below).
* Added the possibility to select a particular model of plasticity when
  several are available for the same plasticity rule. This is mainly used in
  NEST, where we set stdp_synapse_hom as the default type, because it is more
  efficient when (as is often the case) all plasticity parameters are
  identical.
* Added the possibility of giving an expression for distance-dependent weights
  and delays in the `DistanceDependentProbabilityConnector`.
* Harmonization of `describe()` methods across simulators, by moving its
  definition into `common`. `describe()` now returns a string, rather than
  printing directly to stdout. This lets it be used for writing to log files,
  etc. You will now have to use `print p.describe()` to obtain the old
  behaviour. `describe()` now also takes a `template` argument, allowing the
  output to be customized.
* Added an interface for injecting arbitrary currents into cells. Usage
  example:

  {{{
  cell = create(IF_curr_exp)
  current_source1 = DCSource(amplitude=0.3, start=20, stop=80)
  current_source2 = StepCurrentSource(times=[20,40,60,80],
                                      amplitudes=[0.3,-0.3,0.3,0.0])

  cell.inject(current_source1)          # two alternatives
  current_source2.inject_into([cell])
  }}}

  `DCSource` and `StepCurrentSource` are available in the `nest2`, `neuron`
  and `brian` modules. `NoisyCurrentSource` is only available in `nest2` for
  the time being, but it will be straightforward to add it for the other
  backends. Adding `ACSource`, etc., should be straightforward.
* Optimised setting of parameter values by checking whether the list of
  parameters to be set contains any computed parameters, and only
  getting/translating all parameters in this case. Before, each time we wanted
  to set a parameter, we always got all the native_parameters and translated
  them, even when there was a one-to-one correspondence between parameters.
* Implemented the Gutig rule and the van Rossum rule of plasticity in `nest2`,
  and changed the old naming.
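  A minimal sketch of the `RandomDistribution` boundary options described a
  few items above; the argument names `boundaries` and `constrain` are
  assumptions based on this description, so check them against the `random`
  module before use:

  {{{
  from pyNN.random import NumpyRNG, RandomDistribution

  rng = NumpyRNG(seed=87265)
  # clip out-of-range values to the boundaries; constrain='redraw' would
  # instead redraw until the value falls inside them
  v_init = RandomDistribution('normal', (-65.0, 5.0), rng=rng,
                              boundaries=(-75.0, -55.0), constrain='clip')
  }}}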
* In `nest2`, also fixed the meanSpikeCount() method, to obtain the spike
  count directly from the recorder, and not from the file.
* Greatly improved the distance-dependent connection code, considerably
  speeding up the building time.
* Moved unit tests into their own subdirectory.
* Fixed a bug in the periodic boundary conditions in the
  `DistanceDependentProbabilityConnector` class if they are specified by the
  user and not linked to the grid sizes.
* Reduced the number of adjustable parameters in the `IF_facets_hardware1`
  standard cell model.
* Reimplemented the `neuron` module to use the new features of NEURON
  (`HocObject`, etc.) available in v7.0.
* Added a `get_v()` method to the `Population` class, enabling membrane
  potential to be read directly into memory, rather than saved to file.

====================
Release 0.4.0 (r342)
====================

* Added a `quit_on_end` extra argument to `neuron.setup()`
* Added a `synapse_types` attribute to all standard cell classes.
* Removed `Projection.setThreshold()` from the API.
* In the `neuron` module, `load_mechanisms()` now takes a path to search from
  as an optional argument, in order to allow loading user-defined mechanisms.
* Added `get_script_args()` (process command line arguments) and `colour()`
  (print coloured output) functions to the `utility` module.
* Added `rank()` to the API (returns the MPI rank).
* Removed `setRNGseeds()` from the API, since each simulator is so different
  in its requirements. The seeds may be provided using the `extra_params`
  argument to `setup()`.
* The headers of output files now contain the first and last ids in the
  Population (not fully implemented yet for recording with the low-level API).
* Global variables such as the time step and minimum delay have been replaced
  with functions `get_time_step()`, `get_min_delay()` and
  `get_current_time()`. This ensures that values are always up-to-date.
* Removed the `cellclass` arg from the `set()` function. All cells should now
  know their own cellclass.
* Added a `get()` method to the `Population` class.
* Default value for the `duration` parameter in `SpikeSourcePoisson` changed
  from 1e12 ms to 1e6 ms.
* Reimplemented `Population.set()`, `tset()`, `rset()` in a more consistent
  way which avoids exposing the translation machinery and fixes various bugs
  with computed parameters. The new implementation is likely to be slower, but
  several optimisations are possible.
* Added `simple_parameters()`, `scaled_parameters()` and
  `computed_parameters()` methods to the `StandardModelType` class. Their
  intended use is in making `set()` methods/functions more efficient for
  non-computed parameters when setting on many nodes.
* Multiple calls to `Population.record()` no longer record the same cell
  twice.
* Changed `common.ID` to `common.IDMixin`, which allows the type used for the
  id to vary (`int` for `neuron` and `nest1/2`, `long` for pcsim).
* In `common.StandardModelType`, changed most of the methods to be
  classmethods, since they do not act on instance data.
* Added a `ModelNotAvailable` class to allow more informative error messages
  when people try to use a model with a simulator that doesn't support it.
* hoc and mod files are now correctly packaged, installed and compiled with
  `distutils`.
* Added a check that argument names to `setup()` are not mis-spelled. This is
  possible because of `extra_params`.
* It is now possible to instantiate Timer objects, i.e. to have multiple,
  independent Timers.
* Some re-naming of function/method arguments to conform more closely to
  Python style guidelines, e.g.
`methodParameters` to `method_parameters` and `paramDict` to `param_dict`. * Added `getWeights()` and `getDelays()` methods to `Projection` class. NOTE: check for which simulators this is available. XXX * Added a `RoundingWarning` exception, to warn the user when rounding is occurring. * Can now change the `spike_times` attribute of a `SpikeSourceArray` during a simulation without reinitialising. This reduces memory for long simulations, since it is not necessary to load all the spike times into memory at once. NOTE: check for which simulators this works. XXX * The `neuron` module now requires NEURON v6.1 or later. * For developers, changes to the layout of the code: (1) Simulator modules have been moved to a `src` subdirectory - this is to make distribution/installation of PyNN easier. (2) Several of the modules have been split into multiple files, in their own subdirectories, e.g.: `nest2.py` --> `nest2/__init__.py`, `nest2/cells.py` and `nest2/connectors.py`. The reason for this is that the individual files were getting very long and difficult to navigate. * Added `index()` method to `Population` class - what does it do?? XXX * Added `getSpikes()` method to `Population` class - returns spike times/ids as a numpy array. * Added support for the Stage 1 FACETS hardware. * Changed the default weight to zero (was 1.0 nA) * New STDP API, with implementations for `neuron` and `nest2`, based on discussions at the CodeSprint. * Distance calculations can now use periodic boundary conditions. * Parameter translation system now fully supports reverse translation (including units). The syntax for specifying translations is now simpler, which makes it easier to define new standard cell models. * All simulator modules now have a `list_standard_models()` function, which returns a list of all the models that are available for that simulator. * The `connect()` function now returns a `Connection` object, which has `weight` and `delay` properties. This allows accessing/changing weights/delays of individual connections in the low-level API. NOTE: only available in `nest2`? Implement for all sims or delete from this release. XXX * Added `record_c()` and `print_c()` methods to the `Population` class, to allow recording synaptic conductances. NOTE: only in `nest2` - should add to `neuron` or delete from this release. XXX * Procedures for connecting `Population`s can now be defined as classes (subclasses of an abstract `Connector` class) rather than given as a string. This should make it easier for users to add their own connection methods. Weights and delays can also be specified in the `Connector` constructor, removing the need to call `setWeights()` and `setDelays()` after the building of the connections. We keep the string specification for backwards compatibility, but this is deprecated and will be removed in a future API version. * Added new standard models: EIF_cond_alpha_isfa_ista, IF_cond_exp_gsfa_grr, HodgkinHuxley. NOTE: check names, and that all models are supported by at least two simulators. * Version 2 of the NEST simulator is now supported, with the `nest2` module. The `nest` module is now called `nest1`. * Changed the order of arguments in `random.RandomDistribution.__init__()` to put `rng` last, since this is the argument for which the default is most often used (moving it lets positional arguments be used for `distribution` and `parameters` when `rng` is not specified). 
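  For example, the common case can now be written positionally - a minimal
  sketch (the distribution name and parameter values are illustrative):

  {{{
  from pyNN.random import RandomDistribution

  # 'distribution' and 'parameters' given positionally; rng, now the last
  # argument, is simply omitted to use the default
  delay_distr = RandomDistribution('uniform', (0.1, 2.0))
  }}}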
* Changes to `ID` class: - `_cellclass` attribute renamed to `cellclass` and is now a [http://www.geocities.com/foetsch/python/new_style_classes.htm property]. - Ditto for `_position` --> `position` - Methods `setPosition()`, `getPosition()`, `setCellClass()` removed (just use the `position` or `cellclass` properties). - `set(param,val=None)` changed to `setParameters(**parameters)`. - Added `getParameters()` - `__setattr__()` and `__getattr__()` overridden, so that cell parameters can be read/changed using dot notation, e.g. `id.tau_m = 20.0` Note that one of the reasons for using properties is that it allows attributes to be created only when needed, hopefully saving on memory. * Added `positions` property to `Population` class, which allows the positions of all cells in a population to be set/read at once as a numpy array. * All positions are now in 3D space, irrespective of the shape of the `Population`. * Threads can now be used in `nest` and `pcsim`, via the `extra_param` option of the `setup()` function. * Removed `oldneuron` module. * Added `__iter__()` (iterates over ids) and `addresses()` (iterates over addresses) to the `Population` class. ============= Release 0.3.0 ============= * `pcsim` is now fully supported, although there are still one or two parts of the API that are not implemented. * The behaviour of the `run()` function in the `neuron` module has been changed to match the `nest` and `pcsim` modules, i.e., calling `run(simtime)` several times in succession will advance the simulation by `simtime` ms each time, whereas before, `neuron` would reset time to zero each time. * PyTables is now optional with `pcsim` * Change to `neuron` and `oldneuron` to match more closely the behaviour of `nest` and `pcsim` when the `synapse_type` argument is not given in `connect()`. Before, `neuron` would by default choose an excitatory synapse. Now, it chooses an inhibitory synapse if the weight is negative. * `runtests.py` now runs tests for `pcsim` as well as `nest`, `neuron` and `oldneuron`. * Minor changes to arg names and doc-strings, to improve API-consistency between modules. * Added users' guide (in `doc` directory). * Renamed `neuron` module to `oldneuron` and `neuron2` to `neuron`. * PyNN can now be installed using `distutils`, although this doesn't install or compile hoc/mod files. * Added a `compatible_output` argument to the `printX()` functions/methods, to allow choosing a simulator's native format (faster) or a format that is consistent across simulators. * Temporary files used for saving spikes and membrane potential are now created using the `tempfile` module, which means it should be safe to run multiple PyNN simulations at the same time (before, they would all overwrite the same file). * pygsl is no longer an absolute requirement but can be used if available * Changed the behaviour of `Population` indexing in the `nest` module to be more consistent with the `neuron2` module, in two ways. (i) negative addresses now raise an Exception. (ii) Previously, an integer index `n` signified the `(n+1)`th neuron in the population, e.g., `p[99]` would be the same as `p[10,10]` for a 10x10 population. Now, `p[99]` is the same as `p[99,]` and is only valid for a 1D population. * Addition of `ID` class (inherits from `int`), allowing syntax like `p[3,4].set('tau_m',20.0)` where `p` is a Population object. ============= Release 0.2.0 ============= * `Population.tset()` now accepts arrays of arrays (e.g. 
  conceptually a 2D array of 1D arrays, actually a 3D array) as well as arrays
  of lists.
* setup() now returns the node id. This can be used in a parallel framework to
  identify the master node.
* Unified output format for spikes and membrane potential for the `nest` and
  `neuron` modules.
* Added the first experimental version of the `pcsim` module.
* The `neuron2` module now supports distributed simulations using NEURON
  compiled with both MPI and Python support.
* `Population[xx]` syntax for getting individual cell ids improved. You can
  now write `p[2,3]` instead of `p[(2,3)]`.
* `v_init` added as a parameter to the `IF_curr_alpha`, etc., models.
* Trying to access a hoc variable that doesn't exist raises a Python
  exception, `HocError`.
* If synaptic delay is not specified, delays are now set to `min_delay`, not
  zero.
* The random number API allows keeping control of the random numbers used in
  simulations, by passing an RNG object as an argument to functions that use
  RNGs. The `random` module has wrappers for NumPy RNGs and GSL RNGs, as well
  as a stub class to indicate that the simulator's native RNG should be used
  (i.e., the `Random` class in hoc).
* Translation of model and parameter names from standardised names to
  simulator-specific names now uses one class per neuron model, rather than a
  single class with one method per model. For users, the only difference is
  that you have to use, e.g., `create(IF_curr_alpha)` instead of
  `create('IF_curr_alpha')`, i.e., pass the class instead of a string. For
  developers, it should now be easier to add new standard models.
* Added the `neuron2` module, a reimplementation of the PyNN API for NEURON,
  that uses more Python and less hoc.

=============
Release 0.1.0
=============

Version 0.1 of the API was never really released. At this point the project
used the FACETSCOMMON svn repository.

First svn import of early stage of PyNN was on 9th May 2006.
PyNN-0.7.4/setup.py0000644000175000017500000001053611774605133014656 0ustar  andrewandrew00000000000000#!/usr/bin/env python

from distutils.core import setup
from distutils.command.build import build as _build
import os


class build(_build):
    """Add nrnivmodl to the end of the build process."""

    def run(self):
        _build.run(self)
        nrnivmodl = self.find_nrnivmodl()
        if nrnivmodl:
            print "nrnivmodl found at", nrnivmodl
            import subprocess
            p = subprocess.Popen(nrnivmodl, shell=True, stdin=subprocess.PIPE,
                                 stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                                 close_fds=True,
                                 cwd=os.path.join(os.getcwd(), self.build_lib, 'pyNN/neuron/nmodl'))
            stdout = p.stdout.readlines()
            result = p.wait()
            # test if nrnivmodl was successful
            if result != 0:
                print "Unable to compile NEURON extensions. Output was:"
                print ' '.join([''] + stdout)  # indent error msg for easy comprehension
            else:
                print "Successfully compiled NEURON extensions."
        else:
            print "Unable to find nrnivmodl. It will not be possible to use the pyNN.neuron module."
def find_nrnivmodl(self): """Try to find the nrnivmodl executable.""" path = os.environ.get("PATH", "").split(os.pathsep) nrnivmodl = '' for dir_name in path: abs_name = os.path.abspath(os.path.normpath(os.path.join(dir_name, "nrnivmodl"))) if os.path.isfile(abs_name): nrnivmodl = abs_name break return nrnivmodl setup( name = "PyNN", version = "0.7.4", package_dir={'pyNN': 'src'}, packages = ['pyNN','pyNN.nest', 'pyNN.pcsim', 'pyNN.neuron', 'pyNN.brian', 'pyNN.recording', 'pyNN.standardmodels', 'pyNN.descriptions', 'pyNN.nest.standardmodels', 'pyNN.pcsim.standardmodels', 'pyNN.neuron.standardmodels', 'pyNN.brian.standardmodels'], package_data = {'pyNN': ['neuron/nmodl/*.mod', "descriptions/templates/*/*"]}, author = "The PyNN team", author_email = "pynn@neuralensemble.org", description = "A Python package for simulator-independent specification of neuronal network models", long_description = """In other words, you can write the code for a model once, using the PyNN API and the Python programming language, and then run it without modification on any simulator that PyNN supports (currently NEURON, NEST, PCSIM and Brian). The API has two parts, a low-level, procedural API (functions ``create()``, ``connect()``, ``set()``, ``record()``, ``record_v()``), and a high-level, object-oriented API (classes ``Population`` and ``Projection``, which have methods like ``set()``, ``record()``, ``setWeights()``, etc.). The low-level API is good for small networks, and perhaps gives more flexibility. The high-level API is good for hiding the details and the book-keeping, allowing you to concentrate on the overall structure of your model. The other thing that is required to write a model once and run it on multiple simulators is standard cell and synapse models. PyNN translates standard cell-model names and parameter names into simulator-specific names, e.g. standard model ``IF_curr_alpha`` is ``iaf_neuron`` in NEST and ``StandardIF`` in NEURON, while ``SpikeSourcePoisson`` is a ``poisson_generator`` in NEST and a ``NetStim`` in NEURON. Even if you don't wish to run simulations on multiple simulators, you may benefit from writing your simulation code using PyNN's powerful, high-level interface. In this case, you can use any neuron or synapse model supported by your simulator, and are not restricted to the standard models. 
PyNN is a work in progress, but is already being used for several large-scale simulation projects.""", license = "CeCILL http://www.cecill.info", keywords = "computational neuroscience simulation neuron nest pcsim brian neuroml", url = "http://neuralensemble.org/PyNN/", classifiers = ['Development Status :: 4 - Beta', 'Environment :: Console', 'Intended Audience :: Science/Research', 'License :: Other/Proprietary License', 'Natural Language :: English', 'Operating System :: OS Independent', 'Programming Language :: Python', 'Topic :: Scientific/Engineering'], cmdclass = {'build': build}, ) PyNN-0.7.4/src/0000755000175000017500000000000011774605211013723 5ustar andrewandrew00000000000000PyNN-0.7.4/src/descriptions/0000755000175000017500000000000011774605211016431 5ustar andrewandrew00000000000000PyNN-0.7.4/src/descriptions/templates/0000755000175000017500000000000011774605211020427 5ustar andrewandrew00000000000000PyNN-0.7.4/src/descriptions/templates/string/0000755000175000017500000000000011774605211021735 5ustar andrewandrew00000000000000PyNN-0.7.4/src/descriptions/templates/string/projection_default.txt0000644000175000017500000000030611736323051026352 0ustar andrewandrew00000000000000Projection "$label" From: $pre To: $post Target : $target Connector : $connector Plasticity : $plasticity Total connections : $size Local connections : $size_local PyNN-0.7.4/src/descriptions/templates/string/assembly_default.txt0000644000175000017500000000013611736323051026016 0ustar andrewandrew00000000000000Neuronal assembly called "$label", consisting of the following populations: $populations PyNN-0.7.4/src/descriptions/templates/string/population_default.txt0000644000175000017500000000034211736323051026370 0ustar andrewandrew00000000000000Population "$label" Structure : $structure Local cells : $size_local Cell type : $celltype.name ID range : $first_id-$last_id First cell on this node: ID: $local_first_id $cell_parametersPyNN-0.7.4/src/descriptions/templates/string/modeltype_default.txt0000644000175000017500000000000511736323051026174 0ustar andrewandrew00000000000000$namePyNN-0.7.4/src/descriptions/templates/string/populationview_default.txt0000644000175000017500000000013211736323051027260 0ustar andrewandrew00000000000000PopulationView "$label" parent : "$parent" size : $size mask : $mask PyNN-0.7.4/src/descriptions/templates/string/synapsedynamics_default.txt0000644000175000017500000000011411736323051027405 0ustar andrewandrew00000000000000Short-term plasticity mechanism: $fast Long-term plasticity mechanism: $slowPyNN-0.7.4/src/descriptions/templates/jinja2/0000755000175000017500000000000011774605211021604 5ustar andrewandrew00000000000000PyNN-0.7.4/src/descriptions/templates/jinja2/projection_default.txt0000644000175000017500000000223311736323051026222 0ustar andrewandrew00000000000000Projection "{{label}}" from "{{pre.label}}" ({{pre.size}} cells) to "{{post.label}}" ({{post.size}} cells) Target : {{target}} Connector : {{connector.name}} {%- for name,value in connector.parameters.items() %} {{name}} : {{value}}{% endfor %} Weights : {{connector.weights}} Delays : {{connector.delays}} Plasticity : {% if plasticity %} Short-term : {{plasticity.fast}} Long-term : {% if plasticity.slow %} Timing-dependence : {{plasticity.slow.timing_dependence.name}} {%- for name,value in plasticity.slow.timing_dependence.parameters.items() %} {{name}} : {{value}}{% endfor %} Weight-dependence : {{plasticity.slow.weight_dependence.name}} {%- for name,value in plasticity.slow.weight_dependence.parameters.items() %} 
    {{name}} : {{value}}{% endfor %}
        Voltage-dependence        : {{plasticity.slow.voltage_dependence}}
        Dendritic delay fraction  : {{plasticity.slow.dendritic_delay_fraction}}{% endif %}{% else %}None{% endif %}
  Total connections   : {{size}}
  Local connections   : {{size_local}}
PyNN-0.7.4/src/descriptions/templates/jinja2/assembly_default.txt0000644000175000017500000000022511736323051025664 0ustar  andrewandrew00000000000000Neuronal assembly called "{{label}}", consisting of the following populations:
{% for p in populations %}  * {{p.label}}
{% endfor %}
PyNN-0.7.4/src/descriptions/templates/jinja2/population_default.txt0000644000175000017500000000070711736323051026244 0ustar  andrewandrew00000000000000Population "{{label}}"
  Structure   : {{structure.name}}
  {%- for name,value in structure.parameters.items() %}
    {{name}}: {{value}}{% endfor %}
  Local cells : {{size_local}}
  Cell type   : {{celltype.name}}
  ID range    : {{first_id}}-{{last_id}}
  {% if size_local %}First cell on this node:
    ID: {{local_first_id}}
    {% for name,value in cell_parameters.items() %}{{name}}: {{value}}
    {% endfor -%}
  {% endif %}PyNN-0.7.4/src/descriptions/templates/jinja2/modeltype_default.txt0000644000175000017500000000001011736323051026043 0ustar  andrewandrew00000000000000{{name}}PyNN-0.7.4/src/descriptions/templates/jinja2/population_old.txt0000644000175000017500000000103511736323051025371 0ustar  andrewandrew00000000000000------- Population description -------
Population called {{label}} is made of {{size}} cells [{{size_local}} being local]
{% if structure %}-> Cells are arranged in a {{structure}}{% endif %}
-> Celltype is {{celltype}}
-> ID range is {{first_id}}-{{last_id}}
{% if n_cells_local > 0 %}-> Cell Parameters used for first cell on this node are:
{% for name, value in cell_parameters %}    {{name}}: {{value}}
{% endfor %}
{% else %}
-> There are no cells on this node.
{% endif %} --- End of Population description ---- PyNN-0.7.4/src/descriptions/templates/jinja2/populationview_default.txt0000644000175000017500000000014611736323051027134 0ustar andrewandrew00000000000000PopulationView "{{label}}" parent : "{{parent}}" size : {{size}} mask : {{mask}} PyNN-0.7.4/src/descriptions/templates/jinja2/synapsedynamics_default.txt0000644000175000017500000000012211736323051027253 0ustar andrewandrew00000000000000Short-term plasticity mechanism: {{fast}} Long-term plasticity mechanism: {{slow}}PyNN-0.7.4/src/descriptions/templates/cheetah/0000755000175000017500000000000011774605211022030 5ustar andrewandrew00000000000000PyNN-0.7.4/src/descriptions/templates/cheetah/projection_default.txt0000644000175000017500000000223211736323051026445 0ustar andrewandrew00000000000000Projection "$label" from "$pre.label" ($pre.size cells) to "$post.label" ($post.size cells) Target : $target Connector : $connector.name #for $name,$value in $connector.parameters.items() $name : $value #end for Weights : $connector.weights Delays : $connector.delays Plasticity : #if $plasticity Short-term : $plasticity.fast Long-term : #if $plasticity.slow Timing-dependence : $plasticity.slow.timing_dependence.name #for $name,$value in $plasticity.slow.timing_dependence.parameters.items() $name : $value #end for Weight-dependence : $plasticity.slow.weight_dependence.name #for $name,$value in $plasticity.slow.weight_dependence.parameters.items() $name : $value #end for Voltage-dependence : $plasticity.slow.voltage_dependence Dendritic delay fraction : $plasticity.slow.dendritic_delay_fraction #end if #else None #end if Total connections : $size Local connections : $size_local PyNN-0.7.4/src/descriptions/templates/cheetah/assembly_default.txt0000644000175000017500000000021011736323051026102 0ustar andrewandrew00000000000000Neuronal assembly called "$label", consisting of the following populations: #for $p in $populations * $p.label #end for PyNN-0.7.4/src/descriptions/templates/cheetah/population_default.txt0000644000175000017500000000064311736323051026467 0ustar andrewandrew00000000000000Population "$label" Structure : $structure.name #for $name,$value in $structure.parameters.items() $name: $value #end for Local cells : $size_local Cell type : $celltype.name ID range : $first_id-$last_id #if $size_local First cell on this node: ID: $local_first_id #for $name,$value in $cell_parameters.items() $name: $value #end for #end ifPyNN-0.7.4/src/descriptions/templates/cheetah/modeltype_default.txt0000644000175000017500000000000511736323051026267 0ustar andrewandrew00000000000000$namePyNN-0.7.4/src/descriptions/templates/cheetah/population_old.txt0000644000175000017500000000074411736323051025623 0ustar andrewandrew00000000000000------- Population description ------- Population called $label is made of $size cells [$size_local being local] #if $structure -> Cells are arranged in a $structure #end if -> Celltype is $celltype -> ID range is $first_id-$last_id #if $n_cells_local > 0 -> Cell Parameters used for first cell on this node are: #for $name, $value in $cell_parameters.items() $name: $value #end for #else -> There are no cells on this node. 
#end if --- End of Population description ---- PyNN-0.7.4/src/descriptions/templates/cheetah/populationview_default.txt0000644000175000017500000000013211736323051027353 0ustar andrewandrew00000000000000PopulationView "$label" parent : "$parent" size : $size mask : $mask PyNN-0.7.4/src/descriptions/templates/cheetah/synapsedynamics_default.txt0000644000175000017500000000011411736323051027500 0ustar andrewandrew00000000000000Short-term plasticity mechanism: $fast Long-term plasticity mechanism: $slowPyNN-0.7.4/src/descriptions/__init__.py0000644000175000017500000001411711736323051020543 0ustar andrewandrew00000000000000""" Support module for the `describe()` method of many PyNN classes. If a supported template engine is available on the Python path, PyNN will use this engine to produce the output from `describe()`. As a fall-back, it will use the built-in string.Template engine, but this produces much less well-formatted output, as it does not support hierarchical contexts, loops or conditionals. Currently supported engines are Cheetah Template and Jinja2, but it should be trivial to add others. If a user has a preference for a particular engine (e.g. if they are providing their own templates for `describe()`), they may set the DEFAULT_TEMPLATE_ENGINE module attribute, e.g.:: from pyNN import descriptions descriptions.DEFAULT_TEMPLATE_ENGINE = 'jinja2' :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. """ import string import os.path DEFAULT_TEMPLATE_ENGINE = None # can be set by user TEMPLATE_ENGINES = {} def get_default_template_engine(): """ Return the default template engine class. """ default = DEFAULT_TEMPLATE_ENGINE or TEMPLATE_ENGINES.keys()[0] return TEMPLATE_ENGINES[default] def render(engine, template, context): """ Render the given template with the given context, using the given engine. """ if template is None: return context else: if engine == 'default': engine = get_default_template_engine() elif isinstance(engine, basestring): engine = TEMPLATE_ENGINES[engine] assert issubclass(engine, TemplateEngine), str(engine) return engine.render(template, context) class TemplateEngine(object): """ Base class. """ @classmethod def get_template(cls, template): """ template may be either a string containing a template or the name of a file (relative to pyNN/descriptions/templates/) """ raise NotImplementedError() @classmethod def render(cls, template, context): """ Render the template with the context. template may be either a string containing a template or the name of a file (relative to pyNN/descriptions/templates/) context should be a dict. """ raise NotImplementedError() class StringTemplateEngine(TemplateEngine): """ Interface to the built-in string.Template template engine. """ template_dir = os.path.join(os.path.dirname(__file__), 'templates', 'string') @classmethod def get_template(cls, template): """ template may be either a string containing a template or the name of a file (relative to pyNN/descriptions/templates/string/) """ template_path = os.path.join(cls.template_dir, template) if os.path.exists(template_path): f = open(template_path, 'r') template = f.read() f.close() return string.Template(template) @classmethod def render(cls, template, context): """ Render the template with the context. template may be either a string containing a template or the name of a file (relative to pyNN/descriptions/templates/string/) context should be a dict. 
""" template = cls.get_template(template) return template.safe_substitute(context) TEMPLATE_ENGINES['string'] = StringTemplateEngine try: import jinja2 class Jinja2TemplateEngine(TemplateEngine): """ Interface to the Jinja2 template engine. """ env = jinja2.Environment(loader=jinja2.PackageLoader('pyNN.descriptions', 'templates/jinja2')) @classmethod def get_template(cls, template): """ template may be either a string containing a template or the name of a file (relative to pyNN/descriptions/templates/jinja2/) """ assert isinstance(template, basestring) try: # maybe template is a file template = cls.env.get_template(template) except Exception: # interpret template as a string template = cls.env.from_string(template) return template @classmethod def render(cls, template, context): """ Render the template with the context. template may be either a string containing a template or the name of a file (relative to pyNN/descriptions/templates/jinja2/) context should be a dict. """ template = cls.get_template(template) return template.render(context) TEMPLATE_ENGINES['jinja2'] = Jinja2TemplateEngine except ImportError: pass try: import Cheetah.Template class CheetahTemplateEngine(TemplateEngine): """ Interface to the Cheetah template engine. """ template_dir = os.path.join(os.path.dirname(__file__), 'templates', 'cheetah') @classmethod def get_template(cls, template): """ template may be either a string containing a template or the name of a file (relative to pyNN/descriptions/templates/cheetah) """ template_path = os.path.join(cls.template_dir, template) if os.path.exists(template_path): return Cheetah.Template.Template.compile(file=template_path) else: return Cheetah.Template.Template.compile(source=template) @classmethod def render(cls, template, context): """ Render the template with the context. template may be either a string containing a template or the name of a file (relative to pyNN/descriptions/templates/cheetah/) context should be a dict. """ template = cls.get_template(template)(namespaces=[context]) return template.respond() TEMPLATE_ENGINES['cheetah'] = CheetahTemplateEngine except ImportError: passPyNN-0.7.4/src/nest/0000755000175000017500000000000011774605211014674 5ustar andrewandrew00000000000000PyNN-0.7.4/src/nest/electrodes.py0000644000175000017500000001642611737573151017416 0ustar andrewandrew00000000000000""" Current source classes for the nest module. Classes: DCSource -- a single pulse of current of constant amplitude. StepCurrentSource -- a step-wise time-varying current. NoisyCurrentSource -- a Gaussian whitish noise current. :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. $Id: electrodes.py 1121 2012-04-06 13:58:02Z apdavison $ """ import nest import numpy from pyNN.nest.simulator import state from pyNN.random import NumpyRNG, NativeRNG from pyNN.common import Population, PopulationView, Assembly # should really use the StandardModel machinery to allow reverse translations class CurrentSource(object): """Base class for a source of current to be injected into a neuron.""" def delay_correction(self, value): # use dt or min_delay? 
        return value - state.min_delay

    def inject_into(self, cell_list):
        """Inject this current source into some cells."""
        for id in cell_list:
            if id.local and not id.celltype.injectable:
                raise TypeError("Can't inject current into a spike source.")
        if isinstance(cell_list, (Population, PopulationView, Assembly)):
            cell_list = [cell for cell in cell_list]
        nest.DivergentConnect(self._device, cell_list, delay=state.min_delay, weight=1.0)


class DCSource(CurrentSource):
    """Source producing a single pulse of current of constant amplitude."""

    def __init__(self, amplitude=1.0, start=0.0, stop=None):
        """Construct the current source.

        Arguments:
            start     -- onset time of pulse in ms
            stop      -- end of pulse in ms
            amplitude -- pulse amplitude in nA
        """
        self.start = start
        self.stop = stop
        self.amplitude = amplitude
        self._device = nest.Create('dc_generator')
        nest.SetStatus(self._device, {'amplitude': 1000.0*self.amplitude,  # conversion from nA to pA
                                      'start': self.delay_correction(start)})
        if stop:
            nest.SetStatus(self._device, {'stop': self.delay_correction(stop)})


class ACSource(CurrentSource):
    """Source producing a sinusoidally varying current."""

    def __init__(self, amplitude=1.0, offset=0.0, frequency=10, phase=0., start=0.0, stop=None):
        """Construct the current source.

        Arguments:
            start     -- onset time of the current in ms
            stop      -- end of the current in ms
            amplitude -- sine amplitude in nA
            offset    -- constant current offset in nA
            frequency -- frequency in Hz
            phase     -- phase in degrees
        """
        self.amplitude = amplitude
        self.offset = offset
        self.frequency = frequency
        self.phase = phase
        self._device = nest.Create('ac_generator')
        nest.SetStatus(self._device, {'amplitude': 1000.0*self.amplitude,  # conversion from nA to pA
                                      'offset'   : 1000.0*self.offset,
                                      'frequency': float(self.frequency),
                                      'phase'    : float(self.phase),
                                      'start'    : self.delay_correction(start)})
        if stop:
            nest.SetStatus(self._device, {'stop': float(stop)})


class StepCurrentSource(CurrentSource):
    """A step-wise time-varying current source."""

    def __init__(self, times, amplitudes):
        """Construct the current source.

        Arguments:
            times      -- list/array of times at which the injected current changes.
            amplitudes -- list/array of current amplitudes to be injected at the
                          times specified in `times`.

        The injected current will be zero up until the first time in `times`. The
        current will continue at the final value in `amplitudes` until the end of
        the simulation.
        """
        self._device = nest.Create('step_current_generator')
        assert len(times) == len(amplitudes), "times and amplitudes must be the same size (len(times)=%d, len(amplitudes)=%d)" % (len(times), len(amplitudes))
        # extend the arrays so that the current stays at its final value until
        # the end of the simulation (works around a bug in NEST)
        try:
            times.append(1e12)
        except AttributeError:
            # times is a numpy array rather than a list
            times = numpy.append(times, 1e12)
        try:
            amplitudes.append(amplitudes[-1])
        except AttributeError:
            # amplitudes is a numpy array rather than a list
            amplitudes = numpy.append(amplitudes, amplitudes[-1])
        nest.SetStatus(self._device, {'amplitude_times': self.delay_correction(numpy.array(times, 'float')),
                                      'amplitude_values': 1000.0*numpy.array(amplitudes, 'float')})


class NoisyCurrentSource(CurrentSource):
    """A Gaussian "white" noise current source. The current amplitude changes at
    fixed intervals, with the new value drawn from a Gaussian distribution."""
    # We have a possible problem here in that each recipient receives a
    # different noise stream, which is probably what is wanted in most
    # scenarios, but conflicts with the idea of a current source as a single
    # object.
    # For the purposes of reproducibility, it would also be nice to have
    # simulator-independent noise, which means adding an rng argument. If this
If this # is a NativeRNG, we use the 'noise_generator' model, otherwise we generate # values and use a 'step_current_generator'. def __init__(self, mean, stdev, dt=None, start=0.0, stop=None, rng=None): """Construct the current source. Required arguments: mean -- mean current amplitude in nA stdev -- standard deviation of the current amplitude in nA Optional arguments: dt -- interval between updates of the current amplitude. Must be a multiple of the simulation time step. If not specified, the simulation time step will be used. start -- onset of the current injection in ms. If not specified, the current will begin at the start of the simulation. stop -- end of the current injection in ms. If not specified, the current will continue until the end of the simulation. rng -- an RNG object from the `pyNN.random` module. For speed, this should be a `NativeRNG` instance (uses the simulator's internal random number generator). For reproducibility across simulators, use one of the other RNG types. If not specified, a NumpyRNG is used. """ self.rng = rng or NumpyRNG() self.dt = dt or state.dt if dt: assert self.dt%dt == 0 self.start = start self.stop = stop self.mean = mean self.stdev = stdev if isinstance(rng, NativeRNG): self._device = nest.Create('noise_generator') nest.SetStatus(self._device, {'mean': mean*1000.0, 'std': stdev*1000.0, 'start': self.delay_correction(start), 'dt': self.dt}) if stop: nest.SetStatus(self._device, {'stop': self.delay_correction(stop)}) else: raise NotImplementedError("Only using a NativeRNG is currently supported.") PyNN-0.7.4/src/nest/standardmodels/0000755000175000017500000000000011774605211017700 5ustar andrewandrew00000000000000PyNN-0.7.4/src/nest/standardmodels/cells.py0000644000175000017500000002534211736323051021357 0ustar andrewandrew00000000000000""" Standard cells for nest :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. $Id: cells.py 957 2011-05-03 13:44:15Z apdavison $ """ from pyNN.standardmodels import cells, build_translations class IF_curr_alpha(cells.IF_curr_alpha): """ Leaky integrate and fire model with fixed threshold and alpha-function- shaped post-synaptic current. """ translations = build_translations( ('v_rest', 'E_L'), ('v_reset', 'V_reset'), ('cm', 'C_m', 1000.0), # C_m is in pF, cm in nF ('tau_m', 'tau_m'), ('tau_refrac', 't_ref'), ('tau_syn_E', 'tau_syn_ex'), ('tau_syn_I', 'tau_syn_in'), ('v_thresh', 'V_th'), ('i_offset', 'I_e', 1000.0), # I_e is in pA, i_offset in nA ) nest_name = {"on_grid": "iaf_psc_alpha", "off_grid": "iaf_psc_alpha"} standard_receptor_type = True class IF_curr_exp(cells.IF_curr_exp): """ Leaky integrate and fire model with fixed threshold and decaying-exponential post-synaptic current. (Separate synaptic currents for excitatory and inhibitory synapses. """ translations = build_translations( ('v_rest', 'E_L'), ('v_reset', 'V_reset'), ('cm', 'C_m', 1000.0), # C_m is in pF, cm in nF ('tau_m', 'tau_m'), ('tau_refrac', 't_ref'), ('tau_syn_E', 'tau_syn_ex'), ('tau_syn_I', 'tau_syn_in'), ('v_thresh', 'V_th'), ('i_offset', 'I_e', 1000.0), # I_e is in pA, i_offset in nA ) nest_name = {"on_grid": 'iaf_psc_exp', "off_grid": 'iaf_psc_exp_ps'} standard_receptor_type = True class IF_cond_alpha(cells.IF_cond_alpha): """ Leaky integrate and fire model with fixed threshold and alpha-function- shaped post-synaptic conductance. 
""" translations = build_translations( ('v_rest', 'E_L') , ('v_reset', 'V_reset'), ('cm', 'C_m', 1000.0), # C_m is in pF, cm in nF ('tau_m', 'g_L', "cm/tau_m*1000.0", "C_m/g_L"), ('tau_refrac', 't_ref'), ('tau_syn_E', 'tau_syn_ex'), ('tau_syn_I', 'tau_syn_in'), ('v_thresh', 'V_th'), ('i_offset', 'I_e', 1000.0), # I_e is in pA, i_offset in nA ('e_rev_E', 'E_ex'), ('e_rev_I', 'E_in'), ) nest_name = {"on_grid": "iaf_cond_alpha", "off_grid": "iaf_cond_alpha"} standard_receptor_type = True class IF_cond_exp(cells.IF_cond_exp): """ Leaky integrate and fire model with fixed threshold and exponentially-decaying post-synaptic conductance. """ translations = build_translations( ('v_rest', 'E_L') , ('v_reset', 'V_reset'), ('cm', 'C_m', 1000.0), # C_m is in pF, cm in nF ('tau_m', 'g_L', "cm/tau_m*1000.0", "C_m/g_L"), ('tau_refrac', 't_ref'), ('tau_syn_E', 'tau_syn_ex'), ('tau_syn_I', 'tau_syn_in'), ('v_thresh', 'V_th'), ('i_offset', 'I_e', 1000.0), # I_e is in pA, i_offset in nA ('e_rev_E', 'E_ex'), ('e_rev_I', 'E_in'), ) nest_name = {"on_grid": "iaf_cond_exp", "off_grid": "iaf_cond_exp"} standard_receptor_type = True class IF_cond_exp_gsfa_grr(cells.IF_cond_exp_gsfa_grr): """ Linear leaky integrate and fire model with fixed threshold, decaying-exponential post-synaptic conductance, conductance based spike-frequency adaptation, and a conductance-based relative refractory mechanism. See: Muller et al (2007) Spike-frequency adapting neural ensembles: Beyond mean-adaptation and renewal theories. Neural Computation 19: 2958-3010. See also: EIF_cond_alpha_isfa_ista """ translations = build_translations( ('v_rest', 'E_L'), ('v_reset', 'V_reset'), ('cm', 'C_m', 1000.0), # C_m is in pF, cm in nF ('tau_m', 'g_L', "cm/tau_m*1000.0", "C_m/g_L"), ('tau_refrac', 't_ref'), ('tau_syn_E', 'tau_syn_ex'), ('tau_syn_I', 'tau_syn_in'), ('v_thresh', 'V_th'), ('i_offset', 'I_e', 1000.0), # I_e is in pA, i_offset in nA ('e_rev_E', 'E_ex'), ('e_rev_I', 'E_in'), ('tau_sfa', 'tau_sfa'), ('e_rev_sfa', 'E_sfa'), ('q_sfa', 'q_sfa'), ('tau_rr', 'tau_rr'), ('e_rev_rr', 'E_rr'), ('q_rr', 'q_rr') ) nest_name = {"on_grid": "iaf_cond_exp_sfa_rr", "off_grid": "iaf_cond_exp_sfa_rr"} standard_receptor_type = True class IF_facets_hardware1(cells.IF_facets_hardware1): """ Leaky integrate and fire model with conductance-based synapses and fixed threshold as it is resembled by the FACETS Hardware Stage 1. 
For further details regarding the hardware model see the FACETS-internal Wiki: https://facets.kip.uni-heidelberg.de/private/wiki/index.php/WP7_NNM """ # in 'iaf_cond_exp', the dimension of C_m is pF, # while in the pyNN context, cm is given in nF translations = build_translations( ('v_reset', 'V_reset'), ('v_rest', 'E_L'), ('v_thresh', 'V_th'), ('e_rev_I', 'E_in'), ('tau_syn_E', 'tau_syn_ex'), ('tau_syn_I', 'tau_syn_in'), ('g_leak', 'g_L') ) nest_name = {"on_grid": "iaf_cond_exp", "off_grid": "iaf_cond_exp"} standard_receptor_type = True def __init__(self, parameters): cells.IF_facets_hardware1.__init__(self, parameters) self.parameters['C_m'] = 200.0 self.parameters['t_ref'] = 1.0 self.parameters['E_ex'] = 0.0 class HH_cond_exp(cells.HH_cond_exp): """Single-compartment Hodgkin-Huxley model.""" translations = build_translations( ('gbar_Na', 'g_Na', 1000.0), # uS --> nS ('gbar_K', 'g_K', 1000.0), ('g_leak', 'g_L', 1000.0), ('cm', 'C_m', 1000.0), # nF --> pF ('v_offset', 'V_T'), ('e_rev_Na', 'E_Na'), ('e_rev_K', 'E_K'), ('e_rev_leak', 'E_L'), ('e_rev_E', 'E_ex'), ('e_rev_I', 'E_in'), ('tau_syn_E', 'tau_syn_ex'), ('tau_syn_I', 'tau_syn_in'), ('i_offset', 'I_e', 1000.0), # nA --> pA ) nest_name = {"on_grid": "hh_cond_exp_traub", "off_grid": "hh_cond_exp_traub"} standard_receptor_type = True class EIF_cond_alpha_isfa_ista(cells.EIF_cond_alpha_isfa_ista): """ Exponential integrate and fire neuron with spike triggered and sub-threshold adaptation currents (isfa, ista reps.) according to: Brette R and Gerstner W (2005) Adaptive Exponential Integrate-and-Fire Model as an Effective Description of Neuronal Activity. J Neurophysiol 94:3637-3642 See also: IF_cond_exp_gsfa_grr """ translations = build_translations( ('cm' , 'C_m', 1000.0), # nF -> pF ('tau_refrac', 't_ref'), ('v_spike' , 'V_peak'), ('v_reset' , 'V_reset'), ('v_rest' , 'E_L'), ('tau_m' , 'g_L', "cm/tau_m*1000.0", "C_m/g_L"), ('i_offset' , 'I_e', 1000.0), # nA -> pA ('a' , 'a'), ('b' , 'b', 1000.0), # nA -> pA. ('delta_T' , 'Delta_T'), ('tau_w' , 'tau_w'), ('v_thresh' , 'V_th'), ('e_rev_E' , 'E_ex'), ('tau_syn_E' , 'tau_syn_ex'), ('e_rev_I' , 'E_in'), ('tau_syn_I' , 'tau_syn_in'), ) nest_name = {"on_grid": "aeif_cond_alpha", "off_grid": "aeif_cond_alpha"} standard_receptor_type = True class SpikeSourcePoisson(cells.SpikeSourcePoisson): """Spike source, generating spikes according to a Poisson process.""" translations = build_translations( ('rate', 'rate'), ('start', 'start'), ('duration', 'stop', "start+duration", "stop-start"), ) nest_name = {"on_grid": 'poisson_generator', "off_grid": 'poisson_generator_ps'} always_local = True uses_parrot = True def __init__(self, parameters): cells.SpikeSourcePoisson.__init__(self, parameters) self.parameters['origin'] = 1.0 class SpikeSourceInhGamma(cells.SpikeSourceInhGamma): """ Spike source, generating realizations of an inhomogeneous gamma process, employing the thinning method. See: Muller et al (2007) Spike-frequency adapting neural ensembles: Beyond mean-adaptation and renewal theories. Neural Computation 19: 2958-3010. 
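
    Usage sketch (parameter values are illustrative only; `create` is the
    convenience function built by this backend):

        cells = create(SpikeSourceInhGamma,
                       {'a': [2.0], 'b': [0.5], 'tbins': [0.0]}, n=10)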
""" translations = build_translations( ('a', 'a'), ('b', 'b'), ('tbins', 'tbins'), ('start', 'start'), ('duration', 'stop', "duration+start", "stop-start"), ) nest_name = {"on_grid": 'inh_gamma_generator', "off_grid": 'inh_gamma_generator'} always_local = True def __init__(self, parameters): cells.SpikeSourceInhGamma.__init__(self, parameters) self.parameters['origin'] = 1.0 class SpikeSourceArray(cells.SpikeSourceArray): """Spike source generating spikes at the times given in the spike_times array.""" translations = build_translations( ('spike_times', 'spike_times'), ) nest_name = {"on_grid": 'spike_generator', "off_grid": 'spike_generator'} always_local = True class EIF_cond_exp_isfa_ista(cells.EIF_cond_exp_isfa_ista): """ Exponential integrate and fire neuron with spike triggered and sub-threshold adaptation currents (isfa, ista reps.) according to: Brette R and Gerstner W (2005) Adaptive Exponential Integrate-and-Fire Model as an Effective Description of Neuronal Activity. J Neurophysiol 94:3637-3642 See also: IF_cond_exp_gsfa_grr """ translations = build_translations( ('cm' , 'C_m', 1000.0), # nF -> pF ('tau_refrac', 't_ref'), ('v_spike' , 'V_peak'), ('v_reset' , 'V_reset'), ('v_rest' , 'E_L'), ('tau_m' , 'g_L', "cm/tau_m*1000.0", "C_m/g_L"), ('i_offset' , 'I_e', 1000.0), # nA -> pA ('a' , 'a'), ('b' , 'b', 1000.0), # nA -> pA. ('delta_T' , 'Delta_T'), ('tau_w' , 'tau_w'), ('v_thresh' , 'V_th'), ('e_rev_E' , 'E_ex'), ('tau_syn_E' , 'tau_syn_ex'), ('e_rev_I' , 'E_in'), ('tau_syn_I' , 'tau_syn_in'), ) nest_name = {"on_grid": "aeif_cond_exp", "off_grid": "aeif_cond_exp"} standard_receptor_type = True PyNN-0.7.4/src/nest/standardmodels/__init__.py0000644000175000017500000000000011736323051021774 0ustar andrewandrew00000000000000PyNN-0.7.4/src/nest/standardmodels/synapses.py0000644000175000017500000002200111736323051022107 0ustar andrewandrew00000000000000""" Synapse Dynamics classes for nest :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. $Id: synapses.py 957 2011-05-03 13:44:15Z apdavison $ """ import nest from pyNN.standardmodels import synapses, build_translations, SynapseDynamics, STDPMechanism class SynapseDynamics(SynapseDynamics): __doc__ = SynapseDynamics.__doc__ def _get_nest_synapse_model(self, suffix): # We create a particular synapse context for each projection, by copying # the one which is desired. if self.fast: if self.slow: raise Exception("It is not currently possible to have both short-term and long-term plasticity at the same time with this simulator.") else: base_model = self.fast.native_name elif self.slow: base_model = self.slow.possible_models if isinstance(base_model, set): logger.warning("Several STDP models are available for these connections:") logger.warning(", ".join(model for model in base_model)) base_model = list(base_model)[0] logger.warning("By default, %s is used" % base_model) available_models = nest.Models(mtype='synapses') if base_model not in available_models: raise ValueError("Synapse dynamics model '%s' not a valid NEST synapse model. 
" "Possible models in your NEST build are: %s" % (base_model, available_models)) synapse_defaults = nest.GetDefaults(base_model) synapse_defaults.pop('synapsemodel') synapse_defaults.pop('num_connections') if 'num_connectors' in synapse_defaults: synapse_defaults.pop('num_connectors') if self.fast: synapse_defaults.update(self.fast.parameters) elif self.slow: stdp_parameters = self.slow.all_parameters # NEST does not support w_min != 0 try: stdp_parameters.pop("w_min_always_zero_in_NEST") except Exception: pass # Tau_minus is a parameter of the post-synaptic cell, not of the connection stdp_parameters.pop("tau_minus") synapse_defaults.update(stdp_parameters) label = "%s_%s" % (base_model, suffix) nest.CopyModel(base_model, label, synapse_defaults) return label def _set_tau_minus(self, cells): if len(cells) > 0 and self.slow: if 'tau_minus' in nest.GetStatus([cells[0]])[0]: tau_minus = self.slow.timing_dependence.parameters["tau_minus"] nest.SetStatus(cells.tolist(), [{'tau_minus': tau_minus}]) else: raise Exception("Postsynaptic cell model does not support STDP.") class STDPMechanism(STDPMechanism): """Specification of STDP models.""" def __init__(self, timing_dependence=None, weight_dependence=None, voltage_dependence=None, dendritic_delay_fraction=1.0): assert dendritic_delay_fraction == 1, """NEST does not currently support axonal delays: for the purpose of STDP calculations all delays are assumed to be dendritic.""" super(STDPMechanism, self).__init__(timing_dependence, weight_dependence, voltage_dependence, dendritic_delay_fraction) class TsodyksMarkramMechanism(synapses.TsodyksMarkramMechanism): translations = build_translations( ('U', 'U'), ('tau_rec', 'tau_rec'), ('tau_facil', 'tau_fac'), ('u0', 'u'), # this could cause problems for reverse translation ('x0', 'x' ), # (as for V_m) in cell models, since the initial value ('y0', 'y') # is not stored, only set. ) native_name = 'tsodyks_synapse' def __init__(self, U=0.5, tau_rec=100.0, tau_facil=0.0, u0=0.0, x0=1.0, y0=0.0): #synapses.TsodyksMarkramMechanism.__init__(self, U, tau_rec, tau_facil, u0, x0, y0) parameters = dict(locals()) # need the dict to get a copy of locals. When running parameters.pop('self') # through coverage.py, for some reason, the pop() doesn't have any effect self.parameters = self.translate(parameters) class AdditiveWeightDependence(synapses.AdditiveWeightDependence): """ The amplitude of the weight change is fixed for depression (`A_minus`) and for potentiation (`A_plus`). If the new weight would be less than `w_min` it is set to `w_min`. If it would be greater than `w_max` it is set to `w_max`. """ translations = build_translations( ('w_max', 'Wmax', 1000.0), # unit conversion ('w_min', 'w_min_always_zero_in_NEST'), ('A_plus', 'lambda'), ('A_minus', 'alpha', 'A_minus/A_plus', 'alpha*lambda'), ) possible_models = set(['stdp_synapse']) #,'stdp_synapse_hom']) def __init__(self, w_min=0.0, w_max=1.0, A_plus=0.01, A_minus=0.01): # units? if w_min != 0: raise Exception("Non-zero minimum weight is not supported by NEST.") #synapses.AdditiveWeightDependence.__init__(self, w_min, w_max, A_plus, A_minus) parameters = dict(locals()) parameters.pop('self') self.parameters = self.translate(parameters) self.parameters['mu_plus'] = 0.0 self.parameters['mu_minus'] = 0.0 class MultiplicativeWeightDependence(synapses.MultiplicativeWeightDependence): """ The amplitude of the weight change depends on the current weight. 
    For depression, Dw propto w-w_min
    For potentiation, Dw propto w_max-w
    """
    translations = build_translations(
        ('w_max',    'Wmax',  1000.0), # unit conversion
        ('w_min',    'w_min_always_zero_in_NEST'),
        ('A_plus',   'lambda'),
        ('A_minus',  'alpha', 'A_minus/A_plus', 'alpha*lambda'),
    )
    possible_models = set(['stdp_synapse']) #,'stdp_synapse_hom'])

    def __init__(self, w_min=0.0, w_max=1.0, A_plus=0.01, A_minus=0.01):
        if w_min != 0:
            raise Exception("Non-zero minimum weight is not supported by NEST.")
        #synapses.MultiplicativeWeightDependence.__init__(self, w_min, w_max, A_plus, A_minus)
        parameters = dict(locals())
        parameters.pop('self')
        self.parameters = self.translate(parameters)
        self.parameters['mu_plus'] = 1.0
        self.parameters['mu_minus'] = 1.0


class AdditivePotentiationMultiplicativeDepression(synapses.AdditivePotentiationMultiplicativeDepression):
    """
    The amplitude of the weight change depends on the current weight for
    depression (Dw propto w-w_min) and is fixed for potentiation.
    """
    translations = build_translations(
        ('w_max',    'Wmax',  1000.0), # unit conversion
        ('w_min',    'w_min_always_zero_in_NEST'),
        ('A_plus',   'lambda'),
        ('A_minus',  'alpha', 'A_minus/A_plus', 'alpha*lambda'),
    )
    possible_models = set(['stdp_synapse']) #,'stdp_synapse_hom'])

    def __init__(self, w_min=0.0, w_max=1.0, A_plus=0.01, A_minus=0.01):
        if w_min != 0:
            raise Exception("Non-zero minimum weight is not supported by NEST.")
        #synapses.AdditivePotentiationMultiplicativeDepression.__init__(self, w_min, w_max, A_plus, A_minus)
        parameters = dict(locals())
        parameters.pop('self')
        self.parameters = self.translate(parameters)
        self.parameters['mu_plus'] = 0.0
        self.parameters['mu_minus'] = 1.0


class GutigWeightDependence(synapses.GutigWeightDependence):
    """
    The amplitude of the weight change depends on a power of the current
    weight, with exponents `mu_plus` and `mu_minus`.
    For potentiation, Dw propto (w_max-w)^mu_plus
    For depression, Dw propto w^mu_minus (w_min is fixed at zero in NEST)
    """
    translations = build_translations(
        ('w_max',    'Wmax',  1000.0), # unit conversion
        ('w_min',    'w_min_always_zero_in_NEST'),
        ('A_plus',   'lambda'),
        ('A_minus',  'alpha', 'A_minus/A_plus', 'alpha*lambda'),
        ('mu_plus',  'mu_plus'),
        ('mu_minus', 'mu_minus'),
    )
    possible_models = set(['stdp_synapse']) #,'stdp_synapse_hom'])

    def __init__(self, w_min=0.0, w_max=1.0, A_plus=0.01, A_minus=0.01, mu_plus=0.5, mu_minus=0.5):
        if w_min != 0:
            raise Exception("Non-zero minimum weight is not supported by NEST.")
        #synapses.GutigWeightDependence.__init__(self, w_min, w_max, A_plus, A_minus)
        parameters = dict(locals())
        parameters.pop('self')
        self.parameters = self.translate(parameters)


class SpikePairRule(synapses.SpikePairRule):

    translations = build_translations(
        ('tau_plus',  'tau_plus'),
        ('tau_minus', 'tau_minus'), # defined in post-synaptic neuron
    )
    possible_models = set(['stdp_synapse']) #,'stdp_synapse_hom'])

    def __init__(self, tau_plus=20.0, tau_minus=20.0):
        #synapses.SpikePairRule.__init__(self, tau_plus, tau_minus)
        parameters = dict(locals())
        parameters.pop('self')
        self.parameters = self.translate(parameters)
PyNN-0.7.4/src/nest/cells.py0000644000175000017500000000410111736323051016343 0ustar andrewandrew00000000000000"""
Definition of NativeCellType class for NEST.

:copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS.
:license: CeCILL, see LICENSE for details.
""" import nest from pyNN.models import BaseCellType def get_defaults(model_name): defaults = nest.GetDefaults(model_name) variables = defaults.get('recordables', []) ignore = ['archiver_length', 'available', 'capacity', 'elementsize', 'frozen', 'instantiations', 'local', 'model', 'recordables', 'state', 't_spike', 'tau_minus', 'tau_minus_triplet', 'thread', 'vp', 'receptor_types', 'events'] default_params = {} default_initial_values = {} for name,value in defaults.items(): if name in variables: default_initial_values[name] = value elif name not in ignore: default_params[name] = value return default_params, default_initial_values def get_receptor_types(model_name): return nest.GetDefaults(model_name).get("receptor_types", ('excitatory', 'inhibitory')) def get_recordables(model_name): return nest.GetDefaults(model_name).get("recordables", []) def native_cell_type(model_name): """ Return a new NativeCellType subclass. """ assert isinstance(model_name, str) return type(model_name, (NativeCellType,), {'nest_model' : model_name}) class NativeCellType(BaseCellType): def __new__(cls, parameters): cls.default_parameters, cls.default_initial_values = get_defaults(cls.nest_model) cls.synapse_types = get_receptor_types(cls.nest_model) cls.injectable = ("V_m" in cls.default_initial_values) cls.recordable = get_recordables(cls.nest_model) + ['spikes'] cls.standard_receptor_type = (cls.synapse_types == ('excitatory', 'inhibitory')) cls.nest_name = {"on_grid": cls.nest_model, "off_grid": cls.nest_model} cls.conductance_based = ("g" in (s[0] for s in cls.recordable)) return super(NativeCellType, cls).__new__(cls) def get_receptor_type(self, name): return nest.GetDefaults(self.nest_model)["receptor_types"][name] PyNN-0.7.4/src/nest/__init__.py0000644000175000017500000004144611736323051017013 0ustar andrewandrew00000000000000# -*- coding: utf-8 -*- """ NEST v2 implementation of the PyNN API. :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. $Id: __init__.py 957 2011-05-03 13:44:15Z apdavison $ """ import nest from pyNN.nest import simulator from pyNN import common, recording, errors, space, __doc__ common.simulator = simulator recording.simulator = simulator if recording.MPI and (nest.Rank() != recording.mpi_comm.rank): raise Exception("MPI not working properly. 
Please make sure you import pyNN.nest before pyNN.random.")

import numpy
import os
import shutil
import logging
import tempfile
from pyNN.recording import files
from pyNN.nest.cells import NativeCellType, native_cell_type
from pyNN.nest.synapses import NativeSynapseDynamics, NativeSynapseMechanism
from pyNN.nest.standardmodels.cells import *
from pyNN.nest.connectors import *
from pyNN.nest.standardmodels.synapses import *
from pyNN.nest.electrodes import *
from pyNN.nest.recording import *
from pyNN.random import RandomDistribution
from pyNN import standardmodels

Set = set
tempdirs = []
NEST_SYNAPSE_TYPES = nest.Models(mtype='synapses')
STATE_VARIABLE_MAP = {"v": "V_m", "w": "w"}
logger = logging.getLogger("PyNN")

# ==============================================================================
#   Utility functions
# ==============================================================================

def list_standard_models():
    """Return a list of all the StandardCellType classes available for this simulator."""
    standard_cell_types = [obj for obj in globals().values() if isinstance(obj, type) and issubclass(obj, standardmodels.StandardCellType)]
    for cell_class in list(standard_cell_types): # iterate over a copy, since items may be removed below
        try:
            create(cell_class)
        except Exception, e:
            print "Warning: %s is defined, but produces the following error: %s" % (cell_class.__name__, e)
            standard_cell_types.remove(cell_class)
    return [obj.__name__ for obj in standard_cell_types]

def _discrepancy_due_to_rounding(parameters, output_values):
    """NEST rounds delays to the time step."""
    if 'delay' not in parameters:
        return False
    else:
        # the logic here is not the clearest, the aim was to keep
        # _set_connection() as simple as possible, but it might be better to
        # refactor the whole thing.
        input_delay = parameters['delay']
        if hasattr(output_values, "__len__"):
            output_delay = output_values[parameters.keys().index('delay')]
        else:
            output_delay = output_values
        return abs(input_delay - output_delay) < get_time_step()

# ==============================================================================
#   Functions for simulation set-up and control
# ==============================================================================

def setup(timestep=0.1, min_delay=0.1, max_delay=10.0, **extra_params):
    """
    Should be called at the very beginning of a script.

    extra_params contains any keyword arguments that are required by a given
    simulator but not by others.
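
    For this NEST backend, the extra_params recognized by the code below
    include: 'verbosity', 'spike_precision' ('on_grid' or 'off_grid'),
    'recording_precision', 'threads', 'rng_seeds' and 'rng_seeds_seed'.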
""" global tempdir common.setup(timestep, min_delay, max_delay, **extra_params) if 'verbosity' in extra_params: nest_verbosity = extra_params['verbosity'].upper() else: nest_verbosity = "WARNING" nest.sli_run("M_%s setverbosity" % nest_verbosity) if "spike_precision" in extra_params: simulator.state.spike_precision = extra_params["spike_precision"] if extra_params["spike_precision"] == 'off_grid': simulator.state.default_recording_precision = 15 nest.SetKernelStatus({'off_grid_spiking': simulator.state.spike_precision=='off_grid'}) if "recording_precision" in extra_params: simulator.state.default_recording_precision = extra_params["recording_precision"] # clear the sli stack, if this is not done --> memory leak cause the stack increases nest.sr('clear') # reset the simulation kernel nest.ResetKernel() # all NEST to erase previously written files (defaut with all the other simulators) nest.SetKernelStatus({'overwrite_files' : True}) # set tempdir tempdir = tempfile.mkdtemp() tempdirs.append(tempdir) # append tempdir to tempdirs list nest.SetKernelStatus({'data_path': tempdir,}) # set kernel RNG seeds num_threads = extra_params.get('threads') or 1 if 'rng_seeds' in extra_params: rng_seeds = extra_params['rng_seeds'] else: rng_seeds_seed = extra_params.get('rng_seeds_seed') or 42 rng = NumpyRNG(rng_seeds_seed) rng_seeds = (rng.rng.uniform(size=num_threads*num_processes())*100000).astype('int').tolist() logger.debug("rng_seeds = %s" % rng_seeds) nest.SetKernelStatus({'local_num_threads': num_threads, 'rng_seeds' : rng_seeds}) # set resolution nest.SetKernelStatus({'resolution': timestep}) # Set min_delay and max_delay for all synapse models for synapse_model in NEST_SYNAPSE_TYPES: nest.SetDefaults(synapse_model, {'delay' : min_delay, 'min_delay': min_delay, 'max_delay': max_delay}) simulator.connection_managers = [] simulator.reset() return rank() def end(compatible_output=True): """Do any necessary cleaning up before exiting.""" global tempdirs # And we postprocess the low level files opened by record() # and record_v() method for recorder in simulator.recorder_list: recorder.write(gather=True, compatible_output=compatible_output) for tempdir in tempdirs: shutil.rmtree(tempdir) tempdirs = [] simulator.recorder_list = [] def run(simtime): """Run the simulation for simtime ms.""" simulator.run(simtime) return get_current_time() reset = common.reset initialize = common.initialize # ============================================================================== # Functions returning information about the simulation state # ============================================================================== get_current_time = common.get_current_time get_time_step = common.get_time_step get_min_delay = common.get_min_delay get_max_delay = common.get_max_delay num_processes = common.num_processes rank = common.rank # ============================================================================== # High-level API for creating, connecting and recording from populations of # neurons. # ============================================================================== class Population(common.Population): """ An array of neurons all of the same type. `Population' is used as a generic term intended to include layers, columns, nuclei, etc., of cells. """ recorder_class = Recorder def _create_cells(self, cellclass, cellparams, n): """ Create cells in NEST. `cellclass` -- a PyNN standard cell or the name of a native NEST model. `cellparams` -- a dictionary of cell parameters. 
`n` -- the number of cells to create """ # this method should never be called more than once # perhaps should check for that assert n > 0, 'n must be a positive integer' celltype = cellclass(cellparams) nest_model = celltype.nest_name[simulator.state.spike_precision] try: self.all_cells = nest.Create(nest_model, n, params=celltype.parameters) except nest.NESTError, err: if "UnknownModelName" in err.message and "cond" in err.message: raise errors.InvalidModelError("%s Have you compiled NEST with the GSL (Gnu Scientific Library)?" % err) raise errors.InvalidModelError(err) # create parrot neurons if necessary if hasattr(celltype, "uses_parrot") and celltype.uses_parrot: self.all_cells_source = numpy.array(self.all_cells) # we put the parrots into all_cells, since this will self.all_cells = nest.Create("parrot_neuron", n) # be used for connections and recording. all_cells_source nest.Connect(self.all_cells_source, self.all_cells) # should be used for setting parameters self.first_id = self.all_cells[0] self.last_id = self.all_cells[-1] self._mask_local = numpy.array(nest.GetStatus(self.all_cells, 'local')) self.all_cells = numpy.array([simulator.ID(gid) for gid in self.all_cells], simulator.ID) for gid in self.all_cells: gid.parent = self if hasattr(celltype, "uses_parrot") and celltype.uses_parrot: for gid, source in zip(self.all_cells, self.all_cells_source): gid.source = source def set(self, param, val=None): """ Set one or more parameters for every cell in the population. param can be a dict, in which case val should not be supplied, or a string giving the parameter name, in which case val is the parameter value. val can be a numeric value, or list of such (e.g. for setting spike times). e.g. p.set("tau_m",20.0). p.set({'tau_m':20,'v_rest':-65}) """ if isinstance(param, str): if isinstance(val, (str, float, int)): param_dict = {param: float(val)} elif isinstance(val, (list, numpy.ndarray)): param_dict = {param: [val]*len(self)} else: raise errors.InvalidParameterValueError elif isinstance(param, dict): param_dict = param else: raise errors.InvalidParameterValueError param_dict = self.celltype.checkParameters(param_dict, with_defaults=False) # The default implementation in common is is not very efficient for # simple and scaled parameters. # Should call nest.SetStatus(self.local_cells,...) for the parameters in # self.celltype.__class__.simple_parameters() and .scaled_parameters() # and keep the loop below just for the computed parameters. Even in this # case, it may be quicker to test whether the parameters participating # in the computation vary between cells, since if this is not the case # we can do the computation here and use nest.SetStatus. 
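        # Illustrative sketch (hypothetical values): for a scaled parameter
        # such as `cm` (nF -> pF, factor 1000.0 in the translations), a call
        # like p.set('cm', 0.5) reduces, via the loop below, to
        #     nest.SetStatus(gids.tolist(), {'C_m': 500.0})
        # whereas a computed parameter such as `tau_m` (mapped to g_L by
        # "cm/tau_m*1000.0") may differ from cell to cell and so is set
        # cell-by-cell in the second loop.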
if isinstance(self.celltype, standardmodels.StandardCellType): to_be_set = {} if hasattr(self.celltype, "uses_parrot") and self.celltype.uses_parrot: gids = self.all_cells_source[self._mask_local] else: gids = self.local_cells for key, value in param_dict.items(): if key in self.celltype.scaled_parameters(): translation = self.celltype.translations[key] value = eval(translation['forward_transform'], globals(), {key:value}) to_be_set[translation['translated_name']] = value elif key in self.celltype.simple_parameters(): translation = self.celltype.translations[key] to_be_set[translation['translated_name']] = value else: assert key in self.celltype.computed_parameters() logger.debug("Setting the following parameters: %s" % to_be_set) nest.SetStatus(gids.tolist(), to_be_set) for key, value in param_dict.items(): if key in self.celltype.computed_parameters(): logger.debug("Setting %s = %s" % (key, value)) for cell in self: cell.set_parameters(**{key:value}) else: nest.SetStatus(self.local_cells.tolist(), param_dict) def _set_initial_value_array(self, variable, value): if variable in STATE_VARIABLE_MAP: variable = STATE_VARIABLE_MAP[variable] nest.SetStatus(self.local_cells.tolist(), variable, value) PopulationView = common.PopulationView Assembly = common.Assembly class Projection(common.Projection): """ A container for all the connections of a given type (same synapse type and plasticity mechanisms) between two populations, together with methods to set parameters of those connections, including of plasticity mechanisms. """ nProj = 0 def __init__(self, presynaptic_population, postsynaptic_population, method, source=None, target=None, synapse_dynamics=None, label=None, rng=None): """ presynaptic_population and postsynaptic_population - Population objects. source - string specifying which attribute of the presynaptic cell signals action potentials target - string specifying which synapse on the postsynaptic cell to connect to If source and/or target are not given, default values are used. method - a Connector object, encapsulating the algorithm to use for connecting the neurons. synapse_dynamics - a `SynapseDynamics` object specifying which synaptic plasticity mechanisms to use. rng - specify an RNG object to be used by the Connector. """ common.Projection.__init__(self, presynaptic_population, postsynaptic_population, method, source, target, synapse_dynamics, label, rng) self.synapse_type = target or 'excitatory' if self.synapse_dynamics: synapse_dynamics = self.synapse_dynamics self.synapse_dynamics._set_tau_minus(self.post.local_cells) else: synapse_dynamics = NativeSynapseDynamics("static_synapse") self.synapse_model = synapse_dynamics._get_nest_synapse_model("projection_%d" % Projection.nProj) Projection.nProj += 1 self.connection_manager = simulator.ConnectionManager(self.synapse_type, self.synapse_model, parent=self) # Create connections method.connect(self) self.connection_manager._set_tsodyks_params() self.connections = self.connection_manager def saveConnections(self, file, gather=True, compatible_output=True): """ Save connections to file in a format suitable for reading in with a FromFileConnector. 
""" import operator if isinstance(file, basestring): file = files.StandardTextFile(file, mode='w') lines = nest.GetStatus(self.connection_manager.connections, ('source', 'target', 'weight', 'delay')) if gather == True and num_processes() > 1: all_lines = { rank(): lines } all_lines = recording.gather_dict(all_lines) if rank() == 0: lines = reduce(operator.add, all_lines.values()) elif num_processes() > 1: file.rename('%s.%d' % (file.name, rank())) logger.debug("--- Projection[%s].__saveConnections__() ---" % self.label) if gather == False or rank() == 0: lines = numpy.array(lines, dtype=float) lines[:,2] *= 0.001 if compatible_output: lines[:,0] = self.pre.id_to_index(lines[:,0]) lines[:,1] = self.post.id_to_index(lines[:,1]) file.write(lines, {'pre' : self.pre.label, 'post' : self.post.label}) file.close() def randomizeWeights(self, rand_distr): """ Set weights to random values taken from rand_distr. """ # Arguably, we could merge this with set_weights just by detecting the # argument type. It could make for easier-to-read simulation code to # give it a separate name, though. Comments? self.setWeights(rand_distr.next(len(self), mask_local=False)) def randomizeDelays(self, rand_distr): """ Set weights to random values taken from rand_distr. """ # Arguably, we could merge this with set_weights just by detecting the # argument type. It could make for easier-to-read simulation code to # give it a separate name, though. Comments? self.setDelays(rand_distr.next(len(self), mask_local=False)) Space = space.Space # ============================================================================== # Low-level API for creating, connecting and recording from individual neurons # ============================================================================== create = common.build_create(Population) connect = common.build_connect(Projection, FixedProbabilityConnector) set = common.set record = common.build_record('spikes', simulator) record_v = common.build_record('v', simulator) record_gsyn = common.build_record('gsyn', simulator) # ============================================================================== PyNN-0.7.4/src/nest/simulator.py0000644000175000017500000005211511736323051017266 0ustar andrewandrew00000000000000# encoding: utf-8 """ Implementation of the "low-level" functionality used by the common implementation of the API. Functions and classes useable by the common implementation: Functions: create_cells() run() Classes: ID Recorder ConnectionManager Connection Attributes: state -- a singleton instance of the _State class. recorder_list All other functions and classes are private, and should not be used by other modules. :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. $Id: simulator.py 957 2011-05-03 13:44:15Z apdavison $ """ import nest from pyNN import common, errors, core import logging import numpy import sys CHECK_CONNECTIONS = False recorder_list = [] recording_devices = [] connection_managers = [] global net net = None logger = logging.getLogger("PyNN") # --- For implementation of get_time_step() and similar functions -------------- class _State(object): """Represent the simulator state.""" def __init__(self): self.initialized = False self.running = False self.optimize = False self.spike_precision = "on_grid" self.default_recording_precision = 3 self._cache_num_processes = nest.GetKernelStatus()['num_processes'] # avoids blocking if only some nodes call num_processes # do the same for rank? 
@property def t(self): return nest.GetKernelStatus()['time'] dt = property(fget=lambda self: nest.GetKernelStatus()['resolution'], fset=lambda self, timestep: nest.SetKernelStatus({'resolution': timestep})) @property def min_delay(self): return nest.GetDefaults('static_synapse')['min_delay'] @property def max_delay(self): # any reason why not nest.GetKernelStatus()['min_delay']? return nest.GetDefaults('static_synapse')['max_delay'] @property def num_processes(self): return self._cache_num_processes @property def mpi_rank(self): return nest.Rank() @property def num_threads(self): return nest.GetKernelStatus()['local_num_threads'] def run(simtime): """Advance the simulation for a certain time.""" for device in recording_devices: device.connect_to_cells() if not state.running: simtime += state.dt # we simulate past the real time by one time step, otherwise NEST doesn't give us all the recorded data state.running = True nest.Simulate(simtime) def reset(): nest.ResetNetwork() nest.SetKernelStatus({'time': 0.0}) state.running = False # --- For implementation of access to individual neurons' parameters ----------- class ID(int, common.IDMixin): __doc__ = common.IDMixin.__doc__ def __init__(self, n): """Create an ID object with numerical value `n`.""" int.__init__(n) common.IDMixin.__init__(self) def get_native_parameters(self): """Return a dictionary of parameters for the NEST cell model.""" if "source" in self.__dict__: # self is a parrot_neuron gid = self.source else: gid = int(self) return nest.GetStatus([gid])[0] def set_native_parameters(self, parameters): """Set parameters of the NEST cell model from a dictionary.""" if hasattr(self, "source"): # self is a parrot_neuron gid = self.source else: gid = self try: nest.SetStatus([gid], [parameters]) except: # I can't seem to catch the NESTError that is raised, hence this roundabout way of doing it. exc_type, exc_value, traceback = sys.exc_info() if exc_type == 'NESTError' and "Unsupported Numpy array type" in exc_value: raise errors.InvalidParameterValueError() else: raise # --- For implementation of connect() and Connector classes -------------------- class Connection(object): """ Provide an interface that allows access to the connection's weight, delay and other attributes. """ def __init__(self, parent, index): """ Create a new connection interface. `parent` -- a ConnectionManager instance. `index` -- the index of this connection in the parent. """ self.parent = parent self.index = index def id(self): """Return a tuple of arguments for `nest.GetConnection()`. 
""" return self.parent.connections[self.index] @property def source(self): """The ID of the pre-synaptic neuron.""" src = ID(nest.GetStatus([self.id()], 'source')[0]) src.parent = self.parent.parent.pre return src @property def target(self): """The ID of the post-synaptic neuron.""" tgt = ID(nest.GetStatus([self.id()], 'target')[0]) tgt.parent = self.parent.parent.post return tgt def _set_weight(self, w): nest.SetStatus([self.id()], 'weight', w*1000.0) def _get_weight(self): """Synaptic weight in nA or µS.""" w_nA = nest.GetStatus([self.id()], 'weight')[0] if self.parent.synapse_type == 'inhibitory' and common.is_conductance(self.target): w_nA *= -1 # NEST uses negative values for inhibitory weights, even if these are conductances return 0.001*w_nA def _set_delay(self, d): nest.SetStatus([self.id()], 'delay', d) def _get_delay(self): """Synaptic delay in ms.""" return nest.GetStatus([self.id()], 'delay')[0] weight = property(_get_weight, _set_weight) delay = property(_get_delay, _set_delay) def generate_synapse_property(name): def _get(self): return nest.GetStatus([self.id()], name)[0] def _set(self, val): nest.SetStatus([self.id()], name, val) return property(_get, _set) setattr(Connection, 'U', generate_synapse_property('U')) setattr(Connection, 'tau_rec', generate_synapse_property('tau_rec')) setattr(Connection, 'tau_facil', generate_synapse_property('tau_fac')) setattr(Connection, 'u0', generate_synapse_property('u0')) setattr(Connection, '_tau_psc', generate_synapse_property('tau_psc')) class ConnectionManager: """ Manage synaptic connections, providing methods for creating, listing, accessing individual connections. """ def __init__(self, synapse_type, synapse_model=None, parent=None): """ Create a new ConnectionManager. `synapse_type` -- the 'physiological type' of the synapse, e.g. 'excitatory' or 'inhibitory' `synapse_model` -- the NEST synapse model to be used for all connections created with this manager. `parent` -- the parent `Projection`, if any. """ global connection_managers self.sources = [] if synapse_model is None: self.synapse_model = 'static_synapse_%s' % id(self) nest.CopyModel('static_synapse', self.synapse_model) else: self.synapse_model = synapse_model self.synapse_type = synapse_type self.parent = parent if parent is not None: assert parent.synapse_model == self.synapse_model self._connections = None connection_managers.append(self) def __getitem__(self, i): """Return the `i`th connection on the local MPI node.""" if isinstance(i, int): if i < len(self): return Connection(self, i) else: raise IndexError("%d > %d" % (i, len(self)-1)) elif isinstance(i, slice): if i.stop < len(self): return [Connection(self, j) for j in range(i.start, i.stop, i.step or 1)] else: raise IndexError("%d > %d" % (i.stop, len(self)-1)) def __len__(self): """Return the number of connections on the local MPI node.""" return nest.GetDefaults(self.synapse_model)['num_connections'] def __iter__(self): """Return an iterator over all connections on the local MPI node.""" for i in range(len(self)): yield self[i] @property def connections(self): if self._connections is None: self.sources = numpy.unique(self.sources) self._connections = nest.FindConnections(self.sources, synapse_type=self.synapse_model) return self._connections def _set_tsodyks_params(self): if 'tsodyks' in self.synapse_model: # there should be a better way to do this. In particular, if the synaptic time constant is changed # after creating the Projection, tau_psc ought to be changed as well. 
assert self.synapse_type in ('excitatory', 'inhibitory'), "only basic synapse types support Tsodyks-Markram connections" logger.debug("setting tau_psc") targets = nest.GetStatus(self.connections, 'target') if self.synapse_type == 'inhibitory': param_name = self.parent.post.local_cells[0].celltype.translations['tau_syn_I']['translated_name'] if self.synapse_type == 'excitatory': param_name = self.parent.post.local_cells[0].celltype.translations['tau_syn_E']['translated_name'] tau_syn = nest.GetStatus(targets, (param_name)) nest.SetStatus(self.connections, 'tau_psc', tau_syn) def connect(self, source, targets, weights, delays): """ Connect a neuron to one or more other neurons. `source` -- the ID of the pre-synaptic cell. `targets` -- a list/1D array of post-synaptic cell IDs, or a single ID. `weight` -- a list/1D array of connection weights, or a single weight. Must have the same length as `targets`. `delays` -- a list/1D array of connection delays, or a single delay. Must have the same length as `targets`. """ # are we sure the targets are all on the current node? if core.is_listlike(source): assert len(source) == 1 source = source[0] if not core.is_listlike(targets): targets = [targets] assert len(targets) > 0 if self.synapse_type not in targets[0].celltype.synapse_types: raise errors.ConnectionError("User gave synapse_type=%s, synapse_type must be one of: %s" % ( self.synapse_type, "'"+"', '".join(st for st in targets[0].celltype.synapse_types or ['*No connections supported*']))+"'" ) weights = numpy.array(weights)*1000.0 # weights should be in nA or uS, but iaf_neuron uses pA and iaf_cond_neuron uses nS. # Using convention in this way is not ideal. We should # be able to look up the units used by each model somewhere. if self.synapse_type == 'inhibitory' and common.is_conductance(targets[0]): weights = -1*weights # NEST wants negative values for inhibitory weights, even if these are conductances if isinstance(weights, numpy.ndarray): weights = weights.tolist() elif isinstance(weights, float): weights = [weights] if isinstance(delays, numpy.ndarray): delays = delays.tolist() elif isinstance(delays, float): delays = [delays] if targets[0].celltype.standard_receptor_type: try: nest.DivergentConnect([source], targets, weights, delays, self.synapse_model) except nest.NESTError, e: raise errors.ConnectionError("%s. source=%s, targets=%s, weights=%s, delays=%s, synapse model='%s'" % ( e, source, targets, weights, delays, self.synapse_model)) else: for target, w, d in zip(targets, weights, delays): nest.Connect([source], [target], {'weight': w, 'delay': d, 'receptor_type': target.celltype.get_receptor_type(self.synapse_type)}) self._connections = None # reset the caching of the connection list, since this will have to be recalculated self.sources.append(source) def convergent_connect(self, sources, target, weights, delays): """ Connect one or more neurons to a single post-synaptic neuron. `sources` -- a list/1D array of pre-synaptic cell IDs, or a single ID. `target` -- the ID of the post-synaptic cell. `weight` -- a list/1D array of connection weights, or a single weight. Must have the same length as `targets`. `delays` -- a list/1D array of connection delays, or a single delay. Must have the same length as `targets`. """ # are we sure the targets are all on the current node? 
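        # Usage sketch (hypothetical IDs and values): a Connector typically
        # calls
        #     manager.convergent_connect([pre1, pre2], post,
        #                                weights=[0.004, 0.004], delays=[1.0, 1.0])
        # with weights in uS (conductance-based) or nA (current-based); the
        # conversion to NEST's nS/pA units happens below (factor 1000).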
if core.is_listlike(target): assert len(target) == 1 target = target[0] if not core.is_listlike(sources): sources = [sources] assert len(sources) > 0, sources if self.synapse_type not in ('excitatory', 'inhibitory', None): raise errors.ConnectionError("synapse_type must be 'excitatory', 'inhibitory', or None (equivalent to 'excitatory')") weights = numpy.array(weights)*1000.0# weights should be in nA or uS, but iaf_neuron uses pA and iaf_cond_neuron uses nS. # Using convention in this way is not ideal. We should # be able to look up the units used by each model somewhere. if self.synapse_type == 'inhibitory' and common.is_conductance(target): weights = -1*weights # NEST wants negative values for inhibitory weights, even if these are conductances if isinstance(weights, numpy.ndarray): weights = weights.tolist() elif isinstance(weights, float): weights = [weights] if isinstance(delays, numpy.ndarray): delays = delays.tolist() elif isinstance(delays, float): delays = [delays] try: nest.ConvergentConnect(sources, [target], weights, delays, self.synapse_model) except nest.NESTError, e: raise errors.ConnectionError("%s. sources=%s, target=%s, weights=%s, delays=%s, synapse model='%s'" % ( e, sources, target, weights, delays, self.synapse_model)) self._connections = None # reset the caching of the connection list, since this will have to be recalculated self.sources.extend(sources) def get(self, parameter_name, format): """ Get the values of a given attribute (weight or delay) for all connections in this manager. `parameter_name` -- name of the attribute whose values are wanted. `format` -- "list" or "array". Array format implicitly assumes that all connections belong to a single Projection. Return a list or a 2D Numpy array. The array element X_ij contains the attribute value for the connection from the ith neuron in the pre- synaptic Population to the jth neuron in the post-synaptic Population, if a single such connection exists. If there are no such connections, X_ij will be NaN. If there are multiple such connections, the summed value will be given, which makes some sense for weights, but is pretty meaningless for delays. """ if parameter_name not in ('weight', 'delay'): translated_name = None if self.parent.synapse_dynamics.fast and parameter_name in self.parent.synapse_dynamics.fast.translations: translated_name = self.parent.synapse_dynamics.fast.translations[parameter_name]["translated_name"] # this is a hack that works because there are no units conversions elif self.parent.synapse_dynamics.slow: for component_name in "timing_dependence", "weight_dependence", "voltage_dependence": component = getattr(self.parent.synapse_dynamics.slow, component_name) if component and parameter_name in component.translations: translated_name = component.translations[parameter_name]["translated_name"] break if translated_name: parameter_name = translated_name else: raise Exception("synapse type does not have an attribute '%s', or else this attribute is not accessible." 
% parameter_name) if format == 'list': values = nest.GetStatus(self.connections, parameter_name) if parameter_name == "weight": values = [0.001*val for val in values] elif format == 'array': value_arr = numpy.nan * numpy.ones((self.parent.pre.size, self.parent.post.size)) connection_parameters = nest.GetStatus(self.connections, ('source', 'target', parameter_name)) for conn in connection_parameters: # don't need to pass offset as arg, now we store the parent projection # (offset is always 0,0 for connections created with connect()) src, tgt, value = conn addr = self.parent.pre.id_to_index(src), self.parent.post.id_to_index(tgt) if numpy.isnan(value_arr[addr]): value_arr[addr] = value else: value_arr[addr] += value if parameter_name == 'weight': value_arr *= 0.001 if self.synapse_type == 'inhibitory' and common.is_conductance(self[0].target): value_arr *= -1 # NEST uses negative values for inhibitory weights, even if these are conductances values = value_arr else: raise Exception("format must be 'list' or 'array', actually '%s'" % format) return values def set(self, name, value): """ Set connection attributes for all connections on the local MPI node. `name` -- attribute name `value` -- the attribute numeric value, or a list/1D array of such values of the same length as the number of local connections, or a 2D array with the same dimensions as the connectivity matrix (as returned by `get(format='array')`). """ if not (numpy.isscalar(value) or core.is_listlike(value)): raise TypeError("Argument should be a numeric type (int, float...), a list, or a numpy array.") if isinstance(value, numpy.ndarray) and len(value.shape) == 2: value_list = [] connection_parameters = nest.GetStatus(self.connections, ('source', 'target')) for conn in connection_parameters: addr = self.parent.pre.id_to_index(conn['source']), self.parent.post.id_to_index(conn['target']) try: val = value[addr] except IndexError, e: raise IndexError("%s. addr=%s" % (e, addr)) if numpy.isnan(val): raise Exception("Array contains no value for synapse from %d to %d" % (c.source, c.target)) else: value_list.append(val) value = value_list if core.is_listlike(value): value = numpy.array(value) else: value = float(value) if name == 'weight': value *= 1000.0 if self.synapse_type == 'inhibitory' and common.is_conductance(self[0].target): value *= -1 # NEST wants negative values for inhibitory weights, even if these are conductances elif name == 'delay': pass else: #translation = self.parent.synapse_dynamics.reverse_translate({name: value}) #name, value = translation.items()[0] translated_name = None if self.parent.synapse_dynamics.fast: if name in self.parent.synapse_dynamics.fast.translations: translated_name = self.parent.synapse_dynamics.fast.translations[name]["translated_name"] # a hack if translated_name is None: if self.parent.synapse_dynamics.slow: for component_name in "timing_dependence", "weight_dependence", "voltage_dependence": component = getattr(self.parent.synapse_dynamics.slow, component_name) if component and name in component.translations: translated_name = component.translations[name]["translated_name"] break if translated_name: name = translated_name i = 0 try: nest.SetStatus(self.connections, name, value) except nest.NESTError, e: n = 1 if hasattr(value, '__len__'): n = len(value) raise Exception("%s. Trying to set %d values." 
% (e, n)) # --- Initialization, and module attributes ------------------------------------ state = _State() # a Singleton, so only a single instance ever exists del _State PyNN-0.7.4/src/nest/connectors.py0000644000175000017500000002013311736323051017417 0ustar andrewandrew00000000000000""" Connection method classes for nest :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. $Id: connectors.py 957 2011-05-03 13:44:15Z apdavison $ """ from pyNN import random, core from pyNN.connectors import Connector, AllToAllConnector, FixedProbabilityConnector, \ DistanceDependentProbabilityConnector, FixedNumberPreConnector, \ FixedNumberPostConnector, OneToOneConnector, SmallWorldConnector, \ FromListConnector, FromFileConnector, WeightGenerator, \ DelayGenerator, ProbaGenerator, DistanceMatrix, CSAConnector from pyNN.common import rank, num_processes import numpy from pyNN.space import Space class FastProbabilisticConnector(Connector): def __init__(self, projection, weights=0.0, delays=None, allow_self_connections=True, space=Space(), safe=True): Connector.__init__(self, weights, delays, space, safe) if isinstance(projection.rng, random.NativeRNG): raise Exception("Use of NativeRNG not implemented.") else: self.rng = projection.rng self.N = projection.pre.size idx = numpy.arange(self.N*rank(), self.N*(rank()+1)) self.M = num_processes()*self.N self.local = numpy.ones(self.N, bool) self.local_long = numpy.zeros(self.M, bool) self.local_long[idx] = True self.weights_generator = WeightGenerator(weights, self.local_long, projection, safe) self.delays_generator = DelayGenerator(delays, self.local_long, safe) self.probas_generator = ProbaGenerator(random.RandomDistribution('uniform',(0,1), rng=self.rng), self.local_long) self.distance_matrix = DistanceMatrix(projection.pre.positions, self.space, self.local) self.projection = projection self.candidates = projection.pre.all_cells self.allow_self_connections = allow_self_connections def _probabilistic_connect(self, tgt, p, n_connections=None, rewiring=None): """ Connect-up a Projection with connection probability p, where p may be either a float 0<=p<=1, or a dict containing a float array for each pre-synaptic cell, the array containing the connection probabilities for all the local targets of that pre-synaptic cell. 
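
        For example (illustrative): with p=0.1 each candidate source has a 10%
        chance of connecting to `tgt`; an array-valued p gives per-source
        probabilities, tested against the same uniform variates.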
""" if numpy.isscalar(p) and p == 1: precreate = numpy.arange(self.N) else: rarr = self.probas_generator.get(self.M) if not core.is_listlike(rarr) and numpy.isscalar(rarr): # if N=1, rarr will be a single number rarr = numpy.array([rarr]) precreate = numpy.where(rarr < p)[0] self.distance_matrix.set_source(tgt.position) if not self.allow_self_connections and self.projection.pre == self.projection.post: idx_tgt = numpy.where(self.candidates == tgt) if len(idx_tgt) > 0: i = numpy.where(precreate == idx_tgt[0]) if len(i) > 0: precreate = numpy.delete(precreate, i[0]) if (rewiring is not None) and (rewiring > 0): if not self.allow_self_connections and self.projection.pre == self.projection.post: i = numpy.where(self.candidates == tgt)[0] idx = numpy.delete(self.candidates, i) rarr = self.probas_generator.get(self.M)[precreate] rewired = numpy.where(rarr < rewiring)[0] N = len(rewired) if N > 0: new_idx = (len(idx)-1) * self.probas_generator.get(self.M)[precreate] precreate[rewired] = idx[new_idx.astype(int)] if (n_connections is not None) and (len(precreate) > 0): create = numpy.array([], int) while len(create) < n_connections: # if the number of requested cells is larger than the size of the # presynaptic population, we allow multiple connections for a given cell create = numpy.concatenate((create, self.projection.rng.permutation(precreate))) create = create[:n_connections] else: create = precreate sources = self.candidates[create] weights = self.weights_generator.get(self.M, self.distance_matrix, create) delays = self.delays_generator.get(self.M, self.distance_matrix, create) if len(sources) > 0: self.projection.connection_manager.convergent_connect(sources.tolist(), tgt, weights, delays) class FastAllToAllConnector(AllToAllConnector): __doc__ = AllToAllConnector.__doc__ def connect(self, projection): connector = FastProbabilisticConnector(projection, self.weights, self.delays, self.allow_self_connections, self.space, safe=self.safe) self.progressbar(len(projection.post.local_cells)) for count, tgt in enumerate(projection.post.local_cells): connector._probabilistic_connect(tgt, 1) self.progression(count) class FastFixedProbabilityConnector(FixedProbabilityConnector): __doc__ = FixedProbabilityConnector.__doc__ def connect(self, projection): connector = FastProbabilisticConnector(projection, self.weights, self.delays, self.allow_self_connections, self.space, safe=self.safe) self.progressbar(len(projection.post.local_cells)) for count, tgt in enumerate(projection.post.local_cells): connector._probabilistic_connect(tgt, self.p_connect) self.progression(count) class FastDistanceDependentProbabilityConnector(DistanceDependentProbabilityConnector): __doc__ = DistanceDependentProbabilityConnector.__doc__ def connect(self, projection): """Connect-up a Projection.""" connector = FastProbabilisticConnector(projection, self.weights, self.delays, self.allow_self_connections, self.space, safe=self.safe) proba_generator = ProbaGenerator(self.d_expression, connector.local) self.progressbar(len(projection.post.local_cells)) for count, tgt in enumerate(projection.post.local_cells): connector.distance_matrix.set_source(tgt.position) proba = proba_generator.get(connector.N, connector.distance_matrix) if proba.dtype == 'bool': proba = proba.astype(float) connector._probabilistic_connect(tgt, proba, self.n_connections) self.progression(count) class FixedNumberPreConnector(FixedNumberPreConnector): __doc__ = FixedNumberPreConnector.__doc__ def connect(self, projection): connector = 
FastProbabilisticConnector(projection, self.weights, self.delays, self.allow_self_connections, self.space, safe=self.safe) self.progressbar(len(projection.post.local_cells)) for count, tgt in enumerate(projection.post.local_cells): if hasattr(self, 'rand_distr'): n = self.rand_distr.next() else: n = self.n connector._probabilistic_connect(tgt, 1, n) self.progression(count) class FastSmallWorldConnector(SmallWorldConnector): __doc__ = SmallWorldConnector.__doc__ def connect(self, projection): """Connect-up a Projection.""" connector = FastProbabilisticConnector(projection, self.weights, self.delays, self.allow_self_connections, self.space, safe=self.safe) proba_generator = ProbaGenerator(self.d_expression, connector.local) self.progressbar(len(projection.post.local_cells)) for count, tgt in enumerate(projection.post.local_cells): connector.distance_matrix.set_source(tgt.position) proba = proba_generator.get(connector.N, connector.distance_matrix).astype(float) connector._probabilistic_connect(tgt, proba, self.n_connections, self.rewiring) self.progression(count) PyNN-0.7.4/src/nest/synapses.py0000644000175000017500000000361311736323051017113 0ustar andrewandrew00000000000000""" Definition of NativeSynapseType class for NEST :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. """ import nest from pyNN.models import BaseModelType, BaseSynapseDynamics DEFAULT_TAU_MINUS = 20.0 def get_defaults(model_name): defaults = nest.GetDefaults(model_name) ignore = ['delay', 'max_delay', 'min_delay', 'num_connections', 'num_connectors', 'receptor_type', 'synapsemodel', 'weight'] default_params = {} for name,value in defaults.items(): if name not in ignore: default_params[name] = value default_params['tau_minus'] = DEFAULT_TAU_MINUS return default_params class NativeSynapseDynamics(BaseSynapseDynamics): def __init__(self, model_name, parameters={}): cls = type(model_name, (NativeSynapseMechanism,), {'nest_model': model_name}) self.mechanism = cls(parameters) def _get_nest_synapse_model(self, suffix): defaults = self.mechanism.parameters.copy() defaults.pop("tau_minus") label = "%s_%s" % (self.mechanism.nest_model, suffix) nest.CopyModel(self.mechanism.nest_model, label, defaults) return label def _set_tau_minus(self, cells): if len(cells) > 0: if 'tau_minus' in nest.GetStatus([cells[0]])[0]: tau_minus = self.mechanism.parameters["tau_minus"] nest.SetStatus(cells.tolist(), [{'tau_minus': tau_minus}]) else: raise Exception("Postsynaptic cell model %s does not support STDP." % nest.GetStatus([cells[0]], "model")) class NativeSynapseMechanism(BaseModelType): def __new__(cls, parameters): cls.default_parameters = get_defaults(cls.nest_model) return super(NativeSynapseMechanism, cls).__new__(cls) PyNN-0.7.4/src/nest/recording.py0000644000175000017500000003140211736323051017217 0ustar andrewandrew00000000000000""" :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. 
""" import tempfile import os import numpy import logging import warnings import nest from pyNN import recording, common, errors from pyNN.nest import simulator VARIABLE_MAP = {'v': ['V_m'], 'gsyn': ['g_ex', 'g_in']} REVERSE_VARIABLE_MAP = {'V_m': 'v'} logger = logging.getLogger("PyNN") # --- For implementation of record_X()/get_X()/print_X() ----------------------- class RecordingDevice(object): """ Now that NEST introduced the multimeter, and does not allow a node to be connected to multiple multimeters, most of the functionality of `Recorder` has been moved to this class, while `Recorder` is a wrapper to maintain the fiction that each recorder only records a single variable. """ scale_factors = {'V_m': 1, 'g_ex': 0.001, 'g_in': 0.001} def __init__(self, device_type, to_memory=False): assert device_type in ("multimeter", "spike_detector") self.type = device_type self.device = nest.Create(device_type) device_parameters = {"withgid": True, "withtime": True} if self.type is 'multimeter': device_parameters["interval"] = common.get_time_step() else: device_parameters["precise_times"] = True device_parameters["precision"] = simulator.state.default_recording_precision if to_memory: device_parameters.update(to_file=False, to_memory=True) else: device_parameters.update(to_file=True, to_memory=False) try: nest.SetStatus(self.device, device_parameters) except nest.hl_api.NESTError, e: raise nest.hl_api.NESTError("%s. Parameter dictionary was: %s" % (e, device_parameters)) self.record_from = [] self._local_files_merged = False self._gathered = False self._connected = False self._all_ids = set([]) simulator.recording_devices.append(self) logger.debug("Created %s with parameters %s" % (device_type, device_parameters)) def __del__(self): for name in "_merged_file", "_gathered_file": if hasattr(self, name): getattr(self, name).close() def add_variables(self, *variables): assert self.type is "multimeter", "Can't add variables to a spike detector" self.record_from.extend(variables) nest.SetStatus(self.device, {'record_from': self.record_from}) def add_cells(self, new_ids): self._all_ids = self._all_ids.union(new_ids) def connect_to_cells(self): if not self._connected: ids = list(self._all_ids) if self.type is "spike_detector": nest.ConvergentConnect(ids, self.device, model='static_synapse') else: nest.DivergentConnect(self.device, ids, model='static_synapse') self._connected = True def in_memory(self): """Determine whether data is being recorded to memory.""" return nest.GetStatus(self.device, 'to_memory')[0] def events_to_array(self, events): """ Transform the NEST events dictionary (when recording to memory) to a Numpy array. """ ids = events['senders'] times = events['times'] if self.type == 'spike_detector': data = numpy.array((ids, times)).T else: data = [ids, times] for var in self.record_from: data.append(events[var]) data = numpy.array(data).T return data def scale_data(self, data): """ Scale the data to give appropriate units. """ scale_factors = [RecordingDevice.scale_factors.get(var, 1) for var in self.record_from] for i, scale_factor in enumerate(scale_factors): column = i+2 # first two columns are id and t, which should not be scaled. if scale_factor != 1: data[:, column] *= scale_factor return data def add_initial_values(self, data): """ Add initial values (NEST does not record the value at t=0). 
""" logger.debug("Prepending initial values to recorded data") initial_values = [] for id in self._all_ids: initial = [id, 0.0] for variable in self.record_from: variable = REVERSE_VARIABLE_MAP.get(variable, variable) try: initial.append(id.get_initial_value(variable)) except KeyError: initial.append(0.0) # unsatisfactory initial_values.append(initial) if initial_values: data = numpy.concatenate((initial_values, data)) return data def read_data_from_memory(self, gather, compatible_output): """ Return memory-recorded data. `gather` -- if True, gather data from all MPI nodes. `compatible_output` -- if True, transform the data into the PyNN standard format. """ data = nest.GetStatus(self.device,'events')[0] if compatible_output: data = self.events_to_array(data) data = self.scale_data(data) if gather and simulator.state.num_processes > 1: data = recording.gather(data) return data def read_local_data(self, compatible_output): """ Return file-recorded data from the local MPI node, merging data from different threads if applicable. The merged data is cached, to avoid the overhead of re-merging if the method is called again. """ # what if the method is called with different values of # `compatible_output`? Need to cache these separately. if self._local_files_merged: self._merged_file.seek(0) data = numpy.load(self._merged_file) else: d = nest.GetStatus(self.device)[0] if "filenames" in d: nest_files = d['filenames'] else: # indicates that run() has not been called. raise errors.NothingToWriteError("No recorder data. Have you called run()?") # possibly we can just keep on saving to the end of self._merged_file, instead of concatenating everything in memory logger.debug("Concatenating data from the following files: %s" % ", ".join(nest_files)) non_empty_nest_files = [filename for filename in nest_files if os.stat(filename).st_size > 0] if len(non_empty_nest_files) > 0: data_list = [numpy.loadtxt(nest_file) for nest_file in non_empty_nest_files] data = numpy.concatenate(data_list) if len(non_empty_nest_files) == 0 or data.size == 0: if self.type is "spike_detector": ncol = 2 else: ncol = 2 + len(self.record_from) data = numpy.empty([0, ncol]) if compatible_output and self.type is not "spike_detector": data = self.scale_data(data) data = self.add_initial_values(data) self._merged_file = tempfile.TemporaryFile() numpy.save(self._merged_file, data) self._local_files_merged = True return data def read_data(self, gather, compatible_output, always_local=False): """ Return file-recorded data. `gather` -- if True, gather data from all MPI nodes. `compatible_output` -- if True, transform the data into the PyNN standard format. Gathered data is cached, so the MPI communication need only be done once, even if the method is called multiple times. """ # what if the method is called with different values of # `compatible_output`? Need to cache these separately. 
        if gather and simulator.state.num_processes > 1:
            if self._gathered:
                logger.debug("Loading previously gathered data from cache")
                self._gathered_file.seek(0)
                data = numpy.load(self._gathered_file)
            else:
                local_data = self.read_local_data(compatible_output)
                if always_local:
                    data = local_data  # for always_local cells, no need to gather
                else:
                    logger.debug("Gathering data")
                    data = recording.gather(local_data)
                logger.debug("Caching gathered data")
                self._gathered_file = tempfile.TemporaryFile()
                numpy.save(self._gathered_file, data)
                self._gathered = True
        else:
            data = self.read_local_data(compatible_output)
        if len(data.shape) == 1:
            data = data.reshape((1, data.size))
        return data

    def read_subset(self, variables, gather, compatible_output, always_local=False):
        if self.in_memory():
            data = self.read_data_from_memory(gather, compatible_output)
        else:  # in file
            data = self.read_data(gather, compatible_output, always_local)
        indices = []
        for variable in variables:
            try:
                indices.append(self.record_from.index(variable))
            except ValueError:
                raise Exception("%s not recorded" % variable)
        columns = tuple([0, 1] + [index + 2 for index in indices])
        return data[:, columns]


class Recorder(recording.Recorder):
    """Encapsulates data and functions related to recording model variables."""

    scale_factors = {'spikes': 1,
                     'v': 1,
                     'gsyn': 0.001}  # units conversion

    def __init__(self, variable, population=None, file=None):
        __doc__ = recording.Recorder.__doc__
        recording.Recorder.__init__(self, variable, population, file)
        self._create_device()

    def _create_device(self):
        to_memory = (self.file is False)  # note file=None means we save to a temporary file, not to memory
        if self.variable == "spikes":
            self._device = RecordingDevice("spike_detector", to_memory)
        else:
            self._device = None
            for recorder in self.population.recorders.values():
                if hasattr(recorder, "_device") and recorder._device is not None and recorder._device.type == 'multimeter':
                    self._device = recorder._device
                    break
            if self._device is None:
                self._device = RecordingDevice("multimeter", to_memory)
            self._device.add_variables(*VARIABLE_MAP.get(self.variable, [self.variable]))

    def _record(self, new_ids):
        """Called by record()."""
        self._device.add_cells(new_ids)

    def _reset(self):
        """ """
        try:
            simulator.recording_devices.remove(self._device)
        except ValueError:
            pass
        self._create_device()

    def _get(self, gather=False, compatible_output=True, filter=None):
        """Return the recorded data as a Numpy array."""
        if self._device is None:
            raise errors.NothingToWriteError("No cells recorded, so no data to return")
        always_local = (hasattr(self.population.celltype, 'always_local') and
                        self.population.celltype.always_local)
        if self.variable == "spikes":
            data = self._device.read_data(gather, compatible_output, always_local)
        else:
            variables = VARIABLE_MAP.get(self.variable, [self.variable])
            data = self._device.read_subset(variables, gather, compatible_output, always_local)
        assert len(data.shape) == 2
        if not self._device._gathered:
            filtered_ids = self.filter_recorded(filter)
            mask = reduce(numpy.add, (data[:, 0] == id for id in filtered_ids))
            if len(data) > 0:
                data = data[mask]
        return data

    def _local_count(self, filter):
        N = {}
        if self._device.in_memory():
            events = nest.GetStatus(self._device.device, 'events')[0]
            for id in self.filter_recorded(filter):
                mask = events['senders'] == int(id)
                N[id] = len(events['times'][mask])
        else:
            spikes = self._get(gather=False, compatible_output=False, filter=filter)
            for id in self.filter_recorded(filter):
                N[id] = 0
            ids = numpy.sort(spikes[:, 0].astype(int))
            idx = numpy.unique(ids)
            left = numpy.searchsorted(ids, idx, 'left')
            right = numpy.searchsorted(ids, idx, 'right')
            for id, l, r in zip(idx, left, right):
                N[id] = r - l
        return N

simulator.Recorder = Recorder  # very inelegant. Need to rethink the module structure
PyNN-0.7.4/src/space.py0000644000175000017500000002611711762125107015375 0ustar andrewandrew00000000000000# encoding: utf-8
"""
Tools for performing spatial/topographical calculations.

Classes:
    Space - representation of a Cartesian space for use in calculating distances
    Line - represents a structure with neurons distributed evenly on a straight line.
    Grid2D - represents a structure with neurons distributed on a 2D grid.
    Grid3D - represents a structure with neurons distributed on a 3D grid.
    RandomStructure - represents a structure with neurons distributed randomly
                      within a given volume.
    Cuboid - representation of a cuboidal volume, for use with RandomStructure.
    Sphere - representation of a spherical volume, for use with RandomStructure.

:copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS.
:license: CeCILL, see LICENSE for details.
"""
# There must be some Python package out there that provides most of this stuff.

import numpy
import math
from operator import and_
from pyNN.random import NumpyRNG
from pyNN import descriptions
import logging

logger = logging.getLogger("PyNN")


def distance(src, tgt, mask=None, scale_factor=1.0, offset=0.0,
             periodic_boundaries=None):  # may need to add an offset parameter
    """
    Return the Euclidean distance between two cells.
    `mask` allows only certain dimensions to be considered, e.g.::
        * to ignore the z-dimension, use `mask=array([0,1])`
        * to ignore y, `mask=array([0,2])`
        * to just consider z-distance, `mask=array([2])`
    `scale_factor` allows for different units in the pre- and post-synaptic
    positions (the post-synaptic position is multiplied by this quantity).
    """
    d = src.position - scale_factor * (tgt.position + offset)
    if periodic_boundaries is not None:
        d = numpy.minimum(abs(d), periodic_boundaries - abs(d))
    if mask is not None:
        d = d[mask]
    return numpy.sqrt(numpy.dot(d, d))


class Space(object):
    """
    Class representing a space within which distances can be calculated. The
    space is Cartesian, may be 1-, 2- or 3-dimensional, and may have periodic
    boundaries in any of the dimensions.
    """

    AXES = {'x': [0], 'y': [1], 'z': [2],
            'xy': [0, 1], 'yz': [1, 2], 'xz': [0, 2],
            'xyz': range(3), None: range(3)}

    def __init__(self, axes=None, scale_factor=1.0, offset=0.0,
                 periodic_boundaries=None):
        """
        axes -- if not supplied, then the 3D distance is calculated. If
            supplied, axes should be a string containing the axes to be used,
            e.g. 'x', or 'yz'. axes='xyz' is the same as axes=None.
        scale_factor -- it may be that the pre and post populations use
            different units for position, e.g. degrees and µm. In this case,
            `scale_factor` can be specified, which is applied to the positions
            in the post-synaptic population.
        offset -- if the origins of the coordinate systems of the pre- and
            post-synaptic populations are different, `offset` can be used to
            adjust for this difference. The offset is applied before any
            scaling.
        periodic_boundaries -- either `None`, or a tuple giving the boundaries
            for each dimension, e.g. `((x_min, x_max), None, (z_min, z_max))`.
        """
        self.periodic_boundaries = periodic_boundaries
        self.axes = numpy.array(Space.AXES[axes])
        self.scale_factor = scale_factor
        self.offset = offset

    def distances(self, A, B, expand=False):
        """
        Calculate the distance matrix between two sets of coordinates, given
        the topology of the current space.
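        For example (an illustrative sketch; with a default Space this reduces
        to the plain 3D Euclidean distance, returned here as a (1, 1) array):

        >>> s = Space()
        >>> s.distances(numpy.array([0.0, 0.0, 0.0]), numpy.array([3.0, 4.0, 0.0]))
        array([[ 5.]])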
From http://projects.scipy.org/pipermail/numpy-discussion/2007-April/027203.html """ if len(A.shape) == 1: A = A.reshape(3, 1) if len(B.shape) == 1: B = B.reshape(3, 1) B = self.scale_factor*(B + self.offset) d = numpy.zeros((len(self.axes), A.shape[1], B.shape[1]), dtype=A.dtype) for i,axis in enumerate(self.axes): diff2 = A[axis,:,None] - B[axis, :] if self.periodic_boundaries is not None: boundaries = self.periodic_boundaries[axis] if boundaries is not None: range = boundaries[1]-boundaries[0] ad2 = abs(diff2) diff2 = numpy.minimum(ad2, range-ad2) diff2 **= 2 d[i] = diff2 if not expand: d = numpy.sum(d, 0) numpy.sqrt(d, d) return d def distance_generator(self, f, g): """ Return a function that calculates the distance matrix as a function of indices i,j, given two functions f(i) and g(j) that return coordinates. """ def distance_map(i, j): d = self.distances(f(i), g(j)) if d.shape[0] == 1: d = d[0,:] # arguably this transformation should go in distances() elif d.shape[1] == 1: d = d[:,0] return d return distance_map class BaseStructure(object): def __eq__(self, other): return reduce(and_, (getattr(self, attr) == getattr(other, attr) for attr in self.parameter_names)) def get_parameters(self): P = {} for name in self.parameter_names: P[name] = getattr(self, name) return P def describe(self, template='structure_default.txt', engine='default'): """ Returns a human-readable description of the network structure. The output may be customized by specifying a different template togther with an associated template engine (see ``pyNN.descriptions``). If template is None, then a dictionary containing the template context will be returned. """ context = {'name': self.__class__.__name__, 'parameters': self.get_parameters()} return descriptions.render(engine, template, context) class Line(BaseStructure): """ Represents a structure with neurons distributed evenly on a straight line. """ parameter_names = ("dx", "x0", "y", "z") def __init__(self, dx=1.0, x0=0.0, y=0.0, z=0.0): self.dx = dx self.x0 = x0 self.y = y self.z = z def generate_positions(self, n): x = self.dx*numpy.arange(n, dtype=float) + self.x0 y = numpy.zeros(n) + self.y z = numpy.zeros(n) + self.z return numpy.array((x,y,z)) class Grid2D(BaseStructure): """ Represents a structure with neurons distributed on a 2D grid. """ parameter_names = ("aspect_ratio", "dx", "dy", "x0", "y0", "fill_order") def __init__(self, aspect_ratio=1.0, dx=1.0, dy=1.0, x0=0.0, y0=0.0, z=0, fill_order="sequential"): """ aspect_ratio - ratio of the number of grid points per side (not the ratio of the side lengths, unless dx == dy) """ self.aspect_ratio = aspect_ratio assert fill_order in ('sequential', 'random') self.fill_order = fill_order self.dx = dx; self.dy = dy; self.x0 = x0; self.y0 = y0; self.z = z def calculate_size(self, n): nx = math.sqrt(n*self.aspect_ratio) if n%nx != 0: raise Exception("Invalid size") ny = n/nx return nx, ny def generate_positions(self, n): nx, ny = self.calculate_size(n) x,y,z = numpy.indices((nx,ny,1), dtype=float) x = self.x0 + self.dx*x.flatten() y = self.y0 + self.dy*y.flatten() z = self.z + z.flatten() positions = numpy.array((x,y,z)) # use column_stack, if we decide to switch from (3,n) to (n,3) if self.fill_order == 'sequential': return positions else: # random return numpy.random.permutation(positions.T).T class Grid3D(BaseStructure): """ Represents a structure with neurons distributed on a 3D grid. 
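    For example (an illustrative sketch): with the default aspect ratios, 27
    cells fill a 3 x 3 x 3 grid, and positions are returned as a (3, n) array:

    >>> Grid3D().generate_positions(27).shape
    (3, 27)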
""" parameter_names = ("aspect_ratios", "dx", "dy", "dz", "x0", "y0", "z0", "fill_order") def __init__(self, aspect_ratioXY=1.0, aspect_ratioXZ=1.0, dx=1.0, dy=1.0, dz=1.0, x0=0.0, y0=0.0, z0=0, fill_order="sequential"): """ If fill_order is 'sequential', the z-index will be filled first, then y then x, i.e. the first cell will be at (0,0,0) (given default values for the other arguments), the second at (0,0,1), etc. """ self.aspect_ratios = (aspect_ratioXY, aspect_ratioXZ) assert fill_order in ('sequential', 'random') self.fill_order = fill_order self.dx = dx; self.dy = dy; self.dz = dz self.x0 = x0; self.y0 = y0; self.z0 = z0 def calculate_size(self, n): a,b = self.aspect_ratios nx = int(round(math.pow(n*a*b, 1/3.0))) ny = int(round(nx/a)) nz = int(round(nx/b)) assert nx*ny*nz == n, str((nx, ny, nz, nx*ny*nz, n, a, b)) return nx, ny, nz def generate_positions(self, n): nx, ny, nz = self.calculate_size(n) x,y,z = numpy.indices((nx,ny,nz), dtype=float) x = self.x0 + self.dx*x.flatten() y = self.y0 + self.dy*y.flatten() z = self.z0 + self.dz*z.flatten() if self.fill_order == 'sequential': return numpy.array((x,y,z)) else: raise NotImplementedError class Shape(object): pass class Cuboid(Shape): """ Represents a cuboidal volume within which neurons may be distributed. """ def __init__(self, width, height, depth): """ height: extent in y direction width: extent in x direction depth: extent in z direction """ self.height = height self.width = width self.depth = depth def sample(self, n, rng): return 0.5*rng.uniform(-1, 1, size=(n,3)) * (self.width, self.height, self.depth) class Sphere(Shape): """ Represents a spherical volume within which neurons may be distributed. """ def __init__(self, radius): Shape.__init__(self) self.radius = radius def sample(self, n, rng): # this implementation is wasteful, as it throws away a lot of numbers, # but simple. More efficient implementations welcome. positions = numpy.empty((n,3)) i = 0 while i < n: candidate = rng.uniform(-1, 1, size=(1,3)) if (candidate**2).sum() < 1: positions[i] = candidate i += 1 return self.radius*positions class RandomStructure(BaseStructure): """ Represents a structure with neurons distributed randomly within a given volume. """ parameter_names = ('boundary', 'origin', 'rng') def __init__(self, boundary, origin=(0.0,0.0,0.0), rng=None): """ `boundary` - a subclass of Shape `origin` - the coordinates (x,y,z) of the centre of the volume. """ assert isinstance(boundary, Shape) assert len(origin) == 3 self.boundary = boundary self.origin = origin self.rng = rng or NumpyRNG() def generate_positions(self, n): return (numpy.array(self.origin) + self.boundary.sample(n, self.rng)).T # what about rotations? PyNN-0.7.4/src/random.py0000644000175000017500000002351111774601632015562 0ustar andrewandrew00000000000000""" Provides wrappers for several random number generators (RNGs), giving them all a common interface so that they can be used interchangeably in PyNN. Note however that we have so far made no effort to implement parameter translation, and parameter names/order may be different for the different RNGs. Classes: NumpyRNG - uses the numpy.random.RandomState RNG GSLRNG - uses the RNGs from the Gnu Scientific Library NativeRNG - indicates to the simulator that it should use it's own, built-in RNG RandomDistribution - produces random numbers from a specific distribution :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. 
$Id:random.py 188 2008-01-29 10:03:59Z apdavison $ """ import sys import logging import numpy.random try: import pygsl.rng have_gsl = True except (ImportError, Warning): have_gsl = False import time try: from mpi4py import MPI mpi_rank = MPI.COMM_WORLD.rank num_processes = MPI.COMM_WORLD.size except ImportError: MPI = None mpi_rank = 0 num_processes = 1 logger = logging.getLogger("PyNN") class AbstractRNG: """Abstract class for wrapping random number generators. The idea is to be able to use either simulator-native rngs, which may be more efficient, or a standard python rng, e.g. a numpy.random.RandomState object, which would allow the same random numbers to be used across different simulators, or simply to read externally-generated numbers from files.""" def __init__(self, seed=None): if seed is not None: assert isinstance(seed, int), "`seed` must be an int (< %d), not a %s" % (sys.maxint, type(seed).__name__) self.seed = seed # define some aliases self.random = self.next self.sample = self.next def next(self, n=1, distribution='uniform', parameters=[], mask_local=None): """Return n random numbers from the distribution. If n is 1, return a float, if n > 1, return a Numpy array, if n <= 0, raise an Exception.""" # arguably, rng.next() should return a float, rng.next(1) an array of length 1 raise NotImplementedError class WrappedRNG(AbstractRNG): def __init__(self, seed=None, parallel_safe=True): AbstractRNG.__init__(self, seed) self.parallel_safe = parallel_safe if self.seed is not None and not parallel_safe: self.seed += mpi_rank # ensure different nodes get different sequences if mpi_rank != 0: logger.warning("Changing the seed to %s on node %d" % (self.seed, mpi_rank)) def next(self, n=1, distribution='uniform', parameters=[], mask_local=None): """Return n random numbers from the distribution. If n >= 0, return a numpy array, if n < 0, raise an Exception.""" if n == 0: rarr = numpy.random.rand(0) # We return an empty array elif n > 0: if num_processes > 1 and not self.parallel_safe: # n is the number for the whole model, so if we do not care about # having exactly the same random numbers independent of the # number of processors (m), we only need generate n/m+1 per node # (assuming round-robin distribution of cells between processors) if mask_local is None: n = n/num_processes + 1 elif mask_local is not False: n = mask_local.sum() rarr = self._next(distribution, n, parameters) else: raise ValueError, "The sample number must be positive" if self.parallel_safe and num_processes > 1: if mask_local is False: # return all the numbers on all nodes pass elif mask_local is not None: # strip out the random numbers that # should be used on other processors. assert mask_local.size == n rarr = rarr[mask_local] else: raise Exception("For a parallel-safe RNG, mask_local must be either an array or False, not %s" % mask_local) if hasattr(rarr, '__len__') and len(rarr) == 1: return rarr[0] else: return rarr def __getattr__(self, name): """ This is to give the PyNN RNGs the same methods as the wrapped RNGs (numpy.random.RandomState or the GSL RNGs.) 
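        For example, ``NumpyRNG().permutation(numpy.arange(5))`` is delegated
        to the wrapped ``numpy.random.RandomState`` instance; the connectors
        rely on this when they call ``rng.permutation()``.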
""" return getattr(self.rng, name) class NumpyRNG(WrappedRNG): """Wrapper for the numpy.random.RandomState class (Mersenne Twister PRNG).""" def __init__(self, seed=None, parallel_safe=True): WrappedRNG.__init__(self, seed, parallel_safe) self.rng = numpy.random.RandomState() if self.seed is not None: self.rng.seed(self.seed) else: self.rng.seed() def _next(self, distribution, n, parameters): return getattr(self.rng, distribution)(size=n, *parameters) def describe(self): return "NumpyRNG() with seed %s for MPI rank %d (MPI processes %d). %s parallel safe." % ( self.seed, mpi_rank, z, self.parallel_safe and "Is" or "Not") class GSLRNG(WrappedRNG): """Wrapper for the GSL random number generators.""" def __init__(self, seed=None, type='mt19937', parallel_safe=True): if not have_gsl: raise ImportError, "GSLRNG: Cannot import pygsl" WrappedRNG.__init__(self, seed, parallel_safe) self.rng = getattr(pygsl.rng, type)() if self.seed is not None: self.rng.set(self.seed) else: self.seed = int(time.time()) self.rng.set(self.seed) def __getattr__(self, name): """This is to give GSLRNG the same methods as the GSL RNGs.""" return getattr(self.rng, name) def _next(self, distribution, n, parameters): p = parameters + [n] return getattr(self.rng, distribution)(*p) # should add a wrapper for the built-in Python random module. class NativeRNG(AbstractRNG): """ Signals that the simulator's own native RNG should be used. Each simulator module should implement a class of the same name which inherits from this and which sets the seed appropriately. """ def __str__(self): return "AbstractRNG(seed=%s)" % self.seed class RandomDistribution(object): """ Class which defines a next(n) method which returns an array of n random numbers from a given distribution. """ def __init__(self, distribution='uniform', parameters=[], rng=None, boundaries=None, constrain="clip"): """ If present, rng should be a NumpyRNG or GSLRNG object. distribution should be the name of a method supported by the underlying random number generator object. parameters should be a list or tuple containing the arguments expected by the underlying method in the correct order. named arguments are not yet supported. boundaries is a tuple (min, max) used to specify explicitly, for distribution like Gaussian, Gamma or others, hard boundaries for the parameters. If parameters are drawn outside those boundaries, the policy applied will depend on the constrain parameter. constrain control the policy for weights out of the specified boundaries. If "clip", random numbers are clipped to the boundaries. If "redraw", random numbers are drawn till they fall within the boundaries. Note that NumpyRNG and GSLRNG distributions may not have the same names, e.g., 'normal' for NumpyRNG and 'gaussian' for GSLRNG, and the arguments may also differ. 
""" self.name = distribution assert isinstance(parameters, (list, tuple, dict)), "The parameters argument must be a list or tuple or dict" self.parameters = parameters self.boundaries = boundaries if self.boundaries: self.min_bound = min(self.boundaries) self.max_bound = max(self.boundaries) self.constrain = constrain if rng: assert isinstance(rng, AbstractRNG), "rng must be a pyNN.random RNG object" self.rng = rng else: # use numpy.random.RandomState() by default self.rng = NumpyRNG() def next(self, n=1, mask_local=None): """Return n random numbers from the distribution.""" res = self.rng.next(n=n, distribution=self.name, parameters=self.parameters, mask_local=mask_local) if self.boundaries: if isinstance(res, numpy.float): res = numpy.array([res]) if self.constrain == "clip": return numpy.maximum(numpy.minimum(res, self.max_bound), self.min_bound) elif self.constrain == "redraw": # not sure how well this works with parallel_safe, mask_local if len(res) == 1: while not ((res > self.min_bound) and (res < self.max_bound)): res = self.rng.next(n=n, distribution=self.name, parameters=self.parameters, mask_local=mask_local) return res else: idx = numpy.where((res > self.max_bound) | (res < self.min_bound))[0] while len(idx) > 0: res[idx] = self.rng.next(n=n, distribution=self.name, parameters=self.parameters, mask_local=mask_local) idx = numpy.where((res > self.max_bound) | (res < self.min_bound))[0] return res else: raise Exception("This constrain method (%s) does not exist" %self.constrain) return res def __str__(self): return "RandomDistribution('%(name)s', %(parameters)s, %(rng)s)" % self.__dict__ PyNN-0.7.4/src/neuroml2.py0000644000175000017500000011341111766347722016054 0ustar andrewandrew00000000000000""" PyNN-->NeuroML v2 :copyright: Copyright 2006-2012 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. This file is based on neuroml.py written by Andrew Davison & has been updated for NeuroML v2.0 by Padraig Gleeson """ ''' For an overview of PyNN & NeuroML interoperability see http://www.neuroml.org/pynn.php This script is intended to map models sprcified in PyNN on to the equivalent representation in NeuroML v2.0. A valid NML2 file will be produced containing the cells, populations, etc. and a LEMS file will be created which imports this file and can run a simple simulation using the LEMS interpreter, see http://www.neuroml.org/neuroml2.php#libNeuroML Ideally... this will produce equivalent simulation results when a script is run using: python myPyNN.py nest python myPyNN.py neuron python myPyNN.py neuroml2 (followed by nml2 LEMS_PyNN2NeuroMLv2.xml) WORK IN PROGRESS! REQUIRES PyNN at tags/0.7.2/ To test this out get the 0.7 PyNN branch from SVN using: svn co https://neuralensemble.org/svn/PyNN/branch/0.7 pyNN cd pyNN sudo python setup.py install Contact p.gleeson@ucl.ac.uk for more details Features below depend on using the latest LEMS/libNeuroML code which includes the nml2 utility and the LEMS definitions of PyNN core models (IF_curr_alpha, SpikeSourcePoisson, etc.) in PyNN.xml. Get it from http://sourceforge.net/apps/trac/neuroml/browser/NeuroML2/ Currently supported features: Generation of valid NeuroML 2 file containing cells & populations & connections Export of simulation duration & dt & recorded populations in a LEMS file for running a basic simulation with simple num integration method (so use small dt!) 
Cell models impl: IF_curr_alpha, IF_curr_exp, IF_cond_exp, IF_cond_alpha, HH_cond_exp, EIF_cond_exp_isfa_ista, EIF_cond_alpha_isfa_ista Others: SpikeSourcePoisson, SpikeSourceArray Export of explicitly created Populations, export of populations created with create() Export of (instance based) list of conenctions in explicit Support for weight & delay on connections Missing/required: Other models todo: DCSource, StepCurrentSource, ACSource, NoisyCurrentSource Need to test >1 cells in a population Setting of initial values in Populations Support for populations some of whose cells have has their parameters modified Synapse dynamics (e.g. STDP) not yet implemented Desirable TODO: Generation of SED-ML file with simulation description Automated tests of equivalence between Neuron & Nest & generated LEMS ''' from pyNN import common, connectors, standardmodels, core from pyNN.standardmodels import cells import numpy import sys sys.path.append('/usr/lib/python%s/site-packages/oldxml' % sys.version[:3]) # needed for Ubuntu import xml.dom.minidom import logging logger = logging.getLogger("neuroml2") neuroml_ns = 'http://www.neuroml.org/schema/neuroml2' namespace_xsi = "http://www.w3.org/2001/XMLSchema-instance" neuroml_ver="v2alpha" neuroml_xsd="http://neuroml.svn.sourceforge.net/viewvc/neuroml/NeuroML2/Schemas/NeuroML2/NeuroML_"+neuroml_ver+".xsd" simulation_prefix = 'simulation_' network_prefix = 'network_' display_prefix = 'display_' line_prefix = 'line_' colours = ['#000000','#FF0000','#0000FF','#009b00','#ffc800','#8c6400','#ff00ff','#ffff00','#808080'] strict = False # ============================================================================== # Utility classes # ============================================================================== class ID(int, common.IDMixin): """ Instead of storing ids as integers, we store them as ID objects, which allows a syntax like: p[3,4].tau_m = 20.0 where p is a Population object. The question is, how big a memory/performance hit is it to replace integers with ID objects? """ def __init__(self, n): common.IDMixin.__init__(self) def get_native_parameters(self): """Return a dictionary of parameters for the NeuroML2 cell model.""" return self._cell def set_native_parameters(self, parameters): """Set parameters of the NeuroML2 cell model from a dictionary. for name, val in parameters.items(): setattr(self._cell, name, val)""" self._cell = parameters.copy() # ============================================================================== # Module-specific functions and classes (not part of the common API) # ============================================================================== def build_node(name_, text=None, **attributes): # we call the node name 'name_' because 'name' is a common attribute name (confused? 
I am) node = nml2doc.createElement(name_) for attr, value in attributes.items(): node.setAttribute(attr, str(value)) if text: node.appendChild(nml2doc.createTextNode(text)) return node def build_parameter_node(name, value): param_node = build_node('parameter', value=value) if name: param_node.setAttribute('name', name) group_node = build_node('group', 'all') param_node.appendChild(group_node) return param_node class IF_base(object): """Base class for integrate-and-fire neuron models.""" def build_nodes(self): cell_type = self.__class__.__name__ logger.debug("Building nodes for "+cell_type) #cell_node = build_node('component', type=self.__class__.__name__, id=self.label) cell_node = build_node(cell_type, id=self.label) for param in self.parameters.keys(): paral_val = str(self.parameters[param]) # TODO why is this broken for a in EIF_cond_exp_isfa_ista???? if "EIF_cond_" in cell_type and param is "a": paral_val = float(paral_val) paral_val = paral_val/1000. logger.debug("Setting param %s to %s"%(param, paral_val)) cell_node.setAttribute(param, str(paral_val)) ##TODO remove!! cell_node.setAttribute('v_init', '-65') doc_node = build_node('notes', "Component for PyNN %s cell type" % cell_type) cell_node.appendChild(doc_node) synapse_nodes = [] if 'cond_exp' in cell_type: synapse_nodes_e = build_node("expCondSynapse", id="syn_e_"+self.label) synapse_nodes_e.setAttribute("tau_syn",str(self.parameters["tau_syn_E"])) synapse_nodes_e.setAttribute("e_rev",str(self.parameters["e_rev_E"])) synapse_nodes.append(synapse_nodes_e) synapse_nodes_i = build_node("expCondSynapse", id="syn_i_"+self.label) synapse_nodes_i.setAttribute("tau_syn",str(self.parameters["tau_syn_I"])) synapse_nodes_i.setAttribute("e_rev",str(self.parameters["e_rev_I"])) synapse_nodes.append(synapse_nodes_i) elif 'cond_alpha' in cell_type: synapse_nodes_e = build_node("alphaCondSynapse", id="syn_e_"+self.label) synapse_nodes_e.setAttribute("tau_syn",str(self.parameters["tau_syn_E"])) synapse_nodes_e.setAttribute("e_rev",str(self.parameters["e_rev_E"])) synapse_nodes.append(synapse_nodes_e) synapse_nodes_i = build_node("alphaCondSynapse", id="syn_i_"+self.label) synapse_nodes_i.setAttribute("tau_syn",str(self.parameters["tau_syn_I"])) synapse_nodes_i.setAttribute("e_rev",str(self.parameters["e_rev_I"])) synapse_nodes.append(synapse_nodes_i) elif 'curr_exp' in cell_type: synapse_nodes_e = build_node("expCurrSynapse", id="syn_e_"+self.label) synapse_nodes_e.setAttribute("tau_syn",str(self.parameters["tau_syn_E"])) synapse_nodes.append(synapse_nodes_e) synapse_nodes_i = build_node("expCurrSynapse", id="syn_i_"+self.label) synapse_nodes_i.setAttribute("tau_syn",str(self.parameters["tau_syn_I"])) synapse_nodes.append(synapse_nodes_i) elif 'curr_alpha' in cell_type: synapse_nodes_e = build_node("alphaCurrSynapse", id="syn_e_"+self.label) synapse_nodes_e.setAttribute("tau_syn",str(self.parameters["tau_syn_E"])) synapse_nodes.append(synapse_nodes_e) synapse_nodes_i = build_node("alphaCurrSynapse", id="syn_i_"+self.label) synapse_nodes_i.setAttribute("tau_syn",str(self.parameters["tau_syn_I"])) synapse_nodes.append(synapse_nodes_i) return cell_node, synapse_nodes class NotImplementedModel(object): def __init__(self): if strict: raise Exception('Cell type %s is not available in NeuroML' % self.__class__.__name__) def build_nodes(self): cell_node = build_node(':not_implemented_cell', id=self.label) doc_node = build_node('notes', "PyNN %s cell type not implemented" % self.__class__.__name__) cell_node.appendChild(doc_node) return cell_node, [] # 
============================================================================== # Standard cells # ============================================================================== class IF_curr_exp(cells.IF_curr_exp, IF_base): """Leaky integrate and fire model with fixed threshold and decaying-exponential post-synaptic current. (Separate synaptic currents for excitatory and inhibitory synapses""" n = 0 translations = standardmodels.build_translations(*[(name, name) for name in cells.IF_curr_exp.default_parameters]) def __init__(self, parameters): cells.IF_curr_exp.__init__(self, parameters) self.label = '%s%d' % (self.__class__.__name__, self.__class__.n) self.synapse_type = "doub_exp_syn" self.__class__.n += 1 logger.debug("IF_curr_exp created") class IF_curr_alpha(cells.IF_curr_alpha, IF_base): """Leaky integrate and fire model with fixed threshold and alpha-function- shaped post-synaptic current.""" n = 0 translations = standardmodels.build_translations(*[(name, name) for name in cells.IF_curr_alpha.default_parameters]) def __init__(self, parameters): cells.IF_curr_alpha.__init__(self, parameters) self.label = '%s%d' % (self.__class__.__name__, self.__class__.n) self.synapse_type = "doub_exp_syn" self.__class__.n += 1 logger.debug("IF_curr_alpha created") class IF_cond_exp(cells.IF_cond_exp, IF_base): """Leaky integrate and fire model with fixed threshold and decaying-exponential post-synaptic conductance.""" n = 0 translations = standardmodels.build_translations(*[(name, name) for name in cells.IF_cond_exp.default_parameters]) def __init__(self, parameters): cells.IF_cond_exp.__init__(self, parameters) self.label = '%s%d' % (self.__class__.__name__, self.__class__.n) self.synapse_type = "doub_exp_syn" self.__class__.n += 1 logger.debug("IF_cond_exp created") class IF_cond_alpha(cells.IF_cond_alpha, IF_base): """Leaky integrate and fire model with fixed threshold and alpha-function- shaped post-synaptic conductance.""" n = 0 translations = standardmodels.build_translations(*[(name, name) for name in cells.IF_cond_alpha.default_parameters]) def __init__(self, parameters): cells.IF_cond_alpha.__init__(self, parameters) self.label = '%s%d' % (self.__class__.__name__, self.__class__.n) self.synapse_type = "alpha_syn" self.__class__.n += 1 logger.debug("IF_cond_alpha created") class EIF_cond_exp_isfa_ista(cells.EIF_cond_exp_isfa_ista, IF_base): """Exponential integrate and fire neuron with spike triggered and sub-threshold adaptation currents (isfa, ista reps.) according to: Brette R and Gerstner W (2005) Adaptive Exponential Integrate-and-Fire Model as an Effective Description of Neuronal Activity. J Neurophysiol 94:3637-3642.""" n = 0 translations = standardmodels.build_translations(*[(name, name) for name in cells.EIF_cond_exp_isfa_ista.default_parameters]) def __init__(self, parameters): cells.EIF_cond_exp_isfa_ista.__init__(self, parameters) self.label = '%s%d' % (self.__class__.__name__, self.__class__.n) self.synapse_type = "exp_syn" self.__class__.n += 1 logger.debug("EIF_cond_exp_isfa_ista created") class EIF_cond_alpha_isfa_ista(cells.EIF_cond_alpha_isfa_ista, IF_base): """Exponential integrate and fire neuron with spike triggered and sub-threshold adaptation currents (isfa, ista reps.) according to: Brette R and Gerstner W (2005) Adaptive Exponential Integrate-and-Fire Model as an Effective Description of Neuronal Activity. 
J Neurophysiol 94:3637-3642.""" n = 0 translations = standardmodels.build_translations(*[(name, name) for name in cells.EIF_cond_alpha_isfa_ista.default_parameters]) def __init__(self, parameters): cells.EIF_cond_alpha_isfa_ista.__init__(self, parameters) self.label = '%s%d' % (self.__class__.__name__, self.__class__.n) self.synapse_type = "alpha_syn" self.__class__.n += 1 logger.debug("EIF_cond_alpha_isfa_ista created") class HH_cond_exp(cells.HH_cond_exp, IF_base): """ Single-compartment Hodgkin-Huxley model.""" n = 0 translations = standardmodels.build_translations(*[(name, name) for name in cells.HH_cond_exp.default_parameters]) def __init__(self, parameters): cells.HH_cond_exp.__init__(self, parameters) self.label = '%s%d' % (self.__class__.__name__, self.__class__.n) self.synapse_type = "exp_syn" self.__class__.n += 1 logger.debug("HH_cond_exp created") class GenericModel(object): units_to_use = {} def build_nodes(self): logger.debug("Building nodes for "+self.__class__.__name__) model_node = build_node(self.__class__.__name__, id=self.label) for param in self.parameters.keys(): units = '' if param in self.units_to_use.keys(): units = self.units_to_use[param] model_node.setAttribute(param, str(self.parameters[param])+units) doc_node = build_node('notes', "Component for PyNN %s model type" % self.__class__.__name__) model_node.appendChild(doc_node) return model_node, [] class SpikeSourcePoisson(cells.SpikeSourcePoisson, GenericModel): """Spike source, generating spikes according to a Poisson process.""" n = 0 translations = standardmodels.build_translations(*[(name, name) for name in cells.SpikeSourcePoisson.default_parameters]) def __init__(self, parameters): cells.SpikeSourcePoisson.__init__(self, parameters) self.label = '%s%d' % (self.__class__.__name__, self.__class__.n) self.__class__.n += 1 self.units_to_use = {'start':'ms','duration':'ms','rate':'per_s'} logger.debug("SpikeSourcePoisson created: "+self.label) class SpikeSourceArray(cells.SpikeSourceArray, GenericModel): """Spike source generating spikes at the times given in the spike_times array.""" n = 0 translations = standardmodels.build_translations(*[(name, name) for name in cells.SpikeSourceArray.default_parameters]) def __init__(self, parameters): cells.SpikeSourceArray.__init__(self, parameters) self.label = '%s%d' % (self.__class__.__name__, self.__class__.n) self.__class__.n += 1 logger.debug("SpikeSourceArray created: "+self.label) def build_nodes(self): logger.debug("Building nodes for "+self.__class__.__name__) model_node = build_node('spikeArray', id=self.label) #doc_node = build_node('notes', "Component for PyNN %s model type" % self.__class__.__name__) #model_node.appendChild(doc_node) for spike in self.parameters['spike_times']: spike_node = build_node('spike', time="%fms"%spike) model_node.appendChild(spike_node) return model_node, [] # ============================================================================== # Functions for simulation set-up and control # ============================================================================== def setup(timestep=0.1, min_delay=0.1, max_delay=0.1, debug=False,**extra_params): logger.debug("setup() called, extra_params = "+str(extra_params)) """ Should be called at the very beginning of a script. extra_params contains any keyword arguments that are required by a given simulator but not by others. 
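        A typical usage sketch (assuming the module is importable as
        ``pyNN.neuroml2``; the file name shown is this module's default):

        >>> import pyNN.neuroml2 as sim
        >>> sim.setup(timestep=0.1, file="PyNN2NeuroMLv2.nml")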
""" global nml2doc, nml2file, lemsdoc, lemsfile, lemsNode, nml_id, population_holder, projection_holder, input_holder, cell_holder, channel_holder, neuromlNode, strict, dt population_holder = [] projection_holder = [] input_holder = [] cell_holder = [] if not extra_params.has_key('file'): nml2file = "PyNN2NeuroMLv2.nml" else: nml2file = extra_params['file'] nml_id = nml2file.split('.')[0] if isinstance(nml2file, basestring): nml2file = open(nml2file, 'w') if 'strict' in extra_params: strict = extra_params['strict'] dt = timestep nml2doc = xml.dom.minidom.Document() neuromlNode = nml2doc.createElementNS(neuroml_ns,'neuroml') neuromlNode.setAttribute("xmlns",neuroml_ns) neuromlNode.setAttribute('xmlns:xsi',namespace_xsi) neuromlNode.setAttribute('xsi:schemaLocation',neuroml_ns+" "+neuroml_xsd) neuromlNode.setAttribute('id',nml_id) nml2doc.appendChild(neuromlNode) lemsdoc = xml.dom.minidom.Document() lemsNode = lemsdoc.createElement('Lems') lemsdoc.appendChild(lemsNode) drNode = build_node('DefaultRun',component=simulation_prefix+nml_id) lemsNode.appendChild(drNode) coreNml2Files = ["NeuroMLCoreDimensions.xml","PyNN.xml","Networks.xml","Simulation.xml"] for f in coreNml2Files: incNode = build_node('Include', file="NeuroML2CoreTypes/"+f) lemsNode.appendChild(incNode) incNode = build_node('Include', file=nml2file.name) lemsNode.appendChild(incNode) global simNode, displayNode simNode = build_node('Simulation', id=simulation_prefix+nml_id, step=str(dt)+"ms", target=network_prefix+nml_id) lemsNode.appendChild(simNode) displayNode = build_node('Display',id="display_0",title="Recording of PyNN model run in LEMS", timeScale="1ms") simNode.appendChild(displayNode) lemsfile = "LEMS_"+nml_id+".xml" if isinstance(lemsfile, basestring): lemsfile = open(lemsfile, 'w') return 0 def end(compatible_output=True): """Do any necessary cleaning up before exiting.""" global nml2doc, nml2file, neuromlNode, nml_id for cellNode in cell_holder: neuromlNode.appendChild(cellNode) network_node = build_node('network', id=network_prefix+nml_id) neuromlNode.appendChild(network_node) for holder in population_holder, projection_holder, input_holder: for node in holder: network_node.appendChild(node) # Write the files logger.debug("Writing NeuroML 2 structure to: "+nml2file.name) nml2file.write(nml2doc.toprettyxml()) nml2file.close() logger.debug("Writing LEMS file to: "+lemsfile.name) lemsfile.write(lemsdoc.toprettyxml()) lemsfile.close() print("\nThe file: "+lemsfile.name+" has been generated. This can be executed with libNeuroML utility nml2 (which wraps the LEMS Interpreter), i.e.") print("\n nml2 "+lemsfile.name) print("\nFor more details see: http://www.neuroml.org/neuroml2.php#libNeuroML\n") def run(simtime): """Run the simulation for simtime ms.""" global simNode simNode.setAttribute('length', str(simtime)+"ms") def get_min_delay(): return 0.0 common.get_min_delay = get_min_delay def num_processes(): return 1 common.num_processes = num_processes def rank(): return 0 common.rank = rank # ============================================================================== # High-level API for creating, connecting and recording from populations of # neurons. # ============================================================================== class Population(common.Population): """ An array of neurons all of the same type. `Population' is used as a generic term intended to include layers, columns, nuclei, etc., of cells. 
""" n = 0 def __init__(self, size, cellclass, cellparams=None, structure=None, label=None): __doc__ = common.Population.__doc__ common.Population.__init__(self, size, cellclass, cellparams, structure, label) ###simulator.initializer.register(self) def _create_cells(self, cellclass, cellparams, n): """ Create a population of neurons all of the same type. `cellclass` -- a PyNN standard cell `cellparams` -- a dictionary of cell parameters. `n` -- the number of cells to create """ global population_holder, cell_holder, channel_holder assert n > 0, 'n must be a positive integer' self.celltype = cellclass(cellparams) Population.n += 1 self.celltype.label = 'cell_%s' % (self.label) population_node = build_node('population', id=self.label, component=self.celltype.label, size=self.size) #celltype_node = build_node('cell_type', self.celltype.label) instances_node = build_node('instances', size=self.size) for i in range(self.size): x, y, z = self.positions[:, i] instance_node = build_node('instance', id=i) instance_node.appendChild( build_node('location', x=x, y=y, z=z) ) instances_node.appendChild(instance_node) #population_node.appendChild(node) population_holder.append(population_node) cell_node, synapse_nodes = self.celltype.build_nodes() cell_holder.append(cell_node) for syn_node in synapse_nodes: cell_holder.append(syn_node) # Add all channels first, then all synapses ''' for channel_node in channel_list: channel_holder_node.insertBefore(channel_node , channel_holder_node.firstChild) for synapse_node in synapse_list: channel_holder_node.appendChild(synapse_node)''' self.first_id = 0 self.last_id = self.size-1 self.all_cells = numpy.array([ID(id) for id in range(self.first_id, self.last_id+1)], dtype=ID) self._mask_local = numpy.ones_like(self.all_cells).astype(bool) self.first_id = self.all_cells[0] self.last_id = self.all_cells[-1] for id in self.all_cells: id.parent = self id._cell = self.celltype.parameters.copy() #self.local_cells = self.all_cells def _set_initial_value_array(self, variable, value): logger.debug("Population %s having %s initialised to: %s"%(self.label, variable, value)) # TODO: use this in generated XML for component... if variable is 'v': self.celltype.parameters['v_init'] = value def _record(self, variable, record_from=None, rng=None, to_file=True): """ Private method called by record() and record_v(). 
""" global simNode, displayNode, color #displayNode = build_node('Display',id=display_prefix+self.label,title="Recording of "+variable+" in "+self.label, timeScale="1ms") #simNode.appendChild(displayNode) scale = "1" #if variable == 'v': scale = "1mV" colour = colours[displayNode.childNodes.length%len(colours)] for i in range(self.size): lineNode = build_node('Line', id=line_prefix+self.label, scale=scale, color=colour, quantity="%s[%i]/%s"%(self.label,i,variable), save="%s_%i_%s_nml2.dat"%(self.label,i,variable)) displayNode.appendChild(lineNode) def meanSpikeCount(self): return -1 def printSpikes(self, file, gather=True, compatible_output=True): pass def print_v(self, file, gather=True, compatible_output=True): pass ''' class AllToAllConnector(connectors.AllToAllConnector): def connect(self, projection): connectivity_node = build_node('connectivity_pattern') connectivity_node.appendChild( build_node('all_to_all', allow_self_connections=int(self.allow_self_connections)) ) return connectivity_node class OneToOneConnector(connectors.OneToOneConnector): def connect(self, projection): connectivity_node = build_node('connectivity_pattern') connectivity_node.appendChild( build_node('one_to_one') ) return connectivity_node class FixedProbabilityConnector(connectors.FixedProbabilityConnector): def connect(self, projection): connectivity_node = build_node('connectivity_pattern') connectivity_node.appendChild( build_node('fixed_probability', probability=self.p_connect, allow_self_conections=int(self.allow_self_connections)) ) return connectivity_node ''' FixedProbabilityConnector = connectors.FixedProbabilityConnector AllToAllConnector = connectors.AllToAllConnector OneToOneConnector = connectors.OneToOneConnector CSAConnector = connectors.CSAConnector class FixedNumberPreConnector(connectors.FixedNumberPreConnector): def connect(self, projection): if hasattr(self, "n"): connectivity_node = build_node('connectivity_pattern') connectivity_node.appendChild( build_node('per_cell_connection', num_per_source=self.n, direction="PreToPost", allow_self_connections = int(self.allow_self_connections)) ) return connectivity_node else: raise Exception('Connection with variable connection number not implemented.') class FixedNumberPostConnector(connectors.FixedNumberPostConnector): def connect(self, projection): if hasattr(self, "n"): connectivity_node = build_node('connectivity_pattern') connectivity_node.appendChild( build_node('per_cell_connection', num_per_source=self.n, direction="PostToPre", allow_self_connections = int(self.allow_self_connections)) ) return connectivity_node else: raise Exception('Connection with variable connection number not implemented.') class FromListConnector(connectors.FromListConnector): def connect(self, projection): connections_node = build_node('connections') for i in xrange(len(self.conn_list)): src, tgt, weight, delay = self.conn_list[i][:] src = self.pre[tuple(src)] tgt = self.post[tuple(tgt)] connection_node = build_node('connection', id=i) connection_node.appendChild( build_node('pre', cell_id=src) ) connection_node.appendChild( build_node('post', cell_id=tgt) ) connection_node.appendChild( build_node('properties', internal_delay=delay, weight=weight) ) connections_node.appendChild(connection_node) return connections_node class FromFileConnector(connectors.FromFileConnector): def connect(self, projection): # now open the file... 
f = open(self.filename,'r',10000) lines = f.readlines() f.close() # We read the file and gather all the data in a list of tuples (one per line) input_tuples = [] for line in lines: single_line = line.rstrip() src, tgt, w, d = single_line.split("\t", 4) src = "[%s" % src.split("[",1)[1] tgt = "[%s" % tgt.split("[",1)[1] input_tuples.append((eval(src), eval(tgt), float(w), float(d))) f.close() self.conn_list = input_tuples FromListConnector.connect(projection) class Projection(common.Projection): """ A container for all the connections of a given type (same synapse type and plasticity mechanisms) between two populations, together with methods to set parameters of those connections, including of plasticity mechanisms. """ n = 0 def __init__(self, presynaptic_population, postsynaptic_population, method, source=None, target=None, synapse_dynamics=None, label=None, rng=None): """ presynaptic_population and postsynaptic_population - Population objects. source - string specifying which attribute of the presynaptic cell signals action potentials target - string specifying which synapse on the postsynaptic cell to connect to If source and/or target are not given, default values are used. method - a Connector object, encapsulating the algorithm to use for connecting the neurons. synapse_dynamics - a `SynapseDynamics` object specifying which synaptic plasticity mechanisms to use. rng - specify an RNG object to be used by the Connector. """ global projection_holder common.Projection.__init__(self, presynaptic_population, postsynaptic_population, method, source, target, synapse_dynamics, label, rng) self.label = self.label or 'Projection%d' % Projection.n connection_method = method if target: self.synapse_type = target else: self.synapse_type = "ExcitatorySynapse" synapseComponent = "syn_" if self.synapse_type is "ExcitatorySynapse" or self.synapse_type is "excitatory": self.targetPort = "spike_in_E" synapseComponent = synapseComponent +"e_" elif self.synapse_type is "InhibitorySynapse" or self.synapse_type is "inhibitory": self.targetPort = "spike_in_I" synapseComponent = synapseComponent +"i_" else: self.targetPort = "spike_in" synapseComponent = synapseComponent +"cell_"+postsynaptic_population.label self.connection_manager = ConnectionManager(self.synapse_type, synapse_model=None, parent=self) self.connections = self.connection_manager ## Create connections method.connect(self) logger.debug("init in Projection, %s, pre: %s, post %s"%(self.label, presynaptic_population.label, postsynaptic_population.label)) #projection_node = build_node('projection', id=self.label) for connection in self.connection_manager.connections: connection_node = build_node('synapticConnectionWD', to='%s[%i]'%(postsynaptic_population.label,connection[1]), synapse=synapseComponent) connection_node.setAttribute("from",'%s[%i]'%(presynaptic_population.label,connection[0])) connection_node.setAttribute("weight",str(connection[3][0])) connection_node.setAttribute("delay",str(connection[4][0])+"ms") projection_holder.append(connection_node) ''' projection_node.appendChild( build_node('source', self.pre.label) ) projection_node.appendChild( build_node('target', self.post.label) ) synapse_node = build_node('synapse_props') synapse_node.appendChild( build_node('synapse_type', self.synapse_type) ) synapse_node.appendChild( build_node('default_values', internal_delay=5, weight=1, threshold=-20) ) projection_node.appendChild(synapse_node) projection_node.appendChild( connection_method.connect(self) ) ''' 
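        # Each connection gathered above is exported as a NeuroML 2
        # <synapticConnectionWD> element, carrying its weight and its delay
        # (the delay is written with an explicit "ms" suffix).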
projection_holder.append(connection_node) Projection.n += 1 def saveConnections(self, filename, gather=True, compatible_output=True): pass def __len__(self): return 0 # needs implementing properly class ConnectionManager(object): """ Manage synaptic connections, providing methods for creating, listing, accessing individual connections. Based on ConnectionManager in moose/simulator.py """ def __init__(self, synapse_type, synapse_model=None, parent=None): """ Create a new ConnectionManager. `parent` -- the parent `Projection` """ assert parent is not None self.connections = [] self.parent = parent self.synapse_type = synapse_type self.synapse_model = synapse_model def connect(self, source, targets, weights, delays): """ Connect a neuron to one or more other neurons with a static connection. `source` -- the ID of the pre-synaptic cell. `targets` -- a list/1D array of post-synaptic cell IDs, or a single ID. `weight` -- a list/1D array of connection weights, or a single weight. Must have the same length as `targets`. `delays` -- a list/1D array of connection delays, or a single delay. Must have the same length as `targets`. """ if not isinstance(source, int) or source < 0: errmsg = "Invalid source ID: %s" % (source) raise errors.ConnectionError(errmsg) if not core.is_listlike(targets): targets = [targets] ##############weights = weights*1000.0 # scale units if isinstance(weights, float): weights = [weights] if isinstance(delays, float): delays = [delays] assert len(targets) > 0 # need to scale weights for appropriate units for target, weight, delay in zip(targets, weights, delays): if target.local: if not isinstance(target, common.IDMixin): raise errors.ConnectionError("Invalid target ID: %s" % target) #TODO record weights ''' if self.synapse_type == "excitatory": synapse_object = target._cell.esyn elif self.synapse_type == "inhibitory": synapse_object = target._cell.isyn else: synapse_object = getattr(target._cell, self.synapse_type) ###############source._cell.source.connect('event', synapse_object, 'synapse') synapse_object.n_incoming_connections += 1 index = synapse_object.n_incoming_connections - 1 synapse_object.setWeight(index, weight) synapse_object.setDelay(index, delay)''' index=0 self.connections.append((source, target, index, weights, delays)) def set(self, name, value): """ Set connection attributes for all connections in this manager. `name` -- attribute name `value` -- the attribute numeric value, or a list/1D array of such values of the same length as the number of local connections, or a 2D array with the same dimensions as the connectivity matrix (as returned by `get(format='array')`). """ #TODO: allow this!! #for conn in self.connections: #??? # ============================================================================== # Low-level API for creating, connecting and recording from individual neurons # ============================================================================== create = common.build_create(Population) connect = common.build_connect(Projection, FixedProbabilityConnector) set = common.set initialize = common.initialize ####record = common.build_record('spikes', simulator) ####record_v = common.build_record('v', simulator) ####record_gsyn = common.build_record('gsyn', simulator) def record(source, filename): """Record spikes to a file. source can be an individual cell or a list of cells.""" logger.debug("Being asked to record spikes of %s to %s"%(source, filename)) def record_v(source, filename): """Record membrane potential to a file. 
source can be an individual cell or a list of cells.""" logger.debug("Being asked to record v of %s to %s"%(source, filename)) global simNode, displayNode, color scale = "1" colour = colours[displayNode.childNodes.length%len(colours)] for i in range(source.size): lineNode = build_node('Line', id=line_prefix+source.label, scale=scale, color=colour, quantity="%s[%i]/%s"%(source.label,i,'v'), save="%s_%i_%s_nml2.dat"%(source.label,i,'v')) displayNode.appendChild(lineNode) def record_gsyn(source, filename): """Record gsyn.""" print "Being asked to record gsyn of %s to %s"%(source, filename) # ============================================================================== ## to reimplement in simulator.py... min_delay = 0.0 max_delay = 1e12 def get_min_delay(): """Return the minimum allowed synaptic delay.""" return min_delay def get_max_delay(): """Return the maximum allowed synaptic delay.""" return max_delay common.get_min_delay = get_min_delay common.get_max_delay = get_max_delayPyNN-0.7.4/src/standardmodels/0000755000175000017500000000000011774605211016727 5ustar andrewandrew00000000000000PyNN-0.7.4/src/standardmodels/cells.py0000644000175000017500000003320511736323051020403 0ustar andrewandrew00000000000000""" Definition of default parameters (and hence, standard parameter names) for standard cell models. Plain integrate-and-fire models: IF_curr_exp IF_curr_alpha IF_cond_exp IF_cond_alpha Integrate-and-fire with adaptation: IF_cond_exp_gsfa_grr EIF_cond_alpha_isfa_ista EIF_cond_exp_isfa_ista Integrate-and-fire model for use with the FACETS hardware IF_facets_hardware1 Hodgkin-Huxley model HH_cond_exp Spike sources (input neurons) SpikeSourcePoisson SpikeSourceArray SpikeSourceInhGamma :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. """ import numpy from pyNN.standardmodels import StandardCellType class IF_curr_alpha(StandardCellType): """ Leaky integrate and fire model with fixed threshold and alpha-function- shaped post-synaptic current. """ default_parameters = { 'v_rest' : -65.0, # Resting membrane potential in mV. 'cm' : 1.0, # Capacity of the membrane in nF 'tau_m' : 20.0, # Membrane time constant in ms. 'tau_refrac' : 0.1, # Duration of refractory period in ms. 'tau_syn_E' : 0.5, # Rise time of the excitatory synaptic alpha function in ms. 'tau_syn_I' : 0.5, # Rise time of the inhibitory synaptic alpha function in ms. 'i_offset' : 0.0, # Offset current in nA 'v_reset' : -65.0, # Reset potential after a spike in mV. 'v_thresh' : -50.0, # Spike threshold in mV. } recordable = ['spikes', 'v'] conductance_based = False default_initial_values = { 'v': -65.0, #'v_rest', } class IF_curr_exp(StandardCellType): """ Leaky integrate and fire model with fixed threshold and decaying-exponential post-synaptic current. (Separate synaptic currents for excitatory and inhibitory synapses. """ default_parameters = { 'v_rest' : -65.0, # Resting membrane potential in mV. 'cm' : 1.0, # Capacity of the membrane in nF 'tau_m' : 20.0, # Membrane time constant in ms. 'tau_refrac' : 0.1, # Duration of refractory period in ms. 'tau_syn_E' : 5.0, # Decay time of excitatory synaptic current in ms. 'tau_syn_I' : 5.0, # Decay time of inhibitory synaptic current in ms. 'i_offset' : 0.0, # Offset current in nA 'v_reset' : -65.0, # Reset potential after a spike in mV. 'v_thresh' : -50.0, # Spike threshold in mV. 
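        # Usage sketch (added; hypothetical values): defaults are overridden
        # per instance by passing a parameter dict, e.g.
        #     IF_curr_exp({'tau_m': 10.0, 'v_thresh': -55.0})
        # unspecified entries keep the default values above.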
} recordable = ['spikes', 'v'] conductance_based = False default_initial_values = { 'v': -65.0, #'v_rest', } class IF_cond_alpha(StandardCellType): """ Leaky integrate and fire model with fixed threshold and alpha-function- shaped post-synaptic conductance. """ default_parameters = { 'v_rest' : -65.0, # Resting membrane potential in mV. 'cm' : 1.0, # Capacity of the membrane in nF 'tau_m' : 20.0, # Membrane time constant in ms. 'tau_refrac' : 0.1, # Duration of refractory period in ms. 'tau_syn_E' : 0.3, # Rise time of the excitatory synaptic alpha function in ms. 'tau_syn_I' : 0.5, # Rise time of the inhibitory synaptic alpha function in ms. 'e_rev_E' : 0.0, # Reversal potential for excitatory input in mV 'e_rev_I' : -70.0, # Reversal potential for inhibitory input in mV 'v_thresh' : -50.0, # Spike threshold in mV. 'v_reset' : -65.0, # Reset potential after a spike in mV. 'i_offset' : 0.0, # Offset current in nA } recordable = ['spikes', 'v', 'gsyn'] default_initial_values = { 'v': -65.0, #'v_rest', } class IF_cond_exp(StandardCellType): """ Leaky integrate and fire model with fixed threshold and exponentially-decaying post-synaptic conductance. """ default_parameters = { 'v_rest' : -65.0, # Resting membrane potential in mV. 'cm' : 1.0, # Capacity of the membrane in nF 'tau_m' : 20.0, # Membrane time constant in ms. 'tau_refrac' : 0.1, # Duration of refractory period in ms. 'tau_syn_E' : 5.0, # Decay time of the excitatory synaptic conductance in ms. 'tau_syn_I' : 5.0, # Decay time of the inhibitory synaptic conductance in ms. 'e_rev_E' : 0.0, # Reversal potential for excitatory input in mV 'e_rev_I' : -70.0, # Reversal potential for inhibitory input in mV 'v_thresh' : -50.0, # Spike threshold in mV. 'v_reset' : -65.0, # Reset potential after a spike in mV. 'i_offset' : 0.0, # Offset current in nA } recordable = ['spikes', 'v', 'gsyn'] default_initial_values = { 'v': -65.0, #'v_rest', } class IF_cond_exp_gsfa_grr(StandardCellType): """ Linear leaky integrate and fire model with fixed threshold, decaying-exponential post-synaptic conductance, conductance based spike-frequency adaptation, and a conductance-based relative refractory mechanism. See: Muller et al (2007) Spike-frequency adapting neural ensembles: Beyond mean-adaptation and renewal theories. Neural Computation 19: 2958-3010. See also: EIF_cond_alpha_isfa_ista """ default_parameters = { 'v_rest' : -65.0, # Resting membrane potential in mV. 'cm' : 1.0, # Capacity of the membrane in nF 'tau_m' : 20.0, # Membrane time constant in ms. 'tau_refrac' : 0.1, # Duration of refractory period in ms. 'tau_syn_E' : 5.0, # Decay time of the excitatory synaptic conductance in ms. 'tau_syn_I' : 5.0, # Decay time of the inhibitory synaptic conductance in ms. 'e_rev_E' : 0.0, # Reversal potential for excitatory input in mV 'e_rev_I' : -70.0, # Reversal potential for inhibitory input in mV 'v_thresh' : -50.0, # Spike threshold in mV. 'v_reset' : -65.0, # Reset potential after a spike in mV. 
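        # Added note (inferred from Muller et al. 2007, as cited above): the
        # adaptation parameters below describe two conductances, each with
        # first-order decay plus a quantal increment at every spike, e.g. for
        # the spike-frequency adaptation conductance:
        #     dg_sfa/dt = -g_sfa/tau_sfa,   g_sfa -> g_sfa + q_sfa on a spike
        # and analogously for the relative refractory conductance (q_rr, tau_rr).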
'i_offset' : 0.0, # Offset current in nA 'tau_sfa' : 100.0, # Time constant of spike-frequency adaptation in ms 'e_rev_sfa' : -75.0, # spike-frequency adaptation conductance reversal potential in mV 'q_sfa' : 15.0, # Quantal spike-frequency adaptation conductance increase in nS 'tau_rr' : 2.0, # Time constant of the relative refractory mechanism in ms 'e_rev_rr' : -75.0, # relative refractory mechanism conductance reversal potential in mV 'q_rr' : 3000.0 # Quantal relative refractory conductance increase in nS } recordable = ['spikes', 'v', 'gsyn'] default_initial_values = { 'v': -65.0, #'v_rest', } class IF_facets_hardware1(StandardCellType): """ Leaky integrate and fire model with conductance-based synapses and fixed threshold as it is resembled by the FACETS Hardware Stage 1. The following parameters can be assumed for a corresponding software simulation: cm = 0.2 nF, tau_refrac = 1.0 ms, e_rev_E = 0.0 mV. For further details regarding the hardware model see the FACETS-internal Wiki: https://facets.kip.uni-heidelberg.de/private/wiki/index.php/WP7_NNM """ default_parameters = { 'g_leak' : 40.0, # nS 'tau_syn_E' : 30.0, # ms 'tau_syn_I' : 30.0, # ms 'v_reset' : -80.0, # mV 'e_rev_I' : -80.0, # mV, 'v_rest' : -65.0, # mV 'v_thresh' : -55.0 # mV } recordable = ['spikes', 'v', 'gsyn'] default_initial_values = { 'v': -65.0, #'v_rest', } class HH_cond_exp(StandardCellType): """Single-compartment Hodgkin-Huxley model. Reference: Traub & Miles, Neuronal Networks of the Hippocampus, Cambridge, 1991. """ default_parameters = { 'gbar_Na' : 20.0, # uS 'gbar_K' : 6.0, # uS 'g_leak' : 0.01, # uS 'cm' : 0.2, # nF 'v_offset' : -63.0, # mV 'e_rev_Na' : 50.0, 'e_rev_K' : -90.0, 'e_rev_leak': -65.0, 'e_rev_E' : 0.0, 'e_rev_I' : -80.0, 'tau_syn_E' : 0.2, # ms 'tau_syn_I' : 2.0, 'i_offset' : 0.0, # nA } recordable = ['spikes', 'v', 'gsyn'] default_initial_values = { 'v': -65.0, #'v_rest', } class EIF_cond_alpha_isfa_ista(StandardCellType): """ Exponential integrate and fire neuron with spike triggered and sub-threshold adaptation currents (isfa, ista reps.) according to: Brette R and Gerstner W (2005) Adaptive Exponential Integrate-and-Fire Model as an Effective Description of Neuronal Activity. J Neurophysiol 94:3637-3642 See also: IF_cond_exp_gsfa_grr, EIF_cond_exp_isfa_ista """ default_parameters = { 'cm' : 0.281, # Capacitance of the membrane in nF 'tau_refrac': 0.1, # Duration of refractory period in ms. 'v_spike' : -40.0, # Spike detection threshold in mV. 'v_reset' : -70.6, # Reset value for V_m after a spike. In mV. 'v_rest' : -70.6, # Resting membrane potential (Leak reversal potential) in mV. 'tau_m' : 9.3667, # Membrane time constant in ms 'i_offset' : 0.0, # Offset current in nA 'a' : 4.0, # Subthreshold adaptation conductance in nS. 'b' : 0.0805, # Spike-triggered adaptation in nA 'delta_T' : 2.0, # Slope factor in mV 'tau_w' : 144.0, # Adaptation time constant in ms 'v_thresh' : -50.4, # Spike initiation threshold in mV 'e_rev_E' : 0.0, # Excitatory reversal potential in mV. 'tau_syn_E' : 5.0, # Rise time of excitatory synaptic conductance in ms (alpha function). 'e_rev_I' : -80.0, # Inhibitory reversal potential in mV. 'tau_syn_I' : 5.0, # Rise time of the inhibitory synaptic conductance in ms (alpha function). } recordable = ['spikes', 'v', 'w', 'gsyn'] default_initial_values = { 'v': -65.0, #'v_rest', 'w': 0.0, } class EIF_cond_exp_isfa_ista(StandardCellType): """ Exponential integrate and fire neuron with spike triggered and sub-threshold adaptation currents (isfa, ista reps.) 
according to: Brette R and Gerstner W (2005) Adaptive Exponential Integrate-and-Fire Model as an Effective Description of Neuronal Activity. J Neurophysiol 94:3637-3642 See also: IF_cond_exp_gsfa_grr, EIF_cond_alpha_isfa_ista """ default_parameters = { 'cm' : 0.281, # Capacitance of the membrane in nF 'tau_refrac': 0.1, # Duration of refractory period in ms. 'v_spike' : -40.0, # Spike detection threshold in mV. 'v_reset' : -70.6, # Reset value for V_m after a spike. In mV. 'v_rest' : -70.6, # Resting membrane potential (Leak reversal potential) in mV. 'tau_m' : 9.3667, # Membrane time constant in ms 'i_offset' : 0.0, # Offset current in nA 'a' : 4.0, # Subthreshold adaptation conductance in nS. 'b' : 0.0805, # Spike-triggered adaptation in nA 'delta_T' : 2.0, # Slope factor in mV 'tau_w' : 144.0, # Adaptation time constant in ms 'v_thresh' : -50.4, # Spike initiation threshold in mV 'e_rev_E' : 0.0, # Excitatory reversal potential in mV. 'tau_syn_E' : 5.0, # Decay time constant of excitatory synaptic conductance in ms. 'e_rev_I' : -80.0, # Inhibitory reversal potential in mV. 'tau_syn_I' : 5.0, # Decay time constant of the inhibitory synaptic conductance in ms. } recordable = ['spikes', 'v', 'w', 'gsyn'] default_initial_values = { 'v': -65.0, #'v_rest', 'w': 0.0, } class SpikeSourcePoisson(StandardCellType): """Spike source, generating spikes according to a Poisson process.""" default_parameters = { 'rate' : 1.0, # Mean spike frequency (Hz) 'start' : 0.0, # Start time (ms) 'duration' : 1e10 # Duration of spike sequence (ms) } recordable = ['spikes'] injectable = False synapse_types = () class SpikeSourceInhGamma(StandardCellType): """ Spike source, generating realizations of an inhomogeneous gamma process, employing the thinning method. See: Muller et al (2007) Spike-frequency adapting neural ensembles: Beyond mean-adaptation and renewal theories. Neural Computation 19: 2958-3010. """ default_parameters = { 'a' : numpy.array([1.0]), # time histogram of parameter a of a gamma distribution (dimensionless) 'b' : numpy.array([1.0]), # time histogram of parameter b of a gamma distribution (seconds) 'tbins' : numpy.array([0.0]), # time bins of the time histogram of a,b in units of ms 'start' : 0.0, # Start time (ms) 'duration' : 1e10 # Duration of spike sequence (ms) } recordable = ['spikes'] injectable = False synapse_types = () class SpikeSourceArray(StandardCellType): """Spike source generating spikes at the times given in the spike_times array.""" default_parameters = { 'spike_times' : [] } # list or numpy array containing spike times in milliseconds. recordable = ['spikes'] injectable = False synapse_types = () def __init__(self, parameters): if parameters and 'spike_times' in parameters: parameters['spike_times'] = numpy.array(parameters['spike_times'], 'float') StandardCellType.__init__(self, parameters) PyNN-0.7.4/src/standardmodels/__init__.py0000644000175000017500000002613211736323051021041 0ustar andrewandrew00000000000000# encoding: utf-8 """ Machinery for implementation of "standard models", i.e. neuron and synapse models that are available in multiple simulators: Functions: build_translations() Classes: StandardModelType StandardCellType ModelNotAvailable SynapseDynamics ShortTermPlasticityMechanism STDPMechanism STDPWeightDependence STDPTimingDependence :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. 
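Example (added for illustration; 'E_L' and 'C_m' are hypothetical simulator
names, not taken from this file):

    >>> translations = build_translations(
    ...     ('v_rest', 'E_L'),           # simple renaming
    ...     ('cm',     'C_m', 1000.0),   # unit scaling, e.g. nF -> pF
    ... )
    >>> translations['cm']['translated_name']
    'C_m'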
""" from pyNN import descriptions, errors, models import numpy from pyNN.core import is_listlike # ============================================================================== # Standard cells # ============================================================================== def build_translations(*translation_list): """ Build a translation dictionary from a list of translations/transformations. """ translations = {} for item in translation_list: assert 2 <= len(item) <= 4, "Translation tuples must have between 2 and 4 items. Actual content: %s" % str(item) pynn_name = item[0] sim_name = item[1] if len(item) == 2: # no transformation f = pynn_name g = sim_name elif len(item) == 3: # simple multiplicative factor scale_factor = item[2] f = "float(%g)*%s" % (scale_factor, pynn_name) g = "%s/float(%g)" % (sim_name, scale_factor) elif len(item) == 4: # more complex transformation f = item[2] g = item[3] translations[pynn_name] = {'translated_name': sim_name, 'forward_transform': f, 'reverse_transform': g} return translations class StandardModelType(models.BaseModelType): """Base class for standardized cell model and synapse model classes.""" translations = {} def __init__(self, parameters): models.BaseModelType.__init__(self, parameters) assert set(self.translations.keys()) == set(self.default_parameters.keys()), \ "%s != %s" % (self.translations.keys(), self.default_parameters.keys()) self.parameters = self.__class__.translate(self.parameters) @classmethod def translate(cls, parameters): """Translate standardized model parameters to simulator-specific parameters.""" parameters = cls.checkParameters(parameters, with_defaults=False) native_parameters = {} for name in parameters: D = cls.translations[name] pname = D['translated_name'] if is_listlike(cls.default_parameters[name]): parameters[name] = numpy.array(parameters[name]) try: pval = eval(D['forward_transform'], globals(), parameters) except NameError, errmsg: raise NameError("Problem translating '%s' in %s. Transform: '%s'. Parameters: %s. %s" \ % (pname, cls.__name__, D['forward_transform'], parameters, errmsg)) except ZeroDivisionError: raise #pval = 1e30 # this is about the highest value hoc can deal with native_parameters[pname] = pval return native_parameters @classmethod def reverse_translate(cls, native_parameters): """Translate simulator-specific model parameters to standardized parameters.""" standard_parameters = {} for name,D in cls.translations.items(): if is_listlike(cls.default_parameters[name]): tname = D['translated_name'] native_parameters[tname] = numpy.array(native_parameters[tname]) try: standard_parameters[name] = eval(D['reverse_transform'], {}, native_parameters) except NameError, errmsg: raise NameError("Problem translating '%s' in %s. Transform: '%s'. Parameters: %s. 
%s" \ % (name, cls.__name__, D['reverse_transform'], native_parameters, errmsg)) return standard_parameters @classmethod def simple_parameters(cls): """Return a list of parameters for which there is a one-to-one correspondance between standard and native parameter values.""" return [name for name in cls.translations if cls.translations[name]['forward_transform'] == name] @classmethod def scaled_parameters(cls): """Return a list of parameters for which there is a unit change between standard and native parameter values.""" return [name for name in cls.translations if "float" in cls.translations[name]['forward_transform']] @classmethod def computed_parameters(cls): """Return a list of parameters whose values must be computed from more than one other parameter.""" return [name for name in cls.translations if name not in cls.simple_parameters()+cls.scaled_parameters()] def update_parameters(self, parameters): """ update self.parameters with those in parameters """ self.parameters.update(self.translate(parameters)) class StandardCellType(StandardModelType, models.BaseCellType): """Base class for standardized cell model classes.""" recordable = ['spikes', 'v', 'gsyn'] synapse_types = ('excitatory', 'inhibitory') always_local = False # override for NEST spike sources class ModelNotAvailable(object): """Not available for this simulator.""" def __init__(self, *args, **kwargs): raise NotImplementedError("The %s model is not available for this simulator." % self.__class__.__name__) # ============================================================================== # Synapse Dynamics classes # ============================================================================== class SynapseDynamics(models.BaseSynapseDynamics): """ For specifying synapse short-term (faciliation, depression) and long-term (STDP) plasticity. To be passed as the `synapse_dynamics` argument to `Projection.__init__()` or `connect()`. """ def __init__(self, fast=None, slow=None): """ Create a new specification for a dynamic synapse, combining a `fast` component (short-term facilitation/depression) and a `slow` component (long-term potentiation/depression). """ if fast: assert isinstance(fast, ShortTermPlasticityMechanism) if slow: assert isinstance(slow, STDPMechanism) assert 0 <= slow.dendritic_delay_fraction <= 1.0 self.fast = fast self.slow = slow def describe(self, template='synapsedynamics_default.txt', engine='default'): """ Returns a human-readable description of the synapse dynamics. The output may be customized by specifying a different template togther with an associated template engine (see ``pyNN.descriptions``). If template is None, then a dictionary containing the template context will be returned. """ context = {'fast': self.fast and self.fast.describe(template=None) or 'None', 'slow': self.slow and self.slow.describe(template=None) or 'None'} return descriptions.render(engine, template, context) class ShortTermPlasticityMechanism(StandardModelType): """Abstract base class for models of short-term synaptic dynamics.""" def __init__(self): raise NotImplementedError class STDPMechanism(object): """Specification of STDP models.""" def __init__(self, timing_dependence=None, weight_dependence=None, voltage_dependence=None, dendritic_delay_fraction=1.0): """ Create a new specification for an STDP mechanism, by combining a weight-dependence, a timing-dependence, and, optionally, a voltage- dependence. 
For point neurons, the synaptic delay `d` can be interpreted either as occurring purely in the pre-synaptic axon + synaptic cleft, in which case the synaptic plasticity mechanism 'sees' the post-synaptic spike immediately and the pre-synaptic spike after a delay `d` (`dendritic_delay_fraction = 0`) or as occurring purely in the post- synaptic dendrite, in which case the pre-synaptic spike is seen immediately, and the post-synaptic spike after a delay `d` (`dendritic_delay_fraction = 1`), or as having both pre- and post- synaptic components (`dendritic_delay_fraction` between 0 and 1). In a future version of the API, we will allow the different components of the synaptic delay to be specified separately in milliseconds. """ if timing_dependence: assert isinstance(timing_dependence, STDPTimingDependence) if weight_dependence: assert isinstance(weight_dependence, STDPWeightDependence) assert isinstance(dendritic_delay_fraction, (int, long, float)) assert 0 <= dendritic_delay_fraction <= 1 self.timing_dependence = timing_dependence self.weight_dependence = weight_dependence self.voltage_dependence = voltage_dependence self.dendritic_delay_fraction = dendritic_delay_fraction @property def possible_models(self): td = self.timing_dependence wd = self.weight_dependence pm = td.possible_models.intersection(wd.possible_models) if len(pm) == 1 : return list(pm)[0] elif len(pm) == 0 : raise errors.NoModelAvailableError("No available plasticity models") elif len(pm) > 1 : # we pass the set of models back to the simulator-specific module for it to deal with return pm @property def all_parameters(self): parameters = self.timing_dependence.parameters.copy() parameters.update(self.weight_dependence.parameters) return parameters def describe(self, template='stdpmechanism_default.txt', engine='default'): """ Returns a human-readable description of the STDP mechanism. The output may be customized by specifying a different template togther with an associated template engine (see ``pyNN.descriptions``). If template is None, then a dictionary containing the template context will be returned. """ context = {'weight_dependence': self.weight_dependence.describe(template=None), 'timing_dependence': self.timing_dependence.describe(template=None), 'voltage_dependence': self.voltage_dependence and self.voltage_dependence.describe(template=None) or None, 'dendritic_delay_fraction': self.dendritic_delay_fraction} return descriptions.render(engine, template, context) class STDPWeightDependence(StandardModelType): """Abstract base class for models of STDP weight dependence.""" def __init__(self): raise NotImplementedError class STDPTimingDependence(StandardModelType): """Abstract base class for models of STDP timing dependence (triplets, etc)""" def __init__(self): raise NotImplementedError PyNN-0.7.4/src/standardmodels/synapses.py0000644000175000017500000001647611736323051021161 0ustar andrewandrew00000000000000# encoding: utf-8 """ Definition of default parameters (and hence, standard parameter names) for standard dynamic synapse models. Classes for specifying short-term plasticity (facilitation/depression): TsodyksMarkramMechanism Classes for defining STDP rules: AdditiveWeightDependence MultiplicativeWeightDependence AdditivePotentiationMultiplicativeDepression GutigWeightDependence SpikePairRule :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. 
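Example (added for illustration; in practice these classes are used via a
simulator backend, e.g. pyNN.nest, since the versions defined here are
abstract):

    >>> fast = TsodyksMarkramMechanism(U=0.5, tau_rec=800.0, tau_facil=0.0)
    >>> dynamics = SynapseDynamics(fast=fast)   # passed to a Projection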
""" from pyNN.standardmodels import ShortTermPlasticityMechanism, STDPWeightDependence, STDPTimingDependence class TsodyksMarkramMechanism(ShortTermPlasticityMechanism): """ Synapse exhibiting facilitation and depression, implemented using the model of Tsodyks, Markram et al.: Tsodyks, Uziel, Markram (2000) Synchrony Generation in Recurrent Networks with Frequency-Dependent Synapses. Journal of Neuroscience, vol 20 RC50 Note that the time constant of the post-synaptic current is set in the neuron model, not here. """ default_parameters = { 'U': 0.5, # use parameter 'tau_rec': 100.0, # depression time constant (ms) 'tau_facil': 0.0, # facilitation time constant (ms) 'u0': 0.0, # } 'x0': 1.0, # } initial values 'y0': 0.0 # } } def __init__(self, U=0.5, tau_rec=100.0, tau_facil=0.0, u0=0.0, x0=1.0, y0=0.0): """ Create a new specification for a short-term plasticity mechanism. `U` -- use parameter `tau_rec` -- depression time constant (ms) `tau_facil` -- facilitation time constant (ms) `u0`, `x0`, `y0` -- initial conditions. """ raise NotImplementedError class AdditiveWeightDependence(STDPWeightDependence): """ The amplitude of the weight change is fixed for depression (`A_minus`) and for potentiation (`A_plus`). If the new weight would be less than `w_min` it is set to `w_min`. If it would be greater than `w_max` it is set to `w_max`. """ default_parameters = { 'w_min': 0.0, 'w_max': 1.0, 'A_plus': 0.01, 'A_minus': 0.01 } def __init__(self, w_min=0.0, w_max=1.0, A_plus=0.01, A_minus=0.01): # units? """ Create a new specification for the weight-dependence of an STDP rule. `w_min` -- minimum synaptic weight, in the same units as the weight, i.e. µS or nA. `w_max` -- maximum synaptic weight. `A_plus` -- synaptic weight increase as a fraction of `w_max` when the pre-synaptic spike precedes the post-synaptic spike by an infinitessimal amount. `A_minus` -- synaptic weight decrease as a fraction of `w_max` when the pre-synaptic spike lags the post-synaptic spike by an infinitessimal amount. """ raise NotImplementedError class MultiplicativeWeightDependence(STDPWeightDependence): """ The amplitude of the weight change depends on the current weight. For depression, Δw ∝ w - w_min For potentiation, Δw ∝ w_max - w """ default_parameters = { 'w_min' : 0.0, 'w_max' : 1.0, 'A_plus' : 0.01, 'A_minus': 0.01, } def __init__(self, w_min=0.0, w_max=1.0, A_plus=0.01, A_minus=0.01): """ Create a new specification for the weight-dependence of an STDP rule. `w_min` -- minimum synaptic weight, in the same units as the weight, i.e. µS or nA. `w_max` -- maximum synaptic weight. `A_plus` -- synaptic weight increase as a fraction of `w_max-w` when the pre-synaptic spike precedes the post-synaptic spike by an infinitessimal amount. `A_minus` -- synaptic weight decrease as a fraction of `w-w_min` when the pre-synaptic spike lags the post-synaptic spike by an infinitessimal amount. """ raise NotImplementedError class AdditivePotentiationMultiplicativeDepression(STDPWeightDependence): """ The amplitude of the weight change depends on the current weight for depression (Δw ∝ w) and is fixed for potentiation. """ default_parameters = { 'w_min' : 0.0, 'w_max' : 1.0, 'A_plus' : 0.01, 'A_minus': 0.01, } def __init__(self, w_min=0.0, w_max=1.0, A_plus=0.01, A_minus=0.01): """ Create a new specification for the weight-dependence of an STDP rule. `w_min` -- minimum synaptic weight, in the same units as the weight, i.e. µS or nA. `w_max` -- maximum synaptic weight. 
`A_plus` -- synaptic weight increase as a fraction of `w_max` when the pre-synaptic spike precedes the post-synaptic spike by an infinitessimal amount. `A_minus` -- synaptic weight decrease as a fraction of `w-w_min` when the pre-synaptic spike lags the post-synaptic spike by an infinitessimal amount. """ raise NotImplementedError class GutigWeightDependence(STDPWeightDependence): """ The amplitude of the weight change depends on (w_max-w)^mu_plus for potentiation and (w-w_min)^mu_minus for depression. """ default_parameters = { 'w_min' : 0.0, 'w_max' : 1.0, 'A_plus' : 0.01, 'A_minus' : 0.01, 'mu_plus' : 0.5, 'mu_minus': 0.5 } def __init__(self, w_min=0.0, w_max=1.0, A_plus=0.01, A_minus=0.01,mu_plus=0.5,mu_minus=0.5): """ Create a new specification for the weight-dependence of an STDP rule. `w_min` -- minimum synaptic weight, in the same units as the weight, i.e. µS or nA. `w_max` -- maximum synaptic weight. `A_plus` -- synaptic weight increase as a fraction of `(w_max-w)^mu_plus` when the pre-synaptic spike precedes the post-synaptic spike by an infinitessimal amount. `A_minus` -- synaptic weight decrease as a fraction of `(w-w_min)^mu_minus` when the pre-synaptic spike lags the post-synaptic spike by an infinitessimal amount. `mu_plus` -- see above `mu_minus` -- see above """ raise NotImplementedError # Not yet implemented for any module #class PfisterSpikeTripletRule(STDPTimingDependence): # raise NotImplementedError class SpikePairRule(STDPTimingDependence): """ The amplitude of the weight change depends only on the relative timing of spike pairs, not triplets, etc. """ default_parameters = { 'tau_plus': 20.0, 'tau_minus': 20.0, } def __init__(self, tau_plus=20.0, tau_minus=20.0): """ Create a new specification for the timing-dependence of an STDP rule. `tau_plus` -- time constant of the positive part of the STDP curve, in milliseconds. `tau_minus` -- time constant of the negative part of the STDP curve, in milliseconds. """ raise NotImplementedError #_abstract_method(self)PyNN-0.7.4/src/connectors2.py0000644000175000017500000010162411736323051016535 0ustar andrewandrew00000000000000import numpy, logging, sys from pyNN import errors, common, core, random, utility from pyNN.space import Space from pyNN.random import RandomDistribution from numpy import arccos, arcsin, arctan, arctan2, ceil, cos, cosh, e, exp, \ fabs, floor, fmod, hypot, ldexp, log, log10, modf, pi, power, \ sin, sinh, sqrt, tan, tanh, maximum, minimum logger = logging.getLogger("PyNN") class ConnectionAttributeGenerator(object): def __init__(self, source, local_mask, safe=True): self.source = source self.local_mask = local_mask self.local_size = local_mask.sum() self.safe = safe if self.safe: self.get = self.get_safe if isinstance(self.source, numpy.ndarray): self.source_iterator = iter(self.source) def check(self, data): return data def extract(self, N, distance_matrix=None, sub_mask=None): #local_mask is supposed to be a mask of booleans, while #sub_mask is a list of cells ids. 
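        #
        # Added summary of the dispatch below: `source` may be
        #   - a string expression in `d` (e.g. "exp(-d/50.0)", hypothetical),
        #     evaluated against the distance matrix;
        #   - a function f(d);
        #   - a scalar, broadcast to all connections;
        #   - a RandomDistribution, sampled per connection;
        #   - a numpy array, consumed one row at a time.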
if isinstance(self.source, basestring): assert distance_matrix is not None d = distance_matrix.as_array(sub_mask) values = eval(self.source) return values elif hasattr(self.source, 'func_name'): assert distance_matrix is not None d = distance_matrix.as_array(sub_mask) values = self.source(d) return values elif numpy.isscalar(self.source): if sub_mask is None: values = numpy.ones((self.local_size,))*self.source else: values = numpy.ones((len(sub_mask),))*self.source return values elif isinstance(self.source, RandomDistribution): if sub_mask is None: values = self.source.next(N, mask_local=self.local_mask) else: values = self.source.next(len(sub_mask), mask_local=self.local_mask) return values elif isinstance(self.source, numpy.ndarray): source_row = self.source_iterator.next() values = source_row[self.local_mask] if sub_mask is not None: values = values[sub_mask] return values def get_safe(self, N, distance_matrix=None, sub_mask=None): return self.check(self.extract(N, distance_matrix, sub_mask)) def get(self, N, distance_matrix=None, sub_mask=None): return self.extract(N, distance_matrix, sub_mask) class WeightGenerator(ConnectionAttributeGenerator): def __init__(self, source, local_mask, projection, safe=True): ConnectionAttributeGenerator.__init__(self, source, local_mask, safe) self.projection = projection self.is_conductance = common.is_conductance(projection.post.all_cells[0]) def check(self, weight): if weight is None: weight = DEFAULT_WEIGHT if core.is_listlike(weight): weight = numpy.array(weight) nan_filter = (1-numpy.isnan(weight)).astype(bool) # weight arrays may contain NaN, which should be ignored filtered_weight = weight[nan_filter] all_negative = (filtered_weight<=0).all() all_positive = (filtered_weight>=0).all() if not (all_negative or all_positive): raise errors.InvalidWeightError("Weights must be either all positive or all negative") elif numpy.isscalar(weight): all_positive = weight >= 0 all_negative = weight < 0 else: raise Exception("Weight must be a number or a list/array of numbers.") if self.is_conductance or self.projection.synapse_type == 'excitatory': if not all_positive: raise errors.InvalidWeightError("Weights must be positive for conductance-based and/or excitatory synapses") elif self.is_conductance==False and self.projection.synapse_type == 'inhibitory': if not all_negative: raise errors.InvalidWeightError("Weights must be negative for current-based, inhibitory synapses") else: # is_conductance is None. This happens if the cell does not exist on the current node. 
logger.debug("Can't check weight, conductance status unknown.") return weight class DelayGenerator(ConnectionAttributeGenerator): def __init__(self, source, local_mask, safe=True): ConnectionAttributeGenerator.__init__(self, source, local_mask, safe) self.min_delay = common.get_min_delay() self.max_delay = common.get_max_delay() def check(self, delay): all_negative = (delay<=self.max_delay).all() all_positive = (delay>=self.min_delay).all()# If the delay is too small , we have to throw an error if not (all_negative and all_positive): raise errors.ConnectionError("delay (%s) is out of range [%s,%s]" % (delay, common.get_min_delay(), common.get_max_delay())) return delay class ProbaGenerator(ConnectionAttributeGenerator): pass class DistanceMatrix(object): def __init__(self, B, space, mask=None): assert B.shape[0] == 3 self.space = space if mask is not None: self.B = B[:,mask] else: self.B = B def as_array(self, sub_mask=None): if self._distance_matrix is None and self.A is not None: if sub_mask is None: self._distance_matrix = self.space.distances(self.A, self.B)[0] else: self._distance_matrix = self.space.distances(self.A, self.B[:,sub_mask])[0] return self._distance_matrix def set_source(self, A): assert A.shape == (3,) self.A = A self._distance_matrix = None class Connector(object): def __init__(self, weights=0.0, delays=None, space=Space(), safe=True, verbose=False): self.weights = weights self.space = space self.safe = safe self.verbose = verbose min_delay = common.get_min_delay() if delays is None: self.delays = min_delay else: if core.is_listlike(delays): assert min(delays) >= min_delay elif not (isinstance(delays, basestring) or isinstance(delays, RandomDistribution)): assert delays >= min_delay self.delays = delays def connect(self, projection): pass def progressbar(self, N): self.prog = utility.ProgressBar(0, N, 20, mode='fixed') def progression(self, count): self.prog.update_amount(count) if self.verbose and common.rank() == 0: print self.prog, "\r", sys.stdout.flush() class ProbabilisticConnector(Connector): def __init__(self, projection, weights=0.0, delays=None, allow_self_connections=True, space=Space(), safe=True): Connector.__init__(self, weights, delays, space, safe) if isinstance(projection.rng, random.NativeRNG): raise Exception("Use of NativeRNG not implemented.") else: self.rng = projection.rng self.local = numpy.ones(len(projection.pre), bool) self.N = projection.pre.size self.weights_generator = WeightGenerator(weights, self.local, projection, safe) self.delays_generator = DelayGenerator(delays, self.local, safe) self.probas_generator = ProbaGenerator(RandomDistribution('uniform',(0,1), rng=self.rng), self.local) self.distance_matrix = DistanceMatrix(projection.pre.positions, self.space, self.local) self.projection = projection self.allow_self_connections = allow_self_connections def _probabilistic_connect(self, tgt, p): """ Connect-up a Projection with connection probability p, where p may be either a float 0<=p<=1, or a dict containing a float array for each pre-synaptic cell, the array containing the connection probabilities for all the local targets of that pre-synaptic cell. 
""" if numpy.isscalar(p) and p == 1: create = numpy.arange(self.local.sum()) else: rarr = self.probas_generator.get(self.N) if not core.is_listlike(rarr) and numpy.isscalar(rarr): # if N=1, rarr will be a single number rarr = numpy.array([rarr]) create = numpy.where(rarr < p)[0] self.distance_matrix.set_source(tgt.position) #create = self.projection.pre.id_to_index(create).astype(int) sources = self.projection.pre.all_cells.flatten()[create] if not self.allow_self_connections and self.projection.pre == self.projection.post and tgt in sources: i = numpy.where(sources == tgt)[0] sources = numpy.delete(sources, i) create = numpy.delete(create, i) weights = self.weights_generator.get(self.N, self.distance_matrix, create) delays = self.delays_generator.get(self.N, self.distance_matrix, create) if len(sources) > 0: self.projection.connection_manager.convergent_connect(sources.tolist(), tgt, weights, delays) class AllToAllConnector(Connector): """ Connects all cells in the presynaptic population to all cells in the postsynaptic population. """ def __init__(self, allow_self_connections=True, weights=0.0, delays=None, space=Space(), safe=True, verbose=False): """ Create a new connector. `allow_self_connections` -- if the connector is used to connect a Population to itself, this flag determines whether a neuron is allowed to connect to itself, or only to other neurons in the Population. `weights` -- may either be a float, a RandomDistribution object, a list/ 1D array with at least as many items as connections to be created. Units nA. `delays` -- as `weights`. If `None`, all synaptic delays will be set to the global minimum delay. `space` -- a `Space` object, needed if you wish to specify distance- dependent weights or delays """ Connector.__init__(self, weights, delays, space, safe, verbose) assert isinstance(allow_self_connections, bool) self.allow_self_connections = allow_self_connections def connect(self, projection): connector = ProbabilisticConnector(projection, self.weights, self.delays, self.allow_self_connections, self.space, safe=self.safe) self.progressbar(len(projection.post.local_cells)) for count, tgt in enumerate(projection.post.local_cells.flat): connector._probabilistic_connect(tgt, 1) self.progression(count) class FixedProbabilityConnector(Connector): """ For each pair of pre-post cells, the connection probability is constant. """ def __init__(self, p_connect, allow_self_connections=True, weights=0.0, delays=None, space=Space(), safe=True, verbose=False): """ Create a new connector. `p_connect` -- a float between zero and one. Each potential connection is created with this probability. `allow_self_connections` -- if the connector is used to connect a Population to itself, this flag determines whether a neuron is allowed to connect to itself, or only to other neurons in the Population. `weights` -- may either be a float, a RandomDistribution object, a list/ 1D array with at least as many items as connections to be created. Units nA. `delays` -- as `weights`. If `None`, all synaptic delays will be set to the global minimum delay. 
`space` -- a `Space` object, needed if you wish to specify distance- dependent weights or delays """ Connector.__init__(self, weights, delays, space, safe, verbose) assert isinstance(allow_self_connections, bool) self.allow_self_connections = allow_self_connections self.p_connect = float(p_connect) assert 0 <= self.p_connect def connect(self, projection): connector = ProbabilisticConnector(projection, self.weights, self.delays, self.allow_self_connections, self.space, safe=self.safe) self.progressbar(len(projection.post.local_cells)) for count, tgt in enumerate(projection.post.local_cells.flat): connector._probabilistic_connect(tgt, self.p_connect) self.progression(count) class DistanceDependentProbabilityConnector(ProbabilisticConnector): """ For each pair of pre-post cells, the connection probability depends on distance. """ def __init__(self, d_expression, allow_self_connections=True, weights=0.0, delays=None, space=Space(), safe=True, verbose=False): """ Create a new connector. `d_expression` -- the right-hand side of a valid python expression for probability, involving 'd', e.g. "exp(-abs(d))", or "d<3" `space` -- a Space object. `weights` -- may either be a float, a RandomDistribution object, a list/ 1D array with at least as many items as connections to be created, or a DistanceDependence object. Units nA. `delays` -- as `weights`. If `None`, all synaptic delays will be set to the global minimum delay. """ Connector.__init__(self, weights, delays, space, safe, verbose) assert isinstance(d_expression, str) try: d = 0; assert 0 <= eval(d_expression), eval(d_expression) d = 1e12; assert 0 <= eval(d_expression), eval(d_expression) except ZeroDivisionError, err: raise ZeroDivisionError("Error in the distance expression %s. %s" % (d_expression, err)) self.d_expression = d_expression assert isinstance(allow_self_connections, bool) self.allow_self_connections = allow_self_connections def connect(self, projection): """Connect-up a Projection.""" connector = ProbabilisticConnector(projection, self.weights, self.delays, self.allow_self_connections, self.space, safe=self.safe) proba_generator = ProbaGenerator(self.d_expression, connector.local) self.progressbar(len(projection.post.local_cells)) for count, tgt in enumerate(projection.post.local_cells.flat): connector.distance_matrix.set_source(tgt.position) proba = proba_generator.get(connector.N, connector.distance_matrix) if proba.dtype == 'bool': proba = proba.astype(float) connector._probabilistic_connect(tgt, proba) self.progression(count) class FromListConnector(Connector): """ Make connections according to a list. """ def __init__(self, conn_list, safe=True, verbose=False): """ Create a new connector. `conn_list` -- a list of tuples, one tuple for each connection. Each tuple should contain: (pre_addr, post_addr, weight, delay) where pre_addr is the address (a tuple) of the presynaptic neuron, and post_addr is the address of the postsynaptic neuron. """ # needs extending for dynamic synapses. 
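        # Illustrative example (added; hypothetical addresses and values): for
        # two 1D populations, a valid conn_list would be
        #     [((0,), (1,), 0.1, 0.5),
        #      ((1,), (0,), 0.1, 0.5)]
        # i.e. (pre_addr, post_addr, weight, delay in ms).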
Connector.__init__(self, 0., common.get_min_delay(), safe=safe, verbose=verbose) self.conn_list = conn_list def connect(self, projection): """Connect-up a Projection.""" # slow: should maybe sort by pre self.progressbar(len(self.conn_list)) for count, i in enumerate(xrange(len(self.conn_list))): src, tgt, weight, delay = self.conn_list[i][:] src = projection.pre[tuple(src)] tgt = projection.post[tuple(tgt)] projection.connection_manager.connect(src, [tgt], weight, delay) self.progression(count) class FromFileConnector(FromListConnector): """ Make connections according to a list read from a file. """ def __init__(self, filename, distributed=False, safe=True, verbose=False): """ Create a new connector. `filename` -- name of a text file containing a list of connections, in the format required by `FromListConnector`. `distributed` -- if this is True, then each node will read connections from a file called `filename.x`, where `x` is the MPI rank. This speeds up loading connections for distributed simulations. """ Connector.__init__(self, 0., common.get_min_delay(), safe=safe, verbose=verbose) self.filename = filename self.distributed = distributed def connect(self, projection): """Connect-up a Projection.""" if self.distributed: self.filename += ".%d" % common.rank() # open the file... f = open(self.filename, 'r', 10000) lines = f.readlines() f.close() # gather all the data in a list of tuples (one per line) input_tuples = [] for line in lines: single_line = line.rstrip() src, tgt, w, d = single_line.split("\t", 4) src = "[%s" % src.split("[",1)[1] tgt = "[%s" % tgt.split("[",1)[1] input_tuples.append((eval(src), eval(tgt), float(w), float(d))) self.conn_list = input_tuples FromListConnector.connect(self, projection) class FixedNumberPostConnector(Connector): """ Each pre-synaptic neuron is connected to exactly n post-synaptic neurons chosen at random. If n is less than the size of the post-synaptic population, there are no multiple connections, i.e., no instances of the same pair of neurons being multiply connected. If n is greater than the size of the post-synaptic population, all possible single connections are made before starting to add duplicate connections. """ def __init__(self, n, allow_self_connections=True, weights=0.0, delays=None, space=Space(), safe=True, verbose=False): """ Create a new connector. `n` -- either a positive integer, or a `RandomDistribution` that produces positive integers. If `n` is a `RandomDistribution`, then the number of post-synaptic neurons is drawn from this distribution for each pre-synaptic neuron. `allow_self_connections` -- if the connector is used to connect a Population to itself, this flag determines whether a neuron is allowed to connect to itself, or only to other neurons in the Population. `weights` -- may either be a float, a RandomDistribution object, a list/ 1D array with at least as many items as connections to be created. Units nA. `delays` -- as `weights`. If `None`, all synaptic delays will be set to the global minimum delay. 
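        Example (added for illustration; values are hypothetical):

            >>> connector = FixedNumberPostConnector(n=5, weights=0.05)

        connects each presynaptic neuron to exactly five postsynaptic neurons.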
""" Connector.__init__(self, weights, delays, space, safe, verbose) assert isinstance(allow_self_connections, bool) self.allow_self_connections = allow_self_connections if isinstance(n, int): self.n = n assert n >= 0 elif isinstance(n, random.RandomDistribution): self.rand_distr = n # weak check that the random distribution is ok assert numpy.all(numpy.array(n.next(100)) >= 0), "the random distribution produces negative numbers" else: raise Exception("n must be an integer or a RandomDistribution object") def connect(self, projection): """Connect-up a Projection.""" local = numpy.ones(len(projection.post), bool) weights_generator = WeightGenerator(self.weights, local, projection, self.safe) delays_generator = DelayGenerator(self.delays, local, self.safe) distance_matrix = DistanceMatrix(projection.post.positions, self.space) self.progressbar(len(projection.pre)) if isinstance(projection.rng, random.NativeRNG): raise Exception("Use of NativeRNG not implemented.") for count, src in enumerate(projection.pre.all()): # pick n neurons at random if hasattr(self, 'rand_distr'): n = self.rand_distr.next() else: n = self.n candidates = projection.post.all_cells.flatten() if not self.allow_self_connections and projection.pre == projection.post: idx = numpy.where(candidates == src)[0] candidates = numpy.delete(candidates, idx) targets = numpy.array([]) while len(targets) < n: # if the number of requested cells is larger than the size of the # postsynaptic population, we allow multiple connections for a given cell targets = numpy.concatenate((targets, projection.rng.permutation(candidates)[:n])) distance_matrix.set_source(src.position) targets = targets[:n] create = projection.post.id_to_index(targets).astype(int) weights = weights_generator.get(n, distance_matrix, create) delays = delays_generator.get(n, distance_matrix, create) if len(targets) > 0: projection.connection_manager.connect(src, targets.tolist(), weights, delays) self.progression(count) class FixedNumberPreConnector(Connector): """ Each post-synaptic neuron is connected to exactly n pre-synaptic neurons chosen at random. If n is less than the size of the pre-synaptic population, there are no multiple connections, i.e., no instances of the same pair of neurons being multiply connected. If n is greater than the size of the pre-synaptic population, all possible single connections are made before starting to add duplicate connections. """ def __init__(self, n, allow_self_connections=True, weights=0.0, delays=None, space=Space(), safe=True, verbose=False): """ Create a new connector. `n` -- either a positive integer, or a `RandomDistribution` that produces positive integers. If `n` is a `RandomDistribution`, then the number of pre-synaptic neurons is drawn from this distribution for each post-synaptic neuron. `allow_self_connections` -- if the connector is used to connect a Population to itself, this flag determines whether a neuron is allowed to connect to itself, or only to other neurons in the Population. `weights` -- may either be a float, a RandomDistribution object, a list/ 1D array with at least as many items as connections to be created. Units nA. `delays` -- as `weights`. If `None`, all synaptic delays will be set to the global minimum delay. 
""" Connector.__init__(self, weights, delays, space, safe, verbose) assert isinstance(allow_self_connections, bool) self.allow_self_connections = allow_self_connections if isinstance(n, int): self.n = n assert n >= 0 elif isinstance(n, random.RandomDistribution): self.rand_distr = n # weak check that the random distribution is ok assert numpy.all(numpy.array(n.next(100)) >= 0), "the random distribution produces negative numbers" else: raise Exception("n must be an integer or a RandomDistribution object") def connect(self, projection): """Connect-up a Projection.""" local = numpy.ones(len(projection.pre), bool) weights_generator = WeightGenerator(self.weights, local, projection, self.safe) delays_generator = DelayGenerator(self.delays, local, self.safe) distance_matrix = DistanceMatrix(projection.pre.positions, self.space) self.progressbar(len(projection.post.local_cells)) if isinstance(projection.rng, random.NativeRNG): raise Exception("Warning: use of NativeRNG not implemented.") for count, tgt in enumerate(projection.post.local_cells.flat): # pick n neurons at random if hasattr(self, 'rand_distr'): n = self.rand_distr.next() else: n = self.n candidates = projection.pre.all_cells.flatten() if not self.allow_self_connections and projection.pre == projection.post: i = numpy.where(candidates == tgt)[0] candidates = numpy.delete(candidates, i) sources = numpy.array([]) while len(sources) < n: # if the number of requested cells is larger than the size of the # presynaptic population, we allow multiple connections for a given cell sources = numpy.concatenate((sources, projection.rng.permutation(candidates)[:n])) distance_matrix.set_source(tgt.position) sources = sources[:n] create = projection.pre.id_to_index(sources).astype(int) weights = weights_generator.get(n, distance_matrix, create) delays = delays_generator.get(n, distance_matrix, create) if len(sources) > 0: projection.connection_manager.convergent_connect(sources, tgt, weights, delays) self.progression(count) class OneToOneConnector(Connector): """ Where the pre- and postsynaptic populations have the same size, connect cell i in the presynaptic population to cell i in the postsynaptic population for all i. """ #In fact, despite the name, this should probably be generalised to the #case where the pre and post populations have different dimensions, e.g., #cell i in a 1D pre population of size n should connect to all cells #in row i of a 2D post population of size (n,m). def __init__(self, weights=0.0, delays=None, space=Space(), safe=True, verbose=False): """ Create a new connector. `weights` -- may either be a float, a RandomDistribution object, a list/ 1D array with at least as many items as connections to be created. Units nA. `delays` -- as `weights`. If `None`, all synaptic delays will be set to the global minimum delay. 
""" Connector.__init__(self, weights, delays, space, verbose) self.space = space self.safe = safe def connect(self, projection): """Connect-up a Projection.""" if projection.pre.dim == projection.post.dim: N = projection.post.size local = projection.post._mask_local.flatten() if isinstance(self.weights, basestring) or isinstance(self.delays, basestring): raise Exception('Expression for weights or delays is not supported for OneToOneConnector !') weights_generator = WeightGenerator(self.weights, local, projection, self.safe) delays_generator = DelayGenerator(self.delays, local, self.safe) weights = weights_generator.get(N) delays = delays_generator.get(N) self.progressbar(len(projection.post.local_cells)) count = 0 for tgt, w, d in zip(projection.post.local_cells, weights, delays): src = projection.pre.index(projection.post.id_to_index(tgt)) # the float is in case the values are of type numpy.float64, which NEST chokes on projection.connection_manager.connect(src, [tgt], float(w), float(d)) self.progression(count) count += 1 else: raise errors.InvalidDimensionsError("OneToOneConnector does not support presynaptic and postsynaptic Populations of different sizes.") class SmallWorldConnector(Connector): """ For each pair of pre-post cells, the connection probability depends on distance. """ def __init__(self, degree, rewiring, allow_self_connections=True, weights=0.0, delays=None, space=Space(), safe=True, verbose=False): """ Create a new connector. `degree` -- the region lenght where nodes will be connected locally `rewiring` -- the probability of rewiring each eadges `space` -- a Space object. `allow_self_connections` -- if the connector is used to connect a Population to itself, this flag determines whether a neuron is allowed to connect to itself, or only to other neurons in the Population. `weights` -- may either be a float, a RandomDistribution object, a list/ 1D array with at least as many items as connections to be created, or a DistanceDependence object. Units nA. `delays` -- as `weights`. If `None`, all synaptic delays will be set to the global minimum delay. """ Connector.__init__(self, weights, delays, space, safe, verbose) assert 0 <= rewiring <= 1 assert isinstance(allow_self_connections, bool) self.rewiring = rewiring self.d_expression = "d < %g" %degree self.allow_self_connections = allow_self_connections def _smallworld_connect(self, src, p): """ Connect-up a Projection with connection probability p, where p may be either a float 0<=p<=1, or a dict containing a float array for each pre-synaptic cell, the array containing the connection probabilities for all the local targets of that pre-synaptic cell. 
""" rarr = self.probas_generator.get(self.N) if not core.is_listlike(rarr) and numpy.isscalar(rarr): # if N=1, rarr will be a single number rarr = numpy.array([rarr]) create = numpy.where(rarr < p)[0] self.distance_matrix.set_source(src.position) targets = self.candidates[create] candidates = self.projection.post.all_cells.flatten() if not self.allow_self_connections and projection.pre == projection.post: i = numpy.where(candidates == src)[0] candidates = numpy.delete(candidates, i) rarr = self.probas_generator.get(len(create)) rewired = rarr < self.rewiring if sum(rewired) > 0: idx = numpy.random.random_integers(0, len(candidates)-1, sum(rewired)) targets[rewired] = candidates[idx] create = self.projection.post.id_to_index(targets).astype(int) weights = self.weights_generator.get(self.N, self.distance_matrix, create) delays = self.delays_generator.get(self.N, self.distance_matrix, create) if len(targets) > 0: self.projection.connection_manager.connect(src, targets.tolist(), weights, delays) def connect(self, projection): """Connect-up a Projection.""" local = numpy.ones(len(projection.post), bool) self.N = projection.post.size if isinstance(projection.rng, random.NativeRNG): raise Exception("Use of NativeRNG not implemented.") else: self.rng = projection.rng self.weights_generator = WeightGenerator(self.weights, local, projection, self.safe) self.delays_generator = DelayGenerator(self.delays, local, self.safe) self.probas_generator = ProbaGenerator(RandomDistribution('uniform',(0,1), rng=self.rng), local) self.distance_matrix = DistanceMatrix(projection.post.positions, self.space, local) self.projection = projection self.candidates = self.projection.post.all_cells.flatten() self.progressbar(len(projection.pre)) proba_generator = ProbaGenerator(self.d_expression, local) for count, src in enumerate(projection.pre.all()): self.distance_matrix.set_source(src.position) proba = proba_generator.get(self.N, self.distance_matrix).astype(float) self._smallworld_connect(src, proba) self.progression(count) PyNN-0.7.4/src/core.py0000644000175000017500000002312711736323051015227 0ustar andrewandrew00000000000000""" Assorted utility classes and functions. :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. """ import numpy from pyNN import random import operator from copy import copy, deepcopy import functools def is_listlike(obj): """ Check whether an object (a) can be converted into an array/list *and* has a length. This excludes iterators, for example. Maybe need to split into different functions, as don't always need length. """ return type(obj) in [list, numpy.ndarray, tuple, set] def check_shape(meth): """ Decorator for LazyArray magic methods, to ensure that the operand has the same shape as the array. 
""" def wrapped_meth(self, val): if isinstance(val, (LazyArray, numpy.ndarray)): if val.shape != self.shape: raise ValueError("shape mismatch: objects cannot be broadcast to a single shape") return meth(self, val) return wrapped_meth def lazy_operation(name): def op(self, val): new_map = deepcopy(self) new_map.operations.append((getattr(operator, name), val)) return new_map return check_shape(op) def lazy_inplace_operation(name): def op(self, val): self.operations.append((getattr(operator, name), val)) return self return check_shape(op) def lazy_unary_operation(name): def op(self): new_map = deepcopy(self) new_map.operations.append((getattr(operator, name), None)) return new_map return op class LazyArray(object): """ Optimises storage of arrays in various ways: - stores only a single value if all the values in the array are the same - if the array is created from a RandomDistribution or a function f(i,j), then elements are only evaluated when they are accessed. Any operations performed on the array are also queued up to be executed on access. The main intention of the latter is to save memory for very large arrays by accessing them one row or column at a time: the entire array need never be in memory. """ def __init__(self, value, shape): """ Create a new LazyArray. `value` : may be an int, long, float, bool, numpy array, RandomDistribution or a function f(i,j). f(i,j) should return a single number when i and j are integers, and a 1D array when either i or j is a numpy array. The case where both i and j are arrays need not be supported. """ if is_listlike(value): assert numpy.isreal(value).all() if not isinstance(value, numpy.ndarray): value = numpy.array(value) assert value.shape == shape, "Array has shape %s, value has shape %s" % (shape, value.shape) # this should be true even when using MPI else: assert numpy.isreal(value) self.base_value = value self.shape = shape self.operations = [] @property def nrows(self): return self.shape[0] @property def ncols(self): return self.shape[1] def __getitem__(self, addr): if isinstance(addr, (int, long, float)): addr = (addr,) if len(addr) != len(self.shape): raise IndexError("invalid index") if not isinstance(addr, (int, long, tuple)): raise TypeError("array indices must be integers, not %s" % type(addr).__name__) val = self.value if isinstance(val, (int, long, float)): self.check_bounds(addr) return val else: return val[addr] def __setitem__(self, addr, new_value): self.check_bounds(addr) val = self.value if isinstance(val, (int, long, float)) and val == new_value: pass else: self.base_value = self.as_array() self.base_value[addr] = new_value def check_bounds(self, addr): if isinstance(addr, (int, long, float)): addr = (addr,) for i, size in zip(addr, self.shape): if (i < -size) or (i >= size): raise IndexError("index out of bounds") def apply(self, f): """ Add the function f(x) to the list of the operations to be performed, where x will be a scalar or a numpy array. >>> m = LazyArray(4, shape=(2,2)) >>> m.apply(numpy.sqrt) >>> m.value 2.0 >>> m.as_array() array([[ 2., 2.], [ 2., 2.]]) """ self.operations.append((f, None)) def _apply_operations(self, x): for f, arg in self.operations: if arg is None: x = f(x) else: x = f(x, arg) return x def by_column(self, mask=None): """ Iterate over the columns of the array. Columns will be yielded either as a 1D array or as a single value (for a flat array). `mask`: either None or a boolean array indicating which columns should be included. 
""" column_indices = numpy.arange(self.ncols) if mask is not None: assert len(mask) == self.ncols column_indices = column_indices[mask] if isinstance(self.base_value, (int, long, float, bool)): for j in column_indices: yield self._apply_operations(self.base_value) elif isinstance(self.base_value, numpy.ndarray): for j in column_indices: yield self._apply_operations(self.base_value[:, j]) elif isinstance(self.base_value, random.RandomDistribution): if mask is None: for j in column_indices: yield self._apply_operations(self.base_value.next(self.nrows, mask_local=False)) else: column_indices = numpy.arange(self.ncols) for j,local in zip(column_indices, mask): col = self.base_value.next(self.nrows, mask_local=False) if local: yield self._apply_operations(col) #elif isinstance(self.base_value, LazyArray): # for column in self.base_value.by_column(mask=mask): # yield self._apply_operations(column) elif callable(self.base_value): # a function of (i,j) row_indices = numpy.arange(self.nrows, dtype=int) for j in column_indices: yield self._apply_operations(self.base_value(row_indices, j)) else: raise Exception("invalid mapping") @property def value(self): """ Returns the base value with all operations applied to it. Works only when the base value is a scalar or a real numpy array, not when the base value is a RandomDistribution or mapping function. """ val = self._apply_operations(self.base_value) if isinstance(val, LazyArray): val = val.value return val def as_array(self): """ Return the LazyArray as a real numpy array. """ if isinstance(self.base_value, (int, long, float, bool)): x = self.base_value*numpy.ones(self.shape) elif isinstance(self.base_value, numpy.ndarray): x = self.base_value elif isinstance(self.base_value, random.RandomDistribution): n = self.nrows*self.ncols x = self.base_value.next(n).reshape(self.shape) elif callable(self.base_value): row_indices = numpy.arange(self.nrows, dtype=int) x = numpy.array([self.base_value(row_indices, j) for j in range(self.ncols)]).T else: raise Exception("invalid mapping") return self._apply_operations(x) __iadd__ = lazy_inplace_operation('add') __isub__ = lazy_inplace_operation('sub') __imul__ = lazy_inplace_operation('mul') __idiv__ = lazy_inplace_operation('div') __ipow__ = lazy_inplace_operation('pow') __add__ = lazy_operation('add') __radd__ = __add__ __sub__ = lazy_operation('sub') __mul__ = lazy_operation('mul') __rmul__ = __mul__ __div__ = lazy_operation('div') __pow__ = lazy_operation('pow') __lt__ = lazy_operation('lt') __gt__ = lazy_operation('gt') __le__ = lazy_operation('le') __ge__ = lazy_operation('ge') __neg__ = lazy_unary_operation('neg') __pos__ = lazy_unary_operation('pos') __abs__ = lazy_unary_operation('abs') # based on http://wiki.python.org/moin/PythonDecoratorLibrary#Memoize class forgetful_memoize(object): """ Decorator that caches the result from the last time a function was called. If the next call uses the same arguments, the cached value is returned, and not re-evaluated. If the next call uses different arguments, the cached value is overwritten. The use case is when the same, heavy-weight function is called repeatedly with the same arguments in different places. 
""" def __init__(self, func): self.func = func self.cached_args = None self.cached_value = None def __call__(self, *args): import pdb; pdb.set_trace() if args == self.cached_args: print "using cached value" return self.cached_value else: #print "calculating value" value = self.func(*args) self.cached_args = args self.cached_value = value return value def __get__(self, obj, objtype): """Support instance methods.""" return functools.partial(self.__call__, obj) PyNN-0.7.4/src/common.py0000644000175000017500000023450711736323051015575 0ustar andrewandrew00000000000000# encoding: utf-8 """ Defines a common implementation of the PyNN API. Simulator modules are not required to use any of the code herein, provided they provide the correct interface, but it is suggested that they use as much as is consistent with good performance (optimisations may require overriding some of the default definitions given here). Utility functions and classes: is_conductance() check_weight() check_delay() Accessing individual neurons: IDMixin Common API implementation/base classes: 1. Simulation set-up and control: setup() end() run() reset() get_time_step() get_current_time() get_min_delay() get_max_delay() rank() num_processes() 2. Creating, connecting and recording from individual neurons: create() connect() set() initialize() build_record() 3. Creating, connecting and recording from populations of neurons: Population PopulationView Assembly Projection :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. $Id: common.py 994 2011-10-03 13:13:54Z apdavison $ """ import numpy, os import logging from warnings import warn import operator import tempfile from pyNN import random, recording, errors, models, standardmodels, core, space, descriptions from pyNN.recording import files from itertools import chain if not 'simulator' in locals(): simulator = None # should be set by simulator-specific modules DEFAULT_WEIGHT = 0.0 DEFAULT_BUFFER_SIZE = 10000 DEFAULT_MAX_DELAY = 10.0 DEFAULT_TIMESTEP = 0.1 DEFAULT_MIN_DELAY = DEFAULT_TIMESTEP logger = logging.getLogger("PyNN") # ============================================================================= # Utility functions and classes # ============================================================================= def is_conductance(target_cell): """ Returns True if the target cell uses conductance-based synapses, False if it uses current-based synapses, and None if the synapse-basis cannot be determined. 
""" if hasattr(target_cell, 'local') and target_cell.local and hasattr(target_cell, 'celltype'): is_conductance = target_cell.celltype.conductance_based else: is_conductance = None return is_conductance def check_weight(weight, synapse_type, is_conductance): if weight is None: weight = DEFAULT_WEIGHT if core.is_listlike(weight): weight = numpy.array(weight) nan_filter = (1 - numpy.isnan(weight)).astype(bool) # weight arrays may contain NaN, which should be ignored filtered_weight = weight[nan_filter] all_negative = (filtered_weight <= 0).all() all_positive = (filtered_weight >= 0).all() if not (all_negative or all_positive): raise errors.InvalidWeightError("Weights must be either all positive or all negative") elif numpy.isreal(weight): all_positive = weight >= 0 all_negative = weight < 0 else: raise errors.InvalidWeightError("Weight must be a number or a list/array of numbers.") if is_conductance or synapse_type == 'excitatory': if not all_positive: raise errors.InvalidWeightError("Weights must be positive for conductance-based and/or excitatory synapses") elif is_conductance == False and synapse_type == 'inhibitory': if not all_negative: raise errors.InvalidWeightError("Weights must be negative for current-based, inhibitory synapses") else: # is_conductance is None. This happens if the cell does not exist on the current node. logger.debug("Can't check weight, conductance status unknown.") return weight def check_delay(delay): if delay is None: delay = get_min_delay() # If the delay is too small , we have to throw an error if delay < get_min_delay() or delay > get_max_delay(): raise errors.ConnectionError("delay (%s) is out of range [%s,%s]" % \ (delay, get_min_delay(), get_max_delay())) return delay # ============================================================================= # Accessing individual neurons # ============================================================================= class IDMixin(object): """ Instead of storing ids as integers, we store them as ID objects, which allows a syntax like: p[3,4].tau_m = 20.0 where p is a Population object. """ # Simulator ID classes should inherit both from the base type of the ID # (e.g., int or long) and from IDMixin. def __getattr__(self, name): try: val = self.__getattribute__(name) except AttributeError: if name == "parent": raise Exception("parent is not set") try: val = self.get_parameters()[name] except KeyError: raise errors.NonExistentParameterError(name, self.celltype.__class__.__name__, self.celltype.get_parameter_names()) return val def __setattr__(self, name, value): if name == "parent": object.__setattr__(self, name, value) elif self.celltype.has_parameter(name): self.set_parameters(**{name: value}) else: object.__setattr__(self, name, value) def set_parameters(self, **parameters): """ Set cell parameters, given as a sequence of parameter=value arguments. 
""" # if some of the parameters are computed from the values of other # parameters, need to get and translate all parameters if self.local: if self.is_standard_cell: computed_parameters = self.celltype.computed_parameters() have_computed_parameters = numpy.any([p_name in computed_parameters for p_name in parameters]) if have_computed_parameters: all_parameters = self.get_parameters() all_parameters.update(parameters) parameters = all_parameters parameters = self.celltype.translate(parameters) self.set_native_parameters(parameters) else: raise errors.NotLocalError("Cannot set parameters for a cell that does not exist on this node.") def get_parameters(self): """Return a dict of all cell parameters.""" if self.local: parameters = self.get_native_parameters() if self.is_standard_cell: parameters = self.celltype.reverse_translate(parameters) return parameters else: raise errors.NotLocalError("Cannot obtain parameters for a cell that does not exist on this node.") @property def celltype(self): return self.parent.celltype @property def is_standard_cell(self): return issubclass(self.celltype.__class__, standardmodels.StandardCellType) def _set_position(self, pos): """ Set the cell position in 3D space. Cell positions are stored in an array in the parent Population. """ assert isinstance(pos, (tuple, numpy.ndarray)) assert len(pos) == 3 self.parent._set_cell_position(self, pos) def _get_position(self): """ Return the cell position in 3D space. Cell positions are stored in an array in the parent Population, if any, or within the ID object otherwise. Positions are generated the first time they are requested and then cached. """ return self.parent._get_cell_position(self) position = property(_get_position, _set_position) @property def local(self): return self.parent.is_local(self) def inject(self, current_source): """Inject current from a current source object into the cell.""" current_source.inject_into([self]) def get_initial_value(self, variable): """Get the initial value of a state variable of the cell.""" return self.parent._get_cell_initial_value(self, variable) def set_initial_value(self, variable, value): """Set the initial value of a state variable of the cell.""" self.parent._set_cell_initial_value(self, variable, value) def as_view(self): """Return a PopulationView containing just this cell.""" index = self.parent.id_to_index(self) return self.parent[index:index+1] # ============================================================================= # Functions for simulation set-up and control # ============================================================================= def setup(timestep=DEFAULT_TIMESTEP, min_delay=DEFAULT_MIN_DELAY, max_delay=DEFAULT_MAX_DELAY, **extra_params): """ Initialises/reinitialises the simulator. Any existing network structure is destroyed. extra_params contains any keyword arguments that are required by a given simulator but not by others. 
""" invalid_extra_params = ('mindelay', 'maxdelay', 'dt') for param in invalid_extra_params: if param in extra_params: raise Exception("%s is not a valid argument for setup()" % param) if min_delay > max_delay: raise Exception("min_delay has to be less than or equal to max_delay.") if min_delay < timestep: raise Exception("min_delay (%g) must be greater than timestep (%g)" % (min_delay, timestep)) def end(compatible_output=True): """Do any necessary cleaning up before exiting.""" raise NotImplementedError def run(simtime): """Run the simulation for simtime ms.""" raise NotImplementedError def reset(): """ Reset the time to zero, neuron membrane potentials and synaptic weights to their initial values, and delete any recorded data. The network structure is not changed, nor is the specification of which neurons to record from. """ simulator.reset() def initialize(cells, variable, value): assert isinstance(cells, (BasePopulation, Assembly)), type(cells) cells.initialize(variable, value) def get_current_time(): """Return the current time in the simulation.""" return simulator.state.t def get_time_step(): """Return the integration time step.""" return simulator.state.dt def get_min_delay(): """Return the minimum allowed synaptic delay.""" return simulator.state.min_delay def get_max_delay(): """Return the maximum allowed synaptic delay.""" return simulator.state.max_delay def num_processes(): """Return the number of MPI processes.""" return simulator.state.num_processes def rank(): """Return the MPI rank of the current node.""" return simulator.state.mpi_rank # ============================================================================= # Low-level API for creating, connecting and recording from individual neurons # ============================================================================= def build_create(population_class): def create(cellclass, cellparams=None, n=1): """ Create n cells all of the same type. If n > 1, return a list of cell ids/references. If n==1, return just the single id. """ return population_class(n, cellclass, cellparams) # return the Population or Population.all_cells? return create def build_connect(projection_class, connector_class): def connect(source, target, weight=0.0, delay=None, synapse_type=None, p=1, rng=None): """ Connect a source of spikes to a synaptic target. source and target can both be individual cells or lists of cells, in which case all possible connections are made with probability p, using either the random number generator supplied, or the default rng otherwise. Weights should be in nA or µS. """ if isinstance(source, IDMixin): source = source.as_view() if isinstance(target, IDMixin): target = target.as_view() connector = connector_class(p_connect=p, weights=weight, delays=delay) return projection_class(source, target, connector, target=synapse_type, rng=rng) return connect def set(cells, param, val=None): """ Set one or more parameters of an individual cell or list of cells. param can be a dict, in which case val should not be supplied, or a string giving the parameter name, in which case val is the parameter value. """ assert isinstance(cells, (BasePopulation, Assembly)) cells.set(param, val) def build_record(variable, simulator): def record(source, filename): """ Record spikes to a file. source can be an individual cell, a Population, PopulationView or Assembly. """ # would actually like to be able to record to an array and choose later # whether to write to a file. 
if not isinstance(source, (BasePopulation, Assembly)): source = source.parent source._record(variable, to_file=filename) # recorder_list is used by end() if isinstance(source, BasePopulation): simulator.recorder_list.append(source.recorders[variable]) # this is a bit hackish - better to add to Population.__del__? if isinstance(source, Assembly): for population in source.populations: simulator.recorder_list.append(population.recorders[variable]) if variable == 'v': record.__name__ = "record_v" record.__doc__ = """ Record membrane potential to a file. source can be an individual cell, a Population, PopulationView or Assembly.""" elif variable == 'gsyn': record.__name__ = "record_gsyn" record.__doc__ = """ Record synaptic conductances to a file. source can be an individual cell, a Population, PopulationView or Assembly.""" return record # ============================================================================= # High-level API for creating, connecting and recording from populations of # neurons. # ============================================================================= class BasePopulation(object): record_filter = None def __getitem__(self, index): """ Return either a single cell (ID object) from the Population, if index is an integer, or a subset of the cells (PopulationView object), if index is a slice or array. Note that __getitem__ is called when using [] access, e.g. p = Population(...) p[2] is equivalent to p.__getitem__(2). p[3:6] is equivalent to p.__getitem__(slice(3, 6)) """ if isinstance(index, int): return self.all_cells[index] elif isinstance(index, (slice, list, numpy.ndarray)): return PopulationView(self, index) elif isinstance(index, tuple): return PopulationView(self, list(index)) else: raise TypeError("indices must be integers, slices, lists, arrays or tuples, not %s" % type(index).__name__) def __len__(self): """Return the total number of cells in the population (all nodes).""" return self.size @property def local_size(self): return len(self.local_cells) # would self._mask_local.sum() be faster? def __iter__(self): """Iterator over cell ids on the local node.""" return iter(self.local_cells) @property def conductance_based(self): return self.celltype.conductance_based def is_local(self, id): """ Determine whether the cell with the given ID exists on the local MPI node. """ assert id.parent is self index = self.id_to_index(id) return self._mask_local[index] def all(self): """Iterator over cell ids on all nodes.""" return iter(self.all_cells) def __add__(self, other): """ A Population/PopulationView can be added to another Population, PopulationView or Assembly, returning an Assembly. """ assert isinstance(other, BasePopulation) return Assembly(self, other) def _get_cell_position(self, id): index = self.id_to_index(id) return self.positions[:, index] def _set_cell_position(self, id, pos): index = self.id_to_index(id) self.positions[:, index] = pos def _get_cell_initial_value(self, id, variable): assert isinstance(self.initial_values[variable], core.LazyArray) index = self.id_to_local_index(id) return self.initial_values[variable][index] def _set_cell_initial_value(self, id, variable, value): assert isinstance(self.initial_values[variable], core.LazyArray) index = self.id_to_local_index(id) self.initial_values[variable][index] = value def nearest(self, position): """Return the neuron closest to the specified position.""" # doesn't always work correctly if a position is equidistant between # two neurons, i.e. 0.5 should be rounded up, but it isn't always. 
# also doesn't take account of periodic boundary conditions pos = numpy.array([position] * self.positions.shape[1]).transpose() dist_arr = (self.positions - pos)**2 distances = dist_arr.sum(axis=0) nearest = distances.argmin() return self[nearest] def sample(self, n, rng=None): """ Randomly sample n cells from the Population, and return a PopulationView object. """ assert isinstance(n, int) if not rng: rng = random.NumpyRNG() indices = rng.permutation(numpy.arange(len(self)))[0:n] logger.debug("The %d cells recorded have indices %s" % (n, indices)) logger.debug("%s.sample(%s)", self.label, n) return PopulationView(self, indices) def get(self, parameter_name, gather=False): """ Get the values of a parameter for every local cell in the population. """ # if all the cells have the same value for this parameter, should # we return just the number, rather than an array? if hasattr(self, "_get_array"): values = self._get_array(parameter_name).tolist() else: values = [getattr(cell, parameter_name) for cell in self] # list or array? if gather == True and num_processes() > 1: all_values = { rank(): values } all_indices = { rank(): self.local_cells.tolist()} all_values = recording.gather_dict(all_values) all_indices = recording.gather_dict(all_indices) if rank() == 0: values = reduce(operator.add, all_values.values()) indices = reduce(operator.add, all_indices.values()) idx = numpy.argsort(indices) values = numpy.array(values)[idx] return values def set(self, param, val=None): """ Set one or more parameters for every cell in the population. param can be a dict, in which case val should not be supplied, or a string giving the parameter name, in which case val is the parameter value. val can be a numeric value, or list of such (e.g. for setting spike times). e.g. p.set("tau_m",20.0). p.set({'tau_m':20,'v_rest':-65}) """ #""" # -- Proposed change to arguments -- #Set one or more parameters for every cell in the population. # #Each value may be a single number or a list/array of numbers of the same #size as the population. If the parameter itself takes lists/arrays as #values (e.g. spike times), then the value provided may be either a #single lists/1D array, a list of lists/1D arrays, or a 2D array. # #e.g. p.set(tau_m=20.0). # p.set(tau_m=20, v_rest=[-65.0, -65.3, ... , -67.2]) #""" if isinstance(param, str): param_dict = {param: val} elif isinstance(param, dict): param_dict = param else: raise errors.InvalidParameterValueError param_dict = self.celltype.checkParameters(param_dict, with_defaults=False) logger.debug("%s.set(%s)", self.label, param_dict) if hasattr(self, "_set_array"): self._set_array(**param_dict) else: for cell in self: cell.set_parameters(**param_dict) def tset(self, parametername, value_array): """ 'Topographic' set. Set the value of parametername to the values in value_array, which must have the same dimensions as the Population. """ #""" # -- Proposed change to arguments -- #'Topographic' set. Each value in parameters should be a function that #accepts arguments x,y,z and returns a single value. 
#""" if parametername not in self.celltype.get_parameter_names(): raise errors.NonExistentParameterError(parametername, self.celltype, self.celltype.get_parameter_names()) if (self.size,) == value_array.shape: # the values are numbers or non-array objects local_values = value_array[self._mask_local] assert local_values.size == self.local_cells.size, "%d != %d" % (local_values.size, self.local_cells.size) elif len(value_array.shape) == 2: # the values are themselves 1D arrays if value_array.shape[0] != self.size: raise errors.InvalidDimensionsError("Population: %d, value_array first dimension: %s" % (self.size, value_array.shape[0])) local_values = value_array[self._mask_local] # not sure this works else: raise errors.InvalidDimensionsError("Population: %d, value_array: %s" % (self.size, str(value_array.shape))) assert local_values.shape[0] == self.local_cells.size, "%d != %d" % (local_values.size, self.local_cells.size) try: logger.debug("%s.tset('%s', array(shape=%s, min=%s, max=%s))", self.label, parametername, value_array.shape, value_array.min(), value_array.max()) except TypeError: # min() and max() won't work for non-numeric values logger.debug("%s.tset('%s', non_numeric_array(shape=%s))", self.label, parametername, value_array.shape) # Set the values for each cell if hasattr(self, "_set_array"): self._set_array(**{parametername: local_values}) else: for cell, val in zip(self, local_values): setattr(cell, parametername, val) def rset(self, parametername, rand_distr): """ 'Random' set. Set the value of parametername to a value taken from rand_distr, which should be a RandomDistribution object. """ # Note that we generate enough random numbers for all cells on all nodes # but use only those relevant to this node. This ensures that the # sequence of random numbers does not depend on the number of nodes, # provided that the same rng with the same seed is used on each node. logger.debug("%s.rset('%s', %s)", self.label, parametername, rand_distr) if isinstance(rand_distr.rng, random.NativeRNG): self._native_rset(parametername, rand_distr) else: rarr = rand_distr.next(n=self.all_cells.size, mask_local=False) rarr = numpy.array(rarr) # isn't rarr already an array? assert rarr.size == self.size, "%s != %s" % (rarr.size, self.size) self.tset(parametername, rarr) def _call(self, methodname, arguments): """ Call the method methodname(arguments) for every cell in the population. e.g. p.call("set_background","0.1") if the cell class has a method set_background(). """ raise NotImplementedError() def _tcall(self, methodname, objarr): """ `Topographic' call. Call the method methodname() for every cell in the population. The argument to the method depends on the coordinates of the cell. objarr is an array with the same dimensions as the Population. e.g. p.tcall("memb_init", vinitArray) calls p.cell[i][j].memb_init(vInitArray[i][j]) for all i,j. """ raise NotImplementedError() def randomInit(self, rand_distr): """ Set initial membrane potentials for all the cells in the population to random values. """ warn("The randomInit() method is deprecated, and will be removed in a future release. Use initialize('v', rand_distr) instead.") self.initialize('v', rand_distr) def initialize(self, variable, value): """ Set initial values of state variables, e.g. the membrane potential. 
`value` may either be a numeric value (all neurons set to the same value) or a `RandomDistribution` object (each neuron gets a different value) """ logger.debug("In Population '%s', initialising %s to %s" % (self.label, variable, value)) if isinstance(value, random.RandomDistribution): initial_value = value.next(n=self.all_cells.size, mask_local=self._mask_local) assert len(initial_value) == self.local_size, "%d != %d" % (len(initial_value), self.local_size) else: initial_value = value self.initial_values[variable] = core.LazyArray(initial_value, shape=(self.local_size,)) if hasattr(self, "_set_initial_value_array"): self._set_initial_value_array(variable, initial_value) else: if isinstance(value, random.RandomDistribution): for cell, val in zip(self, initial_value): cell.set_initial_value(variable, val) else: for cell in self: # only on local node cell.set_initial_value(variable, initial_value) def can_record(self, variable): """Determine whether `variable` can be recorded from this population.""" return (variable in self.celltype.recordable) def _add_recorder(self, variable): """Create a new Recorder for the supplied variable.""" assert variable not in self.recorders if hasattr(self, "parent"): population = self.grandparent else: population = self logger.debug("Adding recorder for %s to %s" % (variable, self.label)) population.recorders[variable] = population.recorder_class(variable, population=population) def _record(self, variable, to_file=True): """ Private method called by record() and record_v(). """ if variable is None: # reset the list of things to record # note that if _record(None) is called on a view of a population # recording will be reset for the entire population, not just the view for recorder in self.recorders.values(): recorder.reset() self.recorders = {} else: if not self.can_record(variable): raise errors.RecordingError(variable, self.celltype) logger.debug("%s.record('%s')", self.label, variable) if variable not in self.recorders: self._add_recorder(variable) if self.record_filter is not None: self.recorders[variable].record(self.record_filter) else: self.recorders[variable].record(self.all_cells) if isinstance(to_file, basestring): self.recorders[variable].file = to_file def record(self, to_file=True): """ Record spikes from all cells in the Population. """ self._record('spikes', to_file) def record_v(self, to_file=True): """ Record the membrane potential for all cells in the Population. """ self._record('v', to_file) def record_gsyn(self, to_file=True): """ Record synaptic conductances for all cells in the Population. """ self._record('gsyn', to_file) def printSpikes(self, file, gather=True, compatible_output=True): """ Write spike times to file. file should be either a filename or a PyNN File object. If compatible_output is True, the format is "spiketime cell_id", where cell_id is the index of the cell counting along rows and down columns (and the extension of that for 3-D). This allows easy plotting of a `raster' plot of spiketimes, with one line for each cell. The timestep, first id, last id, and number of data points per cell are written in a header, indicated by a '#' at the beginning of the line. If compatible_output is False, the raw format produced by the simulator is used. This may be faster, since it avoids any post-processing of the spike files. For parallel simulators, if gather is True, all data will be gathered to the master node and a single output file created there. 
Otherwise, a file will be written on each node, containing only the cells simulated on that node. """ self.recorders['spikes'].write(file, gather, compatible_output, self.record_filter) def getSpikes(self, gather=True, compatible_output=True): """ Return a 2-column numpy array containing cell ids and spike times for recorded cells. Useful for small populations, for example for single neuron Monte-Carlo. """ return self.recorders['spikes'].get(gather, compatible_output, self.record_filter) # if we haven't called record(), this will give a KeyError. A more # informative error message would be nice. def print_v(self, file, gather=True, compatible_output=True): """ Write membrane potential traces to file. file should be either a filename or a PyNN File object. If compatible_output is True, the format is "v cell_id", where cell_id is the index of the cell counting along rows and down columns (and the extension of that for 3-D). The timestep, first id, last id, and number of data points per cell are written in a header, indicated by a '#' at the beginning of the line. If compatible_output is False, the raw format produced by the simulator is used. This may be faster, since it avoids any post-processing of the voltage files. For parallel simulators, if gather is True, all data will be gathered to the master node and a single output file created there. Otherwise, a file will be written on each node, containing only the cells simulated on that node. """ self.recorders['v'].write(file, gather, compatible_output, self.record_filter) def get_v(self, gather=True, compatible_output=True): """ Return a 2-column numpy array containing cell ids and Vm for recorded cells. """ return self.recorders['v'].get(gather, compatible_output, self.record_filter) def print_gsyn(self, file, gather=True, compatible_output=True): """ Write synaptic conductance traces to file. file should be either a filename or a PyNN File object. If compatible_output is True, the format is "t g cell_id", where cell_id is the index of the cell counting along rows and down columns (and the extension of that for 3-D). The timestep, first id, last id, and number of data points per cell are written in a header, indicated by a '#' at the beginning of the line. If compatible_output is False, the raw format produced by the simulator is used. This may be faster, since it avoids any post-processing of the voltage files. """ self.recorders['gsyn'].write(file, gather, compatible_output, self.record_filter) def get_gsyn(self, gather=True, compatible_output=True): """ Return a 3-column numpy array containing cell ids and synaptic conductances for recorded cells. """ return self.recorders['gsyn'].get(gather, compatible_output, self.record_filter) def get_spike_counts(self, gather=True): """ Returns the number of spikes for each neuron. """ return self.recorders['spikes'].count(gather, self.record_filter) def meanSpikeCount(self, gather=True): """ Returns the mean number of spikes per neuron. """ spike_counts = self.recorders['spikes'].count(gather, self.record_filter) total_spikes = sum(spike_counts.values()) if rank() == 0 or not gather: # should maybe use allgather, and get the numbers on all nodes if len(spike_counts) > 0: return float(total_spikes)/len(spike_counts) else: return 0 else: return numpy.nan def inject(self, current_source): """ Connect a current source to all cells in the Population. 
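e.g. (a sketch; the DCSource parameters are illustrative):

>>> source = DCSource(amplitude=0.5, start=20.0, stop=80.0)
>>> population.inject(source)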
""" if not self.celltype.injectable: raise TypeError("Can't inject current into a spike source.") current_source.inject_into(self) def save_positions(self, file): """ Save positions to file. The output format is id x y z """ # first column should probably be indices, not ids. This would make it # simulator independent. if isinstance(file, basestring): file = files.StandardTextFile(file, mode='w') cells = self.all_cells result = numpy.empty((len(cells), 4)) result[:,0] = cells result[:,1:4] = self.positions.T if rank() == 0: file.write(result, {'population' : self.label}) file.close() class Population(BasePopulation): """ A group of neurons all of the same type. """ nPop = 0 def __init__(self, size, cellclass, cellparams=None, structure=None, label=None): """ Create a population of neurons all of the same type. size - number of cells in the Population. For backwards-compatibility, n may also be a tuple giving the dimensions of a grid, e.g. n=(10,10) is equivalent to n=100 with structure=Grid2D() cellclass should either be a standardized cell class (a class inheriting from common.standardmodels.StandardCellType) or a string giving the name of the simulator-specific model that makes up the population. cellparams should be a dict which is passed to the neuron model constructor structure should be a Structure instance. label is an optional name for the population. """ if not isinstance(size, int): # also allow a single integer, for a 1D population assert isinstance(size, tuple), "`size` must be an integer or a tuple of ints. You have supplied a %s" % type(size) # check the things inside are ints for e in size: assert isinstance(e, int), "`size` must be an integer or a tuple of ints. Element '%s' is not an int" % str(e) assert structure is None, "If you specify `size` as a tuple you may not specify structure." if len(size) == 1: structure = space.Line() elif len(size) == 2: nx, ny = size structure = space.Grid2D(nx/float(ny)) elif len(size) == 3: nx, ny, nz = size structure = space.Grid3D(nx/float(ny), nx/float(nz)) else: raise Exception("A maximum of 3 dimensions is allowed. What do you think this is, string theory?") size = reduce(operator.mul, size) self.size = size self.label = label or 'population%d' % Population.nPop self.celltype = cellclass(cellparams) self._structure = structure or space.Line() self._positions = None self._is_sorted = True # Build the arrays of cell ids # Cells on the local node are represented as ID objects, other cells by integers # All are stored in a single numpy array for easy lookup by address # The local cells are also stored in a list, for easy iteration self._create_cells(cellclass, cellparams, size) self.initial_values = {} for variable, value in self.celltype.default_initial_values.items(): self.initialize(variable, value) self.recorders = {} Population.nPop += 1 @property def local_cells(self): return self.all_cells[self._mask_local] @property def cell(self): warn("The `Population.cell` attribute is not an official part of the \ API, and its use is deprecated. It will be removed in a future \ release. All uses of `cell` may be replaced by `all_cells`") return self.all_cells def id_to_index(self, id): """ Given the ID(s) of cell(s) in the Population, return its (their) index (order in the Population). 
>>> assert p.id_to_index(p[5]) == 5 >>> assert p.id_to_index(p.index([1,2,3])) == [1,2,3] """ if not numpy.iterable(id): if not self.first_id <= id <= self.last_id: raise ValueError("id should be in the range [%d,%d], actually %d" % (self.first_id, self.last_id, id)) return int(id - self.first_id) # this assumes ids are consecutive else: if isinstance(id, PopulationView): id = id.all_cells id = numpy.array(id) if (self.first_id > id.min()) or (self.last_id < id.max()): raise ValueError("ids should be in the range [%d,%d], actually [%d, %d]" % (self.first_id, self.last_id, id.min(), id.max())) return (id - self.first_id).astype(int) # this assumes ids are consecutive def id_to_local_index(self, id): """ Given the ID(s) of cell(s) in the Population, return its (their) index (order in the Population), counting only cells on the local MPI node. """ if num_processes() > 1: return self.local_cells.tolist().index(id) # probably very slow #return numpy.nonzero(self.local_cells == id)[0][0] # possibly faster? # another idea - get global index, use idx-sum(mask_local[:idx])? else: return self.id_to_index(id) def _get_structure(self): return self._structure def _set_structure(self, structure): assert isinstance(structure, space.BaseStructure) if structure != self._structure: self._positions = None # setting a new structure invalidates previously calculated positions self._structure = structure structure = property(fget=_get_structure, fset=_set_structure) # arguably structure should be read-only, i.e. it is not possible to change it after Population creation @property def position_generator(self): def gen(i): return self.positions[:,i] return gen def _get_positions(self): """ Try to return self._positions. If it does not exist, create it and then return it. """ if self._positions is None: self._positions = self.structure.generate_positions(self.size) assert self._positions.shape == (3, self.size) return self._positions def _set_positions(self, pos_array): assert isinstance(pos_array, numpy.ndarray) assert pos_array.shape == (3, self.size), "%s != %s" % (pos_array.shape, (3, self.size)) self._positions = pos_array.copy() # take a copy in case pos_array is changed later self._structure = None # explicitly setting positions destroys any previous structure positions = property(_get_positions, _set_positions, """A 3xN array (where N is the number of neurons in the Population) giving the x,y,z coordinates of all the neurons (soma, in the case of non-point models).""") def describe(self, template='population_default.txt', engine='default'): """ Returns a human-readable description of the population. The output may be customized by specifying a different template togther with an associated template engine (see ``pyNN.descriptions``). If template is None, then a dictionary containing the template context will be returned. """ context = { "label": self.label, "celltype": self.celltype.describe(template=None), "structure": None, "size": self.size, "size_local": len(self.local_cells), "first_id": self.first_id, "last_id": self.last_id, } if len(self.local_cells) > 0: first_id = self.local_cells[0] context.update({ "local_first_id": first_id, "cell_parameters": first_id.get_parameters(), }) if self.structure: context["structure"] = self.structure.describe(template=None) return descriptions.render(engine, template, context) class PopulationView(BasePopulation): """ A view of a subset of neurons within a Population. In most ways, Populations and PopulationViews have the same behaviour, i.e. 
they can be recorded, connected with Projections, etc. It should be noted that any changes to neurons in a PopulationView will be reflected in the parent Population, and vice versa. It is possible to have views of views. """ def __init__(self, parent, selector, label=None): """ Create a view of a subset of neurons within a parent Population or PopulationView. selector - a slice or numpy mask array. The mask array should either be a boolean array of the same size as the parent, or an integer array containing cell indices, i.e. if p.size == 5, PopulationView(p, array([False, False, True, False, True])) PopulationView(p, array([2,4])) PopulationView(p, slice(2,5,2)) will all create the same view. """ self.parent = parent self.mask = selector # later we can have fancier selectors, for now we just have numpy masks self.label = label or "view of %s with mask %s" % (parent.label, self.mask) # maybe just redefine __getattr__ instead of the following... self.celltype = self.parent.celltype # If the mask is a slice, IDs will be consecutive, without duplication. # If not, then we need to remove duplicated IDs if not isinstance(self.mask, slice): if isinstance(self.mask, list): self.mask = numpy.array(self.mask) if self.mask.dtype is numpy.dtype('bool'): if len(self.mask) != len(self.parent): raise Exception("A boolean mask must have the same size as the parent Population") self.mask = numpy.arange(len(self.parent))[self.mask] if len(numpy.unique(self.mask)) != len(self.mask): logging.warning("A PopulationView may contain each ID only once; duplicate IDs have been removed") self.mask = numpy.unique(self.mask) self.all_cells = self.parent.all_cells[self.mask] # do we need to ensure this is ordered? idx = numpy.argsort(self.all_cells) self._is_sorted = numpy.all(idx == numpy.arange(len(self.all_cells))) self.size = len(self.all_cells) self._mask_local = self.parent._mask_local[self.mask] self.local_cells = self.all_cells[self._mask_local] self.first_id = numpy.min(self.all_cells) # only works if we assume all_cells is sorted, otherwise could use min() self.last_id = numpy.max(self.all_cells) self.recorders = self.parent.recorders self.record_filter = self.all_cells @property def initial_values(self): # this is going to be complex - if we keep initial_values as a dict, # we need to return a dict-like object that takes account of self.mask raise NotImplementedError @property def structure(self): return self.parent.structure # should we allow setting structure for a PopulationView? Maybe if the # parent has some kind of CompositeStructure? @property def positions(self): return self.parent.positions.T[self.mask].T # make positions N,3 instead of 3,N to avoid all this transposing? def id_to_index(self, id): """ Given the ID(s) of cell(s) in the PopulationView, return its/their index/indices (order in the PopulationView). 
>>> assert pv.id_to_index(pv[5]) == 5 >>> assert pv.id_to_index(pv.index([1,2,3])) == [1,2,3] """ if not numpy.iterable(id): if self._is_sorted: if id not in self.all_cells: raise IndexError("ID %s not present in the View" %id) return numpy.searchsorted(self.all_cells, id) else: result = numpy.where(self.all_cells == id)[0] if len(result) == 0: raise IndexError("ID %s not present in the View" %id) else: return result else: if self._is_sorted: return numpy.searchsorted(self.all_cells, id) else: result = numpy.array([]) for item in id: data = numpy.where(self.all_cells == item)[0] if len(data) == 0: raise IndexError("ID %s not present in the View" %item) elif len(data) > 1: raise Exception("ID %s is duplicated in the View" %item) else: result = numpy.append(result, data) return result @property def grandparent(self): """ Returns the parent Population at the root of the tree (since the immediate parent may itself be a PopulationView). The name "grandparent" is of course a little misleading, as it could be just the parent, or the great, great, great, ..., grandparent. """ if hasattr(self.parent, "parent"): return self.parent.grandparent else: return self.parent def describe(self, template='populationview_default.txt', engine='default'): """ Returns a human-readable description of the population view. The output may be customized by specifying a different template together with an associated template engine (see ``pyNN.descriptions``). If template is None, then a dictionary containing the template context will be returned. """ context = {"label": self.label, "parent": self.parent.label, "mask": self.mask, "size": self.size} return descriptions.render(engine, template, context) # ============================================================================= class Assembly(object): """ A group of neurons, which may be heterogeneous, in contrast to a Population, where all the neurons are of the same type. """ count = 0 def __init__(self, *populations, **kwargs): """ Create an Assembly of Populations and/or PopulationViews. kwargs may contain a keyword argument 'label'. """ if kwargs: assert kwargs.keys() == ['label'] self.populations = [] for p in populations: self._insert(p) self.label = kwargs.get('label', 'assembly%d' % Assembly.count) assert isinstance(self.label, basestring), "label must be a string or unicode" Assembly.count += 1 def _insert(self, element): if not isinstance(element, BasePopulation): raise TypeError("argument is a %s, not a Population." % type(element).__name__) if isinstance(element, PopulationView): if not element.parent in self.populations: double = False for p in self.populations: data = numpy.concatenate((p.all_cells, element.all_cells)) if len(numpy.unique(data)) != len(p.all_cells) + len(element.all_cells): logging.warning('Adding a PopulationView to an Assembly containing elements already present is not possible') double = True # Should we automatically remove duplicated IDs? 
break if not double: self.populations.append(element) else: logging.warning('Cannot add a PopulationView to an Assembly that already contains its parent Population') elif isinstance(element, BasePopulation): if element not in self.populations: self.populations.append(element) else: logging.warning('Cannot add the same Population twice to an Assembly') @property def local_cells(self): result = self.populations[0].local_cells for p in self.populations[1:]: result = numpy.concatenate((result, p.local_cells)) return result @property def all_cells(self): result = self.populations[0].all_cells for p in self.populations[1:]: result = numpy.concatenate((result, p.all_cells)) return result def all(self): """Iterator over cell ids on all nodes.""" return iter(self.all_cells) @property def _is_sorted(self): idx = numpy.argsort(self.all_cells) return numpy.all(idx == numpy.arange(len(self.all_cells))) @property def _homogeneous_synapses(self): syn = is_conductance(self.populations[0].all_cells[0]) for p in self.populations[1:]: if syn != is_conductance(p.all_cells[0]): return False return True @property def conductance_based(self): return all(p.celltype.conductance_based for p in self.populations) @property def _mask_local(self): result = self.populations[0]._mask_local for p in self.populations[1:]: result = numpy.concatenate((result, p._mask_local)) return result @property def first_id(self): return numpy.min(self.all_cells) @property def last_id(self): return numpy.max(self.all_cells) def id_to_index(self, id): """ Given the ID(s) of cell(s) in the Assembly, return its (their) index (order in the Assembly). >>> assert a.id_to_index(a[5]) == 5 >>> assert a.id_to_index(a.index([1,2,3])) == [1,2,3] """ all_cells = self.all_cells if not numpy.iterable(id): if self._is_sorted: return numpy.searchsorted(all_cells, id) else: result = numpy.where(all_cells == id)[0] if len(result) == 0: raise IndexError("ID %s not present in the Assembly" %id) else: return result else: if self._is_sorted: return numpy.searchsorted(all_cells, id) else: result = numpy.array([]) for item in id: data = numpy.where(all_cells == item)[0] if len(data) == 0: raise IndexError("ID %s not present in the Assembly" %item) elif len(data) > 1: raise Exception("ID %s is duplicated in the Assembly" %item) else: result = numpy.append(result, data) return result @property def positions(self): result = self.populations[0].positions for p in self.populations[1:]: result = numpy.hstack((result, p.positions)) return result @property def size(self): return sum(p.size for p in self.populations) def __iter__(self): """ Iterator over cells in all populations within the Assembly, for cells on the local MPI node. """ return chain.from_iterable(iter(p) for p in self.populations) # chain(gen) would yield the per-population iterators themselves, not the cells def __len__(self): """Return the total number of cells in the Assembly (all nodes).""" return self.size def __getitem__(self, index): """ Where index is an integer, return an ID. Where index is a slice, list or numpy array, return a new Assembly consisting of appropriate populations and (possibly newly created) population views. 
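For example (a sketch, assuming `p1` and `p2` each contain 10 cells):

>>> a = Assembly(p1, p2)
>>> a[3]       # an ID belonging to p1
>>> a[8:15]    # a new Assembly of views onto both p1 and p2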
""" count = 0; boundaries = [0] for p in self.populations: count += p.size boundaries.append(count) boundaries = numpy.array(boundaries) if isinstance(index, int): # return an ID pindex = boundaries[1:].searchsorted(index, side='right') return self.populations[pindex][index-boundaries[pindex]] elif isinstance(index, (slice, list, numpy.ndarray)): if isinstance(index, slice): indices = numpy.arange(self.size)[index] else: indices = index pindices = boundaries[1:].searchsorted(indices, side='right') views = (self.populations[i][indices[pindices==i] - boundaries[i]] for i in numpy.unique(pindices)) return Assembly(*views) else: raise TypeError("indices must be integers, slices, lists, arrays, not %s" % type(index).__name__) def __add__(self, other): """ An Assembly may be added to a Population, PopulationView or Assembly with the '+' operator, returning a new Assembly, e.g.: a2 = a1 + p """ if isinstance(other, BasePopulation): return Assembly(*(self.populations + [other])) elif isinstance(other, Assembly): return Assembly(*(self.populations + other.populations)) else: raise TypeError("can only add a Population or another Assembly to an Assembly") def __iadd__(self, other): """ A Population, PopulationView or Assembly may be added to an existing Assembly using the '+=' operator, e.g.: a += p """ if isinstance(other, BasePopulation): self._insert(other) elif isinstance(other, Assembly): for p in other.populations: self._insert(p) else: raise TypeError("can only add a Population or another Assembly to an Assembly") return self def sample(self, n, rng=None): """ Randomly sample n cells from the Assembly, and return a Assembly object. """ assert isinstance(n, int) if not rng: rng = random.NumpyRNG() indices = rng.permutation(numpy.arange(len(self)))[0:n] logger.debug("The %d cells recorded have indices %s" % (n, indices)) logger.debug("%s.sample(%s)", self.label, n) return self[indices] def initialize(self, variable, value): """ Set the initial value of one of the state variables of the neurons in this assembly. `value` may either be a numeric value (all neurons set to the same value) or a `!RandomDistribution` object (each neuron gets a different value) """ for p in self.populations: p.initialize(variable, value) def rset(self, parametername, rand_distr): """ 'Random' set. Set the value of parametername to a value taken from rand_distr, which should be a RandomDistribution object. """ for p in self.populations: p.rset(parametername, rand_distr) def _record(self, variable, to_file=True): # need to think about record_from for p in self.populations: p._record(variable, to_file) def record(self, to_file=True): """Record spikes from all cells in the Assembly.""" self._record('spikes', to_file) def record_v(self, to_file=True): """Record the membrane potential from all cells in the Assembly.""" self._record('v', to_file) def record_gsyn(self, to_file=True): """Record synaptic conductances from all cells in the Assembly.""" self._record('gsyn', to_file) def get_population(self, label): """ Return the Population/PopulationView from within the Assembly that has the given label. If no such Population exists, raise KeyError. """ for p in self.populations: if label == p.label: return p raise KeyError("Assembly does not contain a population with the label %s" % label) def save_positions(self, file): """ Save positions to file. 
The output format is id x y z """ # this should be rewritten to use self.positions and recording.files if isinstance(file, basestring): file = files.StandardTextFile(file, mode='w') cells = self.all_cells result = numpy.empty((len(cells), 4)) result[:,0] = cells result[:,1:4] = self.positions.T if rank() == 0: file.write(result, {'assembly' : self.label}) file.close() @property def position_generator(self): def gen(i): return self.positions[:,i] return gen def _get_recorded_variable(self, variable, gather=True, compatible_output=True, size=1): try: result = self.populations[0].recorders[variable].get(gather, compatible_output, self.populations[0].record_filter) except errors.NothingToWriteError: result = numpy.zeros((0, size+2)) count = self.populations[0].size for p in self.populations[1:]: try: data = p.recorders[variable].get(gather, compatible_output, p.record_filter) data[:,0] += count # map index-in-population to index-in-assembly result = numpy.vstack((result, data)) except errors.NothingToWriteError: pass count += p.size return result def get_v(self, gather=True, compatible_output=True): """ Return a 2-column numpy array containing cell ids and Vm for recorded cells. """ return self._get_recorded_variable('v', gather, compatible_output, size=1) def get_gsyn(self, gather=True, compatible_output=True): """ Return a 3-column numpy array containing cell ids and synaptic conductances for recorded cells. """ return self._get_recorded_variable('gsyn', gather, compatible_output, size=2) def meanSpikeCount(self, gather=True): """ Returns the mean number of spikes per neuron. """ spike_counts = self.get_spike_counts() total_spikes = sum(spike_counts.values()) if rank() == 0 or not gather: # should maybe use allgather, and get the numbers on all nodes return float(total_spikes)/len(spike_counts) else: return numpy.nan def get_spike_counts(self, gather=True): """ Returns the number of spikes for each neuron. """ try: spike_counts = self.populations[0].recorders['spikes'].count(gather, self.populations[0].record_filter) except errors.NothingToWriteError: spike_counts = {} for p in self.populations[1:]: try: spike_counts.update(p.recorders['spikes'].count(gather, p.record_filter)) except errors.NothingToWriteError: pass return spike_counts def _print(self, file, variable, format, gather=True, compatible_output=True): ## First, we write all the individual data for the heterogeneous populations ## embedded within the Assembly. To speed things up, we write them in temporary ## folders as Numpy Binary objects tempdir = tempfile.mkdtemp() filenames = {} filename = '%s/%s.%s' %(tempdir, self.populations[0].label, variable) p_file = files.NumpyBinaryFile(filename, mode='w') try: self.populations[0].recorders[variable].write(p_file, gather, compatible_output, self.populations[0].record_filter) filenames[self.populations[0]] = (filename, True) except errors.NothingToWriteError: filenames[self.populations[0]] = (filename, False) for p in self.populations[1:]: filename = '%s/%s.%s' %(tempdir, p.label, variable) p_file = files.NumpyBinaryFile(filename, mode='w') try: p.recorders[variable].write(p_file, gather, compatible_output, p.record_filter) filenames[p] = (filename, True) except errors.NothingToWriteError: filenames[p] = (filename, False) ## Then we need to merge the previsouly written files into a single one, to be consistent ## with a Population object. Note that the header should be better considered. 
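## A sketch of the accumulation pattern used below (the rows are illustrative):
##
##     >>> data = numpy.zeros((0, 2))
##     >>> data = numpy.vstack((data, numpy.array([[0.1, 3.0], [0.2, 5.0]])))
##     >>> data.shape
##     (2, 2)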
metadata = {'variable' : variable, 'size' : self.size, 'label' : self.label, 'populations' : ", ".join(["%s[%d-%d]" %(p.label, p.first_id, p.last_id) for p in self.populations]), 'first_id' : self.first_id, 'last_id' : self.last_id} metadata['dt'] = simulator.state.dt # note that this has to run on all nodes (at least for NEST) data = numpy.zeros(format) for pop in filenames.keys(): if filenames[pop][1] is True: name = filenames[pop][0] if gather==False and simulator.state.num_processes > 1: name += '.%d' % simulator.state.mpi_rank p_file = files.NumpyBinaryFile(name, mode='r') tmp_data = p_file.read() if compatible_output: tmp_data[:, -1] = self.id_to_index(tmp_data[:,-1] + pop.first_id) data = numpy.vstack((data, tmp_data)) os.remove(name) metadata['n'] = data.shape[0] os.rmdir(tempdir) if isinstance(file, basestring): if gather==False and simulator.state.num_processes > 1: file += '.%d' % simulator.state.mpi_rank file = files.StandardTextFile(file, mode='w') if simulator.state.mpi_rank == 0 or gather == False: file.write(data, metadata) file.close() def printSpikes(self, file, gather=True, compatible_output=True): """ Write spike times to file. file should be either a filename or a PyNN File object. If compatible_output is True, the format is "spiketime cell_id", where cell_id is the index of the cell counting along rows and down columns (and the extension of that for 3-D). This allows easy plotting of a `raster' plot of spiketimes, with one line for each cell. The timestep, first id, last id, and number of data points per cell are written in a header, indicated by a '#' at the beginning of the line. If compatible_output is False, the raw format produced by the simulator is used. This may be faster, since it avoids any post-processing of the spike files. For parallel simulators, if gather is True, all data will be gathered to the master node and a single output file created there. Otherwise, a file will be written on each node, containing only the cells simulated on that node. """ self._print(file, 'spikes', (0, 2), gather, compatible_output) def print_v(self, file, gather=True, compatible_output=True): """ Write membrane potential traces to file. file should be either a filename or a PyNN File object. If compatible_output is True, the format is "v cell_id", where cell_id is the index of the cell counting along rows and down columns (and the extension of that for 3-D). The timestep, first id, last id, and number of data points per cell are written in a header, indicated by a '#' at the beginning of the line. If compatible_output is False, the raw format produced by the simulator is used. This may be faster, since it avoids any post-processing of the voltage files. For parallel simulators, if gather is True, all data will be gathered to the master node and a single output file created there. Otherwise, a file will be written on each node, containing only the cells simulated on that node. """ self._print(file, 'v', (0, 2), gather, compatible_output) def print_gsyn(self, file, gather=True, compatible_output=True): """ Write synaptic conductance traces to file. file should be either a filename or a PyNN File object. If compatible_output is True, the format is "t g cell_id", where cell_id is the index of the cell counting along rows and down columns (and the extension of that for 3-D). The timestep, first id, last id, and number of data points per cell are written in a header, indicated by a '#' at the beginning of the line. 
If compatible_output is False, the raw format produced by the simulator is used. This may be faster, since it avoids any post-processing of the voltage files. """ self._print(file, 'gsyn', (0, 3), gather, compatible_output) def inject(self, current_source): """ Connect a current source to all cells in the Assembly. """ for p in self.populations: current_source.inject_into(p) def describe(self, template='assembly_default.txt', engine='default'): """ Returns a human-readable description of the assembly. The output may be customized by specifying a different template together with an associated template engine (see ``pyNN.descriptions``). If template is None, then a dictionary containing the template context will be returned. """ context = {"label": self.label, "populations": [p.describe(template=None) for p in self.populations]} return descriptions.render(engine, template, context) # ============================================================================= class Projection(object): """ A container for all the connections of a given type (same synapse type and plasticity mechanisms) between two populations, together with methods to set parameters of those connections, including of plasticity mechanisms. """ def __init__(self, presynaptic_neurons, postsynaptic_neurons, method, source=None, target=None, synapse_dynamics=None, label=None, rng=None): """ presynaptic_neurons and postsynaptic_neurons - Population, PopulationView or Assembly objects. source - string specifying which attribute of the presynaptic cell signals action potentials. This is only needed for multicompartmental cells with branching axons or dendrodendritic synapses. All standard cells have a single source, and this is the default. target - string specifying which synapse on the postsynaptic cell to connect to. For standard cells, this can be 'excitatory' or 'inhibitory'. For non-standard cells, it could be 'NMDA', etc. If target is not given, the default value of 'excitatory' is used. method - a Connector object, encapsulating the algorithm to use for connecting the neurons. synapse_dynamics - a `standardmodels.SynapseDynamics` object specifying which synaptic plasticity mechanisms to use. rng - specify an RNG object to be used by the Connector. """ for prefix, pop in zip(("pre", "post"), (presynaptic_neurons, postsynaptic_neurons)): if not isinstance(pop, (BasePopulation, Assembly)): raise errors.ConnectionError("%ssynaptic_neurons must be a Population, PopulationView or Assembly, not a %s" % (prefix, type(pop))) if isinstance(postsynaptic_neurons, Assembly): if not postsynaptic_neurons._homogeneous_synapses: raise Exception('Projection to an Assembly object can be made only with homogeneous synapse types') self.pre = presynaptic_neurons # } these really self.source = source # } should be self.post = postsynaptic_neurons # } read-only self.target = target # } self.label = label if isinstance(rng, random.AbstractRNG): self.rng = rng elif rng is None: self.rng = random.NumpyRNG(seed=151985012) else: raise Exception("rng must be either None, or a subclass of pyNN.random.AbstractRNG") self._method = method self.synapse_dynamics = synapse_dynamics #self.connection = None # access individual connections.
To be defined by child, simulator-specific classes self.weights = [] if label is None: if self.pre.label and self.post.label: self.label = "%s→%s" % (self.pre.label, self.post.label) if self.synapse_dynamics: assert isinstance(self.synapse_dynamics, models.BaseSynapseDynamics), \ "The synapse_dynamics argument, if specified, must be a models.BaseSynapseDynamics object, not a %s" % type(synapse_dynamics) def __len__(self): """Return the total number of local connections.""" return len(self.connection_manager) def size(self, gather=True): """ Return the total number of connections. - only local connections, if gather is False, - all connections, if gather is True (default) """ if gather: n = len(self) return recording.mpi_sum(n) else: return len(self) def __repr__(self): return 'Projection("%s")' % self.label def __getitem__(self, i): """Return the `i`th connection within the Projection.""" return self.connection_manager[i] # --- Methods for setting connection parameters --------------------------- def setWeights(self, w): """ w can be a single number, in which case all weights are set to this value, or a list/1D array of length equal to the number of connections in the projection, or a 2D array with the same dimensions as the connectivity matrix (as returned by `getWeights(format='array')`). Weights should be in nA for current-based and µS for conductance-based synapses. """ # should perhaps add a "distribute" argument, for symmetry with "gather" in getWeights() # if post is an Assembly, some components might have cond-synapses, others curr, so need a more sophisticated check here w = check_weight(w, self.synapse_type, is_conductance(self.post.local_cells[0])) self.connection_manager.set('weight', w) def randomizeWeights(self, rand_distr): """ Set weights to random values taken from rand_distr. """ # Arguably, we could merge this with set_weights just by detecting the # argument type. It could make for easier-to-read simulation code to # give it a separate name, though. Comments? self.setWeights(rand_distr.next(len(self))) def setDelays(self, d): """ d can be a single number, in which case all delays are set to this value, or a list/1D array of length equal to the number of connections in the projection, or a 2D array with the same dimensions as the connectivity matrix (as returned by `getDelays(format='array')`). """ self.connection_manager.set('delay', d) def randomizeDelays(self, rand_distr): """ Set delays to random values taken from rand_distr. """ self.setDelays(rand_distr.next(len(self))) def setSynapseDynamics(self, param, value): """ Set parameters of the dynamic synapses for all connections in this projection. """ self.connection_manager.set(param, value) def randomizeSynapseDynamics(self, param, rand_distr): """ Set parameters of the synapse dynamics to values taken from rand_distr """ self.setSynapseDynamics(param, rand_distr.next(len(self))) # --- Methods for writing/reading information to/from file. --------------- def getWeights(self, format='list', gather=True): """ Get synaptic weights for all connections in this Projection. Possible formats are: a list of length equal to the number of connections in the projection, a 2D weight array (with NaN for non-existent connections). Note that for the array format, if there is more than one connection between two cells, the summed weight will be given. 
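Example (a sketch; ``pre`` and ``post`` are assumed to be existing Populations of conductance-based cells, so weights are in µS)::

    >>> prj = Projection(pre, post, method=FixedProbabilityConnector(0.1),
    ...                  target='excitatory')
    >>> prj.setWeights(0.004)
    >>> w_list = prj.getWeights(format='list', gather=False)
    >>> w_array = prj.getWeights(format='array', gather=False)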
""" if gather: logger.error("getWeights() with gather=True not yet implemented") return self.connection_manager.get('weight', format) def getDelays(self, format='list', gather=True): """ Get synaptic delays for all connections in this Projection. Possible formats are: a list of length equal to the number of connections in the projection, a 2D delay array (with NaN for non-existent connections). """ if gather: logger.error("getDelays() with gather=True not yet implemented") return self.connection_manager.get('delay', format) def getSynapseDynamics(self, parameter_name, format='list', gather=True): """ Get parameters of the dynamic synapses for all connections in this Projection. """ if gather: logger.error("getstandardmodels.SynapseDynamics() with gather=True not yet implemented") return self.connection_manager.get(parameter_name, format) def saveConnections(self, file, gather=True, compatible_output=True): """ Save connections to file in a format suitable for reading in with a FromFileConnector. """ if isinstance(file, basestring): file = files.StandardTextFile(file, mode='w') lines = [] if not compatible_output: for c in self.connections: lines.append([c.source, c.target, c.weight, c.delay]) else: for c in self.connections: lines.append([self.pre.id_to_index(c.source), self.post.id_to_index(c.target), c.weight, c.delay]) if gather == True and num_processes() > 1: all_lines = { rank(): lines } all_lines = recording.gather_dict(all_lines) if rank() == 0: lines = reduce(operator.add, all_lines.values()) elif num_processes() > 1: file.rename('%s.%d' % (file.name, rank())) logger.debug("--- Projection[%s].__saveConnections__() ---" % self.label) if gather == False or rank() == 0: file.write(lines, {'pre' : self.pre.label, 'post' : self.post.label}) file.close() def printWeights(self, file, format='list', gather=True): """ Print synaptic weights to file. In the array format, zeros are printed for non-existent connections. """ weights = self.getWeights(format=format, gather=gather) if isinstance(file, basestring): file = files.StandardTextFile(file, mode='w') if format == 'array': weights = numpy.where(numpy.isnan(weights), 0.0, weights) file.write(weights, {}) file.close() def printDelays(self, file, format='list', gather=True): """ Print synaptic weights to file. In the array format, zeros are printed for non-existent connections. """ delays = self.getDelays(format=format, gather=gather) if isinstance(file, basestring): file = files.StandardTextFile(file, mode='w') if format == 'array': delays = numpy.where(numpy.isnan(delays), 0.0, delays) file.write(delays, {}) file.close() def weightHistogram(self, min=None, max=None, nbins=10): """ Return a histogram of synaptic weights. If min and max are not given, the minimum and maximum weights are calculated automatically. """ # it is arguable whether functions operating on the set of weights # should be put here or in an external module. weights = self.getWeights(format='list', gather=True) if min is None: min = weights.min() if max is None: max = weights.max() bins = numpy.linspace(min, max, nbins+1) return numpy.histogram(weights, bins) # returns n, bins def describe(self, template='projection_default.txt', engine='default'): """ Returns a human-readable description of the projection. The output may be customized by specifying a different template togther with an associated template engine (see ``pyNN.descriptions``). If template is None, then a dictionary containing the template context will be returned. 
""" context = { "label": self.label, "pre": self.pre.describe(template=None), "post": self.post.describe(template=None), "source": self.source, "target": self.target, "size_local": len(self), "size": self.size(gather=True), "connector": self._method.describe(template=None), "plasticity": None, } if self.synapse_dynamics: context.update(plasticity=self.synapse_dynamics.describe(template=None)) return descriptions.render(engine, template, context) # ============================================================================= PyNN-0.7.4/src/multisim.py0000644000175000017500000000673211736323051016145 0ustar andrewandrew00000000000000""" A small framework to make it easier to run the same model on multiple simulators. :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. """ from multiprocessing import Process, Queue from pyNN import common, recording def run_simulation(network_model, sim, parameters, input_queue, output_queue): """ Build the model defined in the class `network_model`, with parameters `parameters`, and then consume tasks from `input_queue` until receiving the command 'STOP'. """ print "Running simulation with %s" % sim.__name__ common.simulator = sim.simulator recording.simulator = sim.simulator network = network_model(sim, parameters) print "Network constructed with %s." % sim.__name__ for obj_name, attr, args, kwargs in iter(input_queue.get, 'STOP'): print "%s processing command %s.%s(%s, %s)" % (sim.__name__, obj_name, attr, args, kwargs) obj = eval(obj_name) result = getattr(obj, attr)(*args, **kwargs) output_queue.put(result) print "Simulation with %s complete" % sim.__name__ #sim.end() class MultiSim(object): """ Interface that runs a network model on different simulators, with each simulation in a separate process. """ def __init__(self, sim_list, network_model, parameters): """ Build the model defined in the class `network_model`, with parameters `parameters`, for each of the simulator modules specified in `sim_list`. The `network_model` constructor takes arguments `sim` and `parameters`. """ self.processes = {} self.task_queues = {} self.result_queues = {} for sim in sim_list: task_queue = Queue() result_queue = Queue() p = Process(target=run_simulation, args=(network_model, sim, parameters, task_queue, result_queue)) p.start() self.processes[sim.__name__] = p self.task_queues[sim.__name__] = task_queue self.result_queues[sim.__name__] = result_queue def __iter__(self): return self.processes.itervalues() def __getattr__(self, name): """ Assumes `name` is a method of the `network_model` model. Return a function that runs `net.name()` for all the simulators. """ def iterate_over_nets(*args, **kwargs): retvals = {} for sim_name in self.processes: self.task_queues[sim_name].put(('network', name, args, kwargs)) for sim_name in self.processes: retvals[sim_name] = self.result_queues[sim_name].get() return retvals return iterate_over_nets def run(self, simtime, steps=1): #, *callbacks): """ Run the model for a time `simtime` in all simulators. The run may be broken into a number of steps (each of equal duration). #Any functions in `callbacks` will be called after each step. 
""" dt = float(simtime)/steps for i in range(steps): for sim_name in self.processes: self.task_queues[sim_name].put(('sim', 'run', [dt], {})) for sim_name in self.processes: t = self.result_queues[sim_name].get() def end(self): for sim_name in self.processes: self.task_queues[sim_name].put('STOP') self.processes[sim_name].join() PyNN-0.7.4/src/__init__.py0000644000175000017500000000506011774605133016040 0ustar andrewandrew00000000000000""" PyNN (pronounced 'pine') is a Python package for simulator-independent specification of neuronal network models. In other words, you can write the code for a model once, using the PyNN API, and then run it without modification on any simulator that PyNN supports. To use PyNN, import the particular simulator module you wish to use, e.g. import pyNN.neuron as sim all subsequent code in the `sim` namespace will then have the same behaviour independent of simulator. Functions for simulation set-up and control: setup() run() reset() end() get_time_step() get_current_time() get_min_delay() get_max_delay() rank() num_processes() list_standard_models() Functions for creating, connecting, modifying and recording from neurons (low-level interface): create() connect() set() record() record_v() record_gsyn() Classes for creating, connecting, modifying and recording from neurons (high-level interface): Population Projection Space Structures: Line, Grid2D, Grid3D, RandomStructure Connectors: AllToAllConnector, OneToOneConnector, FixedProbabilityConnector, DistanceDependentProbabilityConnector, FixedNumberPreConnector, FixedNumberPostConnector, FromListConnector, FromFileConnector, CSAConnector Standard cell types: IF_curr_exp, IF_curr_alpha, IF_cond_exp, IF_cond_alpha, IF_cond_exp_gsfa_grr, IF_facets_hardware1, HH_cond_exp, EIF_cond_alpha_isfa_ista, EIF_cond_exp_isfa_ista, SpikeSourcePoisson, SpikeSourceArray, SpikeSourceInhGamma (not all cell types are available for all simulator backends). Synaptic plasticity: SynapseDynamics, TsodyksMarkramMechanism, STDPMechanism, AdditiveWeightDependence, MultiplicativeWeightDependence, AdditivePotentiationMultiplicativeDepression, GutigWeightDependence, SpikePairRule (not all combinations area available for all simulator backends). Current injection: DCSource, ACSource, StepCurrentSource, NoisyCurrentSource. File types: StandardTextFile, PickleFile, NumpyBinaryFile, HDF5ArrayFile Available simulator modules: nest neuron pcsim brian Other modules: utility random :copyright: Copyright 2006-2012 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. """ __version__ = '0.7.4' __all__ = ["common", "random", "nest", "neuron", "pcsim", "brian", "recording", "errors", "space", "descriptions", "standardmodels"] PyNN-0.7.4/src/utility.py0000644000175000017500000002463611762141071016007 0ustar andrewandrew00000000000000""" A collection of utility functions and classes. Functions: colour() - allows output of different coloured text on stdout. notify() - send an e-mail when a simulation has finished. get_script_args() - get the command line arguments to the script, however it was run (python, nrniv, mpirun, etc.). init_logging() - convenience function for setting up logging to file and to the screen. Timer - a convenience wrapper around the time.time() function from the standard library. :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. 
$Id: utility.py 957 2011-05-03 13:44:15Z apdavison $ """ # If there is a settings.py file on the path, defaults will be # taken from there. try: from settings import SMTPHOST, EMAIL except ImportError: SMTPHOST = None EMAIL = None import sys import logging import time import os red = 0010; green = 0020; yellow = 0030; blue = 0040 magenta = 0050; cyan = 0060; bright = 0100 try: import ll.ansistyle def colour(col, text): return str(ll.ansistyle.Text(col, str(text))) except ImportError: def colour(col, text): return text def notify(msg="Simulation finished.", subject="Simulation finished.", smtphost=SMTPHOST, address=EMAIL): """Send an e-mail stating that the simulation has finished.""" if not (smtphost and address): print "SMTP host and/or e-mail address not specified.\nUnable to send notification message." else: import smtplib, datetime msg = ("From: %s\r\nTo: %s\r\nSubject: %s\r\n\r\n") % (address,address,subject) + msg msg += "\nTimestamp: %s" % datetime.datetime.now().strftime("%H:%M:%S, %F") server = smtplib.SMTP(smtphost) server.sendmail(address, address, msg) server.quit() def get_script_args(n_args, usage=''): """ Get command line arguments. This works by finding the name of the main script and assuming any arguments after this in sys.argv are arguments to the script. It would be nicer to use optparse, but this doesn't seem to work too well with nrniv or mpirun. """ calling_frame = sys._getframe(1) if '__file__' in calling_frame.f_locals: script = calling_frame.f_locals['__file__'] try: script_index = sys.argv.index(script) except ValueError: try: script_index = sys.argv.index(os.path.abspath(script)) except ValueError: script_index = 0 else: script_index = 0 args = sys.argv[script_index+1:script_index+1+n_args] if len(args) != n_args: usage = usage or "Script requires %d arguments, you supplied %d" % (n_args, len(args)) raise Exception(usage) return args def init_logging(logfile, debug=False, num_processes=1, rank=0, level=None): # allow logfile == None # which implies output to stderr if logfile: if num_processes > 1: logfile += '.%d' % rank logfile = os.path.abspath(logfile) # prefix log messages with mpi rank mpi_prefix = "" if num_processes > 1: mpi_prefix = 'Rank %d of %d: ' % (rank, num_processes) if debug: log_level = logging.DEBUG else: log_level = logging.INFO # allow user to override exact log_level if level: log_level = level logging.basicConfig(level=log_level, format=mpi_prefix+'%(asctime)s %(levelname)s %(message)s', filename=logfile, filemode='w') def save_population(population, filename, variables=[]): """ Saves the spike_times of a population and the size, structure, labels such that one can load it back into a SpikeSourceArray population using the load_population function. """ import shelve s = shelve.open(filename) s['spike_times'] = population.getSpikes() s['label'] = population.label s['size'] = population.size s['structure'] = population.structure # should perhaps just save the positions? variables_dict = {} for variable in variables: variables_dict[variable] = getattr(population, variable) s['variables'] = variables_dict s.close() def load_population(filename, sim): """ Loads a population that was saved with the save_population function into SpikeSourceArray. 
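Example (a sketch; ``p`` is an existing Population whose spikes have been recorded, ``sim`` is the simulator module to load into, and the file name is illustrative)::

    >>> save_population(p, "cells.shelve")
    >>> replayed = load_population("cells.shelve", sim)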
""" import shelve s = shelve.open(filename) ssa = getattr(sim, "SpikeSourceArray") population = getattr(sim, "Population")(s['size'], ssa, structure=s['structure'], label=s['label']) # set the spiketimes spikes = s['spike_times'] for neuron in range(s['size']): spike_times = spikes[spikes[:,0] == neuron][:,1] neuron_in_new_population = neuron+population.first_id index = population.id_to_index(neuron_in_new_population) population[index].set_parameters(**{'spike_times':spike_times}) # set the variables for variable, value in s['variables'].items(): setattr(population, variable, value) s.close() return population class Timer(object): """For timing script execution.""" def __init__(self): self.start() def start(self): """Start timing.""" self._start_time = time.time() self._last_check = self._start_time def elapsedTime(self, format=None): """Return the elapsed time in seconds but keep the clock running.""" current_time = time.time() elapsed_time = current_time - self._start_time if format == 'long': elapsed_time = Timer.time_in_words(elapsed_time) self._last_check = current_time return elapsed_time def reset(self): """Reset the time to zero, and start the clock.""" self.start() def diff(self, format=None): # I think delta() would be a better name for this method. """Return the time since the last time elapsedTime() or diff() was called.""" current_time = time.time() time_since_last_check = current_time - self._last_check self._last_check = current_time if format=='long': time_since_last_check = Timer.time_in_words(elapsed_time) return time_since_last_check @staticmethod def time_in_words(s): """Formats a time in seconds as a string containing the time in days, hours, minutes, seconds. Examples:: >>> time_in_words(1) 1 second >>> time_in_words(123) 2 minutes, 3 seconds >>> time_in_words(24*3600) 1 day """ # based on http://mail.python.org/pipermail/python-list/2003-January/181442.html T = {} T['year'], s = divmod(s, 31556952) min, T['second'] = divmod(s, 60) h, T['minute'] = divmod(min, 60) T['day'], T['hour'] = divmod(h, 24) def add_units(val, units): return "%d %s" % (int(val), units) + (val>1 and 's' or '') return ', '.join([add_units(T[part], part) for part in ('year', 'day', 'hour', 'minute', 'second') if T[part]>0]) class ProgressBar: """ Create a progress bar in the shell. """ def __init__(self, min_value=0, max_value=100, width=77, **kwargs): self.char = kwargs.get('char', '#') self.mode = kwargs.get('mode', 'dynamic') # fixed or dynamic if not self.mode in ['fixed', 'dynamic']: self.mode = 'fixed' self.bar = '' self.min = min_value self.max = max_value self.span = max_value - min_value self.width = width self.amount = 0 # When amount == max, we are 100% done self.update_amount(0) def increment_amount(self, add_amount = 1): """ Increment self.amount by 'add_ammount' or default to incrementing by 1, and then rebuild the bar string. """ new_amount = self.amount + add_amount if new_amount < self.min: new_amount = self.min if new_amount > self.max: new_amount = self.max self.amount = new_amount self.build_bar() def update_amount(self, new_amount = None): """ Update self.amount with 'new_amount', and then rebuild the bar string. """ if not new_amount: new_amount = self.amount if new_amount < self.min: new_amount = self.min if new_amount > self.max: new_amount = self.max self.amount = new_amount self.build_bar() def build_bar(self): """ Figure new percent complete, and rebuild the bar string base on self.amount. 
""" diff = float(self.amount - self.min) try: percent_done = int(round((diff / float(self.span)) * 100.0)) except Exception: percent_done = 100 # figure the proper number of 'character' make up the bar all_full = self.width - 2 num_hashes = int(round((percent_done * all_full) / 100)) if self.mode == 'dynamic': # build a progress bar with self.char (to create a dynamic bar # where the percent string moves along with the bar progress. self.bar = self.char * num_hashes else: # build a progress bar with self.char and spaces (to create a # fixe bar (the percent string doesn't move) self.bar = self.char * num_hashes + ' ' * (all_full-num_hashes) percent_str = str(percent_done) + "%" self.bar = '[ ' + self.bar + ' ] ' + percent_str def __str__(self): return str(self.bar) def assert_arrays_equal(a, b): import numpy assert isinstance(a, numpy.ndarray), "a is a %s" % type(a) assert isinstance(b, numpy.ndarray), "b is a %s" % type(b) assert a.shape == b.shape, "%s != %s" % (a,b) assert (a.flatten()==b.flatten()).all(), "%s != %s" % (a,b) def assert_arrays_almost_equal(a, b, threshold): import numpy assert isinstance(a, numpy.ndarray), "a is a %s" % type(a) assert isinstance(b, numpy.ndarray), "b is a %s" % type(b) assert a.shape == b.shape, "%s != %s" % (a,b) assert (abs(a - b) < threshold).all(), "max(|a - b|) = %s" % (abs(a - b)).max() def sort_by_column(a, col): # see stackoverflow.com/questions/2828059/ return a[a[:,col].argsort(),:]PyNN-0.7.4/src/errors.py0000644000175000017500000000501011736323051015602 0ustar andrewandrew00000000000000# encoding: utf-8 """ Defines exceptions for the PyNN API InvalidParameterValueError NonExistentParameterError InvalidDimensionsError ConnectionError InvalidModelError RoundingWarning NothingToWriteError InvalidWeightError NotLocalError RecordingError :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. """ class InvalidParameterValueError(ValueError): """Inappropriate parameter value""" pass class NonExistentParameterError(Exception): """ Model parameter does not exist. """ def __init__(self, parameter_name, model_name, valid_parameter_names=['unknown']): Exception.__init__(self) self.parameter_name = parameter_name self.model_name = model_name self.valid_parameter_names = valid_parameter_names self.valid_parameter_names.sort() def __str__(self): return "%s (valid parameters for %s are: %s)" % (self.parameter_name, self.model_name, ", ".join(self.valid_parameter_names)) class InvalidDimensionsError(Exception): """Argument has inappropriate shape/dimensions.""" pass class ConnectionError(Exception): """Attempt to create an invalid connection or access a non-existent connection.""" pass class InvalidModelError(Exception): """Attempt to use a non-existent model type.""" pass class NoModelAvailableError(Exception): """The simulator does not implement the requested model.""" pass class RoundingWarning(Warning): """The argument has been rounded to a lower level of precision by the simulator.""" pass class NothingToWriteError(Exception): """There is no data available to write.""" pass # subclass IOError? class InvalidWeightError(Exception): # subclass ValueError? or InvalidParameterValueError? """Invalid value for the synaptic weight.""" pass class NotLocalError(Exception): """Attempt to access a cell or connection that does not exist on this node (but exists elsewhere).""" pass class RecordingError(Exception): # subclass AttributeError? 
"""Attempt to record a variable that does not exist for this cell type.""" def __init__(self, variable, cell_type): self.variable = variable self.cell_type = cell_type def __str__(self): return "Cannot record %s from cell type %s" % (self.variable, self.cell_type.__class__.__name__) PyNN-0.7.4/src/pcsim/0000755000175000017500000000000011774605211015036 5ustar andrewandrew00000000000000PyNN-0.7.4/src/pcsim/electrodes.py0000644000175000017500000000652011736323051017541 0ustar andrewandrew00000000000000""" Current source classes for the pcsim module. Classes: DCSource -- a single pulse of current of constant amplitude. StepCurrentSource -- a step-wise time-varying current. :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. $Id: electrodes.py 957 2011-05-03 13:44:15Z apdavison $ """ import numpy import pypcsim from pyNN.pcsim import simulator class CurrentSource(object): """Base class for a source of current to be injected into a neuron.""" def inject_into(self, cell_list): """Inject this current source into some cells.""" if simulator.state.num_processes == 1: delay = 0.0 else: delay = simulator.state.min_delay # perhaps it would be better to replicate the current source on each node, to avoid this delay for cell in cell_list: if cell.local and not cell.celltype.injectable: raise TypeError("Can't inject current into a spike source.") c = simulator.net.connect(self.input_node, cell, pypcsim.StaticAnalogSynapse(delay=0.001*delay)) self.connections.append(c) class StepCurrentSource(CurrentSource): """A step-wise time-varying current source.""" def __init__(self, times, amplitudes): """Construct the current source. Arguments: times -- list/array of times at which the injected current changes. amplitudes -- list/array of current amplitudes to be injected at the times specified in `times`. The injected current will be zero up until the first time in `times`. The current will continue at the final value in `amplitudes` until the end of the simulation. """ CurrentSource.__init__(self) assert len(times) == len(amplitudes), "times and amplitudes must be the same size (len(times)=%d, len(amplitudes)=%d" % (len(times), len(amplitudes)) self.times = times self.amplitudes = amplitudes n = len(times) durations = numpy.empty((n+1,)) levels = numpy.empty_like(durations) durations[0] = times[0] levels[0] = 0.0 t = numpy.array(times) try: durations[1:-1] = t[1:] - t[0:-1] except ValueError, e: raise ValueError("%s. durations[1:].shape=%s, t[1:].shape=%s, t[0:-1].shape=%s" % (e, durations[1:].shape, t[1:].shape, t[0:-1].shape)) levels[1:] = amplitudes[:] durations[-1] = 1e12 levels *= 1e-9 # nA --> A durations *= 1e-3 # s --> ms self.input_node = simulator.net.create(pypcsim.AnalogLevelBasedInputNeuron(levels, durations)) self.connections = [] print "created stepcurrentsource" class DCSource(StepCurrentSource): """Source producing a single pulse of current of constant amplitude.""" def __init__(self, amplitude=1.0, start=0.0, stop=None): """Construct the current source. 
Arguments: start -- onset time of pulse in ms stop -- end of pulse in ms amplitude -- pulse amplitude in nA """ times = [0.0, start, (stop or 1e99)] amplitudes = [0.0, amplitude, 0.0] StepCurrentSource.__init__(self, times, amplitudes) PyNN-0.7.4/src/pcsim/standardmodels/0000755000175000017500000000000011774605211020042 5ustar andrewandrew00000000000000PyNN-0.7.4/src/pcsim/standardmodels/cells.py0000644000175000017500000002613011736323051021515 0ustar andrewandrew00000000000000""" Standard cells for pcsim :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. $Id: cells.py 957 2011-05-03 13:44:15Z apdavison $ """ from pyNN import common, errors from pyNN.standardmodels import cells, build_translations, ModelNotAvailable import pypcsim import numpy import logging logger = logging.getLogger("PyNN") class IF_curr_alpha(cells.IF_curr_alpha): """Leaky integrate and fire model with fixed threshold and alpha-function- shaped post-synaptic current.""" translations = build_translations( ('tau_m', 'taum', 1e-3), ('cm', 'Cm', 1e-9), ('v_rest', 'Vresting', 1e-3), ('v_thresh', 'Vthresh', 1e-3), ('v_reset', 'Vreset', 1e-3), ('tau_refrac', 'Trefract', 1e-3), ('i_offset', 'Iinject', 1e-9), ('tau_syn_E', 'TauSynExc', 1e-3), ('tau_syn_I', 'TauSynInh', 1e-3), ) pcsim_name = "LIFCurrAlphaNeuron" simObjFactory = None setterMethods = {} def __init__(self, parameters): cells.IF_curr_alpha.__init__(self, parameters) self.parameters['Inoise'] = 0.0 self.simObjFactory = pypcsim.LIFCurrAlphaNeuron(**self.parameters) class IF_curr_exp(cells.IF_curr_exp): """Leaky integrate and fire model with fixed threshold and decaying-exponential post-synaptic current. (Separate synaptic currents for excitatory and inhibitory synapses.""" translations = build_translations( ('tau_m', 'taum', 1e-3), ('cm', 'Cm', 1e-9), ('v_rest', 'Vresting', 1e-3), ('v_thresh', 'Vthresh', 1e-3), ('v_reset', 'Vreset', 1e-3), ('tau_refrac', 'Trefract', 1e-3), ('i_offset', 'Iinject', 1e-9), ('tau_syn_E', 'TauSynExc', 1e-3), ('tau_syn_I', 'TauSynInh', 1e-3), ) pcsim_name = "LIFCurrExpNeuron" simObjFactory = None setterMethods = {} def __init__(self, parameters): cells.IF_curr_exp.__init__(self, parameters) self.parameters['Inoise'] = 0.0 self.simObjFactory = pypcsim.LIFCurrExpNeuron(**self.parameters) class IF_cond_alpha(cells.IF_cond_alpha): """Leaky integrate and fire model with fixed threshold and alpha-function- shaped post-synaptic conductance.""" translations = build_translations( ('tau_m', 'taum', 1e-3), ('cm', 'Cm', 1e-9), ('v_rest', 'Vresting', 1e-3), ('v_thresh', 'Vthresh', 1e-3), ('v_reset', 'Vreset', 1e-3), ('tau_refrac', 'Trefract', 1e-3), ('i_offset', 'Iinject', 1e-9), ('tau_syn_E', 'TauSynExc', 1e-3), ('tau_syn_I', 'TauSynInh', 1e-3), ('e_rev_E', 'ErevExc', 1e-3), ('e_rev_I', 'ErevInh', 1e-3), ) pcsim_name = "LIFCondAlphaNeuron" simObjFactory = None setterMethods = {} recordable = ['spikes', 'v'] def __init__(self, parameters): cells.IF_cond_alpha.__init__(self, parameters) self.parameters['Inoise'] = 0.0 self.simObjFactory = pypcsim.LIFCondAlphaNeuron(**self.parameters) class IF_cond_exp(cells.IF_cond_exp): """Leaky integrate and fire model with fixed threshold and exponentially-decaying post-synaptic conductance.""" translations = build_translations( ('tau_m', 'taum', 1e-3), ('cm', 'Cm', 1e-9), ('v_rest', 'Vresting', 1e-3), ('v_thresh', 'Vthresh', 1e-3), ('v_reset', 'Vreset', 1e-3), ('tau_refrac', 'Trefract', 1e-3), ('i_offset', 'Iinject', 1e-9), ('tau_syn_E', 'TauSynExc', 1e-3), 
('tau_syn_I', 'TauSynInh', 1e-3), ('e_rev_E', 'ErevExc', 1e-3), ('e_rev_I', 'ErevInh', 1e-3), ) pcsim_name = "LIFCondExpNeuron" simObjFactory = None setterMethods = {} recordable = ['spikes', 'v'] def __init__(self, parameters): cells.IF_cond_exp.__init__(self, parameters) self.parameters['Inoise'] = 0.0 self.simObjFactory = pypcsim.LIFCondExpNeuron(**self.parameters) # Implemented but not tested class SpikeSourcePoisson(cells.SpikeSourcePoisson): """Spike source, generating spikes according to a Poisson process.""" translations = build_translations( ('start', 'Tstart', 1e-3), ('rate', 'rate'), ('duration', 'duration', 1e-3) ) pcsim_name = 'PoissonInputNeuron' simObjFactory = None setterMethods = {} def __init__(self, parameters): cells.SpikeSourcePoisson.__init__(self, parameters) self.simObjFactory = pypcsim.PoissonInputNeuron(**self.parameters) def sanitize_spike_times(spike_times): """ PCSIM has a bug whereby the SpikingInputNeuron sometimes stops emitting spikes. I think this happens when two spikes fall in the same time step. This workaround removes any spikes after the first within a given time step. """ time_step = common.get_time_step() try: spike_times = numpy.array(spike_times, float) except ValueError, e: raise errors.InvalidParameterValueError("Spike times must be floats. %s" % e) bins = (spike_times/time_step).astype('int') mask = numpy.concatenate((numpy.array([True]), bins[1:] != bins[:-1])) if mask.sum() < len(bins): logger.warn("Spikes have been thrown away because they were too close together.") logger.debug(spike_times[(1-mask).astype('bool')]) if len(spike_times) > 0: return spike_times[mask] else: return spike_times class SpikeSourceArray(cells.SpikeSourceArray): """Spike source generating spikes at the times given in the spike_times array.""" translations = build_translations( ('spike_times', 'spikeTimes'), # 1e-3), ) pcsim_name = 'SpikingInputNeuron' simObjFactory = None setterMethods = {'spikeTimes':'setSpikes'} getterMethods = {'spikeTimes':'getSpikeTimes' } def __init__(self, parameters): cells.SpikeSourceArray.__init__(self, parameters) self.pcsim_object_handle = pypcsim.SpikingInputNeuron(**self.parameters) self.simObjFactory = pypcsim.SpikingInputNeuron(**self.parameters) @classmethod def translate(cls, parameters): """Translate standardized model parameters to simulator-specific parameters.""" translated_parameters = super(SpikeSourceArray, cls).translate(parameters) translated_parameters['spikeTimes'] = sanitize_spike_times(translated_parameters['spikeTimes']) # for why we used 'super' here, see http://blogs.gnome.org/jamesh/2005/06/23/overriding-class-methods-in-python/ # convert from ms to s - should really be done in common.py, but that doesn't handle lists, only arrays if isinstance(translated_parameters['spikeTimes'], list): translated_parameters['spikeTimes'] = [t*0.001 for t in translated_parameters['spikeTimes']] elif isinstance(translated_parameters['spikeTimes'], numpy.ndarray): translated_parameters['spikeTimes'] *= 0.001 return translated_parameters @classmethod def reverse_translate(cls, native_parameters): """Translate simulator-specific model parameters to standardized parameters.""" standard_parameters = super(SpikeSourceArray, cls).reverse_translate(native_parameters) if isinstance(standard_parameters['spike_times'], list): standard_parameters['spike_times'] = [t*1000.0 for t in standard_parameters['spike_times']] elif isinstance(standard_parameters['spike_times'], numpy.ndarray): standard_parameters['spike_times'] *= 1000.0 return
standard_parameters class EIF_cond_alpha_isfa_ista(ModelNotAvailable): pass #class EIF_cond_alpha_isfa_ista(cells.EIF_cond_alpha_isfa_ista): # """ # Exponential integrate and fire neuron with spike triggered and sub-threshold # adaptation currents (isfa, ista reps.) according to: # # Brette R and Gerstner W (2005) Adaptive Exponential Integrate-and-Fire Model as # an Effective Description of Neuronal Activity. J Neurophysiol 94:3637-3642 # # See also: IF_cond_exp_gsfa_grr # """ # # translations = build_translations( # ('cm' , 'Cm', 1e-9), # nF -> F # ('tau_refrac', 'Trefract', 1e-3), # ms -> s # ('v_spike' , 'Vpeak', 1e-3), # ('v_reset' , 'Vr', 1e-3), # ('v_rest' , 'El', 1e-3), # ('tau_m' , 'gl', "1e-6*cm/tau_m", "Cm/gl"), # units correct? # ('i_offset' , 'Iinject', 1e-9), # ('a' , 'a', 1e-9), # ('b' , 'b', 1e-9), # ('delta_T' , 'slope', 1e-3), # ('tau_w' , 'tau_w', 1e-3), # ('v_thresh' , 'Vt', 1e-3), # ('e_rev_E' , 'ErevExc', 1e-3), # ('tau_syn_E' , 'TauSynExc', 1e-3), # ('e_rev_I' , 'ErevInh', 1e-3), # ('tau_syn_I' , 'TauSynInh', 1e-3), # ) # pcsim_name = "aEIFCondAlphaNeuron" # simObjFactory = None # setterMethods = {} # # def __init__(self, parameters): # cells.EIF_cond_alpha_isfa_ista.__init__(self, parameters) # self.parameters['Inoise'] = 0.0 # limited_parameters = {} # problem that Trefract is not implemented # for k in ('a','b','Vt','Vr','El','gl','Cm','tau_w','slope','Vpeak', # 'Vinit','Inoise','Iinject', 'ErevExc', # 'TauSynExc', 'ErevInh', 'TauSynInh'): # limited_parameters[k] = self.parameters[k] # self.simObjFactory = getattr(pypcsim, EIF_cond_alpha_isfa_ista.pcsim_name)(**limited_parameters) class IF_facets_hardware1(ModelNotAvailable): pass class HH_cond_exp(ModelNotAvailable): pass #class HH_cond_exp(cells.HH_cond_exp): # """docstring needed here.""" # # translations = build_translations( # ('gbar_Na', 'gbar_Na'), # ('gbar_K', 'gbar_K'), # ('g_leak', 'Rm', '1/g_leak', '1/Rm'), # check HHNeuronTraubMiles91.h, not sure this is right # ('cm', 'Cm', 1000.0), # ('v_offset', 'V_T'), # PCSIM fixes V_T at -63 mV # ('e_rev_Na', 'E_Na'), # ('e_rev_K', 'E_K'), # ('e_rev_leak', 'Vresting'), # check HHNeuronTraubMiles91.h, not sure this is right # ('e_rev_E', 'ErevExc'), # ('e_rev_I', 'ErevInh'), # ('tau_syn_E', 'TauSynExc'), # ('tau_syn_I', 'TauSynInh'), # ('i_offset', 'Iinject', 1000.0), # ) # pcsim_name = "HHNeuronTraubMiles91" # simObjFactory = None # setterMethods = {} # # def __init__(self, parameters): # cells.HH_cond_exp.__init__(self, parameters) # self.parameters['Inoise'] = 0.0 # self.simObjFactory = pypcsim.LIFCondAlphaNeuron(**self.parameters) class SpikeSourceInhGamma(ModelNotAvailable): pass class IF_cond_exp_gsfa_grr(ModelNotAvailable): pass PyNN-0.7.4/src/pcsim/standardmodels/__init__.py0000644000175000017500000000000011736323051022136 0ustar andrewandrew00000000000000PyNN-0.7.4/src/pcsim/standardmodels/synapses.py0000644000175000017500000002132511736323051022261 0ustar andrewandrew00000000000000""" Synapse Dynamics classes for pcsim :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. 
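Example of composing these classes into a plasticity specification (a sketch, following the PyNN 0.7 API; parameter values are illustrative)::

    >>> stdp = STDPMechanism(
    ...     timing_dependence=SpikePairRule(tau_plus=20.0, tau_minus=20.0),
    ...     weight_dependence=AdditiveWeightDependence(w_min=0.0, w_max=0.01,
    ...                                                A_plus=0.01, A_minus=0.012))
    >>> syn_dyn = SynapseDynamics(slow=stdp)  # w_min must be 0.0 for PCSIM

The resulting ``syn_dyn`` object is what gets passed as the ``synapse_dynamics`` argument of a ``Projection``.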
$Id: synapses.py 957 2011-05-03 13:44:15Z apdavison $ """ from pyNN.standardmodels import synapses, build_translations, SynapseDynamics, STDPMechanism import pypcsim synapse_models = [s for s in dir(pypcsim) if 'Synapse' in s] def get_synapse_models(criterion): return set([s for s in synapse_models if criterion in s]) conductance_based_synapse_models = get_synapse_models("Cond") current_based_synapse_models = get_synapse_models("Curr") alpha_function_synapse_models = get_synapse_models("Alpha") double_exponential_synapse_models = get_synapse_models("DoubleExp") single_exponential_synapse_models = get_synapse_models("Exp").difference(double_exponential_synapse_models) #stdp_synapse_models = get_synapse_models("Stdp") stdp_synapse_models = set(["StaticStdpSynapse", # CurrExp "StaticStdpCondExpSynapse", "DynamicStdpSynapse", # CurrExp "DynamicStdpCondExpSynapse"]) #dynamic_synapse_models = get_synapse_models("Dynamic") dynamic_synapse_models = set(["DynamicSpikingSynapse", # DynamicCurrExpSynapse "DynamicCondExpSynapse", "DynamicCurrAlphaSynapse", "DynamicCondAlphaSynapse", "DynamicStdpSynapse", # CurrExp "DynamicStdpCondExpSynapse", # there don't seem to be any alpha-function STDP synapse models ]) class STDPMechanism(STDPMechanism): """Specification of STDP models.""" def __init__(self, timing_dependence=None, weight_dependence=None, voltage_dependence=None, dendritic_delay_fraction=1.0): # not sure what the situation is with dendritic_delay_fraction in PCSIM super(STDPMechanism, self).__init__(timing_dependence, weight_dependence, voltage_dependence, dendritic_delay_fraction) class TsodyksMarkramMechanism(synapses.TsodyksMarkramMechanism): translations = build_translations( ('U', 'U'), ('tau_rec', 'D', 1e-3), ('tau_facil', 'F', 1e-3), ('u0', 'u0'), ('x0', 'r0' ), # I'm not at all sure this ('y0', 'f0') # translation is correct # need to look at the source code ) #possible_models = get_synapse_models("Dynamic") possible_models = dynamic_synapse_models def __init__(self, U=0.5, tau_rec=100.0, tau_facil=0.0, u0=0.0, x0=1.0, y0=0.0): #synapses.TsodyksMarkramMechanism.__init__(self, U, tau_rec, tau_facil, u0, x0, y0) parameters = dict(locals()) # need the dict to get a copy of locals. When running parameters.pop('self') # through coverage.py, for some reason, the pop() doesn't have any effect self.parameters = self.translate(parameters) class AdditiveWeightDependence(synapses.AdditiveWeightDependence): """ The amplitude of the weight change is fixed for depression (`A_minus`) and for potentiation (`A_plus`). If the new weight would be less than `w_min` it is set to `w_min`. If it would be greater than `w_max` it is set to `w_max`. """ translations = build_translations( ('w_max', 'Wex', 1e-9), # unit conversion. This exposes a limitation of the current # translation machinery, because this value depends on the # type of the post-synaptic cell. We currently work around # this using the "scales_with_weight" attribute, although # this breaks reverse translation. ('w_min', 'w_min_always_zero_in_PCSIM'), ('A_plus', 'Apos', '1e-9*A_plus*w_max', '1e9*Apos/w_max'), # note that here Apos and Aneg ('A_minus', 'Aneg', '-1e-9*A_minus*w_max', '-1e9*Aneg/w_max'), # have the same units as the weight ) possible_models = stdp_synapse_models scales_with_weight = ['Wex', 'Apos', 'Aneg'] def __init__(self, w_min=0.0, w_max=1.0, A_plus=0.01, A_minus=0.01): # units? 
if w_min != 0: raise Exception("Non-zero minimum weight is not supported by PCSIM.") #synapses.AdditiveWeightDependence.__init__(self, w_min, w_max, A_plus, A_minus) parameters = dict(locals()) parameters.pop('self') self.parameters = self.translate(parameters) self.parameters['useFroemkeDanSTDP'] = False self.parameters['mupos'] = 0.0 self.parameters['muneg'] = 0.0 self.parameters.pop('w_min_always_zero_in_PCSIM') class MultiplicativeWeightDependence(synapses.MultiplicativeWeightDependence): """ The amplitude of the weight change depends on the current weight. For depression, Dw propto w-w_min For potentiation, Dw propto w_max-w """ translations = build_translations( ('w_max', 'Wex', 1e-9), # unit conversion ('w_min', 'w_min_always_zero_in_PCSIM'), ('A_plus', 'Apos'), # here Apos and Aneg ('A_minus', 'Aneg', -1), # are dimensionless ) possible_models = stdp_synapse_models scales_with_weight = ['Wex'] def __init__(self, w_min=0.0, w_max=1.0, A_plus=0.01, A_minus=0.01): # units? if w_min != 0: raise Exception("Non-zero minimum weight is not supported by PCSIM.") #synapses.MultiplicativeWeightDependence.__init__(self, w_min, w_max, A_plus, A_minus) parameters = dict(locals()) parameters.pop('self') self.parameters = self.translate(parameters) self.parameters['useFroemkeDanSTDP'] = False self.parameters['mupos'] = 1.0 self.parameters['muneg'] = 1.0 self.parameters.pop('w_min_always_zero_in_PCSIM') class AdditivePotentiationMultiplicativeDepression(synapses.AdditivePotentiationMultiplicativeDepression): """ The amplitude of the weight change depends on the current weight for depression (Dw propto w-w_min) and is fixed for potentiation. """ translations = build_translations( ('w_max', 'Wex', 1e-9), # unit conversion ('w_min', 'w_min_always_zero_in_PCSIM'), ('A_plus', 'Apos', 1e-9), # Apos has the same units as the weight ('A_minus', 'Aneg', -1), # Aneg is dimensionless ) possible_models = stdp_synapse_models scales_with_weight = ['Wex', 'Apos'] def __init__(self, w_min=0.0, w_max=1.0, A_plus=0.01, A_minus=0.01): # units? if w_min != 0: raise Exception("Non-zero minimum weight is not supported by PCSIM.") #synapses.AdditivePotentiationMultiplicativeDepression.__init__(self, w_min, w_max, A_plus, A_minus) parameters = dict(locals()) parameters.pop('self') self.parameters = self.translate(parameters) self.parameters['useFroemkeDanSTDP'] = False self.parameters['mupos'] = 0.0 self.parameters['muneg'] = 1.0 self.parameters.pop('w_min_always_zero_in_PCSIM') class GutigWeightDependence(synapses.GutigWeightDependence): """ The amplitude of the weight change depends on the current weight. For depression, Dw propto w-w_min For potentiation, Dw propto w_max-w """ translations = build_translations( ('w_max', 'Wex', 1e-9), # unit conversion ('w_min', 'w_min_always_zero_in_PCSIM'), ('A_plus', 'Apos'), ('A_minus', 'Aneg', -1), ('mu_plus', 'mupos'), ('mu_minus', 'muneg') ) possible_models = stdp_synapse_models def __init__(self, w_min=0.0, w_max=1.0, A_plus=0.01, A_minus=0.01, mu_plus=0.5, mu_minus=0.5): # units? 
if w_min != 0: raise Exception("Non-zero minimum weight is not supported by PCSIM.") #synapses.AdditivePotentiationMultiplicativeDepression.__init__(self, w_min, w_max, A_plus, A_minus) parameters = dict(locals()) parameters.pop('self') self.parameters = self.translate(parameters) self.parameters['useFroemkeDanSTDP'] = False self.parameters.pop('w_min_always_zero_in_PCSIM') class SpikePairRule(synapses.SpikePairRule): translations = build_translations( ('tau_plus', 'taupos', 1e-3), ('tau_minus', 'tauneg', 1e-3), ) possible_models = stdp_synapse_models def __init__(self, tau_plus=20.0, tau_minus=20.0): #synapses.SpikePairRule.__init__(self, tau_plus, tau_minus) parameters = dict(locals()) parameters.pop('self') self.parameters = self.translate(parameters) self.parameters['STDPgap'] = 0.0 PyNN-0.7.4/src/pcsim/connectors_old.py0000644000175000017500000001263311736323051020425 0ustar andrewandrew00000000000000# ============================================================================== # Connection method classes for pcsim # $Id: connectors_old.py 601 2009-05-14 10:17:16Z apdavison $ # ============================================================================== from pyNN import common, connectors import pypcsim from pyNN.pcsim import simulator import numpy class ListConnectionPredicate(pypcsim.PyConnectionDecisionPredicate): """Used by FromListConnector and FromFileConnector.""" def __init__(self, conn_array): pypcsim.PyConnectionDecisionPredicate.__init__(self) # now need to turn conn_list into a form suitable for use by decide() # a sparse array would be one possibility, but for now we use a dict of dicts self._connections = {} for i in xrange(len(conn_array)): src, tgt = conn_array[i][:] if src not in self._connections: self._connections[src] = [] self._connections[src].append(tgt) def decide(self, src, tgt, rng): if src in self._connections and tgt in self._connections[src]: return True else: return False class AllToAllConnector(connectors.AllToAllConnector): def connect(self, projection): # what about allow_self_connections? 
decider = pypcsim.RandomConnections(1) wiring_method = pypcsim.DistributedSyncWiringMethod(simulator.net) return decider, wiring_method, self.weights, self.delays class OneToOneConnector(connectors.OneToOneConnector): def connect(self, projection): if projection.pre.dim == projection.post.dim: decider = pypcsim.RandomConnections(1) wiring_method = pypcsim.OneToOneWiringMethod(simulator.net) return decider, wiring_method, self.weights, self.delays else: raise Exception("Connection method not yet implemented for the case where presynaptic and postsynaptic Populations have different sizes.") class FixedProbabilityConnector(connectors.FixedProbabilityConnector): def connect(self, projection): decider = pypcsim.RandomConnections(float(self.p_connect)) wiring_method = pypcsim.DistributedSyncWiringMethod(simulator.net) return decider, wiring_method, self.weights, self.delays #class DistanceDependentProbabilityConnector(connectors.DistanceDependentProbabilityConnector): # # def connect(self, projection): # decider = pypcsim.EuclideanDistanceRandomConnections(method_parameters[0], method_parameters[1]) # wiring_method = pypcsim.DistributedSyncWiringMethod(simulator.net) # return decider, wiring_method, self.weights, self.delays class FixedNumberPreConnector(connectors.FixedNumberPreConnector): def connect(self, projection): decider = pypcsim.DegreeDistributionConnections(pypcsim.ConstantNumber(self.n), pypcsim.DegreeDistributionConnections.incoming) wiring_method = pypcsim.SimpleAllToAllWiringMethod(simulator.net) return decider, wiring_method, self.weights, self.delays class FixedNumberPostConnector(connectors.FixedNumberPostConnector): def connect(self, projection): decider = pypcsim.DegreeDistributionConnections(pypcsim.ConstantNumber(self.n), pypcsim.DegreeDistributionConnections.outgoing) wiring_method = pypcsim.SimpleAllToAllWiringMethod(simulator.net) return decider, wiring_method, self.weights, self.delays class FromListConnector(connectors.FromListConnector): def connect(self, projection): conn_array = numpy.zeros((len(self.conn_list),4)) for i in xrange(len(self.conn_list)): src, tgt, weight, delay = self.conn_list[i][:] src = projection.pre[tuple(src)] tgt = projection.post[tuple(tgt)] conn_array[i,:] = (src, tgt, weight, delay) self.weights = conn_array[:,2] self.delays = conn_array[:,3] lcp = ListConnectionPredicate(conn_array[:,0:2]) decider = pypcsim.PredicateBasedConnections(lcp) wiring_method = pypcsim.SimpleAllToAllWiringMethod(simulator.net) # pcsim does not yet deal with having lists of weights, delays, so for now we just return 0 values # and will set the weights, delays later return decider, wiring_method, self.weights, self.delays class FromFileConnector(connectors.FromFileConnector): def connect(self, projection): f = open(self.filename, 'r', 10000) lines = f.readlines() f.close() conn_array = numpy.zeros((len(lines),4)) for i,line in enumerate(lines): single_line = line.rstrip() src, tgt, w, d = single_line.split("\t", 4) src = "[%s" % src.split("[",1)[1] tgt = "[%s" % tgt.split("[",1)[1] src = projection.pre[tuple(eval(src))] tgt = projection.post[tuple(eval(tgt))] conn_array[i,:] = (src, tgt, w, d) self.weights = conn_array[:,2] self.delays = conn_array[:,3] lcp = ListConnectionPredicate(conn_array[:,0:2]) decider = pypcsim.PredicateBasedConnections(lcp) wiring_method = pypcsim.SimpleAllToAllWiringMethod(simulator.net) # pcsim does not yet deal with having lists of weights, delays, so for now we just return 0 values # and will set the weights, delays later return decider, 
wiring_method, self.weights, self.delays PyNN-0.7.4/src/pcsim/__init__.py0000644000175000017500000010313311736323051017145 0ustar andrewandrew00000000000000# encoding: utf-8 """ pypcsim implementation of the PyNN API. Dejan Pecevski dejan@igi.tugraz.at Thomas Natschlaeger thomas.natschlaeger@scch.at Andrew Davison davison@unic.cnrs-gif.fr December 2006- :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. $Id: __init__.py 957 2011-05-03 13:44:15Z apdavison $ """ __version__ = "$Revision: 957 $" import sys import pyNN.random from pyNN.random import * from pyNN import common, recording, errors, space, core, __doc__ from pyNN.pcsim import simulator common.simulator = simulator recording.simulator = simulator import os.path import types import sys import numpy import pypcsim from pyNN.pcsim.standardmodels.cells import * from pyNN.pcsim.connectors import * from pyNN.pcsim.standardmodels.synapses import * from pyNN.pcsim.electrodes import * from pyNN.pcsim.recording import * from pyNN import standardmodels try: import tables except ImportError: pass import exceptions from datetime import datetime import operator Set = set ID = simulator.ID # ============================================================================== # Utility classes # ============================================================================== # Implementation of the NativeRNG class NativeRNG(pyNN.random.NativeRNG): def __init__(self, seed=None, type='MersenneTwister19937'): pyNN.random.AbstractRNG.__init__(self, seed) self.rndEngine = getattr(pypcsim, type)() if not self.seed: self.seed = int(datetime.today().microsecond) self.rndEngine.seed(self.seed) def next(self, n=1, distribution='Uniform', parameters={'a':0,'b':1}, mask_local=None): """Return n random numbers from the distribution. If n is 1, return a float, if n > 1, return a numpy array, if n <= 0, raise an Exception.""" distribution_type = getattr(pypcsim, distribution + "Distribution") if isinstance(parameters, dict): dist = apply(distribution_type, (), parameters) else: dist = apply(distribution_type, tuple(parameters), {}) values = [ dist.get(self.rndEngine) for i in xrange(n) ] if n == 1: return values[0] else: return values def list_standard_models(): """Return a list of all the StandardCellType classes available for this simulator.""" setup() standard_cell_types = [obj for obj in globals().values() if isinstance(obj, type) and issubclass(obj, standardmodels.StandardCellType)] for cell_class in standard_cell_types: try: create(cell_class) except Exception, e: print "Warning: %s is defined, but produces the following error: %s" % (cell_class.__name__, e) standard_cell_types.remove(cell_class) return [obj.__name__ for obj in standard_cell_types] class WDManager(object): def getWeight(self, w=None): if w is not None: weight = w else: weight = 1. 
return weight def getDelay(self, d=None): if d is not None: delay = d else: delay = simulator.state.min_delay return delay def convertWeight(self, w, conductance): if conductance: w_factor = 1e-6 # Convert from µS to S else: w_factor = 1e-9 # Convert from nA to A if isinstance(w, pyNN.random.RandomDistribution): weight = pyNN.random.RandomDistribution(w.name, w.parameters, w.rng) if weight.name == "uniform": (w_min, w_max) = weight.parameters weight.parameters = (w_factor*w_min, w_factor*w_max) elif weight.name == "normal": (w_mean, w_std) = weight.parameters weight.parameters = (w_factor*w_mean, w_factor*w_std) else: print "WARNING: no conversion of the weights for this particular distribution" else: weight = w*w_factor return weight def reverse_convertWeight(self, w, conductance): if conductance: w_factor = 1e6 # Convert from S to µS else: w_factor = 1e9 # Convert from A to nA return w*w_factor def convertDelay(self, d): if isinstance(d, pyNN.random.RandomDistribution): delay = pyNN.random.RandomDistribution(d.name, d.parameters, d.rng) if delay.name == "uniform": (d_min, d_max) = delay.parameters delay.parameters = (d_min/1000., d_max/1000.) elif delay.name == "normal": (d_mean, d_std) = delay.parameters delay.parameters = (d_mean/1000., d_std/1000.) else: delay = d/1000. return delay # ============================================================================== # Functions for simulation set-up and control # ============================================================================== def setup(timestep=0.1, min_delay=0.1, max_delay=10.0, **extra_params): """ Should be called at the very beginning of a script. extra_params contains any keyword arguments that are required by a given simulator but not by others. For pcsim, the possible arguments are 'construct_rng_seed' and 'simulation_rng_seed'.
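Example (a sketch; the seed values are illustrative)::

    >>> import pyNN.pcsim as sim
    >>> sim.setup(timestep=0.1, min_delay=0.1, max_delay=10.0,
    ...           threads=2, construct_rng_seed=12345,
    ...           simulation_rng_seed=67890)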
""" if simulator.state.constructRNGSeed is None: if extra_params.has_key('construct_rng_seed'): construct_rng_seed = extra_params['construct_rng_seed'] else: construct_rng_seed = datetime.today().microsecond simulator.state.constructRNGSeed = construct_rng_seed if simulator.state.simulationRNGSeed is None: if extra_params.has_key('simulation_rng_seed'): simulation_rng_seed = extra_params['simulation_rng_seed'] else: simulation_rng_seed = datetime.today().microsecond simulator.state.simulationRNGSeed = simulation_rng_seed if extra_params.has_key('threads'): simulator.net = pypcsim.DistributedMultiThreadNetwork( extra_params['threads'], pypcsim.SimParameter( pypcsim.Time.ms(timestep), pypcsim.Time.ms(min_delay), pypcsim.Time.ms(max_delay), simulator.state.constructRNGSeed, simulator.state.simulationRNGSeed)) else: simulator.net = pypcsim.DistributedSingleThreadNetwork( pypcsim.SimParameter( pypcsim.Time.ms(timestep), pypcsim.Time.ms(min_delay), pypcsim.Time.ms(max_delay), simulator.state.constructRNGSeed, simulator.state.simulationRNGSeed)) simulator.state.t = 0 #simulator.state.dt = timestep # seems to mess up the net object simulator.state.min_delay = min_delay simulator.state.max_delay = max_delay common.setup(timestep, min_delay, max_delay, **extra_params) return simulator.net.mpi_rank() def end(compatible_output=True): """Do any necessary cleaning up before exiting.""" for recorder in simulator.recorder_list: recorder.write(gather=True, compatible_output=compatible_output) simulator.recorder_list = [] def run(simtime): """Run the simulation for simtime ms.""" simulator.state.t += simtime simulator.net.advance(int(simtime / simulator.state.dt )) return simulator.state.t reset = common.reset initialize = common.initialize get_current_time = common.get_current_time get_time_step = common.get_time_step get_min_delay = common.get_min_delay get_max_delay = common.get_max_delay num_processes = common.num_processes rank = common.rank # ============================================================================== # High-level API for creating, connecting and recording from populations of # neurons. # ============================================================================== class Population(common.Population): """ An array of neurons all of the same type. `Population' is used as a generic term intended to include layers, columns, nuclei, etc., of cells. """ recorder_class = Recorder def __init__(self, size, cellclass, cellparams=None, structure=None, label=None, parent=None): __doc__ = common.Population.__doc__ common.Population.__init__(self, size, cellclass, cellparams, structure, label) def _create_cells(self, cellclass, cellparams, n): """ Create cells in PCSIM. `cellclass` -- a PyNN standard cell or a native PCSIM cell class. `cellparams` -- a dictionary of cell parameters. `n` -- the number of cells to create `parent` -- the parent Population, or None if the cells don't belong to a Population. 
This function is used by both `create()` and `Population.__init__()` Return: - a 1D array of all cell IDs - a 1D boolean array indicating which IDs are present on the local MPI node - the ID of the first cell created - the ID of the last cell created """ global net assert n > 0, 'n must be a positive integer' # if isinstance(cellclass, str): # if not cellclass in dir(pypcsim): # raise errors.InvalidModelError('Trying to create non-existent cellclass ' + cellclass ) # cellclass = getattr(pypcsim, cellclass) # self.celltype = cellclass # if issubclass(cellclass, standardmodels.StandardCellType): self.celltype = cellclass(cellparams) self.cellfactory = self.celltype.simObjFactory # else: # self.celltype = cellclass # if issubclass(cellclass, pypcsim.SimObject): # self.cellfactory = cellclass(**cellparams) # else: # raise exceptions.AttributeError('Trying to create non-existent cellclass ' + cellclass.__name__ ) self.all_cells = numpy.array([id for id in simulator.net.add(self.cellfactory, n)], simulator.ID) self.first_id = self.all_cells[0] self.last_id = self.all_cells[-1] # mask_local is used to extract those elements from arrays that apply to the cells on the current node self._mask_local = numpy.array([simulator.is_local(id) for id in self.all_cells]) for i,id in enumerate(self.all_cells): self.all_cells[i] = simulator.ID(self.all_cells[i]) self.all_cells[i].parent = self # CuboidGridPopulation(SimNetwork &net, GridPoint3D origin, Volume3DSize dims, SimObjectFactory &objFactory) ##self.pcsim_population = pypcsim.CuboidGridObjectPopulation( ## simulator.net, ## pypcsim.GridPoint3D(0,0,0), ## pypcsim.Volume3DSize(dims[0], dims[1], dims[2]), ## self.cellfactory) ##self.cell = numpy.array(self.pcsim_population.idVector()) ##self.first_id = 0 ##self.cell -= self.cell[0] ##self.all_cells = self.cell ##self.local_cells = numpy.array(self.pcsim_population.localIndexes()) ##def __getitem__(self, addr): ## """Return a representation of the cell with coordinates given by addr, ## suitable for being passed to other methods that require a cell id. ## Note that __getitem__ is called when using [] access, e.g. ## p = Population(...) ## p[2,3] is equivalent to p.__getitem__((2,3)). ## """ ## if isinstance(addr, int): ## addr = (addr,) ## if len(addr) != self.actual_ndim: ## raise errors.InvalidDimensionsError, "Population has %d dimensions. Address was %s" % (self.actual_ndim, str(addr)) ## orig_addr = addr; ## while len(addr) < 3: ## addr += (0,) ## index = 0 ## for i, s in zip(addr, self.steps): ## index += i*s ## pcsim_index = self.pcsim_population.getIndex(addr[0], addr[1], addr[2]) ## assert index == pcsim_index, " index = %s, pcsim_index = %s" % (index, pcsim_index) ## id = ID(pcsim_index) ## id.parent = self ## if orig_addr != self.locate(id): ## raise IndexError, 'Invalid cell address %s' % str(addr) ## assert orig_addr == self.locate(id), 'index=%s addr=%s id=%s locate(id)=%s' % (index, orig_addr, id, self.locate(id)) ## return id ##def __iter__(self): ## return self.__gid_gen() def __gid_gen(self): """ Generator to produce an iterator over all cells on this node, returning gids. """ ids = self.pcsim_population.idVector() for i in ids: id = ID(i-ids[0]) id.parent = self yield id def id_to_index(self, id): cells = self.all_cells ## supposed to support id being a list/array of IDs. 
        ## For now, restrict to single ID
        ##if hasattr(id, '__len__'):
        ##    res = []
        ##    for item in id:
        ##        res.append(numpy.where(cells == item)[0][0])
        ##    return numpy.array(res)
        ##else:
        return cells.tolist().index(id) # because ids may not be consecutive when running a distributed sim

    ##def getObjectID(self, index):
    ##    return self.pcsim_population[index]

    ##def __len__(self):
    ##    """Return the total number of cells in the population."""
    ##    return self.pcsim_population.size()

    ##def tset(self, parametername, value_array):
    ##    """
    ##    'Topographic' set. Set the value of parametername to the values in
    ##    value_array, which must have the same dimensions as the Population.
    ##    """
    ##    """PCSIM: iteration and set """
    ##    if self.dim[0:self.actual_ndim] == value_array.shape:
    ##        values = numpy.copy(value_array) # we do not wish to change the original value_array in case it needs to be reused in user code
    ##        for cell, val in zip(self, values):
    ##            cell.set_parameters(**{parametername: val})
    ##    elif len(value_array.shape) == len(self.dim[0:self.actual_ndim])+1: # the values are themselves 1D arrays
    ##        for cell,addr in zip(self.ids(), self.addresses()):
    ##            val = value_array[addr]
    ##            setattr(cell, parametername, val)
    ##    else:
    ##        raise errors.InvalidDimensionsError

    ##def rset(self, parametername, rand_distr):
    ##    """
    ##    'Random' set. Set the value of parametername to a value taken from
    ##    rand_distr, which should be a RandomDistribution object.
    ##    """
    ##    """
    ##    Will be implemented in the future more efficiently for
    ##    NativeRNGs.
    ##    """
    ##    rarr = numpy.array(rand_distr.next(n=self.size))
    ##    rarr = rarr.reshape(self.dim[0:self.actual_ndim])
    ##    self.tset(parametername, rarr)

    def _call(self, methodname, arguments):
        """
        Calls the method methodname(arguments) for every cell in the population.
        e.g. p.call("set_background","0.1") if the cell class has a method set_background().
        """
        """
        This works nicely for PCSIM for simulator specific cells, because cells
        (SimObject classes) are directly wrapped in python
        """
        for i in xrange(0, len(self)):
            obj = simulator.net.object(self.pcsim_population[i])
            if obj:
                getattr(obj, methodname)(arguments)

    def _tcall(self, methodname, objarr):
        """
        `Topographic' call. Calls the method methodname() for every cell in the
        population. The argument to the method depends on the coordinates of the
        cell. objarr is an array with the same dimensions as the Population.
        e.g. p.tcall("memb_init", vinitArray) calls
        p.cell[i][j].memb_init(vInitArray[i][j]) for all i, j.
        """
        """ PCSIM: iteration at the python level and apply"""
        for i in xrange(0, len(self)):
            obj = simulator.net.object(self.pcsim_population[i])
            if obj:
                getattr(obj, methodname)(objarr[i])


PopulationView = common.PopulationView

Assembly = common.Assembly


class Projection(common.Projection, WDManager):
    """
    A container for all the connections of a given type (same synapse type
    and plasticity mechanisms) between two populations, together with methods
    to set parameters of those connections, including of plasticity
    mechanisms.
    """

    nProj = 0

    def __init__(self, presynaptic_population, postsynaptic_population, method,
                 source=None, target=None, synapse_dynamics=None, label=None, rng=None):
        """
        presynaptic_population and postsynaptic_population - Population objects.

        source - string specifying which attribute of the presynaptic cell
                 signals action potentials

        target - string specifying which synapse on the postsynaptic cell to
                 connect to

        If source and/or target are not given, default values are used.

        method - a Connector object, encapsulating the algorithm to use for
                 connecting the neurons.

        synapse_dynamics - a `SynapseDynamics` object specifying which
                 synaptic plasticity mechanisms to use.

        rng - specify an RNG object to be used by the Connector.
        """
        """
        PCSIM implementation specific comments:
            - the source parameter does not have any meaning in the context of the
              PyPCSIM interface. Action potential signals are predefined by the
              neuron model and each cell has only one source, so there is no need
              to name a source, since it is implicitly known.
            - the rng parameter is also not currently applicable. For connection
              making, only internal random number generators can be used.
            - The semantics of the target parameter is slightly changed:
                If it is a string then it represents a pcsim synapse class.
                If it is an integer then it represents which target (synapse) on
                the postsynaptic cell to connect to.
                It can also be a pcsim SimObjectFactory object, which will be used
                to create the synapse objects associated with the created
                connections.
        """
        common.Projection.__init__(self, presynaptic_population, postsynaptic_population,
                                   method, source, target, synapse_dynamics, label, rng)
        self.is_conductance = self.post.conductance_based
        if isinstance(self.post, Assembly):
            assert self.post._homogeneous_synapses
            celltype = self.post.populations[0].celltype
        else:
            celltype = self.post.celltype
        self.synapse_shape = ("alpha" in celltype.__class__.__name__) and "alpha" or "exp"

        ### Determine connection decider
        ##decider, wiring_method, weight, delay = method.connect(self)
        ##
        ##weight = self.getWeight(weight)
        ##self.is_conductance = hasattr(self.post.pcsim_population.object(0),'ErevExc')
        ##
        ##if isinstance(weight, pyNN.random.RandomDistribution) or hasattr(weight, '__len__'):
        ##    w = 1.
        ##else:
        ##    w = self.convertWeight(weight, self.is_conductance)
        ##
        ##delay = self.getDelay(delay)
        ##if isinstance(delay, pyNN.random.RandomDistribution) or hasattr(delay, '__len__'):
        ##    d = simulator.state.min_delay/1000.
        ##else:
        ##    d = self.convertDelay(delay)
        ##
        # handle synapse dynamics
        if core.is_listlike(method.weights):
            w = method.weights[0]
        elif hasattr(method.weights, "next"): # random distribution
            w = 0.0 # actual value used here shouldn't matter. Actual values will be set in the Connector.
        elif isinstance(method.weights, basestring):
            w = 0.0 # actual value used here shouldn't matter. Actual values will be set in the Connector.
        elif hasattr(method.weights, 'func_name'):
            w = 0.0 # actual value used here shouldn't matter. Actual values will be set in the Connector.
        else:
            w = method.weights
        if core.is_listlike(method.delays):
            d = min(method.delays)
        elif hasattr(method.delays, "next"): # random distribution
            d = get_min_delay() # actual value used here shouldn't matter. Actual values will be set in the Connector.
        elif isinstance(method.delays, basestring):
            d = get_min_delay() # actual value used here shouldn't matter. Actual values will be set in the Connector.
        elif hasattr(method.delays, 'func_name'):
            d = 0.0 # actual value used here shouldn't matter. Actual values will be set in the Connector.
else: d = method.delays plasticity_parameters = {} if self.synapse_dynamics: # choose the right model depending on whether we have conductance- or current-based synapses if self.is_conductance: possible_models = get_synapse_models("Cond") else: possible_models = get_synapse_models("Curr").union(get_synapse_models("CuBa")) if self.synapse_shape == 'alpha': possible_models = possible_models.intersection(get_synapse_models("Alpha")) else: possible_models = possible_models.intersection(get_synapse_models("Exp")).difference(get_synapse_models("DoubleExp")) if not self.is_conductance and self.synapse_shape is "exp": possible_models.add("StaticStdpSynapse") possible_models.add("StaticSpikingSynapse") possible_models.add("DynamicStdpSynapse") possible_models.add("DynamicSpikingSynapse") # we need to know the synaptic time constant, which is a property of the # post-synaptic cell in PyNN. Here, we get it from the Population initial # value, but this is a problem if tau_syn varies from cell to cell if target in (None, 'excitatory'): tau_syn = self.post.celltype.parameters['TauSynExc'] if self.is_conductance: e_syn = self.post.celltype.parameters['ErevExc'] elif target == 'inhibitory': tau_syn = self.post.celltype.parameters['TauSynInh'] if self.is_conductance: e_syn = self.post.celltype.parameters['ErevInh'] else: raise Exception("Currently, target must be one of 'excitatory', 'inhibitory' with dynamic synapses") if self.is_conductance: plasticity_parameters.update(Erev=e_syn) weight_scale_factor = 1e-6 else: weight_scale_factor = 1e-9 if self.synapse_dynamics.fast: possible_models = possible_models.intersection(self.synapse_dynamics.fast.possible_models) plasticity_parameters.update(self.synapse_dynamics.fast.parameters) # perhaps need to ensure that STDP is turned off here, to be turned back on by the next block else: possible_models = possible_models.difference(dynamic_synapse_models) # imported from synapses module if self.synapse_dynamics.slow: possible_models = possible_models.intersection(self.synapse_dynamics.slow.possible_models) plasticity_parameters.update(self.synapse_dynamics.slow.all_parameters) dendritic_delay = self.synapse_dynamics.slow.dendritic_delay_fraction * d transmission_delay = d - dendritic_delay plasticity_parameters.update({'back_delay': 2*0.001*dendritic_delay, 'Winit': w*weight_scale_factor}) # hack to work around the limitations of the translation method if self.is_conductance: for name in self.synapse_dynamics.slow.weight_dependence.scales_with_weight: plasticity_parameters[name] *= 1e3 # a scale factor of 1e-9 is always applied in the translation stage else: possible_models = possible_models.difference(stdp_synapse_models) plasticity_parameters.update({'W': w*weight_scale_factor}) if len(possible_models) == 0: raise errors.NoModelAvailableError("The synapse model requested is not available.") synapse_type = getattr(pypcsim, list(possible_models)[0]) try: self.syn_factory = synapse_type(delay=d, tau=tau_syn, **plasticity_parameters) except Exception, err: err.args = ("%s\nActual arguments were: delay=%g, tau=%g, plasticity_parameters=%s" % (err.message, d, tau_syn, plasticity_parameters),) + err.args[1:] raise else: if not target: self.syn_factory = pypcsim.SimpleScalingSpikingSynapse(1, w, d) elif isinstance(target, int): self.syn_factory = pypcsim.SimpleScalingSpikingSynapse(target, w, d) else: if isinstance(target, str): if target == 'excitatory': self.syn_factory = pypcsim.SimpleScalingSpikingSynapse(1, w, d) elif target == 'inhibitory': self.syn_factory = 
pypcsim.SimpleScalingSpikingSynapse(2, w, d) else: target = eval(target) self.syn_factory = target({}) else: self.syn_factory = target ##self.pcsim_projection = pypcsim.ConnectionsProjection(self.pre.pcsim_population, self.post.pcsim_population, ## self.syn_factory, decider, wiring_method, collectIDs = True, ## collectPairs=True) ## ########## Should be removed and better implemented by using ### the fact that those random Distribution can be passed directly ### while the network is build, and not set after... ##if isinstance(weight, pyNN.random.RandomDistribution): ## self.randomizeWeights(weight) ##elif hasattr(weight, '__len__'): ## assert len(weight) == len(self), "Weight array does not have the same number of elements as the Projection %d != %d" % (len(weight),len(self)) ## self.setWeights(weight) ## ##if isinstance(delay, pyNN.random.RandomDistribution): ## self.randomizeDelays(delay) ##elif hasattr(delay, '__len__'): ## assert len(delay) == len(self), "Weight array does not have the same number of elements as the Projection %d != %d" % (len(weight),len(self)) ## self.setDelays(delay) ##self.synapse_type = self.syn_factory #target or 'excitatory' self.synapse_type = target or 'excitatory' self.connection_manager = simulator.ConnectionManager(self.syn_factory, parent=self) self.connections = self.connection_manager method.connect(self) Projection.nProj += 1 # The commented-out code in this class has been left there as it may be # useful when we start (re-)optimizing the implementation ##def __len__(self): ## """Return the total number of connections.""" ## return self.pcsim_projection.size() #def __getitem__(self, n): # return self.pcsim_projection[n] # --- Methods for setting connection parameters ---------------------------- ##def setWeights(self, w): ## """ ## w can be a single number, in which case all weights are set to this ## value, or a list/1D array of length equal to the number of connections ## in the population. ## Weights should be in nA for current-based and µS for conductance-based ## synapses. ## """ ## w = self.convertWeight(w, self.is_conductance) ## if isinstance(w, float) or isinstance(w, int): ## for i in range(len(self)): ## simulator.net.object(self.pcsim_projection[i]).W = w ## else: ## for i in range(len(self)): ## simulator.net.object(self.pcsim_projection[i]).W = w[i] ## ##def randomizeWeights(self, rand_distr): ## """ ## Set weights to random values taken from rand_distr. ## """ ## # Arguably, we could merge this with set_weights just by detecting the ## # argument type. It could make for easier-to-read simulation code to ## # give it a separate name, though. Comments? ## rand_distr = self.convertWeight(rand_distr, self.is_conductance) ## weights = rand_distr.next(len(self)) ## for i in range(len(self)): ## simulator.net.object(self.pcsim_projection[i]).W = weights[i] ## ##def setDelays(self, d): ## """ ## d can be a single number, in which case all delays are set to this ## value, or a list/1D array of length equal to the number of connections ## in the population. 
## """ ## # with STDP, will need updating to take account of the dendritic_delay_fraction ## d = self.convertDelay(d) ## if isinstance(d, float) or isinstance(d, int): ## for i in range(len(self)): ## simulator.net.object(self.pcsim_projection[i]).delay = d ## else: ## assert 1000.0*min(d) >= simulator.state.min_delay, "Smallest delay %g ms must be larger than %g ms" % (min(d), simulator.state.min_delay) ## for i in range(len(self)): ## simulator.net.object(self.pcsim_projection[i]).delay = d[i] ## ##def randomizeDelays(self, rand_distr): ## """ ## Set delays to random values taken from rand_distr. ## """ ## rand_distr = self.convertDelay(rand_distr) ## delays = rand_distr.next(len(self)) ## for i in range(len(self)): ## simulator.net.object(self.pcsim_projection[i]).delay = delays[i] ## ##def getWeights(self, format='list', gather=True): ## """ ## Possible formats are: a list of length equal to the number of connections ## in the projection, a 2D weight array (with zero or None for non-existent ## connections). ## """ ## if format == 'list': ## if self.is_conductance: ## A = 1e6 # S --> uS ## else: ## A = 1e9 # A --> nA ## return [A*self.pcsim_projection.object(i).W for i in xrange(self.pcsim_projection.size())] ## elif format == 'array': ## raise Exception("Not yet implemented") ## else: ## raise Exception("Valid formats are 'list' and 'array'") ## ##def getDelays(self, format='list', gather=True): ## """ ## Possible formats are: a list of length equal to the number of connections ## in the projection, a 2D weight array (with zero or None for non-existent ## connections). ## """ ## if format == 'list': ## A = 1e3 # s --> ms ## return [A*self.pcsim_projection.object(i).delay for i in xrange(self.pcsim_projection.size())] ## elif format == 'array': ## raise Exception("Not yet implemented") ## else: ## raise Exception("Valid formats are 'list' and 'array'") ## ### --- Methods for writing/reading information to/from file. ---------------- ## ##def saveConnections(self, filename, gather=False): ## """Save connections to file in a format suitable for reading in with the ## 'fromFile' method.""" ## # Not at all sure this will work for distributed simulations ## f = open(filename, 'w') ## for i in range(self.pcsim_projection.size()): ## pre_id, post_id = self.pcsim_projection.prePostPair(i) ## pre_id = list(self.pre.pcsim_population.idVector()).index(pre_id.packed()) # surely there is an easier/faster way? 
## post_id = list(self.post.pcsim_population.idVector()).index(post_id.packed()) ## pre_addr = self.pre.locate(ID(pre_id)) ## post_addr = self.post.locate(ID(post_id)) ## w = self.reverse_convertWeight(self.pcsim_projection.object(i).W, self.is_conductance) ## d = 1e3*self.pcsim_projection.object(i).delay ## f.write("%s\t%s\t%g\t%g\n" % (map(int, pre_addr), map(int, post_addr), w, d)) ## f.close() Space = space.Space # ============================================================================== # Low-level API for creating, connecting and recording from individual neurons # ============================================================================== create = common.build_create(Population) connect = common.build_connect(Projection, FixedProbabilityConnector) set = common.set record = common.build_record('spikes', simulator) record_v = common.build_record('v', simulator) record_gsyn = common.build_record('gsyn', simulator) # ============================================================================== PyNN-0.7.4/src/pcsim/simulator.py0000644000175000017500000004036311736323051017432 0ustar andrewandrew00000000000000# encoding: utf-8 """ Implementation of the "low-level" functionality used by the common implementation of the API. Functions and classes useable by the common implementation: Functions: create_cells() reset() Classes: ID Recorder ConnectionManager Connection Attributes: state -- a singleton instance of the _State class. recorder_list All other functions and classes are private, and should not be used by other modules. :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. $Id: simulator.py 957 2011-05-03 13:44:15Z apdavison $ """ import logging import pypcsim import types import numpy from pyNN import common, errors, standardmodels, core recorder_list = [] connection_managers = [] STATE_VARIABLE_MAP = {"v": ("Vinit", 1e-3)} logger = logging.getLogger("PyNN") # --- Internal PCSIM functionality -------------------------------------------- def is_local(id): """Determine whether an object exists on the local MPI node.""" return pypcsim.SimObject.ID(id).node == net.mpi_rank() # --- For implementation of get_time_step() and similar functions -------------- class _State(object): """Represent the simulator state.""" def __init__(self): """Initialize the simulator.""" self.initialized = False self.t = 0.0 self.min_delay = None self.max_delay = None self.constructRNGSeed = None self.simulationRNGSeed = None @property def num_processes(self): return net.mpi_size() @property def mpi_rank(self): return net.mpi_rank() dt = property(fget=lambda self: net.get_dt().in_ms()) #, fset=lambda self,x: net.set_dt(pypcsim.Time.ms(x))) def reset(): """Reset the state of the current network to time t = 0.""" net.reset() state.t = 0.0 # --- For implementation of access to individual neurons' parameters ----------- class ID(long, common.IDMixin): __doc__ = common.IDMixin.__doc__ def __init__(self, n): """Create an ID object with numerical value `n`.""" long.__init__(n) common.IDMixin.__init__(self) def _pcsim_cell(self): """Return the PCSIM cell with the current ID.""" global net #if self.parent: # pcsim_cell = self.parent.pcsim_population.object(self) #else: pcsim_cell = net.object(self) return pcsim_cell def get_native_parameters(self): """Return a dictionary of parameters for the PCSIM cell model.""" pcsim_cell = self._pcsim_cell() pcsim_parameters = {} if self.is_standard_cell: parameter_names = [D['translated_name'] for D in 
self.celltype.translations.values()] else: parameter_names = [] # for native cells, is there a way to get their list of parameters? for translated_name in parameter_names: if hasattr(self.celltype, 'getterMethods') and translated_name in self.celltype.getterMethods: getterMethod = self.celltype.getterMethods[translated_name] pcsim_parameters[translated_name] = getattr(pcsim_cell, getterMethod)() else: try: pcsim_parameters[translated_name] = getattr(pcsim_cell, translated_name) except AttributeError, e: raise AttributeError("%s. Possible attributes are: %s" % (e, dir(pcsim_cell))) for k,v in pcsim_parameters.items(): if isinstance(v, pypcsim.StdVectorDouble): pcsim_parameters[k] = list(v) return pcsim_parameters def set_native_parameters(self, parameters): """Set parameters of the PCSIM cell model from a dictionary.""" simobj = self._pcsim_cell() for name, value in parameters.items(): if hasattr(self.celltype, 'setterMethods') and name in self.celltype.setterMethods: setterMethod = self.celltype.setterMethods[name] getattr(simobj, setterMethod)(value) else: setattr(simobj, name, value) def get_initial_value(self, variable): pcsim_name, unit_conversion = STATE_VARIABLE_MAP[variable] pcsim_cell = self._pcsim_cell() if hasattr(self.celltype, 'getterMethods') and variable in self.celltype.getterMethods: getterMethod = self.celltype.getterMethods[pcsim_name] value = getattr(pcsim_cell, getterMethod)() else: try: value = getattr(pcsim_cell, pcsim_name) except AttributeError, e: raise AttributeError("%s. Possible attributes are: %s" % (e, dir(pcsim_cell))) return value/unit_conversion def set_initial_value(self, variable, value): pcsim_name, unit_conversion = STATE_VARIABLE_MAP[variable] pcsim_cell = self._pcsim_cell() value = unit_conversion*value if hasattr(self.celltype, 'setterMethods') and variable in self.celltype.setterMethods: setterMethod = self.celltype.setterMethods[pcsim_name] getattr(pcsim_cell, setterMethod)(value) else: try: value = setattr(pcsim_cell, pcsim_name, value) except AttributeError, e: raise AttributeError("%s. Possible attributes are: %s" % (e, dir(pcsim_cell))) index = self.parent.id_to_local_index(self) self.parent.initial_values[variable][index] = value # --- For implementation of connect() and Connector classes -------------------- class Connection(object): """ Store an individual connection and information about it. Provide an interface that allows access to the connection's weight, delay and other attributes. """ def __init__(self, source, target, pcsim_connection, weight_unit_factor): """ Create a new connection. `source` -- ID of pre-synaptic neuron. `target` -- ID of post-synaptic neuron. `pcsim_connection` -- a PCSIM Connection object. `weight_unit_factor` -- 1e9 for current-based synapses (A-->nA), 1e6 for conductance-based synapses (S-->µS). 
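
        Example (illustrative): with `weight_unit_factor` = 1e6, a raw PCSIM
        weight of W = 2e-9 (S) reads back through the `weight` property as
        2e-3 (µS), and assigning `weight = 2e-3` writes 2e-9 back to W.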
""" self.source = source self.target = target self.pcsim_connection = pcsim_connection self.weight_unit_factor = weight_unit_factor def _get_weight(self): """Synaptic weight in nA or µS.""" return self.weight_unit_factor*self.pcsim_connection.W def _set_weight(self, w): self.pcsim_connection.W = w/self.weight_unit_factor weight = property(fget=_get_weight, fset=_set_weight) def _get_delay(self): """Synaptic delay in ms.""" return 1000.0*self.pcsim_connection.delay # s --> ms def _set_delay(self, d): self.pcsim_connection.delay = 0.001*d delay = property(fget=_get_delay, fset=_set_delay) class ConnectionManager(object): """ Manage synaptic connections, providing methods for creating, listing, accessing individual connections. """ synapse_target_ids = { 'excitatory': 1, 'inhibitory': 2 } def __init__(self, synapse_type, synapse_model=None, parent=None): """ Create a new ConnectionManager. `synapse_type` -- the 'physiological type' of the synapse, e.g. 'excitatory' or 'inhibitory', or a PCSIM synapse factory. `synapse_model` -- not used. Present for consistency with other simulators. `parent` -- the parent `Projection`, if any. """ global connection_managers self.synapse_type = synapse_type self.connections = [] self.parent = parent connection_managers.append(self) self.parent = parent #if parent is None: self.connections = [] def __getitem__(self, i): """Return the `i`th connection on the local MPI node.""" #if self.parent: # if self.parent.is_conductance: # A = 1e6 # S --> uS # else: # A = 1e9 # A --> nA # return Connection(self.parent.pcsim_projection.object(i), A) #else: return self.connections[i] def __len__(self): """Return the number of connections on the local MPI node.""" #if self.parent: # return self.parent.pcsim_projection.size() #else: return len(self.connections) def __iter__(self): """Return an iterator over all connections on the local MPI node.""" for i in range(len(self)): yield self[i] def connect(self, source, targets, weights, delays): """ Connect a neuron to one or more other neurons with a static connection. `source` -- the ID of the pre-synaptic cell. `targets` -- a list/1D array of post-synaptic cell IDs, or a single ID. `weight` -- a list/1D array of connection weights, or a single weight. Must have the same length as `targets`. `delays` -- a list/1D array of connection delays, or a single delay. Must have the same length as `targets`. """ if not isinstance(source, (int, long)) or source < 0: errmsg = "Invalid source ID: %s" % source raise errors.ConnectionError(errmsg) if not core.is_listlike(targets): targets = [targets] if isinstance(weights, float): weights = [weights] if isinstance(delays, float): delays = [delays] assert len(targets) > 0 for target in targets: if not isinstance(target, common.IDMixin): raise errors.ConnectionError("Invalid target ID: %s" % target) assert len(targets) == len(weights) == len(delays), "%s %s %s" % (len(targets),len(weights),len(delays)) if common.is_conductance(targets[0]): weight_scale_factor = 1e-6 # Convert from µS to S else: weight_scale_factor = 1e-9 # Convert from nA to A synapse_type = self.synapse_type or "excitatory" if isinstance(synapse_type, basestring): syn_target_id = ConnectionManager.synapse_target_ids[synapse_type] syn_factory = pypcsim.SimpleScalingSpikingSynapse( syn_target_id, weights[0], delays[0]) elif isinstance(synapse_type, pypcsim.SimObject): syn_factory = synapse_type else: raise errors.ConnectionError("synapse_type must be a string or a PCSIM synapse factory. 
Actual type is %s" % type(synapse_type)) for target, weight, delay in zip(targets, weights, delays): syn_factory.W = weight*weight_scale_factor syn_factory.delay = delay*0.001 # ms --> s try: c = net.connect(source, target, syn_factory) except RuntimeError, e: raise errors.ConnectionError(e) if target.local: self.connections.append(Connection(source, target, net.object(c), 1.0/weight_scale_factor)) def convergent_connect(self, sources, target, weights, delays): """ Connect a neuron to one or more other neurons with a static connection. `sources` -- a list/1D array of pre-synaptic cell IDs, or a single ID. `target` -- the ID of the post-synaptic cell. `weight` -- a list/1D array of connection weights, or a single weight. Must have the same length as `targets`. `delays` -- a list/1D array of connection delays, or a single delay. Must have the same length as `targets`. """ if not isinstance(target, (int, long)) or target < 0: errmsg = "Invalid target ID: %s" % target raise errors.ConnectionError(errmsg) if not core.is_listlike(sources): sources = [sources] if isinstance(weights, float): weights = [weights] if isinstance(delays, float): delays = [delays] assert len(sources) > 0 for source in sources: if not isinstance(source, common.IDMixin): raise errors.ConnectionError("Invalid source ID: %s" % source) assert len(sources) == len(weights) == len(delays), "%s %s %s" % (len(sources),len(weights),len(delays)) if common.is_conductance(target): weight_scale_factor = 1e-6 # Convert from µS to S else: weight_scale_factor = 1e-9 # Convert from nA to A synapse_type = self.synapse_type or "excitatory" if isinstance(synapse_type, basestring): syn_target_id = ConnectionManager.synapse_target_ids[synapse_type] syn_factory = pypcsim.SimpleScalingSpikingSynapse( syn_target_id, weights[0], delays[0]) elif isinstance(synapse_type, pypcsim.SimObject): syn_factory = synapse_type else: raise errors.ConnectionError("synapse_type must be a string or a PCSIM synapse factory. Actual type is %s" % type(synapse_type)) for source, weight, delay in zip(sources, weights, delays): syn_factory.W = weight*weight_scale_factor syn_factory.delay = delay*0.001 # ms --> s try: c = net.connect(source, target, syn_factory) except RuntimeError, e: raise errors.ConnectionError(e) if target.local: self.connections.append(Connection(source, target, net.object(c), 1.0/weight_scale_factor)) def get(self, parameter_name, format): """ Get the values of a given attribute (weight, delay, etc) for all connections on the local MPI node. `parameter_name` -- name of the attribute whose values are wanted. `format` -- "list" or "array". Array format implicitly assumes that all connections belong to a single Projection. Return a list or a 2D Numpy array. The array element X_ij contains the attribute value for the connection from the ith neuron in the pre- synaptic Population to the jth neuron in the post-synaptic Population, if such a connection exists. If there are no such connections, X_ij will be NaN. """ if format == 'list': values = [getattr(c, parameter_name) for c in self] elif format == 'array': values = numpy.nan * numpy.ones((self.parent.pre.size, self.parent.post.size)) for c in self: addr = (self.parent.pre.id_to_index(c.source), self.parent.post.id_to_index(c.target)) values[addr] = getattr(c, parameter_name) else: raise Exception("format must be 'list' or 'array'") return values def set(self, name, value): """ Set connection attributes for all connections on the local MPI node. 
`name` -- attribute name `value` -- the attribute numeric value, or a list/1D array of such values of the same length as the number of local connections, or a 2D array with the same dimensions as the connectivity matrix (as returned by `get(format='array')`) """ if numpy.isscalar(value): for c in self: setattr(c, name, value) elif isinstance(value, numpy.ndarray) and len(value.shape) == 2: for c in self.connections: addr = (self.parent.pre.id_to_index(c.source), self.parent.post.id_to_index(c.target)) try: val = value[addr] except IndexError, e: raise IndexError("%s. addr=%s" % (e, addr)) if numpy.isnan(val): raise Exception("Array contains no value for synapse from %d to %d" % (c.source, c.target)) else: setattr(c, name, val) elif core.is_listlike(value): for c,val in zip(self.connections, value): setattr(c, name, val) else: raise TypeError("Argument should be a numeric type (int, float...), a list, or a numpy array.") # --- Initialization, and module attributes ------------------------------------ net = None state = _State() del _State PyNN-0.7.4/src/pcsim/connectors.py0000644000175000017500000000126611736323051017567 0ustar andrewandrew00000000000000""" Connection method classes for pcsim :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. $Id: connectors.py 957 2011-05-03 13:44:15Z apdavison $ """ from pyNN.connectors import AllToAllConnector, \ OneToOneConnector, \ FixedProbabilityConnector, \ DistanceDependentProbabilityConnector, \ FromListConnector, \ FromFileConnector, \ FixedNumberPreConnector, \ FixedNumberPostConnector, \ SmallWorldConnector PyNN-0.7.4/src/pcsim/recording.py0000644000175000017500000001127411736323051017366 0ustar andrewandrew00000000000000""" :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. """ import numpy import pypcsim from pyNN import recording from pyNN.pcsim import simulator # --- For implementation of record_X()/get_X()/print_X() ----------------------- class Recorder(recording.Recorder): """Encapsulates data and functions related to recording model variables.""" fieldnames = {'v': 'Vm', 'gsyn': 'psr'} def __init__(self, variable, population=None, file=None): __doc__ = recording.Recorder.__doc__ recording.Recorder.__init__(self, variable, population, file) self.recorders = {} def _record(self, new_ids): """Called by record()""" net = simulator.net if self.variable == 'spikes': for id in new_ids: #if self.population: # pcsim_id = self.population.pcsim_population[int(id)] #else: pcsim_id = int(id) src_id = pypcsim.SimObject.ID(pcsim_id) rec = net.create(pypcsim.SpikeTimeRecorder(), pypcsim.SimEngine.ID(src_id.node, src_id.eng)) net.connect(pcsim_id, rec, pypcsim.Time.sec(0)) assert id not in self.recorders self.recorders[id] = rec elif self.variable == 'v': for id in new_ids: #if self.population: # pcsim_id = self.population.pcsim_population[int(id)] #else: pcsim_id = int(id) src_id = pypcsim.SimObject.ID(pcsim_id) rec = net.create(pypcsim.AnalogRecorder(), pypcsim.SimEngine.ID(src_id.node, src_id.eng)) net.connect(pcsim_id, Recorder.fieldnames[self.variable], rec, 0, pypcsim.Time.sec(0)) self.recorders[id] = rec else: raise NotImplementedError("Recording of %s not implemented." % self.variable) def _reset(self): raise NotImplementedError("TO DO") def _get(self, gather=False, compatible_output=True, filter=None): """Return the recorded data as a Numpy array.""" # compatible_output is not used, but is needed for compatibility with the nest module. 
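        # (For reference: the arrays assembled below have one row per sample,
        #  with columns [id, spike_time] for 'spikes' and [id, t, v] for 'v';
        #  times are converted from s to ms and voltages from V to mV by the
        #  1000.0 factors.)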
# Does nest really need it? net = simulator.net if self.variable == 'spikes': data = numpy.empty((0,2)) for id in self.filter_recorded(filter): rec = self.recorders[id] if isinstance(net.object(id), pypcsim.SpikingInputNeuron): spikes = 1000.0*numpy.array(net.object(id).getSpikeTimes()) # is this special case really necessary? spikes = spikes[spikes<=simulator.state.t] else: spikes = 1000.0*numpy.array(net.object(rec).getSpikeTimes()) spikes = spikes.flatten() spikes = spikes[spikes<=simulator.state.t+1e-9] if len(spikes) > 0: new_data = numpy.array([numpy.ones(spikes.shape, dtype=int)*id, spikes]).T data = numpy.concatenate((data, new_data)) elif self.variable == 'v': data = numpy.empty((0,3)) for id in self.filter_recorded(filter): rec = self.recorders[id] v = 1000.0*numpy.array(net.object(rec).getRecordedValues()) v = v.flatten() final_v = 1000.0*net.object(id).getVm() v = numpy.append(v, final_v) dt = simulator.state.dt t = dt*numpy.arange(v.size) new_data = numpy.array([numpy.ones(v.shape, dtype=int)*id, t, v]).T data = numpy.concatenate((data, new_data)) elif self.variable == 'gsyn': raise NotImplementedError else: raise Exception("Recording of %s not implemented." % self.variable) if gather and simulator.state.num_processes > 1: data = recording.gather(data) return data def count(self, gather=False, filter=None): """ Return the number of data points for each cell, as a dict. This is mainly useful for spike counts or for variable-time-step integration methods. """ N = {} if self.variable == 'spikes': for id in self.filter_recorded(filter): N[id] = simulator.net.object(self.recorders[id]).spikeCount() else: raise Exception("Only implemented for spikes.") if gather and simulator.state.num_processes > 1: N = recording.gather_dict(N) return N simulator.Recorder = Recorder PyNN-0.7.4/src/neuroml.py0000644000175000017500000006636311736323051015771 0ustar andrewandrew00000000000000# encoding: utf-8 """ PyNN-->NeuroML :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. $Id: neuroml.py 1066 2012-03-07 16:09:26Z pgleeson $ """ from pyNN import common, connectors, standardmodels from pyNN.standardmodels import cells import math import numpy import sys sys.path.append('/usr/lib/python%s/site-packages/oldxml' % sys.version[:3]) # needed for Ubuntu import xml.dom.minidom import logging logger = logging.getLogger("neuroml") neuroml_url = 'http://morphml.org' namespace = {'xsi': "http://www.w3.org/2001/XMLSchema-instance", 'mml': neuroml_url+"/morphml/schema", 'net': neuroml_url+"/networkml/schema", 'meta': neuroml_url+"/metadata/schema", 'bio': neuroml_url+"/biophysics/schema", 'cml': neuroml_url+"/channelml/schema",} neuroml_ver="1.8.1" neuroml_xsd="http://www.neuroml.org/NeuroMLValidator/NeuroMLFiles/Schemata/v"+neuroml_ver+"/Level3/NeuroML_Level3_v"+neuroml_ver+".xsd" strict = False # ============================================================================== # Utility classes # ============================================================================== class ID(int, common.IDMixin): """ Instead of storing ids as integers, we store them as ID objects, which allows a syntax like: p[3,4].tau_m = 20.0 where p is a Population object. The question is, how big a memory/performance hit is it to replace integers with ID objects? 
""" def __init__(self, n): common.IDMixin.__init__(self) # ============================================================================== # Module-specific functions and classes (not part of the common API) # ============================================================================== def build_node(name_, text=None, **attributes): # we call the node name 'name_' because 'name' is a common attribute name (confused? I am) ns, name_ = name_.split(':') if ns: node = xmldoc.createElementNS(namespace[ns], "%s:%s" % (ns, name_)) else: node = xmldoc.createElement(name_) for attr, value in attributes.items(): node.setAttribute(attr, str(value)) if text: node.appendChild(xmldoc.createTextNode(text)) return node def build_parameter_node(name, value): param_node = build_node('bio:parameter', value=value) if name: param_node.setAttribute('name', name) group_node = build_node('bio:group', 'all') param_node.appendChild(group_node) return param_node class IF_base(object): """Base class for integrate-and-fire neuron models.""" def define_morphology(self): segments_node = build_node('mml:segments') soma_node = build_node('mml:segment', id=0, name="Soma", cable=0) # L = 100 diam = 1000/PI: gives area = 10³ cm² soma_node.appendChild(build_node('mml:proximal', x=0, y=0, z=0, diameter=1000/math.pi)) soma_node.appendChild(build_node('mml:distal', x=0, y=0, z=100, diameter=1000/math.pi)) segments_node.appendChild(soma_node) cables_node = build_node('mml:cables') soma_node = build_node('mml:cable', id=0, name="Soma") soma_node.appendChild(build_node('meta:group','all')) cables_node.appendChild(soma_node) return segments_node, cables_node def define_biophysics(self): # L = 100 diam = 1000/PI // biophys_node = build_node(':biophysics', units="Physiological Units") ifnode = build_node('bio:mechanism', name="IandF_"+self.label, type='Channel Mechanism') passive_node = build_node('bio:mechanism', name="pas_"+self.label, type='Channel Mechanism', passive_conductance="true") # g_max = 10�?�³cm/tau_m // cm(nF)/tau_m(ms) = G(µS) = 10�?��?�G(S). Divide by area (10³) to get factor of 10�?�³ gmax = str(1e-3*self.parameters['cm']/self.parameters['tau_m']) passive_node.appendChild(build_parameter_node('gmax', gmax)) cm_node = build_node('bio:specificCapacitance') cm_node.appendChild(build_parameter_node('', str(self.parameters['cm']))) # units? 
        Ra_node = build_node('bio:specificAxialResistance')
        Ra_node.appendChild(build_parameter_node('', "0.1")) # value doesn't matter for a single compartment

        # These are not needed here
        #esyn_node = build_node('bio:mechanism', name="ExcitatorySynapse", type="Channel Mechanism")
        #isyn_node = build_node('bio:mechanism', name="InhibitorySynapse", type="Channel Mechanism")

        for node in ifnode, passive_node, cm_node, Ra_node: # the order is important here
            biophys_node.appendChild(node)
        return biophys_node

    def define_connectivity(self):
        conn_node = build_node(':connectivity')
        esyn_node = build_node('net:potential_syn_loc', synapse_type="ExcSyn_"+self.label, synapse_direction="preAndOrPost")
        esyn_node.appendChild(build_node('net:group'))
        isyn_node = build_node('net:potential_syn_loc', synapse_type="InhSyn_"+self.label, synapse_direction="preAndOrPost")
        isyn_node.appendChild(build_node('net:group'))
        for node in esyn_node, isyn_node:
            conn_node.appendChild(node)
        return conn_node

    def define_channel_types(self):
        passive_node = build_node('cml:channel_type', name="pas_"+self.label, density="yes")
        passive_node.appendChild( build_node('meta:notes', "Simple example of a leak/passive conductance") )
        gmax = str(1e-3*self.parameters['cm']/self.parameters['tau_m'])
        cvr_node = build_node('cml:current_voltage_relation',
                              cond_law="ohmic",
                              ion="non_specific",
                              default_gmax=gmax,
                              default_erev=self.parameters['v_rest'])
        passive_node.appendChild(cvr_node)

        ifnode = build_node('cml:channel_type', name="IandF_"+self.label)
        ifnode.appendChild( build_node('meta:notes', "Spike and reset mechanism") )
        cvr_node = build_node('cml:current_voltage_relation')
        ifmech_node = build_node('cml:integrate_and_fire',
                                 threshold=self.parameters['v_thresh'],
                                 t_refrac=self.parameters['tau_refrac'],
                                 v_reset=self.parameters['v_reset'],
                                 g_refrac=0.1) # this value just needs to be 'large enough'
        cvr_node.appendChild(ifmech_node)
        ifnode.appendChild(cvr_node)
        return [passive_node, ifnode]

    def define_synapse_types(self, synapse_type):
        esyn_node = build_node('cml:synapse_type', name="ExcSyn_"+self.label)
        rise_time_exc = "0"
        rise_time_inh = "0"
        if (synapse_type == 'alpha_syn'):
            rise_time_exc = self.parameters['tau_syn_E']
            rise_time_inh = self.parameters['tau_syn_I']
        esyn_node.appendChild( build_node('cml:doub_exp_syn',
                                          max_conductance="1.0e-5",
                                          rise_time=rise_time_exc,
                                          decay_time=self.parameters['tau_syn_E'],
                                          reversal_potential=self.parameters['e_rev_E'] ) )
        isyn_node = build_node('cml:synapse_type', name="InhSyn_"+self.label)
        isyn_node.appendChild( build_node('cml:doub_exp_syn',
                                          max_conductance="1.0e-5",
                                          rise_time=rise_time_inh,
                                          decay_time=self.parameters['tau_syn_I'],
                                          reversal_potential=self.parameters['e_rev_I'] ) )
        return [esyn_node, isyn_node]

    def build_nodes(self):
        cell_node = build_node(':cell', name=self.label)
        doc_node = build_node('meta:notes', "Instance of PyNN %s cell type" % self.__class__.__name__)
        segments_node, cables_node = self.define_morphology()
        biophys_node = self.define_biophysics()
        conn_node = self.define_connectivity()
        for node in doc_node, segments_node, cables_node, biophys_node, conn_node:
            cell_node.appendChild(node)
        channel_nodes = self.define_channel_types()
        synapse_nodes = self.define_synapse_types(self.synapse_type)
        return cell_node, channel_nodes, synapse_nodes


class NotImplementedModel(object):

    def __init__(self):
        if strict:
            raise Exception('Cell type %s is not available in NeuroML' % self.__class__.__name__)

    def build_nodes(self):
        cell_node = build_node(':not_implemented_cell', name=self.label)
        doc_node = build_node('meta:notes', "PyNN %s cell type not implemented" % self.__class__.__name__)
        return cell_node, [], []


# ==============================================================================
#   Standard cells
# ==============================================================================

class IF_curr_exp(cells.IF_curr_exp, NotImplementedModel):
    """Leaky integrate and fire model with fixed threshold and
    decaying-exponential post-synaptic current. (Separate synaptic currents for
    excitatory and inhibitory synapses"""

    n = 0
    translations = standardmodels.build_translations(*[(name, name) for name in cells.IF_curr_exp.default_parameters])

    def __init__(self, parameters):
        NotImplementedModel.__init__(self)
        cells.IF_curr_exp.__init__(self, parameters)
        self.label = '%s%d' % (self.__class__.__name__, self.__class__.n)
        self.synapse_type = "doub_exp_syn"
        self.__class__.n += 1
        logger.debug("IF_curr_exp created")


class IF_curr_alpha(cells.IF_curr_alpha, NotImplementedModel):
    """Leaky integrate and fire model with fixed threshold and alpha-function-
    shaped post-synaptic current."""

    n = 0
    translations = standardmodels.build_translations(*[(name, name) for name in cells.IF_curr_alpha.default_parameters])

    def __init__(self, parameters):
        NotImplementedModel.__init__(self)
        cells.IF_curr_alpha.__init__(self, parameters)
        self.label = '%s%d' % (self.__class__.__name__, self.__class__.n)
        self.synapse_type = "doub_exp_syn"
        self.__class__.n += 1
        logger.debug("IF_curr_alpha created")


class IF_cond_exp(cells.IF_cond_exp, IF_base):
    """Leaky integrate and fire model with fixed threshold and
    decaying-exponential post-synaptic conductance."""

    n = 0
    translations = standardmodels.build_translations(*[(name, name) for name in cells.IF_cond_exp.default_parameters])

    def __init__(self, parameters):
        cells.IF_cond_exp.__init__(self, parameters)
        self.label = '%s%d' % (self.__class__.__name__, self.__class__.n)
        self.synapse_type = "doub_exp_syn"
        self.__class__.n += 1
        logger.debug("IF_cond_exp created")


class IF_cond_alpha(cells.IF_cond_alpha, IF_base):
    """Leaky integrate and fire model with fixed threshold and alpha-function-
    shaped post-synaptic conductance."""

    n = 0
    translations = standardmodels.build_translations(*[(name, name) for name in cells.IF_cond_alpha.default_parameters])

    def __init__(self, parameters):
        cells.IF_cond_alpha.__init__(self, parameters)
        self.label = '%s%d' % (self.__class__.__name__, self.__class__.n)
        self.synapse_type = "alpha_syn"
        self.__class__.n += 1
        logger.debug("IF_cond_alpha created")


class SpikeSourcePoisson(cells.SpikeSourcePoisson, NotImplementedModel):
    """Spike source, generating spikes according to a Poisson process."""

    n = 0
    translations = standardmodels.build_translations(*[(name, name) for name in cells.SpikeSourcePoisson.default_parameters])

    def __init__(self, parameters):
        NotImplementedModel.__init__(self)
        cells.SpikeSourcePoisson.__init__(self, parameters)
        self.label = '%s%d' % (self.__class__.__name__, self.__class__.n)
        self.__class__.n += 1


class SpikeSourceArray(cells.SpikeSourceArray, NotImplementedModel):
    """Spike source generating spikes at the times given in the spike_times array."""

    n = 0
    translations = standardmodels.build_translations(*[(name, name) for name in cells.SpikeSourceArray.default_parameters])

    def __init__(self, parameters):
        NotImplementedModel.__init__(self)
        cells.SpikeSourceArray.__init__(self, parameters)
        self.label = '%s%d' % (self.__class__.__name__, self.__class__.n)
        self.__class__.n += 1

#
============================================================================== # Functions for simulation set-up and control # ============================================================================== def setup(timestep=0.1, min_delay=0.1, max_delay=0.1, debug=False,**extra_params): logger.debug("setup() called, extra_params = "+str(extra_params)) """ Should be called at the very beginning of a script. extra_params contains any keyword arguments that are required by a given simulator but not by others. """ global xmldoc, xmlfile, populations_node, projections_node, inputs_node, cells_node, channels_node, neuromlNode, strict if not extra_params.has_key('file'): xmlfile = "PyNN2NeuroML.xml" else: xmlfile = extra_params['file'] if isinstance(xmlfile, basestring): xmlfile = open(xmlfile, 'w') if 'strict' in extra_params: strict = extra_params['strict'] dt = timestep xmldoc = xml.dom.minidom.Document() neuromlNode = xmldoc.createElementNS(neuroml_url+'/neuroml/schema','neuroml') neuromlNode.setAttributeNS(namespace['xsi'],'xsi:schemaLocation',"http://morphml.org/neuroml/schema "+neuroml_xsd) neuromlNode.setAttribute('lengthUnits',"micron") neuromlNode.setAttribute("xmlns","http://morphml.org/neuroml/schema") for ns in namespace.keys(): neuromlNode.setAttribute("xmlns:"+ns,namespace[ns]) xmldoc.appendChild(neuromlNode) neuromlNode.appendChild(xmldoc.createComment("NOTE: the support for abstract cell models in NeuroML v1.x is limited, so the mapping PyNN -> NeuroML v1.x is quite incomplete.")) neuromlNode.appendChild(xmldoc.createComment("Try the PyNN -> NeuroML v2.0 mapping instead.")) populations_node = build_node('net:populations') projections_node = build_node('net:projections', units="Physiological Units") inputs_node = build_node('net:inputs', units="Physiological Units") cells_node = build_node(':cells') channels_node = build_node(':channels', units="Physiological Units") for node in cells_node, channels_node, populations_node, projections_node, inputs_node: neuromlNode.appendChild(node) return 0 def end(compatible_output=True): """Do any necessary cleaning up before exiting.""" global xmldoc, xmlfile, populations_node, projections_node, inputs_node, cells_node, channels_node, neuromlNode # Remove empty nodes, otherwise the validator will complain for node in cells_node, channels_node, populations_node, projections_node, inputs_node: if not node.hasChildNodes(): neuromlNode.removeChild(node) # Write the file xmlfile.write(xmldoc.toprettyxml()) xmlfile.close() def run(simtime): """Run the simulation for simtime ms.""" pass # comment in NeuroML file def get_min_delay(): return 0.0 common.get_min_delay = get_min_delay def num_processes(): return 1 common.num_processes = num_processes def rank(): return 0 common.rank = rank # ============================================================================== # Low-level API for creating, connecting and recording from individual neurons # ============================================================================== def create(cellclass, cellparams=None, n=1): """Create n cells all of the same type. If n > 1, return a list of cell ids/references. If n==1, return just the single id. """ raise Exception('Not yet implemented') def connect(source, target, weight=None, delay=None, synapse_type=None, p=1, rng=None): """Connect a source of spikes to a synaptic target. 
source and target can both be individual cells or lists of cells, in which case all possible connections are made with probability p, using either the random number generator supplied, or the default rng otherwise. Weights should be in nA or uS.""" raise Exception('Not yet implemented') def set(cells, cellclass, param, val=None): """Set one or more parameters of an individual cell or list of cells. param can be a dict, in which case val should not be supplied, or a string giving the parameter name, in which case val is the parameter value. cellclass must be supplied for doing translation of parameter names.""" raise Exception('Not yet implemented') def record(source, filename): """Record spikes to a file. source can be an individual cell or a list of cells.""" pass # put a comment in the NeuroML file? def record_v(source, filename): """Record membrane potential to a file. source can be an individual cell or a list of cells.""" pass # put a comment in the NeuroML file? # ============================================================================== # High-level API for creating, connecting and recording from populations of # neurons. # ============================================================================== class Population(common.Population): """ An array of neurons all of the same type. `Population' is used as a generic term intended to include layers, columns, nuclei, etc., of cells. """ n = 0 def __init__(self, size, cellclass, cellparams=None, structure=None, label=None): __doc__ = common.Population.__doc__ common.Population.__init__(self, size, cellclass, cellparams, structure, label) ###simulator.initializer.register(self) def _create_cells(self, cellclass, cellparams, n): """ Create a population of neurons all of the same type. `cellclass` -- a PyNN standard cell `cellparams` -- a dictionary of cell parameters. `n` -- the number of cells to create """ global populations_node, cells_node, channels_node assert n > 0, 'n must be a positive integer' self.celltype = cellclass(cellparams) Population.n += 1 population_node = build_node('net:population', name=self.label) self.celltype.label = '%s_%s' % (self.celltype.__class__.__name__, self.label) celltype_node = build_node('net:cell_type', self.celltype.label) instances_node = build_node('net:instances', size=self.size) for i in range(self.size): x, y, z = self.positions[:, i] instance_node = build_node('net:instance', id=i) instance_node.appendChild( build_node('net:location', x=x, y=y, z=z) ) instances_node.appendChild(instance_node) for node in celltype_node, instances_node: population_node.appendChild(node) populations_node.appendChild(population_node) cell_node, channel_list, synapse_list = self.celltype.build_nodes() cells_node.appendChild(cell_node) # Add all channels first, then all synapses for channel_node in channel_list: channels_node.insertBefore(channel_node , channels_node.firstChild) for synapse_node in synapse_list: channels_node.appendChild(synapse_node) self.first_id = 0 self.last_id = self.size-1 self.all_cells = numpy.array([ID(id) for id in range(self.first_id, self.last_id+1)], dtype=ID) self._mask_local = numpy.ones_like(self.all_cells).astype(bool) #self.local_cells = self.all_cells def _set_initial_value_array(self, variable, value): """ Nothing yet... """ pass def _record(self, variable, record_from=None, rng=None, to_file=True): """ Private method called by record() and record_v(). 
""" pass def meanSpikeCount(self): return -1 def printSpikes(self, file, gather=True, compatible_output=True): pass def print_v(self, file, gather=True, compatible_output=True): pass class AllToAllConnector(connectors.AllToAllConnector): def connect(self, projection): connectivity_node = build_node('net:connectivity_pattern') connectivity_node.appendChild( build_node('net:all_to_all', allow_self_connections=int(self.allow_self_connections)) ) return connectivity_node class OneToOneConnector(connectors.OneToOneConnector): def connect(self, projection): connectivity_node = build_node('net:connectivity_pattern') connectivity_node.appendChild( build_node('net:one_to_one') ) return connectivity_node class FixedProbabilityConnector(connectors.FixedProbabilityConnector): def connect(self, projection): connectivity_node = build_node('net:connectivity_pattern') connectivity_node.appendChild( build_node('net:fixed_probability', probability=self.p_connect, allow_self_conections=int(self.allow_self_connections)) ) return connectivity_node class FixedNumberPreConnector(connectors.FixedNumberPreConnector): def connect(self, projection): if hasattr(self, "n"): connectivity_node = build_node('net:connectivity_pattern') connectivity_node.appendChild( build_node('net:per_cell_connection', num_per_source=self.n, direction="PreToPost", allow_self_connections = int(self.allow_self_connections)) ) return connectivity_node else: raise Exception('Connection with variable connection number not implemented.') class FixedNumberPostConnector(connectors.FixedNumberPostConnector): def connect(self, projection): if hasattr(self, "n"): connectivity_node = build_node('net:connectivity_pattern') connectivity_node.appendChild( build_node('net:per_cell_connection', num_per_source=self.n, direction="PostToPre", allow_self_connections = int(self.allow_self_connections)) ) return connectivity_node else: raise Exception('Connection with variable connection number not implemented.') class FromListConnector(connectors.FromListConnector): def connect(self, projection): connections_node = build_node('net:connections') for i in xrange(len(self.conn_list)): src, tgt, weight, delay = self.conn_list[i][:] src = self.pre[tuple(src)] tgt = self.post[tuple(tgt)] connection_node = build_node('net:connection', id=i) connection_node.appendChild( build_node('net:pre', cell_id=src) ) connection_node.appendChild( build_node('net:post', cell_id=tgt) ) connection_node.appendChild( build_node('net:properties', internal_delay=delay, weight=weight) ) connections_node.appendChild(connection_node) return connections_node class FromFileConnector(connectors.FromFileConnector): def connect(self, projection): # now open the file... f = open(self.filename,'r',10000) lines = f.readlines() f.close() # We read the file and gather all the data in a list of tuples (one per line) input_tuples = [] for line in lines: single_line = line.rstrip() src, tgt, w, d = single_line.split("\t", 4) src = "[%s" % src.split("[",1)[1] tgt = "[%s" % tgt.split("[",1)[1] input_tuples.append((eval(src), eval(tgt), float(w), float(d))) f.close() self.conn_list = input_tuples FromListConnector.connect(projection) class Projection(common.Projection): """ A container for all the connections of a given type (same synapse type and plasticity mechanisms) between two populations, together with methods to set parameters of those connections, including of plasticity mechanisms. 
""" n = 0 def __init__(self, presynaptic_population, postsynaptic_population, method, source=None, target=None, synapse_dynamics=None, label=None, rng=None): """ presynaptic_population and postsynaptic_population - Population objects. source - string specifying which attribute of the presynaptic cell signals action potentials target - string specifying which synapse on the postsynaptic cell to connect to If source and/or target are not given, default values are used. method - a Connector object, encapsulating the algorithm to use for connecting the neurons. synapse_dynamics - a `SynapseDynamics` object specifying which synaptic plasticity mechanisms to use. rng - specify an RNG object to be used by the Connector. """ global projections_node common.Projection.__init__(self, presynaptic_population, postsynaptic_population, method, source, target, synapse_dynamics, label, rng) self.label = self.label or 'Projection%d' % Projection.n connection_method = method if target: self.synapse_type = target else: self.synapse_type = "ExcitatorySynapse" projection_node = build_node('net:projection', name=self.label) projection_node.appendChild( build_node('net:source', self.pre.label) ) projection_node.appendChild( build_node('net:target', self.post.label) ) synapse_node = build_node('net:synapse_props') synapse_node.appendChild( build_node('net:synapse_type', self.synapse_type) ) synapse_node.appendChild( build_node('net:default_values', internal_delay=5, weight=1, threshold=-20) ) projection_node.appendChild(synapse_node) projection_node.appendChild( connection_method.connect(self) ) projections_node.appendChild(projection_node) Projection.n += 1 def saveConnections(self, filename, gather=True, compatible_output=True): pass def __len__(self): return 0 # needs implementing properly # ============================================================================== PyNN-0.7.4/src/connectors.py0000644000175000017500000012734711736323051016465 0ustar andrewandrew00000000000000""" Defines a common implementation of the built-in PyNN Connector classes. Simulator modules may use these directly, or may implement their own versions for improved performance. :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. """ import numpy, logging, sys, re from pyNN import errors, common, core, random, utility, recording, descriptions from pyNN.space import Space from pyNN.recording import files from pyNN.random import RandomDistribution from numpy import arccos, arcsin, arctan, arctan2, ceil, cos, cosh, e, exp, \ fabs, floor, fmod, hypot, ldexp, log, log10, modf, pi, power, \ sin, sinh, sqrt, tan, tanh, maximum, minimum try: import csa haveCSA = True except ImportError: haveCSA = False logger = logging.getLogger("PyNN") def expand_distances(d_expression): """ Check if a distance expression contains at least one term d[x]. If yes, then the distances are expanded and we assume the user has specified an expression such as d[0] + d[2]. """ regexpr = re.compile(r'.*d\[\d*\].*') if regexpr.match(d_expression): return True return False class ConnectionAttributeGenerator(object): """ Connection attributes, such as weights and delays, may be specified as: - a single numerical value, in which case all connections have this value - a numpy array of the same size as the number of connections - a RandomDistribution object - a function of the distance between the source and target cells This class encapsulates all these different possibilities in order to present a uniform interface. 
""" def __init__(self, source, local_mask, safe=True): """ Create a new %s. source - something that may be used to obtain connection attribute values local_mask - a boolean array indicating which of the post-synaptic cells are on the local machine safe - whether to check that values are within the appropriate range. These checks can be slow, so safe=False allows you to turn them off once you're certain your code is working correctly. """ % self.__class__.__name__ self.source = source self.local_mask = local_mask self.safe = safe if self.safe: self.get = self.get_safe if isinstance(self.source, numpy.ndarray): self.source_iterator = iter(self.source) def check(self, data): """ This method should be over-ridden by sub-classes. """ return data def extract(self, N, distance_matrix=None, sub_mask=None): """ Return an array of values for a connection attribute. N - number of values to be returned over the entire simulation. If running a distributed simulation, the number returned on any given node will be smaller. distance_matrix - a DistanceMatrix object, used for calculating distance-dependent attributes. sub-mask - a sublist of the ids we want compute some values with. For example in parallel, distances shoudl be computed only between a source and local targets, since only connections with those targets are established. Avoid useless computations... """ if isinstance(self.source, basestring): assert distance_matrix is not None if expand_distances(self.source): d = distance_matrix.as_array(sub_mask, expand=True) else: d = distance_matrix.as_array(sub_mask) values = eval(self.source) return values elif callable(self.source): assert distance_matrix is not None d = distance_matrix.as_array(sub_mask, expand=True) values = self.source(d) return values elif numpy.isscalar(self.source): if sub_mask is None: values = numpy.ones((self.local_mask.sum(),))*self.source else: values = numpy.ones((len(sub_mask),))*self.source return values # seems a bit wasteful to return an array of identical values elif isinstance(self.source, RandomDistribution): if sub_mask is None: values = self.source.next(N, mask_local=self.local_mask) else: data = self.source.next(N, mask_local=self.local_mask) if type(data) == numpy.float64: data = numpy.array([data]) values = data[sub_mask] return values elif isinstance(self.source, numpy.ndarray): if len(self.source.shape) == 2: source_row = self.source_iterator.next() values = source_row[self.local_mask] elif len(self.source.shape) == 1: # for OneToOneConnector or AllToAllConnector used from or to only one Neuron values = self.source[self.local_mask] else: raise Exception() if sub_mask is not None: values = values[sub_mask] return values else: raise Exception("Invalid source") def get_safe(self, N, distance_matrix=None, sub_mask=None): return self.check(self.extract(N, distance_matrix, sub_mask)) def get(self, N, distance_matrix=None, sub_mask=None): return self.extract(N, distance_matrix, sub_mask) class WeightGenerator(ConnectionAttributeGenerator): """Generator for synaptic weights. 
%s""" % ConnectionAttributeGenerator.__doc__ def __init__(self, source, local_mask, projection, safe=True): ConnectionAttributeGenerator.__init__(self, source, local_mask, safe) self.projection = projection self.is_conductance = common.is_conductance(projection.post.all_cells[0]) def check(self, weight): if weight is None: weight = common.DEFAULT_WEIGHT if core.is_listlike(weight): weight = numpy.array(weight) nan_filter = (1-numpy.isnan(weight)).astype(bool) # weight arrays may contain NaN, which should be ignored filtered_weight = weight[nan_filter] all_negative = (filtered_weight<=0).all() all_positive = (filtered_weight>=0).all() if not (all_negative or all_positive): raise errors.InvalidWeightError("Weights must be either all positive or all negative") elif numpy.isscalar(weight): all_positive = weight >= 0 all_negative = weight < 0 else: raise Exception("Weight must be a number or a list/array of numbers.") if self.is_conductance or self.projection.synapse_type == 'excitatory': if not all_positive: raise errors.InvalidWeightError("Weights must be positive for conductance-based and/or excitatory synapses") elif self.is_conductance==False and self.projection.synapse_type == 'inhibitory': if not all_negative: raise errors.InvalidWeightError("Weights must be negative for current-based, inhibitory synapses") else: # is_conductance is None. This happens if the cell does not exist on the current node. logger.debug("Can't check weight, conductance status unknown.") return weight class DelayGenerator(ConnectionAttributeGenerator): """Generator for synaptic delays. %s""" % ConnectionAttributeGenerator.__doc__ def __init__(self, source, local_mask, safe=True): ConnectionAttributeGenerator.__init__(self, source, local_mask, safe) self.min_delay = common.get_min_delay() self.max_delay = common.get_max_delay() def check(self, delay): all_negative = (delay<=self.max_delay).all() all_positive = (delay>=self.min_delay).all()# If the delay is too small , we have to throw an error if not (all_negative and all_positive): raise errors.ConnectionError("delay (%s) is out of range [%s,%s]" % (delay, common.get_min_delay(), common.get_max_delay())) return delay class ProbaGenerator(ConnectionAttributeGenerator): pass class DistanceMatrix(object): # should probably move to space module def __init__(self, B, space, mask=None): assert B.shape[0] == 3, B.shape self.space = space if mask is not None: self.B = B[:,mask] else: self.B = B def as_array(self, sub_mask=None, expand=False): if self._distance_matrix is None and self.A is not None: if sub_mask is None: self._distance_matrix = self.space.distances(self.A, self.B, expand) else: self._distance_matrix = self.space.distances(self.A, self.B[:,sub_mask], expand) if expand: N = self._distance_matrix.shape[2] self._distance_matrix = self._distance_matrix.reshape((3, N)) else: self._distance_matrix = self._distance_matrix[0] return self._distance_matrix def set_source(self, A): assert A.shape == (3,), A.shape self.A = A self._distance_matrix = None class Connector(object): def __init__(self, weights=0.0, delays=None, space=Space(), safe=True, verbose=False): self.weights = weights self.space = space self.safe = safe self.verbose = verbose min_delay = common.get_min_delay() if delays is None: self.delays = min_delay else: if core.is_listlike(delays): if min(delays) < min_delay: raise errors.ConnectionError("smallest delay (%g) is smaller than minimum delay (%g)" % (min(delays), min_delay)) elif not (isinstance(delays, basestring) or isinstance(delays, 
RandomDistribution)): if delays < min_delay: raise errors.ConnectionError("delay (%g) is smaller than minimum delay (%g)" % (delays, min_delay)) self.delays = delays def connect(self, projection): raise NotImplementedError() def progressbar(self, N): self.prog = utility.ProgressBar(0, N, 20, mode='fixed') def progression(self, count): self.prog.update_amount(count) if self.verbose and common.rank() == 0: print self.prog, "\r", sys.stdout.flush() def get_parameters(self): P = {} for name in self.parameter_names: P[name] = getattr(self, name) return P def describe(self, template='connector_default.txt', engine='default'): """ Returns a human-readable description of the connection method. The output may be customized by specifying a different template togther with an associated template engine (see ``pyNN.descriptions``). If template is None, then a dictionary containing the template context will be returned. """ context = {'name': self.__class__.__name__, 'parameters': self.get_parameters(), 'weights': self.weights, 'delays': self.delays} return descriptions.render(engine, template, context) class ProbabilisticConnector(Connector): def __init__(self, projection, weights=0.0, delays=None, allow_self_connections=True, space=Space(), safe=True): Connector.__init__(self, weights, delays, space, safe) if isinstance(projection.rng, random.NativeRNG): raise Exception("Use of NativeRNG not implemented.") else: self.rng = projection.rng self.local = projection.post._mask_local self.N = projection.post.size self.weights_generator = WeightGenerator(weights, self.local, projection, safe) self.delays_generator = DelayGenerator(delays, self.local, safe) self.probas_generator = ProbaGenerator(RandomDistribution('uniform', (0,1), rng=self.rng), self.local) self._distance_matrix = None self.projection = projection self.candidates = projection.post.local_cells self.size = self.local.sum() self.allow_self_connections = allow_self_connections @property def distance_matrix(self): """ We want to avoid calculating positions if it is not necessary, so we delay it until the distance matrix is actually used. """ if self._distance_matrix is None: self._distance_matrix = DistanceMatrix(self.projection.post.positions, self.space, self.local) return self._distance_matrix def _probabilistic_connect(self, src, p, n_connections=None): """ Connect-up a Projection with connection probability p, where p may be either a float 0<=p<=1, or a dict containing a float array for each pre-synaptic cell, the array containing the connection probabilities for all the local targets of that pre-synaptic cell. 
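        As an illustration (arbitrary values): with p = 0.1, each local target
        of src has a 10 percent chance of being connected, while with
        p = numpy.array([0.0, 0.5, 1.0]) the first of three local targets is
        never connected, the second is connected with probability 0.5, and the
        third is always connected.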
""" if numpy.isscalar(p) and p == 1: precreate = numpy.arange(self.size) else: rarr = self.probas_generator.get(self.N) if not core.is_listlike(rarr) and numpy.isscalar(rarr): # if N=1, rarr will be a single number rarr = numpy.array([rarr]) precreate = numpy.where(rarr < p)[0] self.distance_matrix.set_source(src.position) if not self.allow_self_connections and self.projection.pre == self.projection.post: idx_src = numpy.where(self.candidates == src) if len(idx_src) > 0: i = numpy.where(precreate == idx_src[0]) if len(i) > 0: precreate = numpy.delete(precreate, i[0]) if (n_connections is not None) and (len(precreate) > 0): create = numpy.array([], int) while len(create) < n_connections: # if the number of requested cells is larger than the size of the ## presynaptic population, we allow multiple connections for a given cell create = numpy.concatenate((create, self.projection.rng.permutation(precreate))) create = create[:n_connections] else: create = precreate targets = self.candidates[create] weights = self.weights_generator.get(self.N, self.distance_matrix, create) delays = self.delays_generator.get(self.N, self.distance_matrix, create) if len(targets) > 0: self.projection.connection_manager.connect(src, targets.tolist(), weights, delays) class AllToAllConnector(Connector): """ Connects all cells in the presynaptic population to all cells in the postsynaptic population. """ parameter_names = ('allow_self_connections',) def __init__(self, allow_self_connections=True, weights=0.0, delays=None, space=Space(), safe=True, verbose=False): """ Create a new connector. `allow_self_connections` -- if the connector is used to connect a Population to itself, this flag determines whether a neuron is allowed to connect to itself, or only to other neurons in the Population. `weights` -- may either be a float, a RandomDistribution object, a list/ 1D array with at least as many items as connections to be created. Units nA. `delays` -- as `weights`. If `None`, all synaptic delays will be set to the global minimum delay. `space` -- a `Space` object, needed if you wish to specify distance- dependent weights or delays """ Connector.__init__(self, weights, delays, space, safe, verbose) assert isinstance(allow_self_connections, bool) self.allow_self_connections = allow_self_connections def connect(self, projection): connector = ProbabilisticConnector(projection, self.weights, self.delays, self.allow_self_connections, self.space, safe=self.safe) self.progressbar(len(projection.pre)) for count, src in enumerate(projection.pre.all()): connector._probabilistic_connect(src, 1) self.progression(count) class FixedProbabilityConnector(Connector): """ For each pair of pre-post cells, the connection probability is constant. """ parameter_names = ('allow_self_connections', 'p_connect') def __init__(self, p_connect, allow_self_connections=True, weights=0.0, delays=None, space=Space(), safe=True, verbose=False): """ Create a new connector. `p_connect` -- a float between zero and one. Each potential connection is created with this probability. `allow_self_connections` -- if the connector is used to connect a Population to itself, this flag determines whether a neuron is allowed to connect to itself, or only to other neurons in the Population. `weights` -- may either be a float, a RandomDistribution object, a list/ 1D array with at least as many items as connections to be created. Units nA. `delays` -- as `weights`. If `None`, all synaptic delays will be set to the global minimum delay. 
`space` -- a `Space` object, needed if you wish to specify distance- dependent weights or delays """ Connector.__init__(self, weights, delays, space, safe, verbose) assert isinstance(allow_self_connections, bool) self.allow_self_connections = allow_self_connections self.p_connect = float(p_connect) assert 0 <= self.p_connect def connect(self, projection): #assert projection.rng.parallel_safe connector = ProbabilisticConnector(projection, self.weights, self.delays, self.allow_self_connections, self.space, safe=self.safe) self.progressbar(len(projection.pre)) for count, src in enumerate(projection.pre.all()): connector._probabilistic_connect(src, self.p_connect) self.progression(count) class DistanceDependentProbabilityConnector(Connector): """ For each pair of pre-post cells, the connection probability depends on distance. """ parameter_names = ('allow_self_connections', 'd_expression') def __init__(self, d_expression, allow_self_connections=True, weights=0.0, delays=None, space=Space(), safe=True, verbose=False, n_connections=None): """ Create a new connector. `d_expression` -- the right-hand side of a valid python expression for probability, involving 'd', e.g. "exp(-abs(d))", or "d<3" `n_connections` -- The number of efferent synaptic connections per neuron. `space` -- a Space object. `weights` -- may either be a float, a RandomDistribution object, a list/ 1D array with at least as many items as connections to be created, or a distance expression as for `d_expression`. Units nA. `delays` -- as `weights`. If `None`, all synaptic delays will be set to the global minimum delay. """ Connector.__init__(self, weights, delays, space, safe, verbose) assert isinstance(d_expression, str) try: if not expand_distances(d_expression): d = 0; assert 0 <= eval(d_expression), eval(d_expression) d = 1e12; assert 0 <= eval(d_expression), eval(d_expression) except ZeroDivisionError, err: raise ZeroDivisionError("Error in the distance expression %s. %s" % (d_expression, err)) self.d_expression = d_expression assert isinstance(allow_self_connections, bool) self.allow_self_connections = allow_self_connections self.n_connections = n_connections def connect(self, projection): """Connect-up a Projection.""" connector = ProbabilisticConnector(projection, self.weights, self.delays, self.allow_self_connections, self.space, safe=self.safe) proba_generator = ProbaGenerator(self.d_expression, connector.local) self.progressbar(len(projection.pre)) if (common.num_processes() > 1) and (self.n_connections is not None): raise Exception("n_connections not implemented yet for this connector in parallel !") for count, src in enumerate(projection.pre.all()): connector.distance_matrix.set_source(src.position) proba = proba_generator.get(connector.N, connector.distance_matrix) if proba.dtype == 'bool': proba = proba.astype(float) connector._probabilistic_connect(src, proba, self.n_connections) self.progression(count) class FromListConnector(Connector): """ Make connections according to a list. """ parameter_names = ('conn_list',) def __init__(self, conn_list, safe=True, verbose=False): """ Create a new connector. `conn_list` -- a list of tuples, one tuple for each connection. Each tuple should contain: (pre_idx, post_idx, weight, delay) where pre_idx is the index (i.e. order in the Population, not the ID) of the presynaptic neuron, and post_idx is the index of the postsynaptic neuron. """ # needs extending for dynamic synapses. 
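        # For illustration (arbitrary values), a conn_list connecting
        # pre-synaptic cell 0 to post-synaptic cells 0 and 1 might look like:
        #     conn_list = [(0, 0, 0.1, 0.5),
        #                  (0, 1, 0.2, 0.5)]
        # where each tuple is (pre_idx, post_idx, weight, delay in ms).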
Connector.__init__(self, 0., common.get_min_delay(), safe=safe, verbose=verbose) self.conn_list = numpy.array(conn_list) def connect(self, projection): """Connect-up a Projection.""" idx = numpy.argsort(self.conn_list[:, 0]) self.sources = numpy.unique(self.conn_list[:,0]).astype(int) self.candidates = projection.post.local_cells self.conn_list = self.conn_list[idx] self.progressbar(len(self.sources)) count = 0 left = numpy.searchsorted(self.conn_list[:,0], self.sources, 'left') right = numpy.searchsorted(self.conn_list[:,0], self.sources, 'right') #tests = "|".join(['(tgts == %d)' %id for id in self.candidates]) for src, l, r in zip(self.sources, left, right): targets = self.conn_list[l:r, 1].astype(int) weights = self.conn_list[l:r, 2] delays = self.conn_list[l:r, 3] try: src = projection.pre.all_cells[src] except IndexError: raise errors.ConnectionError("invalid source index %s" % src) try: tgts = projection.post.all_cells[targets] except IndexError: raise errors.ConnectionError("invalid target index or indices") ## We need to exclude the non local cells. Fastidious, need maybe ## to use a convergent_connect method, instead of a divergent_connect one #idx = eval(tests) #projection.connection_manager.connect(src, tgts[idx].tolist(), weights[idx], delays[idx]) projection.connection_manager.connect(src, tgts.tolist(), weights, delays) self.progression(count) count += 1 class FromFileConnector(FromListConnector): """ Make connections according to a list read from a file. """ parameter_names = ('filename', 'distributed') def __init__(self, file, distributed=False, safe=True, verbose=False): """ Create a new connector. `file` -- file object containing a list of connections, in the format required by `FromListConnector`. `distributed` -- if this is True, then each node will read connections from a file called `filename.x`, where `x` is the MPI rank. This speeds up loading connections for distributed simulations. """ Connector.__init__(self, 0., common.get_min_delay(), safe=safe, verbose=verbose) if isinstance(file, basestring): file = files.StandardTextFile(file, mode='r') self.file = file self.distributed = distributed def connect(self, projection): """Connect-up a Projection.""" if self.distributed: self.file.rename("%s.%d" % (self.file.name, common.rank())) self.conn_list = self.file.read() FromListConnector.connect(self, projection) class FixedNumberPostConnector(Connector): """ Each pre-synaptic neuron is connected to exactly n post-synaptic neurons chosen at random. If n is less than the size of the post-synaptic population, there are no multiple connections, i.e., no instances of the same pair of neurons being multiply connected. If n is greater than the size of the post-synaptic population, all possible single connections are made before starting to add duplicate connections. """ parameter_names = ('allow_self_connections', 'n') def __init__(self, n, allow_self_connections=True, weights=0.0, delays=None, space=Space(), safe=True, verbose=False): """ Create a new connector. `n` -- either a positive integer, or a `RandomDistribution` that produces positive integers. If `n` is a `RandomDistribution`, then the number of post-synaptic neurons is drawn from this distribution for each pre-synaptic neuron. `allow_self_connections` -- if the connector is used to connect a Population to itself, this flag determines whether a neuron is allowed to connect to itself, or only to other neurons in the Population. 
`weights` -- may either be a float, a RandomDistribution object, a list/ 1D array with at least as many items as connections to be created. Units nA. `delays` -- as `weights`. If `None`, all synaptic delays will be set to the global minimum delay. """ Connector.__init__(self, weights, delays, space, safe, verbose) assert isinstance(allow_self_connections, bool) self.allow_self_connections = allow_self_connections if isinstance(n, int): self.n = n assert n >= 0 elif isinstance(n, random.RandomDistribution): self.rand_distr = n # weak check that the random distribution is ok assert numpy.all(numpy.array(n.next(100)) >= 0), "the random distribution produces negative numbers" else: raise Exception("n must be an integer or a RandomDistribution object") def connect(self, projection): """Connect-up a Projection.""" local = numpy.ones(len(projection.post), bool) weights_generator = WeightGenerator(self.weights, local, projection, self.safe) delays_generator = DelayGenerator(self.delays, local, self.safe) distance_matrix = DistanceMatrix(projection.post.positions, self.space) candidates = projection.post.all_cells size = len(projection.post) self.progressbar(len(projection.pre)) if isinstance(projection.rng, random.NativeRNG): raise Exception("Use of NativeRNG not implemented.") for count, src in enumerate(projection.pre.all()): # pick n neurons at random if hasattr(self, 'rand_distr'): n = self.rand_distr.next() else: n = self.n idx = numpy.arange(size) if not self.allow_self_connections and projection.pre == projection.post: i = numpy.where(candidates == src)[0] idx = numpy.delete(idx, i) create = numpy.array([]) while len(create) < n: # if the number of requested cells is larger than the size of the # postsynaptic population, we allow multiple connections for a given cell create = numpy.concatenate((create, projection.rng.permutation(idx)[:n])) distance_matrix.set_source(src.position) create = create[:n].astype(int) targets = candidates[create] weights = weights_generator.get(n, distance_matrix, create) delays = delays_generator.get(n, distance_matrix, create) if len(targets) > 0: projection.connection_manager.connect(src, targets.tolist(), weights, delays) self.progression(count) class FixedNumberPreConnector(Connector): """ Each post-synaptic neuron is connected to exactly n pre-synaptic neurons chosen at random. If n is less than the size of the pre-synaptic population, there are no multiple connections, i.e., no instances of the same pair of neurons being multiply connected. If n is greater than the size of the pre-synaptic population, all possible single connections are made before starting to add duplicate connections. """ parameter_names = ('allow_self_connections', 'n') def __init__(self, n, allow_self_connections=True, weights=0.0, delays=None, space=Space(), safe=True, verbose=False): """ Create a new connector. `n` -- either a positive integer, or a `RandomDistribution` that produces positive integers. If `n` is a `RandomDistribution`, then the number of pre-synaptic neurons is drawn from this distribution for each post-synaptic neuron. `allow_self_connections` -- if the connector is used to connect a Population to itself, this flag determines whether a neuron is allowed to connect to itself, or only to other neurons in the Population. `weights` -- may either be a float, a RandomDistribution object, a list/ 1D array with at least as many items as connections to be created. Units nA. `delays` -- as `weights`. If `None`, all synaptic delays will be set to the global minimum delay. 
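        A sketch of typical usage (parameter values are arbitrary):

            connector = FixedNumberPreConnector(20, weights=0.05)

        gives every post-synaptic neuron exactly 20 incoming connections, each
        with a weight of 0.05.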
""" Connector.__init__(self, weights, delays, space, safe, verbose) assert isinstance(allow_self_connections, bool) self.allow_self_connections = allow_self_connections if isinstance(n, int): self.n = n assert n >= 0 elif isinstance(n, random.RandomDistribution): self.rand_distr = n # weak check that the random distribution is ok assert numpy.all(numpy.array(n.next(100)) >= 0), "the random distribution produces negative numbers" else: raise Exception("n must be an integer or a RandomDistribution object") def connect(self, projection): """Connect-up a Projection.""" local = numpy.ones(len(projection.pre), bool) weights_generator = WeightGenerator(self.weights, local, projection, self.safe) delays_generator = DelayGenerator(self.delays, local, self.safe) distance_matrix = DistanceMatrix(projection.pre.positions, self.space) candidates = projection.pre.all_cells size = len(projection.pre) self.progressbar(len(projection.post.local_cells)) if isinstance(projection.rng, random.NativeRNG): raise Exception("Warning: use of NativeRNG not implemented.") for count, tgt in enumerate(projection.post.local_cells): # pick n neurons at random if hasattr(self, 'rand_distr'): n = self.rand_distr.next() else: n = self.n idx = numpy.arange(size) if not self.allow_self_connections and projection.pre == projection.post: i = numpy.where(candidates == tgt)[0] idx = numpy.delete(idx, i) create = numpy.array([]) while len(create) < n: # if the number of requested cells is larger than the size of the # presynaptic population, we allow multiple connections for a given cell create = numpy.concatenate((create, projection.rng.permutation(idx)[:n])) distance_matrix.set_source(tgt.position) create = create[:n].astype(int) sources = candidates[create] weights = weights_generator.get(n, distance_matrix, create) delays = delays_generator.get(n, distance_matrix, create) for src, w, d in zip(sources, weights, delays): projection.connection_manager.connect(src, tgt, w, d) self.progression(count) class OneToOneConnector(Connector): """ Where the pre- and postsynaptic populations have the same size, connect cell i in the presynaptic population to cell i in the postsynaptic population for all i. """ parameter_names = tuple() def __init__(self, weights=0.0, delays=None, space=Space(), safe=True, verbose=False): """ Create a new connector. `weights` -- may either be a float, a RandomDistribution object, a list/ 1D array with at least as many items as connections to be created. Units nA. `delays` -- as `weights`. If `None`, all synaptic delays will be set to the global minimum delay. 
""" Connector.__init__(self, weights, delays, space, verbose) self.space = space self.safe = safe def connect(self, projection): """Connect-up a Projection.""" if projection.pre.size == projection.post.size: N = projection.post.size local = projection.post._mask_local if isinstance(self.weights, basestring) or isinstance(self.delays, basestring): raise Exception('Expression for weights or delays is not supported for OneToOneConnector !') weights_generator = WeightGenerator(self.weights, local, projection, self.safe) delays_generator = DelayGenerator(self.delays, local, self.safe) weights = weights_generator.get(N) delays = delays_generator.get(N) self.progressbar(len(projection.post.local_cells)) count = 0 create = numpy.arange(0, N)[local] sources = projection.pre.all_cells[create] for tgt, src, w, d in zip(projection.post.local_cells, sources, weights, delays): # the float is in case the values are of type numpy.float64, which NEST chokes on projection.connection_manager.connect(src, [tgt], [float(w)], [float(d)]) self.progression(count) count += 1 else: raise errors.InvalidDimensionsError("OneToOneConnector does not support presynaptic and postsynaptic Populations of different sizes.") class SmallWorldConnector(Connector): """ For each pair of pre-post cells, the connection probability depends on distance. """ parameter_names = ('allow_self_connections', 'degree', 'rewiring') def __init__(self, degree, rewiring, allow_self_connections=True, weights=0.0, delays=None, space=Space(), safe=True, verbose=False, n_connections=None): """ Create a new connector. `degree` -- the region lenght where nodes will be connected locally `rewiring` -- the probability of rewiring each eadges `space` -- a Space object. `allow_self_connections` -- if the connector is used to connect a Population to itself, this flag determines whether a neuron is allowed to connect to itself, or only to other neurons in the Population. `n_connections` -- The number of efferent synaptic connections per neuron. `weights` -- may either be a float, a RandomDistribution object, a list/ 1D array with at least as many items as connections to be created, or a DistanceDependence object. Units nA. `delays` -- as `weights`. If `None`, all synaptic delays will be set to the global minimum delay. """ Connector.__init__(self, weights, delays, space, safe, verbose) assert 0 <= rewiring <= 1 assert isinstance(allow_self_connections, bool) self.rewiring = rewiring self.d_expression = "d < %g" %degree self.allow_self_connections = allow_self_connections self.n_connections = n_connections def _smallworld_connect(self, src, p, n_connections=None): """ Connect-up a Projection with connection probability p, where p may be either a float 0<=p<=1, or a dict containing a float array for each pre-synaptic cell, the array containing the connection probabilities for all the local targets of that pre-synaptic cell. 
""" rarr = self.probas_generator.get(self.N) if not core.is_listlike(rarr) and numpy.isscalar(rarr): # if N=1, rarr will be a single number rarr = numpy.array([rarr]) precreate = numpy.where(rarr < p)[0] self.distance_matrix.set_source(src.position) if not self.allow_self_connections and self.projection.pre == self.projection.post: i = numpy.where(self.candidates == src)[0] precreate = numpy.delete(precreate, i) idx = numpy.arange(0, self.size) if not self.allow_self_connections and self.projection.pre == self.projection.post: i = numpy.where(self.candidates == src)[0] idx = numpy.delete(idx, i) rarr = self.probas_generator.get(self.N)[precreate] rewired = numpy.where(rarr < self.rewiring)[0] N = len(rewired) if N > 0: new_idx = (len(idx)-1) * self.probas_generator.get(self.N)[precreate] precreate[rewired] = idx[new_idx.astype(int)] if (n_connections is not None) and (len(precreate) > 0): create = numpy.array([], int) while len(create) < n_connections: # if the number of requested cells is larger than the size of the ## presynaptic population, we allow multiple connections for a given cell create = numpy.concatenate((create, self.projection.rng.permutation(precreate))) create = create[:n_connections] else: create = precreate targets = self.candidates[create] weights = self.weights_generator.get(self.N, self.distance_matrix, create) delays = self.delays_generator.get(self.N, self.distance_matrix, create) if len(targets) > 0: self.projection.connection_manager.connect(src, targets.tolist(), weights, delays) def connect(self, projection): """Connect-up a Projection.""" local = numpy.ones(len(projection.post), bool) self.N = projection.post.size if isinstance(projection.rng, random.NativeRNG): raise Exception("Use of NativeRNG not implemented.") else: self.rng = projection.rng self.weights_generator = WeightGenerator(self.weights, local, projection, self.safe) self.delays_generator = DelayGenerator(self.delays, local, self.safe) self.probas_generator = ProbaGenerator(RandomDistribution('uniform',(0,1), rng=self.rng), local) self.distance_matrix = DistanceMatrix(projection.post.positions, self.space, local) self.projection = projection self.candidates = projection.post.all_cells self.size = len(projection.post) self.progressbar(len(projection.pre)) proba_generator = ProbaGenerator(self.d_expression, local) for count, src in enumerate(projection.pre.all()): self.distance_matrix.set_source(src.position) proba = proba_generator.get(self.N, self.distance_matrix).astype(float) self._smallworld_connect(src, proba, self.n_connections) self.progression(count) class CSAConnector(Connector): parameter_names = ('cset',) if haveCSA: def __init__ (self, cset, weights=None, delays=None, safe=True, verbose=False): """ """ min_delay = common.get_min_delay() Connector.__init__(self, None, None, safe=safe, verbose=verbose) self.cset = cset if csa.arity(cset) == 0: #assert weights is not None and delays is not None, \ # 'must specify weights and delays in addition to a CSA mask' self.weights = weights if weights is None: self.weights = common.DEFAULT_WEIGHT self.delays = delays if delays is None: self.delays = common.get_min_delay() else: assert csa.arity(cset) == 2, 'must specify mask or connection-set with arity 2' assert weights is None and delays is None, \ "weights or delays specified both in connection-set and as CSAConnector argument" else: def __init__ (self, cset, safe=True, verbose=False): raise RuntimeError, "CSAConnector not available---couldn't import csa module" @staticmethod def isConstant (x): 
return isinstance (x, (int, float)) @staticmethod def constantIterator (x): while True: yield x def connect(self, projection): """Connect-up a Projection.""" # Cut out finite part c = csa.cross((0, projection.pre.size-1), (0, projection.post.size-1)) * self.cset if csa.arity(self.cset) == 2: # Connection-set with arity 2 for (i, j, weight, delay) in c: projection.connection_manager.connect (projection.pre[i], [projection.post[j]], weight, delay) elif CSAConnector.isConstant (self.weights) \ and CSAConnector.isConstant (self.delays): # Mask with constant weights and delays for (i, j) in c: projection.connection_manager.connect (projection.pre[i], [projection.post[j]], self.weights, self.delays) else: # Mask with weights and/or delays iterable weights = self.weights if CSAConnector.isConstant (weights): weights = CSAConnector.constantIterator (weights) delays = self.delays if CSAConnector.isConstant (delays): delays = CSAConnector.constantIterator (delays) for (i, j), weight, delay in zip (c, weights, delays): projection.connection_manager.connect (projection.pre[i], [projection.post[j]], weight, delay) PyNN-0.7.4/src/models.py0000644000175000017500000000753211736323051015564 0ustar andrewandrew00000000000000""" Base classes for cell and synapse models, whether "standard" (cross-simulator) or "native" (restricted to an individual simulator). :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. """ import copy from pyNN import errors, descriptions class BaseModelType(object): """Base class for standard and native cell and synapse model classes.""" # does not really belong in this module. Some reorganisation required. default_parameters = {} default_initial_values = {} def __init__(self, parameters): self.parameters = self.__class__.checkParameters(parameters, with_defaults=True) @classmethod def has_parameter(cls, name): return name in cls.default_parameters @classmethod def get_parameter_names(cls): return cls.default_parameters.keys() @classmethod def checkParameters(cls, supplied_parameters, with_defaults=False): """ Returns a parameter dictionary, checking that each supplied_parameter is in the default_parameters and converts to the type of the latter. If with_defaults==True, parameters not in supplied_parameters are in the returned dictionary as in default_parameters. 
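        As an illustrative sketch, for a hypothetical cell class whose
        default_parameters are {'tau_m': 20.0, 'cm': 1.0}, the call

            checkParameters({'tau_m': 10}, with_defaults=True)

        would return {'tau_m': 10.0, 'cm': 1.0} (the integer is converted to
        the float type of the default), while supplying a key that is not in
        default_parameters raises NonExistentParameterError.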
""" default_parameters = cls.default_parameters if with_defaults: parameters = copy.copy(default_parameters) else: parameters = {} if supplied_parameters: for k in supplied_parameters.keys(): if k in default_parameters.keys(): err_msg = "For %s in %s, expected %s, got %s (%s)" % \ (k, cls.__name__, type(default_parameters[k]), type(supplied_parameters[k]), supplied_parameters[k]) # same type if type(supplied_parameters[k]) == type(default_parameters[k]): parameters[k] = supplied_parameters[k] # float and something that can be converted to a float elif isinstance(default_parameters[k], float): try: parameters[k] = float(supplied_parameters[k]) except (ValueError, TypeError): raise errors.InvalidParameterValueError(err_msg) # list and something that can be transformed to a list elif isinstance(default_parameters[k], list): try: parameters[k] = list(supplied_parameters[k]) except TypeError: raise errors.InvalidParameterValueError(err_msg) else: raise errors.InvalidParameterValueError(err_msg) else: raise errors.NonExistentParameterError(k, cls, cls.default_parameters.keys()) return parameters def describe(self, template='modeltype_default.txt', engine='default'): """ Returns a human-readable description of the cll or synapse type. The output may be customized by specifying a different template togther with an associated template engine (see ``pyNN.descriptions``). If template is None, then a dictionary containing the template context will be returned. """ context = { "name": self.__class__.__name__, "parameters": self.parameters, } return descriptions.render(engine, template, context) class BaseCellType(BaseModelType): """Base class for cell model classes.""" recordable = [] synapse_types = [] conductance_based = True # override for cells with current-based synapses injectable = True # override for spike sources class BaseSynapseDynamics(object): passPyNN-0.7.4/src/recording/0000755000175000017500000000000011774605211015677 5ustar andrewandrew00000000000000PyNN-0.7.4/src/recording/__init__.py0000644000175000017500000002133411736323051020010 0ustar andrewandrew00000000000000""" Defines classes and functions for managing recordings (spikes, membrane potential etc). These classes and functions are not part of the PyNN API, and are only for internal use. :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. $Id: __init__.py 957 2011-05-03 13:44:15Z apdavison $ """ import tempfile import logging import os.path import numpy import os from pyNN.recording import files try: from mpi4py import MPI except ImportError: MPI = None logger = logging.getLogger("PyNN") numpy1_1_formats = {'spikes': "%g\t%d", 'v': "%g\t%g\t%d", 'gsyn': "%g\t%g\t%g\t%d"} numpy1_0_formats = {'spikes': "%g", # only later versions of numpy support different 'v': "%g", # formats for different columns 'gsyn': "%g"} if MPI: mpi_comm = MPI.COMM_WORLD MPI_ROOT = 0 def rename_existing(filename): if os.path.exists(filename): os.system('mv %s %s_old' % (filename, filename)) logger.warning("File %s already exists. Renaming the original file to %s_old" % (filename, filename)) def gather(data): # gather 1D or 2D numpy arrays if MPI is None: raise Exception("Trying to gather data without MPI installed. 
If you are not running a distributed simulation, this is a bug in PyNN.") assert isinstance(data, numpy.ndarray) assert len(data.shape) < 3 # first we pass the data size size = data.size sizes = mpi_comm.gather(size, root=MPI_ROOT) or [] # now we pass the data displacements = [sum(sizes[:i]) for i in range(len(sizes))] #print mpi_comm.rank, sizes, displacements, data gdata = numpy.empty(sum(sizes)) mpi_comm.Gatherv([data.flatten(), size, MPI.DOUBLE], [gdata, (sizes, displacements), MPI.DOUBLE], root=MPI_ROOT) if len(data.shape) == 1: return gdata else: num_columns = data.shape[1] return gdata.reshape((gdata.size/num_columns, num_columns)) def gather_dict(D): # Note that if the same key exists on multiple nodes, the value from the # node with the highest rank will appear in the final dict. Ds = mpi_comm.gather(D, root=MPI_ROOT) if Ds: for otherD in Ds: D.update(otherD) return D def mpi_sum(x): if MPI and mpi_comm.size > 1: return mpi_comm.allreduce(x, op=MPI.SUM) else: return x class Recorder(object): """Encapsulates data and functions related to recording model variables.""" formats = {'spikes': 'id t', 'v': 'id t v', 'gsyn': 'id t ge gi', 'generic': 'id t variable'} def __init__(self, variable, population=None, file=None): """ Create a recorder. `variable` -- "spikes", "v" or "gsyn" `population` -- the Population instance which is being recorded by the recorder (optional) `file` -- one of: - a file-name, - `None` (write to a temporary file) - `False` (write to memory). """ self.variable = variable self.file = file self.population = population # needed for writing header information if population: assert population.can_record(variable) self.recorded = set([]) def record(self, ids): """Add the cells in `ids` to the set of recorded cells.""" logger.debug('Recorder.record(<%d cells>)' % len(ids)) ids = set([id for id in ids if id.local]) new_ids = ids.difference(self.recorded) self.recorded = self.recorded.union(ids) logger.debug('Recorder.recorded contains %d ids' % len(self.recorded)) self._record(new_ids) def reset(self): """Reset the list of things to be recorded.""" self._reset() self.recorded = set([]) def filter_recorded(self, filter): if filter is not None: return set(filter).intersection(self.recorded) else: return self.recorded def get(self, gather=False, compatible_output=True, filter=None): """Return the recorded data as a Numpy array.""" data_array = self._get(gather, compatible_output, filter) if self.population is not None: try: data_array[:,0] = self.population.id_to_index(data_array[:, 0]) # id is always first column except Exception: pass self._data_size = data_array.shape[0] return data_array def write(self, file=None, gather=False, compatible_output=True, filter=None): """Write recorded data to file.""" file = file or self.file if isinstance(file, basestring): filename = file #rename_existing(filename) if gather==False and simulator.state.num_processes > 1: filename += '.%d' % simulator.state.mpi_rank else: filename = file.name logger.debug("Recorder is writing '%s' to file '%s' with gather=%s and compatible_output=%s" % (self.variable, filename, gather, compatible_output)) data = self.get(gather, compatible_output, filter) metadata = self.metadata logger.debug("data has size %s" % str(data.size)) if simulator.state.mpi_rank == 0 or gather == False: if compatible_output: data = self._make_compatible(data) # Open the output file, if necessary and write the data logger.debug("Writing data to file %s" % file) if isinstance(file, basestring): file = 
files.StandardTextFile(filename, mode='w') file.write(data, metadata) file.close() @property def metadata(self): metadata = {} metadata['variable'] = self.variable if self.population is not None: metadata.update({ 'size': self.population.size, 'first_index': 0, 'last_index': len(self.population), 'first_id': self.population.first_id, 'last_id': self.population.last_id, 'label': self.population.label, }) metadata['dt'] = simulator.state.dt # note that this has to run on all nodes (at least for NEST) if not hasattr(self, '_data_size'): self.get() metadata['n'] = self._data_size return metadata def _make_compatible(self, data_source): """ Rewrite simulation data in a standard format: spiketime (in ms) cell_id-min(cell_id) """ assert isinstance(data_source, numpy.ndarray) logger.debug("Converting data from memory into compatible format") N = len(data_source) logger.debug("Number of data elements = %d" % N) if N > 0: # Shuffle columns if necessary input_format = self.formats.get(self.variable, self.formats["generic"]).split() time_column = input_format.index('t') id_column = input_format.index('id') if self.variable == 'gsyn': ge_column = input_format.index('ge') gi_column = input_format.index('gi') column_map = [ge_column, gi_column, id_column] elif self.variable == 'v': # voltage files v_column = input_format.index('v') column_map = [v_column, id_column] elif self.variable == 'spikes': # spike files column_map = [time_column, id_column] else: variable_column = input_format.index('variable') column_map = [variable_column, id_column] data_array = data_source[:, column_map] else: logger.warning("%s is empty or does not exist" % data_source) data_array = numpy.array([]) return data_array def count(self, gather=True, filter=None): """ Return the number of data points for each cell, as a dict. This is mainly useful for spike counts or for variable-time-step integration methods. """ if self.variable == 'spikes': N = self._local_count(filter) else: raise Exception("Only implemented for spikes.") if gather and simulator.state.num_processes > 1: N = gather_dict(N) return N PyNN-0.7.4/src/recording/files.py0000644000175000017500000001744511774605161017372 0ustar andrewandrew00000000000000""" Provides standard interfaces to various text and binary file formats for saving spikes, membrane potential and synaptic conductances. To record data in a given format, pass an instance of any of the File classes to the Population.printSpikes(), print_v() or print_gsyn() methods. Classes: StandardTextFile PickleFile NumpyBinaryFile HDF5ArrayFile - requires PyTables :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. $Id: files.py 1194 2012-07-03 15:00:07Z apdavison $ """ import numpy, sys, os, shutil import cPickle as pickle from numpy.lib import format try: import tables have_hdf5 = True except ImportError: have_hdf5 = False DEFAULT_BUFFER_SIZE = 10000 def _savetxt(filename, data, format, delimiter): """ Due to the lack of savetxt in older versions of numpy we provide a cut-down version of that function. 
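    For illustration (hypothetical file name and values), the call

        _savetxt('out.txt', [(0.1, 1), (0.2, 2)], '%g', '\t')

    writes one delimiter-separated row per data point, formatting each value
    with the given format string.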
""" f = open(filename, 'w') for row in data: f.write(delimiter.join([format%val for val in row]) + '\n') f.close() def savez(file, *args, **kwds): __doc__ = numpy.savez.__doc__ import zipfile if isinstance(file, basestring): if not file.endswith('.npz'): file = file + '.npz' namedict = kwds for i, val in enumerate(args): key = 'arr_%d' % i if key in namedict.keys(): raise ValueError, "Cannot use un-named variables and keyword %s" % key namedict[key] = val zip = zipfile.ZipFile(file, mode="w") # Place to write temporary .npy files # before storing them in the zip. We need to path this to have a working # function in parallel ! import tempfile direc = tempfile.mkdtemp() for key, val in namedict.iteritems(): fname = key + '.npy' filename = os.path.join(direc, fname) fid = open(filename, 'wb') format.write_array(fid, numpy.asanyarray(val)) fid.close() zip.write(filename, arcname=fname) zip.close() shutil.rmtree(direc) class BaseFile(object): """ Base class for PyNN File classes. """ def __init__(self, filename, mode='r'): """ Open a file with the given filename and mode. """ self.name = filename self.mode = mode dir = os.path.dirname(filename) if dir and not os.path.exists(dir): os.makedirs(dir) try: ## Need this because in parallel, file names are changed self.fileobj = open(self.name, mode, DEFAULT_BUFFER_SIZE) except Exception, err: self.open_error = err def __del__(self): self.close() def _check_open(self): if not hasattr(self, 'fileobj'): raise self.open_error def rename(self, filename): self.close() try: ## Need this because in parallel, only one node will delete the file with NFS os.remove(self.name) except Exception: pass self.name = filename self.fileobj = open(self.name, self.mode, DEFAULT_BUFFER_SIZE) def write(self, data, metadata): """ Write data and metadata to file. `data` should be a NumPy array, `metadata` should be a dictionary. """ raise NotImplementedError def read(self): """ Read data from the file and return a NumPy array. """ raise NotImplementedError def get_metadata(self): """ Read metadata from the file and return a dict. """ raise NotImplementedError def close(self): """Close the file.""" if hasattr(self, 'fileobj'): self.fileobj.close() class StandardTextFile(BaseFile): """ Data and metadata is written as text. Metadata is written at the top of the file, with each line preceded by "#". Data is written with one data point per line. """ def write(self, data, metadata): __doc__ = BaseFile.write.__doc__ self._check_open() # can we write to the file more than once? In this case, should use seek,tell # to always put the header information at the top? # write header header_lines = ["# %s = %s" % item for item in metadata.items()] self.fileobj.write("\n".join(header_lines) + '\n') # write data savetxt = getattr(numpy, 'savetxt', _savetxt) savetxt(self.fileobj, data, fmt='%r', delimiter='\t') self.fileobj.close() def read(self): self._check_open() return numpy.loadtxt(self.fileobj) class PickleFile(BaseFile): """ Data and metadata are pickled and saved to file. 
""" def write(self, data, metadata): __doc__ = BaseFile.write.__doc__ self._check_open() pickle.dump((data, metadata), self.fileobj) def read(self): __doc__ = BaseFile.read.__doc__ self._check_open() data = pickle.load(self.fileobj)[0] self.fileobj.seek(0) return data def get_metadata(self): __doc__ = BaseFile.get_metadata.__doc__ self._check_open() metadata = pickle.load(self.fileobj)[1] self.fileobj.seek(0) return metadata class NumpyBinaryFile(BaseFile): """ Data and metadata are saved in .npz format, which is a zipped archive of arrays. """ def _check_open(self): if not hasattr(self, "fileobj") or self.fileobj.closed: self.fileobj = open(self.name, self.mode, DEFAULT_BUFFER_SIZE) else: self.fileobj.seek(0) def write(self, data, metadata): __doc__ = BaseFile.write.__doc__ self._check_open() metadata_array = numpy.array(metadata.items(), dtype=(str, float)) savez(self.fileobj, data=data, metadata=metadata_array) def read(self): __doc__ = BaseFile.read.__doc__ self._check_open() data = numpy.load(self.fileobj)['data'] return data def get_metadata(self): __doc__ = BaseFile.get_metadata.__doc__ self._check_open() D = {} for name,value in numpy.load(self.fileobj)['metadata']: try: D[name] = eval(value) except Exception: D[name] = value return D if have_hdf5: class HDF5ArrayFile(BaseFile): """ Data are saved as an array within a node named "data". Metadata are saved as attributes of this node. """ def __init__(self, filename, mode='r', title="PyNN data file"): """ Open an HDF5 file with the given filename, mode and title. """ self.name = filename self.mode = mode self.fileobj = tables.openFile(filename, mode=mode, title=title) # may not work with old versions of PyTables < 1.3, since they only support numarray, not numpy def write(self, data, metadata): __doc__ = BaseFile.write.__doc__ if len(data) > 0: try: node = self.fileobj.createArray(self.fileobj.root, "data", data) except tables.HDF5ExtError, e: raise tables.HDF5ExtError("%s. data.shape=%s, metadata=%s" % (e, data.shape, metadata)) for name, value in metadata.items(): setattr(node.attrs, name, value) self.fileobj.close() def read(self): __doc__ = BaseFile.read.__doc__ return self.fileobj.root.data.read() def get_metadata(self): __doc__ = BaseFile.get_metadata.__doc__ D = {} node = self.fileobj.root.data for name in node._v_attrs._f_list(): D[name] = node.attrs.__getattr__(name) return D PyNN-0.7.4/src/brian/0000755000175000017500000000000011774605211015016 5ustar andrewandrew00000000000000PyNN-0.7.4/src/brian/electrodes.py0000644000175000017500000000626311737557201017534 0ustar andrewandrew00000000000000""" Current source classes for the brian module. Classes: DCSource -- a single pulse of current of constant amplitude. StepCurrentSource -- a step-wise time-varying current. :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. 
$Id: electrodes.py 1117 2012-04-02 16:06:20Z apdavison $
"""

from brian import ms, nA, network_operation
import simulator
import numpy

current_sources = []

@network_operation(when='start')
def update_currents():
    global current_sources
    for current_source in current_sources:
        current_source._update_current()


class CurrentSource(object):
    """Base class for a source of current to be injected into a neuron."""

    def __init__(self):
        global current_sources
        self.cell_list = []
        current_sources.append(self)

    def inject_into(self, cell_list):
        """Inject this current source into some cells."""
        for cell in cell_list:
            if not cell.celltype.injectable:
                raise TypeError("Can't inject current into a spike source.")
        self.cell_list.extend(cell_list)


class StepCurrentSource(CurrentSource):
    """A step-wise time-varying current source."""

    def __init__(self, times, amplitudes):
        """
        Construct the current source.

        Arguments:
            times      -- list/array of times at which the injected current
                          changes.
            amplitudes -- list/array of current amplitudes to be injected at
                          the times specified in `times`.

        The injected current will be zero up until the first time in `times`.
        The current will continue at the final value in `amplitudes` until the
        end of the simulation.
        """
        CurrentSource.__init__(self)
        assert len(times) == len(amplitudes), \
               "times and amplitudes must be the same size (len(times)=%d, len(amplitudes)=%d)" % (len(times), len(amplitudes))
        self.times = times
        self.amplitudes = numpy.array(amplitudes) * nA
        self.i = 0
        self.running = True

    def _update_current(self):
        """
        Check whether the current amplitude needs updating, and if so, update
        it. This is called at every timestep.
        """
        if self.running and simulator.state.t >= self.times[self.i]:
            for cell in self.cell_list:
                index = cell.parent.id_to_index(cell)
                cell.parent_group.i_inj[index] = self.amplitudes[self.i]
            self.i += 1
            if self.i >= len(self.times):
                self.running = False


class DCSource(StepCurrentSource):
    """Source producing a single pulse of current of constant amplitude."""

    def __init__(self, amplitude=1.0, start=0.0, stop=None):
        """
        Construct the current source.

        Arguments:
            amplitude -- pulse amplitude in nA
            start     -- onset time of pulse in ms
            stop      -- end of pulse in ms
        """
        self.start = start
        self.stop = stop
        self.amplitude = amplitude
        times = [0.0, start, (stop or 1e99)]
        amplitudes = [0.0, amplitude, 0.0]
        StepCurrentSource.__init__(self, times, amplitudes)
PyNN-0.7.4/src/brian/standardmodels/0000755000175000017500000000000011774605211020022 5ustar andrewandrew00000000000000PyNN-0.7.4/src/brian/standardmodels/cells.py0000644000175000017500000004023311762141172021476 0ustar andrewandrew00000000000000"""
Standard cells for the brian module

:copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS.
:license: CeCILL, see LICENSE for details.
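
Example (a sketch; the parameter values are arbitrary -- `create` is the
low-level API function defined in pyNN.brian):

    >>> from pyNN.brian import setup, create, IF_cond_exp
    >>> setup()
    >>> cells = create(IF_cond_exp, {'tau_m': 15.0, 'cm': 0.25}, n=10)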
$Id: cells.py 957 2011-05-03 13:44:15Z apdavison $ """ from pyNN.standardmodels import cells, build_translations, ModelNotAvailable from pyNN import errors #import brian_no_units_no_warnings from brian.library.synapses import * import brian from pyNN.brian.simulator import SimpleCustomRefractoriness, AdaptiveReset from brian import mV, ms, nF, nA, uS, second, Hz, amp import numpy class IF_curr_alpha(cells.IF_curr_alpha): """Leaky integrate and fire model with fixed threshold and alpha-function- shaped post-synaptic current.""" translations = build_translations( ('v_rest', 'v_rest', mV), ('v_reset', 'v_reset'), ('cm', 'c_m', nF), ('tau_m', 'tau_m', ms), ('tau_refrac', 'tau_refrac'), ('tau_syn_E', 'tau_syn_E', ms), ('tau_syn_I', 'tau_syn_I', ms), ('v_thresh', 'v_thresh'), ('i_offset', 'i_offset', nA), ) eqs= brian.Equations(''' dv/dt = (ge + gi + i_offset + i_inj)/c_m + (v_rest-v)/tau_m : mV dge/dt = (2.7182818284590451*ye-ge)/tau_syn_E : nA dye/dt = -ye/tau_syn_E : nA dgi/dt = (2.7182818284590451*yi-gi)/tau_syn_I : nA dyi/dt = -yi/tau_syn_I : nA c_m : nF tau_syn_E : ms tau_syn_I : ms tau_m : ms v_rest : mV i_offset : nA i_inj : nA ''' ) synapses = {'excitatory' : 'ye', 'inhibitory' : 'yi'} @property def threshold(self): return self.parameters['v_thresh'] * mV @property def reset(self): return self.parameters['v_reset'] * mV class IF_curr_exp(cells.IF_curr_exp): """Leaky integrate and fire model with fixed threshold and decaying-exponential post-synaptic current. (Separate synaptic currents for excitatory and inhibitory synapses.""" translations = build_translations( ('v_rest', 'v_rest', mV), ('v_reset', 'v_reset'), ('cm', 'c_m', nF), ('tau_m', 'tau_m', ms), ('tau_refrac', 'tau_refrac'), ('tau_syn_E', 'tau_syn_E', ms), ('tau_syn_I', 'tau_syn_I', ms), ('v_thresh', 'v_thresh'), ('i_offset', 'i_offset', nA), ) eqs= brian.Equations(''' dv/dt = (ie + ii + i_offset + i_inj)/c_m + (v_rest-v)/tau_m : mV die/dt = -ie/tau_syn_E : nA dii/dt = -ii/tau_syn_I : nA tau_syn_E : ms tau_syn_I : ms tau_m : ms c_m : nF v_rest : mV i_offset : nA i_inj : nA ''' ) synapses = {'excitatory': 'ie', 'inhibitory': 'ii'} @property def threshold(self): return self.parameters['v_thresh'] * mV @property def reset(self): return self.parameters['v_reset'] * mV class IF_cond_alpha(cells.IF_cond_alpha): translations = build_translations( ('v_rest', 'v_rest', mV), ('v_reset', 'v_reset'), ('cm', 'c_m', nF), ('tau_m', 'tau_m', ms), ('tau_refrac', 'tau_refrac'), ('tau_syn_E', 'tau_syn_E', ms), ('tau_syn_I', 'tau_syn_I', ms), ('v_thresh', 'v_thresh'), ('i_offset', 'i_offset', nA), ('e_rev_E', 'e_rev_E', mV), ('e_rev_I', 'e_rev_I', mV), ) eqs= brian.Equations(''' dv/dt = (v_rest-v)/tau_m + (ge*(e_rev_E-v) + gi*(e_rev_I-v) + i_offset + i_inj)/c_m : mV dge/dt = (2.7182818284590451*ye-ge)/tau_syn_E : uS dye/dt = -ye/tau_syn_E : uS dgi/dt = (2.7182818284590451*yi-gi)/tau_syn_I : uS dyi/dt = -yi/tau_syn_I : uS tau_syn_E : ms tau_syn_I : ms tau_m : ms v_rest : mV e_rev_E : mV e_rev_I : mV c_m : nF i_offset : nA i_inj : nA ''' ) synapses = {'excitatory': 'ye', 'inhibitory': 'yi'} @property def threshold(self): return self.parameters['v_thresh'] * mV @property def reset(self): return self.parameters['v_reset'] * mV class IF_cond_exp(cells.IF_cond_exp): """Leaky integrate and fire model with fixed threshold and exponentially-decaying post-synaptic conductance.""" translations = build_translations( ('v_rest', 'v_rest', mV), ('v_reset', 'v_reset'), ('cm', 'c_m', nF), ('tau_m', 'tau_m', ms), ('tau_refrac', 'tau_refrac'), ('tau_syn_E', 
'tau_syn_E', ms), ('tau_syn_I', 'tau_syn_I', ms), ('v_thresh', 'v_thresh'), ('i_offset', 'i_offset', nA), ('e_rev_E', 'e_rev_E', mV), ('e_rev_I', 'e_rev_I', mV), ) eqs= brian.Equations(''' dv/dt = (v_rest-v)/tau_m + (ge*(e_rev_E-v) + gi*(e_rev_I-v) + i_offset + i_inj)/c_m : mV dge/dt = -ge/tau_syn_E : uS dgi/dt = -gi/tau_syn_I : uS tau_syn_E : ms tau_syn_I : ms tau_m : ms c_m : nF v_rest : mV e_rev_E : mV e_rev_I : mV i_offset : nA i_inj : nA ''' ) synapses = {'excitatory': 'ge', 'inhibitory': 'gi'} @property def threshold(self): return self.parameters['v_thresh'] * mV @property def reset(self): return self.parameters['v_reset'] * mV class IF_facets_hardware1(ModelNotAvailable): """Leaky integrate and fire model with conductance-based synapses and fixed threshold as it is resembled by the FACETS Hardware Stage 1. For further details regarding the hardware model see the FACETS-internal Wiki: https://facets.kip.uni-heidelberg.de/private/wiki/index.php/WP7_NNM """ pass class SpikeSourcePoisson(cells.SpikeSourcePoisson): """Spike source, generating spikes according to a Poisson process.""" translations = build_translations( ('rate', 'rate'), ('start', 'start'), ('duration', 'duration'), ) class rates(object): """ Acts as a function of time for the PoissonGroup, while storing the parameters for later retrieval. """ def __init__(self, start, duration, rate, n): self.start = start * numpy.ones(n) * ms self.duration = duration * numpy.ones(n) * ms self.rate = rate * numpy.ones(n) * Hz def __call__(self, t): idx = (self.start <= t) & (t <= self.start + self.duration) return numpy.where(idx, self.rate, 0) def __init__(self, parameters): cells.SpikeSourcePoisson.__init__(self, parameters) start = self.parameters['start'] duration = self.parameters['duration'] rate = self.parameters['rate'] class SpikeSourceArray(cells.SpikeSourceArray): """Spike source generating spikes at the times given in the spike_times array.""" translations = build_translations( ('spike_times', 'spiketimes', ms), ) @classmethod def translate(cls, parameters): if 'spike_times' in parameters: try: parameters['spike_times'] = numpy.array(parameters['spike_times'], float) except ValueError: raise errors.InvalidParameterValueError("spike times must be floats") return super(SpikeSourceArray, cls).translate(parameters) class EIF_cond_alpha_isfa_ista(cells.EIF_cond_alpha_isfa_ista): """ Exponential integrate and fire neuron with spike triggered and sub-threshold adaptation currents (isfa, ista reps.) according to: Brette R and Gerstner W (2005) Adaptive Exponential Integrate-and-Fire Model as an Effective Description of Neuronal Activity. 
J Neurophysiol 94:3637-3642 See also: IF_cond_exp_gsfa_grr, EIF_cond_exp_isfa_ista """ translations = build_translations( ('cm', 'c_m', nF), ('v_spike', 'v_spike'), ('v_rest', 'v_rest', mV), ('v_reset', 'v_reset'), ('tau_m', 'tau_m', ms), ('tau_refrac', 'tau_refrac'), ('i_offset', 'i_offset', nA), ('a', 'a', nA), ('b', 'b', nA), ('delta_T', 'delta_T', mV), ('tau_w', 'tau_w', ms), ('v_thresh', 'v_thresh', mV), ('e_rev_E', 'e_rev_E', mV), ('tau_syn_E', 'tau_syn_E', ms), ('e_rev_I', 'e_rev_I', mV), ('tau_syn_I', 'tau_syn_I', ms), ) eqs= brian.Equations(''' dv/dt = ((v_rest-v) + delta_T*exp((v-v_thresh)/delta_T))/tau_m + (ge*(e_rev_E-v) + gi*(e_rev_I-v) + i_offset + i_inj - w)/c_m : mV dge/dt = (2.7182818284590451*ye-ge)/tau_syn_E : uS dye/dt = -ye/tau_syn_E : uS dgi/dt = (2.7182818284590451*yi-gi)/tau_syn_I : uS dyi/dt = -yi/tau_syn_I : uS dw/dt = (a*(v-v_rest) - w)/tau_w : nA tau_syn_E : ms tau_syn_I : ms tau_m : ms v_rest : mV e_rev_E : mV e_rev_I : mV c_m : nF i_offset : nA i_inj : nA delta_T : mV a : uS b : nA tau_w : ms v_thresh : mV v_spike : mV ''' ) synapses = {'excitatory': 'ye', 'inhibitory': 'yi'} @property def threshold(self): return self.parameters['v_spike'] * mV @property def reset(self): reset = AdaptiveReset(self.parameters['v_reset'] * mV, self.parameters['b'] * amp) return SimpleCustomRefractoriness(reset, period = self.parameters['tau_refrac'] * ms) class EIF_cond_exp_isfa_ista(cells.EIF_cond_exp_isfa_ista): """ Exponential integrate and fire neuron with spike triggered and sub-threshold adaptation currents (isfa, ista reps.) according to: Brette R and Gerstner W (2005) Adaptive Exponential Integrate-and-Fire Model as an Effective Description of Neuronal Activity. J Neurophysiol 94:3637-3642 See also: IF_cond_exp_gsfa_grr, EIF_cond_exp_isfa_ista """ translations = build_translations( ('cm', 'c_m', nF), ('v_spike', 'v_spike'), ('v_rest', 'v_rest', mV), ('v_reset', 'v_reset'), ('tau_m', 'tau_m', ms), ('tau_refrac', 'tau_refrac'), ('i_offset', 'i_offset', nA), ('a', 'a', nA), ('b', 'b', nA), ('delta_T', 'delta_T', mV), ('tau_w', 'tau_w', ms), ('v_thresh', 'v_thresh', mV), ('e_rev_E', 'e_rev_E', mV), ('tau_syn_E', 'tau_syn_E', ms), ('e_rev_I', 'e_rev_I', mV), ('tau_syn_I', 'tau_syn_I', ms), ) eqs= brian.Equations(''' dv/dt = ((v_rest-v) + delta_T*exp((v - v_thresh)/delta_T))/tau_m + (ge*(e_rev_E-v) + gi*(e_rev_I-v) + i_offset + i_inj - w)/c_m : mV dge/dt = -ge/tau_syn_E : uS dgi/dt = -gi/tau_syn_I : uS dw/dt = (a*(v-v_rest) - w)/tau_w : nA tau_syn_E : ms tau_syn_I : ms tau_m : ms v_rest : mV e_rev_E : mV e_rev_I : mV c_m : nF i_offset : nA i_inj : nA delta_T : mV a : uS b : nA tau_w : ms v_thresh : mV v_spike : mV ''' ) synapses = {'excitatory': 'ge', 'inhibitory': 'gi'} @property def threshold(self): return self.parameters['v_spike'] * mV @property def reset(self): reset = AdaptiveReset(self.parameters['v_reset'] * mV, self.parameters['b'] * amp) return SimpleCustomRefractoriness(reset, period = self.parameters['tau_refrac'] * ms) class HH_cond_exp(cells.HH_cond_exp): translations = build_translations( ('gbar_Na', 'gbar_Na', uS), ('gbar_K', 'gbar_K', uS), ('g_leak', 'g_leak', uS), ('cm', 'c_m', nF), ('v_offset', 'v_offset', mV), ('e_rev_Na', 'e_rev_Na', mV), ('e_rev_K', 'e_rev_K', mV), ('e_rev_leak', 'e_rev_leak', mV), ('e_rev_E', 'e_rev_E', mV), ('e_rev_I', 'e_rev_I', mV), ('tau_syn_E', 'tau_syn_E', ms), ('tau_syn_I', 'tau_syn_I', ms), ('i_offset', 'i_offset', nA), ) eqs= brian.Equations(''' dv/dt = 
(g_leak*(e_rev_leak-v)+ge*(e_rev_E-v)+gi*(e_rev_I-v)-gbar_Na*(m*m*m)*h*(v-e_rev_Na)-gbar_K*(n*n*n*n)*(v-e_rev_K) + i_offset + i_inj)/c_m : mV dm/dt = (alpham*(1-m)-betam*m) : 1 dn/dt = (alphan*(1-n)-betan*n) : 1 dh/dt = (alphah*(1-h)-betah*h) : 1 dge/dt = -ge/tau_syn_E : uS dgi/dt = -gi/tau_syn_I : uS alpham = 0.32*(mV**-1)*(13*mV-v+v_offset)/(exp((13*mV-v+v_offset)/(4*mV))-1.)/ms : Hz betam = 0.28*(mV**-1)*(v-v_offset-40*mV)/(exp((v-v_offset-40*mV)/(5*mV))-1)/ms : Hz alphah = 0.128*exp((17*mV-v+v_offset)/(18*mV))/ms : Hz betah = 4./(1+exp((40*mV-v+v_offset)/(5*mV)))/ms : Hz alphan = 0.032*(mV**-1)*(15*mV-v+v_offset)/(exp((15*mV-v+v_offset)/(5*mV))-1.)/ms : Hz betan = .5*exp((10*mV-v+v_offset)/(40*mV))/ms : Hz tau_syn_E : ms tau_syn_I : ms e_rev_E : mV e_rev_I : mV e_rev_Na : mV e_rev_K : mV e_rev_leak : mV gbar_Na : uS gbar_K : uS g_leak : uS v_offset : mV c_m : nF i_offset : nA i_inj : nA ''') synapses = {'excitatory': 'ge', 'inhibitory': 'gi'} @property def threshold(self): return brian.EmpiricalThreshold(threshold=-40*mV, refractory=2*ms) @property def reset(self): return 0 * mV @property def extra(self): return {'implicit' : True} class SpikeSourceInhGamma(ModelNotAvailable): pass class IF_cond_exp_gsfa_grr(ModelNotAvailable): pass PyNN-0.7.4/src/brian/standardmodels/__init__.py0000644000175000017500000000000011736323051022116 0ustar andrewandrew00000000000000PyNN-0.7.4/src/brian/standardmodels/synapses.py0000644000175000017500000000513111736323051022236 0ustar andrewandrew00000000000000""" Synapse Dynamics classes for the brian module. :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. $Id: synapses.py 957 2011-05-03 13:44:15Z apdavison $ """ from pyNN.standardmodels import synapses, SynapseDynamics, STDPMechanism class STDPMechanism(STDPMechanism): """Specification of STDP models.""" def __init__(self, timing_dependence=None, weight_dependence=None, voltage_dependence=None, dendritic_delay_fraction=0.0): assert dendritic_delay_fraction == 0, """Brian does not currently support dendritic delays: for the purpose of STDP calculations all delays are assumed to be axonal.""" standardmodels.STDPMechanism.__init__(self, timing_dependence, weight_dependence, voltage_dependence, dendritic_delay_fraction) class TsodyksMarkramMechanism(synapses.TsodyksMarkramMechanism): def __init__(self, U=0.5, tau_rec=100.0, tau_facil=0.0, u0=0.0, x0=1.0, y0=0.0): parameters = dict(locals()) parameters.pop('self') self.parameters = parameters def reset(population,spikes, v_reset): population.R_[spikes]-=U_SE*population.R_[spikes] population.v_[spikes]= v_reset class AdditiveWeightDependence(synapses.AdditiveWeightDependence): def __init__(self, w_min=0.0, w_max=1.0, A_plus=0.01, A_minus=0.01): # units? parameters = dict(locals()) parameters.pop('self') self.parameters = parameters self.parameters['mu_plus'] = 0. self.parameters['mu_minus'] = 0. class MultiplicativeWeightDependence(synapses.MultiplicativeWeightDependence): def __init__(self, w_min=0.0, w_max=1.0, A_plus=0.01, A_minus=0.01): parameters = dict(locals()) parameters.pop('self') self.parameters = parameters self.parameters['mu_plus'] = 1. self.parameters['mu_minus'] = 1. 
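
# Example (an illustrative sketch only; the parameter values are arbitrary):
# the weight-dependence classes above and the timing rule below are combined
# via STDPMechanism and SynapseDynamics, e.g.
#
#     stdp = STDPMechanism(
#         timing_dependence=SpikePairRule(tau_plus=20.0, tau_minus=20.0),
#         weight_dependence=AdditiveWeightDependence(w_min=0.0, w_max=0.02,
#                                                    A_plus=0.01, A_minus=0.012))
#     dynamics = SynapseDynamics(slow=stdp)
#
# `dynamics` can then be passed as the `synapse_dynamics` argument of a
# Projection.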
class AdditivePotentiationMultiplicativeDepression(synapses.AdditivePotentiationMultiplicativeDepression):

    def __init__(self, w_min=0.0, w_max=1.0, A_plus=0.01, A_minus=0.01):
        parameters = dict(locals())
        parameters.pop('self')
        self.parameters = parameters
        self.parameters['mu_plus'] = 0.0
        self.parameters['mu_minus'] = 1.0


class SpikePairRule(synapses.SpikePairRule):

    def __init__(self, tau_plus=20.0, tau_minus=20.0):
        parameters = dict(locals())
        parameters.pop('self')
        self.parameters = parameters
PyNN-0.7.4/src/brian/__init__.py0000644000175000017500000003334611737577326017146 0ustar andrewandrew00000000000000# -*- coding: utf-8 -*-
"""
Brian implementation of the PyNN API.

:copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS.
:license: CeCILL, see LICENSE for details.

$Id: __init__.py 1122 2012-04-06 14:33:59Z apdavison $
"""

import logging
Set = set   # legacy alias, retained for backward compatibility
#import brian_no_units_no_warnings
from pyNN.brian import simulator
from pyNN import common, recording, space, core, __doc__
common.simulator = simulator
recording.simulator = simulator
from pyNN.random import *
from pyNN.recording import files
from pyNN.brian.standardmodels.cells import *
from pyNN.brian.connectors import *
from pyNN.brian.standardmodels.synapses import *
from pyNN.brian.electrodes import *
from pyNN.brian import electrodes
from pyNN.brian.recording import *
from pyNN import standardmodels

logger = logging.getLogger("PyNN")


def list_standard_models():
    """Return a list of all the StandardCellType classes available for this simulator."""
    standard_cell_types = [obj for obj in globals().values() if isinstance(obj, type) and issubclass(obj, standardmodels.StandardCellType)]
    for cell_class in list(standard_cell_types):  # iterate over a copy, since items may be removed inside the loop
        try:
            create(cell_class)
        except Exception, e:
            print "Warning: %s is defined, but produces the following error: %s" % (cell_class.__name__, e)
            standard_cell_types.remove(cell_class)
    return [obj.__name__ for obj in standard_cell_types]

# ==============================================================================
#   Functions for simulation set-up and control
# ==============================================================================

def setup(timestep=0.1, min_delay=0.1, max_delay=10.0, **extra_params):
    """
    Should be called at the very beginning of a script.
    extra_params contains any keyword arguments that are required by a given
    simulator but not by others.
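
    For pyNN.brian, extra_params are passed on to Brian's
    set_global_preferences. A sketch (`useweave` is assumed here to be one of
    Brian's global preferences):

        >>> setup(timestep=0.1, min_delay=0.1, max_delay=4.0, useweave=True)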
""" common.setup(timestep, min_delay, max_delay, **extra_params) _cleanup() brian.set_global_preferences(**extra_params) simulator.state = simulator._State(timestep, min_delay, max_delay) simulator.state.add(update_currents) # from electrodes update_currents.clock = simulator.state.simclock recording.simulator = simulator reset() return rank() def _cleanup(): simulator.recorder_list = [] electrodes.current_sources = [] if hasattr(simulator, 'state'): if hasattr(simulator.state, 'network'): for item in simulator.state.network.groups + simulator.state.network._all_operations: del item del simulator.state def end(compatible_output=True): """Do any necessary cleaning up before exiting.""" for recorder in simulator.recorder_list: recorder.write(gather=True, compatible_output=compatible_output) _cleanup() def get_current_time(): """Return the current time in the simulation.""" return simulator.state.t def run(simtime): """Run the simulation for simtime ms.""" simulator.state.run(simtime) return get_current_time() reset = simulator.reset initialize = common.initialize # ============================================================================== # Functions returning information about the simulation state # ============================================================================== get_time_step = common.get_time_step get_min_delay = common.get_min_delay get_max_delay = common.get_max_delay num_processes = common.num_processes rank = common.rank # ============================================================================== # High-level API for creating, connecting and recording from populations of # neurons. # ============================================================================== class Population(common.Population, common.BasePopulation): """ An array of neurons all of the same type. `Population' is used as a generic term intended to include layers, columns, nuclei, etc., of cells. """ recorder_class = Recorder def _create_cells(self, cellclass, cellparams=None, n=1): """ Create cells in Brian. `cellclass` -- a PyNN standard cell or a native Brian cell class. `cellparams` -- a dictionary of cell parameters. 
`n` -- the number of cells to create """ # currently, we create a single NeuronGroup for create(), but # arguably we should use n NeuronGroups each containing a single cell # either that or use the subgroup() method in connect(), etc assert n > 0, 'n must be a positive integer' # if isinstance(cellclass, basestring): # celltype is not a standard cell # try: # eqs = brian.Equations(cellclass) # except Exception, errmsg: # raise errors.InvalidModelError(errmsg) # v_thresh = cellparams['v_thresh'] * mV # v_reset = cellparams['v_reset'] * mV # tau_refrac = cellparams['tau_refrac'] * ms # brian_cells = brian.NeuronGroup(n, # model=eqs, # threshold=v_thresh, # reset=v_reset, # clock=state.simclock, # compile=True, # max_delay=state.max_delay) # cell_parameters = cellparams or {} # elif isinstance(cellclass, type) and issubclass(cellclass, standardmodels.StandardCellType): celltype = cellclass(cellparams) cell_parameters = celltype.parameters if isinstance(celltype, cells.SpikeSourcePoisson): fct = celltype.rates(cell_parameters['start'], cell_parameters['duration'], cell_parameters['rate'], n) brian_cells = simulator.PoissonGroupWithDelays(n, rates=fct) elif isinstance(celltype, cells.SpikeSourceArray): spike_times = cell_parameters['spiketimes'] brian_cells = simulator.MultipleSpikeGeneratorGroupWithDelays([spike_times for i in xrange(n)]) elif 'v_reset' in cell_parameters: params = {'threshold' : celltype.threshold, 'reset' : celltype.reset} if cell_parameters.has_key('tau_refrac'): params['refractory'] = cell_parameters['tau_refrac'] * ms if hasattr(celltype, 'extra'): params.update(celltype.extra) brian_cells = simulator.ThresholdNeuronGroup(n, cellclass.eqs, **params) else: params = {'threshold' : celltype.threshold} if hasattr(celltype, 'extra'): params.update(celltype.extra) brian_cells = simulator.PlainNeuronGroup(n, cellclass.eqs, **params) # elif isinstance(cellclass, type) and issubclass(cellclass, standardmodels.ModelNotAvailable): # raise NotImplementedError("The %s model is not available for this simulator." % cellclass.__name__) # else: # raise Exception("Invalid cell type: %s" % type(cellclass)) if cell_parameters: for key, value in cell_parameters.items(): setattr(brian_cells, key, value) # should we globally track the IDs used, so as to ensure each cell gets a unique integer? (need only track the max ID) self.all_cells = numpy.array([simulator.ID(simulator.state.next_id) for cell in xrange(len(brian_cells))], simulator.ID) for cell in self.all_cells: cell.parent = self cell.parent_group = brian_cells self._mask_local = numpy.ones((n,), bool) # all cells are local. This doesn't seem very efficient. self.first_id = self.all_cells[0] self.last_id = self.all_cells[-1] self.brian_cells = brian_cells simulator.state.network.add(brian_cells) def _set_initial_value_array(self, variable, value): if variable is 'v': value = value*mV if not hasattr(value, "__len__"): value = value*numpy.ones((len(self),)) self.brian_cells.initial_values[variable] = value self.brian_cells.initialize() PopulationView = common.PopulationView Assembly = common.Assembly class Projection(common.Projection): """ A container for all the connections of a given type (same synapse type and plasticity mechanisms) between two populations, together with methods to set parameters of those connections, including of plasticity mechanisms. 
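
    Example (a sketch, assuming Populations `pre` and `post` have already been
    created; the connection probability is arbitrary):

        >>> prj = Projection(pre, post, FixedProbabilityConnector(0.1),
        ...                  target='excitatory')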
""" def __init__(self, presynaptic_population, postsynaptic_population, method, source=None, target=None, synapse_dynamics=None, label=None, rng=None): """ presynaptic_population and postsynaptic_population - Population objects. source - string specifying which attribute of the presynaptic cell signals action potentialss target - string specifying which synapse on the postsynaptic cell to connect to If source and/or target are not given, default values are used. method - a Connector object, encapsulating the algorithm to use for connecting the neurons. synapse_dynamics - a `SynapseDynamics` object specifying which synaptic plasticity mechanisms to use. rng - specify an RNG object to be used by the Connector. """ common.Projection.__init__(self, presynaptic_population, postsynaptic_population, method, source, target, synapse_dynamics, label, rng) self._method = method self._connections = None self.synapse_type = target or 'excitatory' #if isinstance(presynaptic_population, common.Assembly) or isinstance(postsynaptic_population, common.Assembly): #raise Exception("Projections with Assembly objects are not working yet in Brian") if self.synapse_dynamics: if self.synapse_dynamics.fast: if self.synapse_dynamics.slow: raise Exception("It is not currently possible to have both short-term and long-term plasticity at the same time with this simulator.") else: self._plasticity_model = "tsodyks_markram_synapse" elif synapse_dynamics.slow: self._plasticity_model = "stdp_synapse" else: self._plasticity_model = "static_synapse" self.connection_manager = simulator.ConnectionManager(self.synapse_type, self._plasticity_model, parent=self) self.connections = self.connection_manager method.connect(self) self.connection_manager._finalize() if self._plasticity_model != "static_synapse": for key in self.connections.key(): synapses = self.connections.brian_connections[key] if self._plasticity_model is "stdp_synapse": parameters = self.synapse_dynamics.slow.all_parameters if common.is_conductance(self.post[0]): units = uS else: units = nA stdp = simulator.STDP(synapses, parameters['tau_plus'] * ms, parameters['tau_minus'] * ms, parameters['A_plus'], -parameters['A_minus'], parameters['mu_plus'], parameters['mu_minus'], wmin = parameters['w_min'] * units, wmax = parameters['w_max'] * units) simulator.state.add(stdp) elif self._plasticity_model is "tsodyks_markram_synapse": parameters = self.synapse_dynamics.fast.parameters stp = brian.STP(synapses, parameters['tau_rec'] * ms, parameters['tau_facil'] * ms, parameters['U']) simulator.state.add(stp) def saveConnections(self, file, gather=True, compatible_output=True): """ Save connections to file in a format suitable for reading in with a FromFileConnector. 
""" import operator lines = numpy.empty((len(self.connection_manager), 4)) padding = 0 for key in self.connection_manager.keys: bc = self.connection_manager.brian_connections[key] size = bc.W.getnnz() lines[padding:padding+size,0], lines[padding:padding+size,1] = self.connection_manager.indices[key] lines[padding:padding+size,2] = bc.W.alldata / bc.weight_units if isinstance(bc, brian.DelayConnection): lines[padding:padding+size,3] = bc.delay.alldata / ms else: lines[padding:padding+size,3] = bc.delay * bc.source.clock.dt / ms padding += size logger.debug("--- Projection[%s].__saveConnections__() ---" % self.label) if isinstance(file, basestring): file = files.StandardTextFile(file, mode='w') file.write(lines, {'pre' : self.pre.label, 'post' : self.post.label}) file.close() Space = space.Space # ============================================================================== # Low-level API for creating, connecting and recording from individual neurons # ============================================================================== create = common.build_create(Population) connect = common.build_connect(Projection, FixedProbabilityConnector) set = common.set record = common.build_record('spikes', simulator) record_v = common.build_record('v', simulator) record_gsyn = common.build_record('gsyn', simulator) # ============================================================================== PyNN-0.7.4/src/brian/simulator.py0000644000175000017500000007206111737573151017423 0ustar andrewandrew00000000000000# encoding: utf-8 """ Implementation of the "low-level" functionality used by the common implementation of the API. Functions and classes useable by the common implementation: Functions: create_cells() reset() run() Classes: ID Recorder ConnectionManager Connection Attributes: state -- a singleton instance of the _State class. recorder_list All other functions and classes are private, and should not be used by other modules. :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. $Id: simulator.py 1121 2012-04-06 13:58:02Z apdavison $ """ import logging import brian import numpy from itertools import izip import scipy.sparse from pyNN import common, errors, core mV = brian.mV ms = brian.ms nA = brian.nA uS = brian.uS Hz = brian.Hz # Global variables recorder_list = [] ZERO_WEIGHT = 1e-99 logger = logging.getLogger("PyNN") # --- Internal Brian functionality -------------------------------------------- def _new_property(obj_hierarchy, attr_name, units): """ Return a new property, mapping attr_name to obj_hierarchy.attr_name. For example, suppose that an object of class A has an attribute b which itself has an attribute c which itself has an attribute d. Then placing e = _new_property('b.c', 'd') in the class definition of A makes A.e an alias for A.b.c.d """ def set(self, value): obj = reduce(getattr, [self] + obj_hierarchy.split('.')) setattr(obj, attr_name, value*units) def get(self): obj = reduce(getattr, [self] + obj_hierarchy.split('.')) return getattr(obj, attr_name)/units return property(fset=set, fget=get) def nesteddictwalk(d): """ Walk a nested dict structure, returning all values in all dicts. 
""" for value1 in d.values(): if isinstance(value1, dict): for value2 in nesteddictwalk(value1): # recurse into subdict yield value2 else: yield value1 class PlainNeuronGroup(brian.NeuronGroup): def __init__(self, n, equations, **kwargs): try: clock = state.simclock max_delay = state.max_delay*ms except Exception: raise Exception("Simulation timestep not yet set. Need to call setup()") brian.NeuronGroup.__init__(self, n, model=equations, compile=True, clock=clock, max_delay=max_delay, freeze=True, **kwargs) self.parameter_names = equations._namespace.keys() for var in ('v', 'ge', 'gi', 'ie', 'ii'): # can probably get this list from brian if var in self.parameter_names: self.parameter_names.remove(var) self.initial_values = {} self._S0 = self._S[:,0] def initialize(self): for variable, values in self.initial_values.items(): setattr(self, variable, values) class ThresholdNeuronGroup(brian.NeuronGroup): def __init__(self, n, equations, **kwargs): try: clock = state.simclock max_delay = state.max_delay*ms except Exception: raise Exception("Simulation timestep not yet set. Need to call setup()") brian.NeuronGroup.__init__(self, n, model=equations, compile=True, clock=clock, max_delay=max_delay, freeze=True, **kwargs) self.parameter_names = equations._namespace.keys() + ['v_thresh', 'v_reset', 'tau_refrac'] for var in ('v', 'ge', 'gi', 'ie', 'ii'): # can probably get this list from brian if var in self.parameter_names: self.parameter_names.remove(var) self.initial_values = {} self._S0 = self._S[:,0] tau_refrac = _new_property('_resetfun', 'period', ms) v_reset = _new_property('_resetfun', 'resetvalue', mV) v_thresh = _new_property('_threshold', 'threshold', mV) def initialize(self): for variable, values in self.initial_values.items(): setattr(self, variable, values) class PoissonGroupWithDelays(brian.PoissonGroup): def __init__(self, N, rates=0): try: clock = state.simclock max_delay = state.max_delay*ms except Exception: raise Exception("Simulation timestep not yet set. Need to call setup()") brian.NeuronGroup.__init__(self, N, model=brian.LazyStateUpdater(), threshold=brian.PoissonThreshold(), clock=clock, max_delay=max_delay) self._variable_rate = True self.rates = rates self._S0[0] = self.rates(self.clock.t) self.parameter_names = ['rate', 'start', 'duration'] def initialize(self): pass class MultipleSpikeGeneratorGroupWithDelays(brian.MultipleSpikeGeneratorGroup): def __init__(self, spiketimes): try: clock = state.simclock max_delay = state.max_delay*ms except Exception: raise Exception("Simulation timestep not yet set. Need to call setup()") thresh = brian.directcontrol.MultipleSpikeGeneratorThreshold(spiketimes) brian.NeuronGroup.__init__(self, len(spiketimes), model=brian.LazyStateUpdater(), threshold=thresh, clock=clock, max_delay=max_delay) self.parameter_names = ['spiketimes'] def _get_spiketimes(self): return self._threshold.spiketimes def _set_spiketimes(self, spiketimes): assert core.is_listlike(spiketimes) if len(spiketimes) == 0 or numpy.isscalar(spiketimes[0]): spiketimes = [spiketimes for i in xrange(len(self))] assert len(spiketimes) == len(self), "spiketimes (length %d) must contain as many iterables as there are cells in the group (%d)." 
% (len(spiketimes), len(self)) self._threshold.set_spike_times(spiketimes) spiketimes = property(fget=_get_spiketimes, fset=_set_spiketimes) def initialize(self): pass # --- For implementation of get_time_step() and similar functions -------------- class _State(object): """Represent the simulator state.""" def __init__(self, timestep, min_delay, max_delay): """Initialize the simulator.""" self.network = brian.Network() self.network.clock = brian.Clock(t=0*ms, dt=timestep*ms) self.initialized = True self.num_processes = 1 self.mpi_rank = 0 self.min_delay = min_delay self.max_delay = max_delay self.gid = 0 def _get_dt(self): return self.simclock.dt/ms def _set_dt(self, timestep): self.simclock.set_dt(timestep*ms) dt = property(fget=_get_dt, fset=_set_dt) @property def simclock(self): if self.network.clock is None: raise Exception("Simulation timestep not yet set. Need to call setup()") return self.network.clock @property def t(self): return self.simclock.t/ms def run(self, simtime): self.network.run(simtime * ms) def add(self, obj): self.network.add(obj) @property def next_id(self): res = self.gid self.gid += 1 return res def reset(): """Reset the state of the current network to time t = 0.""" state.network.reinit() for group in state.network.groups: group.initialize() # --- For implementation of access to individual neurons' parameters ----------- class ID(int, common.IDMixin): __doc__ = common.IDMixin.__doc__ gid = 0 def __init__(self, n): """Create an ID object with numerical value `n`.""" if n is None: n = gid int.__init__(n) common.IDMixin.__init__(self) def get_native_parameters(self): """Return a dictionary of parameters for the Brian cell model.""" params = {} assert hasattr(self.parent_group, "parameter_names"), str(self.celltype) index = self.parent.id_to_index(self) for name in self.parent_group.parameter_names: if name in ['v_thresh', 'v_reset', 'tau_refrac']: # parameter shared among all cells params[name] = float(getattr(self.parent_group, name)) elif name in ['rate', 'duration', 'start']: params[name] = getattr(self.parent_group.rates, name)[index] elif name == 'spiketimes': params[name] = getattr(self.parent_group,name)[index] else: # parameter may vary from cell to cell try: params[name] = float(getattr(self.parent_group, name)[index]) except TypeError, errmsg: raise TypeError("%s. celltype=%s, parameter name=%s" % (errmsg, self.celltype, name)) return params def set_native_parameters(self, parameters): """Set parameters of the Brian cell model from a dictionary.""" index = self.parent.id_to_index(self) for name, value in parameters.items(): if name in ['v_thresh', 'v_reset', 'tau_refrac']: setattr(self.parent_group, name, value) #logger.warning("This parameter cannot be set for individual cells within a Population. 
Changing the value for all cells in the Population.") elif name in ['rate', 'duration', 'start']: getattr(self.parent_group.rates, name)[index] = value elif name == 'spiketimes': all_spiketimes = [st[st>state.t] for st in self.parent_group.spiketimes] all_spiketimes[index] = value self.parent_group.spiketimes = all_spiketimes else: setattr(self.parent_group[index], name, value) def set_initial_value(self, variable, value): if variable is 'v': value *= mV self.parent_group.initial_values[variable][self.parent.id_to_local_index(self)] = value def get_initial_value(self, variable): return self.parent_group.initial_values[variable][self.parent.id_to_local_index(self)] # --- For implementation of create() and Population.__init__() ----------------- class STDP(brian.STDP): ''' See documentation for brian:class:`STDP` for more details. Options hidden here could be used in a more general manner. For example, spike interactions (all-to-all, nearest), or the axonal vs. dendritic delays because Brian do support those features.... ''' def __init__(self, C, taup, taum, Ap, Am, mu_p, mu_m, wmin=0, wmax=None, delay_pre=None, delay_post=None): if wmax is None: raise AttributeError, "You must specify the maximum synaptic weight" wmax = float(wmax) # removes units wmin = float(wmin) Ap *= wmax # removes units Am *= wmax # removes units eqs = brian.Equations(''' dA_pre/dt = -A_pre/taup : 1 dA_post/dt = -A_post/taum : 1''', taup=taup, taum=taum, wmax=wmax, mu_m=mu_m, mu_p=mu_p) pre = 'A_pre += Ap' pre += '\nw += A_post*pow(w/wmax, mu_m)' post = 'A_post += Am' post += '\nw += A_pre*pow(1-w/wmax, mu_p)' brian.STDP.__init__(self, C, eqs=eqs, pre=pre, post=post, wmin=wmin, wmax=wmax, delay_pre=None, delay_post=None, clock=None) class SimpleCustomRefractoriness(brian.Refractoriness): @brian.check_units(period=brian.second) def __init__(self, resetfun, period=5*brian.msecond, state=0): self.period = period self.resetfun = resetfun self.state = state self._periods = {} # a dictionary mapping group IDs to periods self.statevectors = {} self.lastresetvalues = {} def __call__(self,P): ''' Clamps state variable at reset value. ''' # if we haven't computed the integer period for this group yet. # do so now if id(P) in self._periods: period = self._periods[id(P)] else: period = int(self.period/P.clock.dt)+1 self._periods[id(P)] = period V = self.statevectors.get(id(P),None) if V is None: V = P.state_(self.state) self.statevectors[id(P)] = V LRV = self.lastresetvalues.get(id(P),None) if LRV is None: LRV = numpy.zeros(len(V)) self.lastresetvalues[id(P)] = LRV lastspikes = P.LS.lastspikes() self.resetfun(P,lastspikes) # call custom reset function LRV[lastspikes] = V[lastspikes] # store a copy of the custom resetted values clampedindices = P.LS[0:period] V[clampedindices] = LRV[clampedindices] # clamp at custom resetted values def __repr__(self): return 'Custom refractory period, '+str(self.period) class AdaptiveReset(object): def __init__(self, Vr= -70.6 * mV, b=0.0805 * nA): self.Vr = Vr self.b = b def __call__(self, P, spikes): P.v[spikes] = self.Vr P.w[spikes] += self.b # --- For implementation of connect() and Connector classes -------------------- class Connection(object): """ Provide an interface that allows access to the connection's weight, delay and other attributes. """ def __init__(self, brian_connection, indices, addr): """ Create a new connection. `brian_connection` -- a Brian Connection object (may contain many connections). `index` -- the index of the current connection within `brian_connection`, i.e. 
the nth non-zero element in the weight array. `indices` -- the mapping of the x, y coordinates of the established connections, stored by the connection_manager handling those connections. """ # the index is the nth non-zero element self.bc = brian_connection self.addr = int(indices[0]), int(indices[1]) self.source, self.target = addr def _set_weight(self, w): w = w or ZERO_WEIGHT self.bc[self.addr] = w*self.bc.weight_units def _get_weight(self): """Synaptic weight in nA or µS.""" return float(self.bc[self.addr]/self.bc.weight_units) def _set_delay(self, d): self.bc.delay[self.addr] = d*ms def _get_delay(self): """Synaptic delay in ms.""" if isinstance(self.bc, brian.DelayConnection): return float(self.bc.delay[self.addr]/ms) if isinstance(self.bc, brian.Connection): return float(self.bc.delay * self.bc.source.clock.dt/ms) weight = property(_get_weight, _set_weight) delay = property(_get_delay, _set_delay) class ConnectionManager(object): """ Manage synaptic connections, providing methods for creating, listing, accessing individual connections. """ def __init__(self, synapse_type, synapse_model=None, parent=None): """ Create a new ConnectionManager. `synapse_type` -- the 'physiological type' of the synapse, e.g. 'excitatory' or 'inhibitory',or any other key in the `synapses` attibute of the celltype class. `synapse_model` -- not used. Present for consistency with other simulators. `parent` -- the parent `Projection`, if any. """ self.synapse_type = synapse_type self.synapse_model = synapse_model self.parent = parent self.n = {} self.brian_connections = {} self.indices = {} self._populations = [{}, {}] def __getitem__(self, i): """Return the `i`th connection as a Connection object.""" cumsum_idx = numpy.cumsum(self.n.values()) if isinstance(i, slice): idx = numpy.searchsorted(cumsum_idx, numpy.arange(*i.indices(i.stop)), 'left') keys = [self.keys[j] for j in idx] else: idx = numpy.searchsorted(cumsum_idx, i, 'left') keys = self.keys[idx] global_indices = self._indices if isinstance(i, int): if i < len(self): pad = i - cumsum_idx[idx] local_idx = self.indices[keys][0][pad], self.indices[keys][1][pad] local_addr = global_indices[0][i], global_indices[1][i] return Connection(self.brian_connections[keys], local_idx, local_addr) else: raise IndexError("%d > %d" % (i, len(self)-1)) elif isinstance(i, slice): if i.stop < len(self): res = [] for count, j in enumerate(xrange(*i.indices(i.stop))): key = keys[count] pad = j - cumsum_idx[idx[count]] local_idx = self.indices[key][0][pad], self.indices[key][1][pad] local_addr = global_indices[0][j], global_indices[1][j] res.append(Connection(self.brian_connections[key], local_idx, local_addr)) return res else: raise IndexError("%d > %d" % (i.stop, len(self)-1)) def __len__(self): """Return the total number of connections in this manager.""" result = 0 for key in self.keys: result += self.n[key] return result def __connection_generator(self): """Yield each connection in turn.""" global_indices = self._indices count = 0 for key in self.keys: bc = self.brian_connections[key] for i in xrange(bc.W.getnnz()): local_idx = self.indices[key][0][i], self.indices[key][0][i] local_addr = global_indices[0][count], global_indices[1][count] yield Connection(bc, self.indices[key]) count += 1 @property def keys(self): return self.brian_connections.keys() def __iter__(self): """Return an iterator over all connections in this manager.""" return self.__connection_generator() def _finalize(self): for key in self.keys: self.indices[key] = 
self.brian_connections[key].W.nonzero() self.brian_connections[key].compress() @property def _indices(self): sources = numpy.array([], int) targets = numpy.array([], int) for key in self.keys: paddings = self._populations[0][key[0]], self._populations[1][key[1]] sources = numpy.concatenate((sources, self.indices[key][0] + paddings[0])) targets = numpy.concatenate((targets, self.indices[key][1] + paddings[1])) return sources.astype(int), targets.astype(int) def _get_brian_connection(self, source_group, target_group, synapse_obj, weight_units, homogeneous=False): """ Return the Brian Connection object that connects two NeuronGroups with a given synapse model. source_group -- presynaptic Brian NeuronGroup. target_group -- postsynaptic Brian NeuronGroup synapse_obj -- name of the variable that will be modified by synaptic input. weight_units -- Brian Units object: nA for current-based synapses, uS for conductance-based synapses. """ key = (source_group, target_group, synapse_obj) if not self.brian_connections.has_key(key): assert isinstance(source_group, brian.NeuronGroup) assert isinstance(target_group, brian.NeuronGroup), type(target_group) assert isinstance(synapse_obj, basestring), "%s (%s)" % (synapse_obj, type(synapse_obj)) try: max_delay = state.max_delay*ms except Exception: raise Exception("Simulation timestep not yet set. Need to call setup()") if not homogeneous: self.brian_connections[key] = brian.DelayConnection(source_group, target_group, synapse_obj, max_delay=max_delay) else: self.brian_connections[key] = brian.Connection(source_group, target_group, synapse_obj, max_delay=state.max_delay*ms) self.brian_connections[key].weight_units = weight_units state.add(self.brian_connections[key]) self.n[key] = 0 return self.brian_connections[key] def _detect_parent_groups(self, cells): groups = {} for index, cell in enumerate(cells): group = cell.parent_group if not groups.has_key(group): groups[group] = [index] else: groups[group] += [index] return groups def connect(self, source, targets, weights, delays, homogeneous=False): """ Connect a neuron to one or more other neurons with a static connection. `source` -- the ID of the pre-synaptic cell. `targets` -- a list/1D array of post-synaptic cell IDs, or a single ID. `weight` -- a list/1D array of connection weights, or a single weight. Must have the same length as `targets`. `delays` -- a list/1D array of connection delays, or a single delay. Must have the same length as `targets`. """ #print "connecting", source, "to", targets, "with weights", weights, "and delays", delays if not core.is_listlike(targets): targets = [targets] if isinstance(weights, float): weights = [weights] if isinstance(delays, float): delays = [delays] assert len(targets) > 0 if not isinstance(source, common.IDMixin): raise errors.ConnectionError("source should be an ID object, actually %s" % type(source)) for target in targets: if not isinstance(target, common.IDMixin): raise errors.ConnectionError("Invalid target ID: %s" % target) assert len(targets) == len(weights) == len(delays), "%s %s %s" % (len(targets),len(weights),len(delays)) if common.is_conductance(targets[0]): units = uS else: units = nA synapse_type = self.synapse_type or "excitatory" try: source_group = source.parent_group except AttributeError, errmsg: raise errors.ConnectionError("%s. Maybe trying to connect from non-existing cell (ID=%s)." 
% (errmsg, source)) groups = self._detect_parent_groups(targets) # we assume here all the targets belong to the same NeuronGroup weights = numpy.array(weights) * units delays = numpy.array(delays) * ms weights[weights == 0] = ZERO_WEIGHT for target_group, indices in groups.items(): synapse_obj = targets[indices[0]].parent.celltype.synapses[synapse_type] bc = self._get_brian_connection(source_group, target_group, synapse_obj, units, homogeneous) padding = (int(source.parent.first_id), int(targets[indices[0]].parent.first_id)) src = int(source) - padding[0] mytargets = numpy.array(targets, int)[indices] - padding[1] bc.W.rows[src] = mytargets bc.W.data[src] = weights[indices] if not homogeneous: bc.delayvec.rows[src] = mytargets bc.delayvec.data[src] = delays[indices] else: bc.delay = int(delays[0] / bc.source.clock.dt) key = (source_group, target_group, synapse_obj) self.n[key] += len(mytargets) pop_sources = self._populations[0] if len(pop_sources) is 0: pop_sources[source_group] = 0 elif not pop_sources.has_key(source_group): pop_sources[source_group] = numpy.sum([len(item) for item in pop_sources.keys()]) pop_targets = self._populations[1] if len(pop_targets) is 0: pop_targets[target_group] = 0 elif not pop_targets.has_key(target_group): pop_targets[target_group] = numpy.sum([len(item) for item in pop_targets.keys()]) def get(self, parameter_name, format): """ Get the values of a given attribute (weight or delay) for all connections in this manager. `parameter_name` -- name of the attribute whose values are wanted. `format` -- "list" or "array". Array format implicitly assumes that all connections belong to a single Projection. Return a list or a 2D Numpy array. The array element X_ij contains the attribute value for the connection from the ith neuron in the pre- synaptic Population to the jth neuron in the post-synaptic Population, if such a connection exists. If there are no such connections, X_ij will be NaN. """ if self.parent is None: raise Exception("Only implemented for connections created via a Projection object, not using connect()") values = numpy.array([]) for key in self.keys: bc = self.brian_connections[key] if parameter_name == "weight": values = numpy.concatenate((values, bc.W.alldata / bc.weight_units)) elif parameter_name == 'delay': if isinstance(bc, brian.DelayConnection): values = numpy.concatenate((values, bc.delay.alldata / ms)) else: data = bc.delay * bc.source.clock.dt * numpy.ones(bc.W.getnnz()) /ms values = numpy.concatenate((values, data)) else: raise Exception("Getting parameters other than weight and delay not yet supported.") if format == 'list': values = values.tolist() elif format == 'array': values_arr = numpy.nan * numpy.ones((self.parent.pre.size, self.parent.post.size)) sources, targets = self._indices values_arr[sources, targets] = values values = values_arr else: raise Exception("format must be 'list' or 'array', actually '%s'" % format) return values def set(self, name, value): """ Set connection attributes for all connections in this manager. `name` -- attribute name `value` -- the attribute numeric value, or a list/1D array of such values of the same length as the number of local connections, or a 2D array with the same dimensions as the connectivity matrix (as returned by `get(format='array')`). 
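
        Example (a sketch; `manager` stands for the `connection_manager`
        attribute of an existing Projection):

            >>> manager.set('weight', 0.005)   # one value for every connection
            >>> manager.set('delay', 1.0)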
""" if self.parent is None: raise Exception("Only implemented for connections created via a Projection object, not using connect()") for key in self.keys: bc = self.brian_connections[key] padding = 0 if name == 'weight': M = bc.W units = bc.weight_units elif name == 'delay': M = bc.delay units = ms else: raise Exception("Setting parameters other than weight and delay not yet supported.") value = value*units if numpy.isscalar(value): if (name == 'weight') or (name == 'delay' and isinstance(bc, brian.DelayConnection)): for row in xrange(M.shape[0]): M.set_row(row, value) elif (name == 'delay' and isinstance(bc, brian.Connection)): bc.delay = int(value / bc.source.clock.dt) else: raise Exception("Setting a non appropriate parameter") elif isinstance(value, numpy.ndarray) and len(value.shape) == 2: if (name == 'delay') and not isinstance(bc, brian.DelayConnection): raise Exception("FastConnector have been used, and only fixed homogeneous delays are allowed") address_gen = ((i, j) for i,row in enumerate(bc.W.rows) for j in row) for (i,j) in address_gen: M[i,j] = value[i,j] elif core.is_listlike(value): N = M.getnnz() assert len(value[padding:padding+N]) == N if (name == 'delay') and not isinstance(bc, brian.DelayConnection): raise Exception("FastConnector have been used: only fixed homogeneous delays are allowed") M.alldata = value else: raise Exception("Values must be scalars or lists/arrays") padding += M.getnnz() # --- Initialization, and module attributes ------------------------------------ state = None # a Singleton, so only a single instance ever exists #del _StatePyNN-0.7.4/src/brian/connectors.py0000644000175000017500000001565111736323051017552 0ustar andrewandrew00000000000000""" Connection method classes for the brian module :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. $Id: connectors.py 957 2011-05-03 13:44:15Z apdavison $ """ from pyNN.space import Space from pyNN.random import RandomDistribution import numpy from pyNN import random, common, core from pyNN.connectors import AllToAllConnector, \ ProbabilisticConnector, \ OneToOneConnector, \ FixedProbabilityConnector, \ DistanceDependentProbabilityConnector, \ FromListConnector, \ FromFileConnector, \ FixedNumberPreConnector, \ FixedNumberPostConnector, \ SmallWorldConnector, \ CSAConnector, \ WeightGenerator, \ DelayGenerator, \ ProbaGenerator class FastProbabilisticConnector(ProbabilisticConnector): def __init__(self, projection, weights=0.0, delays=None, allow_self_connections=True, space=Space(), safe=True): ProbabilisticConnector.__init__(self, projection, weights, delays, allow_self_connections, space, safe) def _probabilistic_connect(self, src, p, n_connections=None): """ Connect-up a Projection with connection probability p, where p may be either a float 0<=p<=1, or a dict containing a float array for each pre-synaptic cell, the array containing the connection probabilities for all the local targets of that pre-synaptic cell. 
""" if numpy.isscalar(p) and p == 1: precreate = numpy.arange(self.size) else: rarr = self.probas_generator.get(self.N) if not core.is_listlike(rarr) and numpy.isscalar(rarr): # if N=1, rarr will be a single number rarr = numpy.array([rarr]) precreate = numpy.where(rarr < p)[0] self.distance_matrix.set_source(src.position) if not self.allow_self_connections and self.projection.pre == self.projection.post: i = numpy.where(self.candidates == src)[0] precreate = numpy.delete(precreate, i) if (n_connections is not None) and (len(precreate) > 0): create = numpy.array([], int) while len(create) < n_connections: # if the number of requested cells is larger than the size of the ## presynaptic population, we allow multiple connections for a given cell create = numpy.concatenate((create, self.projection.rng.permutation(precreate))) create = create[:n_connections] else: create = precreate targets = self.candidates[create] weights = self.weights_generator.get(self.N, self.distance_matrix, create) delays = self.delays_generator.get(self.N, self.distance_matrix, create) homogeneous = numpy.isscalar(self.delays_generator.source) if len(targets) > 0: self.projection.connection_manager.connect(src, targets.tolist(), weights, delays, homogeneous) class FastAllToAllConnector(AllToAllConnector): __doc__ = AllToAllConnector.__doc__ def connect(self, projection): connector = FastProbabilisticConnector(projection, self.weights, self.delays, self.allow_self_connections, self.space, safe=self.safe) self.progressbar(len(projection.pre.local_cells)) for count, tgt in enumerate(projection.pre.local_cells): connector._probabilistic_connect(tgt, 1) self.progression(count) class FastFixedProbabilityConnector(FixedProbabilityConnector): __doc__ = FixedProbabilityConnector.__doc__ def connect(self, projection): connector = FastProbabilisticConnector(projection, self.weights, self.delays, self.allow_self_connections, self.space, safe=self.safe) self.progressbar(len(projection.pre.local_cells)) for count, tgt in enumerate(projection.pre.local_cells): connector._probabilistic_connect(tgt, self.p_connect) self.progression(count) class FastDistanceDependentProbabilityConnector(DistanceDependentProbabilityConnector): __doc__ = DistanceDependentProbabilityConnector.__doc__ def connect(self, projection): """Connect-up a Projection.""" connector = FastProbabilisticConnector(projection, self.weights, self.delays, self.allow_self_connections, self.space, safe=self.safe) proba_generator = ProbaGenerator(self.d_expression, connector.local) self.progressbar(len(projection.pre.local_cells)) for count, tgt in enumerate(projection.pre.local_cells): connector.distance_matrix.set_source(tgt.position) proba = proba_generator.get(connector.N, connector.distance_matrix) if proba.dtype == 'bool': proba = proba.astype(float) connector._probabilistic_connect(tgt, proba, self.n_connections) self.progression(count) class FastOneToOneConnector(OneToOneConnector): __doc__ = OneToOneConnector.__doc__ def connect(self, projection): """Connect-up a Projection.""" if projection.pre.size == projection.post.size: N = projection.post.size local = projection.post._mask_local if isinstance(self.weights, basestring) or isinstance(self.delays, basestring): raise Exception('Expression for weights or delays is not supported for OneToOneConnector !') weights_generator = WeightGenerator(self.weights, local, projection, self.safe) delays_generator = DelayGenerator(self.delays, local, self.safe) weights = weights_generator.get(N) delays = delays_generator.get(N) 
self.progressbar(len(projection.post.local_cells)) count = 0 create = numpy.arange(0, N)[local] sources = projection.pre.all_cells[create] homogeneous = numpy.isscalar(delays_generator.source) for tgt, src, w, d in zip(projection.post.local_cells, sources, weights, delays): # the float is in case the values are of type numpy.float64, which NEST chokes on projection.connection_manager.connect(src, [tgt], float(w), float(d), homogeneous) self.progression(count) count += 1 else: raise errors.InvalidDimensionsError("OneToOneConnector does not support presynaptic and postsynaptic Populations of different sizes.")PyNN-0.7.4/src/brian/recording.py0000644000175000017500000001433111737532600017346 0ustar andrewandrew00000000000000""" :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. """ import numpy import brian from pyNN import recording from pyNN.brian import simulator import logging mV = brian.mV ms = brian.ms uS = brian.uS logger = logging.getLogger("PyNN") # --- For implementation of record_X()/get_X()/print_X() ----------------------- class Recorder(recording.Recorder): """Encapsulates data and functions related to recording model variables.""" def __init__(self, variable, population=None, file=None): __doc__ = recording.Recorder.__doc__ recording.Recorder.__init__(self, variable, population, file) self._devices = [] # defer creation until first call of record() def _create_devices(self, group): """Create a Brian recording device.""" # By default, StateMonitor has when='end', i.e. the value recorded at # the end of the timestep is associated with the time at the start of the step, # This is different to the PyNN semantics (i.e. the value at the end of # the step is associated with the time at the end of the step.) 
clock = simulator.state.simclock if self.variable == 'spikes': devices = [brian.SpikeMonitor(group, record=True)] elif self.variable == 'v': devices = [brian.StateMonitor(group, 'v', record=True, clock=clock, when='start')] elif self.variable == 'gsyn': example_cell = list(self.recorded)[0] varname = example_cell.celltype.synapses['excitatory'] device1 = brian.StateMonitor(group, varname, record=True, clock=clock, when='start') varname = example_cell.celltype.synapses['inhibitory'] device2 = brian.StateMonitor(group, varname, record=True, clock=clock, when='start') devices = [device1, device2] else: devices = [brian.StateMonitor(group, self.variable, record=True, clock=clock, when='start')] for device in devices: simulator.state.add(device) return devices def record(self, ids): """Add the cells in `ids` to the set of recorded cells.""" #update StateMonitor.record and StateMonitor.recordindex self.recorded = self.recorded.union(ids) if len(self._devices) == 0: self._devices = self._create_devices(ids[0].parent_group) if not self.variable is 'spikes': cells = list(self.recorded) for device in self._devices: device.record = numpy.array(cells) - cells[0].parent.first_id device.recordindex = dict((i,j) for i,j in zip(device.record, range(len(device.record)))) logger.debug("recording %s from %s" % (self.variable, cells)) def _reset(self): raise NotImplementedError("Recording reset is not currently supported for pyNN.brian") def _get(self, gather=False, compatible_output=True, filter=None): """Return the recorded data as a Numpy array.""" filtered_ids = self.filter_recorded(filter) cells = list(filtered_ids) padding = cells[0].parent.first_id filtered_ids = numpy.array(cells) - padding def get_all_values(device, units): # because we use `when='start'`, need to add the value at the end of the final time step. 
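            # device._values has one row per recorded time step; stacking the
            # monitor's current state on as an extra final row keeps it in
            # line with the extra final entry added by get_times().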
values = numpy.array(device._values)/units current_values = device.P.state_(device.varname)[device.record]/units return numpy.vstack((values, current_values[numpy.newaxis, :])) def get_times(): n = self._devices[0].times.size + 1 times = numpy.empty((n,)) times[:n-1] = self._devices[0].times/ms times[-1] = simulator.state.t return times if self.variable == 'spikes': data = numpy.empty((0,2)) for id in filtered_ids: times = self._devices[0].spiketimes[id]/ms new_data = numpy.array([numpy.ones(times.shape)*id + padding, times]).T data = numpy.concatenate((data, new_data)) elif self.variable == 'v': values = get_all_values(self._devices[0], mV) n = values.shape[0] times = get_times() data = numpy.empty((0,3)) for id, row in zip(self.recorded, values.T): new_data = numpy.array([numpy.ones(row.shape)*id, times, row]).T data = numpy.concatenate((data, new_data)) if filter is not None: mask = reduce(numpy.add, (data[:,0]==id for id in filtered_ids + padding)) data = data[mask] elif self.variable == 'gsyn': values1 = get_all_values(self._devices[0], uS) values2 = get_all_values(self._devices[1], uS) times = get_times() data = numpy.empty((0,4)) for id, row1, row2 in zip(self.recorded, values1.T, values2.T): assert row1.shape == row2.shape new_data = numpy.array([numpy.ones(row1.shape)*id, times, row1, row2]).T data = numpy.concatenate((data, new_data)) if filter is not None: mask = reduce(numpy.add, (data[:,0]==id for id in filtered_ids + padding)) data = data[mask] else: values = get_all_values(self._devices[0], mV) times = get_times() data = numpy.empty((0,3)) for id, row in zip(self.recorded, values.T): new_data = numpy.array([numpy.ones(row.shape)*id, times, row]).T data = numpy.concatenate((data, new_data)) if filter is not None: mask = reduce(numpy.add, (data[:,0]==id for id in filtered_ids + padding)) data = data[mask] return data def _local_count(self, filter=None): N = {} filtered_ids = self.filter_recorded(filter) cells = list(filtered_ids) padding = cells[0].parent.first_id filtered_ids = numpy.array(cells) - padding for id in filtered_ids: N[id + padding] = len(self._devices[0].spiketimes[id]) return N simulator.Recorder = Recorder PyNN-0.7.4/src/neuron/0000755000175000017500000000000011774605211015231 5ustar andrewandrew00000000000000PyNN-0.7.4/src/neuron/electrodes.py0000644000175000017500000000532511736323051017736 0ustar andrewandrew00000000000000""" Current source classes for the neuron module. Classes: DCSource -- a single pulse of current of constant amplitude. StepCurrentSource -- a step-wise time-varying current. :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. $Id: electrodes.py 957 2011-05-03 13:44:15Z apdavison $ """ from neuron import h class CurrentSource(object): """Base class for a source of current to be injected into a neuron.""" pass class DCSource(CurrentSource): """Source producing a single pulse of current of constant amplitude.""" def __init__(self, amplitude=1.0, start=0.0, stop=None): """Construct the current source. 
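
        The injected current is zero before `start`, equal to `amplitude`
        between `start` and `stop`, and zero again afterwards; if `stop` is
        None, the pulse lasts until the end of the simulation. A usage
        sketch (assuming `cells` is an existing list of cell IDs):

            >>> pulse = DCSource(amplitude=0.5, start=20.0, stop=80.0)
            >>> pulse.inject_into(cells)
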
Arguments: start -- onset time of pulse in ms stop -- end of pulse in ms amplitude -- pulse amplitude in nA """ self.amplitude = amplitude self.start = start self.stop = stop or 1e12 self._devices = [] def inject_into(self, cell_list): """Inject this current source into some cells.""" for id in cell_list: if id.local: iclamp = h.IClamp(0.5, sec=id._cell.source_section) iclamp.delay = self.start iclamp.dur = self.stop-self.start iclamp.amp = self.amplitude self._devices.append(iclamp) class StepCurrentSource(CurrentSource): """A step-wise time-varying current source.""" def __init__(self, times, amplitudes): """Construct the current source. Arguments: times -- list/array of times at which the injected current changes. amplitudes -- list/array of current amplitudes to be injected at the times specified in `times`. The injected current will be zero up until the first time in `times`. The current will continue at the final value in `amplitudes` until the end of the simulation. """ self.times = h.Vector(times) self.amplitudes = h.Vector(amplitudes) self._devices = [] def inject_into(self, cell_list): """Inject this current source into some cells.""" for id in cell_list: if id.local: if not id.celltype.injectable: raise TypeError("Can't inject current into a spike source.") iclamp = h.IClamp(0.5, sec=id._cell.source_section) iclamp.delay = 0.0 iclamp.dur = 1e12 iclamp.amp = 0.0 self._devices.append(iclamp) self.amplitudes.play(iclamp._ref_amp, self.times) PyNN-0.7.4/src/neuron/standardmodels/0000755000175000017500000000000011774605211020235 5ustar andrewandrew00000000000000PyNN-0.7.4/src/neuron/standardmodels/cells.py0000644000175000017500000002320611736323051021711 0ustar andrewandrew00000000000000# encoding: utf-8 """ Standard cells for the neuron module. :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. $Id: cells.py 873 2010-12-13 22:40:03Z apdavison $ """ from pyNN.standardmodels import cells, build_translations from pyNN import errors from pyNN.neuron.cells import StandardIF, SingleCompartmentTraub, RandomSpikeSource, VectorSpikeSource, BretteGerstnerIF, GsfaGrrIF from math import pi import logging logger = logging.getLogger("PyNN") class IF_curr_alpha(cells.IF_curr_alpha): """Leaky integrate and fire model with fixed threshold and alpha-function- shaped post-synaptic current.""" translations = build_translations( ('tau_m', 'tau_m'), ('cm', 'c_m'), ('v_rest', 'v_rest'), ('v_thresh', 'v_thresh'), ('v_reset', 'v_reset'), ('tau_refrac', 't_refrac'), ('i_offset', 'i_offset'), ('tau_syn_E', 'tau_e'), ('tau_syn_I', 'tau_i'), ) model = StandardIF def __init__(self, parameters): cells.IF_curr_alpha.__init__(self, parameters) # checks supplied parameters and adds default # values for not-specified parameters. self.parameters['syn_type'] = 'current' self.parameters['syn_shape'] = 'alpha' class IF_curr_exp(cells.IF_curr_exp): """Leaky integrate and fire model with fixed threshold and decaying-exponential post-synaptic current. 
    (Separate synaptic currents for excitatory and inhibitory synapses.)"""

    translations = build_translations(
        ('tau_m',      'tau_m'),
        ('cm',         'c_m'),
        ('v_rest',     'v_rest'),
        ('v_thresh',   'v_thresh'),
        ('v_reset',    'v_reset'),
        ('tau_refrac', 't_refrac'),
        ('i_offset',   'i_offset'),
        ('tau_syn_E',  'tau_e'),
        ('tau_syn_I',  'tau_i'),
    )
    model = StandardIF

    def __init__(self, parameters):
        cells.IF_curr_exp.__init__(self, parameters)
        self.parameters['syn_type']  = 'current'
        self.parameters['syn_shape'] = 'exp'


class IF_cond_alpha(cells.IF_cond_alpha):
    """Leaky integrate and fire model with fixed threshold and alpha-function-
    shaped post-synaptic conductance."""

    translations = build_translations(
        ('tau_m',      'tau_m'),
        ('cm',         'c_m'),
        ('v_rest',     'v_rest'),
        ('v_thresh',   'v_thresh'),
        ('v_reset',    'v_reset'),
        ('tau_refrac', 't_refrac'),
        ('i_offset',   'i_offset'),
        ('tau_syn_E',  'tau_e'),
        ('tau_syn_I',  'tau_i'),
        ('e_rev_E',    'e_e'),
        ('e_rev_I',    'e_i')
    )
    model = StandardIF

    def __init__(self, parameters):
        cells.IF_cond_alpha.__init__(self, parameters)  # checks supplied parameters and adds default
                                                        # values for not-specified parameters.
        self.parameters['syn_type']  = 'conductance'
        self.parameters['syn_shape'] = 'alpha'


class IF_cond_exp(cells.IF_cond_exp):
    """Leaky integrate and fire model with fixed threshold and
    exponentially-decaying post-synaptic conductance."""

    translations = build_translations(
        ('tau_m',      'tau_m'),
        ('cm',         'c_m'),
        ('v_rest',     'v_rest'),
        ('v_thresh',   'v_thresh'),
        ('v_reset',    'v_reset'),
        ('tau_refrac', 't_refrac'),
        ('i_offset',   'i_offset'),
        ('tau_syn_E',  'tau_e'),
        ('tau_syn_I',  'tau_i'),
        ('e_rev_E',    'e_e'),
        ('e_rev_I',    'e_i')
    )
    model = StandardIF

    def __init__(self, parameters):
        cells.IF_cond_exp.__init__(self, parameters)  # checks supplied parameters and adds default
                                                      # values for not-specified parameters.
        self.parameters['syn_type']  = 'conductance'
        self.parameters['syn_shape'] = 'exp'


class IF_facets_hardware1(cells.IF_facets_hardware1):
    """Leaky integrate and fire model with conductance-based synapses and fixed
    threshold, as implemented by the FACETS Hardware Stage 1.

    For further details regarding the hardware model see the FACETS-internal Wiki:
    https://facets.kip.uni-heidelberg.de/private/wiki/index.php/WP7_NNM
    """

    translations = build_translations(
        ('v_rest',     'v_rest'),
        ('v_thresh',   'v_thresh'),
        ('v_reset',    'v_reset'),
        ('g_leak',     'tau_m',    "0.2*1000.0/g_leak", "0.2*1000.0/tau_m"),
        ('tau_syn_E',  'tau_e'),
        ('tau_syn_I',  'tau_i'),
        ('e_rev_I',    'e_i')
    )
    model = StandardIF

    def __init__(self, parameters):
        cells.IF_facets_hardware1.__init__(self, parameters)
        self.parameters['syn_type']  = 'conductance'
        self.parameters['syn_shape'] = 'exp'
        self.parameters['i_offset']  = 0.0
        self.parameters['c_m']       = 0.2
        self.parameters['t_refrac']  = 1.0
        self.parameters['e_e']       = 0.0


class HH_cond_exp(cells.HH_cond_exp):

    translations = build_translations(
        ('gbar_Na',    'gbar_Na',   1e-3),   # uS -> mS
        ('gbar_K',     'gbar_K',    1e-3),
        ('g_leak',     'g_leak',    1e-3),
        ('cm',         'c_m'),
        ('v_offset',   'v_offset'),
        ('e_rev_Na',   'ena'),
        ('e_rev_K',    'ek'),
        ('e_rev_leak', 'e_leak'),
        ('e_rev_E',    'e_e'),
        ('e_rev_I',    'e_i'),
        ('tau_syn_E',  'tau_e'),
        ('tau_syn_I',  'tau_i'),
        ('i_offset',   'i_offset'),
    )
    model = SingleCompartmentTraub

    def __init__(self, parameters):
        cells.HH_cond_exp.__init__(self, parameters)  # checks supplied parameters and adds default
                                                      # values for not-specified parameters.
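        # Note on units: the 1e-3 factors in the translations above convert
        # the maximal conductances from uS (PyNN units) to mS; since the model
        # membrane area is 1e-3 cm2 (see SingleCompartmentTraub), the same
        # numbers serve as densities in S/cm2, as expected by the hh_traub
        # mechanism.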
self.parameters['syn_type'] = 'conductance' self.parameters['syn_shape'] = 'exp' class IF_cond_exp_gsfa_grr(cells.IF_cond_exp_gsfa_grr): """ Linear leaky integrate and fire model with fixed threshold, decaying-exponential post-synaptic conductance, conductance based spike-frequency adaptation, and a conductance-based relative refractory mechanism. See: Muller et al (2007) Spike-frequency adapting neural ensembles: Beyond mean-adaptation and renewal theories. Neural Computation 19: 2958-3010. See also: EIF_cond_alpha_isfa_ista """ translations = build_translations( ('v_rest', 'v_reset'), ('v_reset', 'v_reset'), ('cm', 'c_m'), ('tau_m', 'tau_m'), ('tau_refrac', 't_refrac'), ('tau_syn_E', 'tau_e'), ('tau_syn_I', 'tau_i'), ('v_thresh', 'v_thresh'), ('i_offset', 'i_offset'), ('e_rev_E', 'e_e'), ('e_rev_I', 'e_i'), ('tau_sfa', 'tau_sfa'), ('e_rev_sfa', 'e_sfa'), ('q_sfa', 'q_sfa'), ('tau_rr', 'tau_rr'), ('e_rev_rr', 'e_rr'), ('q_rr', 'q_rr') ) model = GsfaGrrIF def __init__(self, parameters): cells.IF_cond_exp_gsfa_grr.__init__(self, parameters) # checks supplied parameters and adds default # values for not-specified parameters. self.parameters['syn_type'] = 'conductance' self.parameters['syn_shape'] = 'exp' class SpikeSourcePoisson(cells.SpikeSourcePoisson): """Spike source, generating spikes according to a Poisson process.""" translations = build_translations( ('start', 'start'), ('rate', '_interval', "1000.0/rate", "1000.0/_interval"), ('duration', 'duration'), ) model = RandomSpikeSource class SpikeSourceArray(cells.SpikeSourceArray): """Spike source generating spikes at the times given in the spike_times array.""" translations = build_translations( ('spike_times', 'spike_times'), ) model = VectorSpikeSource class EIF_cond_alpha_isfa_ista(cells.EIF_cond_alpha_isfa_ista): """ Exponential integrate and fire neuron with spike triggered and sub-threshold adaptation currents (isfa, ista reps.) according to: Brette R and Gerstner W (2005) Adaptive Exponential Integrate-and-Fire Model as an Effective Description of Neuronal Activity. J Neurophysiol 94:3637-3642 See also: IF_cond_exp_gsfa_grr """ translations = build_translations( ('cm', 'c_m'), ('tau_refrac', 't_refrac'), ('v_spike', 'v_spike'), ('v_reset', 'v_reset'), ('v_rest', 'v_rest'), ('tau_m', 'tau_m'), ('i_offset', 'i_offset'), ('a', 'A', 0.001), # nS --> uS ('b', 'B'), ('delta_T', 'delta'), ('tau_w', 'tau_w'), ('v_thresh', 'v_thresh'), ('e_rev_E', 'e_e'), ('tau_syn_E', 'tau_e'), ('e_rev_I', 'e_i'), ('tau_syn_I', 'tau_i'), ) model = BretteGerstnerIF def __init__(self, parameters): cells.EIF_cond_alpha_isfa_ista.__init__(self, parameters) self.parameters['syn_type'] = 'conductance' self.parameters['syn_shape'] = 'alpha' class EIF_cond_exp_isfa_ista(cells.EIF_cond_exp_isfa_ista): """Like EIF_cond_alpha_isfa_ista, but with single-exponential synapses.""" translations = EIF_cond_alpha_isfa_ista.translations model = BretteGerstnerIF def __init__(self, parameters): cells.EIF_cond_exp_isfa_ista.__init__(self, parameters) self.parameters['syn_type'] = 'conductance' self.parameters['syn_shape'] = 'exp' PyNN-0.7.4/src/neuron/standardmodels/__init__.py0000644000175000017500000000000011736323051022331 0ustar andrewandrew00000000000000PyNN-0.7.4/src/neuron/standardmodels/synapses.py0000644000175000017500000001011111736323051022443 0ustar andrewandrew00000000000000""" Synapse Dynamics classes for the neuron module. :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. 
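
Classes:
    TsodyksMarkramMechanism
    AdditiveWeightDependence
    MultiplicativeWeightDependence
    AdditivePotentiationMultiplicativeDepression
    SpikePairRule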
$Id: synapses.py 957 2011-05-03 13:44:15Z apdavison $ """ from pyNN.standardmodels import synapses, build_translations, STDPMechanism, SynapseDynamics class TsodyksMarkramMechanism(synapses.TsodyksMarkramMechanism): translations = build_translations( ('U', 'U'), ('tau_rec', 'tau_rec'), ('tau_facil', 'tau_facil'), ('u0', 'u0'), ('x0', 'x' ), # } note that these two values ('y0', 'y') # } are not used ) native_name = 'tsodkys-markram' def __init__(self, U=0.5, tau_rec=100.0, tau_facil=0.0, u0=0.0, x0=1.0, y0=0.0): assert (x0 == 1 and y0 == 0), "It is not currently possible to set x0 and y0" #synapses.TsodyksMarkramMechanism.__init__(self, U, tau_rec, tau_facil, u0, x0, y0) self.parameters = self.translate({'U': U, 'tau_rec': tau_rec, 'tau_facil': tau_facil, 'u0': u0, 'x0': x0, 'y0': y0}) class AdditiveWeightDependence(synapses.AdditiveWeightDependence): """ The amplitude of the weight change is fixed for depression (`A_minus`) and for potentiation (`A_plus`). If the new weight would be less than `w_min` it is set to `w_min`. If it would be greater than `w_max` it is set to `w_max`. """ translations = build_translations( ('w_max', 'wmax'), ('w_min', 'wmin'), ('A_plus', 'aLTP'), ('A_minus', 'aLTD'), ) possible_models = set(['StdwaSA',]) def __init__(self, w_min=0.0, w_max=1.0, A_plus=0.01, A_minus=0.01): # units? #synapses.AdditiveWeightDependence.__init__(self, w_min, w_max, A_plus, A_minus) self.parameters = self.translate({'w_min': w_min, 'w_max': w_max, 'A_plus': A_plus, 'A_minus': A_minus}) class MultiplicativeWeightDependence(synapses.MultiplicativeWeightDependence): """ The amplitude of the weight change depends on the current weight. For depression, Dw propto w-w_min For potentiation, Dw propto w_max-w """ translations = build_translations( ('w_max', 'wmax'), ('w_min', 'wmin'), ('A_plus', 'aLTP'), ('A_minus', 'aLTD'), ) possible_models = set(['StdwaSoft',]) def __init__(self, w_min=0.0, w_max=1.0, A_plus=0.01, A_minus=0.01): #synapses.MultiplicativeWeightDependence.__init__(self, w_min, w_max, A_plus, A_minus) self.parameters = self.translate({'w_min': w_min, 'w_max': w_max, 'A_plus': A_plus, 'A_minus': A_minus}) class AdditivePotentiationMultiplicativeDepression(synapses.AdditivePotentiationMultiplicativeDepression): """ The amplitude of the weight change depends on the current weight for depression (Dw propto w-w_min) and is fixed for potentiation """ translations = build_translations( ('w_max', 'wmax'), ('w_min', 'wmin'), ('A_plus', 'aLTP'), ('A_minus', 'aLTD'), ) possible_models = set(['StdwaGuetig']) def __init__(self, w_min=0.0, w_max=1.0, A_plus=0.01, A_minus=0.01): #synapses.AdditivePotentiationMultiplicativeDepression.__init__(self, w_min, w_max, A_plus, A_minus) parameters = dict(locals()) parameters.pop('self') self.parameters = self.translate(parameters) self.parameters['muLTP'] = 0.0 self.parameters['muLTD'] = 1.0 class SpikePairRule(synapses.SpikePairRule): translations = build_translations( ('tau_plus', 'tauLTP'), ('tau_minus', 'tauLTD'), ) possible_models = set(['StdwaSA', 'StdwaSoft', 'StdwaGuetig']) def __init__(self, tau_plus=20.0, tau_minus=20.0): #synapses.SpikePairRule.__init__(self, tau_plus, tau_minus) self.parameters = self.translate({'tau_plus': tau_plus, 'tau_minus': tau_minus}) PyNN-0.7.4/src/neuron/cells.py0000644000175000017500000004274711736323051016720 0ustar andrewandrew00000000000000# encoding: utf-8 """ Standard cells for the neuron module. :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. 
$Id: cells.py 957 2011-05-03 13:44:15Z apdavison $ """ from pyNN.standardmodels import cells, build_translations from pyNN.models import BaseCellType from pyNN import errors from neuron import h, nrn, hclass from simulator import state from math import pi import logging logger = logging.getLogger("PyNN") def _new_property(obj_hierarchy, attr_name): """ Returns a new property, mapping attr_name to obj_hierarchy.attr_name. For example, suppose that an object of class A has an attribute b which itself has an attribute c which itself has an attribute d. Then placing e = _new_property('b.c', 'd') in the class definition of A makes A.e an alias for A.b.c.d """ def set(self, value): obj = reduce(getattr, [self] + obj_hierarchy.split('.')) setattr(obj, attr_name, value) def get(self): obj = reduce(getattr, [self] + obj_hierarchy.split('.')) return getattr(obj, attr_name) return property(fset=set, fget=get) class NativeCellType(BaseCellType): pass class SingleCompartmentNeuron(nrn.Section): """docstring""" synapse_models = { 'current': { 'exp': h.ExpISyn, 'alpha': h.AlphaISyn }, 'conductance' : { 'exp': h.ExpSyn, 'alpha': h.AlphaSyn }, } def __init__(self, syn_type, syn_shape, c_m, i_offset, tau_e, tau_i, e_e, e_i): # initialise Section object with 'pas' mechanism nrn.Section.__init__(self) self.seg = self(0.5) self.L = 100 self.seg.diam = 1000/pi # gives area = 1e-3 cm2 self.source_section = self self.syn_type = syn_type self.syn_shape = syn_shape # insert synapses assert syn_type in ('current', 'conductance'), "syn_type must be either 'current' or 'conductance'. Actual value is %s" % syn_type assert syn_shape in ('alpha', 'exp'), "syn_type must be either 'alpha' or 'exp'" synapse_model = StandardIF.synapse_models[syn_type][syn_shape] self.esyn = synapse_model(0.5, sec=self) self.isyn = synapse_model(0.5, sec=self) if self.syn_shape == 'exp': if self.syn_type == 'conductance': self.esyn_TM = h.tmgsyn(0.5, sec=self) self.isyn_TM = h.tmgsyn(0.5, sec=self) else: self.esyn_TM = h.tmisyn(0.5, sec=self) self.isyn_TM = h.tmisyn(0.5, sec=self) # insert current source self.stim = h.IClamp(0.5, sec=self) self.stim.delay = 0 self.stim.dur = 1e12 self.stim.amp = i_offset # for recording self.spike_times = h.Vector(0) self.traces = {} self.gsyn_trace = {} self.recording_time = 0 self.v_init = None @property def excitatory(self): return self.esyn @property def inhibitory(self): return self.isyn @property def excitatory_TM(self): if hasattr(self, 'esyn_TM'): return self.esyn_TM else: return None @property def inhibitory_TM(self): if hasattr(self, 'isyn_TM'): return self.isyn_TM else: return None def area(self): """Membrane area in µm²""" return pi*self.L*self.seg.diam c_m = _new_property('seg', 'cm') i_offset = _new_property('stim', 'amp') def _get_tau_e(self): return self.esyn.tau def _set_tau_e(self, value): self.esyn.tau = value if hasattr(self, 'esyn_TM'): self.esyn_TM.tau = value tau_e = property(fget=_get_tau_e, fset=_set_tau_e) def _get_tau_i(self): return self.isyn.tau def _set_tau_i(self, value): self.isyn.tau = value if hasattr(self, 'isyn_TM'): self.isyn_TM.tau = value tau_i = property(fget=_get_tau_i, fset=_set_tau_i) def _get_e_e(self): return self.esyn.e def _set_e_e(self, value): self.esyn.e = value if hasattr(self, 'esyn_TM'): self.esyn_TM.e = value e_e = property(fget=_get_e_e, fset=_set_e_e) def _get_e_i(self): return self.isyn.e def _set_e_i(self, value): self.isyn.e = value if hasattr(self, 'isyn_TM'): self.isyn_TM.e = value e_i = property(fget=_get_e_i, fset=_set_e_i) def record(self, 
active): if active: rec = h.NetCon(self.source, None) rec.record(self.spike_times) else: self.spike_times = h.Vector(0) def record_v(self, active): if active: self.vtrace = h.Vector() self.vtrace.record(self(0.5)._ref_v) if not self.recording_time: self.record_times = h.Vector() self.record_times.record(h._ref_t) self.recording_time += 1 else: self.vtrace = None self.recording_time -= 1 if self.recording_time == 0: self.record_times = None def record_gsyn(self, syn_name, active): # how to deal with static and T-M synapses? # record both and sum? if active: self.gsyn_trace[syn_name] = h.Vector() self.gsyn_trace[syn_name].record(getattr(self, syn_name)._ref_g) if not self.recording_time: self.record_times = h.Vector() self.record_times.record(h._ref_t) self.recording_time += 1 else: self.gsyn_trace[syn_name] = None self.recording_time -= 1 if self.recording_time == 0: self.record_times = None def memb_init(self): assert self.v_init is not None, "cell is a %s" % self.__class__.__name__ for seg in self: seg.v = self.v_init #self.seg.v = self.v_init def set_Tsodyks_Markram_synapses(self, ei, U, tau_rec, tau_facil, u0): if self.syn_shape == 'alpha': raise Exception("Tsodyks-Markram mechanism not available for alpha-function-shaped synapses.") elif ei == 'excitatory': syn = self.esyn_TM elif ei == 'inhibitory': syn = self.isyn_TM else: raise Exception("Tsodyks-Markram mechanism not yet implemented for user-defined synapse types. ei = %s" % ei) syn.U = U syn.tau_rec = tau_rec syn.tau_facil = tau_facil syn.u0 = u0 def set_parameters(self, param_dict): for name in self.parameter_names: setattr(self, name, param_dict[name]) class LeakySingleCompartmentNeuron(SingleCompartmentNeuron): def __init__(self, syn_type, syn_shape, tau_m, c_m, v_rest, i_offset, tau_e, tau_i, e_e, e_i): SingleCompartmentNeuron.__init__(self, syn_type, syn_shape, c_m, i_offset, tau_e, tau_i, e_e, e_i) self.insert('pas') self.v_init = v_rest # default value def __set_tau_m(self, value): #print "setting tau_m to", value, "cm =", self.seg.cm self.seg.pas.g = 1e-3*self.seg.cm/value # cm(nF)/tau_m(ms) = G(uS) = 1e-6G(S). 
Divide by area (1e-3) to get factor of 1e-3 def __get_tau_m(self): #print "tau_m = ", 1e-3*self.seg.cm/self.seg.pas.g, "cm = ", self.seg.cm return 1e-3*self.seg.cm/self.seg.pas.g def __get_cm(self): #print "cm = ", self.seg.cm return self.seg.cm def __set_cm(self, value): # when we set cm, need to change g to maintain the same value of tau_m #print "setting cm to", value tau_m = self.tau_m self.seg.cm = value self.tau_m = tau_m v_rest = _new_property('seg.pas', 'e') tau_m = property(fget=__get_tau_m, fset=__set_tau_m) c_m = property(fget=__get_cm, fset=__set_cm) # if the property were called 'cm' # it would never get accessed as the # built-in Section.cm would always # be used first class StandardIF(LeakySingleCompartmentNeuron): """docstring""" def __init__(self, syn_type, syn_shape, tau_m=20, c_m=1.0, v_rest=-65, v_thresh=-55, t_refrac=2, i_offset=0, v_reset=None, tau_e=5, tau_i=5, e_e=0, e_i=-70): if v_reset is None: v_reset = v_rest LeakySingleCompartmentNeuron.__init__(self, syn_type, syn_shape, tau_m, c_m, v_rest, i_offset, tau_e, tau_i, e_e, e_i) # insert spike reset mechanism self.spike_reset = h.ResetRefrac(0.5, sec=self) self.spike_reset.vspike = 40 # (mV) spike height self.source = self.spike_reset # process arguments self.parameter_names = ['c_m', 'tau_m', 'v_rest', 'v_thresh', 't_refrac', # 'c_m' must come before 'tau_m' 'i_offset', 'v_reset', 'tau_e', 'tau_i'] if syn_type == 'conductance': self.parameter_names.extend(['e_e', 'e_i']) self.set_parameters(locals()) v_thresh = _new_property('spike_reset', 'vthresh') v_reset = _new_property('spike_reset', 'vreset') t_refrac = _new_property('spike_reset', 'trefrac') class BretteGerstnerIF(LeakySingleCompartmentNeuron): """docstring""" def __init__(self, syn_type, syn_shape, tau_m=20, c_m=1.0, v_rest=-65, v_thresh=-55, t_refrac=2, i_offset=0, tau_e=5, tau_i=5, e_e=0, e_i=-70, v_spike=0.0, v_reset=-70.6, A=4.0, B=0.0805, tau_w=144.0, delta=2.0): LeakySingleCompartmentNeuron.__init__(self, syn_type, syn_shape, tau_m, c_m, v_rest, i_offset, tau_e, tau_i, e_e, e_i) # insert Brette-Gerstner spike mechanism self.adexp = h.AdExpIF(0.5, sec=self) self.source = self.seg._ref_v self.parameter_names = ['c_m', 'tau_m', 'v_rest', 'v_thresh', 't_refrac', 'i_offset', 'v_reset', 'tau_e', 'tau_i', 'A', 'B', 'tau_w', 'delta', 'v_spike'] if syn_type == 'conductance': self.parameter_names.extend(['e_e', 'e_i']) self.set_parameters(locals()) self.w_init = None v_thresh = _new_property('adexp', 'vthresh') v_reset = _new_property('adexp', 'vreset') t_refrac = _new_property('adexp', 'trefrac') B = _new_property('adexp', 'b') A = _new_property('adexp', 'a') ## using 'A' because for some reason, cell.a gives the error "NameError: a, the mechanism does not exist at PySec_170bb70(0.5)" tau_w = _new_property('adexp', 'tauw') delta = _new_property('adexp', 'delta') def __set_v_spike(self, value): self.adexp.vspike = value self.adexp.vpeak = value + 10.0 def __get_v_spike(self): return self.adexp.vspike v_spike = property(fget=__get_v_spike, fset=__set_v_spike) def __set_tau_m(self, value): self.seg.pas.g = 1e-3*self.seg.cm/value # cm(nF)/tau_m(ms) = G(uS) = 1e-6G(S). 
Divide by area (1e-3) to get factor of 1e-3 self.adexp.GL = self.seg.pas.g * self.area() * 1e-2 # S/cm2 to uS def __get_tau_m(self): return 1e-3*self.seg.cm/self.seg.pas.g def __set_v_rest(self, value): self.seg.pas.e = value self.adexp.EL = value def __get_v_rest(self): return self.seg.pas.e tau_m = property(fget=__get_tau_m, fset=__set_tau_m) v_rest = property(fget=__get_v_rest, fset=__set_v_rest) def record(self, active): if active: self.rec = h.NetCon(self.source, None, self.get_threshold(), 0.0, 0.0, sec=self) self.rec.record(self.spike_times) def get_threshold(self): return self.adexp.vspike def memb_init(self): assert self.v_init is not None, "cell is a %s" % self.__class__.__name__ assert self.w_init is not None for seg in self: seg.v = self.v_init seg.w = self.w_init class GsfaGrrIF(StandardIF): """docstring""" def __init__(self, syn_type, syn_shape, tau_m=10.0, c_m=1.0, v_rest=-70.0, v_thresh=-57.0, t_refrac=0.1, i_offset=0.0, tau_e=1.5, tau_i=10.0, e_e=0.0, e_i=-75.0, v_spike=0.0, v_reset=-70.0, q_rr=3214.0, q_sfa=14.48, e_rr=-70.0, e_sfa=-70.0, tau_rr=1.97, tau_sfa=110.0): StandardIF.__init__(self, syn_type, syn_shape, tau_m, c_m, v_rest, v_thresh, t_refrac, i_offset, v_reset, tau_e, tau_i, e_e, e_i) # insert GsfaGrr mechanism self.gsfa_grr = h.GsfaGrr(0.5, sec=self) self.v_thresh = v_thresh self.parameter_names = ['c_m', 'tau_m', 'v_rest', 'v_thresh', 'v_reset', 't_refrac', 'tau_e', 'tau_i', 'i_offset', 'e_rr', 'e_sfa', 'q_rr', 'q_sfa', 'tau_rr', 'tau_sfa'] if syn_type == 'conductance': self.parameter_names.extend(['e_e', 'e_i']) self.set_parameters(locals()) q_sfa = _new_property('gsfa_grr', 'q_s') q_rr = _new_property('gsfa_grr', 'q_r') tau_sfa = _new_property('gsfa_grr', 'tau_s') tau_rr = _new_property('gsfa_grr', 'tau_r') e_sfa = _new_property('gsfa_grr', 'E_s') e_rr = _new_property('gsfa_grr', 'E_r') def __set_v_thresh(self, value): self.spike_reset.vthresh = value # this can fail on constructor try: self.gsfa_grr.vthresh = value except AttributeError: pass def __get_v_thresh(self): return self.spike_reset.vthresh v_thresh = property(fget=__get_v_thresh, fset=__set_v_thresh) class SingleCompartmentTraub(SingleCompartmentNeuron): def __init__(self, syn_type, syn_shape, c_m=1.0, e_leak=-65, i_offset=0, tau_e=5, tau_i=5, e_e=0, e_i=-70, gbar_Na=20e-3, gbar_K=6e-3, g_leak=0.01e-3, ena=50, ek=-90, v_offset=-63): """ Conductances are in millisiemens (S/cm2, since A = 1e-3) """ SingleCompartmentNeuron.__init__(self, syn_type, syn_shape, c_m, i_offset, tau_e, tau_i, e_e, e_i) self.source = self.seg._ref_v self.insert('k_ion') self.insert('na_ion') self.insert('hh_traub') self.parameter_names = ['c_m', 'e_leak', 'i_offset', 'tau_e', 'tau_i', 'gbar_Na', 'gbar_K', 'g_leak', 'ena', 'ek', 'v_offset'] if syn_type == 'conductance': self.parameter_names.extend(['e_e', 'e_i']) self.set_parameters(locals()) self.v_init = e_leak # default value # not sure ena and ek are handled correctly e_leak = _new_property('seg.hh_traub', 'el') v_offset = _new_property('seg.hh_traub', 'vT') gbar_Na = _new_property('seg.hh_traub', 'gnabar') gbar_K = _new_property('seg.hh_traub', 'gkbar') g_leak = _new_property('seg.hh_traub', 'gl') def get_threshold(self): return 10.0 def record(self, active): if active: rec = h.NetCon(self.source, None, sec=self) rec.record(self.spike_times) class RandomSpikeSource(hclass(h.NetStimFD)): parameter_names = ('start', '_interval', 'duration') def __init__(self, start=0, _interval=1e12, duration=0): self.start = start self.interval = _interval self.duration = duration 
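        # noise=1 makes the inter-spike intervals exponentially distributed,
        # i.e. the source fires as a Poisson process with mean rate
        # 1000.0/interval spikes/s (see the SpikeSourcePoisson translations).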
self.noise = 1 self.spike_times = h.Vector(0) self.source = self self.switch = h.NetCon(None, self) self.source_section = None self.seed(state.mpi_rank) # should allow user to set specific seeds somewhere, e.g. in setup() def _set_interval(self, value): self.switch.weight[0] = -1 self.switch.event(h.t+1e-12, 0) self.interval = value self.switch.weight[0] = 1 self.switch.event(h.t+2e-12, 1) def _get_interval(self): return self.interval _interval = property(fget=_get_interval, fset=_set_interval) def record(self, active): if active: self.rec = h.NetCon(self, None) self.rec.record(self.spike_times) class VectorSpikeSource(hclass(h.VecStim)): parameter_names = ('spike_times',) def __init__(self, spike_times=[]): self.spike_times = spike_times self.source = self self.source_section = None def _set_spike_times(self, spike_times): try: self._spike_times = h.Vector(spike_times) except RuntimeError: raise errors.InvalidParameterValueError("spike_times must be an array of floats") self.play(self._spike_times) def _get_spike_times(self): return self._spike_times spike_times = property(fget=_get_spike_times, fset=_set_spike_times) def record(self, active): """ Since spike_times are specified by user, recording is meaningless, but we need to provide a stub for consistency with other models. """ pass PyNN-0.7.4/src/neuron/__init__.py0000644000175000017500000003067411736323051017351 0ustar andrewandrew00000000000000# encoding: utf-8 """ nrnpython implementation of the PyNN API. :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. $Id:__init__.py 188 2008-01-29 10:03:59Z apdavison $ """ __version__ = "$Rev: 191 $" from pyNN.random import * from pyNN.neuron import simulator from pyNN import common, recording as base_recording, space, __doc__ common.simulator = simulator base_recording.simulator = simulator from pyNN.neuron.standardmodels.cells import * from pyNN.neuron.connectors import * from pyNN.neuron.standardmodels.synapses import * from pyNN.neuron.electrodes import * from pyNN.neuron.recording import Recorder from pyNN import standardmodels import numpy import logging from neuron import h logger = logging.getLogger("PyNN") # ============================================================================== # Utility functions # ============================================================================== def list_standard_models(): """Return a list of all the StandardCellType classes available for this simulator.""" return [obj.__name__ for obj in globals().values() if isinstance(obj, type) and issubclass(obj, standardmodels.StandardCellType)] # ============================================================================== # Functions for simulation set-up and control # ============================================================================== def setup(timestep=0.1, min_delay=0.1, max_delay=10.0, **extra_params): """ Should be called at the very beginning of a script. extra_params contains any keyword arguments that are required by a given simulator but not by others. NEURON specific extra_params: use_cvode - use the NEURON cvode solver. Defaults to False. 
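                                 rtol - relative tolerance for the cvode solver.
                                 atol - absolute tolerance for the cvode solver.
                                 default_maxstep - maximum step size (in ms) used by
                                     the parallel solver. Defaults to 10.0.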
returns: MPI rank """ common.setup(timestep, min_delay, max_delay, **extra_params) simulator.initializer.clear() simulator.state.clear() simulator.reset() simulator.state.dt = timestep simulator.state.min_delay = min_delay simulator.state.max_delay = max_delay if extra_params.has_key('use_cvode'): simulator.state.cvode.active(int(extra_params['use_cvode'])) if extra_params.has_key('rtol'): simulator.state.cvode.rtol(float(extra_params['rtol'])) if extra_params.has_key('atol'): simulator.state.cvode.atol(float(extra_params['atol'])) if extra_params.has_key('default_maxstep'): simulator.state.default_maxstep=float(extra_params['default_maxstep']) return rank() def end(compatible_output=True): """Do any necessary cleaning up before exiting.""" for recorder in simulator.recorder_list: recorder.write(gather=True, compatible_output=compatible_output) simulator.recorder_list = [] #simulator.finalize() def run(simtime): """Run the simulation for simtime ms.""" simulator.run(simtime) return get_current_time() reset = common.reset initialize = common.initialize # ============================================================================== # Functions returning information about the simulation state # ============================================================================== get_current_time = common.get_current_time get_time_step = common.get_time_step get_min_delay = common.get_min_delay get_max_delay = common.get_max_delay num_processes = common.num_processes rank = common.rank # ============================================================================== # High-level API for creating, connecting and recording from populations of # neurons. # ============================================================================== class Population(common.Population): """ An array of neurons all of the same type. `Population' is used as a generic term intended to include layers, columns, nuclei, etc., of cells. """ recorder_class = Recorder def __init__(self, size, cellclass, cellparams=None, structure=None, label=None): __doc__ = common.Population.__doc__ common.Population.__init__(self, size, cellclass, cellparams, structure, label) simulator.initializer.register(self) def _create_cells(self, cellclass, cellparams, n): """ Create cells in NEURON. `cellclass` -- a PyNN standard cell or a native NEURON cell class that implements an as-yet-undescribed interface. `cellparams` -- a dictionary of cell parameters. `n` -- the number of cells to create """ # this method should never be called more than once # perhaps should check for that assert n > 0, 'n must be a positive integer' celltype = cellclass(cellparams) cell_model = celltype.model cell_parameters = celltype.parameters self.first_id = simulator.state.gid_counter self.last_id = simulator.state.gid_counter + n - 1 self.all_cells = numpy.array([id for id in range(self.first_id, self.last_id+1)], simulator.ID) # mask_local is used to extract those elements from arrays that apply to the cells on the current node self._mask_local = self.all_cells%simulator.state.num_processes==simulator.state.mpi_rank # round-robin distribution of cells between nodes for i,(id,is_local) in enumerate(zip(self.all_cells, self._mask_local)): self.all_cells[i] = simulator.ID(id) self.all_cells[i].parent = self if is_local: self.all_cells[i]._build_cell(cell_model, cell_parameters) simulator.initializer.register(*self.all_cells[self._mask_local]) simulator.state.gid_counter += n def _native_rset(self, parametername, rand_distr): """ 'Random' set. 
Set the value of parametername to a value taken from rand_distr, which should be a RandomDistribution object. """ assert isinstance(rand_distr.rng, NativeRNG) rng = simulator.h.Random(rand_distr.rng.seed or 0) native_rand_distr = getattr(rng, rand_distr.name) rarr = [native_rand_distr(*rand_distr.parameters)] + [rng.repick() for i in range(self.all_cells.size-1)] self.tset(parametername, rarr) PopulationView = common.PopulationView Assembly = common.Assembly class Projection(common.Projection): """ A container for all the connections of a given type (same synapse type and plasticity mechanisms) between two populations, together with methods to set parameters of those connections, including of plasticity mechanisms. """ nProj = 0 def __init__(self, presynaptic_population, postsynaptic_population, method, source=None, target=None, synapse_dynamics=None, label=None, rng=None): """ presynaptic_population and postsynaptic_population - Population objects. source - string specifying which attribute of the presynaptic cell signals action potentials target - string specifying which synapse on the postsynaptic cell to connect to If source and/or target are not given, default values are used. method - a Connector object, encapsulating the algorithm to use for connecting the neurons. synapse_dynamics - a `SynapseDynamics` object specifying which synaptic plasticity mechanisms to use. rng - specify an RNG object to be used by the Connector. """ common.Projection.__init__(self, presynaptic_population, postsynaptic_population, method, source, target, synapse_dynamics, label, rng) self.synapse_type = target or 'excitatory' ## Deal with short-term synaptic plasticity if self.synapse_dynamics and self.synapse_dynamics.fast: # need to check it is actually the Ts-M model, even though that is the only one at present! U = self.synapse_dynamics.fast.parameters['U'] tau_rec = self.synapse_dynamics.fast.parameters['tau_rec'] tau_facil = self.synapse_dynamics.fast.parameters['tau_facil'] u0 = self.synapse_dynamics.fast.parameters['u0'] for cell in self.post: cell._cell.set_Tsodyks_Markram_synapses(self.synapse_type, U, tau_rec, tau_facil, u0) synapse_model = 'Tsodyks-Markram' else: synapse_model = None self.connection_manager = simulator.ConnectionManager(self.synapse_type, synapse_model=synapse_model, parent=self) self.connections = self.connection_manager ## Create connections method.connect(self) logger.info("--- Projection[%s].__init__() ---" %self.label) ## Deal with long-term synaptic plasticity if self.synapse_dynamics and self.synapse_dynamics.slow: ddf = self.synapse_dynamics.slow.dendritic_delay_fraction if ddf > 0.5 and num_processes() > 1: # depending on delays, can run into problems with the delay from the # pre-synaptic neuron to the weight-adjuster mechanism being zero. # The best (only?) solution would be to create connections on the # node with the pre-synaptic neurons for ddf>0.5 and on the node # with the post-synaptic neuron (as is done now) for ddf<0.5 raise NotImplementedError("STDP with dendritic_delay_fraction > 0.5 is not yet supported for parallel computation.") stdp_parameters = self.synapse_dynamics.slow.all_parameters stdp_parameters['allow_update_on_post'] = int(False) # for compatibility with NEST long_term_plasticity_mechanism = self.synapse_dynamics.slow.possible_models for c in self.connections: c.useSTDP(long_term_plasticity_mechanism, stdp_parameters, ddf) # Check none of the delays are out of bounds. 
This should be redundant, # as this should already have been done in the Connector object, so # we could probably remove it. delays = [c.nc.delay for c in self.connections] if delays: assert min(delays) >= get_min_delay() Projection.nProj += 1 # --- Methods for setting connection parameters ---------------------------- def randomizeWeights(self, rand_distr): """ Set weights to random values taken from rand_distr. """ # If we have a native rng, we do the loops in hoc. Otherwise, we do the loops in # Python if isinstance(rand_distr.rng, NativeRNG): rarr = simulator.nativeRNG_pick(len(self), rand_distr.rng, rand_distr.name, rand_distr.parameters) else: rarr = rand_distr.next(len(self)) logger.info("--- Projection[%s].__randomizeWeights__() ---" % self.label) self.setWeights(rarr) def randomizeDelays(self, rand_distr): """ Set delays to random values taken from rand_distr. """ # If we have a native rng, we do the loops in hoc. Otherwise, we do the loops in # Python if isinstance(rand_distr.rng, NativeRNG): rarr = simulator.nativeRNG_pick(len(self), rand_distr.rng, rand_distr.name, rand_distr.parameters) else: rarr = rand_distr.next(len(self)) logger.info("--- Projection[%s].__randomizeDelays__() ---" % self.label) self.setDelays(rarr) Space = space.Space # ============================================================================== # Low-level API for creating, connecting and recording from individual neurons # ============================================================================== create = common.build_create(Population) connect = common.build_connect(Projection, FixedProbabilityConnector) set = common.set record = common.build_record('spikes', simulator) record_v = common.build_record('v', simulator) record_gsyn = common.build_record('gsyn', simulator) # ============================================================================== PyNN-0.7.4/src/neuron/simulator.py0000644000175000017500000005530311737573151017636 0ustar andrewandrew00000000000000# encoding: utf8 """ Implementation of the "low-level" functionality used by the common implementation of the API, for the NEURON simulator. Functions and classes useable by the common implementation: Functions: create_cells() reset() run() finalize() Classes: ID Recorder ConnectionManager Connection Attributes: state -- a singleton instance of the _State class. recorder_list All other functions and classes are private, and should not be used by other modules. :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. 
$Id: simulator.py 1121 2012-04-06 13:58:02Z apdavison $ """ from pyNN import __path__ as pyNN_path from pyNN import common, errors, core import platform import logging import numpy import os.path from neuron import h, load_mechanisms # Global variables nrn_dll_loaded = [] recorder_list = [] connection_managers = [] gid_sources = [] logger = logging.getLogger("PyNN") # --- Internal NEURON functionality -------------------------------------------- def is_point_process(obj): """Determine whether a particular object is a NEURON point process.""" return hasattr(obj, 'loc') def register_gid(gid, source, section=None): """Register a global ID with the global `ParallelContext` instance.""" ###print "registering gid %s to %s (section=%s)" % (gid, source, section) state.parallel_context.set_gid2node(gid, state.mpi_rank) # assign the gid to this node if is_point_process(source): nc = h.NetCon(source, None) # } associate the cell spike source else: nc = h.NetCon(source, None, sec=section) state.parallel_context.cell(gid, nc) # } with the gid (using a temporary NetCon) gid_sources.append(source) # gid_clear (in _State.reset()) will cause a # segmentation fault if any of the sources # registered using pc.cell() no longer exist, so # we keep a reference to all sources in the # global gid_sources list. It would be nicer to # be able to unregister a gid and have a __del__ # method in ID, but this will do for now. def nativeRNG_pick(n, rng, distribution='uniform', parameters=[0,1]): """ Pick random numbers from a Hoc Random object. Return a Numpy array. """ native_rng = h.Random(0 or rng.seed) rarr = [getattr(native_rng, distribution)(*parameters)] rarr.extend([native_rng.repick() for j in xrange(n-1)]) return numpy.array(rarr) def h_property(name): """Return a property that accesses a global variable in Hoc.""" def _get(self): return getattr(h, name) def _set(self, val): setattr(h, name, val) return property(fget=_get, fset=_set) class _Initializer(object): """ Manage initialization of NEURON cells. Rather than create an `FInializeHandler` instance for each cell that needs to initialize itself, we create a single instance, and use an instance of this class to maintain a list of cells that need to be initialized. Public methods: register() """ def __init__(self): """ Create an `FinitializeHandler` object in Hoc, which will call the `_initialize()` method when NEURON is initialized. """ h('objref initializer') h.initializer = self self.fih = h.FInitializeHandler(1, "initializer._initialize()") self.clear() def register(self, *items): """ Add items to the list of cells/populations to be initialized. Cell objects must have a `memb_init()` method. """ for item in items: if isinstance(item, (common.BasePopulation, common.Assembly)): if "Source" not in item.celltype.__class__.__name__: # don't do memb_init() on spike sources self.population_list.append(item) else: if hasattr(item._cell, "memb_init"): self.cell_list.append(item) def _initialize(self): """Call `memb_init()` for all registered cell objects.""" logger.info("Initializing membrane potential of %d cells and %d Populations." 
% \ (len(self.cell_list), len(self.population_list))) for cell in self.cell_list: cell._cell.memb_init() for population in self.population_list: for cell in population: cell._cell.memb_init() def clear(self): self.cell_list = [] self.population_list = [] # --- For implementation of get_time_step() and similar functions -------------- class _State(object): """Represent the simulator state.""" def __init__(self): """Initialize the simulator.""" h('min_delay = 0') h('tstop = 0') h('steps_per_ms = 1/dt') self.parallel_context = h.ParallelContext() self.parallel_context.spike_compress(1, 0) self.num_processes = int(self.parallel_context.nhost()) self.mpi_rank = int(self.parallel_context.id()) self.cvode = h.CVode() h('objref plastic_connections') self.clear() self.default_maxstep=10.0 t = h_property('t') def __get_dt(self): return h.dt def __set_dt(self, dt): h.steps_per_ms = 1.0/dt h.dt = dt dt = property(fget=__get_dt, fset=__set_dt) tstop = h_property('tstop') # } these are stored in hoc so that we min_delay = h_property('min_delay') # } can interact with the GUI def clear(self): global connection_managers, gid_sources self.parallel_context.gid_clear() gid_sources = [] self.gid_counter = 0 self.running = False h.plastic_connections = [] connection_managers = [] def reset(): """Reset the state of the current network to time t = 0.""" state.running = False state.t = 0 state.tstop = 0 h.finitialize() def run(simtime): """Advance the simulation for a certain time.""" if not state.running: state.running = True local_minimum_delay = state.parallel_context.set_maxstep(state.default_maxstep) h.finitialize() state.tstop = 0 logger.debug("default_maxstep on host #%d = %g" % (state.mpi_rank, state.default_maxstep )) logger.debug("local_minimum_delay on host #%d = %g" % (state.mpi_rank, local_minimum_delay)) if state.num_processes > 1: assert local_minimum_delay >= state.min_delay, \ "There are connections with delays (%g) shorter than the minimum delay (%g)" % (local_minimum_delay, state.min_delay) state.tstop += simtime logger.info("Running the simulation for %g ms" % simtime) state.parallel_context.psolve(state.tstop) def finalize(quit=False): """Finish using NEURON.""" state.parallel_context.runworker() state.parallel_context.done() if quit: logger.info("Finishing up with NEURON.") h.quit() # --- For implementation of access to individual neurons' parameters ----------- class ID(int, common.IDMixin): __doc__ = common.IDMixin.__doc__ def __init__(self, n): """Create an ID object with numerical value `n`.""" int.__init__(n) common.IDMixin.__init__(self) def _build_cell(self, cell_model, cell_parameters): """ Create a cell in NEURON, and register its global ID. `cell_model` -- one of the cell classes defined in the `neuron.cells` module (more generally, any class that implements a certain interface, but I haven't explicitly described that yet). `cell_parameters` -- a dictionary containing the parameters used to initialise the cell model. 
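
        As a side-effect, the new cell's spike source is registered with the
        global ParallelContext, so that this gid can be used as a connection
        source from any MPI node.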
""" gid = int(self) self._cell = cell_model(**cell_parameters) # create the cell object register_gid(gid, self._cell.source, section=self._cell.source_section) if hasattr(self._cell, "get_threshold"): # this is not adequate, since the threshold may be changed after cell creation state.parallel_context.threshold(int(self), self._cell.get_threshold()) # the problem is that self._cell does not know its own gid def get_native_parameters(self): """Return a dictionary of parameters for the NEURON cell model.""" D = {} for name in self._cell.parameter_names: D[name] = getattr(self._cell, name) return D def set_native_parameters(self, parameters): """Set parameters of the NEURON cell model from a dictionary.""" for name, val in parameters.items(): setattr(self._cell, name, val) def get_initial_value(self, variable): """Get the initial value of a state variable of the cell.""" return getattr(self._cell, "%s_init" % variable) def set_initial_value(self, variable, value): """Set the initial value of a state variable of the cell.""" index = self.parent.id_to_local_index(self) self.parent.initial_values[variable][index] = value setattr(self._cell, "%s_init" % variable, value) # --- For implementation of connect() and Connector classes -------------------- class Connection(object): """ Store an individual plastic connection and information about it. Provide an interface that allows access to the connection's weight, delay and other attributes. """ def __init__(self, source, target, nc): """ Create a new connection. `source` -- ID of pre-synaptic neuron. `target` -- ID of post-synaptic neuron. `nc` -- a Hoc NetCon object. """ self.source = source self.target = target self.nc = nc def useSTDP(self, mechanism, parameters, ddf): """ Set this connection to use spike-timing-dependent plasticity. `mechanism` -- the name of an NMODL mechanism that modifies synaptic weights based on the times of pre- and post-synaptic spikes. `parameters` -- a dictionary containing the parameters of the weight- adjuster mechanism. `ddf` -- dendritic delay fraction. If ddf=1, the synaptic delay `d` is considered to occur entirely in the post-synaptic dendrite, i.e., the weight adjuster receives the pre- synaptic spike at the time of emission, and the post- synaptic spike a time `d` after emission. If ddf=0, the synaptic delay is considered to occur entirely in the pre-synaptic axon. 
""" self.ddf = ddf self.weight_adjuster = getattr(h, mechanism)(0.5) self.pre2wa = state.parallel_context.gid_connect(int(self.source), self.weight_adjuster) self.pre2wa.threshold = self.nc.threshold self.pre2wa.delay = self.nc.delay * (1-ddf) self.pre2wa.weight[0] = 1 # directly create NetCon as wa is on the same machine as the post-synaptic cell self.post2wa = h.NetCon(self.target._cell.source, self.weight_adjuster, sec=self.target._cell.source_section) self.post2wa.threshold = 1 self.post2wa.delay = self.nc.delay * ddf self.post2wa.weight[0] = -1 for name, value in parameters.items(): setattr(self.weight_adjuster, name, value) # setpointer i = len(h.plastic_connections) h.plastic_connections.append(self) h('setpointer plastic_connections._[%d].weight_adjuster.wsyn, plastic_connections._[%d].nc.weight' % (i,i)) def _set_weight(self, w): self.nc.weight[0] = w def _get_weight(self): """Synaptic weight in nA or µS.""" return self.nc.weight[0] def _set_delay(self, d): self.nc.delay = d if hasattr(self, 'pre2wa'): self.pre2wa.delay = float(d)*(1-self.ddf) self.post2wa.delay = float(d)*self.ddf def _get_delay(self): """Connection delay in ms.""" return self.nc.delay weight = property(_get_weight, _set_weight) delay = property(_get_delay, _set_delay) def generate_synapse_property(name): def _get(self): synapse = self.nc.syn() if hasattr(synapse, name): return getattr(synapse, name) else: raise Exception("synapse type does not have an attribute '%s'" % name) def _set(self, val): synapse = self.nc.syn() if hasattr(synapse, name): setattr(synapse, name, val) else: raise Exception("synapse type does not have an attribute '%s'" % name) return property(_get, _set) setattr(Connection, 'U', generate_synapse_property('U')) setattr(Connection, 'tau_rec', generate_synapse_property('tau_rec')) setattr(Connection, 'tau_facil', generate_synapse_property('tau_facil')) setattr(Connection, 'u0', generate_synapse_property('u0')) def generate_stdp_property(name): def _get(self): return getattr(self.weight_adjuster, name) def _set(self, val): setattr(self.weight_adjuster, name, val) return property(_get, _set) setattr(Connection, 'w_max', generate_stdp_property('wmax')) setattr(Connection, 'w_min', generate_stdp_property('wmin')) setattr(Connection, 'A_plus', generate_stdp_property('aLTP')) setattr(Connection, 'A_minus', generate_stdp_property('aLTD')) setattr(Connection, 'tau_plus', generate_stdp_property('tauLTP')) setattr(Connection, 'tau_minus', generate_stdp_property('tauLTD')) class ConnectionManager(object): """ Manage synaptic connections, providing methods for creating, listing, accessing individual connections. """ def __init__(self, synapse_type, synapse_model=None, parent=None): """ Create a new ConnectionManager. `synapse_model` -- either None or 'Tsodyks-Markram'. 
`parent` -- the parent `Projection` """ global connection_managers assert parent is not None self.connections = [] self.parent = parent self.synapse_type = synapse_type self.synapse_model = synapse_model connection_managers.append(self) def __getitem__(self, i): """Return the `i`th connection on the local MPI node.""" if isinstance(i, int): if i < len(self): return self.connections[i] else: raise IndexError("%d > %d" % (i, len(self)-1)) elif isinstance(i, slice): if i.stop < len(self): return [self.connections[j] for j in range(*i.indices(i.stop))] else: raise IndexError("%d > %d" % (i.stop, len(self)-1)) def __len__(self): """Return the number of connections on the local MPI node.""" return len(self.connections) def __iter__(self): """Return an iterator over all connections on the local MPI node.""" return iter(self.connections) def _resolve_synapse_type(self): if self.synapse_type is None: self.synapse_type = weight>=0 and 'excitatory' or 'inhibitory' if self.synapse_model == 'Tsodyks-Markram' and 'TM' not in self.synapse_type: self.synapse_type += '_TM' def connect(self, source, targets, weights, delays): """ Connect a neuron to one or more other neurons with a static connection. `source` -- the ID of the pre-synaptic cell. `targets` -- a list/1D array of post-synaptic cell IDs, or a single ID. `weight` -- a list/1D array of connection weights, or a single weight. Must have the same length as `targets`. `delays` -- a list/1D array of connection delays, or a single delay. Must have the same length as `targets`. """ if not isinstance(source, int) or source > state.gid_counter or source < 0: errmsg = "Invalid source ID: %s (gid_counter=%d)" % (source, state.gid_counter) raise errors.ConnectionError(errmsg) if not core.is_listlike(targets): targets = [targets] if isinstance(weights, float): weights = [weights] if isinstance(delays, float): delays = [delays] assert len(targets) > 0 for target in targets: if not isinstance(target, common.IDMixin): raise errors.ConnectionError("Invalid target ID: %s" % target) assert len(targets) == len(weights) == len(delays), "%s %s %s" % (len(targets), len(weights), len(delays)) self._resolve_synapse_type() for target, weight, delay in zip(targets, weights, delays): if target.local: if "." in self.synapse_type: section, synapse_type = self.synapse_type.split(".") synapse_object = getattr(getattr(target._cell, section), synapse_type) else: synapse_object = getattr(target._cell, self.synapse_type) nc = state.parallel_context.gid_connect(int(source), synapse_object) nc.weight[0] = weight nc.delay = delay # nc.threshold is supposed to be set by ParallelContext.threshold, called in _build_cell(), above, but this hasn't been tested self.connections.append(Connection(source, target, nc)) def convergent_connect(self, sources, target, weights, delays): """ Connect a neuron to one or more other neurons with a static connection. `sources` -- a list/1D array of pre-synaptic cell IDs, or a single ID. `target` -- the ID of the post-synaptic cell. `weight` -- a list/1D array of connection weights, or a single weight. Must have the same length as `targets`. `delays` -- a list/1D array of connection delays, or a single delay. Must have the same length as `targets`. 
""" if not isinstance(target, int) or target > state.gid_counter or target < 0: errmsg = "Invalid target ID: %s (gid_counter=%d)" % (target, state.gid_counter) raise errors.ConnectionError(errmsg) if not core.is_listlike(sources): sources = [sources] if isinstance(weights, float): weights = [weights] if isinstance(delays, float): delays = [delays] assert len(sources) > 0 for source in sources: if not isinstance(source, common.IDMixin): raise errors.ConnectionError("Invalid source ID: %s" % source) assert len(sources) == len(weights) == len(delays), "%s %s %s" % (len(sources),len(weights),len(delays)) if target.local: for source, weight, delay in zip(sources, weights, delays): if self.synapse_type is None: self.synapse_type = weight >= 0 and 'excitatory' or 'inhibitory' if self.synapse_model == 'Tsodyks-Markram' and 'TM' not in self.synapse_type: self.synapse_type += '_TM' synapse_object = getattr(target._cell, self.synapse_type) nc = state.parallel_context.gid_connect(int(source), synapse_object) nc.weight[0] = weight nc.delay = delay # nc.threshold is supposed to be set by ParallelContext.threshold, called in _build_cell(), above, but this hasn't been tested self.connections.append(Connection(source, target, nc)) def get(self, parameter_name, format): """ Get the values of a given attribute (weight, delay, etc) for all connections on the local MPI node. `parameter_name` -- name of the attribute whose values are wanted. `format` -- "list" or "array". Array format implicitly assumes that all connections belong to a single Projection. Return a list or a 2D Numpy array. The array element X_ij contains the attribute value for the connection from the ith neuron in the pre- synaptic Population to the jth neuron in the post-synaptic Population, if a single such connection exists. If there are no such connections, X_ij will be NaN. If there are multiple such connections, the summed value will be given, which makes some sense for weights, but is pretty meaningless for delays. """ if format == 'list': values = [getattr(c, parameter_name) for c in self.connections] elif format == 'array': values = numpy.nan * numpy.ones((self.parent.pre.size, self.parent.post.size)) for c in self.connections: value = getattr(c, parameter_name) addr = (self.parent.pre.id_to_index(c.source), self.parent.post.id_to_index(c.target)) if numpy.isnan(values[addr]): values[addr] = value else: values[addr] += value else: raise Exception("format must be 'list' or 'array'") return values def set(self, name, value): """ Set connection attributes for all connections on the local MPI node. `name` -- attribute name `value` -- the attribute numeric value, or a list/1D array of such values of the same length as the number of local connections, or a 2D array with the same dimensions as the connectivity matrix (as returned by `get(format='array')`). """ if numpy.isscalar(value): for c in self: setattr(c, name, value) elif isinstance(value, numpy.ndarray) and len(value.shape) == 2: for c in self.connections: addr = (self.parent.pre.id_to_index(c.source), self.parent.post.id_to_index(c.target)) try: val = value[addr] except IndexError, e: raise IndexError("%s. 
addr=%s" % (e, addr)) if numpy.isnan(val): raise Exception("Array contains no value for synapse from %d to %d" % (c.source, c.target)) else: setattr(c, name, val) elif core.is_listlike(value): for c,val in zip(self.connections, value): setattr(c, name, val) else: raise TypeError("Argument should be a numeric type (int, float...), a list, or a numpy array.") # --- Initialization, and module attributes ------------------------------------ mech_path = os.path.join(pyNN_path[0], 'neuron', 'nmodl') load_mechanisms(mech_path) # maintains a list of mechanisms that have already been imported state = _State() # a Singleton, so only a single instance ever exists del _State initializer = _Initializer() del _Initializer PyNN-0.7.4/src/neuron/connectors.py0000644000175000017500000000135611736323051017762 0ustar andrewandrew00000000000000""" Connection method classes for the neuron module :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. $Id: connectors.py 957 2011-05-03 13:44:15Z apdavison $ """ from pyNN.connectors import AllToAllConnector, \ OneToOneConnector, \ FixedProbabilityConnector, \ DistanceDependentProbabilityConnector, \ FromListConnector, \ FromFileConnector, \ FixedNumberPreConnector, \ FixedNumberPostConnector, \ SmallWorldConnector, \ CSAConnector PyNN-0.7.4/src/neuron/nmodl/0000755000175000017500000000000011774605211016342 5ustar andrewandrew00000000000000PyNN-0.7.4/src/neuron/nmodl/hh_traub.mod0000755000175000017500000000446111736323051020644 0ustar andrewandrew00000000000000COMMENT Modified Hodgkin-Huxley model ENDCOMMENT UNITS { (mA) = (milliamp) (mV) = (millivolt) (uS) = (microsiemens) (S) = (siemens) } NEURON { SUFFIX hh_traub USEION na READ ena WRITE ina USEION k READ ek WRITE ik NONSPECIFIC_CURRENT il RANGE gnabar, gkbar, gl, el, gna, gk, vT GLOBAL minf, hinf, ninf, mtau, htau, ntau THREADSAFE : assigned GLOBALs will be per thread } PARAMETER { gnabar = 0.02 (S/cm2) <0,1e9> gkbar = 0.006 (S/cm2) <0,1e9> gl = 0.00001 (S/cm2) <0,1e9> el = -60.0 (mV) vT = -63.0 (mV) } STATE { m h n } ASSIGNED { v (mV) ena (mV) ek (mV) gna (S/cm2) gk (S/cm2) ina (mA/cm2) ik (mA/cm2) il (mA/cm2) minf hinf ninf mtau (ms) htau (ms) ntau (ms) } BREAKPOINT { SOLVE states METHOD cnexp gna = gnabar*m*m*m*h ina = gna*(v - ena) gk = gkbar*n*n*n*n ik = gk*(v - ek) il = gl*(v - el) } INITIAL { : the following (commented out) is the preferred initialization :rates(v) :m = minf :h = hinf :n = ninf : but for compatibility with NEST, we use the following m = 0 h = 0 n = 0 } DERIVATIVE states { rates(v) m' = (minf-m)/mtau h' = (hinf-h)/htau n' = (ninf-n)/ntau } PROCEDURE rates(v(mV)) { LOCAL alpha, beta, sum, u TABLE minf, mtau, hinf, htau, ninf, ntau FROM -100 TO 100 WITH 200 UNITSOFF u = v - vT :"m" sodium activation system alpha = 0.32 * vtrap(13-u, 4) beta = 0.28 * vtrap(u-40, 5) sum = alpha + beta mtau = 1/sum minf = alpha/sum :"h" sodium inactivation system alpha = 0.128 * exp((17-u)/18) beta = 4 / (exp((40-u)/5) + 1) sum = alpha + beta htau = 1/sum hinf = alpha/sum :"n" potassium activation system alpha = 0.032*vtrap(15-u, 5) beta = 0.5*exp((10-u)/40) sum = alpha + beta ntau = 1/sum ninf = alpha/sum } FUNCTION vtrap(x,y) { :Traps for 0 in denominator of rate eqns. 
if (fabs(x/y) < 1e-6) { vtrap = y*(1 - x/y/2) }else{ vtrap = x/(exp(x/y) - 1) } } UNITSON PyNN-0.7.4/src/neuron/nmodl/stdwa_softlimits.mod0000644000175000017500000000375111736323051022445 0ustar andrewandrew00000000000000COMMENT Spike Timing Dependent Weight Adjuster based on Song and Abbott, 2001, but with soft weight limits Andrew Davison, UNIC, CNRS, 2003-2005, 2009 ENDCOMMENT NEURON { POINT_PROCESS StdwaSoft RANGE interval, tlast_pre, tlast_post, M, P RANGE deltaw, wmax, wmin, aLTP, aLTD, wprune, tauLTP, tauLTD, on RANGE allow_update_on_post POINTER wsyn } ASSIGNED { interval (ms) : since last spike of the other kind tlast_pre (ms) : time of last presynaptic spike tlast_post (ms) : time of last postsynaptic spike M : LTD function P : LTP function deltaw : change in weight wsyn : weight of the synapse } INITIAL { interval = 0 tlast_pre = 0 tlast_post = 0 M = 0 P = 0 deltaw = 0 } PARAMETER { tauLTP = 20 (ms) : decay time for LTP part ( values from ) tauLTD = 20 (ms) : decay time for LTD part ( Song and Abbott, 2001 ) wmax = 1 : min and max values of synaptic weight wmin = 0 aLTP = 0.001 : amplitude of LTP steps aLTD = 0.00106 : amplitude of LTD steps on = 1 : allows learning to be turned on and off globally wprune = 0 : default is no pruning allow_update_on_post = 1 : if this is true, we update the weight on receiving both pre- and post-synaptic spikes : if it is false, weight updates are accumulated and applied only for a pre-synaptic spike } NET_RECEIVE (w) { if (w >= 0) { : this is a pre-synaptic spike P = P*exp((tlast_pre-t)/tauLTP) + aLTP interval = tlast_post - t : interval is negative tlast_pre = t deltaw = deltaw + (wsyn-wmin) * M * exp(interval/tauLTD) } else { : this is a post-synaptic spike M = M*exp((tlast_post-t)/tauLTD) - aLTD interval = t - tlast_pre : interval is positive tlast_post = t deltaw = deltaw + (wmax-wsyn) * P * exp(-interval/tauLTP) } if (on) { if (w >= 0 || allow_update_on_post) { if (wsyn > wprune) { wsyn = wsyn + deltaw } else { wsyn = 0 } deltaw = 0.0 } } } PyNN-0.7.4/src/neuron/nmodl/netstim2.mod0000755000175000017500000001224111736323051020610 0ustar andrewandrew00000000000000: Based on: : $Id: netstim2.mod 888 2011-01-04 15:17:54Z apdavison $ : but with fixed duration rather than fixed number of spikes, and an interval : that can safely be varied during the simulation : Modified by Andrew Davison, UNIC, CNRS NEURON { ARTIFICIAL_CELL NetStimFD RANGE interval, start, duration RANGE noise THREADSAFE : only true if every instance has its own distinct Random POINTER donotuse } PARAMETER { interval = 10 (ms) <1e-9,1e9> : time between spikes (msec) duration = 100 (ms) <0,1e9> : duration of firing (msec) start = 50 (ms) : start of first spike noise = 0 <0,1> : amount of randomness (0.0 - 1.0) } ASSIGNED { event (ms) on donotuse valid } PROCEDURE seed(x) { set_seed(x) } INITIAL { valid = 4 on = 0 : off if (noise < 0) { noise = 0 } if (noise > 1) { noise = 1 } if (start >= 0 && duration > 0) { : randomize the first spike so on average it occurs at : start + noise*interval invl(interval) : for some reason, the first invl() call seems to give implausibly large values, so we discard it event = start + invl(interval) - interval*(1. 
- noise) : but not earlier than 0 if (event < 0) { event = 0 } if (event < start+duration) { on = 1 net_send(event, 3) } } } PROCEDURE init_sequence(t(ms)) { if (duration > 0) { on = 1 event = 0 } } FUNCTION invl(mean (ms)) (ms) { if (mean <= 0.0) { mean = 0.01 (ms) : I would worry if it were 0.0 } if (noise == 0) { invl = mean }else{ invl = (1.0 - noise)*mean + noise*mean*erand() } } VERBATIM double nrn_random_pick(void* r); void* nrn_random_arg(int argpos); ENDVERBATIM FUNCTION erand() { VERBATIM if (_p_donotuse) { /* :Supports separate independent but reproducible streams for : each instance. However, the corresponding hoc Random : distribution MUST be set to Random.negexp(1) */ _lerand = nrn_random_pick(_p_donotuse); }else{ /* only can be used in main thread */ if (_nt != nrn_threads) { hoc_execerror("multithread random in NetStim"," only via hoc Random"); } ENDVERBATIM : the old standby. Cannot use if reproducible parallel sim : independent of nhost or which host this instance is on : is desired, since each instance on this cpu draws from : the same stream erand = exprand(1) VERBATIM } ENDVERBATIM } PROCEDURE noiseFromRandom() { VERBATIM { void** pv = (void**)(&_p_donotuse); if (ifarg(1)) { *pv = nrn_random_arg(1); }else{ *pv = (void*)0; } } ENDVERBATIM } PROCEDURE next_invl() { if (duration > 0) { event = invl(interval) } if (t+event >= start+duration) { on = 0 } :printf("t=%g, event=%g, t+event=%g, on=%g\n", t, event, t+event, on) } NET_RECEIVE (w) { if (flag == 0) { : external event :printf("external event. w = %g\n", w) if (w > 0) { : turn on spike sequence : but not if a netsend is on the queue init_sequence(t) : randomize the first spike so on average it occurs at : noise*interval (most likely interval is always 0) next_invl() event = event - interval*(1.0 - noise) valid = valid + 1 : events with previous values of valid will be ignored. net_send(event, valid) }else if (w < 0) { : turn off spiking definitively on = 0 } } if (flag == 3) { : from INITIAL if (on == 1) { : but ignore if turned off by external event init_sequence(t) net_send(0, valid) :printf("init_sequence(%g)\n", t) } } if (flag == valid && on == 1) { net_event(t) next_invl() :printf("%g %g %g flag=%g valid=%g\n", t, interval, event, flag, valid) if (on == 1) { net_send(event, valid) } } } COMMENT Presynaptic spike generator --------------------------- This mechanism has been written to be able to use synapses in a single neuron receiving various types of presynaptic trains. This is a "fake" presynaptic compartment containing a spike generator. The trains of spikes can be either periodic or noisy (Poisson-distributed) Parameters; noise: between 0 (no noise-periodic) and 1 (fully noisy) interval: mean time between spikes (ms) [number: number of spikes (independent of noise)] - deleted duration: duration of spiking (ms) - added Written by Z. Mainen, modified by A. Destexhe, The Salk Institute Modified by Michael Hines for use with CVode The intrinsic bursting parameters have been removed since generators can stimulate other generators to create complicated bursting patterns with independent statistics (see below) Modified by Michael Hines to use logical event style with NET_RECEIVE This stimulator can also be triggered by an input event. If the stimulator is in the on==0 state (no net_send events on queue) and receives a positive weight event, then the stimulator changes to the on=1 state and goes through its entire spike sequence before changing to the on=0 state. 
During that time it ignores any positive weight events. If, in an on!=0 state, the stimulator receives a negative weight event, the stimulator will change to the on==0 state. In the on==0 state, it will ignore any ariving net_send events. A change to the on==1 state immediately fires the first spike of its sequence. ENDCOMMENT PyNN-0.7.4/src/neuron/nmodl/stdwa_songabbott.mod0000644000175000017500000000435311736323051022411 0ustar andrewandrew00000000000000COMMENT Spike Timing Dependent Weight Adjuster based on Song and Abbott, 2001. Andrew Davison, UNIC, CNRS, 2003-2004, 2009 ENDCOMMENT NEURON { POINT_PROCESS StdwaSA RANGE interval, tlast_pre, tlast_post, M, P RANGE deltaw, wmax, wmin, aLTP, aLTD, tauLTP, tauLTD, on RANGE allow_update_on_post POINTER wsyn } ASSIGNED { interval (ms) : since last spike of the other kind tlast_pre (ms) : time of last presynaptic spike tlast_post (ms) : time of last postsynaptic spike M : LTD function P : LTP function deltaw : change in weight wsyn : weight of the synapse } INITIAL { interval = 0 tlast_pre = 0 tlast_post = 0 M = 0 P = 0 deltaw = 0 } PARAMETER { tauLTP = 20 (ms) : decay time for LTP part ( values from ) tauLTD = 20 (ms) : decay time for LTD part ( Song and Abbott, 2001 ) wmax = 1 : min and max values of synaptic weight wmin = 0 : aLTP = 0.001 : amplitude of LTP steps aLTD = 0.00106 : amplitude of LTD steps on = 1 : allows learning to be turned on and off allow_update_on_post = 1 : if this is true, we update the weight on receiving both pre- and post-synaptic spikes : if it is false, weight updates are accumulated and applied only for a pre-synaptic spike } NET_RECEIVE (w) { if (w >= 0) { : this is a pre-synaptic spike P = P*exp((tlast_pre-t)/tauLTP) + aLTP interval = tlast_post - t : interval is negative tlast_pre = t deltaw = deltaw + wmax * M * exp(interval/tauLTD) :printf("pre: t=%f P=%f M=%f interval=%f deltaw=%f w_syn(b4)=%f\n", t, P, M, interval, deltaw, wsyn) } else { : this is a post-synaptic spike M = M*exp((tlast_post-t)/tauLTD) - aLTD interval = t - tlast_pre : interval is positive tlast_post = t deltaw = deltaw + wmax * P * exp(-interval/tauLTP) :printf("post: t=%f P=%f M=%f interval=%f deltaw=%f, w_syn(b4)=%f\n", t, P, M, interval, deltaw, wsyn) } if (on) { if (w >= 0 || allow_update_on_post) { wsyn = wsyn + deltaw if (wsyn > wmax) { wsyn = wmax } if (wsyn < wmin) { wsyn = wmin } deltaw = 0.0 } } :printf("update: w=%f\n", wsyn) } PyNN-0.7.4/src/neuron/nmodl/expisyn.mod0000755000175000017500000000057111736323051020545 0ustar andrewandrew00000000000000NEURON { POINT_PROCESS ExpISyn RANGE tau, i NONSPECIFIC_CURRENT i } UNITS { (nA) = (nanoamp) (mV) = (millivolt) } PARAMETER { tau = 0.1 (ms) <1e-9,1e9> } STATE { i (nA) } INITIAL { i = 0 } BREAKPOINT { SOLVE state METHOD cnexp } DERIVATIVE state { i' = -i/tau } NET_RECEIVE(weight (nA)) { :printf("t = %f, weight = %f\n", t, weight) i = i - weight } PyNN-0.7.4/src/neuron/nmodl/stdwa_guetig.mod0000644000175000017500000000374211736323051021534 0ustar andrewandrew00000000000000COMMENT Spike Timing Dependent Weight Adjuster based on Song and Abbott, 2001, but with weight limits according to Guetig et al, 2003 Andrew Davison, UNIC, CNRS, 2003-2005, 2009 ENDCOMMENT NEURON { POINT_PROCESS StdwaGuetig RANGE interval, tlast_pre, tlast_post, M, P RANGE deltaw, wmax, wmin, aLTP, aLTD, tauLTP, tauLTD, on RANGE muLTP, muLTD RANGE allow_update_on_post POINTER wsyn } ASSIGNED { interval (ms) : since last spike of the other kind tlast_pre (ms) : time of last presynaptic spike tlast_post (ms) : time of 
last postsynaptic spike M : LTD function P : LTP function deltaw : change in weight wsyn : weight of the synapse } INITIAL { interval = 0 tlast_pre = 0 tlast_post = 0 M = 0 P = 0 deltaw = 0 } PARAMETER { tauLTP = 20 (ms) : decay time for LTP part ( values from ) tauLTD = 20 (ms) : decay time for LTD part ( Song and Abbott, 2001 ) wmax = 1 : min and max values of synaptic weight wmin = 0 aLTP = 0.001 : amplitude of LTP steps aLTD = 0.00106 : amplitude of LTD steps muLTP = 0.0 muLTD = 0.0 on = 1 : allows learning to be turned on and off allow_update_on_post = 1 : if this is true, we update the weight on receiving both pre- and post-synaptic spikes : if it is false, weight updates are accumulated and applied only for a pre-synaptic spike } NET_RECEIVE (w) { if (w >= 0) { : this is a pre-synaptic spike P = P*exp((tlast_pre-t)/tauLTP) + aLTP interval = tlast_post - t : interval is negative tlast_pre = t deltaw = deltaw + (wsyn-wmin)^muLTD * M * exp(interval/tauLTD) } else { : this is a post-synaptic spike M = M*exp((tlast_post-t)/tauLTD) - aLTD interval = t - tlast_pre : interval is positive tlast_post = t deltaw = deltaw + (wmax-wsyn)^muLTP * P * exp(-interval/tauLTP) } if (on) { if (w >= 0 || allow_update_on_post) { wsyn = wsyn + deltaw deltaw = 0.0 } } } PyNN-0.7.4/src/neuron/nmodl/tmgsyn.mod0000644000175000017500000001135011736323051020361 0ustar andrewandrew00000000000000: This model implemented by Ted Carnevale, and obtained from : http://senselab.med.yale.edu/modeldb/showmodel.asp?model=3815 COMMENT Revised 12/15/2000 in light of a personal communication from Misha Tsodyks that u is incremented _before_ x is converted to y--a point that was not clear in the paper. If u is incremented _after_ x is converted to y, then the first synaptic activation after a long interval of silence will produce smaller and smaller postsynaptic effect as the length of the silent interval increases, eventually becoming vanishingly small. Implementation of a model of short-term facilitation and depression based on the kinetics described in Tsodyks et al. Synchrony generation in recurrent networks with frequency-dependent synapses Journal of Neuroscience 20:RC50:1-5, 2000. Their mechanism represented synapses as current sources. The mechanism implemented here uses a conductance change instead. The basic scheme is x -------> y Instantaneous, spike triggered. Increment is u*x (see discussion of u below). x == fraction of "synaptic resources" that have "recovered" (fraction of xmtr pool that is ready for release, or fraction of postsynaptic channels that are ready to be opened, or some joint function of these two factors) y == fraction of "synaptic resources" that are in the "active state." This is proportional to the number of channels that are open, or the fraction of max synaptic current that is being delivered. tau y -------> z z == fraction of "synaptic resources" that are in the "inactive state" tau_rec z -------> x where x + y + z = 1 The active state y is multiplied by a synaptic weight to compute the actual synaptic conductance (or current, in the original form of the model). In addition, there is a "facilition" term u that governs the fraction of x that is converted to y on each synaptic activation. -------> u Instantaneous, spike triggered, happens _BEFORE_ x is converted to y. Increment is U*(1-u) where U and u both lie in the range 0 - 1. 
tau_facil u -------> decay of facilitation This implementation for NEURON offers the user a parameter u0 that has a default value of 0 but can be used to specify a nonzero initial value for u. When tau_facil = 0, u is supposed to equal U. Note that the synaptic conductance in this mechanism has the same kinetics as y, i.e. decays with time constant tau. This mechanism can receive multiple streams of synaptic input via NetCon objects. Each stream keeps track of its own weight and activation history. The printf() statements are for testing purposes only. ENDCOMMENT NEURON { POINT_PROCESS tmgsyn RANGE e, i RANGE tau, tau_rec, tau_facil, U, u0 NONSPECIFIC_CURRENT i } UNITS { (nA) = (nanoamp) (mV) = (millivolt) (umho) = (micromho) } PARAMETER { : e = -90 mV for inhibitory synapses, : 0 mV for excitatory e = -90 (mV) : tau was the same for inhibitory and excitatory synapses : in the models used by T et al. tau = 3 (ms) < 1e-9, 1e9 > : tau_rec = 100 ms for inhibitory synapses, : 800 ms for excitatory tau_rec = 100 (ms) < 1e-9, 1e9 > : tau_facil = 1000 ms for inhibitory synapses, : 0 ms for excitatory tau_facil = 1000 (ms) < 0, 1e9 > : U = 0.04 for inhibitory synapses, : 0.5 for excitatory : the (1) is needed for the < 0, 1 > to be effective : in limiting the values of U and u0 U = 0.04 (1) < 0, 1 > : initial value for the "facilitation variable" u0 = 0 (1) < 0, 1 > } ASSIGNED { v (mV) i (nA) x } STATE { g (umho) } INITIAL { g=0 } BREAKPOINT { SOLVE state METHOD cnexp i = g*(v - e) } DERIVATIVE state { g' = -g/tau } NET_RECEIVE(weight (umho), y, z, u, tsyn (ms)) { INITIAL { : these are in NET_RECEIVE to be per-stream y = 0 z = 0 : u = 0 u = u0 tsyn = t : this header will appear once per stream :printf("t\t t-tsyn\t y\t z\t u\t newu\t g\t dg\t newg\t newy\n") } : first calculate z at event- : based on prior y and z z = z*exp(-(t - tsyn)/tau_rec) z = z + ( y*(exp(-(t - tsyn)/tau) - exp(-(t - tsyn)/tau_rec)) / ((tau/tau_rec)-1) ) : now calc y at event- y = y*exp(-(t - tsyn)/tau) x = 1-y-z : calc u at event-- if (tau_facil > 0) { u = u*exp(-(t - tsyn)/tau_facil) } else { u = U } :printf("%g\t%g\t%g\t%g\t%g", t, t-tsyn, y, z, u) if (tau_facil > 0) { state_discontinuity(u, u + U*(1-u)) } :printf("\t%g\t%g\t%g", u, g, weight*x*u) state_discontinuity(g, g + weight*x*u) state_discontinuity(y, y + x*u) tsyn = t :printf("\t%g\t%g\n", g, y) } PyNN-0.7.4/src/neuron/nmodl/refrac.mod0000644000175000017500000000332611736323051020306 0ustar andrewandrew00000000000000: Insert in a passive compartment to get an integrate-and-fire neuron : with a refractory period. : Note that this only sets the membrane potential to the correct value : at the start and end of the refractory period, and prevents spikes : during the period by clamping the membrane potential to the reset : voltage with a huge conductance. : : Andrew P. Davison. UNIC, CNRS, May 2006. : $Id: refrac.mod 888 2011-01-04 15:17:54Z apdavison $ NEURON { POINT_PROCESS ResetRefrac RANGE vreset, trefrac, vspike, vthresh NONSPECIFIC_CURRENT i } UNITS { (mV) = (millivolt) (nA) = (nanoamp) (uS) = (microsiemens) } PARAMETER { vthresh = -50 (mV) : spike threshold vreset = -60 (mV) : reset potential after a spike vspike = 40 (mV) : spike height (mainly for graphical purposes) trefrac = 1 (ms) g_on = 1e12 (uS) spikewidth = 1e-12 (ms) : must be less than trefrac. Check for this? 
} ASSIGNED { v (mV) i (nA) g (uS) refractory } INITIAL { g = 0 net_send(0,4) } BREAKPOINT { i = g*(v-vreset) } NET_RECEIVE (weight) { if (flag == 1) { : beginning of spike g = g_on state_discontinuity(v,vspike) net_send(spikewidth,2) net_event(t) } else if (flag == 2) { : end of spike, beginning of refractory period state_discontinuity(v,vreset) if (trefrac > spikewidth) { net_send(trefrac-spikewidth,3) } else { : also the end of the refractory period g = 0 } } else if (flag == 3) { : end of refractory period state_discontinuity(v,vreset) g = 0 } else if (flag == 4) { : watch membrane potential WATCH (v > vthresh) 1 } }PyNN-0.7.4/src/neuron/nmodl/alphasyn.mod0000644000175000017500000000276211736323051020666 0ustar andrewandrew00000000000000TITLE Alpha-function synaptic conductance, with NET_RECEIVE COMMENT This model works with variable time-step methods (although it may not be very accurate) but at the expense of having to maintain the queues of spike times and weights. Andrew P. Davison, UNIC, CNRS, May 2006 ENDCOMMENT DEFINE MAX_SPIKES 1000 DEFINE CUTOFF 20 NEURON { POINT_PROCESS AlphaSyn RANGE tau, i, g, e, q NONSPECIFIC_CURRENT i } UNITS { (nA) = (nanoamp) (uS) = (microsiemens) (mV) = (millivolts) } PARAMETER { tau = 5 (ms) <1e-9,1e9> e = 0 (mV) } ASSIGNED { v (mV) i (nA) g (uS) q onset_times[MAX_SPIKES] (ms) weight_list[MAX_SPIKES] (uS) } INITIAL { i = 0 q = 0 : queue index } BREAKPOINT { LOCAL k, expired_spikes, x g = 0 expired_spikes = 0 FROM k=0 TO q-1 { x = (t - onset_times[k])/tau if (x > CUTOFF) { expired_spikes = expired_spikes + 1 } else { g = g + weight_list[k] * alpha(x) } } i = g*(v - e) update_queue(expired_spikes) } FUNCTION update_queue(n) { LOCAL k :if (n > 0) { printf("Queue changed. t = %4.2f onset_times=[",t) } FROM k=0 TO q-n-1 { onset_times[k] = onset_times[k+n] weight_list[k] = weight_list[k+n] :if (n > 0) { printf("%4.2f ",onset_times[k]) } } :if (n > 0) { printf("]\n") } q = q-n } FUNCTION alpha(x) { if (x < 0) { alpha = 0 } else { alpha = x * exp(1 - x) } } NET_RECEIVE(weight (uS)) { onset_times[q] = t weight_list[q] = weight if (q >= MAX_SPIKES-1) { printf("Error in AlphaSyn. Spike queue is full\n") } else { q = q + 1 } } PyNN-0.7.4/src/neuron/nmodl/gsfa_grr.mod0000644000175000017500000000327711736323051020643 0ustar andrewandrew00000000000000: conductance based spike-frequency adaptation, and a conductance-based relative refractory : mechanism ... to be inserted in a integrate-and-fire neuron : : See: Muller et al (2007) Spike-frequency adapting neural ensembles: Beyond : mean-adaptation and renewal theories. Neural Computation 19: 2958-3010. : : : Implemented from adexp.mod by Eilif Muller. EPFL-BMI, Jan 2011. 
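:
: Summary of the dynamics implemented below: between spikes both
: conductances decay exponentially,
:
:     g_s' = -g_s/tau_s        g_r' = -g_r/tau_r
:
: and each detected spike (via the WATCH/flag mechanism at the bottom of
: the file) increments them by the quantal values q_s and q_r. The
: resulting current is i = 0.001*(g_r*(v - E_r) + g_s*(v - E_s)), where
: the factor 0.001 converts nS*mV (i.e. pA) to nA.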
: $Id: gsfa_grr.mod 888 2011-01-04 15:17:54Z apdavison $ NEURON { POINT_PROCESS GsfaGrr RANGE vthresh RANGE q_r, q_s RANGE E_s, E_r, tau_s, tau_r NONSPECIFIC_CURRENT i } UNITS { (mV) = (millivolt) (nA) = (nanoamp) (uS) = (microsiemens) (nS) = (nanosiemens) } PARAMETER { vthresh = -57 (mV) : spike threshold q_r = 3214.0 (nS) : relative refractory quantal conductance q_s = 14.48 (nS) : SFA quantal conductance tau_s = 110.0 (ms) : time constant of SFA tau_r = 1.97 (ms) : time constant of relative refractory mechanism E_s = -70 (mV) : SFA reversal potential E_r = -70 (mV) : relative refractory period reversal potential } ASSIGNED { v (mV) i (nA) } STATE { g_s (nS) g_r (nS) } INITIAL { g_s = 0 g_r = 0 net_send(0,2) } BREAKPOINT { SOLVE states METHOD cnexp :derivimplicit i = (0.001)*(g_r*(v-E_r) + g_s*(v-E_s)) } DERIVATIVE states { : solve eq for adaptation, relref variable g_s' = -g_s/tau_s g_r' = -g_r/tau_r } NET_RECEIVE (weight) { if (flag == 1) { : beginning of spike state_discontinuity(g_s, g_s + q_s) state_discontinuity(g_r, g_r + q_r) } else if (flag == 2) { : watch membrane potential WATCH (v > vthresh) 1 } }PyNN-0.7.4/src/neuron/nmodl/adexp.mod0000644000175000017500000000623511736323051020147 0ustar andrewandrew00000000000000: Insert in a passive compartment to get an adaptive-exponential (Brette-Gerstner) : integrate-and-fire neuron with a refractory period. : This calculates the adaptive current, sets the membrane potential to the : correct value at the start and end of the refractory period, and prevents spikes : during the period by clamping the membrane potential to the reset voltage with : a huge conductance. : : Reference: : : Brette R and Gerstner W. Adaptive exponential integrate-and-fire : model as an effective description of neuronal activity. : J. Neurophysiol. 94: 3637-3642, 2005. : : Implemented by Andrew Davison. UNIC, CNRS, March 2009. 
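:
: In the notation of this file, the subthreshold dynamics of the model are
:
:     C dv/dt   = -GL*(v - EL) + GL*delta*exp((v - vthresh)/delta) - w + I
:     tauw dw/dt = a*(v - EL) - w
:
: with v reset to vreset and w incremented by b at each spike. The leak and
: capacitance are supplied by the host compartment (pas mechanism), so this
: point process contributes only the exponential term, the adaptation
: current w and the refractory clamp.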
: $Id: adexp.mod 888 2011-01-04 15:17:54Z apdavison $ NEURON { POINT_PROCESS AdExpIF RANGE vreset, trefrac, vspike, vthresh, vpeak, spikewidth RANGE w, winit RANGE a, b, tauw, EL, GL, delta NONSPECIFIC_CURRENT i } UNITS { (mV) = (millivolt) (nA) = (nanoamp) (uS) = (microsiemens) } PARAMETER { vthresh = -50 (mV) : spike threshold for exponential calculation purposes vreset = -60 (mV) : reset potential after a spike vspike = -40 (mV) : spike detection threshold vpeak = 0 (mV) : peak of spike trefrac = 1 (ms) : refractory period gon = 1e6 (uS) : refractory clamp conductance spikewidth = 1e-12 (ms) : must be less than trefrac a = 0.004 (uS) : level of adaptation b = 0.0805 (nA) : increment of adaptation tauw = 144 (ms) : time constant of adaptation EL = -70.6 (mV) : leak reversal (must be equal to e_pas) GL = 0.03 (uS) : leak conductance (must be equal to g_pas(S/cm2)*membrane area(um2)*1e-2) delta = 2 (mV) : steepness of exponential approach to threshold winit = 0 (nA) } ASSIGNED { v (mV) i (nA) irefrac (nA) iexp (nA) grefrac (uS) refractory } STATE { w (nA) } INITIAL { grefrac = 0 net_send(0,4) w = winit } BREAKPOINT { SOLVE states METHOD cnexp :derivimplicit irefrac = grefrac*(v-vreset) iexp = - GL*delta*exp((v-vthresh)/delta) i = iexp + w + irefrac :printf("BP: t = %f dt = %f v = %f w = %f irefrac = %f iexp = %f i = %f\n", t, dt, v, w, irefrac, iexp, i) } DERIVATIVE states { : solve eq for adaptation variable w' = (a*(v-EL) - w)/tauw } NET_RECEIVE (weight) { if (flag == 1) { : beginning of spike v = vpeak w = w + b net_send(spikewidth, 2) net_event(t) :printf("spike: t = %f v = %f w = %f i = %f\n", t, v, w, i) } else if (flag == 2) { : end of spike, beginning of refractory period v = vreset grefrac = gon if (trefrac > spikewidth) { net_send(trefrac-spikewidth, 3) } else { : also the end of the refractory period grefrac = 0 } :printf("refrac: t = %f v = %f w = %f i = %f\n", t, v, w, i) } else if (flag == 3) { : end of refractory period v = vreset grefrac = 0 :printf("end_refrac: t = %f v = %f w = %f i = %f\n", t, v, w, i) } else if (flag == 4) { : watch membrane potential WATCH (v > vspike) 1 } }PyNN-0.7.4/src/neuron/nmodl/alphaisyn.mod0000644000175000017500000000270611736323051021035 0ustar andrewandrew00000000000000TITLE Alpha-function synaptic current, with NET_RECEIVE COMMENT This model works with variable time-step methods (although it may not be very accurate) but at the expense of having to maintain the queues of spike times and weights. Andrew P. Davison, UNIC, CNRS, May 2006 ENDCOMMENT DEFINE MAX_SPIKES 1000 DEFINE CUTOFF 20 NEURON { POINT_PROCESS AlphaISyn RANGE tau, i, q NONSPECIFIC_CURRENT i } UNITS { (nA) = (nanoamp) } PARAMETER { tau = 5 (ms) <1e-9,1e9> } ASSIGNED { i (nA) q onset_times[MAX_SPIKES] (ms) weight_list[MAX_SPIKES] (nA) } INITIAL { i = 0 q = 0 : queue index } BREAKPOINT { LOCAL k, expired_spikes, x i = 0 expired_spikes = 0 FROM k=0 TO q-1 { x = (t - onset_times[k])/tau if (x > CUTOFF) { expired_spikes = expired_spikes + 1 } else { i = i - weight_list[k] * alpha(x) } } update_queue(expired_spikes) } FUNCTION update_queue(n) { LOCAL k :if (n > 0) { printf("Queue changed. 
t = %4.2f onset_times=[",t) } FROM k=0 TO q-n-1 { onset_times[k] = onset_times[k+n] weight_list[k] = weight_list[k+n] :if (n > 0) { printf("%4.2f ",onset_times[k]) } } :if (n > 0) { printf("]\n") } q = q-n } FUNCTION alpha(x) { if (x < 0) { alpha = 0 } else { alpha = x * exp(1 - x) } } NET_RECEIVE(weight (nA)) { onset_times[q] = t weight_list[q] = weight :printf("t = %f, weight = %f\n", t, weight) if (q >= MAX_SPIKES-1) { printf("Error in AlphaSynI. Spike queue is full\n") } else { q = q + 1 } } PyNN-0.7.4/src/neuron/nmodl/reset.mod0000644000175000017500000000122711736323051020164 0ustar andrewandrew00000000000000: Insert in a passive compartment to get an integrate-and-fire neuron : (no refractory period). : Andrew P. Davison. UNIC, CNRS, May 2006. : $Id: reset.mod 888 2011-01-04 15:17:54Z apdavison $ NEURON { POINT_PROCESS Reset RANGE vreset, vspike } UNITS { (mV) = (millivolt) } PARAMETER { vreset = -60 (mV) : reset potential after a spike vspike = 40 (mV) : spike height (mainly for graphical purposes) } ASSIGNED { v (millivolt) } NET_RECEIVE (weight) { if (flag == 1) { v = vreset } else { v = vspike net_send(1e-12,1) : using variable time step, this should allow the spike to be detected using threshold crossing net_event(t) } } PyNN-0.7.4/src/neuron/nmodl/tmisyn.mod0000644000175000017500000001133111736323051020362 0ustar andrewandrew00000000000000: This model is a minor modification of tmgsyn.mod, which was implemented by : Ted Carnevale, and obtained from : http://senselab.med.yale.edu/modeldb/showmodel.asp?model=3815 : The modified version is a current-based, not conductance-based synapse. COMMENT Revised 12/15/2000 in light of a personal communication from Misha Tsodyks that u is incremented _before_ x is converted to y--a point that was not clear in the paper. If u is incremented _after_ x is converted to y, then the first synaptic activation after a long interval of silence will produce smaller and smaller postsynaptic effect as the length of the silent interval increases, eventually becoming vanishingly small. Implementation of a model of short-term facilitation and depression based on the kinetics described in Tsodyks et al. Synchrony generation in recurrent networks with frequency-dependent synapses Journal of Neuroscience 20:RC50:1-5, 2000. Their mechanism represented synapses as current sources. The mechanism implemented here uses a conductance change instead. The basic scheme is x -------> y Instantaneous, spike triggered. Increment is u*x (see discussion of u below). x == fraction of "synaptic resources" that have "recovered" (fraction of xmtr pool that is ready for release, or fraction of postsynaptic channels that are ready to be opened, or some joint function of these two factors) y == fraction of "synaptic resources" that are in the "active state." This is proportional to the number of channels that are open, or the fraction of max synaptic current that is being delivered. tau y -------> z z == fraction of "synaptic resources" that are in the "inactive state" tau_rec z -------> x where x + y + z = 1 The active state y is multiplied by a synaptic weight to compute the actual synaptic conductance (or current, in the original form of the model). In addition, there is a "facilition" term u that governs the fraction of x that is converted to y on each synaptic activation. -------> u Instantaneous, spike triggered, happens _BEFORE_ x is converted to y. Increment is U*(1-u) where U and u both lie in the range 0 - 1. 
tau_facil u -------> decay of facilitation This implementation for NEURON offers the user a parameter u0 that has a default value of 0 but can be used to specify a nonzero initial value for u. When tau_facil = 0, u is supposed to equal U. Note that the synaptic conductance in this mechanism has the same kinetics as y, i.e. decays with time constant tau. This mechanism can receive multiple streams of synaptic input via NetCon objects. Each stream keeps track of its own weight and activation history. The printf() statements are for testing purposes only. ENDCOMMENT NEURON { POINT_PROCESS tmisyn RANGE i RANGE tau, tau_rec, tau_facil, U, u0 NONSPECIFIC_CURRENT i } UNITS { (nA) = (nanoamp) (mV) = (millivolt) } PARAMETER { : tau was the same for inhibitory and excitatory synapses : in the models used by T et al. tau = 3 (ms) < 1e-9, 1e9 > : tau_rec = 100 ms for inhibitory synapses, : 800 ms for excitatory tau_rec = 100 (ms) < 1e-9, 1e9 > : tau_facil = 1000 ms for inhibitory synapses, : 0 ms for excitatory tau_facil = 1000 (ms) < 0, 1e9 > : U = 0.04 for inhibitory synapses, : 0.5 for excitatory : the (1) is needed for the < 0, 1 > to be effective : in limiting the values of U and u0 U = 0.04 (1) < 0, 1 > : initial value for the "facilitation variable" u0 = 0 (1) < 0, 1 > } ASSIGNED { x } STATE { i (nA) } INITIAL { i=0 } BREAKPOINT { SOLVE state METHOD cnexp } DERIVATIVE state { i' = -i/tau } NET_RECEIVE(weight (nA), y, z, u, tsyn (ms)) { INITIAL { : these are in NET_RECEIVE to be per-stream y = 0 z = 0 : u = 0 u = u0 tsyn = t : this header will appear once per stream :printf("t\t t-tsyn\t y\t z\t u\t newu\t i\t dg\t newi\t newy\n") } : first calculate z at event- : based on prior y and z z = z*exp(-(t - tsyn)/tau_rec) z = z + ( y*(exp(-(t - tsyn)/tau) - exp(-(t - tsyn)/tau_rec)) / ((tau/tau_rec)-1) ) : now calc y at event- y = y*exp(-(t - tsyn)/tau) x = 1-y-z : calc u at event-- if (tau_facil > 0) { u = u*exp(-(t - tsyn)/tau_facil) } else { u = U } :printf("%g\t%g\t%g\t%g\t%g", t, t-tsyn, y, z, u) if (tau_facil > 0) { state_discontinuity(u, u + U*(1-u)) } :printf("\t%g\t%g\t%g", u, i, weight*x*u) state_discontinuity(i, i + weight*x*u) state_discontinuity(y, y + x*u) tsyn = t :printf("\t%g\t%g\n", i, y) } PyNN-0.7.4/src/neuron/nmodl/stdwa_symm.mod0000644000175000017500000000366111736323051021235 0ustar andrewandrew00000000000000COMMENT Spike Timing Dependent Weight Adjuster with symmetric functions (i.e. only depends on the absolute value of the time difference, not on its sign. 
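
The weight update implemented below uses the symmetric function

    f(dt) = (1 - dt^2/tau_a^2) * exp(-|dt|/tau_b)

where dt is the interval between pre- and post-synaptic spikes, so that f
gives potentiation for |dt| < tau_a and depression for |dt| > tau_a, scaled
by the step amplitude a and by wmax.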
Andrew Davison, UNIC, CNRS, 2004, 2009 ENDCOMMENT NEURON { POINT_PROCESS StdwaSymm RANGE interval, tlast_pre, tlast_post RANGE deltaw, wmax, f, tau_a, tau_b, a, on RANGE allow_update_on_post POINTER wsyn } ASSIGNED { interval (ms) : since last spike of the other kind tlast_pre (ms) : time of last presynaptic spike tlast_post (ms) : time of last postsynaptic spike f : weight change function deltaw : change in weight wsyn : weight of the synapse tas (ms2) : tau_a squared } INITIAL { interval = 0 tlast_pre = 0 tlast_post = 0 f = 0 deltaw = 0 } PARAMETER { tau_a = 20 (ms) : crossing point from LTP to LTD tau_b = 15 (ms) : decay time constant for exponential part of f wmax = 1 : min and max values of synaptic weight a = 0.001 : step amplitude on = 1 : allows learning to be turned on and off allow_update_on_post = 1 : if this is true, we update the weight on receiving both pre- and post-synaptic spikes : if it is false, weight updates are accumulated and applied only for a pre-synaptic spike } NET_RECEIVE (w) { tas = tau_a * tau_a : do it here in case tau_a has been changed since the last spike if (w >= 0) { : this is a pre-synaptic spike interval = tlast_post - t tlast_pre = t f = (1 - interval*interval/tas) * exp(interval/tau_b) deltaw = deltaw + wmax * a * f } else { : this is a post-synaptic spike interval = t - tlast_pre tlast_post = t f = (1 - interval*interval/tas) * exp(-interval/tau_b) deltaw = deltaw + wmax * a* f } if (on) { if (w >= 0 || allow_update_on_post) { wsyn = wsyn + deltaw if (wsyn > wmax) { wsyn = wmax } if (wsyn < 0) { wsyn = 0 } deltaw = 0.0 } } } PyNN-0.7.4/src/neuron/nmodl/vecstim.mod0000644000175000017500000001161011736323051020511 0ustar andrewandrew00000000000000COMMENT This mechanism emits spike events at the times given in a supplied Vector. Example usage: objref vs vs = new VecStim(.5) vs.play(spikevec) This is a modified version of the original vecstim.mod (author unknown?) which allows multiple vectors to be used sequentially. This saves memory in a long simulation, as the same storage can be reused. The mechanism checks at intervals `ping` whether a new vector has been provided using the play() procedure and if so resets its pointer to the first element in the new vector. Note that any spikes remaining in the first vector will be lost. Any spiketimes in the new vector that are earlier than the current time are ignored. The mechanism actually checks slightly after the ping interval, to avoid play() and the ping check occurring at the same time step but in the wrong order. Extracts from the comments on the original vecstim: The idiom for getting a Vector argument in a model description is encapsulated in the "play" procedure. There are potentially many VecStim instances and so the Vector pointer must be stored in the space allocated for the particular instance when "play" is called. The assigned variable "space" gives us space for a double precision number, 64 bits, which is sufficient to store an opaque pointer. The "element" procedure uses this opaque pointer to make sure that the requested "index" element is within the size of the vector and assigns the "etime" double precision variable to the value of that element. Since index is defined at the model description level it is a double precision variable as well and must be treated as such in the VERBATIM block. An index value of -1 means that no further events should be sent from this instance. Fortunately, space for model data is cleared when it is first allocated. 
So if play is not called, the pointer will be 0 and the test in the element procedure would turn off the VecStim by setting index to -1. Also, because the existence of the first argument is checked in the "play" procedure, one can turn off the VecStim with vs.play() No checking is done if the stimvec is destroyed (when the reference count for the underlying Vector becomes 0). Continued use of the VecStim instance in this case would cause a memory error. So it is up to the user to call vs.play() or to destroy the VecStim instance before running another simulation. The strategy of the INITIAL and NET_RECEIVE blocks is to send a self event (with flag 1) to be delivered at the time specified by the index of the Vector starting at index 0. When the self event is delivered to the NET_RECEIVE block, it causes an immediate input event on every NetCon which has this VecStim as its source. These events, would then be delivered to their targets after the appropriate delay specified for each NetCon. ENDCOMMENT : Vector stream of events NEURON { ARTIFICIAL_CELL VecStim RANGE ping } PARAMETER { ping = 1 (ms) } ASSIGNED { index etime (ms) space } INITIAL { index = 0 element() if (index > 0) { net_send(etime - t, 1) } if (ping > 0) { net_send(ping, 2) } } NET_RECEIVE (w) { if (flag == 1) { net_event(t) element() if (index > 0) { if (etime < t) { printf("Warning in VecStim: spike time (%g ms) before current time (%g ms)\n",etime,t) } else { net_send(etime - t, 1) } } } else if (flag == 2) { : ping - reset index to 0 :printf("flag=2, etime=%g, t=%g, ping=%g, index=%g\n",etime,t,ping,index) if (index == -2) { : play() has been called :printf("Detected new vector\n") index = 0 : the following loop ensures that if the vector : contains spiketimes earlier than the current : time, they are ignored. while (etime < t && index >= 0) { element() :printf("element(): index=%g, etime=%g, t=%g\n",index,etime,t) } if (index > 0) { net_send(etime - t, 1) } } net_send(ping, 2) } } VERBATIM extern double* vector_vec(); extern int vector_capacity(); extern void* vector_arg(); ENDVERBATIM PROCEDURE element() { VERBATIM { void* vv; int i, size; double* px; i = (int)index; if (i >= 0) { vv = *((void**)(&space)); if (vv) { size = vector_capacity(vv); px = vector_vec(vv); if (i < size) { etime = px[i]; index += 1.; } else { index = -1.; } } else { index = -1.; } } } ENDVERBATIM } PROCEDURE play() { VERBATIM void** vv; vv = (void**)(&space); *vv = (void*)0; if (ifarg(1)) { *vv = vector_arg(1); } index = -2; ENDVERBATIM } PyNN-0.7.4/src/neuron/recording.py0000644000175000017500000001270111736323051017555 0ustar andrewandrew00000000000000""" :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. """ import numpy from pyNN import recording from pyNN.neuron import simulator import re from neuron import h recordable_pattern = re.compile(r'((?P
\w+)(\((?P[-+]?[0-9]*\.?[0-9]+)\))?\.)?(?P\w+)') # --- For implementation of record_X()/get_X()/print_X() ----------------------- class Recorder(recording.Recorder): """Encapsulates data and functions related to recording model variables.""" def _record(self, new_ids): """Add the cells in `new_ids` to the set of recorded cells.""" if self.variable == 'spikes': for id in new_ids: id._cell.record(1) elif self.variable == 'v': for id in new_ids: id._cell.record_v(1) elif self.variable == 'gsyn': for id in new_ids: id._cell.record_gsyn("excitatory", 1) id._cell.record_gsyn("inhibitory", 1) if id._cell.excitatory_TM is not None: id._cell.record_gsyn("excitatory_TM", 1) id._cell.record_gsyn("inhibitory_TM", 1) else: for id in new_ids: self._native_record(id) def _reset(self): for id in self.recorded: id._cell.traces = {} id._cell.record(active=False) id._cell.record_v(active=False) for syn_name in id._cell.gsyn_trace: id._cell.record_gsyn(syn_name, active=False) def _native_record(self, id): match = recordable_pattern.match(self.variable) if match: parts = match.groupdict() if parts['section']: section = getattr(id._cell, parts['section']) if parts['location']: segment = section(float(parts['location'])) else: segment = section else: segment = id._cell.source id._cell.traces[self.variable] = vec = h.Vector() vec.record(getattr(segment, "_ref_%s" % parts['var'])) if not id._cell.recording_time: id._cell.record_times = h.Vector() id._cell.record_times.record(h._ref_t) id._cell.recording_time += 1 else: raise Exception("Recording of %s not implemented." % self.variable) def _get(self, gather=False, compatible_output=True, filter=None): """Return the recorded data as a Numpy array.""" # compatible_output is not used, but is needed for compatibility with the nest module. # Does nest really need it? if self.variable == 'spikes': data = numpy.empty((0,2)) for id in self.filter_recorded(filter): spikes = numpy.array(id._cell.spike_times) spikes = spikes[spikes<=simulator.state.t+1e-9] if len(spikes) > 0: new_data = numpy.array([numpy.ones(spikes.shape)*id, spikes]).T data = numpy.concatenate((data, new_data)) elif self.variable == 'v': data = numpy.empty((0,3)) for id in self.filter_recorded(filter): v = numpy.array(id._cell.vtrace) t = numpy.array(id._cell.record_times) new_data = numpy.array([numpy.ones(v.shape)*id, t, v]).T data = numpy.concatenate((data, new_data)) elif self.variable == 'gsyn': data = numpy.empty((0,4)) for id in self.filter_recorded(filter): ge = numpy.array(id._cell.gsyn_trace['excitatory']) gi = numpy.array(id._cell.gsyn_trace['inhibitory']) if 'excitatory_TM' in id._cell.gsyn_trace: ge_TM = numpy.array(id._cell.gsyn_trace['excitatory_TM']) gi_TM = numpy.array(id._cell.gsyn_trace['inhibitory_TM']) if ge.size == 0: ge = ge_TM elif ge.size == ge_TM.size: ge = ge + ge_TM else: raise Exception("Inconsistent conductance array sizes: ge.size=%d, ge_TM.size=%d", (ge.size, ge_TM.size)) if gi.size == 0: gi = gi_TM elif gi.size == gi_TM.size: gi = gi + gi_TM else: raise Exception() t = numpy.array(id._cell.record_times) new_data = numpy.array([numpy.ones(ge.shape)*id, t, ge, gi]).T data = numpy.concatenate((data, new_data)) else: data = numpy.empty((0,3)) for id in self.filter_recorded(filter): var = numpy.array(id._cell.traces[self.variable]) t = numpy.array(id._cell.record_times) new_data = numpy.array([numpy.ones(var.shape)*id, t, var]).T data = numpy.concatenate((data, new_data)) #raise Exception("Recording of %s not implemented." 
% self.variable) if gather and simulator.state.num_processes > 1: data = recording.gather(data) return data def _local_count(self, filter=None): N = {} for id in self.filter_recorded(filter): N[int(id)] = id._cell.spike_times.size() return N simulator.Recorder = Recorder PyNN-0.7.4/src/neuron/nineml.py0000644000175000017500000000647411736323051017075 0ustar andrewandrew00000000000000""" Support cell types defined in 9ML with NEURON. Requires the 9ml2nmodl script to be on the path. Classes: NineMLCell - a single neuron instance NineMLCellType - base class for cell types, not used directly Functions: nineml_cell_type - return a new NineMLCellType subclass Constants: NMODL_DIR - subdirectory to which NMODL mechanisms will be written :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details. """ from __future__ import absolute_import # Not compatible with Python 2.4 import subprocess import neuron from pyNN.nineml.cells import join, _add_prefix, _build_nineml_celltype, NineMLCellType import logging import os h = neuron.h logger = logging.getLogger("PyNN") NMODL_DIR = "nineml_mechanisms" class NineMLCell(object): def __init__(self, **parameters): self.type = parameters.pop("type") self.source_section = h.Section() self.source = getattr(h, self.type.model_name)(0.5, sec=self.source_section) for param, value in parameters.items(): setattr(self.source, param, value) # for recording self.spike_times = h.Vector(0) self.traces = {} self.recording_time = False def __getattr__(self, name): try: return self.__getattribute__(name) except AttributeError: if name in self.type.synapse_types: return self.source # source is also target else: raise AttributeError("'NineMLCell' object has no attribute or synapse type '%s'" % name) def record(self, active): if active: rec = h.NetCon(self.source, None) rec.record(self.spike_times) else: self.spike_times = h.Vector(0) def memb_init(self): # this is a bit of a hack for var in self.type.recordable: if hasattr(self, "%s_init" % var): initial_value = getattr(self, "%s_init" % var) logger.debug("Initialising %s to %g" % (var, initial_value)) setattr(self.source, var, initial_value) def _compile_nmodl(nineml_component, weight_variables): # weight variables should really be within component """ Generate NMODL code for the 9ML component, run "nrnivmodl" and then load the mechanisms into NEURON. """ if not os.path.exists(NMODL_DIR): os.makedirs(NMODL_DIR) cwd = os.getcwd() os.chdir(NMODL_DIR) xml_file = "%s.xml" % nineml_component.name logger.debug("Writing NineML component to %s" % xml_file) nineml_component.write(xml_file) nineml2nmodl = __import__("9ml2nmodl") nineml2nmodl.write_nmodl(xml_file, weight_variables) # weight variables should really come from xml file p = subprocess.check_call(["nrnivmodl"]) os.chdir(cwd) neuron.load_mechanisms(NMODL_DIR) def nineml_cell_type(name, neuron_model, port_map={}, weight_variables={}, **synapse_models): """ Return a new NineMLCellType subclass. 
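
    Usage sketch (all names below are illustrative, not part of this module):

        CellClass = nineml_cell_type("MyCellType", my_neuron_component,
                                     port_map={'excitatory': 'Isyn'},
                                     weight_variables={'excitatory': 'q'},
                                     excitatory=my_synapse_component)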
""" return _build_nineml_celltype(name, (NineMLCellType,), {'neuron_model': neuron_model, 'synapse_models': synapse_models, 'port_map': port_map, 'weight_variables': weight_variables}) PyNN-0.7.4/README0000644000175000017500000000100411736323052014005 0ustar andrewandrew00000000000000 Installation instructions ************************* python setup.py install SVN instructions **************** Fetching the source: svn co https://neuralensemble.org/svn/PyNN/trunk pyNN ** Anonymous checkout is allowed Committing Changes: In the svn dir: svn commit ** svn commit requires a neuralensemble.org account, or a FACETS account with SVN access. Trac **** http://neuralensemble.org/PyNN :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS. :license: CeCILL, see LICENSE for details.PyNN-0.7.4/LICENSE0000644000175000017500000005115111736323052014142 0ustar andrewandrew00000000000000 CeCILL FREE SOFTWARE LICENSE AGREEMENT Notice This Agreement is a Free Software license agreement that is the result of discussions between its authors in order to ensure compliance with the two main principles guiding its drafting: * firstly, compliance with the principles governing the distribution of Free Software: access to source code, broad rights granted to users, * secondly, the election of a governing law, French law, with which it is conformant, both as regards the law of torts and intellectual property law, and the protection that it offers to both authors and holders of the economic rights over software. The authors of the CeCILL (for Ce[a] C[nrs] I[nria] L[ogiciel] L[ibre]) license are: Commissariat l'Energie Atomique - CEA, a public scientific, technical and industrial research establishment, having its principal place of business at 25 rue Leblanc, immeuble Le Ponant D, 75015 Paris, France. Centre National de la Recherche Scientifique - CNRS, a public scientific and technological establishment, having its principal place of business at 3 rue Michel-Ange, 75794 Paris cedex 16, France. Institut National de Recherche en Informatique et en Automatique - INRIA, a public scientific and technological establishment, having its principal place of business at Domaine de Voluceau, Rocquencourt, BP 105, 78153 Le Chesnay cedex, France. Preamble The purpose of this Free Software license agreement is to grant users the right to modify and redistribute the software governed by this license within the framework of an open source distribution model. The exercising of these rights is conditional upon certain obligations for users so as to preserve this status for all subsequent redistributions. In consideration of access to the source code and the rights to copy, modify and redistribute granted by the license, users are provided only with a limited warranty and the software's author, the holder of the economic rights, and the successive licensors only have limited liability. In this respect, the risks associated with loading, using, modifying and/or developing or reproducing the software by the user are brought to the user's attention, given its Free Software status, which may make it complicated to use, with the result that its use is reserved for developers and experienced professionals having in-depth computer knowledge. Users are therefore encouraged to load and test the suitability of the software as regards their requirements in conditions enabling the security of their systems and/or data to be ensured and, more generally, to use and operate it in the same conditions of security. 
This Agreement may be freely reproduced and published, provided it is not altered, and that no provisions are either added or removed herefrom. This Agreement may apply to any or all software for which the holder of the economic rights decides to submit the use thereof to its provisions. Article 1 - DEFINITIONS For the purpose of this Agreement, when the following expressions commence with a capital letter, they shall have the following meaning: Agreement: means this license agreement, and its possible subsequent versions and annexes. Software: means the software in its Object Code and/or Source Code form and, where applicable, its documentation, "as is" when the Licensee accepts the Agreement. Initial Software: means the Software in its Source Code and possibly its Object Code form and, where applicable, its documentation, "as is" when it is first distributed under the terms and conditions of the Agreement. Modified Software: means the Software modified by at least one Contribution. Source Code: means all the Software's instructions and program lines to which access is required so as to modify the Software. Object Code: means the binary files originating from the compilation of the Source Code. Holder: means the holder(s) of the economic rights over the Initial Software. Licensee: means the Software user(s) having accepted the Agreement. Contributor: means a Licensee having made at least one Contribution. Licensor: means the Holder, or any other individual or legal entity, who distributes the Software under the Agreement. Contribution: means any or all modifications, corrections, translations, adaptations and/or new functions integrated into the Software by any or all Contributors, as well as any or all Internal Modules. Module: means a set of sources files including their documentation that enables supplementary functions or services in addition to those offered by the Software. External Module: means any or all Modules, not derived from the Software, so that this Module and the Software run in separate address spaces, with one calling the other when they are run. Internal Module: means any or all Module, connected to the Software so that they both execute in the same address space. GNU GPL: means the GNU General Public License version 2 or any subsequent version, as published by the Free Software Foundation Inc. Parties: mean both the Licensee and the Licensor. These expressions may be used both in singular and plural form. Article 2 - PURPOSE The purpose of the Agreement is the grant by the Licensor to the Licensee of a non-exclusive, transferable and worldwide license for the Software as set forth in Article 5 hereinafter for the whole term of the protection granted by the rights over said Software. Article 3 - ACCEPTANCE 3.1 The Licensee shall be deemed as having accepted the terms and conditions of this Agreement upon the occurrence of the first of the following events: * (i) loading the Software by any or all means, notably, by downloading from a remote server, or by loading from a physical medium; * (ii) the first time the Licensee exercises any of the rights granted hereunder. 3.2 One copy of the Agreement, containing a notice relating to the characteristics of the Software, to the limited warranty, and to the fact that its use is restricted to experienced users has been provided to the Licensee prior to its acceptance as set forth in Article 3.1 hereinabove, and the Licensee hereby acknowledges that it has read and understood it. 
Article 4 - EFFECTIVE DATE AND TERM 4.1 EFFECTIVE DATE The Agreement shall become effective on the date when it is accepted by the Licensee as set forth in Article 3.1. 4.2 TERM The Agreement shall remain in force for the entire legal term of protection of the economic rights over the Software. Article 5 - SCOPE OF RIGHTS GRANTED The Licensor hereby grants to the Licensee, who accepts, the following rights over the Software for any or all use, and for the term of the Agreement, on the basis of the terms and conditions set forth hereinafter. Besides, if the Licensor owns or comes to own one or more patents protecting all or part of the functions of the Software or of its components, the Licensor undertakes not to enforce the rights granted by these patents against successive Licensees using, exploiting or modifying the Software. If these patents are transferred, the Licensor undertakes to have the transferees subscribe to the obligations set forth in this paragraph. 5.1 RIGHT OF USE The Licensee is authorized to use the Software, without any limitation as to its fields of application, with it being hereinafter specified that this comprises: 1. permanent or temporary reproduction of all or part of the Software by any or all means and in any or all form. 2. loading, displaying, running, or storing the Software on any or all medium. 3. entitlement to observe, study or test its operation so as to determine the ideas and principles behind any or all constituent elements of said Software. This shall apply when the Licensee carries out any or all loading, displaying, running, transmission or storage operation as regards the Software, that it is entitled to carry out hereunder. 5.2 ENTITLEMENT TO MAKE CONTRIBUTIONS The right to make Contributions includes the right to translate, adapt, arrange, or make any or all modifications to the Software, and the right to reproduce the resulting software. The Licensee is authorized to make any or all Contributions to the Software provided that it includes an explicit notice that it is the author of said Contribution and indicates the date of the creation thereof. 5.3 RIGHT OF DISTRIBUTION In particular, the right of distribution includes the right to publish, transmit and communicate the Software to the general public on any or all medium, and by any or all means, and the right to market, either in consideration of a fee, or free of charge, one or more copies of the Software by any means. The Licensee is further authorized to distribute copies of the modified or unmodified Software to third parties according to the terms and conditions set forth hereinafter. 5.3.1 DISTRIBUTION OF SOFTWARE WITHOUT MODIFICATION The Licensee is authorized to distribute true copies of the Software in Source Code or Object Code form, provided that said distribution complies with all the provisions of the Agreement and is accompanied by: 1. a copy of the Agreement, 2. a notice relating to the limitation of both the Licensor's warranty and liability as set forth in Articles 8 and 9, and that, in the event that only the Object Code of the Software is redistributed, the Licensee allows future Licensees unhindered access to the full Source Code of the Software by indicating how to access it, it being understood that the additional cost of acquiring the Source Code shall not exceed the cost of transferring the data. 
5.3.2 DISTRIBUTION OF MODIFIED SOFTWARE When the Licensee makes a Contribution to the Software, the terms and conditions for the distribution of the resulting Modified Software become subject to all the provisions of this Agreement. The Licensee is authorized to distribute the Modified Software, in source code or object code form, provided that said distribution complies with all the provisions of the Agreement and is accompanied by: 1. a copy of the Agreement, 2. a notice relating to the limitation of both the Licensor's warranty and liability as set forth in Articles 8 and 9, and that, in the event that only the object code of the Modified Software is redistributed, the Licensee allows future Licensees unhindered access to the full source code of the Modified Software by indicating how to access it, it being understood that the additional cost of acquiring the source code shall not exceed the cost of transferring the data. 5.3.3 DISTRIBUTION OF EXTERNAL MODULES When the Licensee has developed an External Module, the terms and conditions of this Agreement do not apply to said External Module, that may be distributed under a separate license agreement. 5.3.4 COMPATIBILITY WITH THE GNU GPL The Licensee can include a code that is subject to the provisions of one of the versions of the GNU GPL in the Modified or unmodified Software, and distribute that entire code under the terms of the same version of the GNU GPL. The Licensee can include the Modified or unmodified Software in a code that is subject to the provisions of one of the versions of the GNU GPL, and distribute that entire code under the terms of the same version of the GNU GPL. Article 6 - INTELLECTUAL PROPERTY 6.1 OVER THE INITIAL SOFTWARE The Holder owns the economic rights over the Initial Software. Any or all use of the Initial Software is subject to compliance with the terms and conditions under which the Holder has elected to distribute its work and no one shall be entitled to modify the terms and conditions for the distribution of said Initial Software. The Holder undertakes that the Initial Software will remain ruled at least by this Agreement, for the duration set forth in Article 4.2. 6.2 OVER THE CONTRIBUTIONS The Licensee who develops a Contribution is the owner of the intellectual property rights over this Contribution as defined by applicable law. 6.3 OVER THE EXTERNAL MODULES The Licensee who develops an External Module is the owner of the intellectual property rights over this External Module as defined by applicable law and is free to choose the type of agreement that shall govern its distribution. 6.4 JOINT PROVISIONS The Licensee expressly undertakes: 1. not to remove, or modify, in any manner, the intellectual property notices attached to the Software; 2. to reproduce said notices, in an identical manner, in the copies of the Software modified or not. The Licensee undertakes not to directly or indirectly infringe the intellectual property rights of the Holder and/or Contributors on the Software and to take, where applicable, vis--vis its staff, any and all measures required to ensure respect of said intellectual property rights of the Holder and/or Contributors. Article 7 - RELATED SERVICES 7.1 Under no circumstances shall the Agreement oblige the Licensor to provide technical assistance or maintenance services for the Software. However, the Licensor is entitled to offer this type of services. 
The terms and conditions of such technical assistance, and/or such maintenance, shall be set forth in a separate instrument. Only the Licensor offering said maintenance and/or technical assistance services shall incur liability therefor. 7.2 Similarly, any Licensor is entitled to offer to its licensees, under its sole responsibility, a warranty, that shall only be binding upon itself, for the redistribution of the Software and/or the Modified Software, under terms and conditions that it is free to decide. Said warranty, and the financial terms and conditions of its application, shall be subject of a separate instrument executed between the Licensor and the Licensee. Article 8 - LIABILITY 8.1 Subject to the provisions of Article 8.2, the Licensee shall be entitled to claim compensation for any direct loss it may have suffered from the Software as a result of a fault on the part of the relevant Licensor, subject to providing evidence thereof. 8.2 The Licensor's liability is limited to the commitments made under this Agreement and shall not be incurred as a result of in particular: (i) loss due the Licensee's total or partial failure to fulfill its obligations, (ii) direct or consequential loss that is suffered by the Licensee due to the use or performance of the Software, and (iii) more generally, any consequential loss. In particular the Parties expressly agree that any or all pecuniary or business loss (i.e. loss of data, loss of profits, operating loss, loss of customers or orders, opportunity cost, any disturbance to business activities) or any or all legal proceedings instituted against the Licensee by a third party, shall constitute consequential loss and shall not provide entitlement to any or all compensation from the Licensor. Article 9 - WARRANTY 9.1 The Licensee acknowledges that the scientific and technical state-of-the-art when the Software was distributed did not enable all possible uses to be tested and verified, nor for the presence of possible defects to be detected. In this respect, the Licensee's attention has been drawn to the risks associated with loading, using, modifying and/or developing and reproducing the Software which are reserved for experienced users. The Licensee shall be responsible for verifying, by any or all means, the suitability of the product for its requirements, its good working order, and for ensuring that it shall not cause damage to either persons or properties. 9.2 The Licensor hereby represents, in good faith, that it is entitled to grant all the rights over the Software (including in particular the rights set forth in Article 5). 9.3 The Licensee acknowledges that the Software is supplied "as is" by the Licensor without any other express or tacit warranty, other than that provided for in Article 9.2 and, in particular, without any warranty as to its commercial value, its secured, safe, innovative or relevant nature. Specifically, the Licensor does not warrant that the Software is free from any error, that it will operate without interruption, that it will be compatible with the Licensee's own equipment and software configuration, nor that it will meet the Licensee's requirements. 9.4 The Licensor does not either expressly or tacitly warrant that the Software does not infringe any third party intellectual property right relating to a patent, software or any other property right. 
Therefore, the Licensor disclaims any and all liability towards the Licensee arising out of any or all proceedings for infringement that may be instituted in respect of the use, modification and redistribution of the Software. Nevertheless, should such proceedings be instituted against the Licensee, the Licensor shall provide it with technical and legal assistance for its defense. Such technical and legal assistance shall be decided on a case-by-case basis between the relevant Licensor and the Licensee pursuant to a memorandum of understanding. The Licensor disclaims any and all liability as regards the Licensee's use of the name of the Software. No warranty is given as regards the existence of prior rights over the name of the Software or as regards the existence of a trademark. Article 10 - TERMINATION 10.1 In the event of a breach by the Licensee of its obligations hereunder, the Licensor may automatically terminate this Agreement thirty (30) days after notice has been sent to the Licensee and has remained ineffective. 10.2 A Licensee whose Agreement is terminated shall no longer be authorized to use, modify or distribute the Software. However, any licenses that it may have granted prior to termination of the Agreement shall remain valid subject to their having been granted in compliance with the terms and conditions hereof. Article 11 - MISCELLANEOUS 11.1 EXCUSABLE EVENTS Neither Party shall be liable for any or all delay, or failure to perform the Agreement, that may be attributable to an event of force majeure, an act of God or an outside cause, such as defective functioning or interruptions of the electricity or telecommunications networks, network paralysis following a virus attack, intervention by government authorities, natural disasters, water damage, earthquakes, fire, explosions, strikes and labor unrest, war, etc. 11.2 Any failure by either Party, on one or more occasions, to invoke one or more of the provisions hereof, shall under no circumstances be interpreted as being a waiver by the interested Party of its right to invoke said provision(s) subsequently. 11.3 The Agreement cancels and replaces any or all previous agreements, whether written or oral, between the Parties and having the same purpose, and constitutes the entirety of the agreement between said Parties concerning said purpose. No supplement or modification to the terms and conditions hereof shall be effective as between the Parties unless it is made in writing and signed by their duly authorized representatives. 11.4 In the event that one or more of the provisions hereof were to conflict with a current or future applicable act or legislative text, said act or legislative text shall prevail, and the Parties shall make the necessary amendments so as to comply with said act or legislative text. All other provisions shall remain effective. Similarly, invalidity of a provision of the Agreement, for any reason whatsoever, shall not cause the Agreement as a whole to be invalid. 11.5 LANGUAGE The Agreement is drafted in both French and English and both versions are deemed authentic. Article 12 - NEW VERSIONS OF THE AGREEMENT 12.1 Any person is authorized to duplicate and distribute copies of this Agreement. 12.2 So as to ensure coherence, the wording of this Agreement is protected and may only be modified by the authors of the License, who reserve the right to periodically publish updates or new versions of the Agreement, each with a separate number. 
These subsequent versions may address new issues encountered by Free Software. 12.3 Any Software distributed under a given version of the Agreement may only be subsequently distributed under the same version of the Agreement or a subsequent version, subject to the provisions of Article 5.3.4. Article 13 - GOVERNING LAW AND JURISDICTION 13.1 The Agreement is governed by French law. The Parties agree to endeavor to seek an amicable solution to any disagreements or disputes that may arise during the performance of the Agreement. 13.2 Failing an amicable solution within two (2) months as from their occurrence, and unless emergency proceedings are necessary, the disagreements or disputes shall be referred to the Paris Courts having jurisdiction, by the more diligent Party. Version 2.0 dated 2006-09-05. PyNN-0.7.4/PKG-INFO0000644000175000017500000000447611774605211014244 0ustar andrewandrew00000000000000Metadata-Version: 1.0 Name: PyNN Version: 0.7.4 Summary: A Python package for simulator-independent specification of neuronal network models Home-page: http://neuralensemble.org/PyNN/ Author: The PyNN team Author-email: pynn@neuralensemble.org License: CeCILL http://www.cecill.info Description: In other words, you can write the code for a model once, using the PyNN API and the Python programming language, and then run it without modification on any simulator that PyNN supports (currently NEURON, NEST, PCSIM and Brian). The API has two parts, a low-level, procedural API (functions ``create()``, ``connect()``, ``set()``, ``record()``, ``record_v()``), and a high-level, object-oriented API (classes ``Population`` and ``Projection``, which have methods like ``set()``, ``record()``, ``setWeights()``, etc.). The low-level API is good for small networks, and perhaps gives more flexibility. The high-level API is good for hiding the details and the book-keeping, allowing you to concentrate on the overall structure of your model. The other thing that is required to write a model once and run it on multiple simulators is standard cell and synapse models. PyNN translates standard cell-model names and parameter names into simulator-specific names, e.g. standard model ``IF_curr_alpha`` is ``iaf_neuron`` in NEST and ``StandardIF`` in NEURON, while ``SpikeSourcePoisson`` is a ``poisson_generator`` in NEST and a ``NetStim`` in NEURON. Even if you don't wish to run simulations on multiple simulators, you may benefit from writing your simulation code using PyNN's powerful, high-level interface. In this case, you can use any neuron or synapse model supported by your simulator, and are not restricted to the standard models. PyNN is a work in progress, but is already being used for several large-scale simulation projects. 
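For example, the same stimulus-and-neuron motif can be written with either API. The sketch below uses illustrative names and parameter values and assumes the NEST backend; any other supported simulator module can be substituted::

            from pyNN.nest import *

            setup()

            # low-level, procedural API
            src = create(SpikeSourcePoisson, {'rate': 100.0})
            cell = create(IF_curr_alpha)
            connect(src, cell, weight=0.5, delay=1.0)
            record_v(cell, "cell.v")

            # high-level, object-oriented API
            inputs = Population(10, SpikeSourcePoisson, {'rate': 100.0})
            outputs = Population(10, IF_curr_alpha)
            prj = Projection(inputs, outputs, AllToAllConnector())
            prj.setWeights(0.5)
            outputs.record_v()

            run(100.0)
            end()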
Keywords: computational neuroscience simulation neuron nest pcsim brian neuroml Platform: UNKNOWN Classifier: Development Status :: 4 - Beta Classifier: Environment :: Console Classifier: Intended Audience :: Science/Research Classifier: License :: Other/Proprietary License Classifier: Natural Language :: English Classifier: Operating System :: OS Independent Classifier: Programming Language :: Python Classifier: Topic :: Scientific/Engineering PyNN-0.7.4/examples/0000755000175000017500000000000011774605211014752 5ustar andrewandrew00000000000000PyNN-0.7.4/examples/SpikeSourcePoisson.py0000644000175000017500000000151111736323052021127 0ustar andrewandrew00000000000000""" Simple test of recording a SpikeSourcePoisson object Andrew Davison, UNIC, CNRS September 2006 $Id: SpikeSourcePoisson.py 647 2009-06-09 12:37:02Z apdavison $ """ from pyNN.utility import get_script_args simulator_name = get_script_args(1)[0] exec("from pyNN.%s import *" % simulator_name) setup(timestep=0.01, min_delay=0.1) poissonsource = Population(10, SpikeSourcePoisson, {'rate' : 100.0, 'duration' : 100.0, 'start' : 100.0}) poissonsource.record() run(300.0) print "Mean spike count:", poissonsource.meanSpikeCount() print "First few spikes:" all_spikes = poissonsource.getSpikes() first_id = all_spikes[0,0] for i, cell in enumerate(poissonsource): print "cell #%d: %s" % (cell, all_spikes[all_spikes[:,0]==i][:,1][:5]) poissonsource.printSpikes("Results/SpikeSourcePoisson_%s.ras" % simulator_name) end() PyNN-0.7.4/examples/StepCurrentSource.py0000644000175000017500000000114411736323052020761 0ustar andrewandrew00000000000000""" Simple test of injecting time-varying current into a cell Andrew Davison, UNIC, CNRS May 2009 $Id: StepCurrentSource.py 647 2009-06-09 12:37:02Z apdavison $ """ from pyNN.utility import get_script_args simulator_name = get_script_args(1)[0] exec("from pyNN.%s import *" % simulator_name) setup() cell = create(IF_curr_exp, {'v_thresh': -55.0, 'tau_refrac': 5.0}) current_source = StepCurrentSource([50.0, 110.0, 150.0, 210.0], [0.4, 0.6, -0.2, 0.2]) cell.inject(current_source) record_v(cell, "Results/StepCurrentSource_%s.v" % simulator_name) run(250.0) end() PyNN-0.7.4/examples/simpleNetwork.py0000644000175000017500000000250511736323052020167 0ustar andrewandrew00000000000000""" Simple network with a Poisson spike source projecting to a pair of IF_curr_alpha neurons Andrew Davison, UNIC, CNRS August 2006 $Id: simpleNetwork.py 933 2011-02-14 18:41:49Z apdavison $ """ import numpy from pyNN.utility import get_script_args simulator_name = get_script_args(1)[0] exec("from pyNN.%s import *" % simulator_name) tstop = 1000.0 rate = 100.0 setup(timestep=0.1, min_delay=0.2, max_delay=1.0) cell_params = {'tau_refrac':2.0,'v_thresh':-50.0,'tau_syn_E':2.0, 'tau_syn_I':2.0} output_population = Population(2, IF_curr_alpha, cell_params, label="output") number = int(2*tstop*rate/1000.0) numpy.random.seed(26278342) spike_times = numpy.add.accumulate(numpy.random.exponential(1000.0/rate, size=number)) assert spike_times.max() > tstop print spike_times.min() input_population = Population(1, SpikeSourceArray, {'spike_times': spike_times}, label="input") projection = Projection(input_population, output_population, AllToAllConnector()) projection.setWeights(1.0) input_population.record() output_population.record() output_population.record_v() run(tstop) output_population.printSpikes("Results/simpleNetwork_output_%s.ras" % simulator_name) input_population.printSpikes("Results/simpleNetwork_input_%s.ras" % simulator_name) 
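# The .ras and .v files written by this script are plain text, with
# '#'-prefixed metadata lines followed by one row per event, so they can be
# read back with numpy -- a minimal sketch, kept as a comment (the exact
# column layout can vary between PyNN versions; check the metadata lines):
#
#   spikes = numpy.loadtxt("Results/simpleNetwork_output_%s.ras" % simulator_name)
#   # numpy.loadtxt skips the '#' lines; each remaining row is one recorded spike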
output_population.print_v("Results/simpleNetwork_%s.v" % simulator_name) end() PyNN-0.7.4/examples/simple_STDP.py0000644000175000017500000000274211736323052017452 0ustar andrewandrew00000000000000""" Very simple STDP test Andrew Davison, UNIC, CNRS January 2008 $Id: simple_STDP.py 647 2009-06-09 12:37:02Z apdavison $ """ import numpy from pyNN.utility import get_script_args sim_name = get_script_args(1)[0] exec("from pyNN.%s import *" % sim_name) setup(timestep=0.001, min_delay=0.1, max_delay=1.0, debug=True, quit_on_end=False) p1 = Population(1, SpikeSourceArray, {'spike_times': numpy.arange(1, 50, 1.0)}) p2 = Population(1, IF_cond_exp) stdp_model = STDPMechanism(timing_dependence=SpikePairRule(tau_plus=20.0, tau_minus=20.0), weight_dependence=AdditiveWeightDependence(w_min=0, w_max=0.04, A_plus=0.01, A_minus=0.012)) connection_method = AllToAllConnector(weights=0.024, delays=0.2) prj = Projection(p1, p2, method=connection_method, synapse_dynamics=SynapseDynamics(slow=stdp_model)) p1.record() p2.record() p2.record_v() try: p2.record_gsyn() except Exception: pass t = [] w = [] for i in range(60): t.append(run(1.0)) w.extend(prj.getWeights()) p1.printSpikes("Results/simple_STDP_1_%s.ras" % sim_name) p2.printSpikes("Results/simple_STDP_2_%s.ras" % sim_name) p2.print_v("Results/simple_STDP_%s.v" % sim_name) try: p2.print_gsyn("Results/simple_STDP_%s.gsyn" % sim_name) except Exception: pass print w f = open("Results/simple_STDP_%s.w" % sim_name, 'w') f.write("\n".join([str(ww) for ww in w])) f.close() end() PyNN-0.7.4/examples/IF_cond_alpha.py0000755000175000017500000000232711736323052017777 0ustar andrewandrew00000000000000""" A single IF neuron with alpha-function shaped, conductance-based synapses, fed by two spike sources. Run as: $ python IF_cond_alpha.py where is 'neuron', 'nest', etc Andrew Davison, UNIC, CNRS May 2006 $Id: IF_cond_alpha.py 700 2010-01-27 15:05:00Z apdavison $ """ from pyNN.utility import get_script_args simulator_name = get_script_args(1)[0] exec("from pyNN.%s import *" % simulator_name) setup(timestep=0.1, min_delay=0.1, max_delay=4.0) ifcell = create(IF_cond_alpha, {'i_offset' : 0.1, 'tau_refrac' : 3.0, 'v_thresh' : -51.0, 'tau_syn_E' : 2.0, 'tau_syn_I': 5.0, 'v_reset' : -70.0, 'e_rev_E' : 0., 'e_rev_I' : -80.}) spike_sourceE = create(SpikeSourceArray, {'spike_times': [float(i) for i in range(5,105,10)]}) spike_sourceI = create(SpikeSourceArray, {'spike_times': [float(i) for i in range(155,255,10)]}) connE = connect(spike_sourceE, ifcell, weight=0.006, synapse_type='excitatory',delay=2.0) connI = connect(spike_sourceI, ifcell, weight=0.02, synapse_type='inhibitory',delay=4.0) record_v(ifcell, "Results/IF_cond_alpha_%s.v" % simulator_name) run(200.0) end() PyNN-0.7.4/examples/nineml_neuron.py0000644000175000017500000000447611736323052020205 0ustar andrewandrew00000000000000""" Example of using a cell type defined in 9ML with pyNN.neuron """ import sys from os.path import abspath, realpath, join import nineml root = abspath(join(realpath(nineml.__path__[0]), "../../..")) sys.path.append(join(root, "lib9ml/python/examples/AL")) sys.path.append(join(root, "code_generation/nmodl")) leaky_iaf = __import__("leaky_iaf") coba_synapse = __import__("coba_synapse") import pyNN.neuron as sim from pyNN.neuron.nineml import nineml_cell_type from pyNN.utility import init_logging from copy import deepcopy init_logging(None, debug=True) sim.setup(timestep=0.1, min_delay=0.1) celltype_cls = nineml_cell_type("if_cond_exp", leaky_iaf.c1, excitatory=coba_synapse.c1, 
inhibitory=deepcopy(coba_synapse.c1), port_map={ 'excitatory': [('V', 'V'), ('Isyn', 'Isyn')], 'inhibitory': [('V', 'V'), ('Isyn', 'Isyn')] }, weight_variables={ 'excitatory': 'q', 'inhibitory': 'q' }) parameters = { 'C': 1.0, 'gL': 50.0, 't_ref': 5.0, 'excitatory_tau': 2.0, 'inhibitory_tau': 5.0, 'excitatory_E': 0.0, 'inhibitory_E': -70.0, 'theta': -50.0, 'vL': -65.0, 'V_reset': -65.0 } cells = sim.Population(1, celltype_cls, parameters) cells.initialize('V', parameters['vL']) cells.initialize('t_spike', -1e99) # neuron not refractory at start cells.initialize('regime', 1002) # temporary hack input = sim.Population(2, sim.SpikeSourcePoisson, {'rate': 100}) connector = sim.OneToOneConnector(weights=1.0, delays=0.5) conn = [sim.Projection(input[0:1], cells, connector, target='excitatory'), sim.Projection(input[1:2], cells, connector, target='inhibitory')] cells._record('V') cells._record('excitatory_g') cells._record('inhibitory_g') cells.record() sim.run(100.0) cells.recorders['V'].write("Results/nineml_neuron.V", filter=[cells[0]]) cells.recorders['excitatory_g'].write("Results/nineml_neuron.g_exc", filter=[cells[0]]) cells.recorders['inhibitory_g'].write("Results/nineml_neuron.g_inh", filter=[cells[0]]) sim.end() PyNN-0.7.4/examples/VAbenchmarks3.py0000644000175000017500000001644211736323052017760 0ustar andrewandrew00000000000000# coding: utf-8 """ An implementation of benchmarks 1 and 2 from Brette et al. (2007) Journal of Computational Neuroscience 23: 349-398 The IF network is based on the CUBA and COBA models of Vogels & Abbott (J. Neurosci, 2005). The model consists of a network of excitatory and inhibitory neurons, connected via current-based "exponential" synapses (instantaneous rise, exponential decay). This version demonstrates another alternative way to set-up the network, using Assemblies. Andrew Davison, UNIC, CNRS March 2010 $Id: $ """ import os import socket from math import * from pyNN.utility import get_script_args, Timer usage = """Usage: python VAbenchmarks.py is either neuron, nest, brian or pcsim is either CUBA or COBA.""" simulator_name, benchmark = get_script_args(2, usage) exec("from pyNN.%s import *" % simulator_name) from pyNN.random import NumpyRNG, RandomDistribution timer = Timer() # === Define parameters ======================================================== threads = 1 rngseed = 98765 parallel_safe = True n = 4000 # number of cells r_ei = 4.0 # number of excitatory cells:number of inhibitory cells pconn = 0.02 # connection probability stim_dur = 50. # (ms) duration of random stimulation rate = 100. # (Hz) frequency of the random stimulation dt = 0.1 # (ms) simulation timestep tstop = 1000 # (ms) simulaton duration delay = 0.2 # Cell parameters area = 20000. # (µm²) tau_m = 20. # (ms) cm = 1. # (µF/cm²) g_leak = 5e-5 # (S/cm²) if benchmark == "COBA": E_leak = -60. # (mV) elif benchmark == "CUBA": E_leak = -49. # (mV) v_thresh = -50. # (mV) v_reset = -60. # (mV) t_refrac = 5. # (ms) (clamped at v_reset) v_mean = -60. # (mV) 'mean' membrane potential, for calculating CUBA weights tau_exc = 5. # (ms) tau_inh = 10. # (ms) # Synapse parameters if benchmark == "COBA": Gexc = 4. # (nS) Ginh = 51. # (nS) elif benchmark == "CUBA": Gexc = 0.27 # (nS) #Those weights should be similar to the COBA weights Ginh = 4.5 # (nS) # but the delpolarising drift should be taken into account Erev_exc = 0. # (mV) Erev_inh = -80. # (mV) ### what is the synaptic delay??? 
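# Unit-conversion sanity check for the derived parameters computed in the
# next section (pure arithmetic on the values defined above):
#   area  = 20000 µm² * 1e-8             = 2e-4 cm²
#   cm    = 1 µF/cm² * 2e-4 cm² * 1000   = 0.2 nF
#   Rm    = 1e-6/(5e-5 S/cm² * 2e-4 cm²) = 100 MΩ
#   tau_m = Rm*cm = 100 MΩ * 0.2 nF      = 20 ms, equal to tau_m above,
# which is exactly what the `assert tau_m == cm*Rm` check below relies on.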
# === Calculate derived parameters ============================================= area = area*1e-8 # convert to cm² cm = cm*area*1000 # convert to nF Rm = 1e-6/(g_leak*area) # membrane resistance in MΩ assert tau_m == cm*Rm # just to check n_exc = int(round((n*r_ei/(1+r_ei)))) # number of excitatory cells n_inh = n - n_exc # number of inhibitory cells if benchmark == "COBA": celltype = IF_cond_exp w_exc = Gexc*1e-3 # We convert conductances to uS w_inh = Ginh*1e-3 elif benchmark == "CUBA": celltype = IF_curr_exp w_exc = 1e-3*Gexc*(Erev_exc - v_mean) # (nA) weight of excitatory synapses w_inh = 1e-3*Ginh*(Erev_inh - v_mean) # (nA) assert w_exc > 0; assert w_inh < 0 # === Build the network ======================================================== extra = {'threads' : threads, 'filename': "va3_%s.xml" % benchmark, 'label': 'VA3'} if simulator_name == "neuroml": extra["file"] = "VAbenchmarks.xml" node_id = setup(timestep=dt, min_delay=delay, max_delay=delay, **extra) np = num_processes() host_name = socket.gethostname() print "Host #%d is on %s" % (node_id+1, host_name) print "%s Initialising the simulator with %d thread(s)..." % (node_id, extra['threads']) cell_params = { 'tau_m' : tau_m, 'tau_syn_E' : tau_exc, 'tau_syn_I' : tau_inh, 'v_rest' : E_leak, 'v_reset' : v_reset, 'v_thresh' : v_thresh, 'cm' : cm, 'tau_refrac' : t_refrac} if (benchmark == "COBA"): cell_params['e_rev_E'] = Erev_exc cell_params['e_rev_I'] = Erev_inh timer.start() print "%s Creating cell populations..." % node_id exc_cells = Population(n_exc, celltype, cell_params, label="Excitatory cells") inh_cells = Population(n_inh, celltype, cell_params, label="Inhibitory cells") all_cells = exc_cells + inh_cells if benchmark == "COBA": ext_stim = Population((20,), SpikeSourcePoisson,{'rate' : rate, 'duration' : stim_dur}, label="expoisson") rconn = 0.01 ext_conn = FixedProbabilityConnector(rconn, weights=0.1) print "%s Initialising membrane potential to random values..." % node_id rng = NumpyRNG(seed=rngseed, parallel_safe=parallel_safe) uniformDistr = RandomDistribution('uniform', [v_reset,v_thresh], rng=rng) all_cells.initialize('v', uniformDistr) print "%s Connecting populations..." % node_id exc_conn = FixedProbabilityConnector(pconn, weights=w_exc, delays=delay) inh_conn = FixedProbabilityConnector(pconn, weights=w_inh, delays=delay) connections={} connections['exc'] = Projection(exc_cells, all_cells, exc_conn, target='excitatory', rng=rng) connections['inh'] = Projection(inh_cells, all_cells, inh_conn, target='inhibitory', rng=rng) if (benchmark == "COBA"): connections['ext'] = Projection(ext_stim, all_cells, ext_conn, target='excitatory') # === Setup recording ========================================================== print "%s Setting up recording..." % node_id all_cells.record() exc_cells[[0, 1]].record_v() buildCPUTime = timer.diff() # === Save connections to file ================================================= #print "%s Saving connections to file..." % node_id #for prj in connections.keys(): # connections[prj].saveConnections('Results/VAbenchmark_%s_%s_%s_np%d.conn' % (benchmark, prj, simulator_name, np)) #saveCPUTime = timer.diff() # === Run simulation =========================================================== print "%d Running simulation..." % node_id run(tstop) simCPUTime = timer.diff() E_count = exc_cells.meanSpikeCount() I_count = inh_cells.meanSpikeCount() # === Print results to file ==================================================== print "%d Writing data to file..." 
% node_id if not(os.path.isdir('Results')): os.mkdir('Results') exc_cells.printSpikes("Results/VAbenchmark_%s_exc_%s_np%d.ras" % (benchmark, simulator_name, np)) inh_cells.printSpikes("Results/VAbenchmark_%s_inh_%s_np%d.ras" % (benchmark, simulator_name, np)) exc_cells[[0, 1]].print_v("Results/VAbenchmark_%s_exc_%s_np%d.v" % (benchmark, simulator_name, np)) writeCPUTime = timer.diff() connections = "%d e→e,i %d i→e,i" % (connections['exc'].size(), connections['inh'].size(),) if node_id == 0: print "\n--- Vogels-Abbott Network Simulation ---" print "Nodes : %d" % np print "Simulation type : %s" % benchmark print "Number of Neurons : %d" % n print "Number of Synapses : %s" % connections print "Excitatory conductance : %g nS" % Gexc print "Inhibitory conductance : %g nS" % Ginh print "Excitatory rate : %g Hz" % (E_count*1000.0/tstop,) print "Inhibitory rate : %g Hz" % (I_count*1000.0/tstop,) print "Build time : %g s" % buildCPUTime #print "Save connections time : %g s" % saveCPUTime print "Simulation time : %g s" % simCPUTime print "Writing time : %g s" % writeCPUTime # === Finished with simulator ================================================== end() PyNN-0.7.4/examples/simpleRandomNetwork.py0000644000175000017500000000426611736323052021336 0ustar andrewandrew00000000000000""" Simple network with a 1D population of poisson spike sources projecting to a 2D population of IF_curr_exp neurons. Andrew Davison, UNIC, CNRS August 2006, November 2009 $Id: simpleRandomNetwork.py 933 2011-02-14 18:41:49Z apdavison $ """ import socket, os from pyNN.utility import get_script_args simulator_name = get_script_args(1)[0] exec("from pyNN.%s import *" % simulator_name) from pyNN.random import NumpyRNG seed = 764756387 tstop = 1000.0 # ms input_rate = 100.0 # Hz cell_params = {'tau_refrac': 2.0, # ms 'v_thresh': -50.0, # mV 'tau_syn_E': 2.0, # ms 'tau_syn_I': 2.0} # ms n_record = 5 node = setup(timestep=0.025, min_delay=1.0, max_delay=1.0, debug=True, quit_on_end=False) print "Process with rank %d running on %s" % (node, socket.gethostname()) rng = NumpyRNG(seed=seed, parallel_safe=True) print "[%d] Creating populations" % node n_spikes = int(2*tstop*input_rate/1000.0) spike_times = numpy.add.accumulate(rng.next(n_spikes, 'exponential', [1000.0/input_rate], mask_local=False)) input_population = Population(2, SpikeSourceArray, {'spike_times': spike_times }, label="input") output_population = Population(2, IF_curr_exp, cell_params, label="output") print "[%d] input_population cells: %s" % (node, input_population.local_cells) print "[%d] output_population cells: %s" % (node, output_population.local_cells) print "[%d] Connecting populations" % node connector = FixedProbabilityConnector(0.5, weights=1.0) projection = Projection(input_population, output_population, connector, rng=rng) file_stem = "Results/simpleRandomNetwork_np%d_%s" % (num_processes(), simulator_name) projection.saveConnections('%s.conn' % file_stem) input_population.record() output_population.record() output_population.sample(n_record, rng).record_v() print "[%d] Running simulation" % node run(tstop) print "[%d] Writing spikes to disk" % node output_population.printSpikes('%s_output.ras' % file_stem) input_population.printSpikes('%s_input.ras' % file_stem) print "[%d] Writing Vm to disk" % node output_population.print_v('%s.v' % file_stem) print "[%d] Finishing" % node end() print "[%d] Done" % node PyNN-0.7.4/examples/tsodyksmarkram.py0000644000175000017500000000320611736323052020376 0ustar andrewandrew00000000000000""" Example of depressing 
and facilitating synapses Andrew Davison, UNIC, CNRS May 2009 $Id: tsodyksmarkram.py 933 2011-02-14 18:41:49Z apdavison $ """ import numpy from pyNN.utility import get_script_args simulator_name = get_script_args(1)[0] exec("import pyNN.%s as sim" % simulator_name) sim.setup(quit_on_end=False) spike_source = sim.Population(1, sim.SpikeSourceArray, {'spike_times': numpy.arange(10, 100, 10)}) connector = sim.AllToAllConnector(weights=0.01, delays=0.5) synapse_dynamics = { 'static': None, 'depressing': sim.SynapseDynamics( fast=sim.TsodyksMarkramMechanism(U=0.5, tau_rec=800.0, tau_facil=0.0)), 'facilitating': sim.SynapseDynamics( fast=sim.TsodyksMarkramMechanism(U=0.04, tau_rec=100.0, tau_facil=1000.0)), } populations = {} projections = {} for label in 'static', 'depressing', 'facilitating': populations[label] = sim.Population(1, sim.IF_cond_exp, {'e_rev_I': -75}, label=label) populations[label].record_v() if populations[label].can_record('gsyn'): populations[label].record_gsyn() projections[label] = sim.Projection(spike_source, populations[label], connector, target='inhibitory', synapse_dynamics=synapse_dynamics[label]) spike_source.record() sim.run(200.0) for label,p in populations.items(): p.print_v("Results/tsodyksmarkram_%s_%s.v" % (label, simulator_name)) if populations[label].can_record('gsyn'): p.print_gsyn("Results/tsodyksmarkram_%s_%s.gsyn" % (label, simulator_name)) print spike_source.getSpikes() sim.end() PyNN-0.7.4/examples/EIF_cond_alpha_isfa_ista.py0000644000175000017500000000113111736323052022113 0ustar andrewandrew00000000000000""" Test of the EIF_cond_alpha_isfa_ista model Andrew Davison, UNIC, CNRS December 2007 $Id: EIF_cond_alpha_isfa_ista.py 772 2010-08-03 07:41:36Z apdavison $ """ from pyNN.utility import get_script_args simulator_name = get_script_args(1)[0] exec("from pyNN.%s import *" % simulator_name) setup(timestep=0.01,min_delay=0.1,max_delay=4.0,debug=True) ifcell = create(EIF_cond_alpha_isfa_ista, {'i_offset': 1.0, 'tau_refrac': 2.0, 'v_spike': -40}) print ifcell[0].get_parameters() record_v(ifcell,"Results/EIF_cond_alpha_isfa_ista_%s.v" % simulator_name) run(200.0) end() PyNN-0.7.4/examples/HH_cond_exp.py0000644000175000017500000000205711736323052017504 0ustar andrewandrew00000000000000""" A single-compartment Hodgkin-Huxley neuron with exponential, conductance-based synapses, fed by two spike sources. 
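Both the membrane potential and the synaptic conductances are recorded to file.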
Run as: $ python HH_cond_exp.py where is 'neuron', 'nest', etc Andrew Davison, UNIC, CNRS July 2007 $Id: HH_cond_exp.py 647 2009-06-09 12:37:02Z apdavison $ """ from pyNN.utility import get_script_args simulator_name = get_script_args(1)[0] exec("from pyNN.%s import *" % simulator_name) setup(timestep=0.01, min_delay=0.1, max_delay=4.0, quit_on_end=False) hhcell = create(HH_cond_exp) spike_sourceE = create(SpikeSourceArray, {'spike_times': [float(i) for i in range(1,100,1)]}) spike_sourceI = create(SpikeSourceArray, {'spike_times': [float(i) for i in range(100,200,11)]}) connE = connect(spike_sourceE, hhcell, weight=0.02, synapse_type='excitatory', delay=2.0) connI = connect(spike_sourceI, hhcell, weight=0.01, synapse_type='inhibitory', delay=4.0) record_v(hhcell, "Results/HH_cond_exp_%s.v" % simulator_name) record_gsyn(hhcell, "Results/HH_cond_exp_%s.gsyn" % simulator_name) run(200.0) end() PyNN-0.7.4/examples/distrib_example.py0000644000175000017500000000334711737573765020527 0ustar andrewandrew00000000000000 from pyNN.utility import get_script_args import sys import numpy simulator = get_script_args(1)[0] from mpi4py import MPI exec("import pyNN.%s as sim" % simulator) comm = MPI.COMM_WORLD sim.setup(debug=True) print "\nThis is node %d (%d of %d)" % (sim.rank(), sim.rank()+1, sim.num_processes()) assert comm.rank == sim.rank() assert comm.size == sim.num_processes() data1 = numpy.empty(100, dtype=float) if comm.rank == 0: data1 = numpy.arange(100, dtype=float) else: pass comm.Bcast([data1, MPI.DOUBLE], root=0) print comm.rank, data1 data2 = numpy.arange(comm.rank, 10+comm.rank, dtype=float) print comm.rank, data2 data2g = numpy.empty(10*comm.size) comm.Gather([data2, MPI.DOUBLE], [data2g, MPI.DOUBLE], root=0) if comm.rank == 0: print "gathered (2):", data2g data3 = numpy.arange(0, 5*(comm.rank+1), dtype=float) print comm.rank, data3 if comm.rank == 0: sizes = range(5,5*comm.size+1,5) disp = [size-5 for size in sizes] data3g = numpy.empty(sum(sizes)) else: sizes = disp = [] data3g = numpy.empty([]) comm.Gatherv([data3, data3.size, MPI.DOUBLE], [data3g, (sizes,disp), MPI.DOUBLE], root=0) if comm.rank == 0: print "gathered (3):", data3g def gather(data): assert isinstance(data, numpy.ndarray) # first we pass the data size size = data.size sizes = comm.gather(size, root=0) or [] # now we pass the data displacements = [sum(sizes[:i]) for i in range(len(sizes))] print comm.rank, "sizes=", sizes, "displacements=", displacements gdata = numpy.empty(sum(sizes)) comm.Gatherv([data, size, MPI.DOUBLE], [gdata, (sizes,displacements), MPI.DOUBLE], root=0) return gdata data3g = gather(data3) if comm.rank == 0: print "gathered (3, again):", data3g sim.end()PyNN-0.7.4/examples/VAbenchmarks2-csa.py0000644000175000017500000001653711736323052020530 0ustar andrewandrew00000000000000# coding: utf-8 """ An implementation of benchmarks 1 and 2 from Brette et al. (2007) Journal of Computational Neuroscience 23: 349-398 The IF network is based on the CUBA and COBA models of Vogels & Abbott (J. Neurosci, 2005). The model consists of a network of excitatory and inhibitory neurons, connected via current-based "exponential" synapses (instantaneous rise, exponential decay). This version demonstrates an alternative way to set-up the network, using PopulationViews. 
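It also uses the Connection Set Algebra (CSA) library, via the CSAConnector class, to define the random connectivity.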
Andrew Davison, UNIC, CNRS March 2010 $Id: $ """ import os import socket from math import * import csa from pyNN.utility import get_script_args, Timer usage = """Usage: python VAbenchmarks.py is either neuron, nest, brian or pcsim is either CUBA or COBA.""" simulator_name, benchmark = get_script_args(2, usage) exec("from pyNN.%s import *" % simulator_name) from pyNN.random import NumpyRNG, RandomDistribution timer = Timer() # === Define parameters ======================================================== threads = 1 rngseed = 98765 parallel_safe = True n = 4000 # number of cells r_ei = 4.0 # number of excitatory cells:number of inhibitory cells pconn = 0.02 # connection probability stim_dur = 50. # (ms) duration of random stimulation rate = 100. # (Hz) frequency of the random stimulation dt = 0.1 # (ms) simulation timestep tstop = 1000 # (ms) simulaton duration delay = 0.2 # Cell parameters area = 20000. # (µm²) tau_m = 20. # (ms) cm = 1. # (µF/cm²) g_leak = 5e-5 # (S/cm²) if benchmark == "COBA": E_leak = -60. # (mV) elif benchmark == "CUBA": E_leak = -49. # (mV) v_thresh = -50. # (mV) v_reset = -60. # (mV) t_refrac = 5. # (ms) (clamped at v_reset) v_mean = -60. # (mV) 'mean' membrane potential, for calculating CUBA weights tau_exc = 5. # (ms) tau_inh = 10. # (ms) # Synapse parameters if benchmark == "COBA": Gexc = 4. # (nS) Ginh = 51. # (nS) elif benchmark == "CUBA": Gexc = 0.27 # (nS) #Those weights should be similar to the COBA weights Ginh = 4.5 # (nS) # but the delpolarising drift should be taken into account Erev_exc = 0. # (mV) Erev_inh = -80. # (mV) ### what is the synaptic delay??? # === Calculate derived parameters ============================================= area = area*1e-8 # convert to cm² cm = cm*area*1000 # convert to nF Rm = 1e-6/(g_leak*area) # membrane resistance in MΩ assert tau_m == cm*Rm # just to check n_exc = int(round((n*r_ei/(1+r_ei)))) # number of excitatory cells n_inh = n - n_exc # number of inhibitory cells if benchmark == "COBA": celltype = IF_cond_exp w_exc = Gexc*1e-3 # We convert conductances to uS w_inh = Ginh*1e-3 elif benchmark == "CUBA": celltype = IF_curr_exp w_exc = 1e-3*Gexc*(Erev_exc - v_mean) # (nA) weight of excitatory synapses w_inh = 1e-3*Ginh*(Erev_inh - v_mean) # (nA) assert w_exc > 0; assert w_inh < 0 # === Build the network ======================================================== extra = {'threads' : threads} if simulator_name == "neuroml": extra["file"] = "VAbenchmarks.xml" node_id = setup(timestep=dt, min_delay=delay, max_delay=delay, **extra) np = num_processes() host_name = socket.gethostname() print "Host #%d is on %s" % (node_id+1, host_name) print "%s Initialising the simulator with %d thread(s)..." % (node_id, extra['threads']) cell_params = { 'tau_m' : tau_m, 'tau_syn_E' : tau_exc, 'tau_syn_I' : tau_inh, 'v_rest' : E_leak, 'v_reset' : v_reset, 'v_thresh' : v_thresh, 'cm' : cm, 'tau_refrac' : t_refrac} if (benchmark == "COBA"): cell_params['e_rev_E'] = Erev_exc cell_params['e_rev_I'] = Erev_inh timer.start() print "%s Creating cell populations..." % node_id all_cells = Population(n_exc+n_inh, celltype, cell_params, label="All_Cells") exc_cells = all_cells[:n_exc] inh_cells = all_cells[n_exc:] if benchmark == "COBA": ext_stim = Population(20, SpikeSourcePoisson, {'rate' : rate, 'duration' : stim_dur}, label="expoisson") rconn = 0.01 ext_conn = FixedProbabilityConnector(rconn, weights=0.1) print "%s Initialising membrane potential to random values..." 
% node_id rng = NumpyRNG(seed=rngseed, parallel_safe=parallel_safe) uniformDistr = RandomDistribution('uniform', [v_reset,v_thresh], rng=rng) all_cells.initialize('v', uniformDistr) print "%s Connecting populations..." % node_id #exc_conn = FixedProbabilityConnector(pconn, weights=w_exc, delays=delay) #inh_conn = FixedProbabilityConnector(pconn, weights=w_inh, delays=delay) exc_conn = CSAConnector(csa.cset (csa.random (pconn), w_exc, delay)) inh_conn = CSAConnector(csa.cset (csa.random (pconn), w_inh, delay)) connections={} connections['exc'] = Projection(exc_cells, all_cells, exc_conn, target='excitatory', rng=rng) connections['inh'] = Projection(inh_cells, all_cells, inh_conn, target='inhibitory', rng=rng) if (benchmark == "COBA"): connections['ext'] = Projection(ext_stim, all_cells, ext_conn, target='excitatory') # === Setup recording ========================================================== print "%s Setting up recording..." % node_id all_cells.record() vrecord_list = [exc_cells[0],exc_cells[1]] exc_cells.record_v(vrecord_list) buildCPUTime = timer.diff() # === Save connections to file ================================================= #print "%s Saving connections to file..." % node_id #for prj in connections.keys(): # connections[prj].saveConnections('Results/VAbenchmark_%s_%s_%s_np%d.conn' % (benchmark, prj, simulator_name, np)) #saveCPUTime = timer.diff() # === Run simulation =========================================================== print "%d Running simulation..." % node_id run(tstop) simCPUTime = timer.diff() E_count = exc_cells.meanSpikeCount() I_count = inh_cells.meanSpikeCount() # === Print results to file ==================================================== print "%d Writing data to file..." % node_id if not(os.path.isdir('Results')): os.mkdir('Results') exc_cells.printSpikes("Results/VAbenchmark_%s_exc_%s_np%d.ras" % (benchmark, simulator_name, np)) inh_cells.printSpikes("Results/VAbenchmark_%s_inh_%s_np%d.ras" % (benchmark, simulator_name, np)) exc_cells.print_v("Results/VAbenchmark_%s_exc_%s_np%d.v" % (benchmark, simulator_name, np)) writeCPUTime = timer.diff() connections = "%d e→e,i %d i→e,i" % (connections['exc'].size(), connections['inh'].size(),) if node_id == 0: print "\n--- Vogels-Abbott Network Simulation ---" print "Nodes : %d" % np print "Simulation type : %s" % benchmark print "Number of Neurons : %d" % n print "Number of Synapses : %s" % connections print "Excitatory conductance : %g nS" % Gexc print "Inhibitory conductance : %g nS" % Ginh print "Excitatory rate : %g Hz" % (E_count*1000.0/tstop,) print "Inhibitory rate : %g Hz" % (I_count*1000.0/tstop,) print "Build time : %g s" % buildCPUTime #print "Save connections time : %g s" % saveCPUTime print "Simulation time : %g s" % simCPUTime print "Writing time : %g s" % writeCPUTime # === Finished with simulator ================================================== end() PyNN-0.7.4/examples/IF_cond_exp.py0000644000175000017500000000256111736323052017503 0ustar andrewandrew00000000000000""" A single IF neuron with exponential, conductance-based synapses, fed by two spike sources. 
Run as: $ python IF_cond_exp.py where is 'neuron', 'nest', etc Andrew Davison, UNIC, CNRS May 2006 $Id: IF_cond_exp.py 917 2011-01-31 15:23:34Z apdavison $ """ from pyNN.utility import get_script_args from pyNN.errors import RecordingError simulator_name = get_script_args(1)[0] exec("from pyNN.%s import *" % simulator_name) setup(timestep=0.1,min_delay=0.1,max_delay=4.0) ifcell = create(IF_cond_exp, { 'i_offset' : 0.1, 'tau_refrac' : 3.0, 'v_thresh' : -51.0, 'tau_syn_E' : 2.0, 'tau_syn_I': 5.0, 'v_reset' : -70.0, 'e_rev_E' : 0., 'e_rev_I' : -80.}) spike_sourceE = create(SpikeSourceArray, {'spike_times': [float(i) for i in range(5,105,10)]}) spike_sourceI = create(SpikeSourceArray, {'spike_times': [float(i) for i in range(155,255,10)]}) connE = connect(spike_sourceE, ifcell, weight=0.006, synapse_type='excitatory',delay=2.0) connI = connect(spike_sourceI, ifcell, weight=0.02, synapse_type ='inhibitory',delay=4.0) record_v(ifcell, "Results/IF_cond_exp_%s.v" % simulator_name) try: record_gsyn(ifcell, "Results/IF_cond_exp_%s.gsyn" % simulator_name) except (NotImplementedError, RecordingError): pass run(200.0) end() PyNN-0.7.4/examples/tsodyksmarkram2.py0000644000175000017500000000254611736323052020466 0ustar andrewandrew00000000000000""" Example of depressing and facilitating, current-based, alpha synapses Andrew Davison, UNIC, CNRS May 2009 $Id:$ """ import numpy from pyNN.utility import get_script_args simulator_name = get_script_args(1)[0] exec("import pyNN.%s as sim" % simulator_name) sim.setup(debug=True, quit_on_end=False) spike_source = sim.Population(1, sim.SpikeSourceArray, {'spike_times': numpy.arange(10, 100, 10)}) connector = sim.AllToAllConnector(weights=-1.0, delays=0.5) synapse_dynamics = { 'static': None, 'depressing': sim.SynapseDynamics( fast=sim.TsodyksMarkramMechanism(U=0.5, tau_rec=800.0, tau_facil=0.0)), 'facilitating': sim.SynapseDynamics( fast=sim.TsodyksMarkramMechanism(U=0.04, tau_rec=100.0, tau_facil=1000.0)), } populations = {} projections = {} for label in 'static', 'depressing', 'facilitating': populations[label] = sim.Population(1, sim.IF_curr_alpha, label=label) populations[label].record_v() projections[label] = sim.Projection(spike_source, populations[label], connector, target='inhibitory', synapse_dynamics=synapse_dynamics[label]) spike_source.record() sim.run(200.0) for label,p in populations.items(): p.print_v("Results/tsodyksmarkram2_%s_%s.v" % (label, simulator_name)) sim.end() PyNN-0.7.4/examples/simpleNetworkL.py0000644000175000017500000000214311736323052020301 0ustar andrewandrew00000000000000""" Simple network, using only the low-level interface, with a Poisson spike source projecting to a pair of IF_curr_alpha neurons. 
Andrew Davison, UNIC, CNRS August 2006 $Id: simpleNetworkL.py 772 2010-08-03 07:41:36Z apdavison $ """ import numpy from pyNN.utility import get_script_args simulator_name = get_script_args(1)[0] exec("from pyNN.%s import *" % simulator_name) tstop = 1000.0 # all times in milliseconds rate = 100.0 # spikes/s setup(timestep=0.1,min_delay=0.2) cell_params = {'tau_refrac':2.0, 'v_thresh':-50.0, 'tau_syn_E':2.0, 'tau_syn_I' : 4.0} ifcell1 = create(IF_curr_alpha, cell_params) ifcell2 = create(IF_curr_alpha, cell_params) number = int(2*tstop*rate/1000.0) numpy.random.seed(637645386) spike_times = numpy.add.accumulate(numpy.random.exponential(1000.0/rate, size=number)) assert spike_times.max() > tstop spike_source = create(SpikeSourceArray, {'spike_times': spike_times }) conn1 = connect(spike_source, ifcell1, weight=1.0) conn2 = connect(spike_source, ifcell2, weight=1.0) record_v(ifcell1+ifcell2, "Results/simpleNetworkL_%s.v" % simulator_name) run(tstop) end() PyNN-0.7.4/examples/small_network.py0000644000175000017500000000356111736323052020210 0ustar andrewandrew00000000000000""" Small network created with the Population and Projection classes Andrew Davison, UNIC, CNRS May 2006 $Id: small_network.py 771 2010-07-30 02:08:40Z apdavison $ """ import numpy from pyNN.utility import get_script_args simulator_name = get_script_args(1)[0] exec("from pyNN.%s import *" % simulator_name) # === Define parameters ======================================================== n = 5 # Number of cells w = 0.2 # synaptic weight (nA) cell_params = { 'tau_m' : 20.0, # (ms) 'tau_syn_E' : 2.0, # (ms) 'tau_syn_I' : 4.0, # (ms) 'tau_refrac' : 2.0, # (ms) 'v_rest' : 0.0, # (mV) 'v_reset' : 0.0, # (mV) 'v_thresh' : 20.0, # (mV) 'cm' : 0.5} # (nF) dt = 0.1 # (ms) syn_delay = 1.0 # (ms) input_rate = 50.0 # (Hz) simtime = 1000.0 # (ms) # === Build the network ======================================================== setup(timestep=dt, max_delay=syn_delay) cells = Population(n, IF_curr_alpha, cell_params, label="cells") number = int(2*simtime*input_rate/1000.0) numpy.random.seed(26278342) spike_times = numpy.add.accumulate(numpy.random.exponential(1000.0/input_rate, size=number)) assert spike_times.max() > simtime spike_source = Population(n, SpikeSourceArray, {'spike_times': spike_times}) cells.record() cells.record_v() input_conns = Projection(spike_source, cells, AllToAllConnector()) input_conns.setWeights(w) input_conns.setDelays(syn_delay) # === Run simulation =========================================================== cells.initialize('v', 0.0) # (mV) run(simtime) cells.printSpikes("Results/small_network_%s.ras" % simulator_name) cells.print_v("Results/small_network_%s.v" % simulator_name) print "Mean firing rate: ", cells.meanSpikeCount()*1000.0/simtime, "Hz" # === Clean up and quit ======================================================== end() PyNN-0.7.4/examples/brunel.py0000644000175000017500000002111211736323052016606 0ustar andrewandrew00000000000000""" Conversion of the Brunel network implemented in nest-1.0.13/examples/brunel.sli to use PyNN. Brunel N (2000) Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. 
J Comput Neurosci 8:183-208 Andrew Davison, UNIC, CNRS May 2006 $Id: brunel.py 869 2010-12-13 13:24:53Z pierre $ """ from pyNN.utility import get_script_args, Timer simulator_name = get_script_args(1)[0] exec("from pyNN.%s import *" % simulator_name) from pyNN.random import NumpyRNG, RandomDistribution timer = Timer() # === Define parameters ======================================================== downscale = 50 # scale number of neurons down by this factor # scale synaptic weights up by this factor to # obtain similar dynamics independent of size order = 50000 # determines size of network: # 4*order excitatory neurons # 1*order inhibitory neurons Nrec = 50 # number of neurons to record from, per population epsilon = 0.1 # connectivity: proportion of neurons each neuron projects to # Parameters determining model dynamics, cf Brunel (2000), Figs 7, 8 and Table 1 # here: Case C, asynchronous irregular firing, ~35 Hz eta = 2.0 # rel rate of external input g = 5.0 # rel strength of inhibitory synapses J = 0.1 # synaptic weight [mV] delay = 1.5 # synaptic delay, all connections [ms] # single neuron parameters tauMem = 20.0 # neuron membrane time constant [ms] tauSyn = 0.1 # synaptic time constant [ms] tauRef = 2.0 # refractory time [ms] U0 = 0.0 # resting potential [mV] theta = 20.0 # threshold # simulation-related parameters simtime = 100.0 # simulation time [ms] dt = 0.1 # simulation step length [ms] # seed for random generator used when building connections connectseed = 12345789 use_RandomArray = True # use Python rng rather than NEST rng # seed for random generator(s) used during simulation kernelseed = 43210987 # === Calculate derived parameters ============================================= # scaling: compute effective order and synaptic strength order_eff = int(float(order)/downscale) J_eff = J*downscale # compute neuron numbers NE = int(4*order_eff) # number of excitatory neurons NI = int(1*order_eff) # number of inhibitory neurons N = NI + NE # total number of neurons # compute synapse numbers CE = int(epsilon*NE) # number of excitatory synapses on neuron CI = int(epsilon*NI) # number of inhibitory synapses on neuron C = CE + CI # total number of internal synapses per n. Cext = CE # number of external synapses on neuron # synaptic weights, scaled for alpha functions, such that # for constant membrane potential, charge J would be deposited fudge = 0.00041363506632638 # ensures dV = J at V=0 # excitatory weight: JE = J_eff / tauSyn * fudge JE = (J_eff/tauSyn)*fudge # inhibitory weight: JI = - g * JE JI = -g*JE # threshold, external, and Poisson generator rates: nu_thresh = theta/(J_eff*CE*tauMem) nu_ext = eta*nu_thresh # external rate per synapse p_rate = 1000*nu_ext*Cext # external input rate per neuron (Hz) # number of synapses---just so we know Nsyn = (C+1)*N + 2*Nrec # number of neurons * (internal synapses + 1 synapse from PoissonGenerator) + 2synapses" to spike detectors # put cell parameters into a dict cell_params = {'tau_m' : tauMem, 'tau_syn_E' : tauSyn, 'tau_syn_I' : tauSyn, 'tau_refrac' : tauRef, 'v_rest' : U0, 'v_reset' : U0, 'v_thresh' : theta, 'cm' : 0.001} # (nF) # === Build the network ======================================================== # clear all existing network elements and set resolution and limits on delays. 
# For NEST, limits must be set BEFORE connecting any elements #extra = {'threads' : 2} extra = {} rank = setup(timestep=dt, max_delay=delay, **extra) np = num_processes() import socket host_name = socket.gethostname() print "Host #%d is on %s" % (rank+1, host_name) if extra.has_key('threads'): print "%d Initialising the simulator with %d threads..." %(rank, extra['threads']) else: print "%d Initialising the simulator with single thread..." %(rank) # Small function to display information only on node 1 def nprint(s): if (rank == 0): print s timer.start() # start timer on construction print "%d Setting up random number generator" %rank rng = NumpyRNG(kernelseed, parallel_safe=True) print "%d Creating excitatory population with %d neurons." % (rank, NE) E_net = Population(NE, IF_curr_alpha, cell_params, label="E_net") print "%d Creating inhibitory population with %d neurons." % (rank, NI) I_net = Population(NI, IF_curr_alpha, cell_params, label="I_net") print "%d Initialising membrane potential to random values between %g mV and %g mV." % (rank, U0, theta) uniformDistr = RandomDistribution('uniform', [U0, theta], rng) E_net.initialize('v', uniformDistr) I_net.initialize('v', uniformDistr) print "%d Creating excitatory Poisson generator with rate %g spikes/s." % (rank, p_rate) expoisson = Population(NE, SpikeSourcePoisson, {'rate': p_rate}, "expoisson") print "%d Creating inhibitory Poisson generator with the same rate." % rank inpoisson = Population(NI, SpikeSourcePoisson, {'rate': p_rate}, "inpoisson") # Record spikes print "%d Setting up recording in excitatory population." % rank E_net.record(Nrec) E_net[[0, 1]].record_v() print "%d Setting up recording in inhibitory population." % rank I_net.record(Nrec) I_net[[0, 1]].record_v() E_Connector = FixedProbabilityConnector(epsilon, weights=JE, delays=delay, verbose=True) I_Connector = FixedProbabilityConnector(epsilon, weights=JI, delays=delay, verbose=True) ext_Connector = OneToOneConnector(weights=JE, delays=dt, verbose=True) print "%d Connecting excitatory population with connection probability %g, weight %g nA and delay %g ms." % (rank, epsilon, JE, delay) E_to_E = Projection(E_net, E_net, E_Connector, rng=rng, target="excitatory") print "E --> E\t\t", len(E_to_E), "connections" I_to_E = Projection(I_net, E_net, I_Connector, rng=rng, target="inhibitory") print "I --> E\t\t", len(I_to_E), "connections" input_to_E = Projection(expoisson, E_net, ext_Connector, target="excitatory") print "input --> E\t", len(input_to_E), "connections" print "%d Connecting inhibitory population with connection probability %g, weight %g nA and delay %g ms." % (rank, epsilon, JI, delay) E_to_I = Projection(E_net, I_net, E_Connector, rng=rng, target="excitatory") print "E --> I\t\t", len(E_to_I), "connections" I_to_I = Projection(I_net, I_net, I_Connector, rng=rng, target="inhibitory") print "I --> I\t\t", len(I_to_I), "connections" input_to_I = Projection(inpoisson, I_net, ext_Connector, target="excitatory") print "input --> I\t", len(input_to_I), "connections" # read out time used for building buildCPUTime = timer.elapsedTime() # === Run simulation =========================================================== # run, measure computer time timer.start() # start timer on construction print "%d Running simulation for %g ms." % (rank, simtime) run(simtime) simCPUTime = timer.elapsedTime() print "%d Writing data to file." % rank exfilename = "Results/Brunel_exc_np%d_%s.ras" % (np, simulator_name) # output file for excit. 
# === Run simulation ===========================================================

# run, measure computer time
timer.start()  # start timer for the simulation phase
print "%d Running simulation for %g ms." % (rank, simtime)
run(simtime)
simCPUTime = timer.elapsedTime()

print "%d Writing data to file." % rank
exfilename  = "Results/Brunel_exc_np%d_%s.ras" % (np, simulator_name)  # output file for excit. population
infilename  = "Results/Brunel_inh_np%d_%s.ras" % (np, simulator_name)  # output file for inhib. population
vexfilename = "Results/Brunel_exc_np%d_%s.v"   % (np, simulator_name)  # output file for membrane potential traces
vinfilename = "Results/Brunel_inh_np%d_%s.v"   % (np, simulator_name)  # output file for membrane potential traces

# write data to file
E_net.printSpikes(exfilename)
I_net.printSpikes(infilename)
E_net[[0, 1]].print_v(vexfilename)
I_net[[0, 1]].print_v(vinfilename)

E_rate = E_net.meanSpikeCount()*1000.0/simtime
I_rate = I_net.meanSpikeCount()*1000.0/simtime

# write a short report
nprint("\n--- Brunel Network Simulation ---")
nprint("Nodes              : %d" % np)
nprint("Number of Neurons  : %d" % N)
nprint("Number of Synapses : %d" % Nsyn)
nprint("Input firing rate  : %g" % p_rate)
nprint("Excitatory weight  : %g" % JE)
nprint("Inhibitory weight  : %g" % JI)
nprint("Excitatory rate    : %g Hz" % E_rate)
nprint("Inhibitory rate    : %g Hz" % I_rate)
nprint("Build time         : %g s" % buildCPUTime)
nprint("Simulation time    : %g s" % simCPUTime)

# === Clean up and quit ========================================================

end()
PyNN-0.7.4/examples/IF_curr_alpha.py0000644000175000017500000000224111736323052020017 0ustar andrewandrew00000000000000"""
A single IF neuron with alpha-function shaped, current-based synapses,
fed by two spike sources.

Run as:

$ python IF_curr_alpha.py <simulator>

where <simulator> is 'neuron', 'nest', etc

Andrew Davison, UNIC, CNRS
May 2006

$Id: IF_curr_alpha.py 756 2010-05-18 12:57:19Z apdavison $
"""

from pyNN.utility import get_script_args

simulator_name = get_script_args(1)[0]
exec("from pyNN.%s import *" % simulator_name)

setup(timestep=0.1, min_delay=0.1, max_delay=4.0)

ifcell = create(IF_curr_alpha, {'i_offset'   : 0.1,
                                'tau_refrac' : 3.0,
                                'v_thresh'   : -51.0,
                                'tau_syn_E'  : 2.0,
                                'tau_syn_I'  : 5.0,
                                'v_reset'    : -70.0})

spike_sourceE = create(SpikeSourceArray, {'spike_times': [float(i) for i in range(5,105,10)]})
spike_sourceI = create(SpikeSourceArray, {'spike_times': [float(i) for i in range(100,255,10)]})

connE = connect(spike_sourceE, ifcell, weight=1.5, synapse_type='excitatory', delay=2.0)
connI = connect(spike_sourceI, ifcell, weight=-1.5, synapse_type='inhibitory', delay=4.0)

record_v(ifcell, "Results/IF_curr_alpha_%s.v" % simulator_name)

initialize(ifcell, 'v', -54.2)
run(200.0)

end()
PyNN-0.7.4/examples/VAbenchmarks.py0000644000175000017500000001727411736323052017701 0ustar andrewandrew00000000000000# coding: utf-8
"""
An implementation of benchmarks 1 and 2 from

    Brette et al. (2007) Journal of Computational Neuroscience 23: 349-398

The IF network is based on the CUBA and COBA models of Vogels & Abbott
(J. Neurosci, 2005). The model consists of a network of excitatory and
inhibitory neurons, connected via current-based "exponential"
synapses (instantaneous rise, exponential decay).

Andrew Davison, UNIC, CNRS
August 2006

$Id:VAbenchmarks.py 5 2007-04-16 15:01:24Z davison $
"""
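# Unit bookkeeping (illustrative comments, based on the derived-parameter
# section below): area = 20000 µm² = 2e-4 cm², so cm = 1 µF/cm² * 2e-4 cm² * 1000
# = 0.2 nF and Rm = 1e-6/(5e-5 * 2e-4) = 100 MΩ, giving tau_m = 0.2 * 100 = 20 ms,
# which is exactly what the assert below checks. For the CUBA benchmark the
# conductances are converted to currents at the 'mean' membrane potential:
# w_exc = 1e-3*0.27*(0-(-60)) = 0.0162 nA and w_inh = 1e-3*4.5*(-80-(-60)) = -0.09 nA.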
import os
import socket
from math import *

from pyNN.utility import get_script_args, Timer

usage = """Usage: python VAbenchmarks.py <simulator> <benchmark>

       <simulator> is either neuron, nest, brian or pcsim
       <benchmark> is either CUBA or COBA."""
simulator_name, benchmark = get_script_args(2, usage)
exec("from pyNN.%s import *" % simulator_name)
from pyNN.random import NumpyRNG, RandomDistribution

timer = Timer()

# === Define parameters ========================================================

threads  = 1
rngseed  = 98765
parallel_safe = True

n        = 4000   # number of cells
r_ei     = 4.0    # number of excitatory cells:number of inhibitory cells
pconn    = 0.02   # connection probability
stim_dur = 50.    # (ms) duration of random stimulation
rate     = 100.   # (Hz) frequency of the random stimulation

dt       = 0.1    # (ms) simulation timestep
tstop    = 1000   # (ms) simulation duration
delay    = 0.2

# Cell parameters
area     = 20000. # (µm²)
tau_m    = 20.    # (ms)
cm       = 1.     # (µF/cm²)
g_leak   = 5e-5   # (S/cm²)
if benchmark == "COBA":
    E_leak   = -60.  # (mV)
elif benchmark == "CUBA":
    E_leak   = -49.  # (mV)
v_thresh = -50.   # (mV)
v_reset  = -60.   # (mV)
t_refrac = 5.     # (ms) (clamped at v_reset)
v_mean   = -60.   # (mV) 'mean' membrane potential, for calculating CUBA weights
tau_exc  = 5.     # (ms)
tau_inh  = 10.    # (ms)

# Synapse parameters
if benchmark == "COBA":
    Gexc = 4.     # (nS)
    Ginh = 51.    # (nS)
elif benchmark == "CUBA":
    Gexc = 0.27   # (nS) # Those weights should be similar to the COBA weights
    Ginh = 4.5    # (nS) # but the depolarising drift should be taken into account
Erev_exc = 0.     # (mV)
Erev_inh = -80.   # (mV)

### what is the synaptic delay???

# === Calculate derived parameters =============================================

area  = area*1e-8           # convert to cm²
cm    = cm*area*1000        # convert to nF
Rm    = 1e-6/(g_leak*area)  # membrane resistance in MΩ
assert tau_m == cm*Rm       # just to check

n_exc = int(round((n*r_ei/(1+r_ei))))  # number of excitatory cells
n_inh = n - n_exc                      # number of inhibitory cells

if benchmark == "COBA":
    celltype = IF_cond_exp
    w_exc    = Gexc*1e-3              # We convert conductances to uS
    w_inh    = Ginh*1e-3
elif benchmark == "CUBA":
    celltype = IF_curr_exp
    w_exc    = 1e-3*Gexc*(Erev_exc - v_mean)  # (nA) weight of excitatory synapses
    w_inh    = 1e-3*Ginh*(Erev_inh - v_mean)  # (nA)
    assert w_exc > 0; assert w_inh < 0

# === Build the network ========================================================

extra = {'threads' : threads,
         'filename': "va_%s.xml" % benchmark,
         'label': 'VA'}
if simulator_name == "neuroml":
    extra["file"] = "VAbenchmarks.xml"

node_id = setup(timestep=dt, min_delay=delay, max_delay=delay, **extra)
np = num_processes()

host_name = socket.gethostname()
print "Host #%d is on %s" % (node_id+1, host_name)

print "%s Initialising the simulator with %d thread(s)..." % (node_id, extra['threads'])

cell_params = {
    'tau_m'      : tau_m,    'tau_syn_E'  : tau_exc,  'tau_syn_I'  : tau_inh,
    'v_rest'     : E_leak,   'v_reset'    : v_reset,  'v_thresh'   : v_thresh,
    'cm'         : cm,       'tau_refrac' : t_refrac}

if (benchmark == "COBA"):
    cell_params['e_rev_E'] = Erev_exc
    cell_params['e_rev_I'] = Erev_inh

timer.start()

print "%s Creating cell populations..."
% node_id exc_cells = Population(n_exc, celltype, cell_params, label="Excitatory_Cells") inh_cells = Population(n_inh, celltype, cell_params, label="Inhibitory_Cells") if benchmark == "COBA": ext_stim = Population(20, SpikeSourcePoisson, {'rate' : rate, 'duration' : stim_dur}, label="expoisson") rconn = 0.01 ext_conn = FixedProbabilityConnector(rconn, weights=0.1) print "%s Initialising membrane potential to random values..." % node_id rng = NumpyRNG(seed=rngseed, parallel_safe=parallel_safe) uniformDistr = RandomDistribution('uniform', [v_reset,v_thresh], rng=rng) exc_cells.initialize('v', uniformDistr) inh_cells.initialize('v', uniformDistr) print "%s Connecting populations..." % node_id exc_conn = FixedProbabilityConnector(pconn, weights=w_exc, delays=delay, verbose=True) inh_conn = FixedProbabilityConnector(pconn, weights=w_inh, delays=delay, verbose=True) connections={} connections['e2e'] = Projection(exc_cells, exc_cells, exc_conn, target='excitatory', rng=rng) connections['e2i'] = Projection(exc_cells, inh_cells, exc_conn, target='excitatory', rng=rng) connections['i2e'] = Projection(inh_cells, exc_cells, inh_conn, target='inhibitory', rng=rng) connections['i2i'] = Projection(inh_cells, inh_cells, inh_conn, target='inhibitory', rng=rng) if (benchmark == "COBA"): connections['ext2e'] = Projection(ext_stim, exc_cells, ext_conn, target='excitatory') connections['ext2i'] = Projection(ext_stim, inh_cells, ext_conn, target='excitatory') # === Setup recording ========================================================== print "%s Setting up recording..." % node_id exc_cells.record() inh_cells.record() exc_cells[[0, 1]].record_v() buildCPUTime = timer.diff() # === Save connections to file ================================================= #for prj in connections.keys(): #connections[prj].saveConnections('Results/VAbenchmark_%s_%s_%s_np%d.conn' % (benchmark, prj, simulator_name, np)) saveCPUTime = timer.diff() # === Run simulation =========================================================== print "%d Running simulation..." % node_id run(tstop) simCPUTime = timer.diff() E_count = exc_cells.meanSpikeCount() I_count = inh_cells.meanSpikeCount() # === Print results to file ==================================================== print "%d Writing data to file..." 
% node_id if not(os.path.isdir('Results')): os.mkdir('Results') exc_cells.printSpikes("Results/VAbenchmark_%s_exc_%s_np%d.ras" % (benchmark, simulator_name, np)) inh_cells.printSpikes("Results/VAbenchmark_%s_inh_%s_np%d.ras" % (benchmark, simulator_name, np)) exc_cells[[0, 1]].print_v("Results/VAbenchmark_%s_exc_%s_np%d.v" % (benchmark, simulator_name, np)) writeCPUTime = timer.diff() connections = "%d e→e %d e→i %d i→e %d i→i" % (connections['e2e'].size(), connections['e2i'].size(), connections['i2e'].size(), connections['i2i'].size()) if node_id == 0: print "\n--- Vogels-Abbott Network Simulation ---" print "Nodes : %d" % np print "Simulation type : %s" % benchmark print "Number of Neurons : %d" % n print "Number of Synapses : %s" % connections print "Excitatory conductance : %g nS" % Gexc print "Inhibitory conductance : %g nS" % Ginh print "Excitatory rate : %g Hz" % (E_count*1000.0/tstop,) print "Inhibitory rate : %g Hz" % (I_count*1000.0/tstop,) print "Build time : %g s" % buildCPUTime print "Save connections time : %g s" % saveCPUTime print "Simulation time : %g s" % simCPUTime print "Writing time : %g s" % writeCPUTime # === Finished with simulator ================================================== end() PyNN-0.7.4/examples/simple_STDP2.py0000644000175000017500000000255211736323052017533 0ustar andrewandrew00000000000000""" Very simple STDP test Andrew Davison, UNIC, CNRS January 2008 $Id: simple_STDP.py 607 2009-05-19 15:04:35Z apdavison $ """ import numpy from pyNN.utility import get_script_args sim_name = get_script_args(1)[0] exec("from pyNN.%s import *" % sim_name) setup(timestep=0.001, min_delay=0.1, max_delay=1.0, debug=True, quit_on_end=False) p1 = Population(1, SpikeSourceArray, {'spike_times': numpy.arange(1, 50, 1.0)}) p2 = Population(1, IF_curr_exp) stdp_model = STDPMechanism(timing_dependence=SpikePairRule(tau_plus=20.0, tau_minus=20.0), weight_dependence=AdditiveWeightDependence(w_min=0, w_max=0.8, A_plus=0.01, A_minus=0.012)) connection_method = AllToAllConnector(weights=0.48, delays=0.2) prj = Projection(p1, p2, method=connection_method, synapse_dynamics=SynapseDynamics(slow=stdp_model)) p1.record() p2.record() p2.record_v() t = [] w = [] for i in range(60): t.append(run(1.0)) w.extend(prj.getWeights()) w.extend(prj.getWeights()) p1.printSpikes("Results/simple_STDP_1_%s.ras" % sim_name) p2.printSpikes("Results/simple_STDP_2_%s.ras" % sim_name) p2.print_v("Results/simple_STDP_%s.v" % sim_name) print w f = open("Results/simple_STDP_%s.w" % sim_name, 'w') f.write("\n".join([str(ww) for ww in w])) f.close() end() PyNN-0.7.4/examples/EIF_cond_exp_isfa_ista.py0000644000175000017500000000112511736323052021625 0ustar andrewandrew00000000000000""" Test of the EIF_cond_alpha_isfa_ista model Andrew Davison, UNIC, CNRS December 2007 $Id: EIF_cond_alpha_isfa_ista.py 772 2010-08-03 07:41:36Z apdavison $ """ from pyNN.utility import get_script_args simulator_name = get_script_args(1)[0] exec("from pyNN.%s import *" % simulator_name) setup(timestep=0.01,min_delay=0.1,max_delay=4.0,debug=True) ifcell = create(EIF_cond_exp_isfa_ista, {'i_offset': 1.0, 'tau_refrac': 2.0, 'v_spike': -40}) print ifcell[0].get_parameters() record_v(ifcell,"Results/EIF_cond_exp_isfa_ista_%s.v" % simulator_name) run(200.0) end() PyNN-0.7.4/examples/HH_cond_exp2.py0000644000175000017500000000326511736323052017570 0ustar andrewandrew00000000000000""" A single-compartment Hodgkin-Huxley neuron with exponential, conductance-based synapses, fed by a current injection. 
Run as: $ python HH_cond_exp2.py where is 'neuron', 'nest', etc Andrew Davison, UNIC, CNRS March 2010 $Id:$ """ from pyNN.utility import get_script_args simulator_name = get_script_args(1)[0] exec("from pyNN.%s import *" % simulator_name) setup(timestep=0.001, min_delay=0.1) cellparams = { 'gbar_Na' : 20.0, 'gbar_K' : 6.0, 'g_leak' : 0.01, 'cm' : 0.2, 'v_offset' : -63.0, 'e_rev_Na' : 50.0, 'e_rev_K' : -90.0, 'e_rev_leak': -65.0, 'e_rev_E' : 0.0, 'e_rev_I' : -80.0, 'tau_syn_E' : 0.2, 'tau_syn_I' : 2.0, 'i_offset' : 1.0, } hhcell = create(HH_cond_exp, cellparams=cellparams) initialize(hhcell, 'v', -64.0) record_v(hhcell, "Results/HH_cond_exp2_%s.v" % simulator_name) var_names = { 'neuron': {'m': "seg.m_hh_traub", 'h': "seg.h_hh_traub", 'n': "seg.n_hh_traub"}, 'brian': {'m': 'm', 'h': 'h', 'n': 'n'}, } if simulator_name in var_names: hhcell.can_record = lambda x: True # hack for native_name in var_names[simulator_name].values(): hhcell._record(native_name) run(20.0) import pylab pylab.rcParams['interactive'] = True id, t, v = hhcell.get_v().T pylab.plot(t, v) pylab.xlabel("time (ms)") pylab.ylabel("Vm (mV)") if simulator_name in var_names: pylab.figure(2) for var_name, native_name in var_names[simulator_name].items(): id, t, x = hhcell.recorders[native_name].get().T pylab.plot(t, x, label=var_name) pylab.xlabel("time (ms)") pylab.legend() end() PyNN-0.7.4/examples/IF_curr_alpha2.py0000644000175000017500000000131311736323052020100 0ustar andrewandrew00000000000000""" A single IF neuron with alpha-function shaped, current-based synapses, fed by a single spike source. Andrew Davison, UNIC, CNRS May 2006 $Id: IF_curr_alpha2.py 772 2010-08-03 07:41:36Z apdavison $ """ from pyNN.utility import get_script_args simulator_name = get_script_args(1)[0] exec("from pyNN.%s import *" % simulator_name) id = setup(timestep=0.01,min_delay=0.1) ifcells = create(IF_curr_alpha, {'i_offset' : 0.1, 'tau_refrac' : 0.1, 'v_thresh' : -52.2},n=5) spike_source = create(SpikeSourceArray, {'spike_times': [0.1*float(i) for i in range(1,1001,1)]}) conn = connect(spike_source,ifcells,weight=1.5) record_v(ifcells[0:1], "Results/IF_curr_alpha2_%s.v" % simulator_name) run(100.0) end() PyNN-0.7.4/examples/VAbenchmarks2.py0000644000175000017500000001646511736323052017764 0ustar andrewandrew00000000000000# coding: utf-8 """ An implementation of benchmarks 1 and 2 from Brette et al. (2007) Journal of Computational Neuroscience 23: 349-398 The IF network is based on the CUBA and COBA models of Vogels & Abbott (J. Neurosci, 2005). The model consists of a network of excitatory and inhibitory neurons, connected via current-based "exponential" synapses (instantaneous rise, exponential decay). This version demonstrates an alternative way to set-up the network, using PopulationViews. Andrew Davison, UNIC, CNRS March 2010 $Id: $ """ import os import socket from math import * from pyNN.utility import get_script_args, Timer usage = """Usage: python VAbenchmarks.py is either neuron, nest, brian or pcsim is either CUBA or COBA.""" simulator_name, benchmark = get_script_args(2, usage) exec("from pyNN.%s import *" % simulator_name) from pyNN.random import NumpyRNG, RandomDistribution timer = Timer() # === Define parameters ======================================================== threads = 1 rngseed = 98765 parallel_safe = True n = 4000 # number of cells r_ei = 4.0 # number of excitatory cells:number of inhibitory cells pconn = 0.02 # connection probability stim_dur = 50. # (ms) duration of random stimulation rate = 100. 
# (Hz) frequency of the random stimulation

dt       = 0.1    # (ms) simulation timestep
tstop    = 1000   # (ms) simulation duration
delay    = 0.2

# Cell parameters
area     = 20000. # (µm²)
tau_m    = 20.    # (ms)
cm       = 1.     # (µF/cm²)
g_leak   = 5e-5   # (S/cm²)
if benchmark == "COBA":
    E_leak   = -60.  # (mV)
elif benchmark == "CUBA":
    E_leak   = -49.  # (mV)
v_thresh = -50.   # (mV)
v_reset  = -60.   # (mV)
t_refrac = 5.     # (ms) (clamped at v_reset)
v_mean   = -60.   # (mV) 'mean' membrane potential, for calculating CUBA weights
tau_exc  = 5.     # (ms)
tau_inh  = 10.    # (ms)

# Synapse parameters
if benchmark == "COBA":
    Gexc = 4.     # (nS)
    Ginh = 51.    # (nS)
elif benchmark == "CUBA":
    Gexc = 0.27   # (nS) # Those weights should be similar to the COBA weights
    Ginh = 4.5    # (nS) # but the depolarising drift should be taken into account
Erev_exc = 0.     # (mV)
Erev_inh = -80.   # (mV)

### what is the synaptic delay???

# === Calculate derived parameters =============================================

area  = area*1e-8           # convert to cm²
cm    = cm*area*1000        # convert to nF
Rm    = 1e-6/(g_leak*area)  # membrane resistance in MΩ
assert tau_m == cm*Rm       # just to check

n_exc = int(round((n*r_ei/(1+r_ei))))  # number of excitatory cells
n_inh = n - n_exc                      # number of inhibitory cells

if benchmark == "COBA":
    celltype = IF_cond_exp
    w_exc    = Gexc*1e-3              # We convert conductances to uS
    w_inh    = Ginh*1e-3
elif benchmark == "CUBA":
    celltype = IF_curr_exp
    w_exc    = 1e-3*Gexc*(Erev_exc - v_mean)  # (nA) weight of excitatory synapses
    w_inh    = 1e-3*Ginh*(Erev_inh - v_mean)  # (nA)
    assert w_exc > 0; assert w_inh < 0

# === Build the network ========================================================

extra = {'threads' : threads,
         'filename': "va2_%s.xml" % benchmark,
         'label': 'VA2'}
if simulator_name == "neuroml":
    extra["file"] = "VAbenchmarks.xml"

node_id = setup(timestep=dt, min_delay=delay, max_delay=delay, **extra)
np = num_processes()

host_name = socket.gethostname()
print "Host #%d is on %s" % (node_id+1, host_name)

print "%s Initialising the simulator with %d thread(s)..." % (node_id, extra['threads'])

cell_params = {
    'tau_m'      : tau_m,    'tau_syn_E'  : tau_exc,  'tau_syn_I'  : tau_inh,
    'v_rest'     : E_leak,   'v_reset'    : v_reset,  'v_thresh'   : v_thresh,
    'cm'         : cm,       'tau_refrac' : t_refrac}

if (benchmark == "COBA"):
    cell_params['e_rev_E'] = Erev_exc
    cell_params['e_rev_I'] = Erev_inh

timer.start()

print "%s Creating cell populations..." % node_id
all_cells = Population(n_exc+n_inh, celltype, cell_params, label="All Cells")
exc_cells = all_cells[:n_exc]; exc_cells.label = "Excitatory cells"
inh_cells = all_cells[n_exc:]; inh_cells.label = "Inhibitory cells"
if benchmark == "COBA":
    ext_stim = Population(20, SpikeSourcePoisson, {'rate' : rate, 'duration' : stim_dur}, label="expoisson")
    rconn = 0.01
    ext_conn = FixedProbabilityConnector(rconn, weights=0.1)

print "%s Initialising membrane potential to random values..." % node_id
rng = NumpyRNG(seed=rngseed, parallel_safe=parallel_safe)
uniformDistr = RandomDistribution('uniform', [v_reset,v_thresh], rng=rng)
all_cells.initialize('v', uniformDistr)

print "%s Connecting populations..."
% node_id exc_conn = FixedProbabilityConnector(pconn, weights=w_exc, delays=delay) inh_conn = FixedProbabilityConnector(pconn, weights=w_inh, delays=delay) connections={} connections['exc'] = Projection(exc_cells, all_cells, exc_conn, target='excitatory', rng=rng) connections['inh'] = Projection(inh_cells, all_cells, inh_conn, target='inhibitory', rng=rng) if (benchmark == "COBA"): connections['ext'] = Projection(ext_stim, all_cells, ext_conn, target='excitatory') # === Setup recording ========================================================== print "%s Setting up recording..." % node_id all_cells.record() exc_cells[[0, 1]].record_v() buildCPUTime = timer.diff() # === Save connections to file ================================================= #print "%s Saving connections to file..." % node_id #for prj in connections.keys(): # connections[prj].saveConnections('Results/VAbenchmark_%s_%s_%s_np%d.conn' % (benchmark, prj, simulator_name, np)) #saveCPUTime = timer.diff() # === Run simulation =========================================================== print "%d Running simulation..." % node_id run(tstop) simCPUTime = timer.diff() E_count = exc_cells.meanSpikeCount() I_count = inh_cells.meanSpikeCount() # === Print results to file ==================================================== print "%d Writing data to file..." % node_id if not(os.path.isdir('Results')): os.mkdir('Results') exc_cells.printSpikes("Results/VAbenchmark_%s_exc_%s_np%d.ras" % (benchmark, simulator_name, np)) inh_cells.printSpikes("Results/VAbenchmark_%s_inh_%s_np%d.ras" % (benchmark, simulator_name, np)) exc_cells[[0, 1]].print_v("Results/VAbenchmark_%s_exc_%s_np%d.v" % (benchmark, simulator_name, np)) writeCPUTime = timer.diff() connections = "%d e→e,i %d i→e,i" % (connections['exc'].size(), connections['inh'].size(),) if node_id == 0: print "\n--- Vogels-Abbott Network Simulation ---" print "Nodes : %d" % np print "Simulation type : %s" % benchmark print "Number of Neurons : %d" % n print "Number of Synapses : %s" % connections print "Excitatory conductance : %g nS" % Gexc print "Inhibitory conductance : %g nS" % Ginh print "Excitatory rate : %g Hz" % (E_count*1000.0/tstop,) print "Inhibitory rate : %g Hz" % (I_count*1000.0/tstop,) print "Build time : %g s" % buildCPUTime #print "Save connections time : %g s" % saveCPUTime print "Simulation time : %g s" % simCPUTime print "Writing time : %g s" % writeCPUTime # === Finished with simulator ================================================== end() PyNN-0.7.4/examples/IF_curr_exp.py0000644000175000017500000000222511736323052017530 0ustar andrewandrew00000000000000""" A single IF neuron with exponential, current-based synapses, fed by two spike sources. 
Run as: $ python IF_curr_exp.py where is 'neuron', 'nest', etc Andrew Davison, UNIC, CNRS September 2006 $Id: IF_curr_exp.py 756 2010-05-18 12:57:19Z apdavison $ """ from pyNN.utility import get_script_args simulator_name = get_script_args(1)[0] exec("from pyNN.%s import *" % simulator_name) setup(timestep=0.1, min_delay=0.1, max_delay=4.0) ifcell = create(IF_curr_exp,{'i_offset' : 0.1, 'tau_refrac' : 3.0, 'v_thresh' : -51.0, 'tau_syn_E' : 2.0, 'tau_syn_I': 5.0, 'v_reset' : -70.0}) spike_sourceE = create(SpikeSourceArray, {'spike_times': [float(i) for i in range(5,105,10)]}) spike_sourceI = create(SpikeSourceArray, {'spike_times': [float(i) for i in range(155,255,10)]}) connE = connect(spike_sourceE, ifcell, weight=1.5, synapse_type='excitatory', delay=2.0) connI = connect(spike_sourceI, ifcell, weight=-1.5, synapse_type='inhibitory', delay=4.0) record_v(ifcell, "Results/IF_curr_exp_%s.v" % simulator_name) initialize(ifcell, 'v', -53.2) run(200.0) end() PyNN-0.7.4/examples/IF_curr_exp2.py0000644000175000017500000000234211736323052017612 0ustar andrewandrew00000000000000""" A single IF neuron with exponential, current-based synapses, fed by a single Poisson spike source. Run as: $ python IF_curr_exp2.py where is 'neuron', 'nest', etc $Id: IF_curr_exp2.py 915 2011-01-27 10:57:12Z apdavison $ """ import numpy from pyNN.utility import get_script_args simulator_name = get_script_args(1)[0] exec("from pyNN.%s import *" % simulator_name) from pyNN.random import NumpyRNG setup(timestep=0.01, min_delay=2.0, max_delay=4.0) ifcell = create(IF_curr_exp,{'i_offset' : 0.1, 'tau_refrac' : 3.0, 'v_thresh' : -51.0, 'tau_syn_E' : 2.0, 'tau_syn_I': 5.0, 'v_reset' : -70.0}) input_rate = 200.0 simtime = 1000.0 seed = 240965239 rng = NumpyRNG(seed=seed) n_spikes = input_rate*simtime/1000.0 spike_times = numpy.add.accumulate(rng.next(n_spikes, 'exponential', [1000.0/input_rate])) spike_source = create(SpikeSourceArray, {'spike_times': spike_times}) conn = connect(spike_source, ifcell, weight=1.5, synapse_type='excitatory', delay=2.0) record(ifcell, "Results/IF_curr_exp2_%s.ras" % simulator_name) record_v(ifcell, "Results/IF_curr_exp2_%s.v" % simulator_name) initialize(ifcell, 'v', -53.2) run(simtime) end() PyNN-0.7.4/AUTHORS0000644000175000017500000000272511736323052014210 0ustar andrewandrew00000000000000The following people have contributed to PyNN. Their affiliations at the time of the contributions are shown below. 
Andrew Davison [1] Pierre Yger [1, 9] Eilif Muller [7, 13] Jens Kremkow [5,6] Daniel Brüderle [2] Laurent Perrinet [6] Jochen Eppler [3,4] Dejan Pecevski [8] Nicolas Debeissat [10] Mikael Djurfeldt [12] Michael Schmuker [11] Bernhard Kaplan [2] Thomas Natschlaeger [8] Subhasis Ray [14] 1 Unité de Neuroscience, Information et Complexité, CNRS, Gif sur Yvette, France 2 Kirchhoff Institute for Physics, University of Heidelberg, Heidelberg, Germany 3 Honda Research Institute Europe GmbH, Offenbach, Germany 4 Bernstein Center for Computational Neuroscience, Albert-Ludwigs-University, Freiburg, Germany 5 Neurobiology and Biophysics, Institute of Biology III, Albert-Ludwigs-University, Freiburg, Germany 6 Institut de Neurosciences Cognitives de la Méditerranée, CNRS, Marseille, France 7 Laboratory of Computational Neuroscience, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland 8 Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria 9 Department of Bioengineering, Imperial College London 10 INRIA, Sophia Antipolis, France 11 Neuroinformatics & Theoretical Neuroscience, Freie Universität Berlin, Berlin, Germany 12 International Neuroinformatics Coordinating Facility, Stockholm, Sweden 13 Blue Brain Project, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland 14 NCBS, Bangalore, IndiaPyNN-0.7.4/test/0000755000175000017500000000000011774605211014113 5ustar andrewandrew00000000000000PyNN-0.7.4/test/unittests/0000755000175000017500000000000011774605211016155 5ustar andrewandrew00000000000000PyNN-0.7.4/test/unittests/test_descriptions.py0000644000175000017500000000420611736323051022273 0ustar andrewandrew00000000000000from pyNN import common, errors, random, standardmodels, space, descriptions from nose.tools import assert_equal import numpy from mock import Mock import os.path class MockTemplateEngine(descriptions.TemplateEngine): render = Mock(return_value="african swallow") get_template = Mock() def test_get_default_template_engine(): engine = descriptions.get_default_template_engine() assert issubclass(engine, descriptions.TemplateEngine) def test_render_with_no_template(): context = {'a':2, 'b':3} result = descriptions.render(Mock(), None, context) assert_equal(result, context) def test_render_with_template(): engine = MockTemplateEngine context = {'a':2, 'b':3} template = "abcdefg" result = descriptions.render(engine, template, context) engine.render.assert_called_with(template, context) assert_equal(result, "african swallow") def test_StringTE_get_template(): result = descriptions.StringTemplateEngine.get_template("$a $b c d") assert_equal(result.template, "$a $b c d") def test_StringTE_get_template_from_file(): filename = "population_default.txt" result = descriptions.StringTemplateEngine.get_template(filename) assert result.template != filename def test_StringTE_render(): context = {'a':2, 'b':3} result = descriptions.StringTemplateEngine.render("$a $b c d", context) assert_equal(result, "2 3 c d") def test_Jinja2TE_get_template_from_file(): filename = "population_default.txt" result = descriptions.Jinja2TemplateEngine.get_template(filename) assert_equal(os.path.basename(result.filename), filename) def test_Jinja2TE_render(): context = {'a':2, 'b':3} result = descriptions.Jinja2TemplateEngine.render("{{a}} {{b}} c d", context) assert_equal(result, "2 3 c d") def test_CheetahTE_get_template_from_file(): filename = "population_default.txt" result = descriptions.CheetahTemplateEngine.get_template(filename) # incomplete test def 
test_CheetahTE_render(): context = {'a':2, 'b':3} result = descriptions.CheetahTemplateEngine.render("$a $b c d", context) assert_equal(result, "2 3 c d") PyNN-0.7.4/test/unittests/test_random.py0000644000175000017500000001733311736323051021052 0ustar andrewandrew00000000000000""" Unit tests for pyNN/random.py. $Id: rngtests.py 698 2010-01-26 15:47:04Z apdavison $ """ import pyNN.random as random import numpy import unittest def assert_arrays_almost_equal(a, b, threshold): if not (abs(a-b) < threshold).all(): err_msg = "%s != %s" % (a, b) err_msg += "\nlargest difference = %g" % abs(a-b).max() raise unittest.TestCase.failureException(err_msg) # ============================================================================== class SimpleTests(unittest.TestCase): """Simple tests on a single RNG function.""" def setUp(self): random.mpi_rank=0; random.num_processes=1 self.rnglist = [random.NumpyRNG(seed=987)] if random.have_gsl: self.rnglist.append(random.GSLRNG(seed=654)) def testNextOne(self): """Calling next() with no arguments or with n=1 should return a float.""" for rng in self.rnglist: assert isinstance(rng.next(),float) assert isinstance(rng.next(1),float) assert isinstance(rng.next(n=1),float) def testNextTwoPlus(self): """Calling next(n=m) where m > 1 should return an array.""" for rng in self.rnglist: self.assertEqual(len(rng.next(5)),5) self.assertEqual(len(rng.next(n=5)),5) def testNonPositiveN(self): """Calling next(m) where m < 0 should raise a ValueError.""" for rng in self.rnglist: self.assertRaises(ValueError,rng.next,-1) def testNZero(self): """Calling next(0) should return an empty array.""" for rng in self.rnglist: self.assertEqual(len(rng.next(0)), 0) def test_invalid_seed(self): self.assertRaises(AssertionError, random.NumpyRNG, seed=2.3) class ParallelTests(unittest.TestCase): def setUp(self): self.rng_types = [random.NumpyRNG] if random.have_gsl: self.rng_types.append(random.GSLRNG) def test_parallel_unsafe(self): for rng_type in self.rng_types: random.mpi_rank=0; random.num_processes=2 rng0 = rng_type(seed=1000, parallel_safe=False) random.mpi_rank=1; random.num_processes=2 rng1 = rng_type(seed=1000, parallel_safe=False) self.assertEqual(rng0.seed, 1000) self.assertEqual(rng1.seed, 1001) draw0 = rng0.next(5) draw1 = rng1.next(5) self.assertEqual(len(draw0), 5/2+1) self.assertEqual(len(draw1), 5/2+1) self.assertNotEqual(draw0.tolist(), draw1.tolist()) def test_parallel_safe_with_mask_local(self): for rng_type in self.rng_types: random.mpi_rank=0; random.num_processes=2 rng0 = rng_type(seed=1000, parallel_safe=True) random.mpi_rank=1; random.num_processes=2 rng1 = rng_type(seed=1000, parallel_safe=True) draw0 = rng0.next(5, mask_local=numpy.array((1,0,1,0,1), bool)) draw1 = rng1.next(5, mask_local=numpy.array((0,1,0,1,0), bool)) self.assertEqual(len(draw0), 3) self.assertEqual(len(draw1), 2) self.assertNotEqual(draw0.tolist(), draw1.tolist()) def test_parallel_safe_with_mask_local_None(self): for rng_type in self.rng_types: random.mpi_rank=0; random.num_processes=2 rng0 = rng_type(seed=1000, parallel_safe=True) self.assertRaises(Exception, rng0.next, 5, mask_local=None) def test_parallel_safe_with_mask_local_False(self): for rng_type in self.rng_types: random.mpi_rank=0; random.num_processes=2 rng0 = rng_type(seed=1000, parallel_safe=True) random.mpi_rank=1; random.num_processes=2 rng1 = rng_type(seed=1000, parallel_safe=True) draw0 = rng0.next(5, mask_local=False) draw1 = rng1.next(5, mask_local=False) self.assertEqual(len(draw0), 5) self.assertEqual(len(draw1), 5) 
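            # with parallel_safe=True and mask_local=False, every process draws
            # the full sequence from the same seed, so the two mock ranks
            # should produce identical values: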
self.assertEqual(draw0.tolist(), draw1.tolist()) def test_permutation(self): # only works for NumpyRNG at the moment. pygsl has a permutation module, but I can't find documentation for it. random.mpi_rank=0; random.num_processes=2 rng0 = random.NumpyRNG(seed=1000, parallel_safe=True) random.mpi_rank=1; random.num_processes=2 rng1 = random.NumpyRNG(seed=1000, parallel_safe=True) A = range(10) perm0 = rng0.permutation(A) perm1 = rng1.permutation(A) assert_arrays_almost_equal(perm0, perm1, 1e-99) class NativeRNGTests(unittest.TestCase): def test_create(self): rng = random.NativeRNG(seed=8274528) str(rng) class RandomDistributionTests(unittest.TestCase): def setUp(self): random.mpi_rank=0; random.num_processes=1 self.rnglist = [random.NumpyRNG(seed=987)] if random.have_gsl: self.rnglist.append(random.GSLRNG(seed=654)) def test_uniform(self): rd = random.RandomDistribution(distribution='uniform', parameters=[-1.0, 3.0], rng=self.rnglist[0]) vals = rd.next(100) assert vals.min() >= -1.0 assert vals.max() < 3.0 assert abs(vals.mean() - 1.0) < 0.2 if random.have_gsl: # GSL uniform is always between 0 and 1 rd = random.RandomDistribution(distribution='uniform', parameters=[], rng=self.rnglist[1]) vals = rd.next(100) assert vals.min() >= 0.0 assert vals.max() < 1.0 assert abs(vals.mean() - 0.5) < 0.05 def test_gaussian(self): mean = 1.0 std = 1.0 rd1 = random.RandomDistribution(distribution='normal', parameters=[mean, std], rng=self.rnglist[0]) vals_list = [rd1.next(100)] if random.have_gsl: # GSL gaussian is always centred on zero rd2 = random.RandomDistribution(distribution='gaussian', parameters=[std], rng=self.rnglist[1]) vals_list.append(mean + rd2.next(100)) for vals in vals_list: assert vals.min() > mean-4*std assert vals.min() < mean+4*std assert abs(vals.mean() - mean) < 0.2, abs(vals.mean() - mean) def test_gamma(self): a = 0.5 b = 0.5 for rng in self.rnglist: rd = random.RandomDistribution(distribution='gamma', parameters=[a, b], rng=rng) vals = rd.next(100) # need to check vals are as expected str(rd) # should be in a separate test def test_boundaries(self): rd = random.RandomDistribution(distribution='uniform', parameters=[-1.0, 1.0], rng=self.rnglist[0], boundaries=[0.0, 1.0], constrain="clip") vals = rd.next(100) assert vals.min() == 0 assert vals.max() < 1.0 assert abs(vals.mean() - 0.25) < 0.05 rd = random.RandomDistribution(distribution='uniform', parameters=[-1.0, 1.0], rng=self.rnglist[0], boundaries=[0.0, 1.0], constrain="redraw") vals = rd.next(100) assert vals.min() >= 0 assert vals.max() < 1.0 assert abs(vals.mean() - 0.5) < 0.05 val = rd.next() rd = random.RandomDistribution(distribution='uniform', parameters=[-1.0, 1.0], rng=self.rnglist[0], boundaries=[0.0, 1.0], constrain=None) self.assertRaises(Exception, rd.next) # ============================================================================== if __name__ == "__main__": unittest.main() PyNN-0.7.4/test/unittests/test_neuron.py0000644000175000017500000002274611774601632021112 0ustar andrewandrew00000000000000from neuron import h from pyNN.neuron import electrodes, recording, simulator from pyNN import common from mock import Mock from nose.tools import assert_equal, assert_raises, assert_almost_equal import numpy class MockCellClass(object): recordable = ['v'] parameters = ['romans', 'judeans'] injectable = True @classmethod def has_parameter(cls, name): return name in cls.parameters class MockCell(object): parameter_names = ['romans', 'judeans'] def __init__(self, romans=0, judeans=1): self.source_section = h.Section() 
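        # a real NEURON section and a membrane-potential reference, so the
        # tests can build genuine NetCon/recording objects around this mock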
self.source = self.source_section(0.5)._ref_v self.synapse = h.tmgsyn(self.source_section(0.5)) self.record = Mock() self.record_v = Mock() self.record_gsyn = Mock() self.memb_init = Mock() self.excitatory_TM = None self.inhibitory_TM = None self.romans = romans self.judeans = judeans self.foo_init = -99.9 class MockID(int): def __init__(self, n): int.__init__(n) self.local = bool(n%2) self.celltype = MockCellClass() self._cell = MockCell() class MockPopulation(common.BasePopulation): celltype = MockCellClass() local_cells = [MockID(44), MockID(33)] def test_is_point_process(): section = h.Section() clamp = h.SEClamp(section(0.5)) assert simulator.is_point_process(clamp) section.insert('hh') assert not simulator.is_point_process(section(0.5).hh) def test_native_rng_pick(): rng = Mock() rng.seed = 28754 rarr = simulator.nativeRNG_pick(100, rng, 'uniform', [-3,6]) assert isinstance(rarr, numpy.ndarray) assert_equal(rarr.shape, (100,)) assert -3 <= rarr.min() < -2.5 assert 5.5 < rarr.max() < 6 def test_register_gid(): cell = MockCell() simulator.register_gid(84568345, cell.source, cell.source_section) class TestInitializer(object): def test_initializer_initialize(self): init = simulator.initializer orig_initialize = init._initialize init._initialize = Mock() h.finitialize(-65) init._initialize.assert_called() init._initialize = orig_initialize def test_register(self): init = simulator.initializer cell = MockID(22) pop = MockPopulation() init.clear() init.register(cell, pop) assert_equal(init.cell_list, [cell]) assert_equal(init.population_list, [pop]) def test_initialize(self): init = simulator.initializer cell = MockID(77) pop = MockPopulation() init.register(cell, pop) init._initialize() cell._cell.memb_init.assert_called() for pcell in pop.local_cells: pcell._cell.memb_init.assert_called() def test_clear(self): init = simulator.initializer init.cell_list = range(10) init.population_list = range(10) init.clear() assert_equal(init.cell_list, []) assert_equal(init.population_list, []) class TestState(object): def test_dt_property(self): simulator.state.dt = 0.01 assert_equal(h.dt, 0.01) assert_equal(h.steps_per_ms, 100.0) assert_equal(simulator.state.dt, 0.01) def test_reset(): simulator.state.running = True simulator.state.t = 17 simulator.state.tstop = 123 init = simulator.initializer orig_initialize = init._initialize init._initialize = Mock() simulator.reset() assert_equal(simulator.state.running, False) assert_equal(simulator.state.t, 0.0) assert_equal(simulator.state.tstop, 0.0) init._initialize.assert_called() init._initialize = orig_initialize def test_run(): simulator.reset() simulator.run(12.3) assert_almost_equal(h.t, 12.3, places=11) simulator.run(7.7) assert_almost_equal(h.t, 20.0, places=11) def test_finalize(): orig_pc = simulator.state.parallel_context simulator.state.parallel_context = Mock() simulator.finalize() simulator.state.parallel_context.runworker.assert_called() simulator.state.parallel_context.done.assert_called() simulator.state.parallel_context = orig_pc class TestID(object): def setup(self): self.id = simulator.ID(984329856) self.id.parent = MockPopulation() self.id._cell = MockCell() def test_create(self): assert_equal(self.id, 984329856) def test_build_cell(self): parameters = {'judeans': 1, 'romans': 0} self.id._build_cell(MockCell, parameters) def test_get_native_parameters(self): D = self.id.get_native_parameters() assert isinstance(D, dict) def test_set_native_parameters(self): self.id.set_native_parameters({'romans': 3, 'judeans': 1}) def 
test_get_initial_value(self): foo_init = self.id.get_initial_value('foo') assert_equal(foo_init, -99.9) #def test_set_initial_value(self): class TestConnection(object): def setup(self): self.source = MockID(252) self.target = MockID(539) self.nc = h.NetCon(self.source._cell.source, self.target._cell.synapse, sec=self.source._cell.source_section) self.nc.delay = 1.0 self.c = simulator.Connection(self.source, self.target, self.nc) def test_create(self): c = self.c assert_equal(c.source, self.source) assert_equal(c.target, self.target) def test_useSTDP(self): self.c.useSTDP("StdwaSA", {'wmax': 0.04}, ddf=0) def test_weight_property(self): self.nc.weight[0] = 0.123 assert_equal(self.c.weight, 0.123) self.c.weight = 0.234 assert_equal(self.nc.weight[0], 0.234) def test_delay_property(self): self.nc.delay = 12.3 assert_equal(self.c.delay, 12.3) self.c.delay = 23.4 assert_equal(self.nc.delay, 23.4) def test_U_property(self): self.target._cell.synapse.U = 0.1 assert_equal(self.c.U, 0.1) self.c.U = 0.2 assert_equal(self.target._cell.synapse.U, 0.2) def test_w_max_property(self): self.c.useSTDP("StdwaSA", {'wmax': 0.04}, ddf=0) assert_equal(self.c.w_max, 0.04) self.c.w_max = 0.05 assert_equal(self.c.weight_adjuster.wmax, 0.05) # electrodes class TestCurrentSources(object): def setup(self): self.cells = [MockID(n) for n in range(5)] def test_inject_dc(self): cs = electrodes.DCSource() cs.inject_into(self.cells) assert_equal(cs.stop, 1e12) assert_equal(len(cs._devices), 2) def test_inject_step_current(self): cs = electrodes.StepCurrentSource([1,2,3], [0.5, 1.5, 2.5]) cs.inject_into(self.cells) assert_equal(len(cs._devices), 2)# 2 local cells # need more assertions about iclamps, vectors # recording class TestRecorder(object): def setup(self): if "foo" not in recording.Recorder.formats: recording.Recorder.formats['foo'] = "bar" self.rv = recording.Recorder('v') self.rg = recording.Recorder('gsyn') self.rs = recording.Recorder('spikes') self.rf = recording.Recorder('foo') self.cells = [MockID(22), MockID(29)] def teardown(self): recording.Recorder.formats.pop("foo") def test__record(self): self.rv._record(self.cells) self.rg._record(self.cells) self.rs._record(self.cells) for cell in self.cells: cell._cell.record.assert_called_with(1) cell._cell.record_v.assert_called_with(1) cell._cell.record_gsyn.assert_called_with('inhibitory', 1) assert_raises(Exception, self.rf._record, self.cells) def test__get_v(self): self.rv.recorded = self.cells self.cells[0]._cell.vtrace = numpy.arange(-65.0, -64.0, 0.1) self.cells[1]._cell.vtrace = numpy.arange(-64.0, -65.0, -0.1) self.cells[0]._cell.record_times = self.cells[1]._cell.record_times = numpy.arange(0.0, 1.0, 0.1) vdata = self.rv._get(gather=False, compatible_output=True, filter=None) assert_equal(vdata.shape, (20,3)) def test__get_spikes(self): self.rs.recorded = self.cells self.cells[0]._cell.spike_times = numpy.arange(101.0, 111.0) self.cells[1]._cell.spike_times = numpy.arange(13.0, 23.0) simulator.state.t = 111.0 sdata = self.rs._get(gather=False, compatible_output=True, filter=None) assert_equal(sdata.shape, (20,2)) def test__get_gsyn(self): self.rg.recorded = self.cells for cell in self.cells: cell._cell.gsyn_trace = {} cell._cell.gsyn_trace['excitatory'] = numpy.arange(0.01, 0.0199, 0.001) cell._cell.gsyn_trace['inhibitory'] = numpy.arange(1.01, 1.0199, 0.001) cell._cell.gsyn_trace['excitatory_TM'] = numpy.arange(2.01, 2.0199, 0.001) cell._cell.gsyn_trace['inhibitory_TM'] = numpy.arange(4.01, 4.0199, 0.001) cell._cell.record_times = 
self.cells[1]._cell.record_times = numpy.arange(0.0, 1.0, 0.1) gdata = self.rg._get(gather=False, compatible_output=True, filter=None) assert_equal(gdata.shape, (20,4)) def test__local_count(self): self.rs.recorded = self.cells self.cells[0]._cell.spike_times = h.Vector(numpy.arange(101.0, 111.0)) self.cells[1]._cell.spike_times = h.Vector(numpy.arange(13.0, 33.0)) assert_equal(self.rs._local_count(filter=None), {22: 10, 29: 20}) PyNN-0.7.4/test/unittests/test_assembly.py0000644000175000017500000001426511736323051021412 0ustar andrewandrew00000000000000from pyNN import common from pyNN.common import Assembly, BasePopulation from nose.tools import assert_equal, assert_raises import numpy from pyNN.utility import assert_arrays_equal from mock import Mock def test_create_with_zero_populations(): a = Assembly() assert_equal(a.populations, []) assert isinstance(a.label, basestring) class MockPopulation(BasePopulation): size = 10 local_cells = numpy.arange(1,10,2) all_cells = numpy.arange(10) _mask_local = numpy.arange(10)%2 == 1 initialize = Mock() positions = numpy.arange(3*size).reshape(3,size) def describe(self, template='abcd', engine=None): if template is None: return {'label': 'dummy'} else: return "" def test_create_with_one_population(): p = MockPopulation() a = Assembly(p) assert_equal(a.populations, [p]) assert isinstance(a.label, basestring) def test_create_with_two_populations(): p1 = MockPopulation() p2 = MockPopulation() a = Assembly(p1, p2, label="test") assert_equal(a.populations, [p1, p2]) assert_equal(a.label, "test") def test_create_with_non_population_should_raise_Exception(): assert_raises(TypeError, Assembly, [1, 2, 3]) def test_size_property(): p1 = MockPopulation() p2 = MockPopulation() a = Assembly(p1, p2, label="test") assert_equal(a.size, p1.size + p2.size) def test_positions_property(): p1 = MockPopulation() p2 = MockPopulation() a = Assembly(p1, p2, label="test") assert_arrays_equal(a.positions, numpy.concatenate((p1.positions, p2.positions), axis=1)) def test__len__(): p1 = MockPopulation() p2 = MockPopulation() a = Assembly(p1, p2, label="test") assert_equal(len(a), len(p1) + len(p2)) def test_local_cells(): p1 = MockPopulation() p2 = MockPopulation() a = Assembly(p1, p2, label="test") assert_arrays_equal(a.local_cells, numpy.append(p1.local_cells, p2.local_cells)) def test_all_cells(): p1 = MockPopulation() p2 = MockPopulation() a = Assembly(p1, p2, label="test") assert_arrays_equal(a.all_cells, numpy.append(p1.all_cells, p2.all_cells)) def test_iter(): p1 = MockPopulation() p2 = MockPopulation() a = Assembly(p1, p2, label="test") assembly_ids = [id for id in a] def test__add__population(): p1 = MockPopulation() p2 = MockPopulation() a1 = Assembly(p1) assert_equal(a1.populations, [p1]) a2 = a1 + p2 assert_equal(a1.populations, [p1]) assert_equal(a2.populations, [p1, p2]) def test__add__assembly(): p1 = MockPopulation() p2 = MockPopulation() p3 = MockPopulation() a1 = Assembly(p1, p2) a2 = Assembly(p2, p3) a3 = a1 + a2 assert_equal(a3.populations, [p1, p2, p3]) # or do we want [p1, p2, p3]? 
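# the in-place variants below ("a += x") should extend the population list of
# the existing Assembly, mirroring the copying __add__ tests above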
def test_add_inplace_population(): p1 = MockPopulation() p2 = MockPopulation() a = Assembly(p1) a += p2 assert_equal(a.populations, [p1, p2]) def test_add_inplace_assembly(): p1 = MockPopulation() p2 = MockPopulation() p3 = MockPopulation() a1 = Assembly(p1, p2) a2 = Assembly(p2, p3) a1 += a2 assert_equal(a1.populations, [p1, p2, p3]) def test_add_invalid_object(): p1 = MockPopulation() p2 = MockPopulation() a = Assembly(p1, p2) assert_raises(TypeError, a.__add__, 42) assert_raises(TypeError, a.__iadd__, 42) def test_initialize(): p1 = MockPopulation() p2 = MockPopulation() a = Assembly(p1, p2) a.initialize("v", -54.3) p1.initialize.assert_called_with("v", -54.3) p2.initialize.assert_called_with("v", -54.3) def test_describe(): p1 = MockPopulation() p2 = MockPopulation() a = Assembly(p1, p2) assert isinstance(a.describe(), basestring) assert isinstance(a.describe(template=None), dict) def test_get_population(): p1 = MockPopulation() p1.label = "pop1" p2 = MockPopulation() p2.label = "pop2" a = Assembly(p1, p2) assert_equal(a.get_population("pop1"), p1) assert_equal(a.get_population("pop2"), p2) assert_raises(KeyError, a.get_population, "foo") def test_all_cells(): p1 = MockPopulation() p2 = MockPopulation() p3 = MockPopulation() a = Assembly(p1, p2, p3) assert_equal(a.all_cells.size, p1.all_cells.size + p2.all_cells.size + p3.all_cells.size) assert_equal(a.all_cells[0], p1.all_cells[0]) assert_equal(a.all_cells[-1], p3.all_cells[-1]) assert_arrays_equal(a.all_cells, numpy.append(p1.all_cells, (p2.all_cells, p3.all_cells))) def test_local_cells(): p1 = MockPopulation() p2 = MockPopulation() p3 = MockPopulation() a = Assembly(p1, p2, p3) assert_equal(a.local_cells.size, p1.local_cells.size + p2.local_cells.size + p3.local_cells.size) assert_equal(a.local_cells[0], p1.local_cells[0]) assert_equal(a.local_cells[-1], p3.local_cells[-1]) assert_arrays_equal(a.local_cells, numpy.append(p1.local_cells, (p2.local_cells, p3.local_cells))) def test_mask_local(): p1 = MockPopulation() p2 = MockPopulation() p3 = MockPopulation() a = Assembly(p1, p2, p3) assert_equal(a._mask_local.size, p1._mask_local.size + p2._mask_local.size + p3._mask_local.size) assert_equal(a._mask_local[0], p1._mask_local[0]) assert_equal(a._mask_local[-1], p3._mask_local[-1]) assert_arrays_equal(a._mask_local, numpy.append(p1._mask_local, (p2._mask_local, p3._mask_local))) assert_arrays_equal(a.local_cells, a.all_cells[a._mask_local]) def test_save_positions(): import os orig_rank = common.rank common.rank = lambda: 0 p1 = MockPopulation() p2 = MockPopulation() p1.all_cells = numpy.array([34, 45]) p2.all_cells = numpy.array([56, 67]) p1.positions = numpy.arange(0,6).reshape((2,3)).T p2.positions = numpy.arange(6,12).reshape((2,3)).T a = Assembly(p1, p2, label="test") output_file = Mock() a.save_positions(output_file) assert_arrays_equal(output_file.write.call_args[0][0], numpy.array([[34, 0, 1, 2], [45, 3, 4, 5], [56, 6, 7, 8], [67, 9, 10, 11]])) assert_equal(output_file.write.call_args[0][1], {'assembly': a.label}) # arguably, the first column should contain indices, not ids. 
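    # restore the monkey-patched common.rank so later tests see the real function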
common.rank = orig_rankPyNN-0.7.4/test/unittests/test_files.py0000644000175000017500000000646011774601632020701 0ustar andrewandrew00000000000000from pyNN.recording import files from textwrap import dedent from mock import Mock from nose.tools import assert_equal import numpy import os from pyNN.utility import assert_arrays_equal builtin_open = open def test__savetxt(): mock_file = Mock() files.open = Mock(return_value=mock_file) files._savetxt(filename="dummy_file", data=[(0, 2.3),(1, 3.4),(2, 4.3)], format="%f", delimiter=" ") target = [(('0.000000 2.300000\n',), {}), (('1.000000 3.400000\n',), {}), (('2.000000 4.300000\n',), {})] assert_equal(mock_file.write.call_args_list, target) files.open = builtin_open def test_create_BaseFile(): files.open = Mock() bf = files.BaseFile("filename", 'r') files.open.assert_called_with("filename", "r", files.DEFAULT_BUFFER_SIZE) files.open = builtin_open def test_del(): files.open = Mock() bf = files.BaseFile("filename", 'r') close_mock = Mock() bf.close = close_mock del bf close_mock.assert_called_with() files.open = builtin_open def test_close(): files.open = Mock() bf = files.BaseFile("filename", 'r') bf.close() bf.fileobj.close.assert_called_with() files.open = builtin_open def test_StandardTextFile_write(): files.open = Mock() stf = files.StandardTextFile("filename", "w") data=[(0, 2.25),(1, 3.5),(2, 4.125)] metadata = {'a': 1, 'b': 9.99} target = [('# a = 1\n# b = 9.99\n',), ('0.0\t2.25\n',), ('1.0\t3.5\n',), ('2.0\t4.125\n',)] stf.write(data, metadata) assert_equal([call[0] for call in stf.fileobj.write.call_args_list], target) files.open = builtin_open def test_StandardTextFile_read(): files.open = Mock() stf = files.StandardTextFile("filename", "w") orig_loadtxt = numpy.loadtxt numpy.loadtxt = Mock() stf.read() numpy.loadtxt.assert_called_with(stf.fileobj) numpy.loadtxt = orig_loadtxt files.open = builtin_open def test_PickleFile(): pf = files.PickleFile("tmp.pickle", "w") data=[(0, 2.3),(1, 3.4),(2, 4.3)] metadata = {'a': 1, 'b': 9.99} pf.write(data, metadata) pf.close() pf = files.PickleFile("tmp.pickle", "r") assert_equal(pf.get_metadata(), metadata) assert_equal(pf.read(), data) pf.close() os.remove("tmp.pickle") def test_NumpyBinaryFile(): nbf = files.NumpyBinaryFile("tmp.npz", "w") data=[(0, 2.3),(1, 3.4),(2, 4.3)] metadata = {'a': 1, 'b': 9.99} nbf.write(data, metadata) nbf.close() nbf = files.NumpyBinaryFile("tmp.npz", "r") assert_equal(nbf.get_metadata(), metadata) assert_arrays_equal(nbf.read().flatten(), numpy.array(data).flatten()) nbf.close() os.remove("tmp.npz") def test_HDF5ArrayFile(): if files.have_hdf5: h5f = files.HDF5ArrayFile("tmp.h5", "w") data=[(0, 2.3),(1, 3.4),(2, 4.3)] metadata = {'a': 1, 'b': 9.99} h5f.write(data, metadata) h5f.close() h5f = files.HDF5ArrayFile("tmp.h5", "r") assert_equal(h5f.get_metadata(), metadata) assert_arrays_equal(numpy.array(h5f.read()).flatten(), numpy.array(data).flatten()) h5f.close() os.remove("tmp.h5") PyNN-0.7.4/test/unittests/test_standardmodels.py0000644000175000017500000002106311736323051022571 0ustar andrewandrew00000000000000from pyNN.standardmodels import build_translations, StandardModelType, \ SynapseDynamics, STDPMechanism, \ STDPWeightDependence, STDPTimingDependence from pyNN import errors from nose.tools import assert_equal, assert_raises from mock import Mock import numpy def test_build_translations(): t = build_translations( ('a', 'A'), ('b', 'B', 1000.0), ('c', 'C', 'c + a', 'C - A') ) assert_equal(set(t.keys()), set(['a', 'b', 'c'])) assert_equal(set(t['a'].keys()), 
set(['translated_name', 'forward_transform', 'reverse_transform'])) assert_equal(t['a']['translated_name'], 'A') assert_equal(t['a']['forward_transform'], 'a') assert_equal(t['a']['reverse_transform'], 'A') assert_equal(t['b']['translated_name'], 'B') assert_equal(t['b']['forward_transform'], 'float(1000)*b') assert_equal(t['b']['reverse_transform'], 'B/float(1000)') assert_equal(t['c']['translated_name'], 'C') assert_equal(t['c']['forward_transform'], 'c + a') assert_equal(t['c']['reverse_transform'], 'C - A') ##test StandardModelType def test_has_parameter(): M = StandardModelType M.default_parameters = {'a': 22.2, 'b': 33.3} assert M.has_parameter('a') assert M.has_parameter('b') assert not M.has_parameter('z') def test_get_parameter_names(): M = StandardModelType M.default_parameters = {'a': 22.2, 'b': 33.3} assert_equal(set(M.get_parameter_names()), set(['a', 'b'])) def test_instantiate(): """ Instantiating a StandardModelType should set self.parameters to the value of translate(checkParameters(parameters)). """ M = StandardModelType M.default_parameters = {'a': 0.0, 'b': 0.0} M.translations = {'a': None, 'b': None} P1 = {'a': 22.2, 'b': 33.3} P2 = {'A': 22.2, 'B': 333} orig_checkParameters = M.checkParameters orig_translate = M.translate M.checkParameters = Mock(return_value=P2) M.translate = Mock() m = M(P1) assert isinstance(m.parameters, Mock) M.checkParameters.assert_called_with(P1, with_defaults=True) M.translate.assert_called_with(P2) M.checkParameters = orig_checkParameters M.translate = orig_translate def test_checkParameters_without_defaults(): M = StandardModelType M.default_parameters = {'a': 22.2, 'b': 33.3, 'c': [1, 2, 3], 'd': 'hello'} assert_equal(M.checkParameters({'a': 11, 'c': [4, 5, 6], 'd': 'goodbye'}), {'a': 11.0,'c': [4, 5, 6], 'd': 'goodbye'}) def test_checkParameters_with_defaults(): M = StandardModelType M.default_parameters = {'a': 22.2, 'b': 33.3, 'c': [1, 2, 3]} assert_equal(M.checkParameters({'a': 11, 'c': [4, 5, 6]}, with_defaults=True), {'a': 11.0, 'b': 33.3, 'c': [4, 5, 6]}) def test_checkParameters_with_nonexistent_parameter(): M = StandardModelType M.default_parameters = {'a': 22.2, 'b': 33.3, 'c': [1, 2, 3]} assert_raises(errors.NonExistentParameterError, M.checkParameters, {'a': 11.1, 'z': 99.9}) def test_checkParameters_with_invalid_value(): M = StandardModelType M.default_parameters = {'a': 22.2, 'b': 33.3, 'c': [1, 2, 3], 'd': 'hello'} assert_raises(errors.InvalidParameterValueError, M.checkParameters, {'a': 11.1, 'b': [4,3,2]}) assert_raises(errors.InvalidParameterValueError, M.checkParameters, {'a': 11.1, 'c': 12.3}) assert_raises(errors.InvalidParameterValueError, M.checkParameters, {'a': 11.1, 'd': 12.3}) def test_translate(): M = StandardModelType M.default_parameters = {'a': 22.2, 'b': 33.3, 'c': 44.4} M.translations = build_translations( ('a', 'A'), ('b', 'B', 1000.0), ('c', 'C', 'c + a', 'C - A'), ) assert_equal(M.translate({'a': 23.4, 'b': 34.5, 'c': 45.6}), {'A': 23.4, 'B': 34500.0, 'C': 69.0}) def test_translate_with_invalid_transformation(): M = StandardModelType M.translations = build_translations( ('a', 'A'), ('b', 'B', 'b + z', 'B-Z'), ) M.default_parameters = {'a': 22.2, 'b': 33.3} #really we should trap such errors in build_translations(), not in translate() assert_raises(NameError, M.translate, {'a': 23.4, 'b': 34.5}) def test_translate_with_divide_by_zero_error(): M = StandardModelType M.default_parameters = {'a': 22.2, 'b': 33.3} M.translations = build_translations( ('a', 'A'), ('b', 'B', 'b/0', 'B*0'), ) 
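    # translate() evaluates each forward_transform expression ('b/0' here) with
    # the supplied parameter values, so the division by zero surfaces at
    # translation time rather than at simulation time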
assert_raises(ZeroDivisionError, M.translate, {'a': 23.4, 'b': 34.5}) def test_reverse_translate(): M = StandardModelType M.default_parameters = {'a': 22.2, 'b': 33.3, 'c': 44.4} M.translations = build_translations( ('a', 'A'), ('b', 'B', 1000.0), ('c', 'C', 'c + a', 'C - A'), ) assert_equal(M.reverse_translate({'A': 23.4, 'B': 34500.0, 'C': 69.0}), {'a': 23.4, 'b': 34.5, 'c': 45.6}) def test_reverse_translate_with_invalid_transformation(): M = StandardModelType M.translations = build_translations( ('a', 'A'), ('b', 'B', 'b + z', 'B-Z'), ) M.default_parameters = {'a': 22.2, 'b': 33.3} #really we should trap such errors in build_translations(), not in reverse_translate() assert_raises(NameError, M.reverse_translate, {'A': 23.4, 'B': 34.5}) def test_simple_parameters(): M = StandardModelType M.default_parameters = {'a': 22.2, 'b': 33.3, 'c': 44.4} M.translations = build_translations( ('a', 'A'), ('b', 'B', 1000.0), ('c', 'C', 'c + a', 'C - A'), ) assert_equal(M.simple_parameters(), ['a']) def test_scaled_parameters(): M = StandardModelType M.default_parameters = {'a': 22.2, 'b': 33.3, 'c': 44.4} M.translations = build_translations( ('a', 'A'), ('b', 'B', 1000.0), ('c', 'C', 'c + a', 'C - A'), ) assert_equal(M.scaled_parameters(), ['b']) def test_computed_parameters(): M = StandardModelType M.default_parameters = {'a': 22.2, 'b': 33.3, 'c': 44.4} M.translations = build_translations( ('a', 'A'), ('b', 'B', 1000.0), ('c', 'C', 'c + a', 'C - A'), ) assert_equal(M.computed_parameters(), ['c']) def test_update_parameters(): """ update_parameters(P) should update self.parameters with the value of translate(P) """ M = StandardModelType orig_checkParameters = M.checkParameters orig_translate = M.translate M.checkParameters = Mock() P = {'a': 3} M.translate = Mock(return_value=P) m = M({}) m.parameters = Mock() m.update_parameters({}) m.parameters.update.assert_called_with(P) M.checkParameters = orig_checkParameters M.translate = orig_translate def test_describe(): M = StandardModelType M.default_parameters = {'a': 22.2, 'b': 33.3, 'c': 44.4} M.translations = build_translations( ('a', 'A'), ('b', 'B', 1000.0), ('c', 'C', 'c + a', 'C - A'), ) m = M({}) assert isinstance(m.describe(), basestring) # test StandardCellType ## test SynapseDynamics # test create def test_describe_SD(): sd = SynapseDynamics() assert isinstance(sd.describe(), basestring) assert isinstance(sd.describe(template=None), dict) ## test ShortTermPlasticityMechanism def test_STDPMechanism_create(): STDPTimingDependence.__init__ = Mock(return_value=None) STDPWeightDependence.__init__ = Mock(return_value=None) td = STDPTimingDependence() wd = STDPWeightDependence() stdp = STDPMechanism(td, wd, None, 0.5) assert_equal(stdp.timing_dependence, td) assert_equal(stdp.weight_dependence, wd) assert_equal(stdp.voltage_dependence, None) assert_equal(stdp.dendritic_delay_fraction, 0.5) def test_STDPMechanism_create_invalid_types(): assert_raises(AssertionError, # probably want a more informative error STDPMechanism, timing_dependence="abc") assert_raises(AssertionError, # probably want a more informative error STDPMechanism, weight_dependence="abc") assert_raises(AssertionError, # probably want a more informative error STDPMechanism, dendritic_delay_fraction = "abc") assert_raises(AssertionError, # probably want a more informative error STDPMechanism, dendritic_delay_fraction = "1.1") ## test STDPWeightDependence ## test STDPTimingDependence PyNN-0.7.4/test/unittests/test_space.py0000644000175000017500000002520311736323051020660 0ustar 
andrewandrew00000000000000from pyNN import space import unittest import numpy from mock import Mock from nose.tools import assert_equal, assert_raises from pyNN.utility import assert_arrays_equal from math import sqrt def assert_arrays_almost_equal(a, b, threshold, msg=''): if a.shape != b.shape: raise unittest.TestCase.failureException("Shape mismatch: a.shape=%s, b.shape=%s" % (a.shape, b.shape)) if not (abs(a-b) < threshold).all(): err_msg = "%s != %s" % (a, b) err_msg += "\nlargest difference = %g" % abs(a-b).max() if msg: err_msg += "\nOther information: %s" % msg raise unittest.TestCase.failureException(err_msg) def test_distance(): cell1 = Mock() cell2 = Mock() A = lambda *x: numpy.array(x) cell1.position = A(2.3, 4.5, 6.7) cell2.position = A(2.3, 4.5, 6.7) assert_equal(space.distance(cell1, cell2), 0.0) cell2.position = A(5.3, 4.5, 6.7) assert_equal(space.distance(cell1, cell2), 3.0) cell2.position = A(5.3, 8.5, 6.7) assert_equal(space.distance(cell1, cell2), 5.0) cell2.position = A(5.3, 8.5, -5.3) assert_equal(space.distance(cell1, cell2), 13.0) assert_equal(space.distance(cell1, cell2, mask=A(0,1)), 5.0) assert_equal(space.distance(cell1, cell2, mask=A(2)), 12.0) assert_equal(space.distance(cell1, cell2, offset=A(-3.0, -4.0, 12.0)), 0.0) cell2.position = A(10.6, 17.0, -10.6) assert_equal(space.distance(cell1, cell2, scale_factor=0.5), 13.0) cell2.position = A(-1.7, 8.5, -5.3) assert_equal(space.distance(cell1, cell2, periodic_boundaries=A(7.0, 1e12, 1e12)), 13.0) class SpaceTest(unittest.TestCase): def setUp(self): N = numpy.array self.A = N([0.0, 0.0, 0.0]) self.B = N([1.0, 1.0, 1.0]) self.C = N([-1.0, -1.0, -1.0]) self.D = N([2.0, 3.0, 4.0]) self.ABCD = N([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [-1.0, -1.0, -1.0], [2.0, 3.0, 4.0]]).T def assertArraysEqual(self, A, B): self.assert_((A==B).all(), "%s != %s" % (A,B)) def test_infinite_space_with_3D_distances(self): s = space.Space() self.assertEqual(s.distances(self.A, self.B), sqrt(3)) self.assertEqual(s.distances(self.C, self.B), sqrt(12)) self.assertArraysEqual(s.distances(self.A, self.ABCD), numpy.array([0.0, sqrt(3), sqrt(3), sqrt(29)])) self.assertArraysEqual(s.distances(self.A, self.ABCD), s.distances(self.ABCD, self.A).T) assert_arrays_equal(s.distances(self.ABCD, self.ABCD), numpy.array([(0.0, sqrt(3), sqrt(3), sqrt(29)), (sqrt(3), 0.0, sqrt(12), sqrt(14)), (sqrt(3), sqrt(12), 0.0, sqrt(50.0)), (sqrt(29), sqrt(14), sqrt(50.0), 0.0)])) def test_generator_for_infinite_space_with_3D_distances(self): s = space.Space() f = lambda i: self.ABCD[:,i] g = lambda j: self.ABCD[:,j] self.assertArraysEqual(s.distance_generator(f, g)(0, numpy.arange(4)), numpy.array([0.0, sqrt(3), sqrt(3), sqrt(29)])) assert_arrays_equal(s.distance_generator(f, g)(numpy.arange(4), numpy.arange(4)), numpy.array([(0.0, sqrt(3), sqrt(3), sqrt(29)), (sqrt(3), 0.0, sqrt(12), sqrt(14)), (sqrt(3), sqrt(12), 0.0, sqrt(50.0)), (sqrt(29), sqrt(14), sqrt(50.0), 0.0)])) def test_infinite_space_with_collapsed_axes(self): s_x = space.Space(axes='x') s_xy = space.Space(axes='xy') s_yz = space.Space(axes='yz') self.assertEqual(s_x.distances(self.A, self.B), 1.0) self.assertEqual(s_xy.distances(self.A, self.B), sqrt(2)) self.assertEqual(s_x.distances(self.A, self.D), 2.0) self.assertEqual(s_xy.distances(self.A, self.D), sqrt(13)) self.assertEqual(s_yz.distances(self.A, self.D), sqrt(25)) self.assertArraysEqual(s_yz.distances(self.D, self.ABCD), numpy.array([sqrt(25), sqrt(13), sqrt(41), sqrt(0)])) def test_infinite_space_with_scale_and_offset(self): s = 
space.Space(scale_factor=2.0, offset=1.0) self.assertEqual(s.distances(self.A, self.B), sqrt(48)) self.assertEqual(s.distances(self.B, self.A), sqrt(3)) self.assertEqual(s.distances(self.C, self.B), sqrt(75)) self.assertEqual(s.distances(self.B, self.C), sqrt(3)) self.assertArraysEqual(s.distances(self.A, self.ABCD), numpy.array([sqrt(12), sqrt(48), sqrt(0), sqrt(200)])) def test_cylindrical_space(self): s = space.Space(periodic_boundaries=((-1.0, 4.0), (-1.0, 4.0), (-1.0, 4.0))) self.assertEqual(s.distances(self.A, self.B), sqrt(3)) self.assertEqual(s.distances(self.A, self.D), sqrt(4+4+1)) self.assertEqual(s.distances(self.C, self.D), sqrt(4+1+0)) self.assertArraysEqual(s.distances(self.A, self.ABCD), numpy.array([0.0, sqrt(3), sqrt(3), sqrt(4+4+1)])) self.assertArraysEqual(s.distances(self.A, self.ABCD), s.distances(self.ABCD, self.A).T) self.assertArraysEqual(s.distances(self.C, self.ABCD), numpy.array([sqrt(3), sqrt(4+4+4), 0.0, sqrt(4+1+0)])) class LineTest(unittest.TestCase): def test_generate_positions_default_parameters(self): line = space.Line() n = 4 positions = line.generate_positions(n) assert_equal(positions.shape, (3,n)) assert_arrays_almost_equal( positions, numpy.array([[0,0,0], [1,0,0], [2,0,0], [3,0,0]], float).T, threshold=1e-15 ) def test_generate_positions(self): line = space.Line(dx=100.0, x0=-100.0, y=444.0, z=987.0) n = 2 positions = line.generate_positions(n) assert_equal(positions.shape, (3,n)) assert_arrays_almost_equal( positions, numpy.array([[-100,444,987], [0,444,987]], float).T, threshold=1e-15 ) def test__eq__(self): line1 = space.Line() line2 = space.Line(1.0, 0.0, 0.0, 0.0) line3 = space.Line(dx=2.0) assert_equal(line1, line2) assert line1 != line3 def test_get_parameters(self): params = dict(dx=100.0, x0=-100.0, y=444.0, z=987.0) line = space.Line(**params) assert_equal(line.get_parameters(), params) class Grid2D_Test(object): def setup(self): self.grid1 = space.Grid2D() self.grid2 = space.Grid2D(aspect_ratio=3.0, dx=11.1, dy=9.9, x0=123, y0=456, z=789) def test_calculate_size(self): assert_equal(self.grid1.calculate_size(n=1), (1,1)) assert_equal(self.grid1.calculate_size(n=4), (2,2)) assert_equal(self.grid1.calculate_size(n=9), (3,3)) assert_raises(Exception, self.grid1.calculate_size, n=10) assert_equal(self.grid2.calculate_size(n=3), (3,1)) assert_equal(self.grid2.calculate_size(n=12), (6,2)) assert_equal(self.grid2.calculate_size(n=27), (9,3)) assert_raises(Exception, self.grid2.calculate_size, n=4) def test_generate_positions(self): n = 4 positions = self.grid1.generate_positions(n) assert_equal(positions.shape, (3,n)) assert_arrays_almost_equal( positions, numpy.array([ [0,0,0], [0,1,0], [1,0,0], [1,1,0] ]).T, 1e-15) assert_arrays_almost_equal( self.grid2.generate_positions(12), numpy.array([ [123,456,789], [123,465.9,789], [123+11.1,456,789], [123+11.1,465.9,789], [123+22.2,456,789], [123+22.2,465.9,789], [123+33.3,456,789], [123+33.3,465.9,789], [123+44.4,456,789], [123+44.4,465.9,789], [123+55.5,456,789], [123+55.5,465.9,789], ]).T, 1e-15) class Grid3D_Test(object): def setup(self): self.grid1 = space.Grid3D() self.grid2 = space.Grid3D(aspect_ratioXY=3.0, aspect_ratioXZ=2.0, dx=11, dy=9, dz=7, x0=123, y0=456, z0=789) def test_calculate_size(self): assert_equal(self.grid1.calculate_size(n=1), (1,1,1)) assert_equal(self.grid1.calculate_size(n=8), (2,2,2)) assert_equal(self.grid1.calculate_size(n=27), (3,3,3)) assert_raises(Exception, self.grid1.calculate_size, n=10) assert_equal(self.grid2.calculate_size(n=36), (6,2,3)) 
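        # Worked example (a sketch; assumes calculate_size solves for integer
        # (nx, ny, nz) with nx/ny = aspect_ratioXY, nx/nz = aspect_ratioXZ and
        # nx*ny*nz = n): for grid2, aspect_ratioXY=3.0 and aspect_ratioXZ=2.0,
        # so n=36 gives nx**3/6 = 36, i.e. nx=6, ny=2, nz=3, matching (6,2,3)
        # above, and n=288 gives nx=12, ny=4, nz=6 in the case below.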
assert_equal(self.grid2.calculate_size(n=288), (12,4,6)) assert_raises(Exception, self.grid2.calculate_size, n=100) def test_generate_positions(self): n = 8 positions = self.grid1.generate_positions(n) assert_equal(positions.shape, (3,n)) assert_arrays_almost_equal( positions, numpy.array([ [0,0,0], [0,0,1], [0,1,0], [0,1,1], [1,0,0], [1,0,1], [1,1,0], [1,1,1] ]).T, 1e-15) class TestSphere(object): def test__create(self): s = space.Sphere(2.5) assert_equal(s.radius, 2.5) def test_sample(self): n = 1000 s = space.Sphere(2.5) positions = s.sample(n, numpy.random) assert_equal(positions.shape, (n,3)) for axis in range(2): assert 1 < max(positions[:,axis]) < 2.5 assert -1 > min(positions[:,axis]) > -2.5 s2 = numpy.sum(positions**2, axis=1) assert max(s2) < 6.25 class TestCuboid(object): def test_sample(self): n = 1000 c = space.Cuboid(3, 4, 5) positions = c.sample(n, numpy.random) assert_equal(positions.shape, (n,3)) assert 1 < max(positions[:,0]) < 1.5, max(positions[:,0]) assert -1 > min(positions[:,0]) > -1.5 assert -1.5 > min(positions[:,1]) > -2.0 assert -2 > min(positions[:,2]) > -2.5 class TestRandomStructure(object): def test_generate_positions(self): n = 1000 s = space.Sphere(2.5) rs = space.RandomStructure(boundary=s, origin=(1.0, 1.0, 1.0)) positions = rs.generate_positions(n) assert_equal(positions.shape, (3,n)) for axis in range(2): assert 3 < max(positions[axis,:]) < 3.5 assert -1 > min(positions[axis,:]) > -1.5 PyNN-0.7.4/test/unittests/__init__.py0000644000175000017500000000000011736323051020251 0ustar andrewandrew00000000000000PyNN-0.7.4/test/unittests/test_core.py0000644000175000017500000002111111736323051020507 0ustar andrewandrew00000000000000from pyNN.core import is_listlike, LazyArray from pyNN import random import numpy from nose.tools import assert_raises, assert_equal from pyNN.utility import assert_arrays_equal, assert_arrays_almost_equal import operator class MockRNG(random.WrappedRNG): rng = None def __init__(self, parallel_safe): random.WrappedRNG.__init__(self, parallel_safe=parallel_safe) self.start = 0.0 def _next(self, distribution, n, parameters): s = self.start self.start += n*0.1 return numpy.arange(s, s+n*0.1, 0.1) def test_is_list_like_with_tuple(): assert is_listlike((1,2,3)) def test_is_list_like_with_list(): assert is_listlike([1,2,3]) def test_is_list_like_with_iterator(): assert not is_listlike(iter((1,2,3))) def test_is_list_like_with_set(): assert is_listlike(set((1,2,3))) def test_is_list_like_with_numpy_array(): assert is_listlike(numpy.arange(10)) def test_is_list_like_with_string(): assert not is_listlike("abcdefg") #def test_is_list_like_with_file(): # f = file() # assert not is_listlike(f) # test LazyArray def test_create_with_int(): A = LazyArray(3, shape=(5,)) assert A.shape == (5,) assert A.value == 3 def test_create_with_float(): A = LazyArray(3.0, shape=(5,)) assert A.shape == (5,) assert A.value == 3.0 def test_create_with_list(): A = LazyArray([1,2,3], shape=(3,)) assert A.shape == (3,) assert_arrays_equal(A.value, numpy.array([1,2,3])) def test_create_with_array(): A = LazyArray(numpy.array([1,2,3]), shape=(3,)) assert A.shape == (3,) assert_arrays_equal(A.value, numpy.array([1,2,3])) def test_create_inconsistent(): assert_raises(AssertionError, LazyArray, [1,2,3], shape=4) def test_create_with_string(): assert_raises(AssertionError, LazyArray, "123", shape=3) def test_setitem_nonexpanded_same_value(): A = LazyArray(3, shape=(5,)) assert A.value == 3 A[0] = 3 assert A.value == 3 def test_setitem_invalid_value(): A = LazyArray(3, 
shape=(5,)) assert_raises(TypeError, A, "abc") def test_setitem_nonexpanded_different_value(): A = LazyArray(3, shape=(5,)) assert A.value == 3 A[0] = 4; A[4] = 5 assert_arrays_equal(A.value, numpy.array([4, 3, 3, 3, 5])) def test_columnwise_iteration_with_flat_array(): m = LazyArray(5, shape=(4,3)) # 4 rows, 3 columns cols = [col for col in m.by_column()] assert_equal(cols, [5, 5, 5]) def test_columnwise_iteration_with_structured_array(): input = numpy.arange(12).reshape((4,3)) m = LazyArray(input, shape=(4,3)) # 4 rows, 3 columns cols = [col for col in m.by_column()] assert_arrays_equal(cols[0], input[:,0]) assert_arrays_equal(cols[2], input[:,2]) def test_columnwise_iteration_with_random_array_parallel_safe_no_mask(): random.mpi_rank=0; random.num_processes=2 input = random.RandomDistribution(rng=MockRNG(parallel_safe=True)) copy_input = random.RandomDistribution(rng=MockRNG(parallel_safe=True)) m = LazyArray(input, shape=(4,3)) cols = [col for col in m.by_column()] assert_arrays_equal(cols[0], copy_input.next(4, mask_local=False)) assert_arrays_equal(cols[1], copy_input.next(4, mask_local=False)) assert_arrays_equal(cols[2], copy_input.next(4, mask_local=False)) def test_columnwise_iteration_with_function(): input = lambda i,j: 2*i + j m = LazyArray(input, shape=(4,3)) cols = [col for col in m.by_column()] assert_arrays_equal(cols[0], numpy.array([0, 2, 4, 6])) assert_arrays_equal(cols[1], numpy.array([1, 3, 5, 7])) assert_arrays_equal(cols[2], numpy.array([2, 4, 6, 8])) def test_columnwise_iteration_with_flat_array_and_mask(): m = LazyArray(5, shape=(4,3)) # 4 rows, 3 columns mask = numpy.array([True, False, True]) cols = [col for col in m.by_column(mask=mask)] assert_equal(cols, [5, 5]) def test_columnwise_iteration_with_structured_array_and_mask(): input = numpy.arange(12).reshape((4,3)) m = LazyArray(input, shape=(4,3)) # 4 rows, 3 columns mask = numpy.array([False, True, True]) cols = [col for col in m.by_column(mask=mask)] assert_arrays_equal(cols[0], input[:,1]) assert_arrays_equal(cols[1], input[:,2]) def test_columnwise_iteration_with_random_array_parallel_safe_with_mask(): random.mpi_rank=0; random.num_processes=2 input = random.RandomDistribution(rng=MockRNG(parallel_safe=True)) copy_input = random.RandomDistribution(rng=MockRNG(parallel_safe=True)) m = LazyArray(input, shape=(4,3)) mask = numpy.array([False, False, True]) cols = [col for col in m.by_column(mask=mask)] assert_equal(len(cols), 1) assert_arrays_almost_equal(cols[0], copy_input.next(12, mask_local=False)[8:], 1e-15) def test_as_array_with_flat_array(): m = LazyArray(5, shape=(4,3)) assert_arrays_equal(m.as_array(), 5*numpy.ones((4,3))) def test_as_array_with_structured_array(): input = numpy.arange(12).reshape((4,3)) m = LazyArray(input, shape=(4,3)) assert_arrays_equal(m.as_array(), input) def test_as_array_with_functional_array(): input = lambda i,j: 2*i + j m = LazyArray(input, shape=(4,3)) assert_arrays_equal(m.as_array(), numpy.array([[0, 1, 2], [2, 3, 4], [4, 5, 6], [6, 7, 8]])) def test_iadd_with_flat_array(): m = LazyArray(5, shape=(4,3)) m += 2 assert_arrays_equal(m.as_array(), 7*numpy.ones((4,3))) assert_equal(m.base_value, 5) assert_equal(m.value, 7) def test_add_with_flat_array(): m0 = LazyArray(5, shape=(4,3)) m1 = m0 + 2 assert_equal(m1.value, 7) assert_equal(m0.value, 5) def test_lt_with_flat_array(): m0 = LazyArray(5, shape=(4,3)) m1 = m0 < 10 assert_equal(m1.value, True) assert_equal(m0.value, 5) def test_lt_with_structured_array(): input = numpy.arange(12).reshape((4,3)) m0 = LazyArray(input, 
                   shape=(4,3))
    m1 = m0 < 5
    assert_arrays_equal(m1.value, input < 5)

def test_structured_array_lt_array():
    input = numpy.arange(12).reshape((4,3))
    m0 = LazyArray(input, shape=(4,3))
    comparison = 5*numpy.ones((4,3))
    m1 = m0 < comparison
    assert_arrays_equal(m1.value, input < comparison)

def test_multiple_operations_with_structured_array():
    input = numpy.arange(12).reshape((4,3))
    m0 = LazyArray(input, shape=(4,3))
    m1 = (m0 + 2) < 5
    m2 = (m0 < 5) + 2
    assert_arrays_equal(m1.value, (input+2) < 5)
    assert_arrays_equal(m2.value, (input<5) + 2)
    assert_arrays_equal(m0.value, input)

def test_apply_function_to_constant_array():
    f = lambda m: 2*m + 3
    m0 = LazyArray(5, shape=(4,3))
    m1 = f(m0)
    assert isinstance(m1, LazyArray)
    assert_equal(m1.value, 13)
    # the following tests the internals, not the behaviour
    # it is just to check I understand what's going on
    assert_equal(m1.operations, [(operator.mul, 2), (operator.add, 3)])

def test_apply_function_to_structured_array():
    f = lambda m: 2*m + 3
    input = numpy.arange(12).reshape((4,3))
    m0 = LazyArray(input, shape=(4,3))
    m1 = f(m0)
    assert isinstance(m1, LazyArray)
    assert_arrays_equal(m1.value, input*2 + 3)

def test_apply_function_to_functional_array():
    input = lambda i,j: 2*i + j
    m0 = LazyArray(input, shape=(4,3))
    f = lambda m: 2*m + 3
    m1 = f(m0)
    assert_arrays_equal(m1.as_array(),
                        numpy.array([[3, 5, 7],
                                     [7, 9, 11],
                                     [11, 13, 15],
                                     [15, 17, 19]]))

def test_add_two_constant_arrays():
    m0 = LazyArray(5, shape=(4,3))
    m1 = LazyArray(7, shape=(4,3))
    m2 = m0 + m1
    assert_equal(m2.value, 12)
    # the following tests the internals, not the behaviour
    # it is just to check I understand what's going on
    assert_equal(m2.base_value, m0.base_value)
    assert_equal(m2.operations, [(operator.add, m1)])

def test_add_incommensurate_arrays():
    m0 = LazyArray(5, shape=(4,3))
    m1 = LazyArray(7, shape=(5,3))
    assert_raises(ValueError, m0.__add__, m1)

def test_getitem_from_constant_array():
    m = LazyArray(3, shape=(4,3))
    assert m[0,0] == m[3,2] == m[-1,2] == m[-4,2] == m[2,-3] == 3
    assert_raises(IndexError, m.__getitem__, (4,0))
    assert_raises(IndexError, m.__getitem__, (2,-4))

def test_getitem_from_structured_array():
    m = LazyArray(3*numpy.ones((4,3)), shape=(4,3))
    assert m[0,0] == m[3,2] == m[-1,2] == m[-4,2] == m[2,-3] == 3
    assert_raises(IndexError, m.__getitem__, (4,0))
    assert_raises(IndexError, m.__getitem__, (2,-4))
PyNN-0.7.4/test/unittests/test_lowlevelapi.py0000644000175000017500000000461111736323051022110 0ustar andrewandrew00000000000000from pyNN import common
from mock import Mock
from inspect import isfunction
from nose.tools import assert_equal

def test_build_create():
    population_class = Mock()
    create_function = common.build_create(population_class)
    assert isfunction(create_function)
    p = create_function("cell class", "cell params", n=999)
    population_class.assert_called_with(999, "cell class", "cell params")

def test_build_connect():
    projection_class = Mock()
    connector_class = Mock(return_value="connector")
    connect_function = common.build_connect(projection_class, connector_class)
    assert isfunction(connect_function)
    prj = connect_function("source", "target", "weight", "delay", "synapse_type", "p", "rng")
    connector_class.assert_called_with(p_connect="p", weights="weight", delays="delay")
    projection_class.assert_called_with("source", "target", "connector", target="synapse_type", rng="rng")

    class MockID(common.IDMixin):
        def as_view(self):
            return "view"

    prj = connect_function(MockID(), MockID(), "weight", "delay", "synapse_type", "p", "rng")
    projection_class.assert_called_with("view", "view", 
"connector", target="synapse_type", rng="rng") def test_set(): cells = common.BasePopulation() cells.set = Mock() common.set(cells, "param", "val") cells.set.assert_called_with("param", "val") def test_build_record(): simulator = Mock() simulator.recorder_list = [] record_function = common.build_record("foo", simulator) assert isfunction(record_function) source = common.BasePopulation() source._record = Mock() source.recorders = {'foo': Mock()} record_function(source, "filename") source._record.assert_called_with("foo", to_file="filename") assert_equal(simulator.recorder_list, [source.recorders['foo']]) def test_build_record_with_assembly(): simulator = Mock() simulator.recorder_list = [] record_function = common.build_record("foo", simulator) assert isfunction(record_function) p1 = common.BasePopulation() p2 = common.BasePopulation() source = common.Assembly(p1, p2) source._record = Mock() for p in p1, p2: p.recorders = {'foo': Mock()} record_function(source, "filename") source._record.assert_called_with("foo", to_file="filename") assert_equal(simulator.recorder_list, [p1.recorders['foo'], p2.recorders['foo']]) PyNN-0.7.4/test/unittests/test_idmixin.py0000644000175000017500000001055011736323051021225 0ustar andrewandrew00000000000000from pyNN import common, errors, standardmodels from nose.tools import assert_equal, assert_raises class MockStandardCell(standardmodels.StandardCellType): default_parameters = { 'a': 20.0, 'b': -34.9, 'c': 2.2, } translations = standardmodels.build_translations(('a', 'A'), ('b', 'B'), ('c', 'C', 'c + a', 'C - A')) class MockNativeCell(object): @classmethod def has_parameter(cls, name): return False @classmethod def get_parameter_names(cls): return [] class MockPopulation(object): def __init__(self, standard): if standard: self.celltype = MockStandardCell({}) else: self.celltype = MockNativeCell() self._is_local_called = False self._positions = {} self._initial_values = {} def id_to_index(self, id): return 1234 def is_local(self, id): self._is_local_called = True return True def _set_cell_position(self, id, pos): self._positions[id] = pos def _get_cell_position(self, id): return (1.2, 3.4, 5.6) def _get_cell_initial_value(self, id, variable): return -65.0 def _set_cell_initial_value(self, id, variable, value): self._initial_values[id] = (variable, value) class MockID(common.IDMixin): def __init__(self, standard_cell): self.parent = MockPopulation(standard=standard_cell) self.foo = "bar" self._parameters = {'A': 76.5, 'B': 23.4, 'C': 100.0} def get_native_parameters(self): return self._parameters def set_native_parameters(self, parameters): self._parameters.update(parameters) class MockCurrentSource(object): def __init__(self): self._inject_into = [] def inject_into(self, objs): self._inject_into.extend(objs) class Test_IDMixin(): def setup(self): self.id = MockID(standard_cell=True) self.id_ns = MockID(standard_cell=False) def test_getattr_with_parameter_attr(self): assert_equal(self.id.a, 76.5) assert_equal(self.id_ns.A, 76.5) assert_raises(errors.NonExistentParameterError, self.id.__getattr__, "tau_m") assert_raises(errors.NonExistentParameterError, self.id_ns.__getattr__, "tau_m") def test_getattr_with_nonparameter_attr(self): assert_equal(self.id.foo, "bar") assert_equal(self.id_ns.foo, "bar") def test_getattr_with_parent_not_set(self): del(self.id.parent) assert_raises(Exception, self.id.__getattr__, "parent") def test_setattr_with_parameter_attr(self): self.id.a = 87.6 self.id_ns.A = 98.7 assert_equal(self.id.a, 87.6) assert_equal(self.id_ns.A, 98.7) def 
test_set_parameters(self): assert_raises(errors.NonExistentParameterError, self.id.set_parameters, hello='world') ##assert_raises(errors.NonExistentParameterError, self.id_ns.set_parameters, hello='world') self.id.set_parameters(a=12.3, c=77.7) assert_equal(self.id._parameters, {'A': 12.3, 'B': 23.4, 'C': 90.0}) def test_get_parameters(self): assert_equal(self.id.get_parameters(), {'a': 76.5, 'b': 23.4, 'c': 23.5}) def test_celltype_property(self): assert_equal(self.id.celltype.__class__, MockStandardCell) assert_equal(self.id_ns.celltype.__class__, MockNativeCell) def test_is_standard_cell(self): assert self.id.is_standard_cell assert not self.id_ns.is_standard_cell def test_position_property(self): for id in (self.id, self.id_ns): assert_equal(id.position, (1.2, 3.4, 5.6)) id.position = (9,8,7) assert_equal(id.parent._positions[id], (9,8,7)) def test_local_property(self): for id in (self.id, self.id_ns): assert id.parent._is_local_called is False assert id.local assert id.parent._is_local_called is True def test_inject(self): for id in (self.id, self.id_ns): cs = MockCurrentSource() id.inject(cs) assert_equal(cs._inject_into, [id]) def test_get_initial_value(self): self.id.get_initial_value('v') def test_set_initial_value(self): self.id.set_initial_value('v', -77.7) assert_equal(self.id.parent._initial_values[self.id], ('v', -77.7)) PyNN-0.7.4/test/unittests/test_nest.py0000644000175000017500000000503111736323051020533 0ustar andrewandrew00000000000000import nest import numpy from mock import Mock from nose.tools import assert_equal from pyNN.standardmodels import StandardCellType, build_translations from pyNN.nest import Population from pyNN.nest.cells import NativeCellType from pyNN.common import IDMixin class MockStandardCellType(StandardCellType): default_parameters = { "foo": 99.9, "hoo": 100.0, "woo": 5.0, } default_initial_values = { "v": 0.0, } translations = build_translations( ('foo', 'FOO'), ('hoo', 'HOO', 3.0), ('woo', 'WOO', '2*woo + hoo', '(WOO - HOO)/2'), ) class MockNativeCellType(NativeCellType): default_parameters = { "FOO": 99.9, "HOO": 300.0, "WOO": 112.0, } default_initial_values = { "v": 0.0, } nest_model = "mock_neuron" class MockID(IDMixin): set_parameters = Mock() class TestPopulation(object): def setup(self): self.orig_cc = Population._create_cells self.orig_init = Population.initialize self.orig_ss = nest.SetStatus Population._create_cells = Mock() Population.initialize = Mock() nest.SetStatus = Mock() def teardown(self): Population._create_cells = self.orig_cc Population.initialize = self.orig_init nest.SetStatus = self.orig_ss def test_set_with_standard_celltype(self): p = Population(10, MockStandardCellType) p.all_cells = numpy.array([MockID()]*10, dtype=object) #numpy.arange(10) p._mask_local = numpy.ones((10,), bool) p.set("foo", 32) assert_equal(nest.SetStatus.call_args[0][1], {"FOO": 32.0}) p.set("hoo", 33.0) assert_equal(nest.SetStatus.call_args[0][1], {"HOO": 99.0}) p.set("woo", 6.0) assert_equal(nest.SetStatus.call_args[0][1], {}) p.all_cells[0].set_parameters.assert_called_with(woo=6.0) def test_set_with_native_celltype(self): gd_orig = nest.GetDefaults nest.GetDefaults = Mock(return_value={"FOO": 1.2, "HOO": 3.4, "WOO": 5.6}) p = Population(10, MockNativeCellType) p.all_cells = numpy.array([MockID()]*10, dtype=object) #numpy.arange(10) p._mask_local = numpy.ones((10,), bool) p.set("FOO", 32) assert_equal(nest.SetStatus.call_args[0][1], {"FOO": 32.0}) p.set("HOO", 33.0) assert_equal(nest.SetStatus.call_args[0][1], {"HOO": 33.0}) p.set("WOO", 6.0) 
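        # Sketch of the expected behaviour (an inference from the two tests in
        # this class, not a statement of the pyNN.nest internals): native
        # parameters are passed to nest.SetStatus untranslated, so "WOO" gives
        # {"WOO": 6.0} below, whereas for the standard cell above 'hoo' was
        # scaled (HOO = 3.0*33.0 = 99.0) and the computed parameter 'woo'
        # (WOO = 2*woo + hoo) could not be set in bulk, falling back to a
        # per-cell set_parameters(woo=6.0) call.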
assert_equal(nest.SetStatus.call_args[0][1], {"WOO": 6.0}) nest.GetDefaults = gd_origPyNN-0.7.4/test/unittests/test_utility_functions.py0000644000175000017500000000761611736323051023370 0ustar andrewandrew00000000000000from pyNN import common, errors import numpy from nose.tools import assert_equal, assert_raises from pyNN.utility import assert_arrays_equal MIN_DELAY = 1.23 MAX_DELAY = 999 class MockCell(object): def __init__(self, cellclass, local=True): self.celltype = cellclass() self.local = local def build_cellclass(cb): class MockCellClass(object): conductance_based = cb return MockCellClass class MockSimulator(object): class MockState(object): min_delay = MIN_DELAY max_delay = MAX_DELAY state = MockState() def setup(): common.simulator = MockSimulator def test_is_conductance(): for cb in (True, False): cell = MockCell(build_cellclass(cb)) assert common.is_conductance(cell) == cb def test_is_conductance_with_nonlocal_cell(): cell = MockCell(build_cellclass(True), local=False) assert common.is_conductance(cell) is None def test_check_weight_with_scalar(): assert_equal(4.3, common.check_weight(4.3, 'excitatory', is_conductance=True)) assert_equal(4.3, common.check_weight(4.3, 'excitatory', is_conductance=False)) assert_equal(4.3, common.check_weight(4.3, 'inhibitory', is_conductance=True)) assert_equal(-4.3, common.check_weight(-4.3, 'inhibitory', is_conductance=False)) assert_equal(common.DEFAULT_WEIGHT, common.check_weight(None, 'excitatory', is_conductance=True)) assert_raises(errors.InvalidWeightError, common.check_weight, 4.3, 'inhibitory', is_conductance=False) assert_raises(errors.InvalidWeightError, common.check_weight, -4.3, 'inhibitory', is_conductance=True) assert_raises(errors.InvalidWeightError, common.check_weight, -4.3, 'excitatory', is_conductance=True) assert_raises(errors.InvalidWeightError, common.check_weight, -4.3, 'excitatory', is_conductance=False) def test_check_weight_with_list(): w = range(10) assert_equal(w, common.check_weight(w, 'excitatory', is_conductance=True).tolist()) assert_equal(w, common.check_weight(w, 'excitatory', is_conductance=False).tolist()) assert_equal(w, common.check_weight(w, 'inhibitory', is_conductance=True).tolist()) assert_raises(errors.InvalidWeightError, common.check_weight, w, 'inhibitory', is_conductance=False) w = range(-10,0) assert_equal(w, common.check_weight(w, 'inhibitory', is_conductance=False).tolist()) assert_raises(errors.InvalidWeightError, common.check_weight, w, 'inhibitory', is_conductance=True) assert_raises(errors.InvalidWeightError, common.check_weight, w, 'excitatory', is_conductance=True) assert_raises(errors.InvalidWeightError, common.check_weight, w, 'excitatory', is_conductance=False) w = range(-5,5) assert_raises(errors.InvalidWeightError, common.check_weight, w, 'excitatory', is_conductance=True) assert_raises(errors.InvalidWeightError, common.check_weight, w, 'excitatory', is_conductance=False) assert_raises(errors.InvalidWeightError, common.check_weight, w, 'inhibitory', is_conductance=True) assert_raises(errors.InvalidWeightError, common.check_weight, w, 'inhibitory', is_conductance=False) def test_check_weight_with_NaN(): w = numpy.arange(10.0) w[0] = numpy.nan assert_arrays_equal(w[1:], common.check_weight(w, 'excitatory', is_conductance=True)[1:]) # NaN != NaN by definition def test_check_weight_with_invalid_value(): assert_raises(errors.InvalidWeightError, common.check_weight, "butterflies", 'excitatory', is_conductance=True) def test_check_weight_is_conductance_is_None(): # need to check that a log 
message was created assert_equal(4.3, common.check_weight(4.3, 'excitatory', is_conductance=None)) def test_check_delay(): assert_equal(common.check_delay(None), MIN_DELAY) assert_equal(common.check_delay(2*MIN_DELAY), 2*MIN_DELAY) assert_raises(errors.ConnectionError, common.check_delay, 0.5*MIN_DELAY) assert_raises(errors.ConnectionError, common.check_delay, 2.0*MAX_DELAY) PyNN-0.7.4/test/unittests/test_connectors.py0000644000175000017500000003611311736323051021744 0ustar andrewandrew00000000000000from pyNN import connectors, common, random, errors, space import numpy import os from mock import Mock from nose.tools import assert_equal, assert_raises from pyNN.utility import assert_arrays_equal from itertools import repeat MIN_DELAY = 0.123 MAX_DELAY = 99999 class MockSimulator(object): class MockState(object): min_delay = MIN_DELAY max_delay = MAX_DELAY num_processes = 2 mpi_rank = 1 state = MockState() class MockCell(int): def __init__(self, n): """Create an ID object with numerical value `n`.""" int.__init__(n) self.position = numpy.array([n, 88.8, 99.9]) class MockPre(object): def __init__(self, size): self.size = size self.all_cells = numpy.arange(17, 17+size) # the 17 is just to make sure we make no assumptions about ID values starting at zero self.positions = numpy.array([(i, 88.8, 99.9) for i in self.all_cells]).T self.position_generator = lambda i: self.positions[:,i] def __len__(self): return self.size def all(self): return iter(MockCell(id) for id in self.all_cells) class MockPost(object): def __init__(self, local_mask): self._mask_local = local_mask self.size = local_mask.size # can local mask be an array of indices or a slice? self.all_cells = numpy.arange(79, 79+self.size) self.local_cells = self.all_cells[local_mask] self.positions = numpy.array([(i, 88.8, 99.9) for i in self.all_cells]).T self.position_generator = lambda i: self.positions[:,i] def __len__(self): return self.size class MockConnectionManager(object): def __init__(self): self.connections = [] def connect(self, src, targets, weights, delays): #if src in self.connections: # raise Exception("connect already called with source %s" % src) # no reason why this shouldn't happen, but it doesn't in the current implementation, so I'm being lazy #else: # self.connections[src] = {"targets": targets, # "weights": weights, # "delays": delays} if isinstance(weights, float): weights = repeat(weights) if isinstance(delays, float): delays = repeat(delays) if not hasattr(targets, "__len__"): targets = [targets] for tgt, w, d in zip(targets, weights, delays): self.connections.append((src, tgt, w, d)) def convergent_connect(self, sources, tgt, weights, delays): if isinstance(weights, float): weights = repeat(weights) if isinstance(delays, float): delays = repeat(delays) for src, w, d in zip(sources, weights, delays): self.connections.append((src, tgt, w, d)) class MockRNG(random.WrappedRNG): rng = None def __init__(self, num_processes, delta=1): random.num_processes = num_processes random.WrappedRNG.__init__(self) self.start = 0.0 self.delta = delta def _next(self, distribution, n, parameters): s = self.start self.start += n*self.delta return numpy.arange(s, s+n*self.delta, self.delta) class MockProjection(object): def __init__(self, pre, post): self.pre = pre self.post = post self.connection_manager = MockConnectionManager() self.rng = MockRNG(num_processes=2, delta=0.1) self.synapse_type = 'inhibitory' class TestOneToOneConnector(object): def setup(self): common.simulator = MockSimulator self.prj = MockProjection(MockPre(5), 
MockPost(numpy.array([0,1,0,1,0], dtype=bool))) def test_connect_with_scalar_weights_and_delays(self): C = connectors.OneToOneConnector(weights=5.0, delays=0.5, safe=False) C.progressbar = Mock() C.progression = Mock() C.connect(self.prj) assert_equal(self.prj.connection_manager.connections, [(18, 80, 5.0, 0.5), (20, 82, 5, 0.5)]) def test_connect_with_random_weights(self): rd = random.RandomDistribution(rng=MockRNG(num_processes=2, delta=1.0)) C = connectors.OneToOneConnector(weights=rd, delays=0.5, safe=False) C.progressbar = Mock() C.progression = Mock() C.connect(self.prj) assert_equal(self.prj.connection_manager.connections, [(18, 80, 1.0, 0.5), (20, 82, 3.0, 0.5)]) class TestAllToAllConnector(object): def setup(self): common.simulator = MockSimulator self.prj = MockProjection(MockPre(4), MockPost(numpy.array([0,1,0,1,0], dtype=bool))) def test_connect_with_scalar_weights_and_delays(self): C = connectors.AllToAllConnector(weights=5.0, delays=0.5, safe=False) C.progressbar = Mock() C.progression = Mock() C.connect(self.prj) assert_equal(set(self.prj.connection_manager.connections), set([(17, 80, 5.0, 0.5), (17, 82, 5.0, 0.5), (18, 80, 5.0, 0.5), (18, 82, 5.0, 0.5), (19, 80, 5.0, 0.5), (19, 82, 5.0, 0.5), (20, 80, 5.0, 0.5), (20, 82, 5.0, 0.5)])) def test_connect_with_random_weights_parallel_safe(self): rd = random.RandomDistribution(rng=MockRNG(num_processes=2, delta=1.0)) C = connectors.AllToAllConnector(weights=rd, delays=0.5, safe=False) C.progressbar = Mock() C.progression = Mock() C.connect(self.prj) assert_equal(self.prj.connection_manager.connections, [(17, 80, 1.0, 0.5), (17, 82, 3.0, 0.5), (18, 80, 6.0, 0.5), (18, 82, 8.0, 0.5), (19, 80, 11.0, 0.5), (19, 82, 13.0, 0.5), (20, 80, 16.0, 0.5), (20, 82, 18.0, 0.5)]) def test_connect_with_distance_dependent_weights_parallel_safe(self): d_expr = "d+100" C = connectors.AllToAllConnector(weights=d_expr, delays=0.5, safe=False) C.progressbar = Mock() C.progression = Mock() C.connect(self.prj) assert_equal(self.prj.connection_manager.connections, [(17, 80, 163.0, 0.5), # 100+|17-80| (17, 82, 165.0, 0.5), # 100+|17-82| (18, 80, 162.0, 0.5), # etc. (18, 82, 164.0, 0.5), (19, 80, 161.0, 0.5), (19, 82, 163.0, 0.5), (20, 80, 160.0, 0.5), (20, 82, 162.0, 0.5)]) def test_create_with_delays_None(self): C = connectors.AllToAllConnector(weights=0.1, delays=None) assert_equal(C.weights, 0.1) assert_equal(C.delays, common.get_min_delay()) assert C.safe assert C.allow_self_connections def test_create_with_delays_too_small(self): assert_raises(errors.ConnectionError, connectors.AllToAllConnector, allow_self_connections=True, delays=0.0) def test_create_with_list_delays_too_small(self): assert_raises(errors.ConnectionError, connectors.AllToAllConnector, allow_self_connections=True, delays=[1.0, 1.0, 0.0]) class TestFixedProbabilityConnector(object): def setup(self): common.simulator = MockSimulator self.prj = MockProjection(MockPre(4), MockPost(numpy.array([0,1,0,1,0], dtype=bool))) def test_connect_with_default_args(self): C = connectors.FixedProbabilityConnector(p_connect=0.75) C.progressbar = Mock() C.progression = Mock() C.connect(self.prj) # 20 possible connections. 
Due to the mock RNG, only the # first 8 are created (17, 79), (17, 80), (17,81), (17,82), (17,83), (18,79), (18,80), (18,81) # of these, (17,80), (17,82), (18,80) are created on this node assert_equal(self.prj.connection_manager.connections, [(17, 80, 0.0, MIN_DELAY), (17, 82, 0.0, MIN_DELAY), (18, 80, 0.0, MIN_DELAY)]) class TestDistanceDependentProbabilityConnector(object): def setup(self): common.simulator = MockSimulator self.prj = MockProjection(MockPre(4), MockPost(numpy.array([0,1,0,1,0], dtype=bool))) self.prj.rng = MockRNG(num_processes=2, delta=0.01) def test_connect_with_default_args(self): C = connectors.DistanceDependentProbabilityConnector(d_expression="d<62.5") C.progressbar = Mock() C.progression = Mock() C.connect(self.prj) # 20 possible connections. Only those with a sufficiently small distance # are created assert_equal(self.prj.connection_manager.connections, [(18, 80, 0.0, MIN_DELAY), (19, 80, 0.0, MIN_DELAY), (20, 80, 0.0, MIN_DELAY), (20, 82, 0.0, MIN_DELAY)]) class TestFromListConnector(object): def setup(self): common.simulator = MockSimulator self.prj = MockProjection(MockPre(4), MockPost(numpy.array([0,1,0,1,0], dtype=bool))) def test_connect_with_valid_list(self): connection_list = [ (0, 0, 0.1, 0.1), # 17 -> 79 (3, 0, 0.2, 0.11), # 20 -> 79 (2, 3, 0.3, 0.12), # 19 -> 82 local (2, 2, 0.4, 0.13), # 19 -> 81 (0, 1, 0.5, 0.14), # 17 -> 80 local ] C = connectors.FromListConnector(connection_list) C.progressbar = Mock() C.progression = Mock() C.connect(self.prj) # note that ListConnector does not filter out non-local connections assert_equal(self.prj.connection_manager.connections, [(17, 79, 0.1, 0.1), (17, 80, 0.5, 0.14), (19, 82, 0.3, 0.12), (19, 81, 0.4, 0.13), (20, 79, 0.2, 0.11)]) def test_connect_with_out_of_range_index(self): connection_list = [ (0, 0, 0.1, 0.1), # 17 -> 79 (3, 0, 0.2, 0.11), # 20 -> 79 (2, 3, 0.3, 0.12), # 19 -> 82 local (5, 2, 0.4, 0.13), # NON-EXISTENT -> 81 (0, 1, 0.5, 0.14), # 17 -> 80 local ] C = connectors.FromListConnector(connection_list) assert_raises(errors.ConnectionError, C.connect, self.prj) class TestFromFileConnector(object): def setup(self): common.simulator = MockSimulator self.prj = MockProjection(MockPre(4), MockPost(numpy.array([0,1,0,1,0], dtype=bool))) self.connection_list = [ (0, 0, 0.1, 0.1), # 17 -> 79 (3, 0, 0.2, 0.11), # 20 -> 79 (2, 3, 0.3, 0.12), # 19 -> 82 local (2, 2, 0.4, 0.13), # 19 -> 81 (0, 1, 0.5, 0.14), # 17 -> 80 local ] def teardown(self): if os.path.exists("test.connections"): os.remove("test.connections") def test_connect_with_standard_text_file_not_distributed(self): numpy.savetxt("test.connections", self.connection_list) C = connectors.FromFileConnector("test.connections", distributed=False) C.connect(self.prj) assert_equal(self.prj.connection_manager.connections, [(17, 79, 0.1, 0.1), (17, 80, 0.5, 0.14), (19, 82, 0.3, 0.12), (19, 81, 0.4, 0.13), (20, 79, 0.2, 0.11)]) def test_connect_with_standard_text_file_distributed(self): local_connection_list = [c for c in self.connection_list if c[1]%2 == 1] numpy.savetxt("test.connections.1", local_connection_list) C = connectors.FromFileConnector("test.connections", distributed=True) C.connect(self.prj) assert_equal(self.prj.connection_manager.connections, [(17, 80, 0.5, 0.14), (19, 82, 0.3, 0.12)]) class TestFixedNumberPostConnector(object): def setup(self): common.simulator = MockSimulator self.prj = MockProjection(MockPre(4), MockPost(numpy.array([0,1,0,1,0], dtype=bool))) self.prj.rng.rng = Mock() self.prj.rng.rng.permutation = lambda x: x def 
test_with_n_smaller_than_population_size(self): C = connectors.FixedNumberPostConnector(n=3) C.progressbar = Mock() C.progression = Mock() assert self.prj.rng is not None C.connect(self.prj) # FixedNumberPost does not currently filter out only local connections assert_equal(self.prj.connection_manager.connections, [(17, 79, 0.0, MIN_DELAY), (17, 80, 0.0, MIN_DELAY), (17, 81, 0.0, MIN_DELAY), (18, 79, 0.0, MIN_DELAY), (18, 80, 0.0, MIN_DELAY), (18, 81, 0.0, MIN_DELAY), (19, 79, 0.0, MIN_DELAY), (19, 80, 0.0, MIN_DELAY), (19, 81, 0.0, MIN_DELAY), (20, 79, 0.0, MIN_DELAY), (20, 80, 0.0, MIN_DELAY), (20, 81, 0.0, MIN_DELAY)]) class TestFixedNumberPreConnector(object): def setup(self): common.simulator = MockSimulator self.prj = MockProjection(MockPre(4), MockPost(numpy.array([0,1,0,1,0], dtype=bool))) self.prj.rng.rng = Mock() self.prj.rng.rng.permutation = lambda x: x def test_with_n_smaller_than_population_size(self): C = connectors.FixedNumberPreConnector(n=3) C.progressbar = Mock() C.progression = Mock() assert self.prj.rng is not None self.prj.post.local_cells = [MockCell(n) for n in self.prj.post.local_cells] C.connect(self.prj) assert_equal(self.prj.connection_manager.connections, [(17, 80, 0.0, MIN_DELAY), (18, 80, 0.0, MIN_DELAY), (19, 80, 0.0, MIN_DELAY), (17, 82, 0.0, MIN_DELAY), (18, 82, 0.0, MIN_DELAY), (19, 82, 0.0, MIN_DELAY)]) class TestDistanceMatrix(object): def test_really_simple0(self): A = numpy.zeros((3,)) B = numpy.zeros((3,5)) D = connectors.DistanceMatrix(B, space.Space()) D.set_source(A) assert_arrays_equal(D.as_array(), numpy.zeros((5,), float)) def test_really_simple1(self): A = numpy.ones((3,)) B = numpy.zeros((3,5)) D = connectors.DistanceMatrix(B, space.Space()) D.set_source(A) assert_arrays_equal(D.as_array(), numpy.sqrt(3*numpy.ones((5,), float))) PyNN-0.7.4/test/unittests/test_population.py0000644000175000017500000001431011736323051021754 0ustar andrewandrew00000000000000from pyNN import common, errors, random, standardmodels, space from nose.tools import assert_equal, assert_raises import numpy from mock import Mock, patch from pyNN.utility import assert_arrays_equal class MockID(int, common.IDMixin): def __init__(self, n): int.__init__(n) common.IDMixin.__init__(self) def get_parameters(self): return {} class MockPopulation(common.Population): recorder_class = Mock() initialize = Mock() def _create_cells(self, cellclass, cellparams, size): self.all_cells = numpy.array([MockID(i) for i in range(999, 999+size)], MockID) self._mask_local = numpy.arange(size)%5==3 # every 5th cell, starting with the 4th, is on this node self.first_id = self.all_cells[0] self.last_id = self.all_cells[-1] class MockStandardCell(standardmodels.StandardCellType): default_parameters = { 'a': 20.0, 'b': -34.9 } translations = standardmodels.build_translations(('a', 'A'), ('b', 'B')) default_initial_values = {'m': -1.23} class MockStructure(space.BaseStructure): parameter_names = ('p0', 'p1') p0 = 1 p1 = 2 def test_create_population_standard_cell_simple(): p = MockPopulation(11, MockStandardCell) assert_equal(p.size, 11) assert isinstance(p.label, basestring) assert isinstance(p.celltype, MockStandardCell) assert isinstance(p._structure, space.Line) assert_equal(p._positions, None) assert_equal(p.celltype.parameters, {'A': 20.0, 'B': -34.9}) assert_equal(p.initial_values, {}) assert isinstance(p.recorders, dict) p.initialize.assert_called_with('m', -1.23) def test_create_population_standard_cell_with_params(): p = MockPopulation(11, MockStandardCell, {'a': 17.0, 'b': 0.987}) assert 
isinstance(p.celltype, MockStandardCell) assert_equal(p.celltype.parameters, {'A': 17.0, 'B': 0.987}) # test create native cell # test create native cell with params # test create with structure def test_create_population_with_implicit_grid(): p = MockPopulation((11,), MockStandardCell) assert_equal(p.size, 11) assert isinstance(p.structure, space.Line) p = MockPopulation((5,6), MockStandardCell) assert_equal(p.size, 30) assert isinstance(p.structure, space.Grid2D) p = MockPopulation((2,3,4), MockStandardCell) assert_equal(p.size, 24) assert isinstance(p.structure, space.Grid3D) assert_raises(Exception, MockPopulation, (2,3,4,5), MockStandardCell) # test local_cells property def test_cell_property(): p = MockPopulation(11, MockStandardCell) assert_arrays_equal(p.cell, p.all_cells) def test_id_to_index(): p = MockPopulation(11, MockStandardCell) assert isinstance(p[0], common.IDMixin) assert_equal(p.id_to_index(p[0]), 0) assert_equal(p.id_to_index(p[10]), 10) def test_id_to_index_with_array(): p = MockPopulation(11, MockStandardCell) assert isinstance(p[0], common.IDMixin) assert_arrays_equal(p.id_to_index(p.all_cells[3:9:2]), numpy.arange(3,9,2)) def test_id_to_index_with_populationview(): p = MockPopulation(11, MockStandardCell) assert isinstance(p[0], common.IDMixin) view = p[3:7] assert isinstance(view, common.PopulationView) assert_arrays_equal(p.id_to_index(view), numpy.arange(3,7)) def test_id_to_index_with_invalid_id(): p = MockPopulation(11, MockStandardCell) assert isinstance(p[0], common.IDMixin) assert_raises(ValueError, p.id_to_index, MockID(p.last_id+1)) assert_raises(ValueError, p.id_to_index, MockID(p.first_id-1)) def test_id_to_index_with_invalid_ids(): p = MockPopulation(11, MockStandardCell) assert_raises(ValueError, p.id_to_index, [MockID(p.first_id-1)] + p.all_cells[0:3].tolist()) def test_id_to_local_index(): orig_np = common.num_processes common.num_processes = lambda: 5 p = MockPopulation(11, MockStandardCell) # every 5th cell, starting with the 4th, is on this node. assert_equal(p.id_to_local_index(p[3]), 0) assert_equal(p.id_to_local_index(p[8]), 1) common.num_processes = lambda: 1 # only one node assert_equal(p.id_to_local_index(p[3]), 3) assert_equal(p.id_to_local_index(p[8]), 8) common.num_processes = orig_np def test_id_to_local_index_with_invalid_id(): orig_np = common.num_processes common.num_processes = lambda: 5 p = MockPopulation(11, MockStandardCell) # every 5th cell, starting with the 4th, is on this node. 
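    # Worked example (assumes MockPopulation's _mask_local = arange(size)%5==3,
    # as defined above): with 11 cells, only global indices 3 and 8 are local,
    # mapping to local indices 0 and 1; p[0] is not local on this node, so
    # id_to_local_index must raise ValueError below.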
assert_raises(ValueError, p.id_to_local_index, p[0]) common.num_processes = orig_np # test structure property def test_set_structure(): p = MockPopulation(11, MockStandardCell) p._positions = numpy.arange(33).reshape(3,11) new_struct = MockStructure() p.structure = new_struct assert_equal(p._structure, new_struct) assert_equal(p._positions, None) # test positions property def test_get_positions(): p = MockPopulation(11, MockStandardCell) pos1 = numpy.arange(33).reshape(3,11) p._structure = Mock() p._structure.generate_positions = Mock(return_value=pos1) assert_equal(p._positions, None) assert_arrays_equal(p.positions, pos1) assert_arrays_equal(p._positions, pos1) pos2 = 1+numpy.arange(33).reshape(3,11) p._positions = pos2 assert_arrays_equal(p.positions, pos2) def test_set_positions(): p = MockPopulation(11, MockStandardCell) assert p._structure != None new_positions = numpy.random.uniform(size=(3,11)) p.positions = new_positions assert_equal(p.structure, None) assert_arrays_equal(p.positions, new_positions) new_positions[0,0] = 99.9 assert p.positions[0,0] != 99.9 def test_position_generator(): p = MockPopulation(11, MockStandardCell) assert_arrays_equal(p.position_generator(0), p.positions[:,0]) assert_arrays_equal(p.position_generator(10), p.positions[:,10]) assert_arrays_equal(p.position_generator(-1), p.positions[:,10]) assert_arrays_equal(p.position_generator(-11), p.positions[:,0]) assert_raises(IndexError, p.position_generator, 11) assert_raises(IndexError, p.position_generator, -12) # test describe method def test_describe(): p = MockPopulation(11, MockStandardCell) assert isinstance(p.describe(), basestring) assert isinstance(p.describe(template=None), dict) PyNN-0.7.4/test/unittests/test_projection.py0000644000175000017500000001643711736323051021752 0ustar andrewandrew00000000000000from pyNN import common, standardmodels from nose.tools import assert_equal, assert_raises from mock import Mock import numpy import os from pyNN.utility import assert_arrays_equal orig_rank = common.rank orig_np = common.num_processes def setup(): common.rank = lambda: 1 common.num_processes = lambda: 3 def teardown(): common.rank = orig_rank common.num_processes = orig_np class MockStandardCell(standardmodels.StandardCellType): recordable = ['v', 'spikes'] class MockPopulation(common.BasePopulation): label = "mock_population" first_id = 555 class MockConnectionManager(object): def __len__(self): return 999 def __getitem__(self, i): return 888+i def get(self, name, format): return numpy.arange(100) class MockConnection(object): source = 246 target = 652 weight = 542 delay = 254 def test_create_simple(): p1 = MockPopulation() p2 = MockPopulation() prj = common.Projection(p1, p2, method=Mock()) def test_create_with_synapse_dynamics(): p1 = MockPopulation() p2 = MockPopulation() prj = common.Projection(p1, p2, method=Mock(), synapse_dynamics=standardmodels.SynapseDynamics()) def test_len(): p1 = MockPopulation() p2 = MockPopulation() prj = common.Projection(p1, p2, method=Mock()) prj.connection_manager = MockConnectionManager() assert_equal(len(prj), len(prj.connection_manager)) def test_size_no_gather(): p1 = MockPopulation() p2 = MockPopulation() prj = common.Projection(p1, p2, method=Mock()) prj.connection_manager = MockConnectionManager() assert_equal(prj.size(gather=False), len(prj)) def test_size_with_gather(): orig_mpi_sum = common.recording.mpi_sum common.recording.mpi_sum = Mock() p1 = MockPopulation() p2 = MockPopulation() prj = common.Projection(p1, p2, method=Mock()) prj.connection_manager = 
MockConnectionManager() prj.size(gather=True) common.recording.mpi_sum.assert_called_with(len(prj)) common.recording.mpi_sum = orig_mpi_sum def test__getitem(): p1 = MockPopulation() p2 = MockPopulation() prj = common.Projection(p1, p2, method=Mock()) prj.connection_manager = MockConnectionManager() assert_equal(prj[0], 888) def test_set_weights(): p1 = MockPopulation() p2 = MockPopulation() prj = common.Projection(p1, p2, method=Mock()) prj.synapse_type = "foo" prj.post.local_cells = [0] prj.connection_manager = MockConnectionManager() prj.connection_manager.set = Mock() prj.setWeights(0.5) prj.connection_manager.set.assert_called_with('weight', 0.5) def test_randomize_weights(): p1 = MockPopulation() p2 = MockPopulation() prj = common.Projection(p1, p2, method=Mock()) prj.connection_manager = MockConnectionManager() prj.setWeights = Mock() rd = Mock() rd.next = Mock(return_value=777) prj.randomizeWeights(rd) rd.next.assert_called_with(len(prj)) prj.setWeights.assert_called_with(777) def test_set_delays(): p1 = MockPopulation() p2 = MockPopulation() prj = common.Projection(p1, p2, method=Mock()) prj.connection_manager = MockConnectionManager() prj.connection_manager.set = Mock() prj.setDelays(0.5) prj.connection_manager.set.assert_called_with('delay', 0.5) def test_randomize_delays(): p1 = MockPopulation() p2 = MockPopulation() prj = common.Projection(p1, p2, method=Mock()) prj.connection_manager = MockConnectionManager() prj.setDelays = Mock() rd = Mock() rd.next = Mock(return_value=777) prj.randomizeDelays(rd) rd.next.assert_called_with(len(prj)) prj.setDelays.assert_called_with(777) def test_set_synapse_dynamics_param(): p1 = MockPopulation() p2 = MockPopulation() prj = common.Projection(p1, p2, method=Mock()) prj.connection_manager = MockConnectionManager() prj.connection_manager.set = Mock() prj.setSynapseDynamics('U', 0.5) prj.connection_manager.set.assert_called_with('U', 0.5) def test_get_weights(): p1 = MockPopulation() p2 = MockPopulation() prj = common.Projection(p1, p2, method=Mock()) prj.connection_manager = MockConnectionManager() prj.connection_manager.get = Mock() prj.getWeights(format='list', gather=False) prj.connection_manager.get.assert_called_with('weight', 'list') def test_get_delays(): p1 = MockPopulation() p2 = MockPopulation() prj = common.Projection(p1, p2, method=Mock()) prj.connection_manager = MockConnectionManager() prj.connection_manager.get = Mock() prj.getDelays(format='list', gather=False) prj.connection_manager.get.assert_called_with('delay', 'list') def test_save_connections(): filename = "test.connections" if os.path.exists(filename + ".1"): os.remove(filename + ".1") p1 = MockPopulation() p2 = MockPopulation() prj = common.Projection(p1, p2, method=Mock()) prj.connection_manager = MockConnectionManager() prj.connections = [MockConnection(), MockConnection(), MockConnection()] prj.saveConnections(filename, gather=False, compatible_output=False) assert os.path.exists(filename + ".1") os.remove(filename + ".1") def test_print_weights_as_list(): filename = "test.weights" if os.path.exists(filename): os.remove(filename) p1 = MockPopulation() p2 = MockPopulation() prj = common.Projection(p1, p2, method=Mock()) prj.connection_manager = MockConnectionManager() prj.printWeights(filename, format='list', gather=False) assert os.path.exists(filename) os.remove(filename) def test_print_weights_as_array(): filename = "test.weights" if os.path.exists(filename): os.remove(filename) p1 = MockPopulation() p2 = MockPopulation() prj = common.Projection(p1, p2, 
method=Mock()) prj.connection_manager = MockConnectionManager() prj.printWeights(filename, format='array', gather=False) assert os.path.exists(filename) os.remove(filename) def test_weight_histogram_with_args(): p1 = MockPopulation() p2 = MockPopulation() prj = common.Projection(p1, p2, method=Mock()) prj.getWeights = Mock(return_value=numpy.array(range(10)*42)) n, bins = prj.weightHistogram(min=0.0, max=9.0, nbins=10) assert_equal(n.size, 10) assert_equal(bins.size, n.size+1) assert_arrays_equal(n, 42*numpy.ones(10)) assert_equal(n.sum(), 420) assert_arrays_equal(bins, numpy.arange(0.0, 9.1, 0.9)) def test_weight_histogram_no_args(): p1 = MockPopulation() p2 = MockPopulation() prj = common.Projection(p1, p2, method=Mock()) prj.getWeights = Mock(return_value=numpy.array(range(10)*42)) n, bins = prj.weightHistogram(nbins=10) assert_equal(n.size, 10) assert_equal(bins.size, n.size+1) assert_arrays_equal(n, 42*numpy.ones(10)) assert_equal(n.sum(), 420) assert_arrays_equal(bins, numpy.arange(0.0, 9.1, 0.9)) def test_describe(): p1 = MockPopulation() p2 = MockPopulation() prj = common.Projection(p1, p2, method=Mock(), synapse_dynamics=standardmodels.SynapseDynamics()) prj.connection_manager = MockConnectionManager() prj.pre.describe = Mock() prj.post.describe = Mock() assert isinstance(prj.describe(engine='string'), basestring) assert isinstance(prj.describe(template=None), dict) PyNN-0.7.4/test/unittests/test_populationview.py0000644000175000017500000001013511736323051022650 0ustar andrewandrew00000000000000from pyNN import common, standardmodels from nose.tools import assert_equal, assert_raises import numpy from mock import Mock, patch from pyNN.utility import assert_arrays_equal class MockID(object): def __init__(self, i, parent): self.label = str(i) self.parent = parent def get_parameters(self): return {} class MockPopulation(common.Population): recorder_class = Mock() initialize = Mock() def _create_cells(self, cellclass, cellparams, size): self.all_cells = numpy.array([MockID(i, self) for i in range(size)], MockID) self._mask_local = numpy.arange(size)%5==3 class MockStandardCell(standardmodels.StandardCellType): default_parameters = { 'a': 20.0, 'b': -34.9 } translations = standardmodels.build_translations(('a', 'A'), ('b', 'B')) default_initial_values = {'m': -1.23} # test create with population parent and mask selector def test_create_with_slice_selector(): p = MockPopulation(11, MockStandardCell) mask = slice(3,9,2) pv = common.PopulationView(parent=p, selector=mask) assert_equal(pv.parent, p) assert_equal(pv.size, 3) assert_equal(pv.mask, mask) assert_arrays_equal(pv.all_cells, numpy.array([p.all_cells[3], p.all_cells[5], p.all_cells[7]])) assert_arrays_equal(pv.local_cells, numpy.array([p.all_cells[3]])) assert_arrays_equal(pv._mask_local, numpy.array([1,0,0], dtype=bool)) assert_equal(pv.celltype, p.celltype) assert_equal(pv.cellparams, p.cellparams) assert_equal(pv.recorders, p.recorders) assert_equal(pv.first_id, p.all_cells[3]) assert_equal(pv.last_id, p.all_cells[7]) def test_create_with_boolean_array_selector(): p = MockPopulation(11, MockStandardCell) mask = numpy.array([0,0,0,1,0,1,0,1,0,0,0], dtype=bool) pv = common.PopulationView(parent=p, selector=mask) assert_arrays_equal(pv.all_cells, numpy.array([p.all_cells[3], p.all_cells[5], p.all_cells[7]])) #assert_arrays_equal(pv.mask, mask) def test_create_with_index_array_selector(): p = MockPopulation(11, MockStandardCell) mask = numpy.array([3, 5, 7]) pv = common.PopulationView(parent=p, selector=mask) 
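    # Sketch of the three selector forms exercised in this file (assuming the
    # PopulationView API shown above): a slice such as slice(3,9,2), a boolean
    # mask of length equal to the parent size, and an index array like
    # numpy.array([3, 5, 7]) all select the same three parent cells here,
    # i.e. parent.all_cells[[3, 5, 7]].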
    assert_arrays_equal(pv.all_cells, numpy.array([p.all_cells[3], p.all_cells[5], p.all_cells[7]]))
    assert_arrays_equal(pv.mask, mask)

# test create with populationview parent and mask selector

def test_create_with_slice_selector_from_populationview():
    p = MockPopulation(11, MockStandardCell)
    mask1 = slice(0, 9, 1)
    pv1 = common.PopulationView(parent=p, selector=mask1)
    assert_arrays_equal(pv1.all_cells, p.all_cells[0:9])
    mask2 = slice(3, 9, 2)
    pv2 = common.PopulationView(parent=pv1, selector=mask2)
    assert_equal(pv2.parent, pv1)  # or would it be better to resolve the parent chain up to an actual Population?
    assert_arrays_equal(pv2.all_cells, numpy.array([p.all_cells[3], p.all_cells[5], p.all_cells[7]]))
    assert_arrays_equal(pv2._mask_local, numpy.array([1, 0, 0], dtype=bool))

# test initial values property

def test_structure_property():
    p = MockPopulation(11, MockStandardCell)
    mask = slice(3, 9, 2)
    pv = common.PopulationView(parent=p, selector=mask)
    assert_equal(pv.structure, p.structure)

# test positions property

def test_get_positions():
    p = MockPopulation(11, MockStandardCell)
    ppos = numpy.random.uniform(size=(3, 11))
    p._positions = ppos
    pv = common.PopulationView(parent=p, selector=slice(3, 9, 2))
    assert_arrays_equal(pv.positions, numpy.array([ppos[:,3], ppos[:,5], ppos[:,7]]).T)

# test id_to_index

def test_id_to_index():
    p = MockPopulation(11, MockStandardCell)
    mask = slice(3, 9, 2)
    pv = common.PopulationView(parent=p, selector=mask)
    assert_equal(pv.id_to_index(p.all_cells[3]), 0)
    assert_equal(pv.id_to_index(p.all_cells[7]), 2)
    assert_raises(IndexError, pv.id_to_index, p.all_cells[0])

# test describe

def test_describe():
    p = MockPopulation(11, MockStandardCell)
    mask = slice(3, 9, 2)
    pv = common.PopulationView(parent=p, selector=mask)
    assert isinstance(pv.describe(), basestring)
    assert isinstance(pv.describe(template=None), dict)
PyNN-0.7.4/test/unittests/test_recording.py0000644000175000017500000001232611736323051021543 0ustar andrewandrew00000000000000from pyNN import recording
from nose.tools import assert_equal, assert_raises
from mock import Mock
import numpy
import os
from pyNN.utility import assert_arrays_equal

MPI = recording.MPI
if MPI:
    mpi_comm = recording.mpi_comm

#def test_rename_existing():

#def test_gather():
#    import time
#    for x in range(7):
#        N = pow(10, x)
#        local_data = numpy.empty((N,2))
#        local_data[:,0] = numpy.ones(N, dtype=float)*comm.rank
#        local_data[:,1] = numpy.random.rand(N)
#
#        start_time = time.time()
#        all_data = gather(local_data)
#        #print comm.rank, "local", local_data
#        if comm.rank == 0:
#            # print "all", all_data
#            print N, time.time()-start_time

#def test_gather_no_MPI():

#def test_gather_dict():

#def test_mpi_sum():

class MockPopulation(object):
    size = 11
    first_id = 2454
    last_id = first_id + size
    label = "mock population"

    def __len__(self):
        return self.size

    def can_record(self, variable):
        if variable in ["spikes", "v", "gsyn"]:
            return True
        else:
            return False

    def id_to_index(self, id):
        return id

def test_Recorder_create():
    r = recording.Recorder('spikes')
    assert_equal(r.variable, 'spikes')
    assert_equal(r.population, None)
    assert_equal(r.file, None)
    assert_equal(r.recorded, set([]))

def test_Recorder_invalid_variable():
    assert_raises(AssertionError,
                  recording.Recorder, 'foo', population=MockPopulation())

class MockID(object):
    def __init__(self, id, local):
        self.id = id
        self.local = local

def test_record():
    r = recording.Recorder('spikes')
    r._record = Mock()
    assert_equal(r.recorded, set([]))
    all_ids = (MockID(0, True), MockID(1, False), MockID(2, True),
               MockID(3, True), MockID(4, False))
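    # Sketch of the expected bookkeeping (inferred from the assertions below,
    # not from the Recorder implementation itself): record() keeps only ids
    # with local == True in self.recorded, and each call forwards just the
    # newly added ids to self._record(), so after first_ids (locals: 0 and 2)
    # a second call with more_ids (locals: 2 and 3) passes only id 3 on.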
first_ids = all_ids[0:3] r.record(first_ids) assert_equal(r.recorded, set(id for id in first_ids if id.local)) assert_equal(len(r.recorded), 2) r._record.assert_called_with(r.recorded) more_ids = all_ids[2:5] r.record(more_ids) assert_equal(r.recorded, set(id for id in all_ids if id.local)) assert_equal(len(r.recorded), 3) r._record.assert_called_with(set(all_ids[3:4])) def test_filter_recorded(): r = recording.Recorder('spikes') r._record = Mock() all_ids = (MockID(0, True), MockID(1, False), MockID(2, True), MockID(3, True), MockID(4, False)) r.record(all_ids) assert_equal(r.recorded, set(id for id in all_ids if id.local)) filter = all_ids[::2] filtered_ids = r.filter_recorded(filter) assert_equal(filtered_ids, set(id for id in filter if id.local)) assert_equal(r.filter_recorded(None), r.recorded) def test_get__zero_offset(): r = recording.Recorder('spikes') fake_data = numpy.array([ (3, 12.3), (4, 14.5), (7, 19.8) ]) r._get = Mock(return_value=fake_data) assert_arrays_equal(r.get(), fake_data) class MockState(object): def __init__(self, mpi_rank): self.mpi_rank = mpi_rank self.dt = 0.123 class MockSimulator(object): def __init__(self, mpi_rank): self.state = MockState(mpi_rank) def test_write__with_filename__compatible_output__gather__onroot(): recording.simulator = MockSimulator(mpi_rank=0) orig_metadata = recording.Recorder.metadata recording.Recorder.metadata = {'a': 2, 'b':3} r = recording.Recorder('spikes') fake_data = numpy.array([ (3, 12.3), (4, 14.5), (7, 19.8) ]) r._get = Mock(return_value=fake_data) r._make_compatible = Mock(return_value=fake_data) r.write(file="tmp.spikes", gather=True, compatible_output=True) os.remove("tmp.spikes") recording.Recorder.metadata = orig_metadata def test_metadata_property(): r = recording.Recorder('spikes', population=None) r._get = Mock(return_value=numpy.random.uniform(size=(6,2))) assert_equal(r.metadata, {'variable': 'spikes', 'dt': 0.123, 'n': 6}) r = recording.Recorder('v', population=MockPopulation()) r._get = Mock(return_value=numpy.random.uniform(size=(6,2))) assert_equal(r.metadata, {'first_id': 2454, 'label': 'mock population', 'n': 6, 'variable': 'v', 'dt': 0.123, 'last_id': 2465, 'size': 11, 'first_index': 0, 'last_index': 11}) def test__make_compatible_spikes(): r = recording.Recorder('spikes') input_data = numpy.array([[0, 12.3], [1, 45.2], [0, 46.3], [4, 49.4], [0, 78.3]]) output_data = r._make_compatible(input_data) # time id assert_arrays_equal(input_data[:,(1,0)], output_data) def test__make_compatible_v(): r = recording.Recorder('v') input_data = numpy.array([[0, 0.0, -65.0], [3, 0.0, -65.0], [0, 0.1, -64.3], [3, 0.1, -65.1], [0, 0.2, -63.7], [3, 0.2, -65.5]]) output_data = r._make_compatible(input_data) # voltage id assert_arrays_equal(input_data[:,(2,0)], output_data) #def test_count__spikes_gather(): #def test_count__spikes_nogather(): #def test_count__other(): PyNN-0.7.4/test/unittests/test_basepopulation.py0000644000175000017500000003713011774601632022622 0ustar andrewandrew00000000000000from pyNN import common, errors, random, standardmodels from nose.tools import assert_equal, assert_raises import numpy from mock import Mock, patch from pyNN.utility import assert_arrays_equal from pyNN import core builtin_open = open id_map = {'larry': 0, 'curly': 1, 'moe': 2, 'joe': 3, 'william': 4, 'jack': 5, 'averell': 6} class MockStandardCell(standardmodels.StandardCellType): recordable = ['v', 'spikes'] default_parameters = {'tau_m': 999.9, 'i_offset': 321.0, 'spike_times': [0,1,2], 'foo': 33.3} translations = {'tau_m': None, 
'i_offset': None, 'spike_times': None, 'foo': None} @classmethod def translate(cls, parameters): return parameters class MockPopulation(common.BasePopulation): size = 13 all_cells = numpy.arange(100, 113) _mask_local = numpy.array([0,1,0,1,0,1,0,1,0,1,0,1,0], bool) local_cells = all_cells[_mask_local] positions = numpy.arange(39).reshape((13,3)).T label = "mock_population" celltype = MockStandardCell({}) initial_values = {"foo": core.LazyArray(numpy.array((98, 100, 102)), shape=(3,))} def id_to_index(self, id): if id.label in id_map: return id_map[id.label] else: raise Exception("Invalid ID") def id_to_local_index(self, id): if id.label in id_map: global_index = id_map[id.label] if global_index%2 == 1: return global_index/2 else: raise Exception("ID not on this node") else: raise Exception("Invalid ID") class MockID(object): def __init__(self, label, parent): self.label = label self.parent = parent def test__getitem__int(): p = MockPopulation() assert_equal(p[0], 100) assert_equal(p[12], 112) assert_raises(IndexError, p.__getitem__, 13) assert_equal(p[-1], 112) def test__getitem__slice(): orig_PV = common.PopulationView common.PopulationView = Mock() p = MockPopulation() pv = p[3:9] common.PopulationView.assert_called_with(p, slice(3,9,None)) common.PopulationView = orig_PV def test__getitem__list(): orig_PV = common.PopulationView common.PopulationView = Mock() p = MockPopulation() pv = p[range(3,9)] common.PopulationView.assert_called_with(p, range(3,9)) common.PopulationView = orig_PV def test__getitem__tuple(): orig_PV = common.PopulationView common.PopulationView = Mock() p = MockPopulation() pv = p[(3,5,7)] common.PopulationView.assert_called_with(p, [3,5,7]) common.PopulationView = orig_PV def test__getitem__invalid(): p = MockPopulation() assert_raises(TypeError, p.__getitem__, "foo") def test_len(): p = MockPopulation() assert_equal(len(p), MockPopulation.size) def test_iter(): p = MockPopulation() itr = p.__iter__() assert hasattr(itr, "next") assert_equal(len(list(itr)), 6) def test_is_local(): p1 = MockPopulation() p2 = MockPopulation() id_local = MockID("curly", parent=p1) id_nonlocal = MockID("larry", parent=p1) assert p1.is_local(id_local) assert not p1.is_local(id_nonlocal) assert_raises(AssertionError, p2.is_local, id_local) def test_all(): p = MockPopulation() itr = p.all() assert hasattr(itr, "next") assert_equal(len(list(itr)), 13) def test_add(): p1 = MockPopulation() p2 = MockPopulation() assembly = p1 + p2 assert isinstance(assembly, common.Assembly) assert_equal(assembly.populations, [p1, p2]) def test_get_cell_position(): p = MockPopulation() id = MockID("larry", parent=p) assert_arrays_equal(p._get_cell_position(id), numpy.array([0,1,2])) id = MockID("moe", parent=p) assert_arrays_equal(p._get_cell_position(id), numpy.array([6,7,8])) def test_set_cell_position(): p = MockPopulation() id = MockID("larry", parent=p) p._set_cell_position(id, numpy.array([100,101,102])) assert_equal(p.positions[0,0], 100) assert_equal(p.positions[0,1], 3) def test_get_cell_initial_value(): p = MockPopulation() id = MockID("curly", parent=p) assert_equal(p._get_cell_initial_value(id, "foo"), 98) def test_set_cell_initial_value(): p = MockPopulation() id = MockID("curly", parent=p) p._set_cell_initial_value(id, "foo", -1) assert_equal(p._get_cell_initial_value(id, "foo"), -1) def test_nearest(): p = MockPopulation() p.positions = numpy.arange(39).reshape((13,3)).T assert_equal(p.nearest((0.0, 1.0, 2.0)), p[0]) assert_equal(p.nearest((3.0, 4.0, 5.0)), p[1]) assert_equal(p.nearest((36.0, 
37.0, 38.0)), p[12]) assert_equal(p.nearest((1.49, 2.49, 3.49)), p[0]) assert_equal(p.nearest((1.51, 2.51, 3.51)), p[1]) def test_sample(): orig_pv = common.PopulationView common.PopulationView = Mock() p = MockPopulation() rng = Mock() rng.permutation = Mock(return_value=numpy.array([7,4,8,12,0,3,9,1,2,11,5,10,6])) pv = p.sample(5, rng=rng) assert_arrays_equal(common.PopulationView.call_args[0][1], numpy.array([7,4,8,12,0])) common.PopulationView = orig_pv def test_get_should_call_get_array_if_it_exists(): p = MockPopulation() p._get_array = Mock() p.get("tau_m") p._get_array.assert_called_with("tau_m") def test_get_with_no_get_array(): orig_iter = MockPopulation.__iter__ mock_cell = Mock() MockPopulation.__iter__ = Mock(return_value=iter([mock_cell])) p = MockPopulation() values = p.get("i_offset") assert hasattr(mock_cell, "i_offset") MockPopulation.__iter__ = orig_iter def test_get_with_gather(): np_orig = common.num_processes rank_orig = common.rank gd_orig = common.recording.gather_dict common.num_processes = lambda: 2 common.rank = lambda: 0 def mock_gather_dict(D): # really hacky assert isinstance(D[0], list) D[1] = [i-1 for i in D[0]] + [D[0][-1] + 1] return D common.recording.gather_dict = mock_gather_dict p = MockPopulation() p._get_array = Mock(return_value=numpy.arange(11.0, 23.0, 2.0)) assert_arrays_equal(p.get("tau_m", gather=True), numpy.arange(10.0, 23.0)) common.num_processes = np_orig common.rank = rank_orig common.recording.gather_dict = gd_orig def test_set_from_dict(): p = MockPopulation() p._set_array = Mock() p.set({'tau_m': 43.21}) p._set_array.assert_called_with(**{'tau_m': 43.21}) def test_set_from_pair(): p = MockPopulation() p._set_array = Mock() p.set('tau_m', 12.34) p._set_array.assert_called_with(**{'tau_m': 12.34}) def test_set_invalid_type(): p = MockPopulation() assert_raises(errors.InvalidParameterValueError, p.set, 'foo', {}) assert_raises(errors.InvalidParameterValueError, p.set, [1,2,3]) assert_raises(errors.InvalidParameterValueError, p.set, 'foo', 'bar') assert_raises(errors.InvalidParameterValueError, p.set, {'foo': 'bar'}) def test_set_inconsistent_type(): p = MockPopulation() p._set_array = Mock() assert_raises(errors.InvalidParameterValueError, p.set, 'tau_m', [12.34, 56.78]) def test_set_with_no_get_array(): mock_cell = Mock() orig_iter = MockPopulation.__iter__ MockPopulation.__iter__ = Mock(return_value=iter([mock_cell])) p = MockPopulation() values = p.set("i_offset", 0.1) mock_cell.set_parameters.assert_called_with(**{"i_offset": 0.1}) MockPopulation.__iter__ = orig_iter def test_set_with_list(): p = MockPopulation() p._set_array = Mock() p.set('spike_times', range(10)) p._set_array.assert_called_with(**{'spike_times': range(10)}) def test_tset_with_numeric_values(): p = MockPopulation() p._set_array = Mock() tau_m = numpy.linspace(10.0, 20.0, num=p.size) p.tset("tau_m", tau_m) assert_arrays_equal(p._set_array.call_args[1]['tau_m'], tau_m[p._mask_local]) def test_tset_with_array_values(): p = MockPopulation() p._set_array = Mock() spike_times = numpy.linspace(0.0, 1000.0, num=10*p.size).reshape((p.size,10)) p.tset("spike_times", spike_times) call_args = p._set_array.call_args[1]['spike_times'] assert_equal(call_args.shape, spike_times[p._mask_local].shape) assert_arrays_equal(call_args.flatten(), spike_times[p._mask_local].flatten()) def test_tset_invalid_dimensions_2D(): """Population.tset(): If the size of the valueArray does not match that of the Population, should raise an InvalidDimensionsError.""" p = MockPopulation() array_in = 
numpy.array([[0.1,0.2,0.3],[0.4,0.5,0.6]]) assert_raises(errors.InvalidDimensionsError, p.tset, 'i_offset', array_in) def test_tset_invalid_dimensions_1D(): p = MockPopulation() tau_m = numpy.linspace(10.0, 20.0, num=p.size+1) assert_raises(errors.InvalidDimensionsError, p.tset, "tau_m", tau_m) def test_rset(): """Population.rset()""" p = MockPopulation() rd = Mock() rnums = numpy.arange(p.size) rd.next = Mock(return_value=rnums) p.tset = Mock() p.rset("cm", rd) rd.next.assert_called_with(**{'mask_local': False, 'n': p.size}) call_args = p.tset.call_args assert_equal(call_args[0][0], 'cm') assert_arrays_equal(call_args[0][1], rnums) def test_rset_with_native_rng(): p = MockPopulation() p._native_rset = Mock() rd = Mock() rd.rng = random.NativeRNG() p.rset('tau_m', rd) p._native_rset.assert_called_with('tau_m', rd) def test_initialize(): p = MockPopulation() p.initial_values = {} p._set_initial_value_array = Mock() p.initialize('v', -65.0) assert_equal(p.initial_values['v'].value, -65.0) p._set_initial_value_array.assert_called_with('v', -65.0) def test_initialize_random_distribution(): p = MockPopulation() p.initial_values = {} p._set_initial_value_array = Mock() class MockRandomDistribution(random.RandomDistribution): def next(self, n, mask_local): return 42*numpy.ones(n)[mask_local] p.initialize('v', MockRandomDistribution()) assert_arrays_equal(p.initial_values['v'].value, 42*numpy.ones(p.local_size)) #p._set_initial_value_array.assert_called_with('v', 42*numpy.ones(p.size)) def test_can_record(): p = MockPopulation() p.celltype = MockStandardCell({}) assert p.can_record('v') assert not p.can_record('foo') def test__record(): p = MockPopulation() p.recorders = {'v': Mock()} p._record('v') meth, args, kwargs = p.recorders['v'].method_calls[0] id_arr, = args assert_equal(meth, 'record') assert_arrays_equal(id_arr, p.all_cells) def test__record_invalid_variable(): p = MockPopulation() assert_raises(errors.RecordingError, p._record, 'foo') #def test__record_int(): #p = MockPopulation() #p.recorders = {'spikes': Mock()} #p._record('spikes', 5) #meth, args, kwargs = p.recorders['spikes'].method_calls[0] #id_arr, = args #assert_equal(meth, 'record') #assert_equal(id_arr.size, 5) #def test__record_with_RNG(): #p = MockPopulation() #p.recorders = {'v': Mock()} #rng = Mock() #rng.permutation = Mock(return_value=numpy.arange(p.size)) #p._record('v', 5, rng) #meth, args, kwargs = p.recorders['v'].method_calls[0] #id_arr, = args #assert_equal(meth, 'record') #assert_equal(id_arr.size, 5) #rng.permutation.assert_called_with(p.all_cells) #def test__record_list(): #record_list = ['curly', 'larry', 'moe'] # should really check that record_list contains IDs #p = MockPopulation() #p.recorders = {'v': Mock()} #p._record('v', record_list) #meth, args, kwargs = p.recorders['v'].method_calls[0] #id_list, = args #assert_equal(meth, 'record') #assert_equal(id_list, record_list) def test_invalid_record_from(): p = MockPopulation() assert_raises(Exception, p._record, 'v', 4.2) def test_spike_recording(): p = MockPopulation() p._record = Mock() p.record("arg1") p._record.assert_called_with('spikes', "arg1") def test_record_v(): p = MockPopulation() p._record = Mock() p.record_v("arg1") p._record.assert_called_with('v', "arg1") def test_record_gsyn(): p = MockPopulation() p._record = Mock() p.record_gsyn("arg1") p._record.assert_called_with('gsyn', "arg1") def test_printSpikes(): p = MockPopulation() p.recorders = {'spikes': Mock()} p.record_filter = "arg4" p.printSpikes("arg1", "arg2", "arg3") meth, args, kwargs = 
p.recorders['spikes'].method_calls[0] assert_equal(meth, 'write') assert_equal(args, ("arg1", "arg2", "arg3", "arg4")) def test_getSpikes(): p = MockPopulation() p.recorders = {'spikes': Mock()} p.record_filter = "arg3" p.getSpikes("arg1", "arg2") meth, args, kwargs = p.recorders['spikes'].method_calls[0] assert_equal(meth, 'get') assert_equal(args, ("arg1", "arg2", "arg3")) def test_print_v(): p = MockPopulation() p.recorders = {'v': Mock()} p.record_filter = "arg4" p.print_v("arg1", "arg2", "arg3") meth, args, kwargs = p.recorders['v'].method_calls[0] assert_equal(meth, 'write') assert_equal(args, ("arg1", "arg2", "arg3", "arg4")) def test_get_v(): p = MockPopulation() p.recorders = {'v': Mock()} p.record_filter = "arg3" p.get_v("arg1", "arg2") meth, args, kwargs = p.recorders['v'].method_calls[0] assert_equal(meth, 'get') assert_equal(args, ("arg1", "arg2", "arg3")) def test_print_gsyn(): p = MockPopulation() p.recorders = {'gsyn': Mock()} p.record_filter = "arg4" p.print_gsyn("arg1", "arg2", "arg3") meth, args, kwargs = p.recorders['gsyn'].method_calls[0] assert_equal(meth, 'write') assert_equal(args, ("arg1", "arg2", "arg3", "arg4")) def test_get_gsyn(): p = MockPopulation() p.recorders = {'gsyn': Mock()} p.record_filter = "arg3" p.get_gsyn("arg1", "arg2") meth, args, kwargs = p.recorders['gsyn'].method_calls[0] assert_equal(meth, 'get') assert_equal(args, ("arg1", "arg2", "arg3")) def test_get_spike_counts(): p = MockPopulation() p.recorders = {'spikes': Mock()} p.get_spike_counts("arg1") meth, args, kwargs = p.recorders['spikes'].method_calls[0] assert_equal(meth, 'count') assert_equal(args, ("arg1", None)) def test_meanSpikeCount(): orig_rank = common.rank common.rank = lambda: 0 p = MockPopulation() p.recorders = {'spikes': Mock()} p.recorders['spikes'].count = Mock(return_value={0: 2, 1: 5}) assert_equal(p.meanSpikeCount(), 3.5) common.rank = orig_rank def test_meanSpikeCount_on_slave_node(): orig_rank = common.rank common.rank = lambda: 1 p = MockPopulation() p.recorders = {'spikes': Mock()} p.recorders['spikes'].count = Mock(return_value={0: 2, 1: 5}) assert p.meanSpikeCount() is numpy.NaN common.rank = orig_rank def test_inject(): p = MockPopulation() cs = Mock() p.inject(cs) meth, args, kwargs = cs.method_calls[0] assert_equal(meth, "inject_into") assert_equal(args, (p,)) def test_inject_into_invalid_celltype(): p = MockPopulation() p.celltype.injectable = False assert_raises(TypeError, p.inject, Mock()) def test_save_positions(): import os orig_rank = common.rank common.rank = lambda: 0 p = MockPopulation() p.all_cells = numpy.array([34, 45, 56, 67]) p.positions = numpy.arange(12).reshape((4,3)).T output_file = Mock() p.save_positions(output_file) assert_arrays_equal(output_file.write.call_args[0][0], numpy.array([[34, 0, 1, 2], [45, 3, 4, 5], [56, 6, 7, 8], [67, 9, 10, 11]])) assert_equal(output_file.write.call_args[0][1], {'population': p.label}) # arguably, the first column should contain indices, not ids. 
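    # save_positions() is expected to write one row per cell, [id, x, y, z],
    # i.e. the transpose of the (3, n) positions array with ids prepended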
common.rank = orig_rankPyNN-0.7.4/test/unittests/test_simulation_control.py0000644000175000017500000000457011736323051023515 0ustar andrewandrew00000000000000from pyNN import common, errors from nose.tools import assert_equal, assert_raises class MockState(object): def __init__(self): self.accesses = [] def __getattr__(self, name): if name == 'accesses': return self.__getattribute__(name) else: self.accesses.append(name) class MockSimulator(object): def __init__(self): self.reset_called = False self.state = MockState() def reset(self): self.reset_called = True class MockPopulation(common.Population): def __init__(self): self.initializations = [] def initialize(self, variable, value): self.initializations.append((variable, value)) def test_setup(): assert_raises(Exception, common.setup, min_delay=1.0, max_delay=0.9) assert_raises(Exception, common.setup, mindelay=1.0) # } common assert_raises(Exception, common.setup, maxdelay=10.0) # } misspellings assert_raises(Exception, common.setup, dt=0.1) # } assert_raises(Exception, common.setup, timestep=0.1, min_delay=0.09) def test_end(): assert_raises(NotImplementedError, common.end) def test_run(): assert_raises(NotImplementedError, common.run, 10.0) def test_reset(): common.simulator = MockSimulator() common.reset() assert common.simulator.reset_called def test_initialize(): p = MockPopulation() common.initialize(p, 'v', -65.0) assert p.initializations == [('v', -65.0)] def test_current_time(): common.simulator = MockSimulator() common.get_current_time() assert_equal(common.simulator.state.accesses, ['t']) def test_time_step(): common.simulator = MockSimulator() common.get_time_step() assert_equal(common.simulator.state.accesses, ['dt']) def test_min_delay(): common.simulator = MockSimulator() common.get_min_delay() assert_equal(common.simulator.state.accesses, ['min_delay']) def test_max_delay(): common.simulator = MockSimulator() common.get_max_delay() assert_equal(common.simulator.state.accesses, ['max_delay']) def test_num_processes(): common.simulator = MockSimulator() common.num_processes() assert_equal(common.simulator.state.accesses, ['num_processes']) def test_rank(): common.simulator = MockSimulator() common.rank() assert_equal(common.simulator.state.accesses, ['mpi_rank']) PyNN-0.7.4/test/system/0000755000175000017500000000000011774605211015437 5ustar andrewandrew00000000000000PyNN-0.7.4/test/system/test_neuron.py0000644000175000017500000001076511736323051020364 0ustar andrewandrew00000000000000from nose.plugins.skip import SkipTest from scenarios import scenarios from nose.tools import assert_equal, assert_almost_equal from pyNN.random import RandomDistribution from pyNN.utility import init_logging try: import pyNN.neuron from pyNN.neuron.cells import _new_property, NativeCellType from nrnutils import Mechanism, Section have_neuron = True except ImportError: have_neuron = False def test_scenarios(): for scenario in scenarios: if "neuron" not in scenario.exclude: scenario.description = scenario.__name__ if have_neuron: yield scenario, pyNN.neuron else: raise SkipTest def test_ticket168(): """ Error setting firing rate of `SpikeSourcePoisson` after `reset()` in NEURON http://neuralensemble.org/trac/PyNN/ticket/168 """ pynn = pyNN.neuron pynn.setup() cell = pynn.Population(1, cellclass=pynn.SpikeSourcePoisson, label="cell") cell[0].rate = 12 pynn.run(10.) pynn.reset() cell[0].rate = 12 pynn.run(10.) 
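    # after reset() the clock restarts from zero, so the second 10 ms run
    # should leave the simulation at t = 10 ms, and the Poisson source should
    # honour the rate set after the reset (interval = 1000/rate ms)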
assert_almost_equal(pynn.get_current_time(), 10.0, places=11) assert_equal(cell[0]._cell.interval, 1000.0/12.0) class SimpleNeuron(object): def __init__(self, **parameters): # define ion channel parameters leak = Mechanism('pas', e=-65, g=parameters['g_leak']) hh = Mechanism('hh', gl=parameters['g_leak'], el=-65, gnabar=parameters['gnabar'], gkbar=parameters['gkbar']) # create cable sections self.soma = Section(L=30, diam=30, mechanisms=[hh]) self.apical = Section(L=600, diam=2, nseg=5, mechanisms=[leak], parent=self.soma, connect_to=1) self.basilar = Section(L=600, diam=2, nseg=5, mechanisms=[leak], parent=self.soma) self.axon = Section(L=1000, diam=1, nseg=37, mechanisms=[hh]) # synaptic input self.apical.add_synapse('ampa', 'Exp2Syn', e=0.0, tau1=0.1, tau2=5.0) # needed for PyNN self.source_section = self.soma self.source = self.soma(0.5)._ref_v self.parameter_names = ('g_leak', 'gnabar', 'gkbar') self.traces = {} self.recording_time = False def _set_g_leak(self, value): for sec in (self.apical, self.basilar): for seg in sec: seg.pas.g = value for sec in (self.soma, self.axon): for seg in sec: seg.hh.gl = value def _get_g_leak(self): return self.apical(0.5).pas.g g_leak = property(fget=_get_g_leak, fset=_set_g_leak) def _set_gnabar(self, value): for sec in (self.soma, self.axon): for seg in sec: seg.hh.gnabar = value def _get_gnabar(self): return self.soma(0.5).hh.gnabar gnabar = property(fget=_get_gnabar, fset=_set_gnabar) def _set_gkbar(self, value): for sec in (self.soma, self.axon): for seg in sec: seg.hh.gkbar = value def _get_gkbar(self): return self.soma(0.5).hh.gkbar gkbar = property(fget=_get_gkbar, fset=_set_gkbar) def memb_init(self): """needed for PyNN""" for sec in (self.soma, self.axon, self.apical, self.basilar): for seg in sec: seg.v = self.v_init class SimpleNeuronType(NativeCellType): default_parameters = {'g_leak': 0.0002, 'gkbar': 0.036, 'gnabar': 0.12} default_initial_values = {'v': -65.0} recordable = ['apical(1.0).v', 'soma(0.5).ina'] # this is not good - over-ride Population.can_record()? 
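    # the 'model' attribute (below) is what ties this NativeCellType subclass
    # to the class that builds the underlying NEURON sections for each cell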
model = SimpleNeuron def test_record_native_model(): nrn = pyNN.neuron init_logging(logfile=None, debug=True) nrn.setup() parameters = {'g_leak': 0.0003} p1 = nrn.Population(10, SimpleNeuronType, parameters) print p1.get('g_leak') p1.rset('gnabar', RandomDistribution('uniform', [0.10, 0.14])) print p1.get('gnabar') p1.initialize('v', -63.0) current_source = nrn.StepCurrentSource([50.0, 110.0, 150.0, 210.0], [0.4, 0.6, -0.2, 0.2]) p1.inject(current_source) p2 = nrn.Population(1, nrn.SpikeSourcePoisson, {'rate': 100.0}) p1._record('apical(1.0).v') p1._record('soma(0.5).ina') connector = nrn.AllToAllConnector(weights=0.1) prj_alpha = nrn.Projection(p2, p1, connector, target='apical.ampa') nrn.run(250.0) assert_equal(p1.recorders['apical(1.0).v'].get().shape, (25010, 3)) id, t, v = p1.recorders['apical(1.0).v'].get().T return id, t, vPyNN-0.7.4/test/system/__init__.py0000644000175000017500000000000011736323051017533 0ustar andrewandrew00000000000000PyNN-0.7.4/test/system/test_pcsim.py0000644000175000017500000000232411736323051020161 0ustar andrewandrew00000000000000from nose.plugins.skip import SkipTest from scenarios import scenarios try: import pyNN.pcsim have_pcsim = True except ImportError: have_pcsim = False def test_all(): for scenario in scenarios: if "pcsim" not in scenario.exclude: scenario.description = scenario.__name__ if have_pcsim: yield scenario, pyNN.pcsim else: raise SkipTest def test_PoissonInputNeuron(): if have_pcsim: import pypcsim as pcs import numpy net = pcs.SingleThreadNetwork() inputs1 = [net.create(pcs.PoissonInputNeuron(rate=20.0, duration=1e6)) for i in range(5)] inputs2 = [net.create(pcs.PoissonInputNeuron(rate=40.0, duration=1e6)) for i in range(5)] recorders = [net.object(net.record(input, pcs.SpikeTimeRecorder())) for input in inputs1+inputs2] net.simulate(10.0) spike_counts = numpy.array([recorder.spikeCount() for recorder in recorders]) assert (100 < spike_counts[:5]).all() assert (300 > spike_counts[:5]).all() assert (300 < spike_counts[5:]).all() assert (500 > spike_counts[5:]).all() else: raise SkipTest PyNN-0.7.4/test/system/test_moose.py0000644000175000017500000000310111736323051020162 0ustar andrewandrew00000000000000from nose.plugins.skip import SkipTest from scenarios import scenarios from nose.tools import assert_equal, assert_almost_equal from pyNN.random import RandomDistribution from pyNN.utility import init_logging try: import pyNN.moose have_moose = True except ImportError: have_moose = False def test_scenarios(): for scenario in scenarios: if "moose" not in scenario.exclude: scenario.description = scenario.__name__ if have_moose: yield scenario, pyNN.moose else: raise SkipTest def test_recording(): if not have_moose: raise SkipTest sim = pyNN.moose sim.setup() p = sim.Population(2, sim.HH_cond_exp, {'i_offset': 0.1}) p.initialize('v', -65.0) p.record_v() sim.run(100.0) id, t, v = p.get_v().T assert v.max() > 0 # at least one spike sim.end() return id, t, v def test_synaptic_connections(): if not have_moose: raise SkipTest import numpy sim = pyNN.moose sim.setup() p1 = sim.Population(1, sim.SpikeSourceArray, {'spike_times': numpy.arange(3.0, 103, 10.0)}) #p1 = sim.Population(1, sim.SpikeSourcePoisson, {'rate': 100.0}) #p1 = sim.Population(1, sim.HH_cond_exp, {'i_offset': 1.0}) #p2 = sim.Population(1, sim.HH_cond_exp) p2 = sim.Population(1, sim.IF_cond_exp) prj = sim.Projection(p1, p2, sim.AllToAllConnector(weights=1e-4)) p2.record_v() sim.run(100.0) id, t, v2 = p2.get_v().T sim.end() return id, t, 
v2PyNN-0.7.4/test/system/test_nest.py0000644000175000017500000000454311736323051020024 0ustar andrewandrew00000000000000from nose.plugins.skip import SkipTest from scenarios import scenarios from nose.tools import assert_equal try: import pyNN.nest have_nest = True except ImportError: have_nest = False def test_scenarios(): for scenario in scenarios: if "nest" not in scenario.exclude: scenario.description = scenario.__name__ if have_nest: yield scenario, pyNN.nest else: raise SkipTest def test_record_native_model(): nest = pyNN.nest from pyNN.random import RandomDistribution from pyNN.utility import init_logging init_logging(logfile=None, debug=True) nest.setup() parameters = {'Tau_m': 17.0} n_cells = 10 p1 = nest.Population(n_cells, nest.native_cell_type("ht_neuron"), parameters) p1.initialize('V_m', -70.0) p1.initialize('Theta', -50.0) p1.set('Theta_eq', -51.5) assert_equal(p1.get('Theta_eq'), [-51.5]*10) print p1.get('Tau_m') p1.rset('Tau_m', RandomDistribution('uniform', [15.0, 20.0])) print p1.get('Tau_m') current_source = nest.StepCurrentSource([50.0, 110.0, 150.0, 210.0], [0.01, 0.02, -0.02, 0.01]) p1.inject(current_source) p2 = nest.Population(1, nest.native_cell_type("poisson_generator"), {'rate': 200.0}) print "Setting up recording" p2.record() p1._record('V_m') connector = nest.AllToAllConnector(weights=0.001) prj_ampa = nest.Projection(p2, p1, connector, target='AMPA') tstop = 250.0 nest.run(tstop) n_points = int(tstop/nest.get_time_step()) + 1 assert_equal(p1.recorders['V_m'].get().shape, (n_points*n_cells, 3)) id, t, v = p1.recorders['V_m'].get().T assert v.max() > 0.0 # should have some spikes def test_native_stdp_model(): nest = pyNN.nest from pyNN.utility import init_logging init_logging(logfile=None, debug=True) nest.setup() p1 = nest.Population(10, nest.IF_cond_exp) p2 = nest.Population(10, nest.SpikeSourcePoisson) stdp_params = {'Wmax': 50.0, 'lambda': 0.015} stdp = nest.NativeSynapseDynamics("stdp_synapse", stdp_params) connector = nest.AllToAllConnector(weights=0.001) prj = nest.Projection(p2, p1, connector, target='excitatory', synapse_dynamics=stdp) PyNN-0.7.4/test/system/scenarios.py0000644000175000017500000004172011737571712020012 0ustar andrewandrew00000000000000# encoding: utf-8 from pyNN.random import NumpyRNG, RandomDistribution from pyNN import common, recording from nose.tools import assert_equal import numpy from pyNN.utility import init_logging, assert_arrays_equal, assert_arrays_almost_equal, sort_by_column def set_simulator(sim): common.simulator = sim.simulator recording.simulator = sim.simulator scenarios = [] def register(exclude=[]): def inner_register(scenario): #print "registering %s with exclude=%s" % (scenario, exclude) if scenario not in scenarios: scenario.exclude = exclude scenarios.append(scenario) return scenario return inner_register @register() def scenario1(sim): """ Balanced network of integrate-and-fire neurons. 
""" set_simulator(sim) cell_params = { 'tau_m': 20.0, 'tau_syn_E': 5.0, 'tau_syn_I': 10.0, 'v_rest': -60.0, 'v_reset': -60.0, 'v_thresh': -50.0, 'cm': 1.0, 'tau_refrac': 5.0, 'e_rev_E': 0.0, 'e_rev_I': -80.0 } stimulation_params = {'rate' : 100.0, 'duration' : 50.0} n_exc = 80 n_inh = 20 n_input = 20 rngseed = 98765 parallel_safe = True n_threads = 1 pconn_recurr = 0.02 pconn_input = 0.01 tstop = 1000.0 delay = 0.2 weights = { 'excitatory': 4.0e-3, 'inhibitory': 51.0e-3, 'input': 0.1, } sim.setup(timestep=0.1, threads=n_threads) all_cells = sim.Population(n_exc+n_inh, sim.IF_cond_exp, cell_params, label="All cells") cells = { 'excitatory': all_cells[:n_exc], 'inhibitory': all_cells[n_exc:], 'input': sim.Population(n_input, sim.SpikeSourcePoisson, stimulation_params, label="Input") } rng = NumpyRNG(seed=rngseed, parallel_safe=parallel_safe) uniform_distr = RandomDistribution( 'uniform', [cell_params['v_reset'], cell_params['v_thresh']], rng=rng) all_cells.initialize('v', uniform_distr) connections = {} for name, pconn, target in ( ('excitatory', pconn_recurr, 'excitatory'), ('inhibitory', pconn_recurr, 'inhibitory'), ('input', pconn_input, 'excitatory'), ): connector = sim.FixedProbabilityConnector(pconn, weights=weights[name], delays=delay) connections[name] = sim.Projection(cells[name], all_cells, connector, target=target, label=name, rng=rng) all_cells.record() cells['excitatory'][0:2].record_v() assert_equal(cells['excitatory'][0:2].grandparent, all_cells) sim.run(tstop) E_count = cells['excitatory'].meanSpikeCount() I_count = cells['inhibitory'].meanSpikeCount() print "Excitatory rate : %g Hz" % (E_count*1000.0/tstop,) print "Inhibitory rate : %g Hz" % (I_count*1000.0/tstop,) sim.end() @register() def scenario1a(sim): """ Balanced network of integrate-and-fire neurons, built with the "low-level" API. """ set_simulator(sim) cell_params = { 'tau_m': 10.0, 'tau_syn_E': 2.0, 'tau_syn_I': 5.0, 'v_rest': -60.0, 'v_reset': -65.0, 'v_thresh': -55.0, 'cm': 0.5, 'tau_refrac': 2.5, 'e_rev_E': 0.0, 'e_rev_I': -75.0 } stimulation_params = {'rate': 80.0, 'duration': 50.0} n_exc = 80 n_inh = 20 n_input = 20 rngseed = 87546 parallel_safe = True n_threads = 1 pconn_recurr = 0.03 pconn_input = 0.01 tstop = 1000.0 delay = 0.3 w_exc = 3.0e-3 w_inh = 45.0e-3 w_input = 0.12 sim.setup(timestep=0.1, threads=n_threads) excitatory_cells = sim.create(sim.IF_cond_alpha, cell_params, n=n_exc) inhibitory_cells = sim.create(sim.IF_cond_alpha, cell_params, n=n_inh) inputs = sim.create(sim.SpikeSourcePoisson, stimulation_params, n=n_input) all_cells = excitatory_cells + inhibitory_cells sim.initialize(all_cells, 'v', cell_params['v_rest']) sim.connect(excitatory_cells, all_cells, weight=w_exc, delay=delay, synapse_type='excitatory', p=pconn_recurr) sim.connect(inhibitory_cells, all_cells, weight=w_exc, delay=delay, synapse_type='inhibitory', p=pconn_recurr) sim.connect(inputs, all_cells, weight=w_input, delay=delay, synapse_type='excitatory', p=pconn_input) sim.record(all_cells, "scenario1a_%s.spikes" % sim.__name__) sim.record_v(excitatory_cells[0:2], "scenario1a_%s.v" % sim.__name__) sim.run(tstop) E_count = excitatory_cells.meanSpikeCount() I_count = inhibitory_cells.meanSpikeCount() print "Excitatory rate : %g Hz" % (E_count*1000.0/tstop,) print "Inhibitory rate : %g Hz" % (I_count*1000.0/tstop,) sim.end() @register(exclude=["moose"]) def scenario2(sim): """ Array of neurons, each injected with a different current. 
    The firing period of an IF neuron injected with a current I is:

    T = tau_m*log(I*tau_m/(I*tau_m - v_thresh*cm))

    (if v_rest = v_reset = 0.0)

    We set the refractory period to be very large, so each neuron fires only
    once (except neuron[0], which never reaches threshold).
    """
    set_simulator(sim)
    n = 100
    t_start = 25.0
    duration = 100.0
    t_stop = 150.0
    tau_m = 20.0
    v_thresh = 10.0
    cm = 1.0
    cell_params = {"tau_m": tau_m, "v_rest": 0.0, "v_reset": 0.0,
                   "tau_refrac": 100.0, "v_thresh": v_thresh, "cm": cm}
    I0 = (v_thresh*cm)/tau_m
    sim.setup(timestep=0.01, spike_precision="off_grid")
    neurons = sim.Population(n, sim.IF_curr_exp, cell_params)
    neurons.initialize('v', 0.0)
    I = numpy.arange(I0, I0+1.0, 1.0/n)
    currents = [sim.DCSource(start=t_start, stop=t_start+duration, amplitude=amp)
                for amp in I]
    for j, (neuron, current) in enumerate(zip(neurons, currents)):
        if j%2 == 0:                      # these should
            neuron.inject(current)        # be entirely
        else:                             # equivalent
            current.inject_into([neuron])
    neurons.record_v()
    neurons.record()
    sim.run(t_stop)
    spikes = neurons.getSpikes()
    assert_equal(spikes.shape, (99,2)) # first cell does not fire
    spikes = sort_by_column(spikes, 0)
    spike_times = spikes[:,1]
    expected_spike_times = t_start + tau_m*numpy.log(I*tau_m/(I*tau_m - v_thresh*cm))
    a = spike_times
    b = expected_spike_times[1:]
    max_error = abs((a-b)/b).max()
    print "max error =", max_error
    assert max_error < 0.005, max_error
    sim.end()
    return a, b, spikes

@register(exclude=["moose", "brian"])
def scenario3(sim):
    """
    Simple feed-forward network with additive STDP. The second half of the
    presynaptic neurons fires faster than the first half, so their connections
    should be potentiated more.
    """
    set_simulator(sim)
    init_logging(logfile=None, debug=True)
    second = 1000.0
    duration = 10
    tau_m = 20 # ms
    cm = 1.0 # nF
    v_reset = -60
    cell_parameters = dict(
        tau_m = tau_m,
        cm = cm,
        v_rest = -70,
        e_rev_E = 0,
        e_rev_I = -70,
        v_thresh = -54,
        v_reset = v_reset,
        tau_syn_E = 5,
        tau_syn_I = 5,
    )
    g_leak = cm/tau_m # µS
    w_min = 0.0*g_leak
    w_max = 0.05*g_leak
    r1 = 5.0
    r2 = 40.0
    sim.setup()
    pre = sim.Population(100, sim.SpikeSourcePoisson)
    post = sim.Population(10, sim.IF_cond_exp)
    pre.set("duration", duration*second)
    pre.set("start", 0.0)
    pre[:50].set("rate", r1)
    pre[50:].set("rate", r2)
    assert_equal(pre[49].rate, r1)
    assert_equal(pre[50].rate, r2)
    post.set(cell_parameters)
    post.initialize('v', RandomDistribution('normal', (v_reset, 5.0)))
    stdp = sim.SynapseDynamics(
        slow=sim.STDPMechanism(
            sim.SpikePairRule(tau_plus=20.0, tau_minus=20.0),
            sim.AdditiveWeightDependence(w_min=w_min, w_max=w_max,
                                         A_plus=0.01, A_minus=0.01),
            #dendritic_delay_fraction=0.5))
            dendritic_delay_fraction=1))
    connections = sim.Projection(pre, post, sim.AllToAllConnector(),
                                 target='excitatory', synapse_dynamics=stdp)
    initial_weight_distr = RandomDistribution('uniform', (w_min, w_max))
    connections.randomizeWeights(initial_weight_distr)
    initial_weights = connections.getWeights(format='array')
    assert initial_weights.min() >= w_min
    assert initial_weights.max() < w_max
    assert initial_weights[0,0] != initial_weights[1,0]
    pre.record()
    post.record()
    post.record_v(1)
    sim.run(duration*second)
    actual_rate = pre.meanSpikeCount()/duration
    expected_rate = (r1+r2)/2
    errmsg = "actual rate: %g  expected rate: %g" % (actual_rate, expected_rate)
    assert abs(actual_rate - expected_rate) < 1, errmsg
    #assert abs(pre[:50].meanSpikeCount()/duration - r1) < 1
    #assert abs(pre[50:].meanSpikeCount()/duration - r2) < 1
    final_weights = connections.getWeights(format='array')
    assert initial_weights[0,0] !=
final_weights[0,0] import scipy.stats t,p = scipy.stats.ttest_ind(initial_weights[:50,:].flat, initial_weights[50:,:].flat) assert p > 0.05, p t,p = scipy.stats.ttest_ind(final_weights[:50,:].flat, final_weights[50:,:].flat) assert p < 0.01, p assert final_weights[:50,:].mean() < final_weights[50:,:].mean() return initial_weights, final_weights, pre, post, connections @register() def ticket166(sim): """ Check that changing the spike_times of a SpikeSourceArray mid-simulation works (see http://neuralensemble.org/trac/PyNN/ticket/166) """ dt = 0.1 # ms t_step = 100.0 # ms lag = 3.0 # ms interactive = False if interactive: import pylab pylab.rcParams['interactive'] = interactive set_simulator(sim) sim.setup(timestep=dt) spikesources = sim.Population(2, sim.SpikeSourceArray) cells = sim.Population(2, sim.IF_cond_exp) conn = sim.Projection(spikesources, cells, sim.OneToOneConnector(weights=0.1)) cells.record_v() spiketimes = numpy.arange(2.0, t_step, t_step/13.0) spikesources[0].spike_times = spiketimes spikesources[1].spike_times = spiketimes + lag t = sim.run(t_step) # both neurons depolarized by synaptic input t = sim.run(t_step) # no more synaptic input, neurons decay spiketimes += 2*t_step spikesources[0].spike_times = spiketimes # note we add no new spikes to the second source t = sim.run(t_step) # first neuron gets depolarized again final_v_0 = cells[0:1].get_v()[-1,2] final_v_1 = cells[1:2].get_v()[-1,2] sim.end() if interactive: id, t, vtrace = cells[0:1].get_v().T print vtrace.shape print t.shape pylab.plot(t, vtrace) id, t, vtrace = cells[1:2].get_v().T pylab.plot(t, vtrace) assert final_v_0 > -64.0 # first neuron has been depolarized again assert final_v_1 < -64.99 # second neuron has decayed back towards rest @register() def test_reset(sim): """ Run the same simulation n times without recreating the network, and check the results are the same each time. """ repeats = 3 sim.setup() p = sim.Population(1, sim.IF_cond_exp, {"i_offset": 0.1}) p.record_v() data = [] for i in range(repeats): sim.run(10.0) data.append(p.get_v()) sim.reset() sim.end() for rec in data: assert_arrays_almost_equal(rec, data[0], 1e-12) @register(exclude=['brian', 'pcsim']) def test_reset_recording(sim): """ Check that _record(None) resets the list of things to record. """ sim.setup() p = sim.Population(2, sim.IF_cond_exp) p[0].i_offset = 0.1 p[1].i_offset = 0.2 p[0:1].record_v() assert_equal(p.recorders['v'].recorded, set([p[0]])) sim.run(10.0) idA, tA, vA = p.get_v().T sim.reset() p._record(None) p[1:2].record_v() assert_equal(p.recorders['v'].recorded, set([p[1]])) sim.run(10.0) idB, tB, vB = p.get_v().T assert_arrays_equal(tA, tB) assert (idA == 0).all() assert (idB == 1).all() assert (vA != vB).any() @register() def test_setup(sim): """ Run the same simulation n times, recreating the network each time, and check the results are the same each time. 
""" n = 3 data = [] for i in range(n): sim.setup() p = sim.Population(1, sim.IF_cond_exp, {"i_offset": 0.1}) p.record_v() sim.run(10.0) data.append(p.get_v()) sim.end() assert len(data) == n assert data[0].size > 0 for rec in data: assert_arrays_equal(rec, data[0]) @register(exclude=['pcsim']) def test_EIF_cond_alpha_isfa_ista(sim): set_simulator(sim) sim.setup(timestep=0.01, min_delay=0.1, max_delay=4.0) ifcell = sim.create(sim.EIF_cond_alpha_isfa_ista, {'i_offset': 1.0, 'tau_refrac': 2.0, 'v_spike': -40}) ifcell.record() sim.run(200.0) expected_spike_times = numpy.array([10.02, 25.52, 43.18, 63.42, 86.67, 113.13, 142.69, 174.79]) diff = (ifcell.getSpikes()[:,1] - expected_spike_times)/expected_spike_times assert abs(diff).max() < 0.001 sim.end() @register(exclude=['pcsim']) def test_HH_cond_exp(sim): sim.setup(timestep=0.001, min_delay=0.1) cellparams = { 'gbar_Na' : 20.0, 'gbar_K' : 6.0, 'g_leak' : 0.01, 'cm' : 0.2, 'v_offset' : -63.0, 'e_rev_Na' : 50.0, 'e_rev_K' : -90.0, 'e_rev_leak': -65.0, 'e_rev_E' : 0.0, 'e_rev_I' : -80.0, 'tau_syn_E' : 0.2, 'tau_syn_I' : 2.0, 'i_offset' : 1.0, } hhcell = sim.create(sim.HH_cond_exp, cellparams=cellparams) sim.initialize(hhcell, 'v', -64.0) hhcell.record_v() sim.run(20.0) id, t, v = hhcell.get_v().T first_spike = t[numpy.where(v>0)[0][0]] assert first_spike - 2.95 < 0.01 @register(exclude=['pcsim']) def test_record_vm_and_gsyn_from_assembly(sim): from pyNN.utility import init_logging init_logging(logfile=None, debug=True) set_simulator(sim) dt = 0.1 tstop = 100.0 sim.setup(timestep=dt) cells = sim.Population(5, sim.IF_cond_exp) + sim.Population(6, sim.EIF_cond_exp_isfa_ista) inputs = sim.Population(5, sim.SpikeSourcePoisson, {'rate': 50.0}) sim.connect(inputs, cells, weight=0.1, delay=0.5, synapse_type='inhibitory') sim.connect(inputs, cells, weight=0.1, delay=0.3, synapse_type='excitatory') cells.record_v() cells[2:9].record_gsyn() for p in cells.populations: assert_equal(p.recorders['v'].recorded, set(p.all_cells)) assert_equal(cells.populations[0].recorders['gsyn'].recorded, set(cells.populations[0].all_cells[2:5])) assert_equal(cells.populations[1].recorders['gsyn'].recorded, set(cells.populations[1].all_cells[0:4])) sim.run(tstop) vm_p0 = cells.populations[0].get_v() vm_p1 = cells.populations[1].get_v() vm_all = cells.get_v() gsyn_p0 = cells.populations[0].get_gsyn() gsyn_p1 = cells.populations[1].get_gsyn() gsyn_all = cells.get_gsyn() assert_equal(numpy.unique(vm_p0[:,0]).tolist(), [ 0., 1., 2., 3., 4.]) assert_equal(numpy.unique(vm_p1[:,0]).tolist(), [ 0., 1., 2., 3., 4., 5.]) assert_equal(numpy.unique(vm_all[:,0]).astype(int).tolist(), range(11)) assert_equal(numpy.unique(gsyn_p0[:,0]).tolist(), [ 2., 3., 4.]) assert_equal(numpy.unique(gsyn_p1[:,0]).tolist(), [ 0., 1., 2., 3.]) assert_equal(numpy.unique(gsyn_all[:,0]).astype(int).tolist(), range(2,9)) n_points = int(tstop/dt) + 1 assert_equal(vm_p0.shape[0], 5*n_points) assert_equal(vm_p1.shape[0], 6*n_points) assert_equal(vm_all.shape[0], 11*n_points) assert_equal(gsyn_p0.shape[0], 3*n_points) assert_equal(gsyn_p1.shape[0], 4*n_points) assert_equal(gsyn_all.shape[0], 7*n_points) assert_arrays_equal(vm_p1[vm_p1[:,0]==3][:,2], vm_all[vm_all[:,0]==8][:,2]) sim.end() @register() def ticket195(sim): """ Check that the `connect()` function works correctly with single IDs (see http://neuralensemble.org/trac/PyNN/ticket/195) """ sim.setup(timestep=0.01) pre = sim.Population(10, sim.SpikeSourceArray, cellparams={'spike_times':range(1,10)}) post = sim.Population(10, sim.IF_cond_exp) 
    sim.connect(pre[0], post[0], weight=0.01, delay=0.1, p=1)
    post.record()
    sim.run(100.0)
    assert_arrays_almost_equal(post.getSpikes(), numpy.array([[0.0, 13.4]]), 0.5)

@register()
def ticket226(sim):
    """
    Check that the start time of DCSources is correctly taken into account
    (see http://neuralensemble.org/trac/PyNN/ticket/226)
    """
    sim.setup(timestep=0.1)
    cell = sim.Population(1, sim.IF_curr_alpha,
                          {'tau_m': 20.0, 'cm': 1.0, 'v_rest': -60.0, 'v_reset': -60.0})
    cell.initialize('v', -60.0)
    inj = sim.DCSource(amplitude=1.0, start=10.0, stop=20.0)
    cell.inject(inj)
    cell.record_v()
    sim.run(30.0)
    id, t, v = cell.get_v().T
    assert abs(v[abs(t-10.0)<0.01][0] - -60.0) < 1e-10
    assert v[abs(t-10.1)<0.01][0] > -59.99
PyNN-0.7.4/test/system/test_brian.py0000644000175000017500000000066111736323051020143 0ustar andrewandrew00000000000000from nose.plugins.skip import SkipTest
from scenarios import scenarios

try:
    import pyNN.brian
    have_brian = True
except ImportError:
    have_brian = False

def test_scenarios():
    for scenario in scenarios:
        if "brian" not in scenario.exclude:
            scenario.description = scenario.__name__
            if have_brian:
                yield scenario, pyNN.brian
            else:
                raise SkipTest
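# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the PyNN 0.7.4 sources): each backend test
# module above (test_neuron.py, test_nest.py, test_pcsim.py, test_moose.py,
# test_brian.py) repeats the same nose "test generator" pattern, reduced here
# to its skeleton. The backend name "mock" and the module pyNN.mock are
# hypothetical placeholders.

from nose.plugins.skip import SkipTest
from scenarios import scenarios

try:
    import pyNN.mock               # hypothetical backend module
    have_backend = True
except ImportError:
    have_backend = False

def test_scenarios():
    # nose runs each yielded (callable, argument) pair as a separate test, so
    # every registered scenario is executed once against this backend unless
    # the scenario named the backend in the exclude list passed to register()
    for scenario in scenarios:
        if "mock" not in scenario.exclude:
            scenario.description = scenario.__name__
            if have_backend:
                yield scenario, pyNN.mock
            else:
                raise SkipTest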