
Bug#681233: marked as done (unblock: pynn/0.7.4-1)



Your message dated Wed, 22 Aug 2012 17:43:22 +0200
with message-id <20120822154322.GQ5375@radis.cristau.org>
and subject line Re: Bug#681233: unblock: pynn/0.7.4-1
has caused the Debian Bug report #681233,
regarding unblock: pynn/0.7.4-1
to be marked as done.

This means that you claim that the problem has been dealt with.
If this is not the case it is now your responsibility to reopen the
Bug report if necessary, and/or fix the problem forthwith.

(NB: If you are a system administrator and have no idea what this
message is talking about, this may indicate a serious mail system
misconfiguration somewhere. Please contact owner@bugs.debian.org
immediately.)


-- 
681233: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=681233
Debian Bug Tracking System
Contact owner@bugs.debian.org with problems
--- Begin Message ---
Package: release.debian.org
Severity: normal
User: release.debian.org@packages.debian.org
Usertags: freeze-exception

unblock pynn/0.7.4-1

Please unblock package pynn

* Addresses FTBFS #669466

* This is a bugfix upstream release

  Development of pynn is primarily in 'maintenance' mode, so few new
  features are added and bugfix releases come out whenever important
  fixes are introduced.

  The only new feature in this release, compared to the 0.7.2 already in
  wheezy, is the addition of a new interface, src/neuroml2.py.

* Upstream's changelog:

 * Some fixes to the `CSAConnector` class
 * the Brian backend now includes the value at t=0 in recorded data (see ticket:225)
 * start times for `CurrentSources` in the NEST backend are now corrected for the
   connection delay
 * start times for `CurrentSources` in the Brian backend are now correct (there
   was something funny happening with clocks, before.)
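
A simplified, hypothetical sketch of the `CurrentSources` start-time fix
listed above, based on the `delay_correction` helper visible in the
attached debdiff (src/nest/electrodes.py); the `state` object and
`min_delay` value here are stand-ins for illustration:

```python
class _State:
    """Stand-in for the NEST backend's global state (assumed value)."""
    min_delay = 0.1  # ms


state = _State()


def delay_correction(value):
    # Shift a requested start/stop time back by the connection delay so
    # the injected current takes effect at the time the user asked for.
    return value - state.min_delay


# A pulse requested to start at t=10.0 ms is programmed at 9.9 ms, so
# after the min_delay of the connection it arrives at 10.0 ms.
corrected = delay_correction(10.0)
```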


* It built (including running unittests) across all recent Debian and Ubuntu releases:

pynn_0.7.4-1~nd60+1_amd64.build OK      2:56.61 real, 143.24 user, 85.27 sys, 272152 out
pynn_0.7.4-1~nd70+1_amd64.build OK      3:20.59 real, 158.30 user, 99.26 sys, 487688 out
pynn_0.7.4-1~nd+1_amd64.build   OK      3:10.88 real, 159.76 user, 99.49 sys, 502352 out
pynn_0.7.4-1~nd10.10+1_amd64.build      OK      3:21.21 real, 152.62 user, 116.75 sys, 277096 out
pynn_0.7.4-1~nd10.04+1_amd64.build      OK      3:30.88 real, 166.93 user, 191.90 sys, 264912 out
pynn_0.7.4-1~nd11.04+1_amd64.build      OK      2:59.93 real, 179.43 user, 91.54 sys, 261160 out
pynn_0.7.4-1~nd11.10+1_amd64.build      OK      2:37.89 real, 162.05 user, 92.57 sys, 426840 out
pynn_0.7.4-1~nd12.04+1_amd64.build      OK      2:35.35 real, 167.00 user, 91.06 sys, 414040 out

which suggests an absence of flaky code, at least in the paths covered by the unittests

* A complete debdiff (generated with -w to exclude trailing-whitespace changes) is attached.
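
The Brian recording change in the debdiff (inclusion of the value at t=0,
ticket:225) can be sketched as follows; this is a simplified, hypothetical
reconstruction of the `get_all_values`/`get_times` helpers in
src/brian/recording.py, with the Brian monitor replaced by plain arrays:

```python
import numpy

# With StateMonitor(when='start'), the recorded values lag one step behind
# the PyNN semantics, so the value at the end of the final time step must
# be appended and the times array extended accordingly.


def get_all_values(recorded, current_values):
    # recorded: (n_steps, n_cells) array from the monitor;
    # current_values: (n_cells,) array of the present state variables.
    return numpy.vstack((recorded, current_values[numpy.newaxis, :]))


def get_times(monitor_times, t_now):
    # Extend the monitor's time base with the current simulation time.
    n = monitor_times.size + 1
    times = numpy.empty((n,))
    times[:n - 1] = monitor_times
    times[-1] = t_now
    return times


vals = get_all_values(numpy.zeros((3, 2)), numpy.ones(2))
ts = get_times(numpy.array([0.0, 0.1, 0.2]), 0.3)
```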

-- System Information:
Debian Release: wheezy/sid
  APT prefers testing
  APT policy: (900, 'testing'), (600, 'unstable'), (300, 'experimental'), (100, 'stable')
Architecture: amd64 (x86_64)

Kernel: Linux 3.2.0-2-amd64 (SMP w/2 CPU cores)
Locale: LANG=en_US, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/bash
diff -Nru -w pynn-0.7.2/changelog pynn-0.7.4/changelog
--- pynn-0.7.2/changelog	2011-10-03 08:57:31.000000000 -0400
+++ pynn-0.7.4/changelog	2012-04-06 10:39:29.000000000 -0400
@@ -1,3 +1,20 @@
+=====================
+Release 0.7.3 (r1124)
+=====================
+
+* Some fixes to the `CSAConnector` class
+* the Brian backend now includes the value at t=0 in recorded data (see ticket:225)
+* start times for `CurrentSources` in the NEST backend are now corrected for the
+  connection delay
+* start times for `CurrentSources` in the Brian backend are now correct (there
+  was something funny happening with clocks, before.)
+
+====================
+Release 0.7.2 (r995)
+====================
+
+Fixed a bug whereby the `connect()` function didn't work with single IDs (see ticket:195)
+
 ====================
 Release 0.7.1 (r958)
 ====================
diff -Nru -w pynn-0.7.2/debian/changelog pynn-0.7.4/debian/changelog
--- pynn-0.7.2/debian/changelog	2012-03-16 10:20:17.000000000 -0400
+++ pynn-0.7.4/debian/changelog	2012-07-03 23:54:58.000000000 -0400
@@ -1,3 +1,12 @@
+pynn (0.7.4-1) unstable; urgency=low
+
+  * New upstream release
+    - fixes a few bugs detected upstream
+    - works around regressions of numpy 1.6 (e.g. #679948, #679998) and
+      mock 0.8 allowing unittests to pass (Closes: #669466)
+
+ -- Yaroslav Halchenko <debian@onerussian.com>  Tue, 03 Jul 2012 23:54:42 -0400
+
 pynn (0.7.2-1) unstable; urgency=low
 
   * New upstream release.
diff -Nru -w pynn-0.7.2/debian/patches/deb_disable_test_with_numerical_instability pynn-0.7.4/debian/patches/deb_disable_test_with_numerical_instability
--- pynn-0.7.2/debian/patches/deb_disable_test_with_numerical_instability	2012-03-16 10:04:05.000000000 -0400
+++ pynn-0.7.4/debian/patches/deb_disable_test_with_numerical_instability	1969-12-31 19:00:00.000000000 -0500
@@ -1,18 +0,0 @@
-From: Michael Hanke <michael.hanke@gmail.com>
-Subject: Disable failing test with numerical instabilities
-
-diff --git a/test/unittests/test_files.py b/test/unittests/test_files.py
-index da4ebe8..60b4a74 100644
---- a/test/unittests/test_files.py
-+++ b/test/unittests/test_files.py
-@@ -53,8 +53,8 @@ def test_StandardTextFile_write():
-               (('1.0\t3.4\n',), {}),
-               (('2.0\t4.3\n',), {})]
-     stf.write(data, metadata)
--    assert_equal(stf.fileobj.write.call_args_list,
--                 target)
-+    #assert_equal(stf.fileobj.write.call_args_list,
-+    #             target)
-     files.open = builtin_open
-     
- def test_StandardTextFile_read():
diff -Nru -w pynn-0.7.2/debian/patches/series pynn-0.7.4/debian/patches/series
--- pynn-0.7.2/debian/patches/series	2012-03-16 10:04:25.000000000 -0400
+++ pynn-0.7.4/debian/patches/series	1969-12-31 19:00:00.000000000 -0500
@@ -1 +0,0 @@
-deb_disable_test_with_numerical_instability
diff -Nru -w pynn-0.7.2/doc/installation.txt pynn-0.7.4/doc/installation.txt
--- pynn-0.7.2/doc/installation.txt	2011-10-03 08:57:19.000000000 -0400
+++ pynn-0.7.4/doc/installation.txt	2012-07-03 10:37:15.000000000 -0400
@@ -9,8 +9,8 @@
 
 The easiest way to get PyNN is to download the latest source distribution from the `PyNN download page`_, then run the setup script, e.g.::
 
-    $ tar xzf PyNN-0.7.0.tar.gz
-    $ cd PyNN-0.7.0
+    $ tar xzf PyNN-0.7.4.tar.gz
+    $ cd PyNN-0.7.4
     $ python setup.py install
     
 This will install it to your python ``site-packages`` directory, and may require root privileges. If you wish to install it elsewhere use the ``--prefix`` or ``--home`` option, and set the ``PYTHONPATH`` environment variable accordingly (see above). We assume you have already installed the simulator(s) you wish to use it with. If this is not the case, see below for installation instructions.
diff -Nru -w pynn-0.7.2/examples/VAbenchmarks2-csa.py pynn-0.7.4/examples/VAbenchmarks2-csa.py
--- pynn-0.7.2/examples/VAbenchmarks2-csa.py	2011-10-03 08:57:31.000000000 -0400
+++ pynn-0.7.4/examples/VAbenchmarks2-csa.py	2012-04-02 09:14:18.000000000 -0400
@@ -125,12 +125,12 @@
 exc_cells = all_cells[:n_exc]
 inh_cells = all_cells[n_exc:]
 if benchmark == "COBA":
-    ext_stim = Population(20, SpikeSourcePoisson,{'rate' : rate, 'duration' : stim_dur},"expoisson")
+    ext_stim = Population(20, SpikeSourcePoisson, {'rate' : rate, 'duration' : stim_dur}, label="expoisson")
     rconn = 0.01
     ext_conn = FixedProbabilityConnector(rconn, weights=0.1)
 
 print "%s Initialising membrane potential to random values..." % node_id
-rng = NumpyRNG(seed=rngseed, parallel_safe=parallel_safe, rank=node_id, num_processes=np)
+rng = NumpyRNG(seed=rngseed, parallel_safe=parallel_safe)
 uniformDistr = RandomDistribution('uniform', [v_reset,v_thresh], rng=rng)
 all_cells.initialize('v', uniformDistr)
 
diff -Nru -w pynn-0.7.2/PKG-INFO pynn-0.7.4/PKG-INFO
--- pynn-0.7.2/PKG-INFO	2011-10-03 09:16:24.000000000 -0400
+++ pynn-0.7.4/PKG-INFO	2012-07-03 11:06:49.000000000 -0400
@@ -1,12 +1,12 @@
 Metadata-Version: 1.0
 Name: PyNN
-Version: 0.7.2
+Version: 0.7.4
 Summary: A Python package for simulator-independent specification of neuronal network models
 Home-page: http://neuralensemble.org/PyNN/
 Author: The PyNN team
 Author-email: pynn@neuralensemble.org
 License: CeCILL http://www.cecill.info
-Description: In other words, you can write the code for a model once, using the PyNN API and the Python_ programming language, and then run it without modification on any simulator that PyNN supports (currently NEURON, NEST, PCSIM and Brian).
+Description: In other words, you can write the code for a model once, using the PyNN API and the Python programming language, and then run it without modification on any simulator that PyNN supports (currently NEURON, NEST, PCSIM and Brian).
         
         The API has two parts, a low-level, procedural API (functions ``create()``, ``connect()``, ``set()``, ``record()``, ``record_v()``), and a high-level, object-oriented API (classes ``Population`` and ``Projection``, which have methods like ``set()``, ``record()``, ``setWeights()``, etc.).
         
diff -Nru -w pynn-0.7.2/setup.py pynn-0.7.4/setup.py
--- pynn-0.7.2/setup.py	2011-10-03 09:13:22.000000000 -0400
+++ pynn-0.7.4/setup.py	2012-07-03 11:06:03.000000000 -0400
@@ -40,7 +40,7 @@
       
 setup(
     name = "PyNN",
-    version = "0.7.2",
+    version = "0.7.4",
     package_dir={'pyNN': 'src'},
     packages = ['pyNN','pyNN.nest', 'pyNN.pcsim', 'pyNN.neuron', 'pyNN.brian',
                 'pyNN.recording', 'pyNN.standardmodels', 'pyNN.descriptions',
@@ -50,7 +50,7 @@
     author = "The PyNN team",
     author_email = "pynn@neuralensemble.org",
     description = "A Python package for simulator-independent specification of neuronal network models",
-        long_description = """In other words, you can write the code for a model once, using the PyNN API and the Python_ programming language, and then run it without modification on any simulator that PyNN supports (currently NEURON, NEST, PCSIM and Brian).
+        long_description = """In other words, you can write the code for a model once, using the PyNN API and the Python programming language, and then run it without modification on any simulator that PyNN supports (currently NEURON, NEST, PCSIM and Brian).
 
 The API has two parts, a low-level, procedural API (functions ``create()``, ``connect()``, ``set()``, ``record()``, ``record_v()``), and a high-level, object-oriented API (classes ``Population`` and ``Projection``, which have methods like ``set()``, ``record()``, ``setWeights()``, etc.). 
 
@@ -74,4 +74,3 @@
                    'Topic :: Scientific/Engineering'],
     cmdclass = {'build': build},
 )
-
diff -Nru -w pynn-0.7.2/src/brian/electrodes.py pynn-0.7.4/src/brian/electrodes.py
--- pynn-0.7.2/src/brian/electrodes.py	2011-10-03 08:57:20.000000000 -0400
+++ pynn-0.7.4/src/brian/electrodes.py	2012-04-06 08:16:01.000000000 -0400
@@ -8,7 +8,7 @@
 :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS.
 :license: CeCILL, see LICENSE for details.
 
-$Id: electrodes.py 957 2011-05-03 13:44:15Z apdavison $
+$Id: electrodes.py 1117 2012-04-02 16:06:20Z apdavison $
 """
 
 from brian import ms, nA, network_operation
@@ -88,7 +88,9 @@
             stop      -- end of pulse in ms
             amplitude -- pulse amplitude in nA
         """
+        self.start = start
+        self.stop = stop
+        self.amplitude = amplitude
         times = [0.0, start, (stop or 1e99)]
         amplitudes = [0.0, amplitude, 0.0]
         StepCurrentSource.__init__(self, times, amplitudes)
-        
diff -Nru -w pynn-0.7.2/src/brian/__init__.py pynn-0.7.4/src/brian/__init__.py
--- pynn-0.7.2/src/brian/__init__.py	2011-10-03 08:57:20.000000000 -0400
+++ pynn-0.7.4/src/brian/__init__.py	2012-04-06 10:33:58.000000000 -0400
@@ -5,7 +5,7 @@
 :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS.
 :license: CeCILL, see LICENSE for details.
 
-$Id: __init__.py 957 2011-05-03 13:44:15Z apdavison $
+$Id: __init__.py 1122 2012-04-06 14:33:59Z apdavison $
 """
 
 import logging
@@ -49,28 +49,30 @@
     simulator but not by others.
     """
     common.setup(timestep, min_delay, max_delay, **extra_params)
+    _cleanup()
     brian.set_global_preferences(**extra_params)
     simulator.state = simulator._State(timestep, min_delay, max_delay)    
     simulator.state.add(update_currents) # from electrodes
-    ## We need to reset the clock of the update_currents function, for the electrodes
-    simulator.state.network._all_operations[0].clock = brian.Clock(t=0*ms, dt=timestep*ms)
-    simulator.state.min_delay = min_delay
-    simulator.state.max_delay = max_delay
-    simulator.state.dt        = timestep
+    update_currents.clock = simulator.state.simclock
     recording.simulator = simulator
     reset()
     return rank()
 
-def end(compatible_output=True):
-    """Do any necessary cleaning up before exiting."""
-    for recorder in simulator.recorder_list:
-        recorder.write(gather=True, compatible_output=compatible_output)
+def _cleanup():
     simulator.recorder_list = []
     electrodes.current_sources = []
+    if hasattr(simulator, 'state'):
+        if hasattr(simulator.state, 'network'):
     for item in simulator.state.network.groups + simulator.state.network._all_operations:
         del item    
     del simulator.state
 
+def end(compatible_output=True):
+    """Do any necessary cleaning up before exiting."""
+    for recorder in simulator.recorder_list:
+        recorder.write(gather=True, compatible_output=compatible_output)
+    _cleanup()
+
 def get_current_time():
     """Return the current time in the simulation."""
     return simulator.state.t
diff -Nru -w pynn-0.7.2/src/brian/recording.py pynn-0.7.4/src/brian/recording.py
--- pynn-0.7.2/src/brian/recording.py	2011-10-03 08:57:20.000000000 -0400
+++ pynn-0.7.4/src/brian/recording.py	2012-04-06 05:21:04.000000000 -0400
@@ -28,20 +28,25 @@
     
     def _create_devices(self, group):
         """Create a Brian recording device."""
+        # By default, StateMonitor has when='end', i.e. the value recorded at
+        # the end of the timestep is associated with the time at the start of the step,
+        # This is different to the PyNN semantics (i.e. the value at the end of
+        # the step is associated with the time at the end of the step.)
+
         clock = simulator.state.simclock
         if self.variable == 'spikes':
             devices = [brian.SpikeMonitor(group, record=True)]
         elif self.variable == 'v':
-            devices = [brian.StateMonitor(group, 'v', record=True, clock=clock)]
+            devices = [brian.StateMonitor(group, 'v', record=True, clock=clock, when='start')]
         elif self.variable == 'gsyn':
             example_cell = list(self.recorded)[0]
             varname = example_cell.celltype.synapses['excitatory']
-            device1 = brian.StateMonitor(group, varname, record=True, clock=clock)
+            device1 = brian.StateMonitor(group, varname, record=True, clock=clock, when='start')
             varname = example_cell.celltype.synapses['inhibitory']
-            device2 = brian.StateMonitor(group, varname, record=True, clock=clock)
+            device2 = brian.StateMonitor(group, varname, record=True, clock=clock, when='start')
             devices = [device1, device2]
         else:
-            devices = [brian.StateMonitor(group, self.variable, record=True, clock=clock)]
+            devices = [brian.StateMonitor(group, self.variable, record=True, clock=clock, when='start')]
         for device in devices:
             simulator.state.add(device)
         return devices
@@ -69,6 +74,17 @@
         cells        = list(filtered_ids) 
         padding      = cells[0].parent.first_id        
         filtered_ids = numpy.array(cells) - padding
+        def get_all_values(device, units):
+            # because we use `when='start'`, need to add the value at the end of the final time step.
+            values = numpy.array(device._values)/units
+            current_values = device.P.state_(device.varname)[device.record]/units
+            return numpy.vstack((values, current_values[numpy.newaxis, :]))
+        def get_times():
+            n = self._devices[0].times.size + 1
+            times  = numpy.empty((n,))
+            times[:n-1] = self._devices[0].times/ms
+            times[-1]  = simulator.state.t
+            return times
         if self.variable == 'spikes':
             data    = numpy.empty((0,2))
             for id in filtered_ids:
@@ -76,8 +92,9 @@
                 new_data = numpy.array([numpy.ones(times.shape)*id + padding, times]).T
                 data     = numpy.concatenate((data, new_data))
         elif self.variable == 'v':
-            values = numpy.array(self._devices[0]._values)/mV
-            times  = self._devices[0].times/ms
+            values = get_all_values(self._devices[0], mV)
+            n = values.shape[0]
+            times = get_times()
             data   = numpy.empty((0,3))
             for id, row in zip(self.recorded, values.T):
                 new_data = numpy.array([numpy.ones(row.shape)*id, times, row]).T
@@ -86,9 +103,9 @@
                 mask = reduce(numpy.add, (data[:,0]==id for id in filtered_ids + padding))
                 data = data[mask]
         elif self.variable == 'gsyn':
-            values1 = numpy.array(self._devices[0]._values)/uS
-            values2 = numpy.array(self._devices[1]._values)/uS
-            times   = self._devices[0].times/ms
+            values1 = get_all_values(self._devices[0], uS)
+            values2 = get_all_values(self._devices[1], uS)
+            times = get_times()
             data    = numpy.empty((0,4))
             for id, row1, row2 in zip(self.recorded, values1.T, values2.T):
                 assert row1.shape == row2.shape
@@ -98,8 +115,8 @@
                 mask = reduce(numpy.add, (data[:,0]==id for id in filtered_ids + padding))
                 data = data[mask]
         else:
-            values = numpy.array(self._devices[0]._values)/mV
-            times  = self._devices[0].times/ms
+            values = get_all_values(self._devices[0], mV)
+            times = get_times()
             data   = numpy.empty((0,3))
             for id, row in zip(self.recorded, values.T):
                 new_data = numpy.array([numpy.ones(row.shape)*id, times, row]).T
diff -Nru -w pynn-0.7.2/src/brian/simulator.py pynn-0.7.4/src/brian/simulator.py
--- pynn-0.7.2/src/brian/simulator.py	2011-10-03 08:57:20.000000000 -0400
+++ pynn-0.7.4/src/brian/simulator.py	2012-04-06 09:58:01.000000000 -0400
@@ -27,7 +27,7 @@
 :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS.
 :license: CeCILL, see LICENSE for details.
 
-$Id: simulator.py 957 2011-05-03 13:44:15Z apdavison $
+$Id: simulator.py 1121 2012-04-06 13:58:02Z apdavison $
 """
 
 import logging
@@ -193,7 +193,7 @@
     def __init__(self, timestep, min_delay, max_delay):
         """Initialize the simulator."""
         self.network       = brian.Network()
-        self._set_dt(timestep)
+        self.network.clock = brian.Clock(t=0*ms, dt=timestep*ms)
         self.initialized   = True
         self.num_processes = 1
         self.mpi_rank      = 0
@@ -202,13 +202,10 @@
         self.gid           = 0
         
     def _get_dt(self):
-        if self.network.clock is None:
-            raise Exception("Simulation timestep not yet set. Need to call setup()")
-        return self.network.clock.dt/ms
+        return self.simclock.dt/ms
         
     def _set_dt(self, timestep):
-        if self.network.clock is None or timestep != self._get_dt():
-            self.network.clock = brian.Clock(dt=timestep*ms)
+        self.simclock.set_dt(timestep*ms)
     dt = property(fget=_get_dt, fset=_set_dt)
 
     @property
diff -Nru -w pynn-0.7.2/src/connectors.py pynn-0.7.4/src/connectors.py
--- pynn-0.7.2/src/connectors.py	2011-10-03 08:57:31.000000000 -0400
+++ pynn-0.7.4/src/connectors.py	2012-04-02 09:14:17.000000000 -0400
@@ -894,7 +894,7 @@
             min_delay = common.get_min_delay()
             Connector.__init__(self, None, None, safe=safe, verbose=verbose)
             self.cset = cset
-            if cset.arity == 0:
+            if csa.arity(cset) == 0:
                 #assert weights is not None and delays is not None, \
                 #       'must specify weights and delays in addition to a CSA mask'
                 self.weights = weights
@@ -904,7 +904,7 @@
                 if delays is None:
                     self.delays = common.get_min_delay()
             else:
-                assert cset.arity == 2, 'must specify mask or connection-set with arity 2'
+                assert csa.arity(cset) == 2, 'must specify mask or connection-set with arity 2'
                 assert weights is None and delays is None, \
                        "weights or delays specified both in connection-set and as CSAConnector argument"
     else:
@@ -922,23 +922,18 @@
 
     def connect(self, projection):
         """Connect-up a Projection."""
-        i0 = projection.pre.first_id
-        size1 = projection.pre.last_id - i0
-        j0 = projection.post.first_id
-        targets = [j - j0 for j in projection.post]
-
-        # Cut out finite part and shift to global ids
-        c = csa.shift (i0, j0) * csa.cross ((0, size1), targets) * self.cset
+        # Cut out finite part
+        c = csa.cross((0, projection.pre.size-1), (0, projection.post.size-1)) * self.cset
         
         if csa.arity (self.cset) == 2:
             # Connection-set with arity 2
             for (i, j, weight, delay) in c:
-                projection.connection_manager.connect (i, [j], weight, delay)
+                projection.connection_manager.connect (projection.pre[i], [projection.post[j]], weight, delay)
         elif CSAConnector.isConstant (self.weights) \
              and CSAConnector.isConstant (self.delays):
             # Mask with constant weights and delays
             for (i, j) in c:
-                projection.connection_manager.connect (i, [j], self.weights, self.delays)
+                projection.connection_manager.connect (projection.pre[i], [projection.post[j]], self.weights, self.delays)
         else:
             # Mask with weights and/or delays iterable
             weights = self.weights
@@ -948,4 +943,4 @@
             if CSAConnector.isConstant (delays):
                 delays = CSAConnector.constantIterator (delays)
             for (i, j), weight, delay in zip (c, weights, delays):
-                projection.connection_manager.connect (i, [j], weight, delay)
+                projection.connection_manager.connect (projection.pre[i], [projection.post[j]], weight, delay)
diff -Nru -w pynn-0.7.2/src/__init__.py pynn-0.7.4/src/__init__.py
--- pynn-0.7.2/src/__init__.py	2011-10-03 09:14:31.000000000 -0400
+++ pynn-0.7.4/src/__init__.py	2012-07-03 11:06:03.000000000 -0400
@@ -65,9 +65,9 @@
     utility
     random
 
-:copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS.
+:copyright: Copyright 2006-2012 by the PyNN team, see AUTHORS.
 :license: CeCILL, see LICENSE for details.
 """
 
-__version__ = '0.7.2 ( $Rev: 995 $)'.replace(' $','')
+__version__ = '0.7.4'
 __all__ = ["common", "random", "nest", "neuron", "pcsim", "brian", "recording", "errors", "space", "descriptions", "standardmodels"]
diff -Nru -w pynn-0.7.2/src/nest/electrodes.py pynn-0.7.4/src/nest/electrodes.py
--- pynn-0.7.2/src/nest/electrodes.py	2011-10-03 08:57:30.000000000 -0400
+++ pynn-0.7.4/src/nest/electrodes.py	2012-04-06 09:58:01.000000000 -0400
@@ -10,7 +10,7 @@
 :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS.
 :license: CeCILL, see LICENSE for details.
 
-$Id: electrodes.py 957 2011-05-03 13:44:15Z apdavison $
+$Id: electrodes.py 1121 2012-04-06 13:58:02Z apdavison $
 """
 
 import nest
@@ -24,6 +24,10 @@
 class CurrentSource(object):
     """Base class for a source of current to be injected into a neuron."""
     
+    def delay_correction(self, value):
+        # use dt or min_delay?
+        return value - state.min_delay
+
     def inject_into(self, cell_list):
         """Inject this current source into some cells."""
         for id in cell_list:
@@ -31,7 +35,7 @@
                 raise TypeError("Can't inject current into a spike source.")
         if isinstance(cell_list, (Population, PopulationView, Assembly)):
             cell_list = [cell for cell in cell_list]
-        nest.DivergentConnect(self._device, cell_list)
+        nest.DivergentConnect(self._device, cell_list, delay=state.min_delay, weight=1.0)
     
 
 class DCSource(CurrentSource):
@@ -45,12 +49,14 @@
             stop      -- end of pulse in ms
             amplitude -- pulse amplitude in nA
         """
+        self.start = start
+        self.stop = stop
         self.amplitude = amplitude
         self._device = nest.Create('dc_generator')
         nest.SetStatus(self._device, {'amplitude': 1000.0*self.amplitude,
-                                      'start': float(start)}) # conversion from nA to pA
+                                      'start': self.delay_correction(start)}) # conversion from nA to pA
         if stop:
-            nest.SetStatus(self._device, {'stop': float(stop)})
+            nest.SetStatus(self._device, {'stop': self.delay_correction(stop)})
 
 
 class ACSource(CurrentSource):
@@ -77,7 +83,7 @@
                                       'offset'   : 1000.0*self.offset,
                                       'frequency': float(self.frequency),
                                       'phase'    : float(self.phase),
-                                      'start'    : float(start)}) # conversion from nA to pA
+                                      'start'    : self.delay_correction(start)}) # conversion from nA to pA
         if stop:
             nest.SetStatus(self._device, {'stop': float(stop)})
 
@@ -108,7 +114,7 @@
             amplitudes.append(amplitudes[-1])  # bug in NEST
         except AttributeError:
             numpy.append(amplitudes, amplitudes[-1])
-        nest.SetStatus(self._device, {'amplitude_times': numpy.array(times, 'float'),
+        nest.SetStatus(self._device, {'amplitude_times': self.delay_correction(numpy.array(times, 'float')),
                                       'amplitude_values': 1000.0*numpy.array(amplitudes, 'float')})
         
         
@@ -157,9 +163,9 @@
             self._device = nest.Create('noise_generator')
             nest.SetStatus(self._device, {'mean': mean*1000.0,
                                            'std': stdev*1000.0,
-                                           'start': float(start),
+                                           'start': self.delay_correction(start),
                                            'dt': self.dt})
             if stop:
-                nest.SetStatus(self._device, {'stop': float(stop)})
+                nest.SetStatus(self._device, {'stop': self.delay_correction(stop)})
         else:
             raise NotImplementedError("Only using a NativeRNG is currently supported.")
diff -Nru -w pynn-0.7.2/src/neuroml2.py pynn-0.7.4/src/neuroml2.py
--- pynn-0.7.2/src/neuroml2.py	1969-12-31 19:00:00.000000000 -0500
+++ pynn-0.7.4/src/neuroml2.py	2012-06-14 07:49:38.000000000 -0400
@@ -0,0 +1,978 @@
+"""
+PyNN-->NeuroML v2
+
+:copyright: Copyright 2006-2012 by the PyNN team, see AUTHORS.
+:license: CeCILL, see LICENSE for details.
+
+This file is based on neuroml.py written by Andrew Davison & has been updated for
+NeuroML v2.0 by Padraig Gleeson
+
+"""
+
+'''
+
+For an overview of PyNN & NeuroML interoperability see http://www.neuroml.org/pynn.php
+
+This script is intended to map models sprcified in PyNN on to the equivalent representation in
+NeuroML v2.0. A valid NML2 file will be produced containing the cells, populations,
+etc. and a LEMS file will be created which imports this file and can run a simple
+simulation using the LEMS interpreter, see http://www.neuroml.org/neuroml2.php#libNeuroML
+
+Ideally... this will produce equivalent simulation results when a script is run using:
+
+    python myPyNN.py nest
+    python myPyNN.py neuron
+    python myPyNN.py neuroml2   (followed by nml2 LEMS_PyNN2NeuroMLv2.xml)
+
+        WORK IN PROGRESS! REQUIRES PyNN at tags/0.7.2/
+
+To test this out get the 0.7 PyNN branch from SVN using:
+
+    svn co https://neuralensemble.org/svn/PyNN/branch/0.7 pyNN
+    cd pyNN
+    sudo python setup.py install
+
+Contact p.gleeson@ucl.ac.uk for more details 
+
+Features below depend on using the latest LEMS/libNeuroML code which includes the
+nml2 utility and the LEMS definitions of PyNN core models (IF_curr_alpha,
+SpikeSourcePoisson, etc.) in PyNN.xml. Get it from
+http://sourceforge.net/apps/trac/neuroml/browser/NeuroML2/
+
+
+Currently supported features:
+    Generation of valid NeuroML 2 file containing cells & populations & connections
+    Export of simulation duration & dt & recorded populations in a LEMS file for
+       running a basic simulation with simple num integration method (so use small dt!)
+    Cell models impl: IF_curr_alpha, IF_curr_exp, IF_cond_exp, IF_cond_alpha, HH_cond_exp, EIF_cond_exp_isfa_ista, EIF_cond_alpha_isfa_ista
+    Others: SpikeSourcePoisson, SpikeSourceArray
+    Export of explicitly created Populations, export of populations created with create()
+    Export of (instance based) list of conenctions in explicit <connection from=... to=...>
+    Support for weight & delay on connections
+
+Missing/required:
+    Other models todo: DCSource, StepCurrentSource, ACSource, NoisyCurrentSource
+    Need to test >1 cells in a population
+    Setting of initial values in Populations
+    Support for populations some of whose cells have has their parameters modified
+    Synapse dynamics (e.g. STDP) not yet implemented
+
+
+Desirable TODO:
+    Generation of SED-ML file with simulation description
+    Automated tests of equivalence between Neuron & Nest & generated LEMS
+
+'''
+
+from pyNN import common, connectors, standardmodels, core
+from pyNN.standardmodels import cells
+
+import numpy
+import sys
+
+sys.path.append('/usr/lib/python%s/site-packages/oldxml' % sys.version[:3]) # needed for Ubuntu
+import xml.dom.minidom
+
+import logging
+logger = logging.getLogger("neuroml2")
+
+neuroml_ns = 'http://www.neuroml.org/schema/neuroml2'
+
+namespace_xsi = "http://www.w3.org/2001/XMLSchema-instance";
+
+neuroml_ver="v2alpha"
+neuroml_xsd="http://neuroml.svn.sourceforge.net/viewvc/neuroml/NeuroML2/Schemas/NeuroML2/NeuroML_"+neuroml_ver+".xsd";
+
+simulation_prefix = 'simulation_'
+network_prefix = 'network_'
+display_prefix = 'display_'
+line_prefix = 'line_'
+colours = ['#000000','#FF0000','#0000FF','#009b00','#ffc800','#8c6400','#ff00ff','#ffff00','#808080']
+
+strict = False
+
+# ==============================================================================
+#   Utility classes
+# ==============================================================================
+
+class ID(int, common.IDMixin):
+    """
+    Instead of storing ids as integers, we store them as ID objects,
+    which allows a syntax like:
+        p[3,4].tau_m = 20.0
+    where p is a Population object. The question is, how big a memory/performance
+    hit is it to replace integers with ID objects?
+    """
+    
+    def __init__(self, n):
+        common.IDMixin.__init__(self)
+
+
+    def get_native_parameters(self):
+        """Return a dictionary of parameters for the NeuroML2 cell model."""
+     
+        return self._cell
+
+    def set_native_parameters(self, parameters):
+        """Set parameters of the NeuroML2 cell model from a dictionary.
+        for name, val in parameters.items():
+            setattr(self._cell, name, val)"""
+        self._cell =    parameters.copy()
+
+# ==============================================================================
+#   Module-specific functions and classes (not part of the common API)
+# ==============================================================================
+
+def build_node(name_, text=None, **attributes):
+    # we call the node name 'name_' because 'name' is a common attribute name (confused? I am)
+
+    node = nml2doc.createElement(name_)
+    for attr, value in attributes.items():
+        node.setAttribute(attr, str(value))
+    if text:
+        node.appendChild(nml2doc.createTextNode(text))
+    return node
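+# Example (a sketch): build_node('population', id="pop0", size=10) returns a
+# minidom element that serialises to something like <population id="pop0" size="10"/>;
+# keyword arguments become XML attributes (stringified) and the optional `text`
+# argument becomes a text child node.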
+
+def build_parameter_node(name, value):
+    param_node = build_node('parameter', value=value)
+    if name:
+        param_node.setAttribute('name', name)
+    group_node = build_node('group', 'all')
+    param_node.appendChild(group_node)
+    return param_node
+
+
+class IF_base(object):
+    """Base class for integrate-and-fire neuron models."""        
+
+
+    def build_nodes(self):
+        cell_type = self.__class__.__name__
+        logger.debug("Building nodes for "+cell_type)
+
+        #cell_node = build_node('component', type=self.__class__.__name__, id=self.label)
+        cell_node = build_node(cell_type, id=self.label)
+        
+        for param in self.parameters.keys():
+            param_val = str(self.parameters[param])
+
+            # TODO why is this broken for a in EIF_cond_exp_isfa_ista????
+            if "EIF_cond_" in cell_type and param == "a":
+                param_val = float(param_val) / 1000.
+
+            logger.debug("Setting param %s to %s" % (param, param_val))
+
+            cell_node.setAttribute(param, str(param_val))
+
+        ## TODO remove!!
+        cell_node.setAttribute('v_init', '-65')
+            
+        doc_node = build_node('notes', "Component for PyNN %s cell type" % cell_type)
+        cell_node.appendChild(doc_node)
+
+        synapse_nodes = []
+        if 'cond_exp' in cell_type:
+            synapse_nodes_e = build_node("expCondSynapse", id="syn_e_"+self.label)
+            synapse_nodes_e.setAttribute("tau_syn",str(self.parameters["tau_syn_E"]))
+            synapse_nodes_e.setAttribute("e_rev",str(self.parameters["e_rev_E"]))
+            synapse_nodes.append(synapse_nodes_e)
+            synapse_nodes_i = build_node("expCondSynapse", id="syn_i_"+self.label)
+            synapse_nodes_i.setAttribute("tau_syn",str(self.parameters["tau_syn_I"]))
+            synapse_nodes_i.setAttribute("e_rev",str(self.parameters["e_rev_I"]))
+            synapse_nodes.append(synapse_nodes_i)
+        elif 'cond_alpha' in cell_type:
+            synapse_nodes_e = build_node("alphaCondSynapse", id="syn_e_"+self.label)
+            synapse_nodes_e.setAttribute("tau_syn",str(self.parameters["tau_syn_E"]))
+            synapse_nodes_e.setAttribute("e_rev",str(self.parameters["e_rev_E"]))
+            synapse_nodes.append(synapse_nodes_e)
+            synapse_nodes_i = build_node("alphaCondSynapse", id="syn_i_"+self.label)
+            synapse_nodes_i.setAttribute("tau_syn",str(self.parameters["tau_syn_I"]))
+            synapse_nodes_i.setAttribute("e_rev",str(self.parameters["e_rev_I"]))
+            synapse_nodes.append(synapse_nodes_i)
+        elif 'curr_exp' in cell_type:
+            synapse_nodes_e = build_node("expCurrSynapse", id="syn_e_"+self.label)
+            synapse_nodes_e.setAttribute("tau_syn",str(self.parameters["tau_syn_E"]))
+            synapse_nodes.append(synapse_nodes_e)
+            synapse_nodes_i = build_node("expCurrSynapse", id="syn_i_"+self.label)
+            synapse_nodes_i.setAttribute("tau_syn",str(self.parameters["tau_syn_I"]))
+            synapse_nodes.append(synapse_nodes_i)
+        elif 'curr_alpha' in cell_type:
+            synapse_nodes_e = build_node("alphaCurrSynapse", id="syn_e_"+self.label)
+            synapse_nodes_e.setAttribute("tau_syn",str(self.parameters["tau_syn_E"]))
+            synapse_nodes.append(synapse_nodes_e)
+            synapse_nodes_i = build_node("alphaCurrSynapse", id="syn_i_"+self.label)
+            synapse_nodes_i.setAttribute("tau_syn",str(self.parameters["tau_syn_I"]))
+            synapse_nodes.append(synapse_nodes_i)
+
+        
+        return cell_node, synapse_nodes
+
+
+class NotImplementedModel(object):
+    
+    def __init__(self):
+        if strict:
+            raise Exception('Cell type %s is not available in NeuroML' % self.__class__.__name__)
+    
+    def build_nodes(self):
+        cell_node = build_node(':not_implemented_cell', id=self.label)
+        doc_node = build_node('notes', "PyNN %s cell type not implemented" % self.__class__.__name__)
+        cell_node.appendChild(doc_node)
+        return cell_node, []
+        
+
+# ==============================================================================
+#   Standard cells
+# ==============================================================================
+
+class IF_curr_exp(cells.IF_curr_exp, IF_base):
+    """Leaky integrate and fire model with fixed threshold and
+    decaying-exponential post-synaptic current. (Separate synaptic currents for
+    excitatory and inhibitory synapses"""
+    
+    n = 0
+    translations = standardmodels.build_translations(*[(name, name)
+                                               for name in cells.IF_curr_exp.default_parameters])
+    
+    def __init__(self, parameters):
+        cells.IF_curr_exp.__init__(self, parameters)
+        self.label = '%s%d' % (self.__class__.__name__, self.__class__.n)
+        self.synapse_type = "doub_exp_syn"
+        self.__class__.n += 1
+        logger.debug("IF_curr_exp created")
+
+
+class IF_curr_alpha(cells.IF_curr_alpha, IF_base):
+    """Leaky integrate and fire model with fixed threshold and alpha-function-
+    shaped post-synaptic current."""
+    
+    n = 0
+    translations = standardmodels.build_translations(*[(name, name)
+                                               for name in cells.IF_curr_alpha.default_parameters])
+    
+    def __init__(self, parameters):
+        cells.IF_curr_alpha.__init__(self, parameters)
+        self.label = '%s%d' % (self.__class__.__name__, self.__class__.n)
+        self.synapse_type = "doub_exp_syn"
+        self.__class__.n += 1
+        logger.debug("IF_curr_alpha created")
+
+
+class IF_cond_exp(cells.IF_cond_exp, IF_base):
+    """Leaky integrate and fire model with fixed threshold and 
+    decaying-exponential post-synaptic conductance."""
+    
+    n = 0
+    translations = standardmodels.build_translations(*[(name, name)
+                                               for name in cells.IF_cond_exp.default_parameters])
+    
+    def __init__(self, parameters):
+        cells.IF_cond_exp.__init__(self, parameters)
+        self.label = '%s%d' % (self.__class__.__name__, self.__class__.n)
+        self.synapse_type = "doub_exp_syn"
+        self.__class__.n += 1
+        logger.debug("IF_cond_exp created")
+
+
+class IF_cond_alpha(cells.IF_cond_alpha, IF_base):
+    """Leaky integrate and fire model with fixed threshold and alpha-function-
+    shaped post-synaptic conductance."""
+
+    n = 0
+    translations = standardmodels.build_translations(*[(name, name)
+                                               for name in cells.IF_cond_alpha.default_parameters])
+
+    def __init__(self, parameters):
+        cells.IF_cond_alpha.__init__(self, parameters)
+        self.label = '%s%d' % (self.__class__.__name__, self.__class__.n)
+        self.synapse_type = "alpha_syn"
+        self.__class__.n += 1
+        logger.debug("IF_cond_alpha created")
+
+
+class EIF_cond_exp_isfa_ista(cells.EIF_cond_exp_isfa_ista, IF_base):
+    """Exponential integrate and fire neuron with spike-triggered and sub-threshold
+    adaptation currents (isfa, ista resp.) according to:
+    Brette R and Gerstner W (2005) Adaptive Exponential Integrate-and-Fire Model as
+    an Effective Description of Neuronal Activity. J Neurophysiol 94:3637-3642."""
+
+    n = 0
+    translations = standardmodels.build_translations(*[(name, name)
+                                               for name in cells.EIF_cond_exp_isfa_ista.default_parameters])
+
+    def __init__(self, parameters):
+        cells.EIF_cond_exp_isfa_ista.__init__(self, parameters)
+        self.label = '%s%d' % (self.__class__.__name__, self.__class__.n)
+        self.synapse_type = "exp_syn"
+        self.__class__.n += 1
+        logger.debug("EIF_cond_exp_isfa_ista created")
+
+
+class EIF_cond_alpha_isfa_ista(cells.EIF_cond_alpha_isfa_ista, IF_base):
+    """Exponential integrate and fire neuron with spike-triggered and sub-threshold
+    adaptation currents (isfa, ista resp.) according to:
+    Brette R and Gerstner W (2005) Adaptive Exponential Integrate-and-Fire Model as
+    an Effective Description of Neuronal Activity. J Neurophysiol 94:3637-3642."""
+
+    n = 0
+    translations = standardmodels.build_translations(*[(name, name)
+                                               for name in cells.EIF_cond_alpha_isfa_ista.default_parameters])
+
+    def __init__(self, parameters):
+        cells.EIF_cond_alpha_isfa_ista.__init__(self, parameters)
+        self.label = '%s%d' % (self.__class__.__name__, self.__class__.n)
+        self.synapse_type = "alpha_syn"
+        self.__class__.n += 1
+        logger.debug("EIF_cond_alpha_isfa_ista created")
+
+
+class HH_cond_exp(cells.HH_cond_exp, IF_base):
+    """ Single-compartment Hodgkin-Huxley model."""
+
+    n = 0
+    translations = standardmodels.build_translations(*[(name, name)
+                                               for name in cells.HH_cond_exp.default_parameters])
+
+    def __init__(self, parameters):
+        cells.HH_cond_exp.__init__(self, parameters)
+        self.label = '%s%d' % (self.__class__.__name__, self.__class__.n)
+        self.synapse_type = "exp_syn"
+        self.__class__.n += 1
+        logger.debug("HH_cond_exp created")
+
+
+class GenericModel(object):
+
+    units_to_use = {}
+
+    def build_nodes(self):
+        logger.debug("Building nodes for "+self.__class__.__name__)
+
+        model_node = build_node(self.__class__.__name__, id=self.label)
+
+        for param in self.parameters.keys():
+            units = ''
+            if param in self.units_to_use.keys():
+                units = self.units_to_use[param]
+            model_node.setAttribute(param, str(self.parameters[param])+units)
+
+
+        doc_node = build_node('notes', "Component for PyNN %s model type" % self.__class__.__name__)
+        model_node.appendChild(doc_node)
+
+        return model_node, []
+
+
+class SpikeSourcePoisson(cells.SpikeSourcePoisson, GenericModel):
+    """Spike source, generating spikes according to a Poisson process."""
+
+    n = 0
+    translations = standardmodels.build_translations(*[(name, name)
+                                               for name in cells.SpikeSourcePoisson.default_parameters])
+
+
+    def __init__(self, parameters):
+        cells.SpikeSourcePoisson.__init__(self, parameters)
+        self.label = '%s%d' % (self.__class__.__name__, self.__class__.n)
+        self.__class__.n += 1
+        self.units_to_use = {'start':'ms','duration':'ms','rate':'per_s'}
+        logger.debug("SpikeSourcePoisson created: "+self.label)
+        
+
+class SpikeSourceArray(cells.SpikeSourceArray, GenericModel):
+    """Spike source generating spikes at the times given in the spike_times array."""
+
+    n = 0
+    translations = standardmodels.build_translations(*[(name, name)
+                                               for name in cells.SpikeSourceArray.default_parameters])
+
+    def __init__(self, parameters):
+        cells.SpikeSourceArray.__init__(self, parameters)
+        self.label = '%s%d' % (self.__class__.__name__, self.__class__.n)
+        self.__class__.n += 1
+        logger.debug("SpikeSourceArray created: "+self.label)
+
+    def build_nodes(self):
+        logger.debug("Building nodes for "+self.__class__.__name__)
+
+        model_node = build_node('spikeArray', id=self.label)
+        #doc_node = build_node('notes', "Component for PyNN %s model type" % self.__class__.__name__)
+        #model_node.appendChild(doc_node)
+
+        for spike in self.parameters['spike_times']:
+            spike_node = build_node('spike', time="%fms"%spike)
+            model_node.appendChild(spike_node)
+
+        return model_node, []
+
+
+# ==============================================================================
+#   Functions for simulation set-up and control
+# ==============================================================================
+
+def setup(timestep=0.1, min_delay=0.1, max_delay=0.1, debug=False, **extra_params):
+    """
+    Should be called at the very beginning of a script.
+    extra_params contains any keyword arguments that are required by a given
+    simulator but not by others.
+    """
+    logger.debug("setup() called, extra_params = " + str(extra_params))
+    global nml2doc, nml2file, lemsdoc, lemsfile, lemsNode, nml_id, population_holder, projection_holder, input_holder, cell_holder, channel_holder, neuromlNode, strict, dt
+
+    population_holder = []
+    projection_holder = []
+    input_holder = []
+    cell_holder = []
+    
+    if 'file' in extra_params:
+        nml2file = extra_params['file']
+    else:
+        nml2file = "PyNN2NeuroMLv2.nml"
+
+    nml_id = nml2file.split('.')[0]
+
+    if isinstance(nml2file, basestring):
+        nml2file = open(nml2file, 'w')
+
+    if 'strict' in extra_params:
+        strict = extra_params['strict']
+    dt = timestep
+
+    nml2doc = xml.dom.minidom.Document()
+    neuromlNode = nml2doc.createElementNS(neuroml_ns,'neuroml')
+    neuromlNode.setAttribute("xmlns",neuroml_ns)
+
+    neuromlNode.setAttribute('xmlns:xsi',namespace_xsi)
+    neuromlNode.setAttribute('xsi:schemaLocation',neuroml_ns+" "+neuroml_xsd)
+    neuromlNode.setAttribute('id',nml_id)
+
+
+    nml2doc.appendChild(neuromlNode)
+    
+
+    lemsdoc = xml.dom.minidom.Document()
+    lemsNode = lemsdoc.createElement('Lems')
+    lemsdoc.appendChild(lemsNode)
+
+    drNode = build_node('DefaultRun',component=simulation_prefix+nml_id)
+    lemsNode.appendChild(drNode)
+    coreNml2Files = ["NeuroMLCoreDimensions.xml","PyNN.xml","Networks.xml","Simulation.xml"]
+    for f in coreNml2Files:
+        incNode = build_node('Include', file="NeuroML2CoreTypes/"+f)
+        lemsNode.appendChild(incNode)
+
+    incNode = build_node('Include', file=nml2file.name)
+    lemsNode.appendChild(incNode)
+
+    global simNode, displayNode
+    simNode = build_node('Simulation', id=simulation_prefix+nml_id, step=str(dt)+"ms", target=network_prefix+nml_id)
+    lemsNode.appendChild(simNode)
+    displayNode = build_node('Display',id="display_0",title="Recording of PyNN model run in LEMS", timeScale="1ms")
+    simNode.appendChild(displayNode)
+
+    lemsfile = "LEMS_"+nml_id+".xml"
+    if isinstance(lemsfile, basestring):
+        lemsfile = open(lemsfile, 'w')
+        
+    return 0
+        
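+# Typical session with this backend (a sketch; the module path pyNN.neuroml2 is
+# assumed, and the cell parameters are left at their defaults):
+#
+#     import pyNN.neuroml2 as sim
+#     sim.setup(timestep=0.1, file="example.nml")
+#     pop = sim.Population(2, sim.IF_cond_exp, {})
+#     sim.run(100.0)
+#     sim.end()   # writes example.nml and LEMS_example.xml
+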
+def end(compatible_output=True):
+    """Do any necessary cleaning up before exiting."""
+    global nml2doc, nml2file, neuromlNode, nml_id
+
+
+    for cellNode in cell_holder:
+        neuromlNode.appendChild(cellNode)
+
+  
+    network_node = build_node('network', id=network_prefix+nml_id)
+    neuromlNode.appendChild(network_node)
+
+    for holder in population_holder, projection_holder, input_holder:
+        for node in holder:
+            network_node.appendChild(node)
+
+    # Write the files
+    logger.debug("Writing NeuroML 2 structure to: "+nml2file.name)
+    nml2file.write(nml2doc.toprettyxml())
+    nml2file.close()
+
+    logger.debug("Writing LEMS file to: "+lemsfile.name)
+    lemsfile.write(lemsdoc.toprettyxml())
+    lemsfile.close()
+    print("\nThe file: " + lemsfile.name + " has been generated. This can be executed with the libNeuroML utility nml2 (which wraps the LEMS Interpreter), i.e.")
+    print("\n    nml2 " + lemsfile.name)
+    print("\nFor more details see: http://www.neuroml.org/neuroml2.php#libNeuroML\n")
+
+
+def run(simtime):
+    """Run the simulation for simtime ms."""
+    global simNode
+    simNode.setAttribute('length', str(simtime)+"ms")
+
+
+
+def get_min_delay():
+    return 0.0
+common.get_min_delay = get_min_delay
+
+def num_processes():
+    return 1
+common.num_processes = num_processes
+
+def rank():
+    return 0
+common.rank = rank
+
+
+# ==============================================================================
+#   High-level API for creating, connecting and recording from populations of
+#   neurons.
+# ==============================================================================
+    
+class Population(common.Population):
+    """
+    An array of neurons all of the same type. `Population' is used as a generic
+    term intended to include layers, columns, nuclei, etc., of cells.
+    """
+    
+    n = 0
+
+    def __init__(self, size, cellclass, cellparams=None, structure=None,
+                 label=None):
+        __doc__ = common.Population.__doc__
+        common.Population.__init__(self, size, cellclass, cellparams, structure, label)
+        ###simulator.initializer.register(self)
+
+    def _create_cells(self, cellclass, cellparams, n):
+        """
+        Create a population of neurons all of the same type.
+        
+
+        `cellclass`  -- a PyNN standard cell
+        `cellparams` -- a dictionary of cell parameters.
+        `n`          -- the number of cells to create
+        """
+        global population_holder, cell_holder, channel_holder
+
+        assert n > 0, 'n must be a positive integer'
+
+        self.celltype = cellclass(cellparams)
+        Population.n += 1
+
+        self.celltype.label = 'cell_%s' % (self.label)
+
+        population_node = build_node('population', id=self.label, component=self.celltype.label, size=self.size)
+
+        #celltype_node = build_node('cell_type', self.celltype.label)
+
+        instances_node = build_node('instances', size=self.size)
+        for i in range(self.size):
+            x, y, z = self.positions[:, i]
+            instance_node = build_node('instance', id=i)
+            instance_node.appendChild( build_node('location', x=x, y=y, z=z) )
+            instances_node.appendChild(instance_node)
+            
+        #population_node.appendChild(node)
+        
+        population_holder.append(population_node)
+
+        cell_node, synapse_nodes = self.celltype.build_nodes()
+        cell_holder.append(cell_node)
+        for syn_node in synapse_nodes:
+            cell_holder.append(syn_node)
+
+
+        # Add all channels first, then all synapses
+        '''
+        for channel_node in channel_list:
+            channel_holder_node.insertBefore(channel_node , channel_holder_node.firstChild)
+        for synapse_node in synapse_list:
+            channel_holder_node.appendChild(synapse_node)'''
+
+        self.first_id = 0
+        self.last_id = self.size-1
+        self.all_cells = numpy.array([ID(id) for id in range(self.first_id, self.last_id+1)], dtype=ID)
+        self._mask_local = numpy.ones_like(self.all_cells).astype(bool)
+        self.first_id = self.all_cells[0]
+        self.last_id = self.all_cells[-1]
+        for id in self.all_cells:
+            id.parent = self
+            id._cell = self.celltype.parameters.copy()
+        
+        #self.local_cells = self.all_cells
+
+
+    def _set_initial_value_array(self, variable, value):
+        logger.debug("Population %s having %s initialised to: %s"%(self.label, variable, value))
+
+        # TODO: use this in generated XML for component...
+        if variable == 'v':
+            self.celltype.parameters['v_init'] = value
+
+        
+    def _record(self, variable, record_from=None, rng=None, to_file=True):
+        """
+        Private method called by record() and record_v().
+        """
+        global simNode, displayNode, color
+        #displayNode = build_node('Display',id=display_prefix+self.label,title="Recording of "+variable+" in "+self.label, timeScale="1ms")
+        #simNode.appendChild(displayNode)
+
+        scale = "1"
+        #if variable == 'v': scale = "1mV"
+        colour = colours[displayNode.childNodes.length%len(colours)]
+        for i in range(self.size):
+            lineNode = build_node('Line',
+                                  id=line_prefix+self.label,
+                                  scale=scale,
+                                  color=colour,
+                                  quantity="%s[%i]/%s"%(self.label,i,variable),
+                                  save="%s_%i_%s_nml2.dat"%(self.label,i,variable))
+                                  
+            displayNode.appendChild(lineNode)
+    
+    def meanSpikeCount(self):
+        return -1
+    
+    def printSpikes(self, file, gather=True, compatible_output=True):
+        pass
+    
+    def print_v(self, file, gather=True, compatible_output=True):
+        pass
+'''
+class AllToAllConnector(connectors.AllToAllConnector):
+    
+    def connect(self, projection):
+        connectivity_node = build_node('connectivity_pattern')
+        connectivity_node.appendChild( build_node('all_to_all',
+                                                  allow_self_connections=int(self.allow_self_connections)) )
+        return connectivity_node
+
+class OneToOneConnector(connectors.OneToOneConnector):
+    
+    def connect(self, projection):
+        connectivity_node = build_node('connectivity_pattern')
+        connectivity_node.appendChild( build_node('one_to_one') )
+        return connectivity_node
+
+class FixedProbabilityConnector(connectors.FixedProbabilityConnector):
+    
+    def connect(self, projection):
+        connectivity_node = build_node('connectivity_pattern')
+        connectivity_node.appendChild( build_node('fixed_probability',
+                                                  probability=self.p_connect,
+                                                  allow_self_conections=int(self.allow_self_connections)) )
+        return connectivity_node
+'''
+FixedProbabilityConnector = connectors.FixedProbabilityConnector
+AllToAllConnector = connectors.AllToAllConnector
+OneToOneConnector = connectors.OneToOneConnector
+CSAConnector = connectors.CSAConnector
+
+class FixedNumberPreConnector(connectors.FixedNumberPreConnector):
+    
+    def connect(self, projection):
+        if hasattr(self, "n"):
+            connectivity_node = build_node('connectivity_pattern')
+            connectivity_node.appendChild( build_node('per_cell_connection',
+                                                      num_per_source=self.n,
+                                                      direction="PreToPost",
+                                                      allow_self_connections = int(self.allow_self_connections)) )
+            return connectivity_node
+        else:
+            raise Exception('Connection with variable connection number not implemented.')
+    
+class FixedNumberPostConnector(connectors.FixedNumberPostConnector):
+    
+    def connect(self, projection):
+        if hasattr(self, "n"):
+            connectivity_node = build_node('connectivity_pattern')
+            connectivity_node.appendChild( build_node('per_cell_connection',
+                                                      num_per_source=self.n,
+                                                      direction="PostToPre",
+                                                      allow_self_connections = int(self.allow_self_connections)) )
+            return connectivity_node
+        else:
+            raise Exception('Connection with variable connection number not implemented.')
+
+        
+class FromListConnector(connectors.FromListConnector):
+    
+    def connect(self, projection):
+        connections_node = build_node('connections')
+        for i in xrange(len(self.conn_list)):
+            src, tgt, weight, delay = self.conn_list[i][:]
+            src = self.pre[tuple(src)]
+            tgt = self.post[tuple(tgt)]
+            connection_node = build_node('connection', id=i)
+            connection_node.appendChild( build_node('pre', cell_id=src) )
+            connection_node.appendChild( build_node('post', cell_id=tgt) )
+            connection_node.appendChild( build_node('properties', internal_delay=delay, weight=weight) )
+            connections_node.appendChild(connection_node)
+        return connections_node
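+
+# Expected conn_list format (a sketch): each entry is
+# (source_address, target_address, weight, delay), where the addresses index
+# into the pre- and post-synaptic Populations, e.g.:
+#
+#     conn_list = [((0,), (0,), 0.1, 1.0), ((0,), (1,), 0.1, 2.0)]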
+
+
+class FromFileConnector(connectors.FromFileConnector):
+    
+    def connect(self, projection):
+        # read the file and gather all the data in a list of tuples (one per line)
+        f = open(self.filename, 'r', 10000)
+        lines = f.readlines()
+        f.close()
+
+        input_tuples = []
+        for line in lines:
+            single_line = line.rstrip()
+            src, tgt, w, d = single_line.split("\t", 4)
+            src = "[%s" % src.split("[", 1)[1]
+            tgt = "[%s" % tgt.split("[", 1)[1]
+            input_tuples.append((eval(src), eval(tgt), float(w), float(d)))
+        self.conn_list = input_tuples
+        FromListConnector.connect(self, projection)
+
+
+class Projection(common.Projection):
+    """
+    A container for all the connections of a given type (same synapse type and
+    plasticity mechanisms) between two populations, together with methods to set
+    parameters of those connections, including of plasticity mechanisms.
+    """
+    
+    n = 0
+    
+    def __init__(self, presynaptic_population, postsynaptic_population,
+                 method,
+                 source=None, target=None, synapse_dynamics=None,
+                 label=None, rng=None):
+
+        """
+        presynaptic_population and postsynaptic_population - Population objects.
+        
+        source - string specifying which attribute of the presynaptic cell signals action potentials
+        
+        target - string specifying which synapse on the postsynaptic cell to connect to
+        If source and/or target are not given, default values are used.
+        
+        method - a Connector object, encapsulating the algorithm to use for
+                 connecting the neurons.
+        
+        synapse_dynamics - a `SynapseDynamics` object specifying which
+        synaptic plasticity mechanisms to use.
+        
+        rng - specify an RNG object to be used by the Connector.
+        """
+        global projection_holder
+        common.Projection.__init__(self, presynaptic_population, postsynaptic_population,
+                                   method, source, target, synapse_dynamics, label, rng)
+        self.label = self.label or 'Projection%d' % Projection.n
+        connection_method = method
+        if target:
+            self.synapse_type = target
+        else:
+            self.synapse_type = "ExcitatorySynapse"
+
+        synapseComponent = "syn_"
+
+        if self.synapse_type == "ExcitatorySynapse" or self.synapse_type == "excitatory":
+            self.targetPort = "spike_in_E"
+            synapseComponent = synapseComponent + "e_"
+        elif self.synapse_type == "InhibitorySynapse" or self.synapse_type == "inhibitory":
+            self.targetPort = "spike_in_I"
+            synapseComponent = synapseComponent + "i_"
+        else:
+            self.targetPort = "spike_in"
+
+        synapseComponent = synapseComponent +"cell_"+postsynaptic_population.label
+
+        self.connection_manager = ConnectionManager(self.synapse_type,
+                                                              synapse_model=None,
+                                                              parent=self)
+        self.connections = self.connection_manager
+        ## Create connections
+        method.connect(self)
+
+        logger.debug("init in Projection, %s, pre: %s, post %s"%(self.label, presynaptic_population.label, postsynaptic_population.label))
+        
+        
+        #projection_node = build_node('projection', id=self.label)
+
+        for connection in self.connection_manager.connections:
+            connection_node = build_node('synapticConnectionWD',
+                                                    to='%s[%i]'%(postsynaptic_population.label,connection[1]),
+                                                    synapse=synapseComponent)
+
+            connection_node.setAttribute("from",'%s[%i]'%(presynaptic_population.label,connection[0]))
+            connection_node.setAttribute("weight",str(connection[3][0]))
+            connection_node.setAttribute("delay",str(connection[4][0])+"ms")
+
+            projection_holder.append(connection_node)
+
+        '''
+        projection_node.appendChild( build_node('source', self.pre.label) )
+        projection_node.appendChild( build_node('target', self.post.label) )
+        synapse_node = build_node('synapse_props')
+        synapse_node.appendChild( build_node('synapse_type', self.synapse_type) )
+        synapse_node.appendChild( build_node('default_values', internal_delay=5, weight=1, threshold=-20) )
+        projection_node.appendChild(synapse_node)
+        
+        projection_node.appendChild( connection_method.connect(self) )
+        '''
+        Projection.n += 1
+
+    def saveConnections(self, filename, gather=True, compatible_output=True):
+        pass
+    
+    def __len__(self):
+        return 0 # needs implementing properly
+
+
+
+class ConnectionManager(object):
+    """
+    Manage synaptic connections, providing methods for creating, listing,
+    accessing individual connections.
+
+    Based on ConnectionManager in moose/simulator.py
+
+    """
+
+    def __init__(self, synapse_type, synapse_model=None, parent=None):
+        """
+        Create a new ConnectionManager.
+
+        `parent` -- the parent `Projection`
+        """
+        assert parent is not None
+        self.connections = []
+        self.parent = parent
+        self.synapse_type = synapse_type
+        self.synapse_model = synapse_model
+
+    def connect(self, source, targets, weights, delays):
+        """
+        Connect a neuron to one or more other neurons with a static connection.
+
+        `source`  -- the ID of the pre-synaptic cell.
+        `targets` -- a list/1D array of post-synaptic cell IDs, or a single ID.
+        `weights` -- a list/1D array of connection weights, or a single weight.
+                     Must have the same length as `targets`.
+        `delays`  -- a list/1D array of connection delays, or a single delay.
+                     Must have the same length as `targets`.
+        """
+        if not isinstance(source, int) or source < 0:
+            errmsg = "Invalid source ID: %s" % (source)
+            raise errors.ConnectionError(errmsg)
+        if not core.is_listlike(targets):
+            targets = [targets]
+
+        ##############weights = weights*1000.0 # scale units
+        if isinstance(weights, float):
+            weights = [weights]
+        if isinstance(delays, float):
+            delays = [delays]
+        assert len(targets) > 0
+        # need to scale weights for appropriate units
+        for target, weight, delay in zip(targets, weights, delays):
+            if target.local:
+                if not isinstance(target, common.IDMixin):
+                    raise errors.ConnectionError("Invalid target ID: %s" % target)
+                #TODO record weights
+                '''
+                if self.synapse_type == "excitatory":
+                    synapse_object = target._cell.esyn
+                elif self.synapse_type == "inhibitory":
+                    synapse_object = target._cell.isyn
+                else:
+                    synapse_object = getattr(target._cell, self.synapse_type)
+                ###############source._cell.source.connect('event', synapse_object, 'synapse')
+                synapse_object.n_incoming_connections += 1
+                index = synapse_object.n_incoming_connections - 1
+                synapse_object.setWeight(index, weight)
+                synapse_object.setDelay(index, delay)'''
+                index=0
+                self.connections.append((source, target, index, weights, delays))
+
+    def set(self, name, value):
+        """
+        Set connection attributes for all connections in this manager.
+
+        `name`  -- attribute name
+        `value` -- the attribute numeric value, or a list/1D array of such
+                   values of the same length as the number of local connections,
+                   or a 2D array with the same dimensions as the connectivity
+                   matrix (as returned by `get(format='array')`).
+        """
+        #TODO: allow this!!
+        #for conn in self.connections:
+            #???
+
+
+# ==============================================================================
+#   Low-level API for creating, connecting and recording from individual neurons
+# ==============================================================================
+
+create = common.build_create(Population)
+
+connect = common.build_connect(Projection, FixedProbabilityConnector)
+
+set = common.set
+
+initialize = common.initialize
+
+####record = common.build_record('spikes', simulator)
+
+####record_v = common.build_record('v', simulator)
+
+####record_gsyn = common.build_record('gsyn', simulator)
+
+
+
+def record(source, filename):
+    """Record spikes to a file. source can be an individual cell or a list of
+    cells."""
+    logger.debug("Being asked to record spikes of %s to %s"%(source, filename))
+
+def record_v(source, filename):
+    """Record membrane potential to a file. source can be an individual cell or
+    a list of cells."""
+    logger.debug("Being asked to record v of %s to %s"%(source, filename))
+
+    global simNode, displayNode, color
+
+    scale = "1"
+    colour = colours[displayNode.childNodes.length%len(colours)]
+    for i in range(source.size):
+        lineNode = build_node('Line',
+                              id=line_prefix+source.label,
+                              scale=scale,
+                              color=colour,
+                              quantity="%s[%i]/%s"%(source.label,i,'v'),
+                              save="%s_%i_%s_nml2.dat"%(source.label,i,'v'))
+
+        displayNode.appendChild(lineNode)
+
+def record_gsyn(source, filename):
+    """Record gsyn."""
+    print "Being asked to record gsyn of %s to %s"%(source, filename)
+
+# ==============================================================================
+
+## to reimplement in simulator.py...
+
+
+min_delay = 0.0
+max_delay = 1e12
+
+
+def get_min_delay():
+    """Return the minimum allowed synaptic delay."""
+    return min_delay
+
+def get_max_delay():
+    """Return the maximum allowed synaptic delay."""
+    return max_delay
+
+common.get_min_delay = get_min_delay
+common.get_max_delay = get_max_delay
\ No newline at end of file
diff -Nru -w pynn-0.7.2/src/neuroml.py pynn-0.7.4/src/neuroml.py
--- pynn-0.7.2/src/neuroml.py	2011-10-03 08:57:31.000000000 -0400
+++ pynn-0.7.4/src/neuroml.py	2012-04-02 09:14:17.000000000 -0400
@@ -5,17 +5,22 @@
 :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS.
 :license: CeCILL, see LICENSE for details.
 
-$Id: neuroml.py 957 2011-05-03 13:44:15Z apdavison $
+$Id: neuroml.py 1066 2012-03-07 16:09:26Z pgleeson $
 """
 
-from pyNN import common, connectors, cells, standardmodels
+from pyNN import common, connectors, standardmodels
+from pyNN.standardmodels import cells
+
 import math
 import numpy
 import sys
+
 sys.path.append('/usr/lib/python%s/site-packages/oldxml' % sys.version[:3]) # needed for Ubuntu
-import xml.dom.ext
 import xml.dom.minidom
 
+import logging
+logger = logging.getLogger("neuroml")
+
 neuroml_url = 'http://morphml.org'
 namespace = {'xsi': "http://www.w3.org/2001/XMLSchema-instance",
              'mml':  neuroml_url+"/morphml/schema",
@@ -24,7 +29,7 @@
              'bio':  neuroml_url+"/biophysics/schema",  
              'cml':  neuroml_url+"/channelml/schema",}
              
-neuroml_ver="1.7.3"
+neuroml_ver="1.8.1"
 neuroml_xsd="http://www.neuroml.org/NeuroMLValidator/NeuroMLFiles/Schemata/v"+neuroml_ver+"/Level3/NeuroML_Level3_v"+neuroml_ver+".xsd"
 
 strict = False
@@ -92,7 +97,7 @@
         biophys_node  = build_node(':biophysics', units="Physiological Units")
         ifnode        = build_node('bio:mechanism', name="IandF_"+self.label, type='Channel Mechanism')
         passive_node  = build_node('bio:mechanism', name="pas_"+self.label, type='Channel Mechanism', passive_conductance="true")
-        # g_max = 10⁻³cm/tau_m  // cm(nF)/tau_m(ms) = G(µS) = 10⁻⁶G(S). Divide by area (10³) to get factor of 10⁻³
+        # g_max = 10⁻³cm/tau_m  // cm(nF)/tau_m(ms) = G(µS) = 10⁻⁶G(S). Divide by area (10³) to get factor of 10⁻³
         gmax = str(1e-3*self.parameters['cm']/self.parameters['tau_m'])
         passive_node.appendChild(build_parameter_node('gmax', gmax))
         cm_node       = build_node('bio:specificCapacitance')
@@ -216,6 +221,7 @@
         self.label = '%s%d' % (self.__class__.__name__, self.__class__.n)
         self.synapse_type = "doub_exp_syn"
         self.__class__.n += 1
+        logger.debug("IF_curr_exp created")
 
 class IF_curr_alpha(cells.IF_curr_alpha, NotImplementedModel):
     """Leaky integrate and fire model with fixed threshold and alpha-function-
@@ -231,6 +237,7 @@
         self.label = '%s%d' % (self.__class__.__name__, self.__class__.n)
         self.synapse_type = "doub_exp_syn"
         self.__class__.n += 1
+        logger.debug("IF_curr_alpha created")
 
 class IF_cond_exp(cells.IF_cond_exp, IF_base):
     """Leaky integrate and fire model with fixed threshold and 
@@ -245,6 +252,7 @@
         self.label = '%s%d' % (self.__class__.__name__, self.__class__.n)
         self.synapse_type = "doub_exp_syn"
         self.__class__.n += 1
+        logger.debug("IF_cond_exp created")
         
 class IF_cond_alpha(cells.IF_cond_alpha, IF_base):
     """Leaky integrate and fire model with fixed threshold and alpha-function-
@@ -259,6 +267,7 @@
         self.label = '%s%d' % (self.__class__.__name__, self.__class__.n)
         self.synapse_type = "alpha_syn"
         self.__class__.n += 1
+        logger.debug("IF_cond_alpha created")
 
 class SpikeSourcePoisson(cells.SpikeSourcePoisson, NotImplementedModel):
     """Spike source, generating spikes according to a Poisson process."""
@@ -294,13 +303,20 @@
 # ==============================================================================
 
 def setup(timestep=0.1, min_delay=0.1, max_delay=0.1, debug=False,**extra_params):
+
+    logger.debug("setup() called, extra_params = "+str(extra_params))
     """
     Should be called at the very beginning of a script.
     extra_params contains any keyword arguments that are required by a given
     simulator but not by others.
     """
     global xmldoc, xmlfile, populations_node, projections_node, inputs_node, cells_node, channels_node, neuromlNode, strict
+
+    if not extra_params.has_key('file'):
+        xmlfile = "PyNN2NeuroML.xml"
+    else:
     xmlfile = extra_params['file']
+
     if isinstance(xmlfile, basestring):
         xmlfile = open(xmlfile, 'w')
     if 'strict' in extra_params:
@@ -310,8 +326,17 @@
     neuromlNode = xmldoc.createElementNS(neuroml_url+'/neuroml/schema','neuroml')
     neuromlNode.setAttributeNS(namespace['xsi'],'xsi:schemaLocation',"http://morphml.org/neuroml/schema "+neuroml_xsd)
     neuromlNode.setAttribute('lengthUnits',"micron")
+
+    neuromlNode.setAttribute("xmlns","http://morphml.org/neuroml/schema")
+
+    for ns in namespace.keys():
+        neuromlNode.setAttribute("xmlns:"+ns,namespace[ns])
+
     xmldoc.appendChild(neuromlNode)
     
+    neuromlNode.appendChild(xmldoc.createComment("NOTE: the support for abstract cell models in NeuroML v1.x is limited, so the mapping PyNN -> NeuroML v1.x is quite incomplete."))
+    neuromlNode.appendChild(xmldoc.createComment("Try the PyNN -> NeuroML v2.0 mapping instead."))
+    
     populations_node = build_node('net:populations')
     projections_node = build_node('net:projections', units="Physiological Units")
     inputs_node = build_node('net:inputs', units="Physiological Units")
@@ -330,7 +355,7 @@
         if not node.hasChildNodes():
             neuromlNode.removeChild(node)
     # Write the file
-    xml.dom.ext.PrettyPrint(xmldoc, xmlfile)
+    xmlfile.write(xmldoc.toprettyxml())
     xmlfile.close()
 
 def run(simtime):
@@ -402,23 +427,23 @@
     
     def __init__(self, size, cellclass, cellparams=None, structure=None,
                  label=None):
+        __doc__ = common.Population.__doc__
+        common.Population.__init__(self, size, cellclass, cellparams, structure, label)
+        ###simulator.initializer.register(self)
+
+    def _create_cells(self, cellclass, cellparams, n):
         """
         Create a population of neurons all of the same type.
         
-        size - number of cells in the Population. For backwards-compatibility, n
-               may also be a tuple giving the dimensions of a grid, e.g. n=(10,10)
-               is equivalent to n=100 with structure=Grid2D()
-        cellclass should either be a standardized cell class (a class inheriting
-        from common.standardmodels.StandardCellType) or a string giving the name of the
-        simulator-specific model that makes up the population.
-        cellparams should be a dict which is passed to the neuron model
-          constructor
-        structure should be a Structure instance.
-        label is an optional name for the population.
+
+        `cellclass`  -- a PyNN standard cell
+        `cellparams` -- a dictionary of cell parameters.
+        `n`          -- the number of cells to create
         """
         global populations_node, cells_node, channels_node
-        common.Population.__init__(self, size, cellclass, cellparams, structure, label)
-        self.label = self.label or 'Population%d' % Population.n
+
+        assert n > 0, 'n must be a positive integer'
+
         self.celltype = cellclass(cellparams)
         Population.n += 1
         
@@ -449,7 +474,15 @@
         self.last_id = self.size-1
         self.all_cells = numpy.array([ID(id) for id in range(self.first_id, self.last_id+1)], dtype=ID)
         self._mask_local = numpy.ones_like(self.all_cells).astype(bool)
-        self.local_cells = self.all_cells[self._mask_local]
+        #self.local_cells = self.all_cells
+
+
+    def _set_initial_value_array(self, variable, value):
+        """
+            Nothing yet...
+        """
+        pass
+
 
     def _record(self, variable, record_from=None, rng=None, to_file=True):
         """
diff -Nru -w pynn-0.7.2/src/neuron/simulator.py pynn-0.7.4/src/neuron/simulator.py
--- pynn-0.7.2/src/neuron/simulator.py	2011-10-03 08:57:30.000000000 -0400
+++ pynn-0.7.4/src/neuron/simulator.py	2012-04-06 09:58:01.000000000 -0400
@@ -27,7 +27,7 @@
 :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS.
 :license: CeCILL, see LICENSE for details.
 
-$Id: simulator.py 957 2011-05-03 13:44:15Z apdavison $
+$Id: simulator.py 1121 2012-04-06 13:58:02Z apdavison $
 """
 
 from pyNN import __path__ as pyNN_path
@@ -450,11 +450,6 @@
                     synapse_object = getattr(target._cell, self.synapse_type) 
                 nc = state.parallel_context.gid_connect(int(source), synapse_object)
                 nc.weight[0] = weight
-                
-                # if we have a mechanism (e.g. from 9ML that includes multiple
-                # synaptic channels, need to set nc.weight[1] here
-                if nc.wcnt() > 1:
-                    nc.weight[1] = target._cell.type.synapse_types.index(self.synapse_type)
                 nc.delay  = delay
                 # nc.threshold is supposed to be set by ParallelContext.threshold, called in _build_cell(), above, but this hasn't been tested
                 self.connections.append(Connection(source, target, nc))
diff -Nru -w pynn-0.7.2/src/random.py pynn-0.7.4/src/random.py
--- pynn-0.7.2/src/random.py	2011-10-03 08:57:31.000000000 -0400
+++ pynn-0.7.4/src/random.py	2012-07-03 10:37:14.000000000 -0400
@@ -49,7 +49,7 @@
     simply to read externally-generated numbers from files."""
     
     def __init__(self, seed=None):
-        if seed:
+        if seed is not None:
             assert isinstance(seed, int), "`seed` must be an int (< %d), not a %s" % (sys.maxint, type(seed).__name__)
         self.seed = seed
         # define some aliases
@@ -70,7 +70,7 @@
     def __init__(self, seed=None, parallel_safe=True):
         AbstractRNG.__init__(self, seed)
         self.parallel_safe = parallel_safe
-        if self.seed and not parallel_safe:
+        if self.seed is not None and not parallel_safe:
             self.seed += mpi_rank # ensure different nodes get different sequences
             if mpi_rank != 0:
                 logger.warning("Changing the seed to %s on node %d" % (self.seed, mpi_rank))
@@ -123,7 +123,7 @@
     def __init__(self, seed=None, parallel_safe=True):
         WrappedRNG.__init__(self, seed, parallel_safe)
         self.rng = numpy.random.RandomState()
-        if self.seed:
+        if self.seed is not None:
             self.rng.seed(self.seed)
         else:
             self.rng.seed()  
@@ -144,7 +144,7 @@
             raise ImportError, "GSLRNG: Cannot import pygsl"
         WrappedRNG.__init__(self, seed, parallel_safe)
         self.rng = getattr(pygsl.rng, type)()
-        if self.seed:
+        if self.seed is not None:
             self.rng.set(self.seed)
         else:
             self.seed = int(time.time())
@@ -220,19 +220,19 @@
                             parameters=self.parameters,
                             mask_local=mask_local)
         if self.boundaries:  
-            if type(res) == numpy.float64:
+            if isinstance(res, numpy.float):
                 res = numpy.array([res])
             if self.constrain == "clip":
                 return numpy.maximum(numpy.minimum(res, self.max_bound), self.min_bound)
             elif self.constrain == "redraw": # not sure how well this works with parallel_safe, mask_local
                 if len(res) == 1:
                     while not ((res > self.min_bound) and (res < self.max_bound)):
-                        res = self.rng.next(n=n, distribution=self.name, parameters=self.parameters)
+                        res = self.rng.next(n=n, distribution=self.name, parameters=self.parameters, mask_local=mask_local)
                     return res
                 else:
                     idx = numpy.where((res > self.max_bound) | (res < self.min_bound))[0]
                     while len(idx) > 0:
-                        res[idx] = self.rng.next(len(idx), distribution=self.name, parameters=self.parameters)
+                        res[idx] = self.rng.next(n=n, distribution=self.name, parameters=self.parameters, mask_local=mask_local)
                         idx = numpy.where((res > self.max_bound) | (res < self.min_bound))[0]
                     return res
             else:
@@ -241,4 +241,3 @@
         
     def __str__(self):
         return "RandomDistribution('%(name)s', %(parameters)s, %(rng)s)" % self.__dict__
-    
diff -Nru -w pynn-0.7.2/src/recording/files.py pynn-0.7.4/src/recording/files.py
--- pynn-0.7.2/src/recording/files.py	2011-10-03 08:57:31.000000000 -0400
+++ pynn-0.7.4/src/recording/files.py	2012-07-03 11:06:25.000000000 -0400
@@ -14,7 +14,7 @@
 :copyright: Copyright 2006-2011 by the PyNN team, see AUTHORS.
 :license: CeCILL, see LICENSE for details.
 
-$Id: files.py 957 2011-05-03 13:44:15Z apdavison $
+$Id: files.py 1194 2012-07-03 15:00:07Z apdavison $
 """
 
 
@@ -191,17 +191,22 @@
     arrays.
     """
     
+    def _check_open(self):
+        if not hasattr(self, "fileobj") or self.fileobj.closed:
+            self.fileobj = open(self.name, self.mode, DEFAULT_BUFFER_SIZE)
+        else:
+            self.fileobj.seek(0)
+    
     def write(self, data, metadata):
         __doc__ = BaseFile.write.__doc__
         self._check_open()
-        metadata_array = numpy.array(metadata.items())
+        metadata_array = numpy.array(metadata.items(), dtype=(str, float))
         savez(self.fileobj, data=data, metadata=metadata_array)
         
     def read(self):
         __doc__ = BaseFile.read.__doc__
         self._check_open()
         data = numpy.load(self.fileobj)['data']
-        self.fileobj.seek(0)
         return data
     
     def get_metadata(self):
@@ -213,7 +218,6 @@
                 D[name] = eval(value)
             except Exception:
                 D[name] = value
-        self.fileobj.seek(0)
         return D
     
     
diff -Nru -w pynn-0.7.2/src/space.py pynn-0.7.4/src/space.py
--- pynn-0.7.2/src/space.py	2011-10-03 08:57:31.000000000 -0400
+++ pynn-0.7.4/src/space.py	2012-06-01 07:40:55.000000000 -0400
@@ -28,6 +28,9 @@
 from operator import and_
 from pyNN.random import NumpyRNG
 from pyNN import descriptions
+import logging
+
+logger = logging.getLogger("PyNN")
 
 def distance(src, tgt, mask=None, scale_factor=1.0, offset=0.0,
              periodic_boundaries=None): # may need to add an offset parameter
diff -Nru -w pynn-0.7.2/test/system/scenarios.py pynn-0.7.4/test/system/scenarios.py
--- pynn-0.7.2/test/system/scenarios.py	2011-10-03 09:12:57.000000000 -0400
+++ pynn-0.7.4/test/system/scenarios.py	2012-04-06 09:46:50.000000000 -0400
@@ -506,3 +506,22 @@
     sim.run(100.0)
     assert_arrays_almost_equal(post.getSpikes(), numpy.array([[0.0, 13.4]]), 0.5)
 
+@register()
+def ticket226(sim):
+    """
+    Check that the start time of DCSources is correctly taken into account
+    http://neuralensemble.org/trac/PyNN/ticket/226)
+    """
+    sim.setup(timestep=0.1)
+
+    cell = sim.Population(1, sim.IF_curr_alpha,
+                          {'tau_m': 20.0, 'cm': 1.0, 'v_rest': -60.0,
+                           'v_reset': -60.0})
+    cell.initialize('v', -60.0)
+    inj = sim.DCSource(amplitude=1.0, start=10.0, stop=20.0)
+    cell.inject(inj)
+    cell.record_v()
+    sim.run(30.0)
+    id, t, v = cell.get_v().T
+    assert abs(v[abs(t-10.0)<0.01][0] - -60.0) < 1e-10
+    assert v[abs(t-10.1)<0.01][0] > -59.99
diff -Nru -w pynn-0.7.2/test/unittests/test_basepopulation.py pynn-0.7.4/test/unittests/test_basepopulation.py
--- pynn-0.7.2/test/unittests/test_basepopulation.py	2011-10-03 08:57:17.000000000 -0400
+++ pynn-0.7.4/test/unittests/test_basepopulation.py	2012-07-03 10:37:14.000000000 -0400
@@ -166,10 +166,11 @@
 
 def test_get_with_no_get_array():
     orig_iter = MockPopulation.__iter__
-    MockPopulation.__iter__ = Mock(return_value=iter([Mock()]))
+    mock_cell = Mock()
+    MockPopulation.__iter__ = Mock(return_value=iter([mock_cell]))
     p = MockPopulation()
     values = p.get("i_offset")
-    assert_equal(values[0]._name, "i_offset")
+    assert hasattr(mock_cell, "i_offset")
     MockPopulation.__iter__ = orig_iter
 
 def test_get_with_gather():
diff -Nru -w pynn-0.7.2/test/unittests/test_files.py pynn-0.7.4/test/unittests/test_files.py
--- pynn-0.7.2/test/unittests/test_files.py	2011-10-03 08:57:17.000000000 -0400
+++ pynn-0.7.4/test/unittests/test_files.py	2012-07-03 10:37:14.000000000 -0400
@@ -46,14 +46,14 @@
 def test_StandardTextFile_write():
     files.open = Mock()
     stf = files.StandardTextFile("filename", "w")
-    data=[(0, 2.3),(1, 3.4),(2, 4.3)]
+    data=[(0, 2.25),(1, 3.5),(2, 4.125)]
     metadata = {'a': 1, 'b': 9.99}
-    target = [(('# a = 1\n# b = 9.99\n',), {}),
-              (('0.0\t2.3\n',), {}),
-              (('1.0\t3.4\n',), {}),
-              (('2.0\t4.3\n',), {})]
+    target = [('# a = 1\n# b = 9.99\n',),
+              ('0.0\t2.25\n',),
+              ('1.0\t3.5\n',),
+              ('2.0\t4.125\n',)]
     stf.write(data, metadata)
-    assert_equal(stf.fileobj.write.call_args_list,
+    assert_equal([call[0] for call in stf.fileobj.write.call_args_list],
                  target)
     files.open = builtin_open
     
diff -Nru -w pynn-0.7.2/test/unittests/test_neuron.py pynn-0.7.4/test/unittests/test_neuron.py
--- pynn-0.7.2/test/unittests/test_neuron.py	2011-10-03 08:57:17.000000000 -0400
+++ pynn-0.7.4/test/unittests/test_neuron.py	2012-07-03 10:37:14.000000000 -0400
@@ -40,9 +40,6 @@
     celltype = MockCellClass()
     local_cells = [MockID(44), MockID(33)]
 
-# simulator
-def test_load_mechanisms():
-    assert_raises(Exception, simulator.load_mechanisms, "/tmp") # not found
     
 def test_is_point_process():
     section = h.Section()

--- End Message ---
--- Begin Message ---
On Wed, Jul 11, 2012 at 11:41:25 -0400, Yaroslav Halchenko wrote:

> diff -Nru -w pynn-0.7.2/src/neuroml.py pynn-0.7.4/src/neuroml.py
> --- pynn-0.7.2/src/neuroml.py	2011-10-03 08:57:31.000000000 -0400
> +++ pynn-0.7.4/src/neuroml.py	2012-04-02 09:14:17.000000000 -0400
> @@ -294,13 +303,20 @@
>  # ==============================================================================
>  
>  def setup(timestep=0.1, min_delay=0.1, max_delay=0.1, debug=False,**extra_params):
> +
> +    logger.debug("setup() called, extra_params = "+str(extra_params))

this is broken, surely it should be below the docstring?
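For context — in Python, only a string literal that is the *first* statement
of a function body is stored as the function's `__doc__`; any statement placed
before it (such as that logger.debug() call) silently turns the docstring into
a discarded expression. A minimal sketch (names are illustrative):

```python
def good():
    """First statement, so this becomes __doc__."""
    pass

def broken():
    x = "some call executed first"  # e.g. logger.debug(...)
    """No longer the docstring, just a throwaway string expression."""
    return x

print(good.__doc__)    # prints the docstring
print(broken.__doc__)  # prints None
```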

also the sys.path.append('/usr/lib/python2.x/site-packages/oldxml')
seems like a horrible, horrible hack.
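If the path tweak has to stay for now, a slightly less fragile version — purely
a sketch, the path itself is the same assumption the original makes — would at
least check that the legacy directory exists and avoid adding it twice:

```python
import os
import sys

# Hypothetical guard around the oldxml hack: only touch sys.path when the
# distribution-specific directory is actually present, and never append a
# duplicate entry.
oldxml_dir = '/usr/lib/python%s/site-packages/oldxml' % sys.version[:3]
if os.path.isdir(oldxml_dir) and oldxml_dir not in sys.path:
    sys.path.append(oldxml_dir)
```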

>      """
>      Should be called at the very beginning of a script.
>      extra_params contains any keyword arguments that are required by a given
>      simulator but not by others.
>      """

unblocked anyway...

Cheers,
Julien

Attachment: signature.asc
Description: Digital signature


--- End Message ---
