MDP-3.3:

2012-09-19: FIX: fix error in automatic testing for MultinomialNB.
2012-09-19: ERF: make sklearn nodes automatic testing more robust
    The previous solution was actually ignoring the special definitions in NODES.
2012-09-19: FIX: disable pp support if server can not start
    Being able to import pp is not enough to be sure that pp works.
    For example, in a low-memory situation the following can happen:
    >>> import pp
    >>> server = pp.Server()
    [...]
    OSError: [Errno 12] Cannot allocate memory
    This fix just disables pp support if the server can not be started.
2012-06-18: FIX: Fix wrapping of sklearn 0.11 classifiers
2012-04-17: FIX: make test_SFA2Node even more robust
2012-04-16: FIX: make FastICANode test more robust
2012-04-16: FIX: make test_SFA2Node more robust
2012-04-05: FIX: fix pp_tests when run multiple times.
    pp tests were failing when run twice in a row. Ugly work-around, but it seems to work...
2012-04-05: FIX: fixed broken test_reload.
    test_reload was failing when called twice in a row.
2012-04-05: FIX: fix random seed tests.
    The tests were failing when called twice in a row:
    >>> import mdp
    >>> mdp.test()
    >>> mdp.test()
    The first call was working, the second one was giving failures.
2012-04-01: ERF: added tests for learning of bias parameters
2012-03-26: FIX: replace third remaining test for pp_monkeypatch_dirname
    Hopefully this will fix test suite failures.
2012-03-22: FIX: Decrease the noise level in the DiscreteHopfieldClassifier.
2012-03-22: FIX: honor MDP_DISABLE_SHOGUN env variable
2012-03-19: FIX: fix left-over directories from testing pp.
    I do not know why, but this simple change fixes the leftover directories problem when testing with python-pp and pp monkey-patching. It should have worked even as it was before, but apparently some race condition happens.
2012-03-06: FIX: fix determinant of random rotation matrix
    Determinant sign was wrong if the dimensions of the rotation matrix were odd. Thanks to Philip DeBoer. Actual fix.
2012-03-06: FIX: fix determinant of random rotation matrix
    Determinant sign was wrong if the dimensions of the rotation matrix were odd. Thanks to Philip DeBoer. Failing test.
2012-02-13: ENH: remove duplicated and overly verbose code
2012-02-13: FIX: remove FastICA stabilization from tests
2012-02-13: FIX: remove unused parameter stabilization in FastICA.
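    The pp entry above (2012-09-19) guards pp support behind an actual server start rather than a bare import. A minimal sketch of that kind of guard, assuming a hypothetical module-level flag (pp_available is illustrative, not MDP's actual configuration code):

    # Hedged sketch: importing pp successfully is not enough, so try to
    # start a server and disable pp support if that fails (e.g. with
    # OSError: [Errno 12] Cannot allocate memory in low-memory situations).
    try:
        import pp
        server = pp.Server()   # may raise OSError under memory pressure
        server.destroy()       # clean up the probe server
        pp_available = True
    except Exception:
        pp_available = False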
2012-01-19: NEW: added new sklearn algorithms wrapping
    We now wrap 93 sklearn algorithms!
2012-01-19: FIX: fix another incompatibility with sklearn 0.10
    Although EllipticEnvelop is derived from sklearn.base.ClassifierMixin, it is not a classifier. It is actually a "predictor". Added a check in the sklearn wrappers.
2012-01-19: FIX: fix sklearn wrappers for version 0.10
    The new sklearn version introduces classifier and predictor mixin classes that do not have a 'fit' method. Typical error:
    AttributeError: type object 'OutlierDetectionMixin' has no attribute 'fit'
    Just added a check that the method is really present before wrapping.
2012-01-12: FIX: fix failing test for no eigenvalues left problem
    Check that PCANode now raises the right exception.
2012-01-12: FIX: add useful exception in case of no eigenvalues left.
    Check for the condition explained in b0810d72ce11925e1db6204c3a20bdfc77741a82 and raise a nicely formatted exception:
    Traceback:
    [...]
    File ".../mdp/nodes/pca_nodes.py", line 223, in _stop_training
        ' var_abs=%e!'%self.var_abs)
    NodeException: No eigenvalues larger than var_abs=1.000000e-15!
2012-01-12: OTH: added failing test for no eigenvalues left problem.
    When PCANode is set to use SVD and automatic dimensionality reduction, it may happen that after removing directions corresponding to eigenvalues smaller than var_abs (1e-12 default), nothing is left. This happens for example if the data is a matrix of (almost) zeros. The error looks like this:
    Traceback (most recent call last):
    [...]
    File ".../mdp/nodes/pca_nodes.py", line 220, in _stop_training
        d = d[ d / d.max() > self.var_rel ]
    ValueError: zero-size array to ufunc.reduce without identity
2012-01-03: FIX: old joblib breaks imports from sklearn.decomposition
    >>> import sklearn.decomposition
    Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    File "/usr/lib/pymodules/python2.6/sklearn/decomposition/__init__.py", line 8, in <module>
        from .sparse_pca import SparsePCA, MiniBatchSparsePCA
    File "/usr/lib/pymodules/python2.6/sklearn/decomposition/sparse_pca.py", line 10, in <module>
        from .dict_learning import dict_learning, dict_learning_online
    File "/usr/lib/pymodules/python2.6/sklearn/decomposition/dict_learning.py", line 17, in <module>
        from ..externals.joblib import Parallel, delayed, cpu_count
    ImportError: cannot import name cpu_count
    >>> joblib.__version__
    '0.4.3'
2012-01-02: FIX: py3k compatibility for reload
    reload() is moved to imp.reload(). It also seems that Python 3 behaves slightly differently wrt. reloads. For some reason, mdp.configuration is not imported properly on reload. Let's just not remove the name from the namespace, as this is the easiest fix.
2011-12-23: ERF: added a failing test for reload
    MDP does not work with reload. See https://github.com/mdp-toolkit/mdp-toolkit/issues/1 for details. Thanks to Yaroslav Halchenko for reporting it.
2011-12-18: FIX: Removed the leftover _global_message_emitter attribute (as reported by Tiziano and Hannah).
2011-12-09: FIX: fix wrong sorting of eigenvectors in degenerate case for SVD.
    Thanks to Yaroslav Halchenko for reporting and fixing!
2011-12-09: NEW: added failing test for wrong sorting of SVDs
2011-10-24: FIX: updated sklearn wrappers to silence warnings for version >= 0.9

-------------------------------------------------------------------------------

MDP-3.2:

2011-10-24: FIX: do not complain when failing to remove temporary directory.
    The error warning breaks doctests. On top of that, I do not think the error message is really useful.
2011-10-22: FIX: fix previous commit (options can not be None)
2011-10-22: FIX: fix bug with pytest when MDP is installed in a user non-writable directory.
2011-10-22: ENH: make NeuralGasNode automatic tests faster
2011-10-22: ENH: call sklearn by its name whenever possible without too much disruption
2011-10-22: FIX: workaround for debian patch to pp that disables pp's default password.
2011-10-22: FIX: could not find submodules and classes in old versions of sklearn
    The problem was that we build names of sub-modules from strings, and we need to take into account two possible prefixes, "sklearn" or "scikits.learn".
2011-10-22: FIX: accommodate new name for scikits.learn 0.9 aka sklearn
2011-10-22: RF: extract two methods and simplify logic of metaclass wrapping
2011-10-22: OTH: one more metaclass test
    Also cleaned up whitespace.
2011-10-21: FIX/python2.5: forgotten __future__ imports
2011-10-21: ERF: rewrite one format string to make it clearer
2011-10-21: FIX: extend infodict with **kwargs name, to avoid guessing later
2011-10-19: ENH: metaclass: use dict to replace two lists and avoid infodict
    No functional changes.
2011-10-21: FIX: finally fix bug in metaclass signature overwriting
2011-10-21: OTH: make test_metaclass more stringent
2011-10-21: OTH: added self-contained test for metaclass and extensions issues.
2011-10-19: OTH: added more detailed failing test for metaclass
2011-10-19: OTH: another bug in signature overwriting. Committed a failing test.
    Thanks to Alberto Escalante!
2011-10-16: ENH: make tests for NeuralGasNode shorter
2011-10-15: FIX: chmod -x mdp/test/test_NeuralGasNode.py
2011-10-15: DOC: tiny changes in comments
2011-10-15: FIX: tests for neural gas node
2011-10-15: FIX: when start positions for the nodes were set, the number-of-nodes default is discarded
2011-10-14: DOC+RF: slight improvements to documentation, comments, and spacing
2011-10-14: FIX: NeuralGas fixes by Michael Schmucker
2011-08-03: ENH: remove tuple unpacking init: it took twice as many lines
2011-08-03: FIX: NeuralGas fixed by Michael Schmuker
2011-08-03: FIX: remove \ line breaks
2011-07-25: FIX: remove \ line breaks
2011-07-25: NEW, FIX: tests for NeuralGasNode and two fixes
2011-07-25: FIX: _remove_old_edges makes use of its argument
2011-07-25: DOC, FIX: Typos in docstrings and whitespace
2011-07-25: NEW: NeuralGasNode by Michael Schmuker
2011-10-14: FIX: remove unused variable
2011-10-14: FIX: py3k bytes/str issue in subprocess input
2011-10-14: FIX: remove unused variable
2011-10-14: ENH: use a context manager for sys.stdout stealing
2011-10-14: FIX: add pytest ini file to manifest
2011-10-14: NEW: really check if pp needs monkey-patching
2011-10-14: NEW: be compatible with shogun 1.0. Drop support for older releases.
2011-10-14: FIX: fix handling of MDP_DISABLE_SHOGUN variable
2011-10-14: Fix KeyError with NSDEBUG=1
    Commit 7345d4a changed old-style formatting to new-style formatting, or at least the format string, but .format was not replaced with %!
2011-10-13: DOC: make it a bit easier to see that you can use pp on debian too
2011-10-13: FIX: remove (now) useless duplicated test in bimdp
2011-10-13: FIX: windows does not allow me to remove a file
2011-10-13: FIX: no with statement in tests for {flow,node}.save
    Apparently windows does not allow one process to open a file multiple times. As {flow,node}.save internally use the with statement to open the dump file, we can not use the with statement in the test of these methods: nesting with statements means opening the same file multiple times.
2011-10-12: NEW: allow for pretty printing of mdp.config.
    Now config.info() is printed on stdout when you type
    >>> print mdp.config
    or
    >>> mdp.config
    Don't think about the fact that this is only achievable by a metaclass ;-)
2011-10-12: ERF: also add mdp.__version__ in config.info
    __revision__ is not enough; when mdp is installed and does not load from a git repository that string is empty (by the way: we should fix it!)
2011-10-11: FIX: remove forgotten print statement
2011-10-11: NEW: add tempfile test to help diagnose win7 problems
2011-10-11: FIX: make py.test.__version__ check more robust
2011-10-11: Add mdp.__revision__ to mdp.config.info() output
    This changes the order of initialization a bit: first mdp.configuration is fully imported, then mdp.utils. Because of this change, mdp.utils.repo_revision was renamed to mdp.repo_revision. It is imported only in mdp.configuration and then deleted from the visible namespace. get_git_revision was available as utils.get_git_revision, and now it's gone, but mdp.__revision__ contains the same value, so there is little need to export the function. I hope nobody used it.
    >>> print mdp.config.info()
              python: 2.7.2.final.0
                 mdp: MDP-3.1-74-gbaca2a8+
     parallel python: 1.6.0
              shogun: NOT AVAILABLE: No module named shogun
              libsvm: libsvm.so.3
              joblib: 0.4.6
             scikits: 0.8.1
                numx: scipy 0.9.0
              symeig: scipy.linalg.eigh
2011-10-11: NEW: append + to mdp.__revision__ in modified repo
2011-10-11: FIX: half of an assert message was lost
2011-10-11: ERF: use py.test 2 ini-file configuration.
    We can now check for the py.test version, and finally get rid of the redundant conftest.py files. We now have only one conftest.py file for bimdp and mdp.
2011-10-11: FIX: check if we have ENOENT in TemporaryDir.__del__
2011-10-11: FIX: stop complaining if dir is gone already in TemporaryDirectory
2011-10-11: ERF: get rid of the pp monkey patch dirs the hard way.
    TemporaryDirectory does not manage to delete itself when created in a subprocess.
2011-10-11: ERF: make the pp monkeypatch dir prefix more explicit
2011-10-11: ERF: change the interpretation of MDP_MONKEYPATCH_PP
    Now you can specify a container directory for the patching. It defaults to tempfile.gettempdir(), i.e. /tmp on unix and something else on windows.
2011-10-11: FIX: only allow py.test version >= 2.1.2
    py.test versions > 2 are subtly incompatible with versions < 2. Just settle on the new ones. Problem: debian still does not package a reasonably recent version of py.test. In this case, just use the standalone script. Thanks to 8a5b2e272de4230f15d09f85b7d35b0aeee3078e it's as flexible as py.test itself.
2011-10-11: ERF: pass options to py.test in mdp.test.
    This way using the standalone script run_tests.py is as flexible as using py.test directly.
2011-10-11: ERF: updated standalone py.test script to version 2.1.2
2011-10-11: DOC: added pointer to win64 binaries distributor
2011-10-10: ENH: use with statement for better readability in a couple of tests
    No functional changes.
2011-10-10: ENH: make pyflakes happy by not defining unused names
    No functional changes.
2011-10-10: FIX: make monkeypatch_pp compatible with both pp 1.6 and 1.6.1
    >>> import pp
    >>> pp.version
    '1.6.0'
    >>> pp._Worker.command
    '"/usr/bin/python" -u "/usr/lib/pymodules/python2.6/ppworker.py" 2>/dev/null'
    >>> import pp
    >>> pp.version
    '1.6.1'
    >>> pp._Worker.command
    ['/usr/bin/python2.6', '-u', '/usr/lib/pymodules/python2.6/ppworker.py', '2>/dev/null']
2011-10-10: FIX: temporary directory for tests was not managed properly
    sed 's/mdp_toolkit_reporting_configured/mdp_configured/g'
    because the existence of this variable guards not only reporting configuration, but also the temporary directory stuff (and also the name was kind of long).
    sed 's/tempdirname/mdp_tempdirname/'
    because it goes in the py.test namespace and should have a prefix like the other variable.
    sed -r 's+bimdp/+TMP\/+g; s+mdp/+bimdp/+g; s+TMP\/+mdp/+g' bimdp/test/conftest.py
    because bimdp should be kept in sync. In my installation bimdp is actually configured first, so tests in the temporary directory were failing with AttributeError on py.test.tempdirname.
2011-10-10: FIX: tempdirs in caching tests were short-lived
    A temporary directory created with TemporaryDirectory can go at any time, so a construct like dirname = TemporaryDirectory().name is broken. Since the directories are created in the master temporary directory anyway, they don't have to be deleted separately.
2011-10-10: FIX: do not set the cachedir by default on import mdp.caching
    Setting it by default was creating a lot of bogus temporary directories when parallel processes were started using process_scheduler. I see no side effects from not setting it.
2011-10-10: FIX: better name for global temp dir
2011-10-10: FIX: fix pp monkey patching.
    pp._Worker.command.replace(ppworker, ppworker3) did not work, as pp._Worker.command is a list and not a string!
2011-10-10: FIX: put joblib caching stuff in the new global test tempdir
2011-10-10: ERF: added a container directory for temporary test files.
    The directory gets deleted (no matter its contents) at the end of the test run. This should fix stale temp files.
2011-10-05: FIX: partially revert 569cbe7
    Changing 'except Exception:' clauses to bare 'except:' would catch and swallow ^C and the like.
2011-10-02: ENH: revert the condition for pp monkeypatching
    It's better to only enable it when explicitly requested. The warning is also removed, because it doesn't make sense to warn about something that was explicitly requested. If the option to monkey patch is not given, check if pp is likely to be broken, and if so, disable it at configuration time.
2011-10-02: FIX: remove some unused imports and one allocation
2011-10-02: ENH: make temporary directory names more readable
    tmpxayPZEpp4mdp/ => pp4mdp.tmpxayPZE/
2011-09-08: FIX: new module structure in scikits.learn 0.8
2011-09-07: BF: Exclude float16 dtypes in numpy.linalg.eigh.
2011-08-24: Fixes to FastICA node
    Fine tuning and stabilisation are now used correctly, and the result closely resembles the original matlab version for the deflation/gaussian case with prewhitened data. The results are not numerically identical because there are still some differences in stopping criteria, but it's close. To verify, see the original Matlab implementation.
2011-08-10: DOC: added documentation item to TODO list
2011-06-30: ERF: PEP8 fix (nitpicking)
2011-06-30: ERF: simplify metaclass wrapping code
2011-06-30: FIX: Fixed the node metaclass to correctly handle signatures and docstrings for inheritance chains.
2011-06-30: ERF: Some small pylint-motivated updates.
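    The 2011-10-10 monkeypatch_pp entries above note that pp._Worker.command is a plain string in pp 1.6.0 but a list of arguments in pp 1.6.1, so a naive str.replace() breaks. A version-agnostic sketch of the substitution (the ppworker3 target and the helper name are purely illustrative, not MDP's actual code):

    # Hedged sketch: patch the worker command whether it is a string
    # (pp 1.6.0) or a list of arguments (pp 1.6.1).
    import pp

    def patch_worker_command(old='ppworker.py', new='ppworker3.py'):
        cmd = pp._Worker.command
        if isinstance(cmd, str):
            pp._Worker.command = cmd.replace(old, new)
        else:
            pp._Worker.command = [part.replace(old, new) for part in cmd]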
2011-06-30: ERF: wrap private methods in metaclass even if no docstring is defined.
2011-06-30: ERF: Made the _get_required_train_args method more robust.
2011-06-30: OTH: refactored metaclass tests, again.
2011-06-30: OTH: refactored metaclass tests
2011-06-16: DOC: added proper git format for changelog
2011-06-16: OTH: commit failing test for docstring overwriting
    Thanks to Alberto Escalante for pointing this bug out.
2011-04-07: FIX: verbose argument in set_cachedir was ignored.
2011-06-12: FIX: Fixed a regression that broke the DBN example.
2011-06-11: ERF: Major refactoring of the binet inspection code.
    This follows the refactoring of the flow-to-HTML conversion. Smaller refactorings in related parts (slideshow) were also done.
2011-06-11: ERF: Prepare large refactoring by renaming the files first.
2011-05-31: ERF: Refactored the flow-to-HTML conversion.
    Threw out useless code, fixed style issues and now use the proper vocabulary (visitor pattern). The bimdp inspection has not yet been updated and is therefore broken in this commit.
2011-04-18: FIX: monkeypatch_pp was broken
    I made some changes without actually testing if stuff works...
2011-04-15: FIX: delete main joblib cache directory at mdp exit
    The directory is of course only deleted if we created it ourselves, i.e. if it is a temporary directory.
2011-04-15: FIX: delete joblib cache directory after tests
2011-04-15: DOC: describe MDPNUMX and MDPNSDEBUG env. vars.
2011-04-15: FIX: bump supported scikits-learn version number
    Setting >=0.6 is pretty conservative, but Debian's 0.5-1 is bad for sure.
2011-04-14: DOC: document MDP_DISABLE_MONKEYPATCH_PP
2011-04-14: DOC: document MDP_DISABLE_* variables
2011-01-12: DOC: improve rst in mdp/extension.py
2011-03-21: FIX: drop some unused variables & imports
    Wanted to test pyflakes on something. This is the result.
2011-04-14: FIX: apply a temporary workaround to the pp import problem
    For details see http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=620551.
    ppworker.py must be run from some safe place. A temporary dir seems a good option. Unfortunately a temporary file is not enough, because the parent directory should be empty. TemporaryDirectory is so complicated because the code is run during the Python shutdown procedure and some modules are not available.
2011-04-05: FIX: make short description fit within 200 char limit
    Tiziano proposed the text, nobody protested... I guess that this signifies a consensus. :) Some small formatting fixes in source code and output were applied.
2011-04-01: OTH: remove unnecessary parentheses around assert
2011-04-04: FIX: restore python2.4, 2.5 support in setup.py
    Module ast is not available in python < 2.6. The format of docstrings is slightly changed: removed empty lines at the top. Tested with python 2.4, 2.5, 2.6, 2.7, 3.1, 3.2. Output looks identical.
2011-04-01: FIX: [pp] Result container was shared between invocations
2011-04-01: ERF: improved scikits.learn detection code.
2011-03-31: ERF: Some small code improvements, no functional changes.
2011-03-31: FIX: Fixed source path handling.

-------------------------------------------------------------------------------

MDP-3.1:

2011-03-29: FIX: fixed shogun version detection code.
    As I expected, explicit version checking is bound to fail. I would prefer to just remove it and rely on functional checks, i.e. is the feature we depend on present or not?
2011-03-24: ERF: CovarianceMatrix has the option of not centering the data; SFANode modified accordingly
2011-03-22: FIX: fix wrong normalization in SFANode
    SFANode was using the covariance matrix of the derivatives, while the SFA algorithm requires the non-centered second moment matrix of the derivatives. This bug is probably irrelevant in most cases, but in some corner cases the error scales with 1/sqrt(N), where N is the number of samples. Many thanks to Sven Dähne for fishing this out ;-)
2011-03-22: ERF: added new tests for SFANode
    Those tests are currently failing.
2011-03-22: ERF: really use the debug flag in SFANode.stop_training
2011-03-02: FIX: old svm versions would break the import process
    Bug reported by Philipp Meier.
2011-03-03: DOC: updated TODO
2011-03-03: DOC: updated TODO
2011-02-23: DOC: updated CHECKLIST and TODO
2011-02-23: ERF: performance and memory improvements for KNNClassifier.
    Transform an n_samples^2 * n_features array into n_samples^2. Thanks to Fabian Pedregosa!
2011-02-17: FIX: Small cosmetic fix in flow inspection for the case when stop_training doesn't receive any argument.
2011-02-17: FIX: Do not wrap abstract scikits.learn transformers
2011-02-17: Robustify wrap_scikits_transformer.
    Fix for the case that a transformer does not implement some of the methods.
2011-01-31: ERF: running the self-contained test script now blocks the prompt
2011-01-27: FIX: Made the target checks more permissive when execution ends or is aborted.
2011-01-27: ERF: Added additional checks for copy flag in BiLayer.
2011-01-27: FIX: Fixed regression in inspection method output.
2011-01-27: FIX: BiLayer was not compatible with all return value options.
2011-01-27: FIX: Fixed some leftover inconsistencies from BiMDP stop_training interface changes.
2011-01-26: DOC: with_extension doesn't work for coroutines, tell epydoc about extension
2011-01-25: ERF: remove unittest-style wrapper class from test_extension.py.
2011-01-25: OTH: use context managers to open/close file for pickling
2011-01-25: ERF: Check for empty slides in execution as well.
2011-01-25: ERF: Improved the behavior of show_training if an already trained flow is given (a special exception is now raised).
2011-01-17: FIX: add trove classifiers for Python 2 and 3.
2011-01-14: ERF: Updated the Parallel Python support, especially the NetworkPPScheduler.
    It now works with pp 1.6, and there are some refactoring and robustness improvements as well.
2011-01-13: FIX: really update CHECKLIST
2011-01-13: NEW: mdp descriptions now defined in the module.
    The new descriptions are used in the setup.py script and in generating the web site.
2011-01-13: ERF: get version and description in a robust way.

-------------------------------------------------------------------------------

MDP-3.0:

2011-01-12: FIX: import NormalizeNode in mdp.nodes
2011-01-12: FIX: no return in IdentityNode method
2011-01-12: FIX: test_scikits should be run only if scipy is available
2011-01-12: FIX: tell epydoc that nodes are documented in rst
2011-01-11: NEW: convert scikits docstrings to rst
    This way I'm only getting missing target warnings and the text is parsed as rst. No monospace font :)
2011-01-11: FIX: fix some epydoc warnings, add some links and a forgotten export
2011-01-07: MISC: duplicate some useful code among the three wrappers
    Last merge from scikits branch.
2011-01-11: ERF: make scikits default docs rst-compatible
2011-01-10: Merge branch 'namespace_fixup'
    Conflicts: mdp/__init__.py, mdp/nodes/__init__.py, mdp/utils/__init__.py
    Add missing comma in __all__ in mdp/__init__.py.
2011-01-10: OTH: shorten test name
    (Looked awkward in py.test output.)
2011-01-10: NEW: introduce MDPNSDEBUG environment variable
    This variable controls namespace fixup debugging messages. Also remove 'advanced string formatting' in favour of printf-style statements (for python 2.5 compatibility).
2011-01-10: FIX: made the one scikits test a bit more useful
2011-01-10: FIX: there is no problem with output_dim in scikits classifiers because they are identity nodes
2011-01-10: DOC: documentation of output_dim problem in scikits.learn wrappers
2011-01-10: FIX: check for existence of im_func in classifier wrapper
2011-01-10: FIX: fail meaningfully when trying to set output_dim
2011-01-10: FIX: added docstrings to scikits wrappers
2011-01-10: FIX: Cache mechanism called pre_execution_check with too much information
2011-01-10: FIX: broken variable for disabling scikits.learn
2011-01-10: FIX: fix wrapper string in scikits nodes
2011-01-10: FIX: fixed hard-coded scikits test.
2011-01-10: FIX: fixed pp_remote support a bit.
    I commented out the test. There's no point in having a test that requires a working ssh setup. We should move that code to the examples. TODO: I still did not manage to test the remote support properly: it does not work if ssh spits some session information on startup :-(
2011-01-10: FIX: fix namespace issues with scikits nodes.
2011-01-10: FIX: Made the straightforward fixes for pp 1.6 support, not sure if this is working now.
2011-01-10: Merge remote branch 'origin/master'
2011-01-10: Merge a branch with some scikits work
    I'm doing this as a merge instead of rebasing, because the work was really done in parallel, and e.g. Tiziano removed the file, which I moved and updated.
2011-01-10: FIX: make scikits support depend on version 0.5
    Version 0.4 breaks... Version 0.5 at least does learn & execute.
2011-01-10: FIX: move scikits test into mdp/tests/
2011-01-10: Merge remote branch 'origin/master'
    Conflicts: mdp/test/test_pp_local.py
2011-01-10: FIX: Updated local scheduler to work with latest pp version 1.6.
2011-01-10: FIX: fix name leaking in nodes/__init__.py
2011-01-10: FIX: add scikits to list of dependencies in testall.py
2011-01-10: FIX: Fixed local Parallel Python tests.
2011-01-10: FIX: make test_ISFANode more robust.
2011-01-10: FIX: remove stale test_scikits.py file
2011-01-10: NEW: merged constant_expansion_node branch.
    Adds new GeneralExpansionNode thanks to Alberto Escalante. Tests and node had to be adjusted to merge cleanly and to follow new conventions. TODO: pseudo_inverse method is quite fragile, why?
2011-01-10: ERF: Added native traceback flag to IDE test run template.
2011-01-09: FIX: remove bitrot in test_pp_*.py.
    This makes the local tests pass with parallel python 1.5.4. Remote tests still fail because, amongst other things, the server list must be passed explicitly. The tests fail horribly with newer pp versions.
2011-01-09: FIX: kill epydoc warnings
2011-01-09: Revert "DOC: migrated class.__doc__ to rst for nerual_gas_nodes"
    This reverts commit 2c2a4cd72ecd7d2dbb2032dd7db4f0754b3ba846.
2011-01-07: FIX: make joblib version checking more robust
    e.g. joblib.__version__='1.0.0-gitABCEDFG' should pass.
2011-01-07: FIX: be more tolerant against missing modules in scikits.learn.
    Depending on the installation procedure, users may or may not have some of the scikits.learn modules. We just wrap the modules we find.
2011-01-07: FIX: add wrongly removed 'scikits' namespace from mdp.nodes.
2011-01-07: FIX: wrong version in scikits.learn info item.
2011-01-07: ERF: Merged "scikits" branch into "master".
    Conflicts: mdp/__init__.py. Fixed new config syntax and generic_tests.
2011-01-06: FIX: fixed testall script. Now it works.
    The script should be ported to use subprocess, so that it runs OS-agnostically.
2011-01-05: ERF: improved testall.py
2011-01-05: DOC: updated CHECKLIST for release.
2011-01-05: FIX: Slightly changed the include_last_sample argument to fix the problem with the standard parallel fork implementation.
2011-01-05: FIX: fix test broken due to new keyword in SFANode.
2011-01-05: FIX: added include_last_sample to SFA2Node too.
2011-01-05: ERF: added include_last_sample switch to SFANode.
    Motivation and discussion can be found in this thread:
    http://sourceforge.net/mailarchive/forum.php?thread_name=20100826183424.GF26995%40tulpenbaum.cognition.tu-berlin.de&forum_name=mdp-toolkit-users
2011-01-05: FIX: clean up __all__ in mdp/__init__.py
    numx* should not be in __all__. Quoting Tiziano:
    > numx was not in __all__, and therefore was not documented by epydoc.
    > Once it is added to __all__, indeed things break heavily.
    Well, I think even if it was by mistake, it is a good thing that numx is not in __all__. mdp.numx is an *internal* thing. People doing from mdp import * should not get numx. So, leave it like this and we don't even need a workaround for epydoc. While at it, remove whitening, which was already gone, and make the list alphabetical.
2011-01-04: Merge branch 'cache'
    Conflicts: mdp/__init__.py
2011-01-04: FIX: make setup.py also run on python3
    The code must run on all supported python versions. "try: ... except Type, v: ..." and file() cannot be used. While at it, I'm changing exec() to re.match. It's simpler.
2011-01-04: FIX: python2.5 has no itertools.product
2011-01-04: ERF: added script to test all python versions and dependencies.
2011-01-04: FIX: fix name error in configuration.py
2011-01-04: ERF: added way of disabling optional dependencies
    Optional dependencies can be disabled by setting the env variable MDP_DISABLE_DEPNAME.
2011-01-04: ERF: remove long deprecated helper functions.
    pca and fastica survive because we use them as an ad on the site and because most probably they are the only ones in real use out there.
2011-01-04: FIX: fix test failures due to previous commit.
2011-01-04: ERF: cleaned up mdp/__init__.py.
    Configuration is now a separate file. The logic of numx loading has been changed (again!). numpy/scipy can not be treated as an external dependency.
2011-01-04: FIX: fix generic node test failures for TimeFrameSlidingWindowNode.
2011-01-04: ERF: support iterables in generic node tests.
2011-01-03: FIX: MDP depends on joblib >= 0.4.3
2011-01-03: FIX: fixed git log format in CHECKLIST
2011-01-03: DOC: add __homepage__ variable in __init__.py
2011-01-03: FIX: removed useless file.
2011-01-03: ERF: updated author and maintainers, put version in one single place.
2011-01-03: DOC: changed reference to installation instructions
2011-01-03: ERF: removed our names from COPYRIGHT.
    It makes little sense to keep the list of maintainers up-to-date in this file. We have one authoritative source: the development.rst page.
2010-12-31: FIX: fixed TimeDelayNode docstring rst formatting
2010-12-31: NEW: new TimeDelay and TimeDelaySlidingWindow nodes.
    Thanks to Sebastian Höfer!
2010-12-31: DOC: specify dependency in svm nodes.
2010-12-31: FIX: change order of nodes in nodes/__init__.py
    The __all__ order is somewhat arbitrary, but better than before. It is needed for the automatic node list generation on the web site.
2010-12-31: DOC: migrated class.__doc__ to rst for svm family
2010-12-31: FIX: logic and wording of the symeig dependency test were wrong.
    I don't have a 64bit machine to test that the function in mdp/utils/routines.py still does the right thing. If it does not, just change mdp.config.has_symeig == '...' to mdp.config.has_symeig != '...' ;-)
2010-12-31: DOC: migrated class.__doc__ to rst for misc_nodes
2010-12-31: DOC: migrated class.__doc__ to rst for classifier_nodes
2010-12-31: DOC: migrated class.__doc__ to rst for expansion_nodes
2010-12-31: DOC: migrated class.__doc__ to rst for nerual_gas_nodes
2010-12-31: DOC: migrated class.__doc__ to rst for regression_nodes
2010-12-31: DOC: migrated class.__doc__ to rst for rbm_nodes
2010-12-31: DOC: migrated class.__doc__ to rst for lle_nodes
2010-12-31: DOC: migrated class.__doc__ to rst for em_nodes
2010-12-31: DOC: migrated class.__doc__ to rst for fda_nodes
2010-12-31: DOC: migrated class.__doc__ to rst for sfa family
2010-12-31: DOC: migrated class.__doc__ to rst for ica family
2010-12-31: DOC: migrated class.__doc__ to rst for nipals
2010-12-31: DOC: migrated class.__doc__ to rst for pca_nodes
2010-12-31: DOC: migrate convolution nodes to rst docstring
2010-12-31: DOC: fixed migration to rst docstrings for xsfa_nodes.
2010-12-30: Merge branch 'feature/libsvm_291_support'
    Conflicts: mdp/__init__.py, mdp/test/test_contrib.py
2010-12-30: FIX: failing dtype_consistency_tests with numpy as backend
    The config.has_symeig logic was that config.has_symeig is False when using the _fake_symeig compatibility wrapper. Dtypes were selected based on that.
2010-12-30: FIX: make sure mdp.config.has_{numpy,scipy} are always set, really
2010-12-30: FIX: make sure mdp.config.has_{numpy,scipy} are always set
2010-12-30: FIX: Don’t try to fetch version of numx_version twice.
2010-12-29: ERF: Put SVM tests into a separate file.
    Also: Fix skip_on_condition on class.
2010-12-27: DOC: rename ExternalDepFail to -Failed and document stuff
2010-12-27: FIX: test_seed failed when running bimdp.test() after mdp.test()
    The reason was that when the test functions were imported from mdp.test into bimdp.test, they still referenced the mdp.test._SEED global variable. But the --seed option in bimdp is independent, so it was set to a different random seed. By duplicating test_seed.py, a second independent global is created.
2010-12-27: Add a test for separable data.
2010-12-27: NEW: check if joblib is >= 0.4.6
2010-12-27: ERF: Restructured the libsvm tests.
2010-12-27: OTH: make COPYRIGHT valid rst
    This makes it easier to include in the webpage.
2010-12-27: Merge branch 'config_simplification'
    Conflicts: mdp/__init__.py
    This merge commit contains an adaptation of caching to the new framework.
2010-12-27: NEW: convert caching_extension.py to rst
2010-12-27: Merge branch 'rst_docstrings'
    Conflicts: mdp/signal_node.py
2010-12-27: Merge branch 'cache'
    Conflicts: mdp/caching/caching_extension.py
2010-12-27: Merge commit '06007d3e8987a6c8574fd63c7608b37e9e657e66'
2010-12-27: ERF: Added test for execute_method in classifiers.
2010-12-27: ERF: Add validation for libsvm parameters.
2010-12-27: ERF: Get rid of superfluous parameter handling. Use svm_parameter directly.
2010-12-27: FIX: Get the libsvm tests running for versions >=2.91.
    TODO:
    * The new interface still needs some polishing.
    * We need to test the RBF classifier. It won’t work right now.
2010-12-24: FIX: fix test of PCANode for degenerate case, it was failing occasionally.
2010-12-24: removed now useless copy method overwriting in NoiseNode
2010-12-24: Merge branches 'deepcopy' and 'master'
2010-12-24: ERF: added additional tests for node and flow deepcopy
    These tests fail in the master branch right now, because lambdas can not be pickled!
2010-12-24: ERF: added additional tests for node and flow deepcopy
2010-12-24: Merge branch 'master' of ssh+git://mdp-toolkit.git.sourceforge.net/gitroot/mdp-toolkit/mdp-toolkit
2010-12-24: FIX: fixed bimdp.test() under windows.
    The FIX introduced with dfa8ec5a0d367941e4c297325a43361d579f8bbd was not working on windows, because git does not support symlinks there. It has been fixed differently now, and it should work reliably under any system, I hope.
2010-12-22: ERF: Improved the exception handling for failing training argument check (user now gets a better error message).
2010-12-22: changed my email address in copyright and setup
2010-12-21: Merge remote branch 'new_guidelines'
2010-12-21: FIX: now bimdp.test() really works as advertised!
2010-12-21: FIX: regenerate run_tests.py script for running tests without py.test
2010-12-03: OTH: remove uses of deprecated dict.has_key()
2010-12-03: OTH: use with_statement in signal_node
2010-12-18: DOC: improved docs of caching mechanism
2010-12-18: FIX: possible to switch caching path while extension active
2010-12-18: FIX: caching started at second execution because of automatic setting of dtype, input_dim
2010-12-18: NEW: caching extension can now cache individual classes or instances
2010-12-18: DOC: a few fixes to the cache extension docs
2010-12-09: NEW: Added parallelization for NearestMean and KNN classifier.
    Also updated NearestMeanClassifier attribute names to avoid confusion.
2010-12-09: DOC: Added explanation in _default_fork docstring.
2010-12-09: DOC: Improved code documentation.
2010-12-03: FIX: copy with deepcopy everywhere, raise DeprecationWarning on old API
    Since Node.copy() is a public function, changing the allowed parameters might be bad. So just ignore the parameter and raise a DeprecationWarning if the parameter has a non-None value.
    >>> import mdp
    >>> node = mdp.Node()
    >>> node2 = node.copy(protocol=1)
    __main__:1: MDPDeprecationWarning: protocol parameter to copy() is ignored
2010-12-03: FIX: Patch for using deepcopy instead of pickle for Node and Flow
    Hi all, here is a patch and test script for replacing the pickle with deepcopy in Node.copy() and Flow.copy(). The signature of the copy function changes (the protocol keyword is no longer necessary), I don't know what the proper way to handle this is (deprecation error?), so I'll leave it up to you guys. Thanks for all the efforts, we get very positive feedback from all users of mdp and Oger! David
2010-12-02: OTH: remove redundant shogun version checking
2010-12-02: NEW: add tests for the new config architecture
2010-12-01: OTH: Convert config.has* to an attribute
    I have suspicions that using a lambda function was causing problems with the process scheduler.
    git grep -l -E 'config.has_'|xargs sed -r -i 's/(config.has_[a-z_]+)\(\)/\1/g'
2010-12-01: OTH: simplify mdp config discovery
    Create ExternalDepFound/ExternalDepFail objects immediately after attempting import.
    This way there's no need to reverse engineer where the symeig implementation comes from. TODO: test_process_schedule fails!
2010-12-01: OTH: simplify numpy/scipy imports
2010-12-01: OTH: make ExternalDep objects have boolean value
2010-12-02: OTH: replace 'map()' with 'for:' following 2to3's advice
2010-12-02: OTH: remove redundant OrderedDict implementation
    I'm also moving the OrderedDict recipe to a separate file, because it's kind of logically separate and will not be used in newer versions of Python anyway.
2010-12-01: OTH: mdp.test() should not cause SystemExit
    Function py.cmdline.pytest calls py.test.cmdline.main, but also raises SystemExit. Just call the latter one, so the interactive session can continue.
2010-12-01: Change __license__ to 'Modified BSD' in source code
    The license text was already changed in the documentation, but not in the source code.
2010-11-30: NEW: Added the execute_method argument and attribute to the Classifier class.
    This makes it possible to get the classification results from a normal flow without resorting to monkey patching (which makes nodes unpicklable).
2010-11-30: ERF: Improved code readability.
2010-11-30: NEW: Added iadd method to Flow. Also fixed the error strings.
2010-11-29: NEW: Added parallel version of Gaussian classifier.
2010-11-29: ERF: Added helper function to combine CovarianceMatrix instances.
2010-11-24: NEW: Added KNN classifier.
2010-11-24: FIX: Fixed wrongly created classifier BiNodes.
2010-11-24: NEW: Added nearest-mean classifier node.
2010-11-24: FIX: Updated test name.
2010-11-24: ERF: Some cleanup in the GaussianClassifier.
    Added an exception if any class covariance matrix is singular.
2010-11-23: BRK: Renamed GaussianClassifierNode to GaussianClassifier to follow the general classifier convention (otherwise a special case would be needed in the BiMDP autogenerated code).
2010-11-23: FIX: Updated BiMDP test for the new signature of FDANode.
2010-11-23: BRK: Updated FDANode signature to follow the newer conventions from ClassifierNode (labels instead of cl) and the PCA/SFANode (n instead of range).
2010-11-09: OTH: move PROJECT_GUIDELINES and NEW_DEVELOPER_INFO to development.rst in docs
2010-11-01: NEW: convert neural_gas_nodes.py and rbm_nodes.py docstrings to rst
2010-11-01: NEW: add reference to JoMLR article
2010-11-01: NEW: convert mdp/nodes/xsfa_nodes.py docstrings to rst
2010-11-01: NEW: convert mdp/__init__.py and mdp/nodes/signal_node.py docstrings to rst
2010-11-01: Merge commit 'c3bdc14c092b92c214f8bd76002be565706f07ca'
2010-11-01: FIX: Fixed bug in node join, leading to inconsistent state under certain conditions with FlowNode.
2010-10-31: ERF: Added copy_callable option for thread scheduler.
2010-10-16: FIX: Updated failing unittest.
2010-10-16: FIX: Fixed the parameter names in the switchboard factory.
2010-10-15: BRK: Updated the remaining switchboards.
2010-10-15: BRK: Started to replace the x_ and y_ arguments in 2d switchboards with a single _xy argument.
    So far only the Rectangular2dSwitchboard is done.
2010-09-26: FIX: Correctly distinguish Flow and BiFlow in trace inspection execute.
2010-09-25: FIX: Fixed bug in ParallelFlowNode and added unittest.
2010-09-14: DOC: Fixed wrong docstring.
2010-09-14: BRK: Improved extension context manager and decorator to only deactivate extensions that were activated by it.
    This prevents unintended side effects. Corresponding tests were added.
2010-09-14: FIX: Fixed outdated import.
2010-09-14: Merge commit '4671fc2511b3446e57afcc1974d46f15ac92cb5f'
2010-09-14: BRK: Improved the dimension consistency handling in FlowNode, also making the dimension setting stricter.
    This is to prevent really strange errors in some corner cases (e.g. when the output_dim is automatically set in execute). One generic test had to be dropped, but explicit tests for the dimension setting were added.
2010-09-14: ERF: Streamlined the parallel fork template method.
2010-09-14: FIX: Fixed the test node in the test_execute_fork unittest.
2010-09-14: ERF: Removed the ParallelBiNode, since it is no longer needed.
2010-09-14: Revert "FIX: Fixed small bug in dimension consistency check."
    This reverts commit 0b37100894f08faf3653bab09bc4b5fb21705d72.
2010-09-13: ERF: Added more unittests for parallel.
    One still fails because an improvement in FlowNode is needed.
2010-09-13: DOC: Added comment on necessary extension context improvement.
2010-09-13: FIX: Fixed small bug in dimension consistency check.
2010-09-13: ERF: Improved the threaded scheduler (better exception handling).
2010-09-13: FIX: Fixed parallel bug, missing argument in callable.
2010-09-13: ERF: Turned the use_execute_fork property into a normal method.
2010-09-12: FIX: Added the missing property decorators (this was dangerous, since a function attribute is cast as True).
2010-09-12: FIX: Fixed bug in ParallelNode.
2010-09-12: FIX: Updated ParallelFlow to use correct execution result container.
2010-09-12: FIX: Updated __all__ list.
2010-09-10: OTH: python2.5: remove PEP 3101 formatting
2010-07-30: OTH: python2.5: 'with' and 'as'
2010-09-09: Remove next-to-last import numpy from mdp codebase
    numx rules!
2010-09-09: ERF: Changed the node purging mechanism in parallel to no longer use a special extension method in FlowNode.
    Also some smaller comment updates.
2010-09-09: NEW: MDP migrates to BSD license!
2010-09-09: BRK: Clarified some subtle aspects of the parallel API and updated the code to improve consistency.
2010-09-09: Make git display python diffs better
2010-09-08: FIX: Some smaller fixes in the updated parallel code.
2010-09-08: FIX: The parallel BiMDP tests are running again. Also some more cleanup in MDP parallel.
2010-09-08: FIX: The standard MDP tests now all run again after the changes in parallel.
2010-09-07: BRK: Continued work on the parallelization update, not working yet.
2010-09-06: BRK: Started to modify the parallel node forking.
    The is_bi_training in BiNode is replaced with a use_execute_fork in the ParallelExtensionNode base class. This is more logical, less complicated and a first step towards enabling more parallelization patterns.
2010-09-02: OTH: the caching mechanism could be simplified with the latest joblib
2010-08-31: DOC: Improve docs of caching mechanism, keyword argument for verbosity of cache
2010-08-26: FIX: SFANode.train fails with a meaningful error if trained with x.shape = (1, n)
2010-08-26: Fix --seed option for py.test >= 1.3
    Py.test in version 1.3 added a feature to skip conftest.py files when they are exactly the same. This helps when a directory is copied to a subdirectory of itself and the config files are not idempotent. Unfortunately, this runs afoul of our logic to add the --seed option everywhere, irrespective of whether py.test is run implicitly for the whole mdp-toolkit or for mdp or for bimdp or for both. The solution is to modify one of the conftest.py files to be unidentical. Tested with py.test 1.2.1 and 1.3.3.
2010-08-18: OTH: Simplify docstring formatting.
2010-08-25: NEW: FANode detects singular matrices
2010-08-24: ERF: fftpack does not support float >64; those types are removed from the supported list for ConvolutionNode
2010-08-24: FIX: Remove forgotten print statements
2010-08-24: ERF: ISFA supports float>64
2010-08-23: ERF: added python version to mdp.config
2010-08-19: ERF: Add complex256 to list of unsafe dtypes
2010-08-19: FIX: Generic tests failed for MDPNUMX='numpy'
    When symeig and scipy.linalg.eigh are not present, MDP falls back into using numpy.linalg.eig. This does not support large floating point types, for some reason, which caused the tests to fail.
2010-08-17: ERF: Updated the classifier nodes that were overlooked in the big PreserveDimNode cleanup.
2010-08-17: Merge branch 'supporteddtypes'
    Conflicts: mdp/nodes/misc_nodes.py
2010-08-17: ERF: Clarify questions about support for float64 only
2010-08-17: FIX: Fixed generic tests, now check that PreserveDimNode raises the right exceptions.
2010-08-17: ERF: Added tests for PreserveDimNode.
2010-08-17: ERF: Wrapped up the PreserveDimNode changes.
    Unable to fix the generic unittests, somebody has to fix those.
2010-08-16: ERF: Refined the choice of supported dtypes
2010-08-16: ERF: Redefine default supported dtypes to be 'Floats', and modify all nodes accordingly
2010-08-16: ERF: Further simplified Cumulator mechanism; added test
    After digging into numpy's C code, we decided that at present numpy.concatenate does the right thing, i.e. it creates a buffer large enough and uses memcpy to copy the data in each array.
2010-08-16: FIX: py.test failed with an error reporting a duplicated 'seed' option
    In py.test 1.3.3 they changed the sequence in which the conftest.py files are loaded. In our case, the /mdp/test/conftest.py file was loaded before the /conftest.py file, which tried to redefine the option 'seed', causing an error.
2010-08-14: ERF: Simplified concatenation mechanism in Cumulator
    The new solution is much faster and still avoids direct calls to numpy.concatenate, which performs pairwise concatenations and wastes a lot of memory.
2010-08-14: FIX: Cumulator had no docstring
2010-08-10: FIX: Added missing exit target check in BiFlow.
    Changed the unittest to better test this case.
2010-08-10: ERF: Changed BiFlow.execute to always return a tuple (x,msg), even if the msg is empty.
    The previous changing return signature was too likely to cause errors.
2010-08-10: DOC: Small fix in docstring.
2010-08-10: DOC: Small fix in comment.
2010-08-09: ERF: Added check for case when both x and msg are None to produce better error message.
2010-08-09: ERF: Removed duplicate verbose print.
2010-08-08: FIX: Fixed the missing import for PreserveDimNode and fixed the broken generic tests for these nodes by excluding them.
    This is only a provisional fix.
2010-08-08: ERF: Rewrote the creation of the autogenerated BiNode classes.
    Went back to dynamically creating the classes at runtime. Now use a node blacklist instead of the previous whitelist. These two changes allow the import of nodes with dependencies, like the SVM nodes.
2010-08-08: NEW: Added new PreserveDimNode base class for input_dim == output_dim.
    This should avoid duplication of the set_dim methods.
2010-08-05: ERF: Tweaked the hinet CSS.
2010-08-05: DOC: Improved the ParallelFlow docstrings.
2010-07-30: DOC: Small improvements to the coroutine decorator docs.
2010-07-30: FIX: Removed some left over stop_message references.
2010-07-29: NEW: extend namespace_fixup to bimdp
2010-07-29: ERF: rework fixup_namespace function and add tests
2010-07-27: FIX: NodeMetaclass and ExtensionException were imported but not in __all__
2010-07-27: FIX: remove helper_funcs module from mdp namespace, its functions are imported into mdp
2010-07-27: NEW: Added a simple run script to run the tests from a .py file (e.g. to use the Eclipse debugger).
2010-07-27: ERF: Optimized the switchboard gradient implementation (about 10x faster).
2010-07-27: ERF: Changed one more unittest to use the scheduler context manager.
2010-07-27: ERF: Added context manager interface to Scheduler class and added unittests.
2010-07-27: ERF: Renamed callable_ to task_callable and made some small docstring updates.
2010-07-27: ERF: Removed unused imports and updated docs.
2010-07-27: FIX: Fix double delete
2010-07-26: NEW: use sphinx links in mdp.parallel docstring
2010-07-25: FIX: remove del's of nonexistent symbols
2010-07-25: NEW: use fixup_namespace to clean up __import__
2010-07-25: FIX: missing comma in __all__ was concatenating names
2010-07-25: OTH: add import of fixup_namespace without actually using it
    This is separate to check if there are no circular imports.
2010-07-24: FIX: remove nonexistent name from __all__
2010-07-23: FIX: remove unused import of hotshot.log
2010-07-23: FIX: fix broken name delete
2010-07-23: FIX: fixed setup.py (remove refs to demo & contrib)
2010-07-23: FIX: convolution nodes are conditional on scipy.signal
2010-07-23: FIX: remove del of nonexistent symbol
2010-07-23: FIX: move pp_simple_slave_test.py to examples
2010-07-23: FIX: regenerate bimdp wrappers for classifiers
2010-07-23: ERF: convert parallel python tests to py.test and disable them
2010-07-23: FIX: TODO had the wrong line ending. Sorry everybody :-(
2010-07-23: FIX: Fix tests in test_nodes_generic with svm classifier nodes.
    Add arguments to ShogunSVMClassifier and LibSVMClassifier to automatically set kernel and classifier on __init__. Add parameters to test_nodes_generic to use these arguments.
2010-07-23: DOC: updated TODO list
2010-07-23: NEW: Added new tests for Convolution2DNode
2010-07-23: FIX: removed contrib directory
2010-07-23: FIX: duplicate conftest.py twice, so py.test can be run without options, try 3
    Previous version worked only under cygwin when on windows.
2010-07-23: ERF: PEP8
2010-07-23: DOC: The reason for the strange transformations in CumulatorNode
2010-07-23: FIX: Automatic tests now cover everything, fixed a lot of stuff in the meanwhile!
2010-07-23: FIX: NormalNoise did not function properly
2010-07-23: FIX: KMeansNode failed on some special cases
2010-07-23: FIX: duplicate conftest.py twice, so py.test can be run without options, try 2
    Previous version worked only when PYTHONPATH included mdp-toolkit/.
2010-07-23: ERF: Migrated the demos to the examples. Removed the tutorial demo.
2010-07-23: FIX: duplicate conftest.py twice, so py.test can be run without options
    Note: bimdp/test/conftest.py is a symbolic link to bindmp/test/conftest.py. OTOH, conftest.py in the top directory is a little different.
2010-07-23: ERF: remove old run_coverage script
    The new way is: py.test --figleaf, and the results go into html/.
2010-07-23: OTH: Renamed NEW_DEVELOPER_INFO.txt → NEW_DEVELOPER_INFO.
2010-07-23: ERF: Made tests for SVM classifiers, Convolution2D conditional on presence of nodes
2010-07-23: Merge branch 'test_cleanup'
    Conflicts: mdp/test/_tools.py
2010-07-23: FIX: fix broken conditional test for caching
2010-07-23: ERF: simplify testing by providing commonly used imports in test/_tools
    All testing files can just say from _tools import * and they get numx, numx_rand, mult, ...
    Py.test is not imported into _tools, so that it is possible to import files which don't actually depend on py.test when it is not available. Also includes some tiny naming and space corrections.
2010-07-23: NEW: Conditional tests decorator; decorated test_caching tests
2010-07-23: FIX: use the right TESTDECIMALS in test_utils_generic
2010-07-23: ERF: convert utils tests to py.test
2010-07-23: NEW: add QuadraticFormException for QuadraticForm errors
2010-07-23: ERF: Caching now imported conditionally on joblib
2010-07-23: Merge branch 'master' of ssh+git://mdp-toolkit.git.sourceforge.net/gitroot/mdp-toolkit/mdp-toolkit
2010-07-23: Merge branch 'master' into HEAD
    Conflicts: mdp/__init__.py
2010-07-23: ERF: Added *Cumulator and ClassifierNode to __all__
2010-07-23: OTH: Trying to push changes
2010-07-23: ERF: Caching tests moved to own file, updated to py.test style
2010-07-23: ERF: de-obfuscated code in config object
2010-07-23: Merge branch 'sprint_cache'
    Conflicts: mdp/__init__.py, mdp/extension.py, mdp/nodes/__init__.py, mdp/nodes/convolution_nodes.py, mdp/test/test_extension.py, mdp/test/test_nodes.py, mdp/utils/__init__.py, mdp/utils/routines.py
2010-07-23: FIX: Windows specific fix.
2010-07-23: OTH: whitespace
2010-07-23: ERF: make generic_test_factory a public function
    It is now even documented.
2010-07-23: OTH: Fixed whitespace in convolution nodes
2010-07-23: DOC: describe the complicated big_nodes argument of _generic_test_factory
2010-07-23: NEW: tests can be run within the python shell.
    You can run the tests with
    import mdp
    mdp.test()
    You don't need py.test installed!
2010-07-23: Merge branch 'sprint_conv2'
2010-07-23: ERF: Made import of Convolution2DNode conditional on presence of scipy
2010-07-23: ERF: Added method in config object to check for existence of arbitrary module by name
2010-07-22: FIX: remove nonexistent import from __all__
2010-07-22: ERF: Adding numpy and scipy to config.
2010-07-22: Merge branch 'sprint_variadic_cumulator'
2010-07-22: FIX: fix broken test_gradient.
    It was not using the numx, numx_rand convention!!! >:-(
2010-07-22: FIX: Fixing an error in contrib tests.
2010-07-22: ERF: migrated bimdp tests to py.test
2010-07-22: FIX: remove nonexistent test name
2010-07-22: NEW: Added convolution class and tests
2010-07-22: ERF: Merged master to convolution branch and removed vestigial file
2010-07-22: DOC: Update docs
2010-07-22: ERF: Cache extension migrated to own directory, tests updated
2010-07-22: NEW: Init for caching module
2010-07-22: OTH: Attempted refactoring cache extension into own directory, got metaclass conflict
2010-07-22: ERF: update gitignore for py.test generated files
2010-07-22: Merge branch 'test_framework'
    Conflicts: bimdp/test/test_binode.py, mdp/nodes/misc_nodes.py
2010-07-22: NEW: Context manager for caching extension
2010-07-22: FIX: Fixed documentation and call with wrong argument
2010-07-22: DOC: Fixed error in documentation of context manager
2010-07-22: ERF: Deleted forgotten print statement
2010-07-22: NEW: activate_caching, deactivate_caching
2010-07-22: ERF: Simplified extension context manager
2010-07-22: ERF: port contrib and hinet to py.test
2010-07-22: ERF: Cache directory can be changed or picked at random (default)
2010-07-22: Merge branch 'sprint_info_refactoring'
2010-07-22: ERF: Renamed ‘Requirements’ → ‘MDPConfiguration’.
    Changed the API style a little. Needs more love with version checking.
2010-07-22: ERF: Caching extension moved to own file
2010-07-22: ERF: Tests for cache extension
2010-07-22: DOC: add note about where the links must be changed in sphinx rst
2010-07-22: FIX: Exception missing mdp module.
2010-07-22: ERF: ClassifyCumulator also inherits from VariadicCumulator.
    A little bit useless atm, since many methods are overwritten.
2010-07-22: FIX: UTF-8 problem. I want to have Python 3 only…
2010-07-22: FIX: broken linear regression test
    Use non-random input values, so that it doesn't fail randomly.
2010-07-22: NEW: added hooks and a command-line option in py.test
    Use --seed to set the random seed. Added report of extended configuration info before and after testing reports. Added unit test for random seed [should only fail in case of a bug in py.test and/or numpy].
2010-07-22: FIX: make generated hinet tests pass
2010-07-22: ERF: Cache extension now based on joblib
2010-07-22: NEW: Added VariadicCumulator with mdp.Cumulator being a special case of it.
    VariadicCumulator adds as many automatic fields as are specified in the initialisation function.
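    The VariadicCumulator entry above describes a class factory that creates cumulator classes with one automatically accumulated field per name given at creation time. A small self-contained sketch of the same idea (an illustration of the concept, not MDP's actual implementation):

    # Hedged sketch of a "variadic" cumulator: the factory returns a class
    # that keeps one accumulation list per requested field and concatenates
    # each field once when training stops.
    import numpy as np

    def variadic_cumulator(*fields):
        class _Cumulator(object):
            def __init__(self):
                for name in fields:
                    setattr(self, name, [])
            def train(self, *chunks):
                # one data chunk per field, in the order the fields were given
                for name, chunk in zip(fields, chunks):
                    getattr(self, name).append(chunk)
            def stop_training(self):
                # a single concatenation per field avoids repeated pairwise copies
                for name in fields:
                    setattr(self, name, np.concatenate(getattr(self, name)))
        return _Cumulator

    # usage: a cumulator with 'data' and 'labels' fields
    Cumulator = variadic_cumulator('data', 'labels')
    c = Cumulator()
    c.train(np.ones((5, 2)), np.zeros(5))
    c.stop_training()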
2010-07-22: ERF: use py.test.raises for exception testing
2010-07-22: FIX: correct PCANode generation
2010-07-22: ERF: extract PCANode generation to a helper function
2010-07-22: ERF: migrate tests for hinet to py.test
2010-07-22: FIX: Switchboard.is_invertible() cannot be a staticmethod
2010-07-22: ERF: remove previously converted tests from test_nodes.py
2010-07-22: ERF: migrate tests for CuBICA- and TDSEPNodes to py.test
2010-07-22: ERF: migrate tests for HistParade- and TimeFrameNodes to py.test
2010-07-22: ERF: migrate tests for EtaComputerNode to py.test
2010-07-22: ERF: migrate tests for GrowingNeuralGasNode to py.test
2010-07-22: ERF: migrate tests for NoiseNode to py.test
2010-07-22: ERF: migrate tests for FDANode to py.test
2010-07-22: ERF: migrate tests for GaussianClassifier to py.test
2010-07-22: ERF: migrate tests for FANode to py.test
2010-07-22: ERF: migrate tests for ISFANode to py.test
2010-07-22: ERF: migrate tests for RBM*Node to py.test
2010-07-22: ERF: move spinner to _tools
2010-07-22: ERF: migrate tests for LinearRegressionNode to py.test
2010-07-22: ERF: migrate tests for CutoffNode to py.test
2010-07-22: ERF: migrate tests for HistogramNode to py.test
2010-07-22: ERF: migrate tests for RBFExpansionNode to py.test
2010-07-22: ERF: migrate tests for AdaptiveCutoffNode to py.test
2010-07-22: ERF: migrate tests for SFA2Node to py.test
2010-07-22: ERF: migrate tests for SFANode to py.test
2010-07-22: ERF: migrate tests for WhiteningNode to py.test
2010-07-22: ERF: migrate tests for PCANode to py.test
2010-07-22: ERF: migrate tests for PolynomialExpansionNode to py.test
2010-07-22: ERF: migrate covariance Node tests to py.test
2010-07-22: ERF: add node copying, saving and training tests
2010-07-22: ERF: move BogusNode* to _tools
2010-07-22: FIX: fix python2.5 compatibility
2010-07-21: FIX: Don’t include utils until it is needed.
2010-07-21: ERF: Drop Python 2.4, use built-in all().
2010-07-21: ERF: Fix some of the issues with contrib.__all__ and dependencies.
2010-07-21: FIX: Fixing bugs with scheduling.
    Avoid printing while mdp is imported. Rename _info() to req.info() in MDPVersionCallable. req.info() is a different object but its exact layout is not important for scheduling.
2010-07-21: ERF: add tests for RBFExpansionNode
2010-07-21: FIX: don't use self in staticmethod
2010-07-21: ERF: add generated tests for FastICA
2010-07-21: ERF: generate tests for reversing
2010-07-21: ERF: use any and all to check conditions
2010-07-21: BRK: make is_trainable staticmethod where possible
2010-07-21: BRK: make is_invertible staticmethod where possible
2010-07-21: ERF: Everybody loves nicer output
2010-07-21: OTH: implemented the obvious thing of decorating the execute method with joblib; it does not work and breaks the caching extension
2010-07-21: ERF: Remove all the cool logic and decorators because __dict__ is not ordered and all the methods are executed in arbitrary order.
2010-07-21: Merge commit '38664d80675ea7304ee28b460ba88c547b55629e'
2010-07-20: NEW: Added a new Requirements object which checks for available features.
    Current version needs some discussion/rework because class methods are not initialised in order...
2010-07-21: ERF: generate dimdtypeset consistency tests
2010-07-21: FIX: Fixed the remaining broken unittests.
2010-07-21: ERF: generate outputdim_consistency tests
2010-07-21: NEW: Adding the ‘official’ backport for OrderedDict to our routines.py.
2010-07-21: FIX: Updated the bimdp.hinet package to work with the new stop_training.
2010-07-21: ERF: world's first automatic test generation 2010-07-21: OTH: Merged cache branch with convolution branch 2010-07-21: FIX: Updated the BiFlowNode to work with new stop_training. 2010-07-21: DOC: Improved documentation for Convolution2DNode 2010-07-21: FIX: Fixed the failing BiFlow unittest. 2010-07-21: ERF: Convolution2DNode now supports FFT convolution, checks for the validity of all of its argments; the new tests are in a file outside the repository, waiting for the new test framework 2010-07-21: FIX: Fixed the inspection by removing the stop_message reference. 2010-07-21: BRK: Simplified the coroutine decorator according the stop_message removal. 2010-07-21: BRK: Updated and simplified BiFlow for the new stop_training specification. Not tested yet. 2010-07-21: ERF: Added output dimension check to IdentityNode. 2010-07-21: ERF: ported test_flows to py.test 2010-07-21: ERf: ported test_graph.py to py.test 2010-07-21: ERF: ported test_parallelflows to py.test 2010-07-21: ERF: ported test_parallelhinet.py to py.test 2010-07-21: ERF: ported test_parallelnodes.py to py.test 2010-07-21: ERF: ported test_process_schedule to py.test 2010-07-21: ERF: ported test_extension to py.test 2010-07-21: FIX: caching extension now works with non-contiguous arrays 2010-07-21: OTH: Updated bimdp todo list. 2010-07-21: BRK: Started with change of the stop_training signature in bimdp and removed stop_message. The binode file is basically done. 2010-07-21: NEW: New utility function to generate Gabor wavelets 2010-07-21: ERF: ported test_classifier to py.test 2010-07-21: ERF: added new testing tools file. 2010-07-21: ERF: ported test_schedule to py.test 2010-07-20: FIX: globally remove trailing whitespace Used http://svn.python.org/projects/python/trunk/Tools/scripts/reindent.py . 2010-07-20: FIX: broken import 2010-07-20: Merge branch 'py3k', remote branch 'origin/sprint_gradient' into master Conflicts: bimdp/inspection/trace_inspection.py 2010-07-20: FIX: Removed use of new.instancemethod from inspection. 2010-07-20: FIX: typo in Flow.__setitem__ 2010-07-20: ERF: use context manager for file access 2010-07-20: FIX: don't create a list that is ignored RefactoringTool: Warnings/messages while refactoring: RefactoringTool: ### In file /home/zbyszek/mdp/mdp-toolkit/build/py3k/mdp/linear_flows.py ### RefactoringTool: Line 481: You should use a for loop here 2010-07-20: FIX: Fixed the integer division in bimdp. 2010-07-20: FIX: remove bogus import in test/__init__.py 2010-07-20: FIX: get rid of the "callable" variable and fix the scheduler shutdown An explicit flush was needed. 2010-07-20: NEW convolution2d node 2010-07-20: ERF: esthetic change: activate heart beat spinner 2010-07-20: FIX: another integer division problem 2010-07-20: FIX: don't use None in comparisons 2010-07-20: FIX: integer division again 2010-07-20: FIX: fixed handling of binary mode for stdin and stdout for pickle 2010-07-20: FIX: remove unused import 2010-07-20: FIX: don't compare float with NoneType In Python2.6 None is smaller than any float, but in Python3 the comparison fails with TypeError. 
2010-07-20: FIX: whitespace 2010-07-20: FIX: correct relative imports in process_schedule.py 2010-07-20: FIX: another integer division problem 2010-07-20: FIX: whitespace 2010-07-20: FIX: add x before range git grep -l 'for.*[^x]range' mdp/test/|xargs sed -i -r 's/for(.*[^x])in range/for\1in xrange/g' 2010-07-20: FIX: matlab→python and py3k cleanups a = a + 1 becomes a += 1 / becomes // range() becomes xrange() useless cast removed exception variable doesn't leak out after the catch block anymore 2010-07-19: FIX: whitespace 2010-07-20: FIX: zip returns a generator in py3k, we need a list 2010-07-19: FIX: fixed integer division 2010-07-19: FIX: fixed calling unbound class methods in extensions. unbound methods have been removed in python3. 2010-07-19: ERF: refactored py3tool 2010-07-19: FIX: use sets not lists in testing GraphNode 2010-07-19: FIX: open files in binary mode when loading a pickle 2010-07-19: MISC: gitignore build stuff 2010-07-19: ERF: use 'open' function instead of the deprecated 'file' in mdp/demo 2010-07-19: FIX: missing import in setup.py [2] 2010-07-19: FIX: missing import in setup.py 2010-07-19: NEW: python3 setup logic (from numpy) 2010-07-19: ERF: use 'open' function instead of the deprecated 'file'. 2010-07-20: ERF: Restricted the gradient default implementation to the IdentityNode, now raise an exception as the default. 2010-07-20: ERF: Removed the explicitly defined IdentityBiNode, use the autogenerated one instead. 2010-07-20: NEW context manager for extensions 2010-07-20: NEW basic implementation of the cache execute extension 2010-07-19: ERF: Added a gradient unittest with a small network. 2010-07-19: ERF: Improved one layer gradient unittest into a functional test. Insignificant cleanup in gradient module. 2010-07-19: FIX: Fixed the switchboard gradient and added unittest. 2010-07-19: NEW: Added the switchboard gradient implementation with a test (more unittests are needed). 2010-07-19: NEW: Added gradient implementation for layer with some unittests (more are needed). Added check in gradient base node that node has finished training. 2010-07-19: DOC: Added the new project guidelines. 2010-07-19: ERF: use integer division 2010-07-19: NEW: Added gradient extension with some implementations for simple nodes (no hinet yet). 2010-07-02: Added kwargs to SenderBiNode init to enable multiple Inheritance. 2010-07-02: Fixed bug in inspection: If nodes were added in later training phases the clickability broke for all nodes. Again this affected XSFA. Changed text in inspection from layer to node. 2010-07-02: Updated the inspection example (more training phases for better illustration). Tiny doc update in SenderBiNode. 2010-07-02: Fixed bug for normal node in BiFlowNode (this broke the XSFA inspection example). 2010-07-01: Simplified the inspection msg printing. 2010-07-01: Improved the inspection message printing (alphabetic sorting and highlighted keywords). 2010-06-29: Improved the key order in the inspection msg HTML view. 2010-06-29: Improved the array HTML display in the inspection. 2010-06-29: Fixed one corner case bug in coroutine decorator and added more unittests. Updated bimdp todo list. Small docstring update in switchboard. 2010-06-16: Merged the coroutine mixin class into the BiNode base class. Added _bi_reset hook method that should be overwritten instead of bi_reset. This is more consistent with the standard MDP design. The bi_reset template method now clears the coroutine table. 
Derived BiNode classes should rename their bi_reset to _bi_reset, but unless they use the coroutine decorator there is currently no negative effect for not doing so. Updated the BiMDP todo list.
2010-06-15: Fixed duplicate classifier nodes in autogenerated code.
2010-06-14: Added check and unittest for corner case of a directly terminating coroutine.
2010-06-13: Reverted my misguided fixes to the metaclass wrapping helpers.
2010-06-11: Small comments update.
2010-06-11: Updated the codecorator to work with the updated NodeMetaclass wrapping helper functions. Add new feature to specify argument default values. Changed the _coroutine_instances to be initially None for better efficiency.
2010-06-11: Rewrote the NodeMetaclass wrapping helper functions. The old versions contained a bug (default values would prevent user defined values from being passed to the function). The new version is also arguably more elegant and flexible. A unittest was added for parts of the new functionality.
2010-06-11: Fixed the n argument in PCANode inverse.
2010-06-09: Added new coroutine decorator and mixin for BiMDP, allowing easy continuations (without boilerplate code or explicit state machines).
2010-06-09: Fixed bug in SenderBiNode.
2010-06-09: Fixed typo.
2010-06-09: Fixed broken SFA execute range argument and renamed it to n. Added unittests for PCA and SFA n arguments. Added some missing SFANode attributes to init.
2010-06-09: Fixed bug in BiFlowNode.
2010-05-31: Added loop check for previous button in slideshow.
2010-05-27: Numpy still uses the deprecated string exceptions, which were naturally not caught during inspection. I now changed inspection to catch everything.
2010-05-19: Added check for loop setting in slideshow next().
2010-05-17: Very small cosmetic updates.
2010-05-17: fixed other two python2.4 incompatibilities in demos.
2010-05-15: fixed bug with ProcessScheduler when mdp installed by debian package. Now it works correctly even when multiple versions of mdp are installed, for example a system version via debian and a local git version. A unit test has been added to make sure parent and child processes are running the very same mdp. Thanks to Yaroslav Halchenko for pointing out the problem!
-------------------------------------------------------------------------------
MDP-2.6:
2010-05-14: fix spurious shogun warning.
2010-05-14: applied patch from Rike to re-enable some of the working Shogun tests.
2010-05-14: fixed bug when using most recent numpy (1.4.1). utils.get_dtype was failing when called with 'All' argument. most of MDP unittests were failing!
2010-05-14: added stylesheet files to setup script. only *.py files are installed by default.
2010-05-14: modified mechanism to get git revision in order to properly handle the case when git fails. added stylesheets to MANIFEST template
2010-05-14: several additions to distribution files
2010-05-14: removed NaiveBayesClassifier from automatically generated nodes in bimdp. regenerated node wrappers
2010-05-14: fixed bug in GaussianClassifierNode and FDANode when training with a list of labels. Removed obsolete 'uniq' function in utils. we can use builtin set instead.
2010-05-14: disabled Shogun tests. they often fail without apparent reasons. try for example: python2.5 -c 'import mdp; mdp.test(seed=257416710)'
2010-05-14: Added a maximum iteration constraint to the k-means classifier.
2010-05-14: Moved the NaiveBayesClassifier to the demo.
2010-05-14: Fixed a bug where the Classifier rank() method returned items in the reverse order.
2010-05-13: LLENode was not fixed ;-) disabled 2D test and added reminder to TODO
2010-05-13: fixed ever failing LLENode test.
2010-05-13: added info about LibSVM and Shogun in mdp.info()
2010-05-13: first take at disabling SVM tests when libraries are not available
2010-05-12: Updated the todo list to reflect the latest developments.
2010-05-12: Adjusted the BiClassifier node according to the renamings.
2010-05-12: Renamings in the classifier nodes (classify->label, cl->labels).
2010-05-11: removed code duplication from test/__init__.py
2010-05-11: added info function to display infos about MDP and dependencies
2010-04-29: Added ability to exit BiFlow with -1 target value, making it easier to use inverse execution.
2010-04-22: Inserted the new sys.path behavior also in the main clause.
2010-04-22: Changed the sys.path behavior in subprocesses to fix problems with multiple MDP versions on a system.
2010-04-14: Fall back on default if specified browser is not available.
2010-04-14: Added tracebacks to exception handling during debug inspection.
2010-04-12: Added ability to specify a specific browser for slideshows, also changed the argument name.
2010-04-09: Rewrote K-Means training to be aligned with the Cumulator training.
2010-04-09: Adapted the SVM Classifiers to use the new ClassifierCumulator class.
2010-04-03: Made a Cumulator version for simple Classifiers.
2010-04-06: Made GaussianClassifierNode inherit from ClassifierNode.
2010-04-04: Simplified the inheritance names in classifier nodes’ declarations.
2010-03-29: Patch from Benjamin Schrauwen to fix the method pollution issue.
2010-03-29: Fixed the dysfunctional unittest, it now breaks, currently no solution.
2010-03-29: Added test for pp scheduler with real flow.
2010-03-28: Improved the extension verbose printing.
2010-03-28: Fixed one more issue (method pollution) and added a test.
2010-03-28: Added new failing test, caused by pollution.
2010-03-28: Added unittest for the special behavior when multiple extensions extend the same method.
2010-03-28: Patch from Benjamin Schrauwen for extension mechanism.
2010-03-27: Slightly changed the behavior of targeted message keys: They are now removed even if they are not args of the method. Updated the unittests accordingly.
2010-03-27: Fixed bug for layer containing a single node.
2010-03-26: Added BiClassifier and automatic creation of BiMDP versions for MDP classifiers (currently except the SVM based ones).
2010-03-25: Applied patch from Benjamin Schrauwen, the BiMDP tests now all pass.
2010-03-25: Changed the BiMDP msg key separator from "=>" to "->" to allow for more consistent convention in the future, when more options might be added. Updated the todos accordingly.
2010-03-25: Undid the changes to the extension mechanism, they are now available in a separate branch.
2010-03-25: Enable inheritance in extensions. Patch thanks to Benjamin Schrauwen! (cherry picking from commit aacbc906be0f79e66124085e1c51c35a0aee731d) Enable inheritance in extensions. Test thanks to Benjamin Schrauwen! (cherry picking from commit 89b7facb223aff09d4581ce5aa68d07a8ef47b1b)
2010-03-25: Enable inheritance in extensions. Test thanks to Benjamin Schrauwen!
2010-03-25: Enable inheritance in extensions. Patch thanks to Benjamin Schrauwen!
2010-03-24: Python fixes for compatibility with 2.4.
2010-03-24: Reverted some of the Python 2.6 (and Numpy 1.4) only changes.
2010-03-24: Added verbose argument to mdp.activate_extension.
2010-03-23: Added a section about dealing with testing branches.
2010-03-22: Added unittest for bimdp parallel stuff and updated another. Update the todos. 2010-03-22: Improved the CloneBiLayer message data splitting and added corresponding unittest. 2010-03-22: Slightly changed the message handing in BiSwitchboard and added corresponding unittests. Small rename in BiNode. Small todo update. 2010-03-16: Added a simple k-means clustering classifier. 2010-03-14: Added node_id support to BiFlowNode. 2010-03-13: Added utility function izip_stretched. 2010-03-12: Added test for adding node to flow. Moved one BiNode test to correct class. 2010-03-11: Fixed bug in flow in-place adding method. 2010-03-11: Added a refcast to classify and prob methods. 2010-03-10: Fixed the remaining failing tests from Tiziano. It was caused by an ambiguity of the BiFlow.train docstring, so this is now explained in more detail. An additional check was added to BiFlow to detect this kind of error. 2010-03-10: Refactored the additional argument check into seperate method, so that it can be reused by derived flow classes (especially BiFlow). 2010-03-10: Fixed the test for FDA in a biflow. 2010-03-10: Added other failing tests for BiFlow. 2010-03-10: Added failing tests for BiFlows. 2010-03-10: Added special __add__ method to BiNode, creating a BiFlow. Also added corresponding unittest. 2010-03-09: Fixed an assertion bug in Shogun test. 2010-03-09: Added unittests for recent stop_message magic. Also made some small improvements and added for documentation for this. Made small updates to the general bimdp docstring. 2010-03-09: Added a section on using git with Windows or Eclipse. Added a section on following PEP 8. 2010-03-09: Added checks for input_dim. 2010-03-09: We need to have shogun 0.9 for our SVM Classifier. 2010-03-08: Updated NEW_DEVELOPER_INFO to ‘The git situation’. 2010-03-08: Fixed Py2.6 DeprecationWarning in RBMLearning test. 2010-03-08: Fixed a new bug in BiNode. Moved the SVN nodes to the classifier part in the autogen module. 2010-03-08: Made a larger set of additions to the SVM tests. 2010-03-08: Updated the docstrings for the SVM classifiers. 2010-03-06: Fix: Forgot to set the classifier_type in libsvm. 2010-03-06: Added simple support for probability estimations in LibSVM through our _prob() method. 2010-03-04: Renamed SVM_Node -> SVM_Classifier. 2010-03-02: Supplying arguments to the shogun classifier works now. Improved the test suite. 2010-03-01: Lots of updates and refinements to shogun integration. 2010-02-28: Added some recognition for KernelMachines in shogun. 2010-02-27: Applied some fixes and optimisations to the SVM code. 2010-02-26: Added class _LabelNormalizer to ease the handling of label mapping. 2010-02-26: ClassifierNode-Migration: Now use ClassifierNode._classify in SVMNodes. 2010-02-26: Last commit before migration of SVM nodes to ClassifierNode. 2009-09-16: Made some improvements in SVM testing and overall structure. 2010-03-06: Improved the algorithm for _randomly_filled_hyperball(). Scaling is much better now. 2010-03-06: Fixed a bug in ClassifierNode.rank(). It now expects a list of feature vectors to function. 2010-03-06: Fixed one potential stop_message issue. 2010-03-06: Added magic in BiNode to call execute or inverse from stop_message. 2010-02-27: Made the product helper method take advantage of Python 2.6. 2010-03-04: Moved release files in root of master branch. 2010-03-03: Added automatic detection of number of CPU cores. This is used in the ProcessScheduler and the ThreadScheduler as the default value. 
2010-03-02: Fixed bug left over from inspection cleanup. 2010-02-27: get_svn_revision --> get_git_revision 2010-02-25: Made a couple of smaller improvements in the inspections stuff. Added a nice demo for customized inspections. 2010-02-22: Introducing a node for a simple discrete Hopfield model. 2010-02-22: Stupid bug in bool_to_sign fixed. 2010-02-23: Simplified the improved import precautions. 2010-02-23: Enabled calling bimdp.test(). Removed redundant code from autogen. 2010-02-23: Made the import process in the worker processes more robust, to fix the problem with calling bimdp.test() from the command line. 2010-02-23: Small updated in the autogen code. 2010-02-23: Changed the automatic creation of BiNode versions for MDP nodes. Now a module file is used that is created by a special script. While less elegant than the original dynamic solution this is now compatible with code checkers and is more transparent for users. 2010-02-23: Added general purpose routines bool_to_sign and sign_to_bool. 2010-02-22: Fixed mdp import bug in process scheduler (caused by the occurance of mdp in the repo name). 2010-02-19: Delete mdp_dist. All future updates should be made in release-manager branch. 2010-02-19: Removed remaining binet references. Renamed some inspection classes to get the MDP out of the name. Updated the todo list. 2010-02-19: Need to check against numpy.all when comparing array elements. 2010-02-18: Corrected the common assert-statement-with-parentheses mistake. 2010-02-18: Restructured bimdp to follow the mdp structure. 2010-02-17: Renamed binet to bimdp. 2009-08-01: Typo fix in progressbar script and clarification for OS X (the automatic size works there as well). 2010-02-17: Added .gitignore file for .pyc files and some generic patterns. 2010-02-14: Added option to automatically handle the source_paths in the worker processes. 2010-02-13: Added more unittests for BiFlow. Fixed one bug in inspection. Moved the JumpBiNode to the tests. 2010-02-11: Updated the parallel demo. 2010-02-11: Added a new thread based scheduler. Some very tiny updates to ProcessScheduler. 2010-02-09: Added more BiNode unittests. Updated BiNode documentation. Updated BiFlow unittests (not yet complete). 2010-01-30: Some small fixes in the inspection. Tried to use cgitd to show nice HTML tracebacks, but so far this failed (FlowExceptionCR only preserves a traceback string). 2010-01-29: Removed the updown module (this should be no longer needed). 2010-01-26: Added check in get_quadratic_form to make it more flexible (e.g. when a derived class wants to use it in _stop_training). Also made some small PEP8 updates. 2010-01-26: Updated the imports for the parallel package. Fixed some other small issues. 2010-01-22: Fixed one issue in rectangular switchboards (the data in each patch was transposed). Added corresponding unittests. 2010-01-11: Seriously beefed up the automatic BiNode creation to preserve the init signature. This makes it compatible with the introspection in the parallel default fork implementation. Also added a special ParallelBiNode and made a tiny style improvement in BiNode. 2010-01-11: Added a default implementation for forking nodes. 2010-01-06: Updated the todos. Also one small improvement to the metaclass modification. 2010-01-05: Some small fixes and cosmetic correction. 2010-01-05: Fixed one bug from hell in FlowNode (copy was broken for nested FlowNodes when the copy happened after training). 2010-01-05: Fixed a bug in BiFlowNode. 
2010-01-04: Modified the automatic node creation, now is compatible with pickle and should be 100% transparent. Also fixed the switchboard factory extension support. 2010-01-04: Put the automatic node creation into a function to avoid namespace polution. Added automatic node creation for switchboards. There is one remaining issue with pickling. 2010-01-04: Fixed bugs from refactoring. 2010-01-04: Small refactoring in switchboard factory extension (to allow easier modification by derived classes). 2010-01-04: Added automatic creation of BiNode versions of all nodes in mdp.nodes. 2010-01-04: Removed leftover code. 2010-01-04: Updated todos and docstring in init. Small cosmetic updates elsewhere. 2010-01-03: Fixed bug of the new inspection training node highlighting feature. 2010-01-03: Fixed one issue with stop messages in ParallelBiFlow, added a related convenience check to CloneBiLayer. 2010-01-03: Small improvements and bugfixes with respect to stop_message. 2010-01-03: Added highlighting of the currently training node in the inspection. 2010-01-02: Fixed a couple of bugs. The DBN example is now working again. 2010-01-02: Inspection was simplified, parallel did not need any changes. The binet simplification should now be complete (but I have not run real-world tests yet). 2010-01-02: Updated bilayer and biswitchboard (with some huge simplifications in biswitchboard). Now parallel and inspection remain to be updated. 2010-01-02: Added a new switchboard. 2010-01-01: Fixed some issues in BiFlow after simplification. Updated BiFlowNode, this is hopefully done. 2009-12-30: Finished simplification of BiFlow. bihinet still needs to be updated as well. 2009-12-25: Added /usr/share/dict/words to the list of dictionaries. 2009-12-23: Started with the binet simplification. BiNode should be more or less finished, most of the rest is not yet done. In principle everything should already work, there is just a large amount of dead code around. 2009-12-22: Fixed one bug in the trace inspection (added check for msg being None). 2009-12-22: Slightly modified the BiLayer data splitting behavior. 2009-12-22: Fixed bug in iterator / argument check for train: default argument values were not considered. 2009-12-16: Renamed the 'html_representation' extension to 'html'. 2009-12-15: Added a copy method to FlowNode to enable delegation to internal nodes. Added corresponding tests. 2009-12-13: Prettified the argument representations in the slides. 2009-12-12: Small update to SenderBiNode. 2009-12-12: Improved the exception display in the inspection slideshows. Fixed missing reset in fascade. 2009-12-12: Fixed bug in BiFlow (same bug that was previously fixed in BiFlowNode, just forgot to commit this). 2009-12-12: Fixed bug in BiFlowNode (relative target index was not translated to absolute index during training). 2009-12-12: Increased the robustnes of the training inspection in debug mode (it can now deal with exceptions before the stop training hook is reached). 2009-12-11: clean up a couple of lines in RBM nodes 2009-12-09: Added a small parallelization demo to demonstrate and explore the speedup. 2009-12-05: Fixed outdated default value in slideshow (could cause JS error in special circumstances). 2009-12-04: added documentation about the convention for the train methods of multiple training phase nodes. 2009-12-04: added Node.has_multiple_training_phases method. 2009-12-03: Modified check for missing argument in node training to also check for too many arguments. Added a corresponding test. 
2009-12-03: Added check for missing training arguments in Flow (which should be provided by the iterator). Also edited the corresponding unittest. 2009-12-03: Fixed small error in docstring for slideshow. 2009-12-02: well, NoiseNode *can* be pickled if you use pickle instead of cPickle. I hope this bug is fixed in python2.6 or python3 ;-) 2009-12-02: raise error if trying to copying or saving the NoiseNode error (bug in cPickle) 2009-12-02: added failing test for iterables mess in flow. 2009-12-01: Added missing super call. 2009-12-01: Fixed one evil bug: When the last training phase of a node was not parallelizable in a ParallelCheckpointFlow then the checkpoint function was not called. The fix required the addition of another hook method in ParallelFlow, but is overall much cleaner than the previous solution. Also improved the status messages for local training phases. 2009-12-01: gaussian classifier still had overwritten train just to offer a doc-string. putting the doc-string in _train is sufficient. 2009-11-29: Added new version of SenderBiNode (doesn't use branching). 2009-11-26: Fixed one biflow issue when in inverse mode (especially inside a BiFlowNode). 2009-11-26: Updated some docstrings for the extension mechanism (they were only refering to methods and not attributes). 2009-11-26: Fixed a bug in the inspection facade. 2009-11-22: Improved the extension unittests (now testing class attributes as well). 2009-11-21: Improved the slideshow button text hints (they now also contain the keyboard shortcuts). 2009-11-21: Fixed one embarrassing mistake in the node metaclasses. 2009-11-21: Added support for kwargs in training and execution inspection. 2009-11-16: Added a SimpleMarkovClassifier node. There is duplicated functionality compared to the NaiveBayesClassifier, so the NaiveBayesClassifier could be refactored to be based on the SimpleMarkovClassifier. 2009-11-15: Fixed the signature overwriting in NodeMetaclass. Also renamed some occurances of "kargs" to "kwargs" in the Node class (previously both names were used in different methods). 2009-11-15: Fixed a bug in rank(). 2009-11-14: Moved ClassifierNode up one directory. 2009-11-14: added file with info for new developers 2009-11-13: Added a ClassifierNode with basic methods classify, rank and prob. 2009-11-13: Fixed one bug in the message container, thanks to Sven Dähne. 2009-11-09: Fixed one small bug in switchboard factory. Added switchboard factory extension in binet. Updated todos. 2009-11-07: Added new switchboard factory node extension. This has been factored out of the hinet planer and I will now use it in my simulation software as well (therefore I wanted to integrate it into MDP). Unittests and documentation are currently missing (unittests will be somewhat tricky, not sure how deeply this should be covered in the tutorial anyway). 2009-11-06: Rewrote some parts of the extension mechanism. It now works for arbitrary attributes. One nice things is that this actually reduced the number of lines of code. The tutorial has not yet been updated. Attribute specific unittests are missing as well. 2009-11-06: Updated the todo list. 2009-11-05: Slightly improved the API for activating multiple extensions. Discovered that the extension mechanism does not work for classmethods/staticmethods, since they appear as non-callable descriptors. I will change the extension mechanism to work for arbitrary attributes ASAP. 2009-11-04: Small improvements in extension mechanism, added more comments. Added missing Exception import in hinet init. 
2009-11-04: Fixed a small slideshow issue. 2009-11-03: Completed the clickable inspection nodes. One can now click on a node to jump to the next slide where it is reached. This does not work in IE yet (the workaround would be simple, but don't want to clutter the code with IE specific garbage). Also removed the AJAX storage div (caused trouble due to duplicate ids and is not needed anyway). 2009-11-03: Added support for active node ids in binet inspection. This will make it possible to click on a node and directly jump to the next time it is reached. The JS part of this is not yet implemented. 2009-11-03: Fixed the slideshow image CSS and added IE support. 2009-10-30: Cleaned up the JavaScript code in the binet inspection. 2009-10-27: Added checks in channel switchboard. 2009-10-21: Sorry, forgot to fix the webbrowser.open issue in binet. 2009-10-21: Fixed the webbrowser.open issue on MacOS as suggested by Rike. 2009-10-20: Tweaked hinet css. Extracted slideshow css into speparate file. 2009-10-20: Extracted CSS code into separate file. 2009-10-18: Fixed error in rhombic switchboard, added corresponding unittest. 2009-10-17: Added new features to the ChannelSwitchboard base class to get the effective receptive fields in hierarchical networks. Also added corresponding tests. Minor cosmetic updates in hinet translator. 2009-10-16: Added hinet translatation adapter for valid XHTML (instead of HTML). Also added a corresponding unittest, but this does not test the validity of the XHTML. 2009-10-16: Fixed two small issues in the hinet HTML representation. 2009-10-14: fixed bug in TimeFramesNode when input_dim was set explicitly. thanks to anonymous sourceforge user [Bug ID: 2865106] 2009-10-14: Some CSS fixes and a fix in the demo. 2009-10-14: Added slideshow CSS for crisp rescaling, in anticipation of the Firefox 3.6 beta release. 2009-10-12: Added feature that nicely line wraps the section ids in a slideshow. 2009-10-07: Added more checks to DoubleRect2dSwitchboard. Some small improvements. Updated the package init file. 2009-10-07: Reverted output_channels renaming in switchboard, to not create compatibility issues. 2009-10-07: Many switchboard updates. A new ChannelSwitchbard base class was introduced, many code improvements, HTML representations were added for the new switchboard types. 2009-10-06: removed spurious print statement in QuadraticForm tests. 2009-10-04: Added a section on the extension mechanism in the tutorial (note that this is a first draft). Also added a description of the slideshow stuff (plus a minor update in the slideshow code). 2009-10-04: Migrated extension stuff to separate module. 2009-10-01: Fixed some issues in the new DoubleRhomb2dSwitchboard, added more tests. 2009-09-29: Added a new switchboard class, DoubleRhomb2dSwitchboard. The tests are currently incomplete and there are probably bugs in there. 2009-09-29: Some simplifications in the switchboard classes. 2009-09-29: Added a new switchboard class, DoubleRect2dSwitchboard. This will probably be complemented in the future by a Rhomb2d switchboard for multilayer networks. 2009-09-24: Fixed support for multiple slideshows in one file. Added multiple slideshow demo. Allowed keyboard shortcuts for slideshows to be turned off. 2009-09-23: Updated todos. 2009-09-23: Fixed the template indentation handling. Updated the binet inspection to work with the new slideshow. Moved the manual tests for slideshow and hinet to the demos folder. 2009-09-23: Some more cleanup for the slideshow stuff. 
2009-09-23: Small update in binet for moving the hinet basic CSS to utils. 2009-09-23: Cleaned up the slideshow JavaScript code. Most importantly the use of global variables is now minimized by using an object oriented design with a closure. Some additional cleanup in the slideshow classes was done as well. Added one helper function to directly create an HTML slideshow. Moved the default CSS from hinet to utils, so it can be used by both slideshow and hinet. Moved the hinet HTML test/demo to the test folder and added a similar demo for slideshow (using the MDP logo animation). 2009-09-19: added script to analyze code coverage during tests and output html report; needs figleaf, http://darcs.idyll.org/~t/projects/figleaf/doc/; mostly, we don't test exception, but there are some more worrying parts, like we don't test for for flow inversion... 2009-09-17: Added new utility function orthogonal_permutations. The function helps avoiding deeply nested statements when several of several arguments need to be tested against each other. 2009-09-14: Many thanks to Christian Hinze for pointing out a bug in QuadraticForm.get_invariances. Removed warning about bug in linalg.qr in numpy 1.0.1. 2009-09-14: Mainly added basic svm tests and did some cleanup and bug fixes. Also, the requested comments have been investigated. 2009-09-10: fixed bug in QuadraticForm.get_invariances: second derivative value was wrong when linear term significantly large; second derivative directions were ok 2009-08-30: small cleanups of svm_nodes and comments/requests for comments. rike? 2009-08-30: changed import order in nodes/__init__.py to clean up import in expansion_nodes.py for GrowingNeuralGasExpansionNode. 2009-08-23: Made extension method attributes public. Improved the extension unittests. 2009-08-22: Small code improvement in binet inspector. Updated todo for extensions. 2009-08-22: Modified the NodeMetaclass wrapping procedure to use super instead of direct method references. Hopefully this has no negative side effects. The advantage is that it is compatible with runtime monkey patching, so it enables more extension stuff. 2009-08-21: Extension decorator now preserves function docstring and signature. 2009-08-17: Fixed extension related bug in ParallelBiFlow. Improved the related unittests. Added some comments. 2009-08-16: Added little comment. 2009-08-16: Removed use of parallel FlowNodes in parallel flows. Small update to extensions. 2009-08-12: Added option to extension decorator to specify the method name. Added extension decorator unittest (now we have 400 unittests, wheee!). 2009-08-12: Fixed left-over bugs from yesterdays mess, fixed unittests. 2009-08-11: Added more unittests for the extension mechanism. Changes in the extension documentation and comments, and one small fix. 2009-08-11: Use set for list of active extensions. 2009-08-11: Added safety mechanisms to extensions, added one corresponding unittest. 2009-08-10: Re-enabled Shogun nodes. 2009-08-10: Added Henning Sprekeler to the list of developers and removed him from the list of contributors. Updated the various copyright statements all over the place, and the home page. 2009-08-10: Corrected the docstring of the GrowingNeuralGasExpansionNode to be more precise on the relation between max_nodes and the dimension of the expansion. 2009-08-10: added standard tests for GworingNeuralGasExpansionNode and cleaned up its setting of output_dim. 
2009-08-07: Bug removed in GrowingNeuralGasExpansionNode: scipy -> numx 2009-08-07: missing file added; UpDownNode modified so it can pass down the results of computations at he top 2009-08-07: Added GrowingNeuralGasExpansionNode to expansion_nodes. Tests are still missing. 2009-08-07: ShogunSVMNode only gets imported when libraries are present. Stylistic changes (not all done yet). Some parameter issues and docstring clarified. Non-kernel classification. self._classification_type added. Parameter mapping for kernels introduced. 2009-08-06: remove import of svm_nodes in contrib. the automatic shogun import is not ready for prime-time. 2009-08-02: Initial SVM commit. Using only shogun for now. Still needs a lot of work and a better API. But it is already working somehow. 2009-08-02: uploaded UpDownBiNode, mother class for DBN, backpropagation; corrected small bug in binet; I've got loads of questions about binet 2009-07-31: GaussianClassifierNode has now explicit output_dim method (which fails). 2009-07-25: Added RBF node, utilities to replicate array on an additional dimension (rrep, lrep, irep) -> very useful 2009-07-19: Updated BiNet todo list. 2009-07-16: Fixed bug in rectangular switchboard that was introduced in the previous update. 2009-07-15: Added unused channel information to rectangular switchboard, this is also shown in the HTML representation. Added comments in slideshow. ------------------------------------------------------------------------------- MDP-2.5: 2009-06-30: Added online detection of numerical backend, parallel python support, symeig backend and numerical backend to the output of unit tests. Should help in debugging. 2009-06-12: Integration of the cutoff and histogram nodes. 2009-06-12: Fixed bug in parallel flow (exception handling). 2009-06-09: Fixed bug in LLENode when output_dim is a float. Thanks to Konrad Hinsen. 2009-06-05: Fixed bugs in parallel flow for multiple schedulers. 2009-06-05: Fixed a bug in layer inverse, thanks to Alberto Escalante. 2009-04-29: Added a LinearRegressionNode. 2009-03-31: PCANode does not complain anymore when covariance matrix has negative eigenvalues iff svd==True or reduce==True. If output_dim has been specified has a desired variance, negative eigenvalues are ignored. Improved error message for SFANode in case of negative eigenvalues, we now suggest to prepend the node with a PCANode(svd=True) or PCANode(reduce=True). 2009-03-26: Migrated from old thread package to the new threading one. Added flag to disable caching in process scheduler. There are some breaking changes for custom schedulers (parallel flow training or execution is not affected). 2009-03-25: Added svn revision tracking support. 2009-03-25: Removed the copy_callable flag for scheduler, this is now completely replaced by forking the TaskCallable. This has no effect for the convenient ParallelFlow interface, but custom schedulers get broken. 2009-03-22: Implemented caching in the ProcessScheduler. 2009-02-22: make_parallel now works completely in-place to save memory. 2009-02-12: Added container methods to FlowNode. 2009-03-03: Added CrossCovarianceMatrix with tests. 2009-02-03: Added IdentityNode. 2009-01-30: Added a helper function in hinet to directly display a flow HTML representation. 2009-01-22: Allow output_dim in Layer to be set lazily. 2008-12-23: Added total_variance to the nipals node. 2008-12-23: Always set explained_variance and total_variance after training in PCANode. 
2008-12-12: Modified symrand to really return symmetric matrices (and not only positive definite). Adapted GaussianClassifierNode to account for that. Adapted symrand to return also complex hermitian matrices.
2008-12-11: Fixed one problem in PCANode (when output_dim was set to input_dim the total variance was treated as unknown). Fixed var_part parameter in ParallelPCANode.
2008-12-11: Added var_part feature to PCANode (filter according to variance relative to absolute variance).
2008-12-04: Fixed missing axis arg in amax call in tutorial. Thanks to Samuel John!
2008-12-04: Fixed the empty data iterator handling in ParallelFlow. Also added empty iterator checks in the normal Flow (raise an exception if the iterator is empty).
2008-11-19: Modified pca and sfa nodes to check for negative eigenvalues in the cov matrices
2008-11-19: symeig integrated in scipy, mdp can use it from there now.
2008-11-18: Added ParallelFDANode.
2008-11-18: Updated the train callable for ParallelFlow to support additional arguments.
2008-11-05: Rewrite of the make parallel code, now supports hinet structures.
2008-11-03: Rewrite of the hinet HTML representation creator. Unfortunately this also breaks the public interface, but the changes are pretty simple.
2008-10-29: Shut off warnings coming from remote processes in ProcessScheduler
2008-10-27: Fixed problem with overwriting kwargs in the init method of ParallelFlow.
2008-10-24: Fixed pretrained nodes bug in hinet.FlowNode.
2008-10-20: Fixed critical import bug in parallel package when pp (parallel python library) is installed.
-------------------------------------------------------------------------------
MDP-2.4:
2008-10-16: added interface to BLAS's "gemm" matrix multiplication function.
2008-10-15: removed obsolete helper functions.
2008-10-15: added new feature. Now: output = XXXNode(output_dim=10)(x) trains and executes the node. This makes helper_functions obsolete! It even works for multiple training phases (only if the training phase has not started yet).
2008-10-15: removed use of deprecated features with python2.6 -3.
2008-10-14: removed dangerous list default argument in rotate and permute functions. A tuple is now used.
2008-10-13: PEP8 code restyling (pylint).
2008-10-07: Removed workarounds for pickle bug in numpy < 1.1.x (see numpy ticket 551).
2008-09-24: Implemented metaclass trick for automatic inheritance of documentation for private methods. Node's subclass authors can now directly document "_execute", and let users see those docs as documenting "execute":
            class MyNode(Node):
                def _execute(self):
                    """blah"""
            >>> print MyNode.execute.__doc__
            blah
            Only a defined set of methods allows overwriting of the docstring; the current list is: ['_train', '_stop_training', '_execute', '_inverse']
2008-09-22: Added new functionality to nodes and flows. Node1+Node2 now returns a flow and Flow1+Node1 appends Node1 to Flow1.
2008-09-07: New node for Locally Linear Embedding
2008-08-28: The docstring of Flow.train now mentions that instead of x the iterators can also return a tuple of x and additional args.
2008-08-28: Fixed bug in PCANode: when setting output_dim to a float number after instantiation but before stop_training, i.e.:
                pca = PCANode()
                pca.train(x)
                pca.output_dim = 0.9
                pca.stop_training()
            an exception was thrown: Output dim are set already (0) (1 given)
2008-08-28: Fixed bug in PCANode: when setting output_dim to float number, after stop_training pca.output_dim was a float number instead of integer.
2008-08-21: Added inverse for Switchboard, including unittests. 2008-08-19: Added parallel package. 2008-08-19: Added new NormalNoiseNode (with small unittest), which can be safely pickled. 2008-08-07: fixed bug in rbm_nodes (TrainingFinishedException). Thanks to Mathias Franzius! 2008-07-03: Node.__call__ method now takes *args and **kwargs to call the execute method. Thanks to Jake VanderPlas! 2008-06-30: Fix debian bug 487939: Modifed the fix for the debian bug so that white_parm defaults to None instead of {}. With default {} a sublcass updating the dictionary would have updated it for all instances... http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=487939 Applied patch from Michael Hanke. Thank you! ------------------------------------------------------------------------------- MDP-2.3: 2008-05-08: fixed bug in the raising of the switchboard exception. 2008-05-02: added exception for hinet.Rectangular2dSwitchboard and corresponding unittest. 2008-04-25: added new TDSEPNode, updated ISFANode. 2008-04-16: added an HTML visualisation tool for hinet flows to the hinet package. Added a hinet section in the tutorial. 2008-03-22: released MDP 2.2, this release is intended for internal use only 2008-03-21: added hinet package 2008-03-19: added RBM nodes 2008-03-18: fixed bug in PCANode when output_dim was specified as float. 2008-03-16: HitParadeNode now supports integer dtypes. 2008-03-13: created contrib subpackage and test_contrib test suites. JADENode moved to contrib. Tests are now sorted by test_suite. Added the NIPALSNode for iterative PCA. 2008-03-11: added JADENode with tests 2008-03-10: added test for dimensions settings (nodes should have output_dim and input_dim set after training) 2008-01-22: removed utils.comb scipy code and obsolete SciPy copyright notice from utils/routines.py 2008-01-03: fixed bug in SFA2Node when input_dim was set on instantiation, fixed bug in SFA2Node._stop_training when output_dim was set and larger than input_dim, fixed bug in QuadraticForm when H was indefinite. 2007-09-14: FastICANode has been completely rewritten. It is now up-to-date with the original Matlab version (2.5) published on 19.10.2005. Highlights: - fine tuning is implemented - the stabilized version of the algorithm is implemented - the new 'skew' non linearity for skewed input data is implemented - bug fix in 'tanh' non linearity - bug fix in 'gaus' non linearity in 'symm' approach - all combinations of input parameters are being tested 2007-09-12: Added new funcionality for PCA and Whitening. The nodes can now be instantiated with svd=True, to use SVD instead of the standard eigenvalue problem solver. Setting reduce=True and var_abs and var_rel it is possible to automatically discard irrelevant principal components. See the nodes documentation for more info 2007-09-11: added check for positive eigenvalues in symeg and symeig_fake 2007-08-17: added pre_inversion_checks, inversion equivalent to pre_execution_checks 2007-07-20: enable iteration on items and time completion estimation method in progress_bar 2007-07-09: fixed bug in SFA2Node: when instantiated with input_dim==output_dim, the range in symeig was not set and output_dim was wrong 2007-07-04: - fixed bug in mdp.utils.__init__.py about SymeigException not caught - fixed bug in ISFANode when perturbing eps_contrast was not respected (it was perturing indefinitely) - added a 'debug' kwarg to stop_training in PCANode, WhiteningNode, and SFANode. 
When stop_training fails because of singular matrices, the matrices themselves are not deleted (as before) but kept in self.cov_mtx and self.dcov_mtx for later inspection
2007-07-03: fixed bug in ISFANode when lags was array.
2007-06-25: Added get_{proj,rec}matrix method to the ICANodes. 'get_recmatrix' returns an estimate of the mixing matrix.
-------------------------------------------------------------------------------
MDP-2.1:
2007-03-23: Updated tutorial and web site.
2007-03-23: Use get_limits.finfo to set precision limits
2007-03-23: Use symeig if found
2007-03-23: Use scipy instead of numpy when possible
2007-03-23: Numpy functions substituted with array method where possible
2007-03-22: Implemented invariances in QuadraticForm
2007-03-21: MDP now fully compatible with numpy 1.0
2007-03-15: Added ISFANode
2007-03-13: Added MultipleCovarianceMatrices utility class
2006-11-04: Added 'save' method to Node and Flow
2006-11-04: Node's train and stop_training methods now accept *args and **kwargs
-------------------------------------------------------------------------------
MDP-2.0RC:
29.06.2006: Updated tutorial and web site.
28.06.2006: New random_rot. Added SFA2Node, QuadraticForm, moved graph into the mdp tree.
27.06.2006: Converted typecode handling to numpy.dtype style; supported_typecodes is now a property. Added bias in covariance.
26.06.2006: Converted to numpy. Scipy, Numeric, and numarray are not supported anymore
06.12.2005: pca_nodes: added get_explained_variance public method.
02.12.2005: New introspection utils in mdp.utils.
01.11.2005: New nodes: FANode, FDANode, GaussianClassifierNode
31.10.2005: Non back-compatible changes. Node has got three new 'properties': output_dim, input_dim, typecode. They are accessible through their getters (get_PROPERTY). They can be set using the default setters (set_PROPERTY). Subclasses can customize the setters, overriding the _set_PROPERTY private methods. All nodes had to be changed to conform with the new structure. All tests pass.
26.10.2005: To force MDP to use a particular numerical extension, you can now set the environment variable MDPNUMX. Supported values are 'symeig', 'scipy', 'Numeric', 'numarray'. Mainly useful for testing purposes.
06.10.2005: The SfaNode, CuBICA, and FastICA aliases have been deleted.
06.10.2005: Node supports multiple and even an infinite number of training phases. FiniteNode makes the implementation of a class with a finite number of phases easy and one with just one phase trivial.
06.10.2005: Flow supports the new nodes and even nodes requiring a supervision signal during training.
06.10.2005: SignalNode, SignalNodeException, and SimpleFlow are now deprecated aliases. Use Node, NodeException and Flow instead.
06.10.2005: some bug fixes.
05.10.2005: fixed failing 'import mdp' in test_symeig when mdp is not installed
12.07.2005: bug in utils.symrand (global name "mult" is not defined)
06.07.2005: changed round off error checking in FastICANode (it was 1e-15 and it is now 1e-5). it failed on a MacOSX machine.
23.06.2005: _check_roundoff in lcov.py issues an MDPWarning using warnings.warn. it was using 'raise' before, and it was caught as a FlowException.
21.06.2005: node consistency is assured at flow instantiation.
-------------------------------------------------------------------------------
MDP-1.1.0:
13.06.2005: MDP 1.1.0 released.
01.06.2005: Crash_recovery is now off by default. To switch it on do: flow.set_crash_recovery(1)
30.05.2005: New NoiseNode.
30.05.2005: SimpleFlow and SignalNode now have a 'copy' method. 30.05.2005: SimpleFlow is now a mutable sequence type and implements many of the 'list' methods. 30.05.2005: removed scipy dependency. Now mdp runs with either Numeric, numarray or scipy. 24.05.2005: symeig removed from mdp. 23.05.2005: all classes are now new-style. ------------------------------------------------------------------------------- MDP-1.0.0: 15.11.2004: MDP 1.0.0 released 09.11.2004: Added crash recovery capabilities to SimpleFlow (on by default). 05.11.2004: New GrowingNeuralGasNode. New graph module. 04.11.2004: New IdentityNode subclass added. All analysis nodes are now subclasses of IdentityNode. Use of input_dim and output_dim in the nodes' constructor is now consistent. 02.11.2004: Now symeig works reliably also for complex matrices. symeig now can be distributed as an independent package It is still contained in mdp.utils for convenience. Default value for option overwrite changed from 1 to 0. 14.10.2004: With the release of Scypy 0.3.2, the installation of MDP got much simpler. 06.10.2004: Fixed a bug in symeig (when B=None and overwrite=0, symeig raised an exception) 04.09.2004: Fixed Windows-xspecific problems. ------------------------------------------------------------------------------- MDP-0.9.0: 24.08.2004: MDP 0.9.0 released. First public release. 16.08.2004: MDP project registered at SourceForge. ------------------------------------------------------------------------------- mdp-3.3/CHECKLIST000066400000000000000000000114331203131624700133740ustar00rootroot00000000000000=== Checklist for MDP release === Before release: - check that new nodes have been explicitly imported in nodes/__init__.py and that they are listed in __all__: - create a list of defined nodes with: git grep 'class .*Node(' mdp/nodes | grep -v test | grep -v Scikits | cut -d ':' -f 2 | cut -d ' ' -f 2 | cut -d '(' -f 1 | sort > /tmp/list_defined - create a list of nodes imported in mdp.nodes with: python -c "import sys, mdp; [sys.stdout.write(i+'\n') for i in sorted([obj for obj in mdp.nodes.__dict__ if obj.endswith('Node') and not obj.endswith('ScikitsLearnNode')])]" > /tmp/list_in_dict - create a list of nodes in __all__ with: python -c "import sys, mdp; [sys.stdout.write(i+'\n') for i in sorted([obj for obj in mdp.nodes.__all__ if obj.endswith('Node') and not obj.endswith('ScikitsLearnNode')])]" > /tmp/list_in_all - compare those lists [keep in mind that a couple of nodes are private and so those lists do not need to be exactly equal] - make sure that __init__ has the right version number - update date in variable __copyright__ in file __init__ - test all suported python versions and dependencies with python testall.py /home/tiziano/python/x86_64/lib/pythonVERSION/site-packages - "make doctest" in docs repository and fix all failures During release: - update CHANGES: you can generate a new bunch of CHANGES with: git log --no-color --pretty="format:%w(79,0,12)%ad: %s%+b" --date=short --no-merges --since=$LASTRELEASE where LASTRELEASE is the date of the last release [LASTRELEASE=2010-05-15]. You can then prepend the output of this command to the original CHANGES file, but even better would be to edit the result to only keep the changes that are relevant for the user like incompatibilities, new features, etc.. 
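  For the prepending step mentioned above, a small helper in the same python -c style as the other commands in this checklist can be used; the temporary file name /tmp/new_changes is only an assumed example (save the git log output there first):
    python -c "new = open('/tmp/new_changes').read(); old = open('CHANGES').read(); open('CHANGES', 'w').write(new + '\n' + old)"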
- update TODO and COPYRIGHT (date) - generate tutorial, website, and API documentation [make website] - change homepage colors - short/long description should go: on SF.net description, tutorial, home page, modules __init__, software.incf.net. - generate installers and source packages and test them: for python2: run the gendist script for python3: the windows installer must be generated under windows, following the instructions in gendist - create a release notes file - tag release in git (tag mdp-toolkit repo) git tag -a MDP-3.0 - push the tag git push --tags - update on SF.net: release files: - sftp username,mdp-toolkit@frs.sourceforge.net - cd /home/pfs/project/m/md/mdp-toolkit/mdp-toolkit/ - create a new directory for the release, for example for release 3.0: mkdir 3.0 cd 3.0 - upload the files there (note: the release notes should be named README.txt): file to upload are: .tar.gz, .zip, .exe, tutorial, release notes file - login to sourceforge, go to "Files" - select the new created directory - select the installer for windows and set it as default for windows by clicking on the "i" icon on the right, - select the tar.gz for linux and set it as default for linux and mac - at that point the readme file should be automatically shown as release note file if README.txt is not shown, delete it and upload it through the web interface. make sure that it is shown. - more info: https://sourceforge.net/apps/trac/sourceforge/wiki/Release%20files%20for%20download - make the website within a clone of the docs repository with: - make website - be careful to read all warnings and messages, often things do not work as expected. - upload the pdf tutorial, which is in build/latex/MDP-tutorial.pdf, to sf.net as explained above for the source tarballs. - synchronize the site with: cd build/html rsync -av --delete-after . username,mdp-toolkit@web.sourceforge.net:/home/project-web/mdp-toolkit/htdocs/ - more info: http://alexandria.wiki.sourceforge.net/Project+Web,+Shell,+VHOST+and+Database+Services - tag the docs repository: git tag -a MDP-3.0 git push --tags - post news to sourceforge [the content may be the release notes file]: https://sourceforge.net/news/submit.php?group_id=116959 - update package information on mloss.org, pypi, and software.incf.net: - pypi [you need an account here: http://pypi.python.org/pypi]: within a clone of the mdp-toolkit repo: python setup.py register - mloss.org: https://mloss.org/software/update/60/ - software.incf.org: http://software.incf.net/software/modular-toolkit-for-data-processing-mdp/ After release: - update version number in __init__ - send announcement to: connectionists: connectionists@cs.cmu.edu ML-news: ML-news@googlegroups.com numpy-discussion: numpy-discussion@scipy.org Scipy users: scipy-user@scipy.org mdp-users: mdp-toolkit-users@lists.sourceforge.net Python-announce: python-announce-list@python.org - celebrate!! mdp-3.3/COPYRIGHT000066400000000000000000000033161203131624700134340ustar00rootroot00000000000000This file is part of Modular toolkit for Data Processing (MDP). All the code in this package is distributed under the following conditions: Copyright (c) 2003-2012, MDP Developers All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of the Modular toolkit for Data Processing (MDP) nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. mdp-3.3/MANIFEST.in000066400000000000000000000002641203131624700136760ustar00rootroot00000000000000include CHANGES include COPYRIGHT include README include TODO include mdp/utils/slideshow.css include mdp/hinet/hinet.css include py3tool.py include conftest.py include pytest.ini mdp-3.3/README000066400000000000000000000001171203131624700130150ustar00rootroot00000000000000Please refer to the online documentation at http://mdp-toolkit.sourceforge.net mdp-3.3/TODO000066400000000000000000000075711203131624700126400ustar00rootroot00000000000000for MDP 3.3 =========== - clearify documentation on Flow.train as suggested by Fabian Schoenfeld in http://sourceforge.net/mailarchive/message.php?msg_id=27926167 - checkout windows 64bit binaries from http://www.lfd.uci.edu/~gohlke/pythonlibs/ maybe point to it from the documentation? - add code for generating the plots in the tutorial (but we don't want the doctests to fail if matplotlib is not installed: find a good way of dealing with it, pymvpa already does it properly) - document extension disable environment variables - docstrings should be migrated to rst everywhere (only class.__doc__ has been done for now) - example pages are still not good enough: idea: I actually think that every example page should be self-consistent, so no imports (apart from system-installed modules of course) should be allowed. if an example requires a lot of boiler-plate code, this code could be attached at the end of the example page. note that you can use the literalinclude [1] directive to include the code, so you don't even need to re-type everything in rst. this solution has the big advantage of allowing doctest to run properly and to avoid magic tricks with sys.path, which are not exactly elegant and prone to fail [1] http://sphinx.pocoo.org/markup/code.html?highlight=include#literalinclude - remove _monkeypatch_pp when parallel python is fixed - let EtaConmputerNode match the new convention of SFA Node in terms of last sample. - add example of usage of MDP within PyMVPA. The code exists already: https://github.com/PyMVPA/PyMVPA/blob/master/doc/examples/mdp_mnist.py - find a solution to the __revision__ problem: should it be set on installation? numpy solved the problem, do we want to go this route? 
- parallel: provide adapters for more sophisticated schedulers - add _bias attribute to PCANode to make it more consistent with SFA node. Maybe one could even create a new AffineNode node as a base class for PCA, SFA and other affine transformations? This might also be a good opportunity for some more PEP8 cleanup. - add more classifier stuff, like a ClassifierFlow - add an example of training a node with multiple training phases using a flow, where the training is done first using lists and then using a custom iterators. special care should be taken on explaining how to pass additional arguments to the train method. an example of how this can look confusing and go wrong can be found in the testFlowWrongItarableException test in test_flows.py - fix damned LLENode test for 2D shape embedded in 3D! - check that SparsePCA works on machine with scipy 0.9.0, add it to MDP if so - create a Flow metaclass to enable Flow extensions (think of ParallelFlow!) - implement an extension context manager with additional parameters and exception handling. E.g.: flow = Flow([PCANode()]) with extension('parallel', args=(ProcessScheduler,8)): flow.train(x) note that the context manager takes care of initialize and shutting down the scheduler. Proposed syntax: extension(string, args=tuple, kwargs=dictionary) - bimdp: add deep belief network flow and nodes to the core bimdp - add cross-correlation tools, maybe support the use of a parallel scheduler - check problem with LLENode tutorial demo when using matplotlib 0.99.1.2, see Olivier Grisel's email - LinearRegressionNode: add optional 2nd phase that computes residuals and significance of the slope - provide a Node pickler, for arrays use the binary numpy format (npy, numpy.save, numpy.load) and not pickle: pickling arrays is unsupported - add benchmarks for parallel module - provide different versions of the MDP logo which includes the website address, possibly one higher quality print version, available in "how to cite" section - Use the new property decorators when migrating to Python 2.6 (see http://docs.python.org/library/functions.html#property). - kalman filters - memory profiler - GUI mdp-3.3/bimdp/000077500000000000000000000000001203131624700132315ustar00rootroot00000000000000mdp-3.3/bimdp/__init__.py000066400000000000000000000100361203131624700153420ustar00rootroot00000000000000""" The BiMDP package is an extension of the pure feed-forward flow concept in MDP. It defines a framework for far more general flow sequences, involving top-down processes (e.g. for error backpropagation) or even loops. So the 'bi' in BiMDP primarily stands for 'bidirectional'. BiMDP is implemented by extending both the Node and the Flow concept. Both the new BiNode and BiFlow classes are downward compatible with the classical Nodes and Flows, allowing them to be combined with BiMDP elements. The first fundamental addition in BiMDP is that BiNodes can specify a target node for their output, to continue the flow execution at the specified target node. The second new feature is that Nodes can use messages to propagate arbitrary information, in addition to the standard single array data. A BiFlow is needed to enable these features, and the BiNode class has adds convenience functionality to help with this. Another important addition are the inspection capapbilities (e.g., bimdo.show_training), which create and interactive HTML representation of the data flow. This makes debugging much easier and can also be extended to visualize data (see the demos in the test folder). 
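For a rough idea of how the inspection functions are typically used, consider the following sketch (the exact signatures are assumptions and should be checked against the inspection module; the data variables are placeholders):

    flow = bimdp.BiFlow([mdp.nodes.PCANode(output_dim=4), mdp.nodes.SFANode()])
    # train the flow and open an interactive HTML trace of the training
    bimdp.show_training(flow, train_data)
    # trace a single execution of the trained flow
    bimdp.show_execution(flow, x)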
BiMDP fully supports and extends the HiNet and the Parallel packages. New BiMDP concepts: Jumps and Messages ====================================== Jump targets are numbers (relative position in the flow) or strings, which are then compared to the optional node_id. The target number 0 refers to the node itself. During execution a node can also use the value of EXIT_TARGET (which is currently just 'exit') as target value to end the execution. The BiFlow will then return the last output as result. Messages are standard Python dictionaries to transport information that would not fit well into the standard x array. The dict keys also support target specifications and other magic for more convenient usage. This is described in more detail in the BiNode module. """ ### T O D O ### # ------------- optional ---------------- # TODO: maybe also allow target==EXIT_TARGET during training # would have to modify _train_node_single_phase # TODO: add a target seperator that does not remove the key. Could use # -> remove key # --> remove one '-' on entry # => do not remove the key # Note that adding this kind of magic is relatively cheap in BiNode, # in parsing first check just for > . # TODO: add wildcard support for node_id in message keys. # Simply tread the node_id part of the key as a regex and check for match. # This adds an overhead of about 1 sec per 100,000 messages. # TODO: Terminate execution if both x and msg are None? This could help in # the stop_training execution, but could lead to strange results # during normal execution. # We could add a check before the result is returned in execute. # TODO: support dictionary methods like 'keys' in BiFlow? # TODO: add workaround for Google Chrome issue once a solution for # http://code.google.com/p/chromium/issues/detail?id=47416 # is in place. # TODO: Implement more internal checks for node output result? # Check that last element is not None? Use assume? # TODO: implement switchlayer, a layer where each column represents a different # target, so the target value determines which nodes are used # TODO: show more information in trace slides via mouse hover, # or enable some kind of folding (might be possible via CSS like suckerfish) from binode import ( BiNodeException, BiNode, PreserveDimBiNode, MSG_ID_SEP, binode_coroutine ) from biclassifier import BiClassifier from biflow import ( MessageResultContainer, BiFlowException, BiFlow, BiCheckpointFlow, EXIT_TARGET ) # the inspection stuff is considered a core functionality from inspection import * from test import test import nodes import hinet import parallel del binode del biflow del inspection from mdp.utils import fixup_namespace fixup_namespace(__name__, None, ('binode', 'biclassifier', 'biflow', 'inspection', )) del fixup_namespace mdp-3.3/bimdp/biclassifier.py000066400000000000000000000027671203131624700162560ustar00rootroot00000000000000 import mdp import binode class BiClassifier(binode.BiNode, mdp.ClassifierNode): """BiMDP version of the ClassifierNode base class. It enables that the classification results are returned by execute in the message. """ def _execute(self, x, return_labels=None, return_probs=None, return_ranks=None): """Return the unaltered x and classification results when requested. return_labels -- If True then the 'label' method is called on the x and the result is returned in the dict, with the key 'labels'. If return_labels is a string then this is used as a prefix for the 'labels' key of the result. 
return_probs, return_ranks -- Work like return_labels, but the results are stored under the key 'probs' and 'ranks'. """ msg = {} if return_labels: if not isinstance(return_labels, str): msg["labels"] = self.label(x) else: msg[return_labels + "labels"] = self.label(x) if return_probs: if not isinstance(return_probs, str): msg["probs"] = self.prob(x) else: msg[return_probs + "probs"] = self.prob(x) if return_ranks: if not isinstance(return_ranks, str): msg["ranks"] = self.rank(x) else: msg[return_ranks + "ranks"] = self.rank(x) if msg: return x, msg else: return x mdp-3.3/bimdp/biflow.py000066400000000000000000000626141203131624700150760ustar00rootroot00000000000000""" BiMDP Flow class for flexible (bidirectional) data flow. The central class is a BiFlow, which implements all the flow handling options offered by the BiNode class (see binode.py for a description). """ # NOTE: make sure that isinstance(str, target) is never used, so that in # principle any object could be used. import itertools import mdp n = mdp.numx from binode import BiNode # this target value tells the flow to abort and return the current values EXIT_TARGET = "exit" class NoneIterable(object): """Iterable for an infinite sequence of Nones.""" def __iter__(self): while True: yield None class BiFlowException(mdp.FlowException): """Exception for BiFlow problems.""" pass class MessageResultContainer(object): """Store and combine msg output chunks from a BiNode. It is for example used when the flow execution yields msg output, which has to be joined for the end result. """ def __init__(self): """Initialize the internal storage variables.""" self._msg_results = dict() # all none array message results self._msg_array_results = dict() # result dict for arrays def add_message(self, msg): """Add a single msg result to the combined results. msg must be either a dict of results or None. numpy arrays will be transformed to a single numpy array in the end. For all other types the addition operator will be used to combine results (i.e., lists will be appended, single integers will be summed over). """ if msg: for key in msg: if type(msg[key]) is n.ndarray: if key not in self._msg_array_results: self._msg_array_results[key] = [] self._msg_array_results[key].append(msg[key]) else: if key not in self._msg_results: self._msg_results[key] = msg[key] else: try: self._msg_results[key] += msg[key] except: err = ("Could not combine final msg results " "in BiFlow.") raise BiFlowException(err) def get_message(self): """Return the msg which combines all the msg results.""" # move array results from _msg_array_results to _msg_results for key in self._msg_array_results: if key in self._msg_results: err = ("A key in the msg results is used with " "different data types.") raise BiFlowException(err) else: self._msg_results[key] = n.concatenate( self._msg_array_results[key]) return self._msg_results class BiFlow(mdp.Flow): """BiMDP version of a flow, which supports jumps between nodes. This capabilities can be used by classes derived from BiNode. Normal nodes can also be used in this flow, the msg argument is skipped for these. Normal nodes can be also jump targets, but only when a relative target index is used (since they do not support node ids). """ def __init__(self, flow, verbose=False, **kwargs): kwargs["crash_recovery"] = False super(BiFlow, self).__init__(flow=flow, verbose=verbose, **kwargs) ### Basic Methods from Flow. ### def train(self, data_iterables, msg_iterables=None, stop_messages=None): """Train the nodes in the flow. 
The nodes will be trained according to their place in the flow. data_iterables -- Sequence of iterables with the training data for each trainable node. Can also be a single array or None. Note that iterables yielding tuples for additonal node arguments (e.g. the class labels for an FDANode) are not supported in a BiFlow. Instead use the BiNode version of the node and provide the arguments in the message (via msg_iterables). msg_iterables -- Sequence of iterables with the msg training data for each trainable node. stop_messages -- Sequence of messages for stop_training. Note that the type and iterator length of the data iterables is taken as reference, so the message iterables are assumed to have the same length. """ # Note: When this method is updated BiCheckpointFlow should be updated # as well. self._bi_reset() # normaly not required, just for safety data_iterables, msg_iterables = self._sanitize_training_iterables( data_iterables=data_iterables, msg_iterables=msg_iterables) if stop_messages is None: stop_messages = [None] * len(data_iterables) # train each Node successively for i_node in range(len(self.flow)): if self.verbose: print ("training node #%d (%s)" % (i_node, str(self.flow[i_node]))) self._train_node(data_iterables[i_node], i_node, msg_iterables[i_node], stop_messages[i_node]) if self.verbose: print "training finished" def _train_node(self, iterable, nodenr, msg_iterable=None, stop_msg=None): """Train a particular node. nodenr -- index of the node to be trained msg_iterable -- optional msg data for the training Note that the msg is only passed to the Node if it is an instance of BiNode. stop_msg -- optional msg data for stop_training Note that the message is only passed to the Node if the msg is not None, so for a normal node the msg has to be None. Note: unlike the normal mdp.Flow we do no exception handling here. """ if not self.flow[nodenr].is_trainable(): return iterable, msg_iterable, _ = self._sanitize_iterables(iterable, msg_iterable) while True: if not self.flow[nodenr].get_remaining_train_phase(): break self._train_node_single_phase(iterable, nodenr, msg_iterable, stop_msg) self._bi_reset() def _train_node_single_phase(self, iterable, nodenr, msg_iterable, stop_msg=None): """Perform a single training phase for a given node. This method should be only called internally in BiFlow. """ empty_iterator = True for (x, msg) in itertools.izip(iterable, msg_iterable): empty_iterator = False ## execute the flow until the nodes return value is right i_node = 0 while True: result = self._execute_seq(x, msg, i_node=i_node, stop_at_node=nodenr) ## check the execution result, target should be True if (not isinstance(result, tuple)) or (len(result) != 3): err = ("The Node to be trained was not reached " + "during training, last result: " + str(result)) raise BiFlowException(err) elif result[2] is True: x = result[0] msg = result[1] else: err = ("Target node not found in flow during " + "training, last target value: " + str(result[2])) raise BiFlowException(err) ## perform node training if isinstance(self.flow[nodenr], BiNode): result = self.flow[nodenr].train(x, msg) if result is None: # training is done for this chunk break else: try: self.flow[nodenr].train(x) except TypeError: # check if error is caused by additional node arguments train_arg_keys = self._get_required_train_args( self.flow[nodenr]) if len(train_arg_keys): err = ("The node '%s' " % str(self.flow[nodenr]) + "requires additional training " + " arguments, which is not supported in a " + "BiFlow. 
Instead use the BiNode version " + "of the node and put the arguments in " + "the msg.") raise BiFlowException(err) else: raise break ## training execution continues, interpret result if not isinstance(result, tuple): x = result msg = None target = None elif len(result) == 2: x, msg = result target = None elif len(result) == 3: x, msg, target = result else: err = ("Node produced invalid return value " + "during training: " + str(result)) raise BiFlowException(err) i_node = self._target_to_index(target, nodenr) self._bi_reset() if empty_iterator: if self.flow[nodenr].get_current_train_phase() == 1: err_str = ("The training data iteration for node " "no. %d could not be repeated for the " "second training phase, you probably " "provided an iterable instead of an " "iterable." % (nodenr+1)) raise BiFlowException(err_str) else: err = ("The training data iterable for node " "no. %d is empty." % (nodenr+1)) raise BiFlowException(err) ## stop_training part # unlike the normal mdp.Flow we always close the training # to perform the stop_training phase self._stop_training_hook() if stop_msg is None: result = self.flow[nodenr].stop_training() else: result = self.flow[nodenr].stop_training(stop_msg) if result is None: # the training phase ends here without an execute phase return # start an execution phase if not isinstance(result, tuple): x = result msg = None target = None elif len(result) == 2: x, msg = result target = None elif len(result) == 3: x, msg, target = result if target == EXIT_TARGET: return else: err = ("Node produced invalid return value " + "for stop_training: " + str(result)) raise BiFlowException(err) i_node = self._target_to_index(target, nodenr) result = self._execute_seq(x, msg, i_node=i_node) # check that we reached the end of flow or get EXIT_TARGET, # only complain if the target was not found if isinstance(result, tuple) and len(result) == 3: target = result[2] if target not in [1, -1, EXIT_TARGET]: err = ("Target node not found in flow during " + "stop_training phase, last target value: " + str(target)) raise BiFlowException(err) def execute(self, iterable, msg_iterable=None, target_iterable=None): """Execute the flow and return (y, msg). Note that the returned msg can be an empty dict, but not None. iterable -- Can be an iterable or iterator for arrays, a single array or None. In the last two cases it is assumed that msg is a single message as well. msg_iterable -- Can be an iterable or iterator or a single message (but only if iterable is a single array or None). target_iterable -- Like msg_iterable, but for target. Note that the type and iteration length of iterable is taken as reference, so msg is assumed to have the same length. If msg results are found and if iteration is used then the BiFlow tries to join the msg results (and concatenate in the case of arrays). 
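A hypothetical usage sketch (the nodes and the 'alpha' key are only illustrative; it is assumed that one of the nodes accepts an 'alpha' keyword argument in its _execute method):

    biflow = BiFlow([node1, node2])
    # single data array together with a single message
    y, out_msg = biflow.execute(x, {"alpha": 0.3})
    # iteration: one message per data chunk, the msg results are joined
    y, out_msg = biflow.execute([x1, x2], [{"alpha": 0.3}, {"alpha": 0.5}])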
""" self._bi_reset() # normaly not required, just for safety iterable, msg_iterable, target_iterable = \ self._sanitize_iterables(iterable, msg_iterable, target_iterable) y_results = None msg_results = MessageResultContainer() empty_iterator = True for (x, msg, target) in itertools.izip(iterable, msg_iterable, target_iterable): empty_iterator = False if not target: i_node = 0 else: i_node = self._target_to_index(target) result = self._execute_seq(x=x, msg=msg, i_node=i_node) if not isinstance(result, tuple): y = result msg = None elif (len(result) == 2): y, msg = result elif (len(result) == 3) and (result[2] in [1, -1, EXIT_TARGET]): # target -1 is allowed for easier inverse handling y, msg = result[:2] elif len(result) == 3: err = ("Target node not found in flow during execute," + " last result: " + str(result)) raise BiFlowException(err) else: err = ("BiNode execution returned invalid result type: " + result) raise BiFlowException(err) self._bi_reset() if msg: msg_results.add_message(msg) # check if all y have the same type and store it # note that the checks for msg are less restrictive if y is not None: if y_results is None: y_results = [y] elif y_results is False: err = "Some but not all y return values were None." raise BiFlowException(err) else: y_results.append(y) else: if y_results is None: y_results = False else: err = "Some but not all y return values were None." raise BiFlowException(err) if empty_iterator: err = ("The execute data iterable is empty.") raise BiFlowException(err) # consolidate results if y_results: y_results = n.concatenate(y_results) result_msg = msg_results.get_message() return y_results, result_msg def __call__(self, iterable, msg_iterable=None): """Calling an instance is equivalent to call its 'execute' method.""" return self.execute(iterable, msg_iterable=msg_iterable) ### New Methods for BiMDP. ### def _bi_reset(self): """Reset the nodes and internal flow variables.""" for node in self.flow: if isinstance(node, BiNode): node.bi_reset() def _request_node_id(self, node_id): """Return first hit of _request_node_id on internal nodes. So _request_node_id is called for all nodes in the flow until a return value is not None. If no such node is found the return value is None. """ for node in self.flow: if isinstance(node, BiNode): found_node = node._request_node_id(node_id) if found_node: return found_node return None ## container special methods to support node_id def __getitem__(self, key): if isinstance(key, str): item = self._request_node_id(key) if item is None: err = ("This biflow contains no node with with the id " + str(key)) raise KeyError(err) return item else: return super(BiFlow, self).__getitem__(key) def __setitem__(self, key, value): if isinstance(key, str): err = "Setting nodes by node_id is not supported." raise BiFlowException(err) else: super(BiFlow, self).__setitem__(key, value) def __delitem__(self, key): if isinstance(key, str): err = "Deleting nodes by node_id is not supported." 
raise BiFlowException(err) else: super(BiFlow, self).__delitem__(key) def __contains__(self, key): if isinstance(key, str): if self._request_node_id(key) is not None: return True else: return False else: return super(BiFlow, self).__contains__(key) ### Flow Implementation Methods ### def _sanitize_training_iterables(self, data_iterables, msg_iterables): """Check and adjust the training iterable list.""" if data_iterables is None: if msg_iterables is None: err = ("Both the training data and the training messages are " "None.") raise BiFlowException(err) else: data_iterables = [None] * len(self.flow) elif isinstance(data_iterables, n.ndarray): data_iterables = [[data_iterables]] * len(self.flow) # the form of msg_iterables follows that of data_iterables msg_iterables = [[msg_iterables]] * len(data_iterables) else: data_iterables = self._train_check_iterables(data_iterables) if msg_iterables is None: msg_iterables = [None] * len(self.flow) else: msg_iterables = self._train_check_iterables(msg_iterables) return data_iterables, msg_iterables def _sanitize_iterables(self, iterable, msg_iterable, target_iterable=None): """Check and adjust a data, message and target iterable.""" # TODO: maybe add additional checks if isinstance(iterable, n.ndarray): iterable = [iterable] msg_iterable = [msg_iterable] target_iterable = [target_iterable] elif iterable is None: if msg_iterable is None: err = "Both the data and the message iterable is None." raise BiFlowException(err) else: iterable = NoneIterable() if isinstance(msg_iterable, dict): msg_iterable = [msg_iterable] target_iterable = [target_iterable] else: if msg_iterable is None: msg_iterable = NoneIterable() if target_iterable is None: target_iterable = NoneIterable() return iterable, msg_iterable, target_iterable def _target_to_index(self, target, current_node=0): """Return the absolute node index of the target code. If the string id target node is not found in this flow then the string is returned without alteration. When a relative index is given it is translated to the absolute index and it is checked if it is in the allowed range. target -- Can be a string node id, a relative index or None (which is interpreted as 1). current_node -- If target is specified as a relative index then this node index is used to translate the target to the absolute node index (otherwise it has no effect). check_bounds -- If False then it is not checked wether the node index is in range(len(flow)). """ if target == EXIT_TARGET: return EXIT_TARGET if target is None: target = 1 if not isinstance(target, int): for i_node, node in enumerate(self.flow): if isinstance(node, BiNode) and node._request_node_id(target): return i_node # no matching node was found return target else: absolute_index = current_node + target if absolute_index < 0: err = "Target int value references node at position < 0." raise BiFlowException(err) elif absolute_index >= len(self.flow): err = ("Target int value references a node" " beyond the flow length (target " + str(target) + ", current node " + str(current_node) + ").") raise BiFlowException(err) return absolute_index # TODO: update docstring for case when target is not found def _execute_seq(self, x, msg=None, i_node=0, stop_at_node=None): """Execute the whole flow as far as possible. i_node -- Can specify a node index where the excecution is supposed to start. stop_at_node -- Node index where the execution should stop. The input values for this node are returned in this case in the form (x, msg, target) with target being set to True. 
If the end of the flow is reached then the return value is y or (y, msg). If the an execution target node is not found then (x, msg, target) is returned (target values of 1 and -1 are also possible). If a normal Node (not derived from BiNode) is encountered then the current msg is simply carried forward around it. """ ## this method is also used by other classes, like BiFlowNode while i_node != stop_at_node: if isinstance(self.flow[i_node], BiNode): result = self.flow[i_node].execute(x, msg) # check the type of the result if type(result) is not tuple: x = result msg = None target = 1 elif len(result) == 2: x, msg = result target = 1 elif len(result) == 3: x, msg, target = result else: err = ("BiNode execution returned invalid result type: " + result) raise BiFlowException(err) else: # just a normal MDP node x = self.flow[i_node].execute(x) # note that the message is carried forward unchanged target = 1 ## check if the target is in this flow, return otherwise if isinstance(target, int): i_node = i_node + target # values of +1 and -1 beyond this flow are allowed if i_node == len(self.flow): if not msg: return x else: return (x, msg) elif i_node == -1: return x, msg, -1 else: i_node = self._target_to_index(target, i_node) if not isinstance(i_node, int): # target not found in this flow # this is also the exit point when EXIT_TARGET is given return x, msg, target # reached stop_at_node, signal this by returning target value True return x, msg, True ### Some useful flow classes. ### class BiCheckpointFlow(BiFlow, mdp.CheckpointFlow): """Similar to normal checkpoint flow. The main difference is that even the last training phase of a node is already closed before the checkpoint function is called. """ def train(self, data_iterables, checkpoints, msg_iterables=None, stop_messages=None): """Train the nodes in the flow. The nodes will be trained according to their place in the flow. Additionally calls the checkpoint function 'checkpoint[i]' when the training phase of node #i is over. A checkpoint function takes as its only argument the trained node. If the checkpoint function returns a dictionary, its content is added to the instance's dictionary. The class CheckpointFunction can be used to define user-supplied checkpoint functions. """ self._bi_reset() # normaly not required, just for safety data_iterables, msg_iterables = self._sanitize_training_iterables( data_iterables=data_iterables, msg_iterables=msg_iterables) if stop_messages is None: stop_messages = [None] * len(data_iterables) checkpoints = self._train_check_checkpoints(checkpoints) # train each Node successively for i_node in range(len(self.flow)): if self.verbose: print ("training node #%d (%s)" % (i_node, str(self.flow[i_node]))) self._train_node(data_iterables[i_node], i_node, msg_iterables[i_node], stop_messages[i_node]) if i_node <= len(checkpoints) and checkpoints[i_node] is not None: checkpoint_dict = checkpoints[i_node](self.flow[i_node]) if dict: self.__dict__.update(checkpoint_dict) if self.verbose: print "training finished" mdp-3.3/bimdp/binode.py000066400000000000000000000550401203131624700150470ustar00rootroot00000000000000""" Special BiNode class derived from Node to allow complicated flow patterns. Messages: ========= The message argument 'msg' of the outer method 'execute' or 'train' is either a dict or None (which is treated equivalently to an empty dict and is the default value). 
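As a purely hypothetical illustration (the node class and the 'alpha' key are made up, and it is assumed that the node's _execute method accepts an 'alpha' keyword argument; the key forms are explained below):

    node = SomeBiNode(node_id="mynode")
    result = node.execute(x, {"alpha": 0.1})           # plain key, passed as keyword argument
    result = node.execute(x, {"mynode->alpha": 0.1})   # addressed key, extracted only by this node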
The message is automatically parsed against the method signature of _train or _execute (or any other specified method) in the following way: normal key string -- Is copied if in signature and passed as a named argument. node_id->key -- Is extracted (i.e. removed in original message) and passed as a named argument. The separator '->' is also stored available as the constant MSG_ID_SEP. If the key is not an argument of the message then the whole key is simply erased. The msg returned from the inner part of the method (e.g. _execute) is then used to update the original message (so values can be overwritten). If args without default value are missing in the message, this will result in the standard Python missing-arguments-exception (this is not checked by BiNode itself). BiNode Return Value Options: ============================ result for execute: x, (x, msg), (x, msg, target) result for train: None -- terminates training x, (x, msg), (x, msg, target) -- Execution is continued and this node will be reached at a later time to terminate training. If the result has the form (None, msg) then the msg is dropped (so it is not required to 'clear' the message manually). result for stop_training: None -- Simply terminates the training, like for a normal node. x, (x, msg), (x, msg, target) -- Causes an execute like phase, which terminates when the end of the flow is reached or when EXIT_TARGET is given as target value (just like during a normal execute phase). Magic message keys: =================== When the incoming message is parsed by the BiNode base class, some argument keywords are treated in a special way: 'msg' -- If any method like _execute accept a 'msg' keyword then the complete remaining message (after parsing the other keywords) is supplied. The message in the return value then completely replaces the original message (instead of only updating it). This way a node can completely control the message and for example remove keys. 'target' -- If any template method like execute finds a 'target' keyword in the message then this is used as the target value in the return value. However, if _execute then also returns a target value this overwrites the target value. In global_message calls 'target' has no special meaning and can be used like any other keyword. 'method' -- Specify the name of the method that should be used instead of the standard one (e.g. in execute the standard method is _execute). An underscore is automatically added in front, so to select _execute one would have to provide 'execute'. If 'inverse' is given then the inverse dimension check will be performed and if no target is provided it will be set to -1. """ import inspect import mdp # separator for node_id in message keys MSG_ID_SEP = "->" class BiNodeException(mdp.NodeException): """Exception for BiNode problems.""" pass class BiNode(mdp.Node): """Abstract base class for nodes that use bimdp features. This class itself is not non-functional. Derived class should, if necessary, overwrite the _bi_reset method (in addition to the normal mdp.Node methods). Note hat this class can also be used as an Adapter / Mixin for normal nodes. This can for example be useful for nodes which require additional data arguments during training or execution. These can then be encapsulated in a messsage. Note that BiNode has to come first in the MRO to make all this work. """ def __init__(self, node_id=None, stop_result=None, **kwargs): """Initialize BiNode. node_id -- None or string which identifies the node. 
stop_result -- A (msg, target) tupple which is used by stop_training. If _stop_training returns a result as well then is updates / overwrites the stop_result, otherwise simply stop_result is returned (with x set to None). If the node has multiple training phases then stop_result must be None or an iterable with one entry for each training phase. kwargs are forwarded via super to the next __init__ method in the MRO. """ self._node_id = node_id self._stop_result = stop_result self._coroutine_instances = None super(BiNode, self).__init__(**kwargs) ### Modified template methods from mdp.Node. ### def execute(self, x, msg=None): """Return single value y or a result tuple. x can be None, then the usual checks are omitted. The possible return types are y, (y, msg), (y, msg, target) The outgoing msg carries forward the incoming message content. The last entry in a result tuple must not be None. y can be None if the result is a tuple. This template method normally calls the corresponding _execute method or another method as specified in the message (using the magic 'method' key. """ if msg is None: if x is None: err = "Both x and msg are None." raise BiNodeException(err) return super(BiNode, self).execute(x) msg_id_keys = self._get_msg_id_keys(msg) target = self._extract_message_key("target", msg, msg_id_keys) method_name = self._extract_message_key("method", msg, msg_id_keys) method, target = self._get_method(method_name, self._execute, target) msg, arg_dict = self._extract_method_args(method, msg, msg_id_keys) # perform specific checks if x is not None: if (not method_name) or (method_name == "execute"): self._pre_execution_checks(x) x = self._refcast(x) # testing for the actual method allows nodes to delegate method # resolution to internal nodes by manipulating _get_method elif method == self._inverse: self._pre_inversion_checks(x) result = method(x, **arg_dict) return self._combine_result(result, msg, target) def train(self, x, msg=None): """Train and return None or more if the execution should continue. The possible return types are None, y, (y, msg), (y, msg, target). The last entry in a result tuple must not be None. y can be None if the result is a tuple. This template method normally calls the corresponding _train method or another method as specified in the message (using the magic 'method' key. Note that the remaining msg and taret values are only used if _train (or the requested method) returns something different from None (so an empty dict can be used to trigger continued execution). """ # perform checks, adapted from Node.train if not self.is_trainable(): raise mdp.IsNotTrainableException("This node is not trainable.") if not self.is_training(): err = "The training phase has already finished." raise mdp.TrainingFinishedException(err) if msg is None: if x is None: err = "Both x and msg are None." 
raise BiNodeException(err) # no fall-back on Node.train because we might have a return value self._check_input(x) try: self._check_train_args(x) except TypeError: err = ("%s training seems to require " % str(self) + "additional arguments, but none were given.") raise BiNodeException(err) self._train_phase_started = True x = self._refcast(x) return self._train_seq[self._train_phase][0](x) msg_id_keys = self._get_msg_id_keys(msg) target = self._extract_message_key("target", msg, msg_id_keys) method_name = self._extract_message_key("method", msg, msg_id_keys) default_method = self._train_seq[self._train_phase][0] method, target = self._get_method(method_name, default_method, target) msg, arg_dict = self._extract_method_args(method, msg, msg_id_keys) # perform specific checks if x is not None: if (not method_name) or (method_name == "train"): self._check_input(x) try: self._check_train_args(x, **arg_dict) except TypeError: err = ("The given additional arguments %s " % str(arg_dict.keys()) + "are not compatible with training %s." % str(self)) raise BiNodeException(err) self._train_phase_started = True x = self._refcast(x) elif method == self._inverse: self._pre_inversion_checks(x) result = method(x, **arg_dict) if result is None: return None result = self._combine_result(result, msg, target) if (isinstance(result, tuple) and len(result) == 2 and result[0] is None): # drop the remaining msg, so that no maual clearing is required return None return result def stop_training(self, msg=None): """Stop training phase and start an execute phase with a target. The possible return types are None, y, (y, msg), (y, msg, target). For None nothing more happens, the training phase ends like for a standard MDP node. If a return value is given then an excute phase is started. This template method normally calls a _stop_training method from self._train_seq. If a stop_result was given in __init__ then it is used but can be overwritten by the returned _stop_training result or by the msg argument provided by the BiFlow. """ # basic checks if self.is_training() and self._train_phase_started == False: raise mdp.TrainingException("The node has not been trained.") if not self.is_training(): err = "The training phase has already finished." raise mdp.TrainingFinishedException(err) # call stop_training if not msg: result = self._train_seq[self._train_phase][1]() target = None else: msg_id_keys = self._get_msg_id_keys(msg) target = self._extract_message_key("target", msg, msg_id_keys) method_name = self._extract_message_key("method", msg, msg_id_keys) default_method = self._train_seq[self._train_phase][1] method, target = self._get_method(method_name, default_method, target) msg, arg_dict = self._extract_method_args(method, msg, msg_id_keys) result = method(**arg_dict) # close the current phase self._train_phase += 1 self._train_phase_started = False # check if we have some training phase left if self.get_remaining_train_phase() == 0: self._training = False # use stored stop message and update it with the result if self._stop_result: if self.has_multiple_training_phases(): stored_stop_result = self._stop_result[self._train_phase - 1] else: stored_stop_result = self._stop_result # make sure that the original dict in stored_stop_result is not # modified (this could have unexpected consequences in some cases) stored_msg = stored_stop_result[0].copy() if msg: stored_msg.update(msg) msg = stored_msg if target is None: target = stored_stop_result[1] return self._combine_result(result, msg, target) ## Additional new methods. 
## @property def node_id(self): """Return the node id (should be string) or None.""" return self._node_id def bi_reset(self): """Reset the node for the next data chunck. This template method calls the _bi_reset method. This method is automatically called by BiFlow after the processing of a data chunk is completed (during both training and execution). All temporary data should be deleted. The internal node structure can be reset for the next data chunk. This is especially important if this node is called multiple times for a single chunk and an internal state keeps track of the actions to be performed for each call. """ if self._coroutine_instances is not None: # delete the instance attributes to unshadow the coroutine # initialization methods for key in self._coroutine_instances: delattr(self, key) self._coroutine_instances = None self._bi_reset() def _bi_reset(self): """Hook method, overwrite when needed.""" pass def _request_node_id(self, node_id): """Return the node if it matches the provided node id. Otherwise the return value is None. In this default implementation self is returned if node_id == self._node_id. Use this method instead of directly accessing self._node_id. This allows a node to be associated with multiple node_ids. Otherwise node_ids would not work for container nodes like BiFlowNode. """ if self._node_id == node_id: return self else: return None ### Helper methods for msg handling. ### def _get_msg_id_keys(self, msg): """Return the id specific message keys for this node. The format is [(key, fullkey),...]. """ msg_id_keys = [] for fullkey in msg: if fullkey.find(MSG_ID_SEP) > 0: node_id, key = fullkey.split(MSG_ID_SEP) if node_id == self._node_id: msg_id_keys.append((key, fullkey)) return msg_id_keys @staticmethod def _extract_message_key(key, msg, msg_id_keys): """Extract and return the requested key from the message. Note that msg and msg_id_keys are modfied if the found key was node_id specific. """ value = None if key in msg: value = msg[key] # check for node_id specific key and remove it from the msg for i, (_key, _fullkey) in enumerate(msg_id_keys): if key == _key: value = msg.pop(_fullkey) msg_id_keys.pop(i) break return value @staticmethod def _extract_method_args(method, msg, msg_id_keys): """Extract the method arguments form the message. Return the new message and a dict with the keyword arguments (the return of the message is done because it can be set to None). """ arg_keys = inspect.getargspec(method)[0] arg_dict = dict((key, msg[key]) for key in msg if key in arg_keys) for key, fullkey in msg_id_keys: if key in arg_keys: arg_dict[key] = msg.pop(fullkey) else: del msg[fullkey] if "msg" in arg_keys: arg_dict["msg"] = msg msg = None return msg, arg_dict def _get_method(self, method_name, default_method, target=None): """Return the method to be called and the target return value. method_name -- as provided in msg (without underscore) default_method -- bound method object target -- return target value as provided in message or None If the chosen method is _inverse then the default target is -1. """ if not method_name: method = default_method elif method_name == "inverse": method = self._inverse if target is None: target = -1 else: method_name = "_" + method_name try: method = getattr(self, method_name) except AttributeError: err = ("The message requested a method named '%s', but " "there is no such method." 
% method_name) raise BiNodeException(err) return method, target @staticmethod def _combine_result(result, msg, target): """Combine the execution result with the provided values. result -- x, (x, msg) or (x, msg, target) The values in result always has priority. """ # overwrite result values if necessary and return if isinstance(result, tuple): if msg: if result[1]: # combine outgoing msg and remaining msg values msg.update(result[1]) result = (result[0], msg) + result[2:] if (target is not None) and (len(result) == 2): # use given target if no target value was returned result += (target,) return result else: # result is only single array if (not msg) and (target is None): return result elif target is None: return result, msg else: return result, msg, target ### Overwrite Special Methods ### def __repr__(self): """BiNode version of the Node representation, adding the node_id.""" name = type(self).__name__ inp = "input_dim=%s" % str(self.input_dim) out = "output_dim=%s" % str(self.output_dim) if self.dtype is None: typ = 'dtype=None' else: typ = "dtype='%s'" % self.dtype.name node_id = self.node_id if node_id is None: nid = 'node_id=None' else: nid = 'node_id="%s"' % node_id args = ', '.join((inp, out, typ, nid)) return name + '(' + args + ')' def __add__(self, other): """Adding binodes returns a BiFlow. If a normal Node or Flow is added to a BiNode then a BiFlow is returned. Note that if a flow is added then a deep copy is used (deep copies of the nodes are used). """ # unfortunately the inline imports are required to avoid # a cyclic import (unless one adds a helper function somewhere else) if isinstance(other, mdp.Node): import bimdp return bimdp.BiFlow([self, other]) elif isinstance(other, mdp.Flow): flow_copy = other.copy() import bimdp biflow = bimdp.BiFlow([self.copy()] + flow_copy.flow) return biflow else: # can delegate old cases return super(BiNode, self).__add__(other) class PreserveDimBiNode(BiNode, mdp.PreserveDimNode): """BiNode version of the PreserveDimNode.""" pass ### Helper Functions / Decorators ### def binode_coroutine(args=None, defaults=()): """Decorator for the convenient definition of BiNode couroutines. This decorator takes care of all the boilerplate code to use a coroutine as a BiNode method for continuations (which is more elegant and convenient than using a a state machine implementation). args -- List of string names of the additional arguments. Note that the standard 'x' array is always given as the first value. So if n args are requested the yield will return n+1 values. defaults -- Tuple of default values for the arguments. If this tuple has n elements, they correspond to the last n elements in 'args' (following the convention of inspect.getargspec). Internally there are three methods/functions: - The user defined function containing the original coroutine code. This is only stored in the decorator closure. - A new method ('_coroutine_initialization') with the name and signature of the decorated coroutine, which internally handles the first initialization of the coroutine instance. This method is returned by the decorator. - A method with the signature specified by the 'args' for the decorator. After the coroutine has been initialized this method shadows the initialization method in the class instance (using an instance attribute to shadow the class attribute). 
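A hypothetical usage sketch (the node, the 'alpha' argument and the computation are made up):

    class ToyBiNode(BiNode):

        @binode_coroutine(["alpha"])
        def _execute(self, x, alpha):
            # first call: pass x on and request to be called again
            # (target 0 refers to this node itself)
            x, alpha = yield x, {"alpha": alpha}, 0
            # second call: the coroutine resumes here with the new arguments
            # and yields the final result like a normal return value
            yield x * alpha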
""" if args is None: args = ["self", "x"] else: args = ["self", "x"] + args def _binode_coroutine(coroutine): # the original coroutine is only stored in this closure infodict = mdp.NodeMetaclass._function_infodict(coroutine) original_name = infodict["name"] ## create the coroutine interface method def _coroutine_interface(self, *args): try: return self._coroutine_instances[original_name].send(args) except StopIteration, exception: delattr(self, original_name) del self._coroutine_instances[original_name] if len(exception.args): return exception.args else: return None # turn the signature into the one specified by the args interface_infodict = infodict.copy() interface_infodict["signature"] = ", ".join(args) interface_infodict["defaults"] = defaults coroutine_interface = mdp.NodeMetaclass._wrap_function( _coroutine_interface, interface_infodict) ## create the initialization method def _coroutine_initialization(self, *args): coroutine_instance = coroutine(self, *args) bound_coroutine_interface = coroutine_interface.__get__( self, self.__class__) if self._coroutine_instances is None: self._coroutine_instances = dict() self._coroutine_instances[original_name] = coroutine_instance setattr(self, original_name, bound_coroutine_interface) try: return coroutine_instance.next() except StopIteration, exception: delattr(self, original_name) del self._coroutine_instances[original_name] if len(exception.args): return exception.args else: return None coroutine_initialization = mdp.NodeMetaclass._wrap_function( _coroutine_initialization, infodict) return coroutine_initialization return _binode_coroutine mdp-3.3/bimdp/hinet/000077500000000000000000000000001203131624700143405ustar00rootroot00000000000000mdp-3.3/bimdp/hinet/__init__.py000066400000000000000000000003611203131624700164510ustar00rootroot00000000000000 from biflownode import BiFlowNode from bilayer import CloneBiLayerException, CloneBiLayer from biswitchboard import * from bihtmlvisitor import BiHiNetHTMLVisitor, show_biflow del biflownode del bilayer del biswitchboard del bihtmlvisitor mdp-3.3/bimdp/hinet/biflownode.py000066400000000000000000000241541203131624700170500ustar00rootroot00000000000000 import mdp import mdp.hinet as hinet n = mdp.numx from bimdp import BiNode, BiNodeException from bimdp import BiFlow, BiFlowException # TODO: add derived BiFlowNode which allow specification message flag for # BiFlowNode to specify the internal target? Or hardwired target? class BiFlowNode(BiNode, hinet.FlowNode): """BiFlowNode wraps a BiFlow of Nodes into a single BiNode. This is handy if you want to use a flow where a Node is required. Additional args and kwargs for train and execute are supported. Note that for nodes in the internal flow the intermediate training phases will generally be closed, e.g. a CheckpointSaveFunction should not expect these training phases to be left open. All the read-only container slots are supported and are forwarded to the internal flow. """ def __init__(self, biflow, input_dim=None, output_dim=None, dtype=None, node_id=None): """Wrap the given BiFlow into this node. Pretrained nodes are allowed, but the internal _flow should not be modified after the BiFlowNode was created (this will cause problems if the training phase structure of the internal nodes changes). The node dimensions do not have to be specified. Unlike in a normal FlowNode they cannot be extracted from the nodes and are left unfixed. The data type is left unfixed as well. 
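A hypothetical construction sketch (the choice of internal nodes is only illustrative):

    biflownode = BiFlowNode(BiFlow([mdp.nodes.PCANode(), mdp.nodes.SFANode()]))
    # the wrapped flow can now be used wherever a single node is expected,
    # e.g. inside an mdp.hinet.Layer or as part of another (Bi)Flow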
""" if not isinstance(biflow, BiFlow): raise BiNodeException("The biflow has to be an BiFlow instance.") super(BiFlowNode, self).__init__(flow=biflow, input_dim=input_dim, output_dim=output_dim, dtype=dtype, node_id=node_id) # last successful request for target node_id self._last_id_request = None def _get_target(self): """Return the last successfully requested target node_id. The stored target is then reset to None. If no target is stored (i.e. if it is None) then 0 is returned. """ if self._last_id_request: target = self._last_id_request self._last_id_request = None return target else: return 0 return 0 def _get_method(self, method_name, default_method, target): """Return the default method and the target. This method overrides the standard BiNode _get_method to delegate the method selection to the internal nodes. If the method_name is 'inverse' then adjustments are made so that the last internal node is called. """ if method_name == "inverse": if self._last_id_request is None: if target == -1: target = None if target is None: self._last_id_request = len(self._flow) - 1 return default_method, target def _execute(self, x, msg=None): target = self._get_target() i_node = self._flow._target_to_index(target) # we know that _get_target returned a valid target, so no check return self._flow._execute_seq(x, msg, i_node) def _get_execute_method(self, x, method_name, target): """Return _execute and the provided target. The method selection is done in the contained nodes. """ return self._execute, target def _get_train_seq(self): """Return a training sequence containing all training phases.""" train_seq = [] for i_node, node in enumerate(self._flow): if node.is_trainable(): remaining_len = (len(node._get_train_seq()) - self._pretrained_phase[i_node]) train_seq += ([(self._get_train_function(i_node), self._get_stop_training_function(i_node))] * remaining_len) # if the last node is trainable we have to set the output dimensions # to those of the BiFlowNode. if self._flow[-1].is_trainable(): train_seq[-1] = (train_seq[-1][0], self._get_stop_training_wrapper(self._flow[-1], train_seq[-1][1])) return train_seq ## Helper methods for _get_train_seq. ## def _get_train_function(self, nodenr): """Internal function factory for train. nodenr -- the index of the node to be trained """ # This method is similar to the first part of # BiFlow._train_node_single_phase. 
def _train(x, msg=None): target = self._get_target() i_node = self._flow._target_to_index(target) ## loop until we have to go outside or complete train while True: ## execute flow before training node result = self._flow._execute_seq(x, msg, i_node=i_node, stop_at_node=nodenr) if (isinstance(result, tuple) and len(result) == 3 and result[2] is True): # we have reached the training node x = result[0] msg = result[1] i_node = nodenr # have to update this manually else: # flownode should be reentered later return result ## perform node training if isinstance(self._flow[nodenr], BiNode): result = self._flow[nodenr].train(x, msg) if result is None: return None # training is done for this chunk else: self._flow[nodenr].train(x) return None ## training execution continues, interpret result if not isinstance(result, tuple): x = result msg = None target = None elif len(result) == 2: x, msg = result target = None elif len(result) == 3: x, msg, target = result else: # reaching this is probably an error, leave the handling # to the outer flow return result ## check if the target is in this flow, return otherwise if isinstance(target, int): i_node = i_node + target # values of +1 and -1 beyond this flow are allowed if i_node == len(self._flow): if not msg: return x else: return (x, msg) elif i_node == -1: return x, msg, -1 else: i_node = self._flow._target_to_index(target, i_node) if not isinstance(i_node, int): # target not found in this flow # this is also the exit point when EXIT_TARGET is given return x, msg, target # return the custom _train function return _train def _get_stop_training_function(self, nodenr): """Internal function factory for stop_training. nodenr -- the index of the node for which the training stops """ # This method is similar to the second part of # BiFlow._train_node_single_phase. def _stop_training(msg=None): if isinstance(self._flow[nodenr], BiNode): result = self._flow[nodenr].stop_training(msg) else: # for a non-bi Node the msg is dropped result = self._flow[nodenr].stop_training() # process stop_training result if result is None: return None # prepare execution phase if not isinstance(result, tuple): x = result msg = None target = None elif len(result) == 2: x, msg = result target = None elif len(result) == 3: x, msg, target = result else: err = ("Node produced invalid return value " + "for stop_training: " + str(result)) raise BiFlowException(err) if isinstance(target, int): i_node = nodenr + target # values of +1 and -1 beyond this flow are allowed if i_node == len(self._flow): return x, msg, 1 elif i_node == -1: return x, msg, -1 else: i_node = self._flow._target_to_index(target, nodenr) if not isinstance(i_node, int): # target not found in this flow # this is also the exit point when EXIT_TARGET is given return x, msg, target return self._flow._execute_seq(x, msg, i_node=i_node) # return the custom _stop_training function return _stop_training def _get_stop_training_wrapper(self, node, func): """Return wrapper for stop_training to set BiFlowNode outputdim.""" # We have to overwrite the version from FlowNode to take care of the # optional return value. 
def _stop_training_wrapper(*args, **kwargs): result = func(*args, **kwargs) self.output_dim = node.output_dim return result return _stop_training_wrapper ### Special BiNode methods ### def _bi_reset(self): self._last_id_request = None for node in self._flow: if isinstance(node, BiNode): node.bi_reset() def _request_node_id(self, node_id): if self._node_id == node_id: return self for node in self._flow: if isinstance(node, BiNode): found_node = node._request_node_id(node_id) if found_node: self._last_id_request = node_id return found_node return None mdp-3.3/bimdp/hinet/bihtmlvisitor.py000066400000000000000000000105701203131624700176140ustar00rootroot00000000000000""" BiNet version of the htmlvisitor hinet module to convert a flow into HTML. """ import tempfile import os import webbrowser import mdp from bimdp import BiNode from bimdp.nodes import SenderBiNode from bimdp.hinet import CloneBiLayer class BiHiNetHTMLVisitor(mdp.hinet.HiNetHTMLVisitor): """Special version of HiNetHTMLVisitor with BiNode support. All bimdp attributes are highligthed via the span.bicolor css tag. """ _BIHINET_STYLE = """ span.bicolor { color: #6633FC; } """ @classmethod def hinet_css(cls): """Return the standard CSS string. The CSS should be embedded in the final HTML file. """ css = super(BiHiNetHTMLVisitor, cls).hinet_css() return css + cls._BIHINET_STYLE def _translate_clonelayer(self, clonelayer): """This specialized version checks for CloneBiLayer.""" f = self._file self._open_node_env(clonelayer, "layer") f.write('') f.write(str(clonelayer) + '

<br><br>') f.write('%d repetitions' % len(clonelayer)) if isinstance(clonelayer, CloneBiLayer): f.write('<br><br>
') f.write('') f.write('use copies: %s' % str(clonelayer.use_copies)) f.write('') f.write('') self._visit_node(clonelayer.nodes[0]) f.write('') self._close_node_env(clonelayer) def _write_node_header(self, node, type_id="node"): """Write the header content for the node into the HTML file.""" f = self._file if type_id == "flow": pass elif type_id == "flownode": if isinstance(node, BiNode): f.write('') f.write('id: %s' % node._node_id) f.write('') else: f.write('in-dim: %s' % str(node.input_dim)) if isinstance(node, BiNode): f.write('  id: %s' % node._node_id) f.write('') f.write('') f.write('') @mdp.extension_method("html", SenderBiNode) def _html_representation(self): return ('recipient id: %s
' % str(self._recipient_id)) #@mdp.extension_method("html_representation", BiNode) #def _html_representation(self): # html_repr = super(BiNode, self).html_representation() # if self._stop_result: # html_repr = [html_repr, # 'stop_msg: %s
' # % str(self._stop_result)] # return html_repr ## Helper functions ## def show_biflow(flow, filename=None, title="MDP flow display", show_size=False, browser_open=True): """Write a flow with BiMDP nodes into a HTML file, open it in the browser and return the file name. Compared the the non-bi function this provides special decoration for BiNode attributes. flow -- The flow to be shown. filename -- Filename for the HTML file to be created. If None a temporary file is created. title -- Title for the HTML file. show_size -- Show the approximate memory footprint of all nodes. browser_open -- If True (default value) then the slideshow file is automatically opened in a webbrowser. """ if filename is None: (fd, filename) = tempfile.mkstemp(suffix=".html", prefix="MDP_") html_file = os.fdopen(fd, 'w') else: html_file = open(filename, 'w') html_file.write('\n\n%s\n' % title) html_file.write('\n\n\n') html_file.write('

<h3>%s</h3>
\n' % title) explanation = '(data flows from top to bottom)' html_file.write('%s\n' % explanation) html_file.write('
<br><br><br>
\n') converter = BiHiNetHTMLVisitor(html_file, show_size=show_size) converter.convert_flow(flow=flow) html_file.write('\n') html_file.close() if browser_open: webbrowser.open(os.path.abspath(filename)) return filename mdp-3.3/bimdp/hinet/bilayer.py000066400000000000000000000316101203131624700163420ustar00rootroot00000000000000 import mdp import mdp.hinet as hinet n = mdp.numx from bimdp import BiNode, BiNodeException class CloneBiLayerException(BiNodeException): """CloneBiLayer specific exception.""" pass class CloneBiLayer(BiNode, hinet.CloneLayer): """BiMDP version of CloneLayer. Since all the nodes in the layer are identical, it is guaranteed that the target identities match. The outgoing data on the other hand is not checked. So if the notes return different kinds of results the overall result is very unpredictable. The incoming data is split into len(self.nodes) parts, so the actual chunk size does not matter as long as it is compatible with this scheme. This also means that this class can deal with incoming data from a BiSwitchboard that is being send down. Arrays in the message are split up if they can be evenlty split into len(self.nodes) parts along the second axis, otherwise they are put into each node message. Arrays in the outgoing message are joined along the second axis (unless they are the same unsplit array), so if an array is accidently split no harm should be done (there is only some overhead). Note that a msg is always passed to the internal nodes, even if the Layer itself was targeted. Additional target resolution can then happen in the internal node (e.g. like it is done in the standard BiFlowNode). Both incomming and outgoing messages are automatically checked for the the use_copies msg key. """ def __init__(self, node, n_nodes=1, use_copies=False, node_id=None, dtype=None): """Initialize the internal variables. node -- Node which makes up the layer. n_nodes -- Number of times the node is repeated in this layer. use_copies -- Determines if a single instance or copies of the node are used. """ super(CloneBiLayer, self).__init__(node_id=node_id, node=node, n_nodes=n_nodes, dtype=dtype) # (self.node is None) is used as flag for self.use_copies self.use_copies = use_copies use_copies = property(fget=lambda self: self._get_use_copies(), fset=lambda self, flag: self._set_use_copies(flag)) def _get_use_copies(self): """Return the use_copies flag.""" return self.node is None def _set_use_copies(self, use_copies): """Switch internally between using a single node instance or copies. In a normal CloneLayer a single node instance is used to represent all the horizontally aligned nodes. But in a BiMDP where the nodes store temporary data this may not work. Via this method one can therefore create copies of the single node instance. This method can also be triggered by the use_copies msg key. 
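A hypothetical sketch of both ways to trigger the switch (the node is only a placeholder):

    layer = CloneBiLayer(some_binode, n_nodes=3, use_copies=False)
    # switch from the shared node instance to independent copies
    layer.use_copies = True
    # the same switch can be requested during training or execution by
    # including {'use_copies': True} in the message passed through the flow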
""" if use_copies and (self.node is not None): # switch to node copies self.nodes = [self.node.copy() for _ in range(len(self.nodes))] self.node = None # disable single node while copies are used elif (not use_copies) and (self.node is None): # switch to a single node instance if self.is_training(): err = ("Calling switch_to_instance during training will " "probably result in lost training data.") raise CloneBiLayerException(err) elif self.is_bi_training(): err = ("Calling switch_to_instance during bi_learning will " "probably result in lost learning data.") raise CloneBiLayerException(err) self.node = self.nodes[0] self.nodes = [self.node] * len(self.nodes) def _get_method(self, method_name, default_method, target): """Return the default method and the unaltered target. This method overrides the standard BiNode _get_method to delegate the method selection to the internal nodes. """ return default_method, target ## standard node methods ## def _check_input(self, x): """Input check is disabled. It will be checked by the targeted internal node. """ pass def _execute(self, x, msg=None): """Process the data through the internal nodes.""" if msg is not None: self._extract_message_copy_flag(msg) y_results = [] msg_results = [] target = None node_msgs = self._get_split_messages(msg) if x is not None: # use the dimension of x, because this also works for inverse node_dim = x.shape[1] // len(self.nodes) else: node_dim = None for i_node, node in enumerate(self.nodes): if node_dim: node_x = x[:, node_dim*i_node : node_dim*(i_node+1)] else: node_x = None node_msg = node_msgs[i_node] if node_msg: node_result = node.execute(node_x, node_msg) else: node_result = node.execute(node_x) ## store result if not isinstance(node_result, tuple): y_results.append(node_result) else: y_results.append(node_result[0]) msg_results.append(node_result[1]) if len(node_result) == 3: target = node_result[2] ## combine message results msg = self._get_combined_message(msg_results) if (not y_results) or (y_results[-1] is None): y = None else: y = n.hstack(y_results) # check outgoing message for use_copies key if msg is not None: self._extract_message_copy_flag(msg) ## return result if target is not None: return (y, msg, target) elif msg: return (y, msg) else: return y def _train(self, x, msg=None): """Perform single training step by training the internal nodes.""" ## this code is mostly identical to the execute code, ## currently the only difference is that train is called if msg is not None: self._extract_message_copy_flag(msg) y_results = [] msg_results = [] target = None node_msgs = self._get_split_messages(msg) if x is not None: # use the dimension of x, because this also works for inverse node_dim = x.shape[1] // len(self.nodes) else: node_dim = None for i_node, node in enumerate(self.nodes): if node_dim: node_x = x[:, node_dim*i_node : node_dim*(i_node+1)] else: node_x = None node_msg = node_msgs[i_node] if node_msg: node_result = node.train(node_x, node_msg) else: node_result = node.train(node_x) ## store result if not isinstance(node_result, tuple): y_results.append(node_result) else: y_results.append(node_result[0]) msg_results.append(node_result[1]) if len(node_result) == 3: target = node_result[2] ## combine message results msg = self._get_combined_message(msg_results) if (not y_results) or (y_results[-1] is None): y = None else: y = n.hstack(y_results) # check outgoing message for use_copies key if msg is not None: self._extract_message_copy_flag(msg) ## return result if target is not None: return (y, msg, target) 
elif msg: return (y, msg) else: return y def _stop_training(self, msg=None): """Call stop_training on the internal nodes. The outgoing result message is also searched for a use_copies key, which is then applied if found. """ if msg is not None: self._extract_message_copy_flag(msg) target = None if self.use_copies: ## have to call stop_training for each node y_results = [] msg_results = [] node_msgs = self._get_split_messages(msg) for i_node, node in enumerate(self.nodes): node_msg = node_msgs[i_node] if node_msg: node_result = node.stop_training(node_msg) else: node_result = node.stop_training() ## store result if not isinstance(node_result, tuple): y_results.append(node_result) else: y_results.append(node_result[0]) msg_results.append(node_result[1]) if len(node_result) == 3: target = node_result[2] ## combine message results msg = self._get_combined_message(msg_results) if (not y_results) or (y_results[-1] is None): y = None else: y = n.hstack(y_results) else: ## simple case of a single instance node_result = self.node.stop_training(msg) if not isinstance(node_result, tuple): return node_result elif len(node_result) == 2: y, msg = node_result else: y, msg, target = node_result # check outgoing message for use_copies key if msg is not None: self._extract_message_copy_flag(msg) # return result if target is not None: return (y, msg, target) elif msg: return (y, msg) else: return y ## BiNode methods ## def _bi_reset(self): """Call bi_reset on all the inner nodes.""" if self.use_copies: for node in self.nodes: node.bi_reset() else: # note: reaching this code probably means that copies should be used self.node.bi_reset() def _request_node_id(self, node_id): """Return an internal node if it matches the provided node id. If the node_id matches that of the layer itself, then self is returned. """ if self.node_id == node_id: return self if not self.use_copies: return self.node._request_node_id(node_id) else: # return the first find, but call _request_node_id on all copies # otherwise BiFlowNode._last_id_request would get confused first_found_node = None for node in self.nodes: found_node = node._request_node_id(node_id) if (not first_found_node) and found_node: first_found_node = found_node return first_found_node ## Helper methods for message handling ## def _extract_message_copy_flag(self, msg): """Look for the the possible copy flag and modify the msg if needed. If the copy flag is found the Node is switched accordingly. """ msg_id_keys = self._get_msg_id_keys(msg) copy_flag = self._extract_message_key("use_copies", msg, msg_id_keys) if copy_flag is not None: self.use_copies = copy_flag def _get_split_messages(self, msg): """Return messages for the individual nodes.""" if not msg: return [None] * len(self.nodes) msgs = [dict() for _ in range(len(self.nodes))] n_nodes = len(self.nodes) for (key, value) in msg.items(): if (isinstance(value, n.ndarray) and # check if the array can be split up len(value.shape) >= 2 and not value.shape[1] % n_nodes): # split the data along the second index split_values = n.hsplit(value, n_nodes) for i, split_value in enumerate(split_values): msgs[i][key] = split_value else: for node_msg in msgs: # Note: the value is not copied, just referenced node_msg[key] = value return msgs def _get_combined_message(self, msgs): """Return the combined message. Only keys from the last entry in msgs are used. Only when the value is an array are all the msg values combined. 
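        A small worked example of the intended round trip (illustrative
        shapes, assuming a layer with two internal nodes): an incoming msg
        array of shape (50, 8) is hsplit by _get_split_messages into two
        (50, 4) blocks, one per node, and the two per-node result blocks
        are hstacked back here into a single (50, 8) array.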
""" if (not msgs) or (msgs[-1] is None): return None if len(msgs) == 1: return msgs[0] msg = dict() for (key, one_value) in msgs[-1].items(): other_value = msgs[0][key] if (isinstance(one_value, n.ndarray) and # check if the array was originally split up (len(one_value.shape) >= 2 and one_value is not other_value)): msg[key] = n.hstack([node_msg[key] for node_msg in msgs]) else: # pick the msg value of the last node msg[key] = msgs[-1][key] return msg mdp-3.3/bimdp/hinet/biswitchboard.py000066400000000000000000000121101203131624700175310ustar00rootroot00000000000000import sys import mdp import mdp.hinet as hinet n = mdp.numx from bimdp import BiNode class BiSwitchboard(BiNode, hinet.Switchboard): """BiMDP version of the normal Switchboard. It adds support for stop_message and also tries to apply the switchboard mapping to arrays in the message. The mapping is only applied if the array is at least two dimensional and the second dimension matches the switchboard dimension. """ def __init__(self, **kwargs): """Initialize BiSwitchboard. args and kwargs are forwarded via super to the next __init__ method in the MRO. """ super(BiSwitchboard, self).__init__(**kwargs) if self.inverse_connections is not None: self.down_connections = self.inverse_connections else: # a stable (order preserving where possible) sort is # necessary here, therefore use mergesort # otherwise channels (e.g. for a Rectangular2d...) are mixed up self.down_connections = n.argsort(self.connections, kind="mergesort") def _execute(self, x, msg=None): """Return the routed input data.""" if x is not None: y = super(BiSwitchboard, self)._execute(x) else: y = None msg = self._execute_msg(msg) if not msg: return y else: return y, msg def _inverse(self, x, msg=None): """Return the routed input data.""" if x is not None: y = super(BiSwitchboard, self)._inverse(x) else: y = None msg = self._inverse_msg(msg) if not msg: return y else: return y, msg def is_bi_training(self): return False ## Helper methods ## def _inverse_msg(self, msg): """Inverse routing for msg.""" if not msg: return None out_msg = {} for (key, value) in msg.items(): if (type(value) is n.ndarray and len(value.shape) >= 2 and value.shape[1] == self.output_dim): out_msg[key] = super(BiSwitchboard, self)._inverse(value) else: out_msg[key] = value return out_msg def _execute_msg(self, msg): """Feed-forward routing for msg.""" if not msg: return None out_msg = {} for (key, value) in msg.items(): if (type(value) is n.ndarray and len(value.shape) >= 2 and value.shape[1] == self.input_dim): out_msg[key] = super(BiSwitchboard, self)._execute(value) else: out_msg[key] = value return out_msg ## create BiSwitchboard versions of the standard MDP switchboards ## # corresponding methods for the switchboard_factory extension are # created as well @classmethod def _binode_create_switchboard(cls, free_params, prev_switchboard, prev_output_dim, node_id): """Modified version of create_switchboard to support node_id. This method can be used as a substitute when using the switchboard_factory extension. """ compatible = False for base_class in cls.compatible_pre_switchboards: if isinstance(prev_switchboard, base_class): compatible = True if not compatible: err = ("The prev_switchboard class '%s'" % prev_switchboard.__class__.__name__ + " is not compatible with this switchboard class" + " '%s'." 
% cls.__name__) raise mdp.hinet.SwitchboardException(err) for key, value in free_params.items(): if key.endswith('_xy') and isinstance(value, int): free_params[key] = (value, value) kwargs = cls._get_switchboard_kwargs(free_params, prev_switchboard, prev_output_dim) return cls(node_id=node_id, **kwargs) # TODO: Use same technique as for binodes? # But have to take care of the switchboard_factory extension. # use a function to avoid poluting the namespace def _create_bi_switchboards(): switchboard_classes = [ mdp.hinet.ChannelSwitchboard, mdp.hinet.Rectangular2dSwitchboard, mdp.hinet.DoubleRect2dSwitchboard, mdp.hinet.DoubleRhomb2dSwitchboard, ] current_module = sys.modules[__name__] for switchboard_class in switchboard_classes: node_name = switchboard_class.__name__ binode_name = node_name[:-len("Switchboard")] + "BiSwitchboard" docstring = ("Automatically created BiSwitchboard version of %s." % node_name) docstring = "Automatically created BiNode version of %s." % node_name exec ('class %s(BiSwitchboard, mdp.hinet.%s): "%s"' % (binode_name, node_name, docstring)) in current_module.__dict__ # create appropriate FactoryExtension nodes mdp.extension_method("switchboard_factory", current_module.__dict__[binode_name], "create_switchboard")(_binode_create_switchboard) _create_bi_switchboards() mdp-3.3/bimdp/inspection/000077500000000000000000000000001203131624700154045ustar00rootroot00000000000000mdp-3.3/bimdp/inspection/__init__.py000066400000000000000000000013431203131624700175160ustar00rootroot00000000000000""" Package to inspect biflow training or execution by creating an HTML slideshow. """ from tracer import ( InspectionHTMLTracer, TraceHTMLConverter, TraceHTMLVisitor, TraceDebugException, inspection_css, prepare_training_inspection, remove_inspection_residues, ) from slideshow import ( TrainHTMLSlideShow, SectExecuteHTMLSlideShow, ExecuteHTMLSlideShow ) from facade import ( standard_css, EmptyTraceException, inspect_training, show_training, inspect_execution, show_execution ) del tracer del slideshow del facade from mdp.utils import fixup_namespace fixup_namespace(__name__, None, ('tracer', 'slideshow', 'facade', )) del fixup_namespace mdp-3.3/bimdp/inspection/facade.py000066400000000000000000000401071203131624700171630ustar00rootroot00000000000000""" Module with simple functions for the complete inspection procedure. """ from __future__ import with_statement import os import webbrowser import cPickle as pickle import tempfile import traceback import warnings import mdp from mdp import numx from bimdp import BiFlow from tracer import ( InspectionHTMLTracer, TraceDebugException, inspection_css, prepare_training_inspection, remove_inspection_residues, _trace_biflow_training, PICKLE_EXT, STANDARD_CSS_FILENAME ) from slideshow import ( TrainHTMLSlideShow, ExecuteHTMLSlideShow, SectExecuteHTMLSlideShow) from utils import robust_write_file, robust_pickle, first_iterable_elem def _open_custom_brower(open_browser, url): """Helper function to support opening a custom browser.""" if isinstance(open_browser, str): try: custom_browser = webbrowser.get(open_browser) custom_browser.open(url) except webbrowser.Error: err = ("Could not open browser '%s', using default." 
% open_browser) warnings.warn(err) webbrowser.open(url) else: webbrowser.open(url) def standard_css(): """Return the standard CSS for inspection.""" return (mdp.utils.basic_css() + inspection_css() + TrainHTMLSlideShow.slideshow_css()) class EmptyTraceException(Exception): """Exception for empty traces, i.e., when no slides where generated.""" pass def inspect_training(snapshot_path, x_samples, msg_samples=None, stop_messages=None, inspection_path=None, tracer=None, debug=False, slide_style=None, show_size=False, verbose=True, **kwargs): """Return the HTML code for an inspection slideshow of the training. This function must be used after the training was completed. Before the training prepare_training_inspection must have been called to create snapshots. After training one should call remove_inspection_residues. Note that the file into which the returned slideshow HTML is inserted must be in the snapshot_path. snapshot_path -- Path were the flow training snapshots are stored. x_samples, msg_samples -- Lists with the input data for the training trace. stop_messages -- The stop msg for the training trace. inspection_path -- Path were the slides will be stored. If None (default value) then the snapshot_path is used. tracer -- Instance of InspectionHTMLTracer, can be None for default class. debug -- If True (default is False) then any exception will be caught and the gathered data up to that point is returned in the normal way. This is useful for bimdp debugging. slide_style -- CSS code for the individual slides (when they are viewed as single HTML files), has no effect on the slideshow appearance. show_size -- Show the approximate memory footprint of all nodes. verbose -- If True (default value) a status message is printed for each loaded snapshot. **kwargs -- Additional arguments for flow.train can be specified as keyword arguments. """ if not inspection_path: inspection_path = snapshot_path ## create CSS file for the slides if not slide_style: slide_style = standard_css() robust_write_file(path=inspection_path, filename=STANDARD_CSS_FILENAME, content=slide_style) del slide_style ## create slides try: slide_filenames, slide_node_ids, index_table = \ _trace_biflow_training(snapshot_path=snapshot_path, inspection_path=inspection_path, x_samples=x_samples, msg_samples=msg_samples, stop_messages=stop_messages, tracer=tracer, debug=debug, show_size=show_size, verbose=verbose, **kwargs ) if not slide_filenames: err = ("No inspection slides were generated, probably because " "there are no untrained nodes in the given flow.") raise EmptyTraceException(err) except TraceDebugException, debug_exception: slide_filenames, slide_node_ids, index_table = debug_exception.result if index_table is None: return None # no snapshots were found # create slideshow slideshow = TrainHTMLSlideShow(filenames=slide_filenames, node_ids=slide_node_ids, index_table=index_table, delay=500, delay_delta=100, loop=False) return str(slideshow) def show_training(flow, data_iterables, msg_iterables=None, stop_messages=None, path=None, tracer=None, debug=False, show_size=False, open_browser=True, **kwargs): """Perform both the flow training and the training inspection. The return value is the filename of the slideshow HTML file. This function must be used with the untrained flow (no previous call of Flow.train is required, the training happens here). This function is more convenient than inspect_training since it includes all required steps, but it is also less customizable. 
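    A minimal usage sketch (illustrative names: "flow" stands for an
    untrained (Bi)Flow with two nodes and x1, x2 for suitable training
    arrays; see the parameter descriptions below for the full options):

        # hypothetical call, returns the filename of the slideshow HTML file
        filename = show_training(flow, data_iterables=[x1, x2])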
After everything is complete the inspection slideshow is opened in the browser. flow -- The untrained Flow or BiFlow. After this function has been called the flow will be fully trained. data_iterables, msg_iterables, stop_messages -- Same as for calling train on a flow. path -- Path were both the training snapshots and the inspection slides will be stored. If None (default value) a temporary directory will be used. tracer -- Instance of InspectionHTMLTracer, can be None for default class. debug -- Ignore exception during training and try to complete the slideshow (default value is False). show_size -- Show the approximate memory footprint of all nodes. open_browser -- If True (default value) then the slideshow file is automatically opened in a webbrowser. One can also use string value with the browser name (for webbrowser.get) to request a specific browser. **kwargs -- Additional arguments for flow.train can be specified as keyword arguments. """ if path is None: path = tempfile.mkdtemp(prefix='MDP_') # get first part of data iterators as sample data for inspection # if data_iterables is an array, wrap it up in a list if isinstance(data_iterables, numx.ndarray): data_iterables = [[data_iterables]] * len(flow) x_samples = [] for i, data_iterable in enumerate(data_iterables): if data_iterable is None: x_sample, new_data_iterable = None, None else: x_sample, new_data_iterable = first_iterable_elem(data_iterable) x_samples.append(x_sample) data_iterables[i] = new_data_iterable del x_sample if msg_iterables: msg_samples = [] for i, msg_iterable in enumerate(msg_iterables): if msg_iterable is None: msg_sample, new_msg_iterable = None, None else: msg_sample, new_msg_iterable = first_iterable_elem(msg_iterable) msg_samples.append(msg_sample) msg_iterables[i] = new_msg_iterable del msg_sample else: msg_samples = None # store the data to disk to disk to save memory and safeguard against # any change made to the data during the training robust_pickle(path, "training_data_samples.pckl", (x_samples, msg_samples, stop_messages)) del x_samples del msg_samples # perform the training and gather snapshots prepare_training_inspection(flow=flow, path=path) try: if isinstance(flow, BiFlow): flow.train(data_iterables, msg_iterables, stop_messages, **kwargs) else: flow.train(data_iterables, **kwargs) except Exception: if debug: traceback.print_exc() print ("exception during training, " + "inspecting up to failure point...") # create the last snapshot manually try: # if a normal mdp.Flow instance was given then this fails flow._bi_reset() except Exception: pass filename = (flow._snapshot_name_ + "_%d" % flow._snapshot_counter_ + PICKLE_EXT) robust_pickle(flow._snapshot_path_, filename, flow) else: raise remove_inspection_residues(flow) # reload data samples with open(os.path.join(path, "training_data_samples.pckl"), "rb") as sample_file: x_samples, msg_samples, stop_messages = pickle.load(sample_file) # create slideshow slideshow = inspect_training(snapshot_path=path, inspection_path=path, x_samples=x_samples, msg_samples=msg_samples, stop_messages=stop_messages, tracer=tracer, debug=debug, show_size=show_size, verbose=False) filename = os.path.join(path, "training_inspection.html") title = "Training Inspection" with open(filename, 'w') as html_file: html_file.write('\n\n%s\n' % title) html_file.write('\n\n\n') html_file.write('

%s

\n' % title) html_file.write(slideshow) html_file.write('\n') if open_browser: _open_custom_brower(open_browser, os.path.abspath(filename)) return filename def inspect_execution(flow, x, msg=None, target=None, path=None, name=None, tracer=None, debug=False, slide_style=None, show_size=False, **kwargs): """Return the HTML code for an inspection slideshow of the execution and the return value of the execution (in a tuple). Note that the file into which the slideshow HTML is inserted must be in the snapshot_path. flow -- The flow for the execution. x, msg, target -- Data for the execution, msg and target can only be used for a BiFlow (default value is None). path -- Path were the slideshow will be stored, if None (default value) a temporary directory will be used. name -- Name string to be used for the slide files. tracer -- Instance of InspectionHTMLTracer, can be None for default class. debug -- If True (default is False) then any exception will be caught and the gathered data up to that point is returned in the normal way. This is useful for bimdp debugging. slide_style -- CSS code for the individual slides (when they are viewed as single HTML files), has no effect on the slideshow appearance. show_size -- Show the approximate memory footprint of all nodes. **kwargs -- Additional arguments for flow.execute can be specified as keyword arguments. """ if path is None: path = tempfile.mkdtemp(prefix='MDP_') if not name: name = "execution_inspection" # create CSS file for the slides if not slide_style: slide_style = standard_css() robust_write_file(path=path, filename=STANDARD_CSS_FILENAME, content=slide_style) del slide_style if not tracer: tracer = InspectionHTMLTracer() tracer._html_converter.flow_html_converter.show_size = show_size # create slides try: slide_filenames, slide_node_ids, section_ids, result = \ tracer.trace_execution(path=path, trace_name=name, flow=flow, x=x, msg=msg, target=target, debug=debug, **kwargs) except TraceDebugException, debug_exception: if not debug_exception.result: return None traceback.print_exc() print ("exception during excecution, " + "create inspection up to failure point...") slide_filenames, slide_node_ids, section_ids = debug_exception.result result = None # create slideshow file if not slide_filenames: err = "For some reason no execution slides were generated." raise EmptyTraceException(err) if not section_ids: slideshow = ExecuteHTMLSlideShow(filenames=slide_filenames, node_ids=slide_node_ids, delay=500, delay_delta=100, loop=False) else: # after an exception the last section_id entry can be missing if len(section_ids) < len(slide_filenames): section_ids.append(section_ids[-1]) slideshow = SectExecuteHTMLSlideShow(filenames=slide_filenames, node_ids=slide_node_ids, section_ids=section_ids, delay=500, delay_delta=100, loop=False) return str(slideshow), result def show_execution(flow, x, msg=None, target=None, path=None, name=None, tracer=None, debug=False, show_size=False, open_browser=True, **kwargs): """Write the inspection slideshow into an HTML file and open it in the browser. The return value is a tuple with the slideshow filename and the return value of the execution. flow -- The flow for the execution. x, msg, target -- Data for the execution, msg and target can only be used for a BiFlow (default value is None). path -- Path were the slideshow will be stored, if None (default value) a temporary directory will be used. name -- A name for the slideshow. tracer -- Instance of InspectionHTMLTracer, can be None for default class. 
debug -- If True (default is False) then any exception will be caught and the gathered data up to that point is returned in the normal way. This is useful for bimdp debugging. show_size -- Show the approximate memory footprint of all nodes. open_browser -- If True (default value) then the slideshow file is automatically opened in a webbrowser. One can also use a string value with the browser name (for webbrowser.get) to request a specific browser. **kwargs -- Additional arguments for flow.execute can be specified as keyword arguments. """ if path is None: path = tempfile.mkdtemp(prefix='MDP_') if not name: name = "execution_inspection" title = "Execution Inspection" else: title = "Execution Inspection: " + name filename = os.path.join(path, name + ".html") slideshow, result = inspect_execution( flow=flow, path=path, x=x, msg=msg, target=target, name=name, tracer=tracer, debug=debug, show_size=show_size, **kwargs) # inspect_execution created the path if required, so no need to check here with open(filename, 'w') as html_file: html_file.write('\n\n%s\n' % title) html_file.write('\n\n\n') html_file.write('

%s

\n' % title) html_file.write(slideshow) html_file.write('\n') if open_browser: _open_custom_brower(open_browser, os.path.abspath(filename)) return filename, result mdp-3.3/bimdp/inspection/slideshow.py000066400000000000000000000202701203131624700177600ustar00rootroot00000000000000""" Module for HTML trace slideshows. The individual slides are the HTML files generated via the trace_inspection module (the body of the HTML files is extracted and makes up a slide). """ from __future__ import with_statement import os from mdp.utils import HTMLSlideShow, SectionHTMLSlideShow class ExecuteHTMLSlideShow(HTMLSlideShow): def __init__(self, filenames, node_ids, delay=500, delay_delta=100, loop=False, **kwargs): """Return the complete HTML code for the slideshow. filenames -- Sequence of strings, containing the path for each slide. node_ids -- Sequence of the active node ids for each slide. """ kwargs.update(vars()) # create a list of the possible node ids unique_node_ids = list(set(node_ids)) kwargs["unique_node_ids"] = unique_node_ids del kwargs["self"] super(ExecuteHTMLSlideShow, self).__init__(**kwargs) _SLIDESHOW_CSS_FILENAME = "trace_slideshow.css" @classmethod def slideshow_css(cls): css_filename = os.path.join(os.path.split(__file__)[0], cls._SLIDESHOW_CSS_FILENAME) with open(css_filename, 'r') as css_file: css = css_file.read() return css js_loadslide_template = r''' // maps slide index to active node id var slide_node_ids = $node_ids; // list of all node ids that are available var unique_node_ids = $unique_node_ids; $ that.loadSlide = function () { loadPage(slideselect[current_slide].value); } // is called by loadPage after the loading has happened function makeNodesClickable() { var i; for (i = 0; i < unique_node_ids.length; i += 1) { try { document.getElementById(unique_node_ids[i]). addEventListener("click", nodeClickCallback, false); } catch (e) { // means that the requested node is added in a later training // phase and is therefore not yet in the DOM } } } function nodeClickCallback() { // TODO: use event.srcElement for IE (event.target for W3C) var node_id = this.id; // search for next occurance of this node id var i; for (i = current_slide + 1; i < slide_node_ids.length; i += 1) { if (slide_node_ids[i] === node_id) { current_slide = i; that.updateSlide(); return; } } // alert("Node is not reached after this slide."); } ''' js_loadhtml_template = r''' /** * Code to load the body content from HTMl files and inject it. * inspired by http://www.xul.fr/ajax/responseHTML-attribute.html */ // Extract body content from html content. function getBody(content) { var lowContent = content.toLowerCase(); // eliminate case sensitivity // deal with attributes var i_start = lowContent.indexOf("", i_start); if (i_start === -1) { return ""; } var i_end = lowContent.lastIndexOf(""); if (i_end === -1) { i_end = lowContent.lastIndexOf(""); } // if no HTML then just grab everything till end. if (i_end === -1) { i_end = content.length; } return content.slice(i_start + 1, i_end); } // Return a XMLHttpRequest object (browser independent). function getXHR() { var request = false; try { request = new ActiveXObject('Msxml2.XMLHTTP'); } catch (err2) { try { request = new ActiveXObject('Microsoft.XMLHTTP'); } catch (err3) { try { request = new XMLHttpRequest(); } catch (err1) { request = false; } } } return request; } // Load an HTML page and inject the content. 
function loadPage(url) { var target = document.getElementById("html_display"); var xhr = getXHR(); xhr.onreadystatechange = function() { if(xhr.readyState == 4) { target.innerHTML = getBody(xhr.responseText); makeNodesClickable(); } } xhr.open("GET", url, true); xhr.send(null); } ''' # Note: We do not use an id prefix, since there is only one slideshow. html_bottom_template = r'''
''' class SectExecuteHTMLSlideShow(SectionHTMLSlideShow, ExecuteHTMLSlideShow): """Execute slideshow with support for sections.""" pass class TrainHTMLSlideShow(SectionHTMLSlideShow, ExecuteHTMLSlideShow): def __init__(self, filenames, node_ids, index_table, **kwargs): """Return the complete HTML code for the slideshow. filenames -- Sequence of strings, containing the path for each slide. node_ids -- Sequence of the active node ids for each slide. index_table -- Nested lists with the index data generated by inspect_biflow_training (last slide indexed by node, phase, train and stop). """ slideshow_id = self._get_random_id() n_nodes = len(index_table) n_phases = max([len(phase_indices) for phase_indices in index_table]) # create the table and mapping between slide index and phase and node train_id = 0 # id indexing phase, node and train or stop start_index = 0 # first slide index for the current phase end_index = 0 # last slide index for the current phase section_ids = [] train_table = [[None for _ in range(n_nodes + 1)] for _ in range(n_phases + 1)] # create labels for table train_table[0] = [' '] + ['node %d' % (i+1) for i in range(n_nodes)] for i_phase in range(n_phases): train_table[i_phase+1][0] = 'phase %d' % (i_phase + 1) for i_node in range(n_nodes): for i_phase in range(len(index_table[i_node])): end_index = index_table[i_node][i_phase][0] # train link stuff html_string = ('train' % (slideshow_id, start_index) + ' ') section_ids += [train_id,] * (end_index - start_index + 1) train_id += 1 # stop link stuff start_index = end_index + 1 end_index = index_table[i_node][i_phase][1] if start_index > end_index: # this can happen due to an exception during training start_index = end_index else: html_string += ('stop' % (slideshow_id, start_index)) train_table[i_phase+1][i_node+1] = html_string section_ids += [train_id,] * (end_index - start_index + 1) train_id += 1 start_index = end_index + 1 kwargs["control_table"] = train_table kwargs["section_ids"] = section_ids kwargs["filenames"] = filenames kwargs.update(vars()) del kwargs["self"] del kwargs["index_table"] super(TrainHTMLSlideShow, self).__init__(**kwargs) html_top_template = r'''
${{ for row in control_table: self.write('\n') for cell in row: self.write('\n' % cell) self.write('\n') }}
%s

''' html_controls_template = r''' ${{super(SectionHTMLSlideShow, self).html_controls_template(vars())}} ''' mdp-3.3/bimdp/inspection/trace.css000066400000000000000000000014661203131624700172230ustar00rootroot00000000000000/* CSS for inspection trace slides. */ table.current_node { background-color: #D1FFC7; } table.training_node { background-color: #FFFFCC; } table.clickable { cursor: pointer; } #inspect_biflow_td { vertical-align: top; padding: 0 100 0 0; } #inspect_result_td { vertical-align: top; } #displayed { border-top: 1px solid #003399; } table.inspect_io_data { font-family: monospace; } table.inspect_io_data td { vertical-align: top; } table.inspect_io_data pre { font-weight: bold; } span.keyword { background-color: #E8E8E8; } span.inactive_section { color: #0000EE; cursor: pointer; font-weight: bold; } div.error { color: #FF0000; text-align: left; } div.error h3 { font-size: large; color: #FF0000; } html { overflow-y : scroll; } mdp-3.3/bimdp/inspection/trace_slideshow.css000066400000000000000000000012221203131624700212720ustar00rootroot00000000000000/* Additional CSS for the inspection slideshow. */ div.slideshow { text-align: center; } table.slideshow, table.slideshow td, table.slideshow th { border-collapse: collapse; padding: 1px 2px 1px 2px; font-size: small; border: 1px solid; } table.slideshow { border: 2px solid; margin: 0 auto; } table.slideshow td { text-align: center; } /* style for slideshow with sections (like for training) */ span.inactive_section:hover { color: #6666FF; } span.active_section { color: #0000EE; background-color: #55FF55; cursor: pointer; font-weight: bold; } span.active_section:hover { color: #6666FF; } mdp-3.3/bimdp/inspection/tracer.py000066400000000000000000001135521203131624700172450ustar00rootroot00000000000000""" Module to trace and document the training and execution of a BiFlow. This module supports (Bi)HiNet structures. Monkey patching is used to inject the tracing code into the Flow. InspectionHTMLTracer is the main class. It uses TraceDecorationVisitor to add the tracing decoration to the flow and TraceHTMLConverter to create HTML view of the flow state (which in turn uses TraceHTMLVisitor for the flow representation). Note that this module does not combine the trace views into a slideshow, this is done in the seperate slideshow module. """ # TODO: wrap inner methods (e.g. _train) to document effective arguments? from __future__ import with_statement import os import cPickle as pickle import fnmatch import copy import traceback import mdp n = mdp.numx import mdp.hinet as hinet from bimdp import BiNode from bimdp import BiFlow from bimdp.hinet import BiFlowNode, CloneBiLayer from bimdp.hinet import BiHiNetHTMLVisitor from utils import robust_pickle CLICKABLE_NODE_ID = "clickable_node_%d" # standard css filename for the complete CSS: STANDARD_CSS_FILENAME = "mdp.css" NODE_TRACE_METHOD_NAMES = ["execute", "train", "stop_training"] BINODE_TRACE_METHOD_NAMES = [] # methods that are only traced in binodes TRACING_WRAP_FLAG = "_insp_is_wrapped_for_tracing_" ORIGINAL_METHOD_PREFIX = "_insp_original_" class TraceDebugException(Exception): """Exception for return the information when debug is True.""" def __init__(self, result): """Store the information necessary to finish the tracing. result -- The result that would otherwise be returned by the method. """ super(TraceDebugException, self).__init__() self.result = result class InspectionHTMLTracer(object): """Class for inspecting a single pass through a provided flow. 
This class is based on a visitor that decorates the flow elements with tracing wrappers. It also provides a callback function for the tracers and stores everything else needed for the inspection. This class is already specialized for creating HTML slides in the callback function. Note that a flow decorated for tracing is not compatible with pickling or parallel training and execution. Normally the decorated flow is only used in trace_training or trace_execution anyway. """ def __init__(self, html_converter=None, css_filename=STANDARD_CSS_FILENAME): """Prepare for tracing and create the HTML translator. html_converter -- TraceHTMLConverter instance, with a convert_flow method to create the flow visualization for each slide. css_filename -- CSS file used for all the slides (default 'inspect.css'). """ if html_converter is None: self._html_converter = TraceHTMLConverter() else: self._html_converter = html_converter self._css_filename = css_filename self._tracing_decorator = TraceDecorationVisitor( decorator=self._standard_tracer_decorate, undecorator=self._standard_tracer_undecorate) self._trace_path = None # path for the current trace self._trace_name = None # name for the current trace self._flow = None # needed for the callback HTML translation # step counter used in the callback, is reset automatically self._slide_index = None self._slide_filenames = None self._section_ids = None # can be used during execution self._slide_node_ids = None # active node for each slide index def _reset(self): """Reset the internal variables for a new tracing. Should be called before 'train', 'stop_training' or 'execute' is called on the flow. """ self._slide_index = 0 self._slide_filenames = [] self._section_ids = [] self._slide_node_ids = [] self._html_converter.reset() def trace_training(self, path, flow, x, msg=None, stop_msg=None, trace_name="training", debug=False, **kwargs): """Trace a single training phase and the stop_training. Return a tuple containing a list of the training slide filenames, the training node ids and the same for stop_training. path -- Path were the inspection files will be stored. trace_name -- Name prefix for this inspection (default is training). **kwargs -- Additional arguments for flow.train can be specified as keyword arguments. """ self._reset() self._trace_path = path # train and stop filenames must be different self._trace_name = trace_name + "_t" self._flow = flow self._tracing_decorator.decorate_flow(flow) biflownode = BiFlowNode(BiFlow(flow.flow)) try: biflownode.train(x=x, msg=msg, **kwargs) # reset is important for the following stop_training biflownode.bi_reset() # Note: this also catches legacy string exceptions (which are still # used in numpy, e.g. 
np.core.multiarray.error) except: if debug: # insert the error slide and encapsulate the exception traceback.print_exc() self._write_error_frame() result = (self._slide_filenames, self._slide_node_ids, None, None) raise TraceDebugException(result=result) else: raise train_filenames = self._slide_filenames train_node_ids = self._slide_node_ids self._reset() self._trace_name = trace_name + "_s" try: biflownode.stop_training(stop_msg) except: if debug: # insert the error slide and encapsulate the exception traceback.print_exc() self._write_error_frame() result = (train_filenames, train_node_ids, self._slide_filenames, self._slide_node_ids) raise TraceDebugException(result=result) else: raise stop_filenames = self._slide_filenames stop_node_ids = self._slide_node_ids # restore undecorated flow self._tracing_decorator.decorate_flow(flow, undecorate_mode=True) return train_filenames, train_node_ids, stop_filenames, stop_node_ids def trace_execution(self, path, trace_name, flow, x, msg=None, target=None, debug=False, **kwargs): """Trace a single execution. The return value is a tuple containing a list of the slide filenames, the node ids, the section_ids for a slideshow with sections (or None if no section_ids were used) and the execution output value. path -- Path were the inspection files will be stored. trace_name -- Name prefix for this inspection. **kwargs -- Additional arguments for flow.execute can be specified as keyword arguments. """ self._reset() self._trace_path = path self._trace_name = trace_name self._flow = flow self._tracing_decorator.decorate_flow(flow) if (not (isinstance(flow, BiFlow) or isinstance(flow, BiNode)) and (msg is not None)): # a msg would be interpreted as nodenr by a Flow, so check this err = "A msg was given for a normal Flow (need BiFlow)." raise Exception(err) try: if msg or target: result = self._flow.execute(x, msg, target, **kwargs) # this case also works for mdp.Flow else: result = self._flow.execute(x, **kwargs) # Note: this also catches legacy string exceptions (which are still # used in numpy, e.g. np.core.multiarray.error) except: if debug: # insert the error slide and encapsulate the exception traceback.print_exc() self._write_error_frame() if not self._section_ids: self._section_ids = None result = (self._slide_filenames, self._slide_node_ids, self._section_ids) raise TraceDebugException(result=result) else: raise self._tracing_decorator.decorate_flow(flow, undecorate_mode=True) if not self._section_ids: self._section_ids = None else: if len(self._section_ids) != len(self._slide_filenames): err = ("Mismatch between number of section_ids and number of " "slides.") raise Exception(err) return (self._slide_filenames, self._slide_node_ids, self._section_ids, result) def _tracer_callback(self, node, method_name, method_result, method_args, method_kwargs): """This method is called by the tracers. The calling tracer also provides this method with the needed state information and the method arguments. node -- The node from which the callback was initiated. method_name -- Name of the method from which the callback was initiated. result -- Return value of the method. args, kwargs -- The arguments of the method call. 
""" ## write visualization to html_file try: html_file = self._begin_HTML_frame() section_id, node_id = self._html_converter.write_html( path=self._trace_path, html_file=html_file, flow=self._flow, node=node, method_name=method_name, method_result=method_result, method_args=method_args, method_kwargs=method_kwargs) self._slide_index += 1 if section_id is not None: self._section_ids.append(section_id) self._slide_node_ids.append(node_id) finally: self._end_HTML_frame(html_file) ## HTML decoration ## def _begin_HTML_frame(self): """Return the HTML file for a trace frame including the header. The file should then be finished via _end_HTML_frame. """ path = self._trace_path filename = self._trace_name + "_%d.html" % self._slide_index self._slide_filenames.append(filename) html_file = open(os.path.join(path, filename), "w") html_file = hinet.NewlineWriteFile(html_file) html_file.write('\n\nInspection Slide') if self._css_filename: html_file.write('\n\n') return html_file def _end_HTML_frame(self, html_file): """Complete and close the HTML file for a trace frame. The method should always be used after _begin_HTML_frame. """ html_file.write('\n') def _write_error_frame(self): with self._begin_HTML_frame() as html_file: html_file.write('
') html_file.write('

Encountered Exception

') traceback_html = traceback.format_exc().replace('\n', '
') # get HTML traceback, didn't work due to legacy stuff # TODO: retry this in the future # import StringIO as stringio # import cgitb # import mdp # exception_type, exception, tb = sys.exc_info() # # Problem: only the text of the original exception is stored in # # mdp.FlowExceptionCR, and the text is not even correctly displayed. ## if exception_type is mdp.FlowExceptionCR: ## exception.args = tuple() ## exception.message = None # buffer = stringio.StringIO() # handler = cgitb.Hook(file=buffer) # handler.handle((exception_type, exception, tb)) # traceback_html = buffer.getvalue() html_file.write(traceback_html) html_file.write('
') self._end_HTML_frame(html_file) ## monkey patching tracing decorator wrapper methods ## def _standard_tracer_decorate(self, node): """Adds a tracer wrapper to the node via monkey patching.""" # add a marker to show that this node is wrapped setattr(node, TRACING_WRAP_FLAG, True) trace_method_names = list(NODE_TRACE_METHOD_NAMES) if isinstance(node, BiNode): trace_method_names += BINODE_TRACE_METHOD_NAMES for method_name in trace_method_names: new_method_name = ORIGINAL_METHOD_PREFIX + method_name # create a reference to the original method setattr(node, new_method_name, getattr(node, method_name)) # use nested scopes lexical closure to get proper wrapper def get_wrapper(_method_name, _inspector): _new_method_name = ORIGINAL_METHOD_PREFIX + method_name def wrapper(self, *args, **kwargs): args_copy = copy.deepcopy(args) kwargs_copy = copy.deepcopy(kwargs) result = getattr(self, _new_method_name)(*args, **kwargs) _inspector._tracer_callback(self, _method_name, result, args_copy, kwargs_copy) return result return wrapper # hide the original method in this instance behind the wrapper setattr(node, method_name, get_wrapper(method_name, self).__get__(node)) # modify getstate to enable pickling (get rid of the instance methods) def wrapped_getstate(self): result = self.__dict__.copy() if not hasattr(node, TRACING_WRAP_FLAG): return result del result[TRACING_WRAP_FLAG] # delete all instance methods trace_method_names = list(NODE_TRACE_METHOD_NAMES) if isinstance(self, BiNode): trace_method_names += BINODE_TRACE_METHOD_NAMES for method_name in trace_method_names: del result[method_name] old_method_name = ORIGINAL_METHOD_PREFIX + method_name del result[old_method_name] del result["__getstate__"] return result node.__getstate__ = wrapped_getstate.__get__(node) def _standard_tracer_undecorate(self, node): """Remove a tracer wrapper from the node.""" if not hasattr(node, TRACING_WRAP_FLAG): return delattr(node, TRACING_WRAP_FLAG) trace_method_names = list(NODE_TRACE_METHOD_NAMES) if isinstance(node, BiNode): trace_method_names += BINODE_TRACE_METHOD_NAMES for method_name in trace_method_names: # delete the wrapped method in the instance to unhide the original delattr(node, method_name) # delete the no longer used reference to the original method old_method_name = ORIGINAL_METHOD_PREFIX + method_name delattr(node, old_method_name) # restore normal getstate delattr(node, "__getstate__") class TraceDecorationVisitor(object): """Class to add tracing wrappers to nodes in a flow.""" def __init__(self, decorator, undecorator): """Initialize. decorator -- Callable decorator that wraps node methods. undecorator -- Callable decorator that removes the wrapper from a method. 
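        (In the default setup these two callables are the
        _standard_tracer_decorate and _standard_tracer_undecorate methods
        of InspectionHTMLTracer, which monkey patch the tracing wrappers
        onto the node methods and later remove them again.)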
""" self._decorator = decorator self._undecorator = undecorator # note that _visit_clonelayer uses the undecorate mode self._undecorate_mode = None def decorate_flow(self, flow, undecorate_mode=False): """Adds or removes wrappers from the nodes in the given flow.""" self._undecorate_mode = undecorate_mode for node in flow: self._visit_node(node) def _visit_node(self, node): if hasattr(node, "flow"): self._visit_flownode(node) elif isinstance(node, mdp.hinet.CloneLayer): self._visit_clonelayer(node) elif isinstance(node, mdp.hinet.Layer): self._visit_layer(node) else: self._visit_standard_node(node) def _visit_standard_node(self, node): """Wrap the node.""" if not self._undecorate_mode: self._decorator(node) else: self._undecorator(node) def _visit_flownode(self, flownode): for node in flownode.flow: self._visit_node(node) def _visit_layer(self, layer): for node in layer: self._visit_node(node) def _visit_clonelayer(self, clonelayer): # TODO: enable the use of a shallow copy to save memory, # but this requires to implement __copy__ in Node etc. for recursive # shallow copying if self._undecorate_mode: if isinstance(clonelayer, CloneBiLayer): # check that clonelayer is actually decorated if not hasattr(clonelayer, "_original_set_use_copies"): return del clonelayer._set_use_copies del clonelayer._original_set_use_copies del clonelayer.__getstate__ self._visit_node(clonelayer.nodes[0]) if not clonelayer.use_copies: clonelayer.nodes = ((clonelayer.node,) * len(clonelayer.nodes)) else: self._visit_node(clonelayer.nodes[0]) clonelayer.nodes = (clonelayer.node,) * len(clonelayer.nodes) # undecoration is complete return ## decorate clonelayer if ((not isinstance(clonelayer, CloneBiLayer)) or (not clonelayer.use_copies)): # use a decorated deep copy for the first node clonelayer.node = clonelayer.nodes[0].copy() clonelayer.nodes = (clonelayer.node,) + clonelayer.nodes[1:] # only decorate the first node self._visit_node(clonelayer.nodes[0]) if isinstance(clonelayer, CloneBiLayer): # add a wrapper to _set_use_copies, # otherwise all nodes in layer would get decorated clonelayer._original_set_use_copies = clonelayer._set_use_copies flow_decorator = self def wrapped_use_copies(self, use_copies): # undecorate internal nodes to allow copy operation flow_decorator._undecorate_mode = True flow_decorator._visit_node(clonelayer.nodes[0]) flow_decorator._undecorate_mode = False if use_copies and not self.use_copies: # switch to node copies, no problem clonelayer._original_set_use_copies(use_copies) elif not use_copies and self.use_copies: # switch to a single node instance # but use a (decorated) deep copy for first node clonelayer._original_set_use_copies(use_copies) clonelayer.node = clonelayer.nodes[0].copy() clonelayer.nodes = ((clonelayer.node,) + clonelayer.nodes[1:]) flow_decorator._visit_node(clonelayer.nodes[0]) clonelayer._set_use_copies = wrapped_use_copies.__get__(clonelayer) # modify getstate to enable pickling # (get rid of the instance methods) def wrapped_getstate(self): result = self.__dict__.copy() # delete instance methods del result["_original_set_use_copies"] del result["_set_use_copies"] del result["__getstate__"] return result clonelayer.__getstate__ = wrapped_getstate.__get__(clonelayer) _INSPECTION_CSS_FILENAME = "trace.css" def inspection_css(): """Return the CSS for the inspection slides.""" css_filename = os.path.join(os.path.split(__file__)[0], _INSPECTION_CSS_FILENAME) with open(css_filename, 'r') as css_file: css = css_file.read() return BiHiNetHTMLVisitor.hinet_css() + css class 
TraceHTMLVisitor(BiHiNetHTMLVisitor): """Special BiHiNetHTMLVisitor to take into account runtime info. It highlights the currently active node. """ def __init__(self, html_file, show_size=False): super(TraceHTMLVisitor, self).__init__(html_file, show_size=show_size) self._current_node = None self._node_id_index = None # this is the HTML node id, not the Node attribute self._current_node_id = None def convert_flow(self, flow, current_node=None): self._current_node = current_node self._node_id_index = 0 self._current_node_id = None super(TraceHTMLVisitor, self).convert_flow(flow) def _open_node_env(self, node, type_id="node"): """Open the HTML environment for the node internals. This special version highlights the nodes involved in the trace. node -- The node itself. type_id -- The id string as used in the CSS. """ f = self._file html_line = '\n['). replace(']\n ...', ']
\n...')) return ar_str @classmethod def _dict_pretty_html(cls, dic): """Return a nice HTML representation of the given numpy array.""" # TODO: use an stringio buffer for efficency # put array keys last, because arrays are typically rather large keys = [key for key, value in dic.items() if not isinstance(value, n.ndarray)] keys.sort() ar_keys = [key for key, value in dic.items() if isinstance(value, n.ndarray)] ar_keys.sort() keys += ar_keys dic_strs = [] for key in keys: value = dic[key] dic_str = '' + repr(key) + ': ' if isinstance(value, str): dic_str += repr(value) elif isinstance(value, n.ndarray): dic_str += cls._array_pretty_html(value) else: dic_str += str(value) dic_strs.append(dic_str) return '{' + ',
\n'.join(dic_strs) + '}' def write_html(self, path, html_file, flow, node, method_name, method_result, method_args, method_kwargs): """Write the HTML translation of the flow into the provided file. Return value is the section_id and the HTML/CSS id of the active node. The section id is ignored during training. path -- Path of the slide (e.h. to store additional images). html_file -- File of current slide, where the translation is written. flow -- The overall flow. node -- The node that was called last. method_name -- The method that was called on the last node. method_result -- The result from the last call. method_args -- args that were given to the method method_kwargs -- kwargs that were given to the method """ self._html_file = hinet.NewlineWriteFile(html_file) f = self._html_file ## create table, left side for the flow, right side for data f.write('

') f.write('
') f.write("

flow state

") self.flow_html_converter._file = f self.flow_html_converter.convert_flow(flow, node) # now the argument / result part of the table f.write('
') section_id = self._write_data_html( path=path, html_file=html_file, flow=flow, node=node, method_name=method_name, method_result=method_result, method_args=method_args, method_kwargs=method_kwargs) f.write('
') f.write('\n') self._html_file = None return section_id, self.flow_html_converter._current_node_id def _write_data_html(self, path, html_file, flow, node, method_name, method_result, method_args, method_kwargs): """Write the data part (right side of the slide). Return value can be a section_id or None. The section_id is ignored during training (since the slideshow sections are used for the training phases). This method can be overriden for custom visualisations. Usually this original method should still be called via super. path -- Path of the slide (e.h. to store additional images). html_file -- File of current slide, where the translation is written. flow -- The overall flow. node -- The node that was called last. method_name -- The method that was called on the last node. method_result -- The result from the last call. method_args -- args that were given to the method method_kwargs -- kwargs that were given to the method """ f = self._html_file f.write('

%s arguments

' % method_name) f.write('') if method_name == "stop_training": # first argument is not x, # if no arguments were given method_args == (None,) if method_args == (None,): f.write('') else: # deal and remove x part of arguments x = method_args[0] if isinstance(x, n.ndarray): f.write('' + '') else: f.write('') # remaining arg is message method_args = method_args[1:] if method_args and method_args[0] is not None: f.write('') # normally the kwargs should be empty for arg_key in method_kwargs: f.write('') f.write('
None
x = 
' + self._array_pretty_html(x) + '
x = 
' + str(x) + '
msg = 
' + self._dict_pretty_html(method_args[0]) + '
' + arg_key + ' = 
' + str(method_kwargs[arg_key]) + '
') ## print results f.write("

%s result

" % method_name) f.write('') if method_result is None: f.write('') elif isinstance(method_result, n.ndarray): f.write('') elif isinstance(method_result, tuple): f.write('') else: f.write(str(method_result[0]) + '') # second value is msg f.write('') else: f.write(str(method_result[1]) + '') # last value is target if len(method_result) > 2: f.write('') else: f.write('') ## Functions to capture pickled biflow snapshots during training. ## PICKLE_EXT = ".pckl" PICKLE_PROTO = -1 SNAPSHOT_FILENAME = "snapshot" def prepare_training_inspection(flow, path): """Use hook in the BiFlow to store a snapshot in each training phase. path -- Path were the snapshots are stored. This is done by wrapping the _stop_training_hook method of biflow. Some attributes are added to the biflow which store all information needed for the pickling (like filename). To enable pickling we use the __getstate__ slot, since some attributes cannot be pickled. """ # add attributes to biflow which are used in wrapper_method flow._snapshot_counter_ = 0 flow._snapshot_path_ = path flow._snapshot_name_ = SNAPSHOT_FILENAME flow._snapshot_instance_methods_ = [] ### wrap _stop_training_hook to store biflow snapshots ### def pickle_wrap_method(_flow, _method_name): new_method_name = ORIGINAL_METHOD_PREFIX + _method_name def wrapper(self, *args, **kwargs): result = getattr(self, new_method_name)(*args, **kwargs) # pickle biflow filename = (self._snapshot_name_ + "_%d" % self._snapshot_counter_ + PICKLE_EXT) robust_pickle(self._snapshot_path_, filename, self) self._snapshot_counter_ += 1 return result # create a reference to the original method setattr(_flow, new_method_name, getattr(_flow, _method_name)) # hide the original method in this instance behind the wrapper setattr(_flow, _method_name, wrapper.__get__(_flow)) _flow._snapshot_instance_methods_.append(_method_name) _flow._snapshot_instance_methods_.append(new_method_name) pickle_wrap_method(flow, "_stop_training_hook") ### wrap __getstate__ to enable pickling ### # note that in the pickled flow no trace of the wrapping remains def wrapped_biflow_getstate(self): result = self.__dict__.copy() # delete all instancemethods for method_name in self._snapshot_instance_methods_: del result[method_name] # delete the special attributes which were inserted by the wrapper # (not really necessary) del result["_snapshot_counter_"] del result["_snapshot_path_"] del result["_snapshot_name_"] del result["_snapshot_instance_methods_"] # remove data attributes (generators cannot be pickled) # pop with default value also works when key is not present in dict result.pop("_train_data_iterables", None) result.pop("_train_data_iterator", None) result.pop("_train_msg_iterables", None) result.pop("_train_msg_iterator", None) result.pop("_stop_messages", None) result.pop("_exec_data_iterator", None) result.pop("_exec_msg_iterator", None) result.pop("_exec_target_iterator", None) return result flow.__getstate__ = wrapped_biflow_getstate.__get__(flow) flow._snapshot_instance_methods_.append("__getstate__") def remove_inspection_residues(flow): """Remove all the changes made by prepare_training_inspection.""" try: for method_name in flow._snapshot_instance_methods_: delattr(flow, method_name) del flow._snapshot_counter_ del flow._snapshot_path_ del flow._snapshot_name_ del flow._snapshot_instance_methods_ except: # probably the hooks were already removed, so do nothing pass def _trace_biflow_training(snapshot_path, inspection_path, x_samples, msg_samples=None, stop_messages=None, tracer=None, debug=False, 
show_size=False, verbose=True, **kwargs): """Load flow snapshots and perform the inspection with the given data. The return value consists of the slide filenames, the slide node ids, and an index table (index of last slide of section indexed by node, phase, train and stop). If no snapshots were found the return value is None. snapshot_path -- Path were the flow training snapshots are stored. inspection_path -- Path were the slides are stored. css_filename -- Filename of the CSS file for the slides. x_samples, msg_samples -- Lists with the input data for the training trace. stop_messages -- The stop msg for the training trace. tracer -- Instance of InspectionHTMLTracer, can be None for default class. debug -- If True (default is False) then any exception will be caught and the gathered data up to that point is returned in the normal way. This is useful for bimdp debugging. show_size -- Show the approximate memory footprint of all nodes. verbose -- If True (default value) a status message is printed for each loaded snapshot. **kwargs -- Additional arguments for flow.train can be specified as keyword arguments. """ if not tracer: tracer = InspectionHTMLTracer() tracer._html_converter.flow_html_converter.show_size = show_size i_train_node = 0 # index of the training node i_snapshot = 0 # snapshot counter index_table = [[]] # last slide indexed by [node, phase, train 0 or stop 1] slide_filenames = [] slide_node_ids = [] try: # search for the snapshot files for file_path, dirs, files in os.walk(os.path.abspath(snapshot_path)): dirs.sort() files = fnmatch.filter(files, SNAPSHOT_FILENAME + "*" + PICKLE_EXT) files.sort() for filename in files: filename = os.path.join(file_path, filename) # load the flow snapshot biflow = None # free memory with open(filename, "rb") as pickle_file: biflow = pickle.load(pickle_file) # determine which node is training and set the indices for node in biflow[i_train_node:]: if node.get_remaining_train_phase() > 0: break else: i_train_node += 1 index_table.append([]) # inspect the training x = x_samples[i_train_node] if msg_samples: msg = msg_samples[i_train_node] else: msg = None if stop_messages: stop_msg = stop_messages[i_train_node] else: stop_msg = None trace_name = "%d_%d" % (i_snapshot, i_train_node) train_files, train_ids, stop_files, stop_ids = \ tracer.trace_training(trace_name=trace_name, path=inspection_path, flow=biflow, x=x, msg=msg, stop_msg=stop_msg, debug=debug, **kwargs) slide_filenames += train_files train_index = len(slide_filenames) - 1 slide_filenames += stop_files stop_index = len(slide_filenames) - 1 index_table[i_train_node].append((train_index, stop_index)) slide_node_ids += train_ids slide_node_ids += stop_ids if verbose: print "got traces for snapshot %d" % (i_snapshot + 1) i_snapshot += 1 except TraceDebugException, debug_exception: train_files, train_ids, stop_files, stop_ids = debug_exception.result slide_filenames += train_files train_index = len(slide_filenames) - 1 if stop_files: slide_filenames += stop_files stop_index = len(slide_filenames) - 1 index_table[i_train_node].append((train_index, stop_index)) slide_node_ids += train_ids if stop_ids: slide_node_ids += stop_ids debug_exception.result = (slide_filenames, slide_node_ids, index_table) raise return slide_filenames, slide_node_ids, index_table mdp-3.3/bimdp/inspection/utils.py000066400000000000000000000051601203131624700171200ustar00rootroot00000000000000""" Some helper functions and classes for inspection. 
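    Illustrative sketch (not part of the original module text; assumes the
    helpers defined below): first_iterable_elem lets callers look at the
    first data chunk without consuming a one-shot iterator,

        data_iter = (chunk for chunk in [x1, x2, x3])   # hypothetical chunks
        first_x, data_iter = first_iterable_elem(data_iter)
        # first_x is x1, and iterating data_iter still yields x1, x2, x3,
        # because a PeekIterator wrapper re-serves the peeked element.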
""" import os import cPickle as pickle def robust_pickle(path, filename, obj): """Robust pickle function, creates path if it does not exist.""" filename = os.path.join(path, filename) try: picke_file = open(filename, "wb") except IOError, inst: error_code = inst.args[0] if error_code == 2: # path does not exist os.makedirs(path) picke_file = open(filename, "wb") else: raise try: pickle.dump(obj, picke_file, -1) finally: picke_file.close() def robust_write_file(path, filename, content): """Create a file with the given content and return the filename. If the provided path does not exist it will be created. If the file already exists it will be overwritten. """ try: new_file = open(os.path.join(path, filename), "w") except IOError, inst: error_code = inst.args[0] if error_code == 2: # path does not exist os.makedirs(path) new_file = open(os.path.join(path, filename), "w") else: raise new_file.write(content) return filename def first_iterable_elem(iterable): """Helper function to get the first element of an iterator or iterable. The return value is a tuple of the first element and the iterable. If the iterable is actually an iterator then a decorator is used to wrap it and extract the first element in a non-consuming way. """ if iter(iterable) is iterable: # iterable is actually iterator, have to wrap it peek_iter = PeekIterator(iterable) first_elem = peek_iter.peek() return first_elem, peek_iter else: first_elem = iter(iterable).next() return first_elem, iterable class PeekIterator(object): """Look-ahead iterator decorator.""" def __init__(self, iterator): self.iterator = iterator # we simplicity we do not use collections.deque self.cache = [] def peek(self): """Return the next element in the iterator without consuming it. So the returned elements will still be returned by next in the normal order. If the iterator has no next element then the StopIterator exception is passed. """ next_elem = self.next() # TODO: use a dequeue for better efficiency self.cache = [next_elem] + self.cache return next_elem def next(self): if self.cache: return self.cache.pop() else: return self.iterator.next() def __iter__(self): return self mdp-3.3/bimdp/nodes/000077500000000000000000000000001203131624700143415ustar00rootroot00000000000000mdp-3.3/bimdp/nodes/__init__.py000066400000000000000000000003371203131624700164550ustar00rootroot00000000000000 from autogen import binodes_code, biclassifiers_code exec binodes_code() exec biclassifiers_code() from miscnodes import IdentityBiNode, SenderBiNode from gradient import NotDifferentiableException, GradientExtensionNode mdp-3.3/bimdp/nodes/autogen.py000066400000000000000000000120341203131624700163550ustar00rootroot00000000000000""" Module to automatically create a module with BiMDP versions of MDP nodes. Run this module to overwrite the autogen_binodes module with a new version. """ import inspect import mdp from cStringIO import StringIO # Blacklist of nodes that cause problems with autogeneration NOAUTOGEN_MDP_NODES = [ "NoiseNode" # function default value causes trouble ] NOAUTOGEN_MDP_CLASSIFIERS = [] def _get_node_subclasses(node_class=mdp.Node, module=mdp.nodes): """ Return all node classes in module which are subclasses of node_class. 
""" node_subclasses = [] for node_subclass in (getattr(module, name) for name in dir(module)): if (isinstance(node_subclass, type) and issubclass(node_subclass, node_class)): node_subclasses.append(node_subclass) return node_subclasses def _binode_code(fid, node_class, modulename, base_classname="BiNode", old_classname="Node"): """Write code for BiMDP versions of normal node classes into module file. It preserves the signature, which is useful for introspection (this is used by the ParallelNode _default_fork implementation). fid -- File handle of the module file. node_class -- Node class for which the new node class will be created. modulename -- Name of the module where the node is from. base_classname -- Base class to be used for the new nodes. old_classname -- Name of the original base class, which will be replaced in the new class name. """ node_name = node_class.__name__ binode_name = node_name[:-len(old_classname)] + base_classname fid.write('class %s(%s, %s.%s):' % (binode_name, base_classname, modulename, node_name)) docstring = ("Automatically created %s version of %s." % (base_classname, node_name)) fid.write('\n """%s"""' % docstring) ## define the init method explicitly to preserve the signature docstring = node_class.__init__.__doc__ args, varargs, varkw, defaults = inspect.getargspec(node_class.__init__) args.remove('self') args += ('node_id', 'stop_result') defaults += (None, None) if defaults is None: defaults = [] first_default = len(args) - len(defaults) fid.write('\n def __init__(self') fid.write(''.join(', ' + arg for arg in args[:-len(defaults)])) fid.write(''.join(', ' + arg + '=' + repr(defaults[i_arg]) for i_arg, arg in enumerate(args[first_default:]))) if varargs: fid.write(', *%s' % varargs) # always support kwargs, to prevent multiple-inheritance issues if not varkw: varkw = "kwargs" fid.write(', **%s' % varkw) fid.write('):') if docstring: fid.write('\n """%s"""' % docstring) fid.write('\n super(%s, self).__init__(' % binode_name) fid.write(', '.join('%s=%s' % (arg, arg) for arg in args)) if varargs: if args: fid.write(', ') fid.write('*%s' % varargs) if args or varargs: fid.write(', ') fid.write('**%s' % varkw) fid.write(')\n\n') def _binode_module(fid, node_classes, modulename="mdp.nodes", base_classname="BiNode", old_classname="Node", base_import="from bimdp import BiNode"): """Write code for BiMDP versions of normal node classes into module file. fid -- File handle of the module file. node_classes -- List of node classes for which binodes are created. modulename -- Name of the module where the node is from. base_classname -- Base class to be used for the new nodes. old_classname -- Name of the original base class, which will be replaced in the new class name. base_import -- Inmport line for the base_class. 
""" fid.write('"""\nAUTOMATICALLY GENERATED CODE, DO NOT MODIFY!\n\n') fid.write('Edit and run autogen.py instead to overwrite this module.\n"""') fid.write('\n\nimport %s\n' % modulename) fid.write(base_import + '\n\n') for node_class in node_classes: _binode_code(fid, node_class, modulename, base_classname=base_classname, old_classname=old_classname) def binodes_code(): """Generate and import the BiNode wrappers for MDP Nodes.""" fid = StringIO() nodes = (node for node in _get_node_subclasses(node_class=mdp.Node, module=mdp.nodes) if not issubclass(node, mdp.ClassifierNode) and node.__name__ not in NOAUTOGEN_MDP_NODES) _binode_module(fid, nodes) return fid.getvalue() def biclassifiers_code(): """Generate and import the BiClassifier wrappers for ClassifierNodes.""" fid = StringIO() nodes = (node for node in _get_node_subclasses(node_class=mdp.ClassifierNode, module=mdp.nodes) if node.__name__ not in NOAUTOGEN_MDP_CLASSIFIERS) _binode_module(fid, nodes, base_classname="BiClassifier", old_classname="Classifier", base_import="from bimdp import BiClassifier") return fid.getvalue() mdp-3.3/bimdp/nodes/gradient.py000066400000000000000000000132251203131624700165130ustar00rootroot00000000000000""" Extension to get the total derivative / gradient / Jacobian matrix. """ import mdp import bimdp np = mdp.numx class NotDifferentiableException(mdp.NodeException): """Exception if the total derivative does not exist.""" pass # Default implementation is needed to satisfy the "method" request. class GradientExtensionNode(mdp.ExtensionNode, mdp.Node): """Base node of the extension to calculate the gradient at a certain point. To get the gradient simply put 'method': 'gradient' into the msg dict. The grad array is three dimensional, with shape (len(x), self.output_dim, self.input_dim). The matrix formed by the last two indices is also called the Jacobian matrix. Nodes which have no well defined total derivative should raise the NotDifferentiableException. """ extension_name = "gradient" def _gradient(self, x, grad=None): """Calculate the contribution to the grad for this node at point x. The contribution is then combined with the given gradient, to get the gradient for the original x. This is a template function, derived classes should override _get_grad. """ if self.is_training(): raise mdp.TrainingException("The training is not completed yet.") if grad is None: grad = np.zeros((len(x), self.input_dim, self.input_dim)) diag_indices = np.arange(self.input_dim) grad[:,diag_indices,diag_indices] = 1.0 new_grad = self._get_grad(x) # combine the gradients grad = np.asarray([np.dot(new_grad[i], grad[i]) for i in range(len(new_grad))]) # update the x value for the next node result = self._execute(x) if isinstance(result, tuple): x = result[0] msg = result[1] else: x = result msg = {} msg.update({"grad": grad}) return x, msg def _get_grad(self, x): """Return the grad for the given points. Override this method. """ err = "Gradient not implemented for class %s." % str(self.__class__) raise NotImplementedError(err) def _stop_gradient(self, x, grad=None): """Helper method to make gradient available for stop_message.""" result = self._gradient(x, grad) # FIXME: Is this really correct? x should be updated! # Could remove this once we have the new stop signature. return result[1], 1 ## Implementations for specific nodes. ## # TODO: cache the gradient for linear nodes? # If there was a linear base class one could integrate this? 
# TODO: add at least a PCA gradient implementation @mdp.extension_method("gradient", mdp.nodes.IdentityNode, "_get_grad") def _identity_grad(self, x): grad = np.zeros((len(x), self.output_dim, self.input_dim)) diag_indices = np.arange(self.input_dim) grad[:,diag_indices,diag_indices] = 1.0 return grad @mdp.extension_method("gradient", mdp.nodes.SFANode, "_get_grad") def _sfa_grad(self, x): # the gradient is constant, but have to give it for each x point return np.repeat(self.sf.T[np.newaxis,:,:], len(x), axis=0) @mdp.extension_method("gradient", mdp.nodes.QuadraticExpansionNode, "_get_grad") def _quadex_grad(self, x): # the exapansion is: # [x1, x2, x3, x1x1, x1x2, x1x3, x2x2, x2x3, x3,x3] dim = self.input_dim grad = np.zeros((len(x), self.output_dim, dim)) # constant part diag_indices = np.arange(dim) grad[:,diag_indices,diag_indices] = 1.0 # quadratic part i_start = dim for i in range(dim): grad[:, i_start:i_start+dim-i, i] = x[:,i:] diag_indices = np.arange(dim - i) grad[:, diag_indices+i_start, diag_indices+i] += x[:,i,np.newaxis] i_start += (dim - i) return grad @mdp.extension_method("gradient", mdp.nodes.SFA2Node, "_get_grad") def _sfa2_grad(self, x): quadex_grad = self._expnode._get_grad(x) sfa_grad = _sfa_grad(self, x) return np.asarray([np.dot(sfa_grad[i], quadex_grad[i]) for i in range(len(sfa_grad))]) ## mdp.hinet nodes ## @mdp.extension_method("gradient", mdp.hinet.Layer, "_get_grad") def _layer_grad(self, x): in_start = 0 in_stop = 0 out_start = 0 out_stop = 0 grad = None for node in self.nodes: out_start = out_stop out_stop += node.output_dim in_start = in_stop in_stop += node.input_dim if grad is None: node_grad = node._get_grad(x[:, in_start:in_stop]) grad = np.zeros([node_grad.shape[0], self.output_dim, self.input_dim], dtype=node_grad.dtype) # note that the gradient is block-diagonal grad[:, out_start:out_stop, in_start:in_stop] = node_grad else: grad[:, out_start:out_stop, in_start:in_stop] = \ node._get_grad(x[:, in_start:in_stop]) return grad # this is an optimized implementation, the original implementation is # used for reference in the unittest @mdp.extension_method("gradient", mdp.hinet.Switchboard, "_gradient") def _switchboard_gradient(self, x, grad=None): if grad is None: grad = np.zeros((len(x), self.input_dim, self.input_dim)) diag_indices = np.arange(self.input_dim) grad[:,diag_indices,diag_indices] = 1.0 ## custom implementation for greater speed grad = grad[:, self.connections] # update the x value for the next node result = self._execute(x) if isinstance(result, tuple): x = result[0] msg = result[1] else: x = result msg = {} msg.update({"grad": grad}) return x, msg mdp-3.3/bimdp/nodes/miscnodes.py000066400000000000000000000014211203131624700166750ustar00rootroot00000000000000 from bimdp import BiNode, MSG_ID_SEP from bimdp.nodes import IdentityBiNode class SenderBiNode(IdentityBiNode): """Sends the incoming x data to another node via bi_message.""" def __init__(self, recipient_id=None, **kwargs): """Initialize the internal variables. recipient_id -- None or the id for the data recipient. 
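        Sketch of the resulting message (illustrative; "receiver" is a
        placeholder node id): with recipient_id="receiver" an execute call
        returns the input under an addressed key, roughly

            sender = SenderBiNode(recipient_id="receiver")
            y, msg = sender.execute(x)
            # msg == {"receiver" + MSG_ID_SEP + "msg_x": x}

        while with recipient_id=None the plain key "msg_x" is used.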
""" super(SenderBiNode, self).__init__(**kwargs) self._recipient_id = recipient_id def _execute(self, x, no_x=None): """Add msg_x to the message (adressed to a target if defined).""" msg = dict() if self._recipient_id: msg[self._recipient_id + MSG_ID_SEP + "msg_x"] = x else: msg["msg_x"] = x if no_x: x = None return x, msg mdp-3.3/bimdp/parallel/000077500000000000000000000000001203131624700150255ustar00rootroot00000000000000mdp-3.3/bimdp/parallel/__init__.py000066400000000000000000000003561203131624700171420ustar00rootroot00000000000000 from parallelbiflow import ( BiFlowTrainCallable, BiFlowExecuteCallable, ParallelBiFlowException, ParallelBiFlow, ParallelCheckpointBiFlow) from parallelbihinet import ParallelCloneBiLayer del parallelbiflow del parallelbihinet mdp-3.3/bimdp/parallel/parallelbiflow.py000066400000000000000000000733211203131624700204040ustar00rootroot00000000000000""" Module for parallel flow training and execution. Not that this module depends on bihinet, since it uses a BiFlowNode to encapsulate the BiFlow in the tasks. """ import itertools import mdp n = mdp.numx import mdp.parallel as parallel from bimdp import ( BiFlow, BiFlowException, MessageResultContainer, BiCheckpointFlow, EXIT_TARGET ) from bimdp.hinet import BiFlowNode ### Train Task Classes ### class BiFlowTrainCallable(parallel.FlowTrainCallable): """Task implementing a single training phase in a flow for a data block.""" def __call__(self, data): """Do the training and return the purged BiFlowNode. data -- tuple containing x and msg """ x, msg = data while True: result = self._flownode.train(x, msg) if (result is None) or isinstance(result, dict): break elif (isinstance(result, tuple) and (result[2] in [1, -1, EXIT_TARGET])): break else: err = ("Target node not found in flow during " + "training, last result: " + str(result)) raise BiFlowException(err) self._flownode.bi_reset() if self._purge_nodes: parallel._purge_flownode(self._flownode) return self._flownode def fork(self): return self.__class__(self._flownode.fork(), purge_nodes=self._purge_nodes) ### Execute Task Classes ### class BiFlowExecuteCallable(parallel.FlowExecuteCallable): """Task implementing data execution for a BiFlowNode.""" def __init__(self, flownode, purge_nodes=True): """Store everything for the execution. flownode -- FlowNode for the execution purge_nodes -- If True nodes not needed for the join will be replaced with dummy nodes to reduce the footprint. """ super(BiFlowExecuteCallable, self).__init__(flownode, purge_nodes=purge_nodes) def __call__(self, data): """Return the execution result and the BiFlowNode as a tuple. If use_fork_execute is True for the flownode then it is returned in the result tuple. """ x, msg, target = data # by using _flow we do not have to reenter (like for train) result = self._flownode._flow.execute(x, msg, target) self._flownode.bi_reset() if self._flownode.use_execute_fork(): if self._purge_nodes: parallel._purge_flownode(self._flownode) return (result, self._flownode) else: return (result, None) def fork(self): return self.__class__(self._flownode.fork(), purge_nodes=self._purge_nodes) ### ParallelBiFlow Class ### class ParallelBiFlowException(parallel.ParallelFlowException): """Standard exception for problems with ParallelBiFlow.""" class ParallelBiFlow(BiFlow, parallel.ParallelFlow): """A parallel provides the tasks for parallel training. 
Note that even though a node input x or output y can be None, the data iterables cannot be None themselves, since they define the iterator length for the message iterator as well. They can, however, return None for each iteration step. """ def __init__(self, flow, verbose=False, **kwargs): """Initialize the internal variables.""" self._train_msg_iterables = None self._train_msg_iterator = None self._stop_messages = None self._exec_msg_iterator = None self._exec_target_iterator = None super(ParallelBiFlow, self).__init__(flow, verbose=verbose, **kwargs) @mdp.with_extension("parallel") def train(self, data_iterables, msg_iterables=None, stop_messages=None, scheduler=None, train_callable_class=None, overwrite_result_container=True, **kwargs): """Parallel version of the standard train method. If a scheduler is provided the training will be done in parallel on the scheduler. data_iterables -- A list of iterables, one for each node in the flow. The iterators returned by the iterables must return data arrays that are then used for the node training. See Flow.train for more details. If a custom train_callable_class is used to preprocess the data then other data types can be used as well. msg_iterables - A list of iterables for the messages. stop_messages -- Sequence of messages for stop_training. scheduler -- Value can be either None for normal training (default value) or a Scheduler instance for parallel training with the scheduler. If the scheduler value is an iterable or iterator then it is assumed that it contains a scheduler for each training phase. After a node has been trained the scheduler is shutdown. Note that you can e.g. use a generator to create the schedulers just in time. For nodes which are not trained the scheduler can be None. train_callable_class -- Class used to create training callables for the scheduler. By specifying your own class you can implement data transformations before the data is actually fed into the flow (e.g. from 8 bit image to 64 bit double precision). Note that the train_callable_class is only used if a scheduler was provided. If a scheduler is provided the default class used is NodeResultContainer. overwrite_result_container -- If set to True (default value) then the result container in the scheduler will be overwritten with an instance of NodeResultContainer, if it is not already an instance of NodeResultContainer. 
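        A typical parallel training call looks roughly like this
        (illustrative sketch, not part of the original docstring; the
        scheduler setup is an assumption):

            scheduler = mdp.parallel.ProcessScheduler()
            try:
                flow.train(data_iterables, msg_iterables=msg_iterables,
                           scheduler=scheduler)
            finally:
                scheduler.shutdown()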
""" if self.is_parallel_training: raise ParallelBiFlowException("Parallel training is underway.") if scheduler is None: if train_callable_class is not None: err = ("A train_callable_class was specified but no scheduler " "was given, so the train_callable_class has no effect.") raise ParallelBiFlowException(err) super(ParallelBiFlow, self).train(data_iterables, msg_iterables, stop_messages, **kwargs) else: if train_callable_class is None: train_callable_class = BiFlowTrainCallable schedulers = None # do parallel training try: self.setup_parallel_training( data_iterables=data_iterables, msg_iterables=msg_iterables, stop_messages=stop_messages, train_callable_class=train_callable_class, **kwargs) # prepare scheduler if not isinstance(scheduler, parallel.Scheduler): # scheduler contains an iterable with the schedulers # self._i_train_node was set in setup_parallel_training schedulers = iter(scheduler) scheduler = schedulers.next() if self._i_train_node > 0: # dispose schedulers for pretrained nodes for _ in range(self._i_train_node): if scheduler is not None: scheduler.shutdown() scheduler = schedulers.next() elif self._i_train_node is None: # all nodes are already trained, dispose schedulers for _ in range(len(self.flow) - 1): if scheduler is not None: scheduler.shutdown() # the last scheduler will be shutdown in finally scheduler = schedulers.next() last_trained_node = self._i_train_node else: schedulers = None # check that the scheduler is compatible if ((scheduler is not None) and overwrite_result_container and (not isinstance(scheduler.result_container, parallel.TrainResultContainer))): scheduler.result_container = \ parallel.TrainResultContainer() while self.is_parallel_training: while self.task_available: task = self.get_task() scheduler.add_task(*task) results = scheduler.get_results() if results == []: err = ("Could not get any training tasks or results " "for the current training phase.") raise Exception(err) else: self.use_results(results) # check if we have to switch to next scheduler if ((schedulers is not None) and (self._i_train_node > last_trained_node)): # dispose unused schedulers for _ in range(self._i_train_node - last_trained_node): if scheduler is not None: scheduler.shutdown() scheduler = schedulers.next() last_trained_node = self._i_train_node # check that the scheduler is compatible if ((scheduler is not None) and overwrite_result_container and (not isinstance(scheduler.result_container, parallel.TrainResultContainer))): scheduler.result_container = \ parallel.TrainResultContainer() finally: # reset remaining iterator references, which cannot be pickled self._train_data_iterator = None self._train_msg_iterator = None if (schedulers is not None) and (scheduler is not None): scheduler.shutdown() def setup_parallel_training(self, data_iterables, msg_iterables=None, stop_messages=None, train_callable_class=BiFlowTrainCallable): """Prepare the flow for handing out tasks to do the training. After calling setup_parallel_training one has to pick up the tasks with get_task, run them and finally return the results via use_results. tasks are available as long as task_available is True. Training may require multiple phases, which are each closed by calling use_results. data_iterables -- A list of iterables, one for each node in the flow. The iterators returned by the iterables must return data arrays that are then used for the node training. See Flow.train for more details. If a custom train_callable_class is used to preprocess the data then other data types can be used as well. 
msg_iterables - A list of iterables for the messages. Can also be a single message if data_iterables is a single array. stop_messages -- Sequence of messages for stop_training. train_callable_class -- Class used to create training callables for the scheduler. By specifying your own class you can implement data transformations before the data is actually fed into the flow (e.g. from 8 bit image to 64 bit double precision). Note that the train_callable_class is only used if a scheduler was provided. If a scheduler is provided the default class used is NodeResultContainer. """ self._bi_reset() # normally not required, just for safety if self.is_parallel_training: err = "Parallel training is already underway." raise ParallelBiFlowException(err) self._train_callable_class = train_callable_class data_iterables, msg_iterables = self._sanitize_training_iterables( data_iterables=data_iterables, msg_iterables=msg_iterables) self._train_data_iterables = data_iterables self._train_msg_iterables = msg_iterables if stop_messages is None: stop_messages = [None] * len(data_iterables) self._stop_messages = stop_messages self._flownode = BiFlowNode(BiFlow(self.flow)) self._i_train_node = 0 self._next_train_phase() def _next_train_phase(self): """Find the next phase or node for parallel training. When it is found the corresponding internal variables are set. Nodes which are not derived from ParallelNode are trained locally. If a fork() fails due to a TrainingPhaseNotParallelException in a certain train phase, then the training is done locally as well (but fork() is tested again for the next phase). """ # find next node that can be forked, if required do local training while self._i_train_node < len(self.flow): current_node = self.flow[self._i_train_node] if not current_node.is_training(): self._i_train_node += 1 continue iterable = self._train_data_iterables[self._i_train_node] msg_iterable = self._train_msg_iterables[self._i_train_node] iterable, msg_iterable, _ = self._sanitize_iterables(iterable, msg_iterable) try: self._flownode.fork() # fork successful, prepare parallel training if self.verbose: print ("start parallel training phase of " + "node no. %d in parallel flow" % (self._i_train_node+1)) self._train_data_iterator = iter(iterable) self._train_msg_iterator = iter(msg_iterable) first_task = self._create_train_task() # make sure that iterator is not empty if first_task is None: if current_node.get_current_train_phase() == 1: err_str = ("The training data iteration for node " "no. %d could not be repeated for the " "second training phase, you probably " "provided an iterator instead of an " "iterable." % (self._i_train_node+1)) raise mdp.FlowException(err_str) else: err_str = ("The training data iterator for node " "no. %d is empty." % (self._i_train_node+1)) raise mdp.FlowException(err_str) task_data_chunk = first_task[0] if task_data_chunk is None: err = "Training data iterator is empty." raise ParallelBiFlowException(err) # Only first task contains the new callable (enable caching). # A fork is not required here, since the callable is always # forked in the scheduler. self._next_task = (task_data_chunk, self._train_callable_class(self._flownode, purge_nodes=True)) break except parallel.NotForkableParallelException, exception: if self.verbose: print ("could not fork node no. %d: %s" % (self._i_train_node + 1, str(exception))) print ("start nonparallel training phase of " + "node no. 
%d in parallel flow" % (self._i_train_node+1)) self._local_train_phase(iterable, msg_iterable) if self.verbose: print ("finished nonparallel training phase of " + "node no. %d in parallel flow" % (self._i_train_node+1)) if not self.flow[self._i_train_node].is_training(): self._i_train_node += 1 else: # training is finished self._i_train_node = None def _local_train_phase(self, iterable, msg_iterable): """Perform a single training phase locally. The internal _train_callable_class is used for the training. """ task_callable = self._train_callable_class(self._flownode, purge_nodes=False) i_task = 0 for (x, msg) in itertools.izip(iterable, msg_iterable): i_task += 1 # Note: if x contains additional args assume that the # callable can handle this task_callable((x, msg)) if self.verbose: print (" finished nonparallel task no. %d" % i_task) # perform stop_training with result check self._stop_training_hook() result = self._flownode.stop_training( self._stop_messages[self._i_train_node]) self._post_stop_training_hook() if (result is not None) and (not isinstance(result, dict)): if (isinstance(result, tuple) and (result[2] in [1, -1, EXIT_TARGET])): pass else: err = ("Target node not found in flow during " + "stop_training phase, last result: " + str(result)) raise BiFlowException(err) self._bi_reset() def _create_train_task(self): """Create and return a single training task without callable. Returns None if data iterator end is reached. Raises NoTaskException if any other problem arises. """ try: x = self._train_data_iterator.next() msg = self._train_msg_iterator.next() return ((x, msg), None) except StopIteration: return None @mdp.with_extension("parallel") # needed for fork in local scheduler def execute(self, iterable=None, msg_iterable=None, target_iterable=None, scheduler=None, execute_callable_class=None, overwrite_result_container=True): """Execute the flow and return (y, msg). If a scheduler is provided the execution will be done in parallel on the scheduler. iterable -- Single array or iterable. msg_iterable -- Single message or iterable. target_iterable -- Single target or iterable. scheduler -- Value can be either None for normal execution (default value) or a Scheduler instance for parallel execution with the scheduler. execute_callable_class -- Class used to create execution callables for the scheduler. By specifying your own class you can implement data transformations before the data is actually fed into the flow (e.g. from 8 bit image to 64 bit double precision). Note that the execute_callable_class is only used if a scheduler was provided. If a scheduler is provided the default class used is NodeResultContainer. overwrite_result_container -- If set to True (default value) then the result container in the scheduler will be overwritten with an instance of OrderedResultContainer, if it is not already an instance of OrderedResultContainer. 
""" if self.is_parallel_training: raise ParallelBiFlowException("Parallel training is underway.") if scheduler is None: if execute_callable_class is not None: err = ("A execute_callable_class was specified but no " "scheduler was given, so the execute_callable_class " "has no effect.") raise ParallelBiFlowException(err) return super(ParallelBiFlow, self).execute(iterable, msg_iterable, target_iterable) if execute_callable_class is None: execute_callable_class = BiFlowExecuteCallable # check that the scheduler is compatible if overwrite_result_container: if not isinstance(scheduler.result_container, parallel.ExecuteResultContainer): scheduler.result_container = parallel.ExecuteResultContainer() # do parallel execution self._flownode = BiFlowNode(BiFlow(self.flow)) try: self.setup_parallel_execution( iterable=iterable, msg_iterable=msg_iterable, target_iterable=target_iterable, execute_callable_class=execute_callable_class) while self.task_available: task = self.get_task() scheduler.add_task(*task) result = self.use_results(scheduler.get_results()) finally: # reset remaining iterator references, which cannot be pickled self._exec_data_iterator = None self._exec_msg_iterator = None self._exec_target_iterator = None return result def setup_parallel_execution(self, iterable, msg_iterable=None, target_iterable=None, execute_callable_class=BiFlowExecuteCallable): """Prepare the flow for handing out tasks to do the execution. Instead of automatically executing the _flow with the iterable, it only prepares the tasks for the scheduler. iterable -- Single array or iterable. msg_iterable -- Single message or iterable. target_iterable -- Single target or iterable. execute_callable_class -- Class used to create execution callables for the scheduler. By specifying your own class you can implement data transformations before the data is actually fed into the flow (e.g. from 8 bit image to 64 bit double precision). Note that the execute_callable_class is only used if a scheduler was provided. If a scheduler is provided the default class used is NodeResultContainer. """ self._bi_reset() # normally not required, just for safety if self.is_parallel_training: raise ParallelBiFlowException("Parallel training is underway.") self._execute_callable_class = execute_callable_class iterable, msg_iterable, target_iterable = self._sanitize_iterables( iterable, msg_iterable, target_iterable) self._exec_data_iterator = iter(iterable) self._exec_msg_iterator = iter(msg_iterable) self._exec_target_iterator = iter(target_iterable) first_task = self._create_execute_task() if first_task is None: err = ("The execute data iterable is empty.") raise mdp.FlowException(err) task_data_chunk = first_task[0] if task_data_chunk is None: err = "Execution data iterable is empty." raise ParallelBiFlowException(err) # Only first task contains the new callable (enable caching). # A fork is not required here, since the callable is always # forked in the scheduler. self._next_task = (task_data_chunk, self._execute_callable_class(self._flownode, purge_nodes=True)) def _create_execute_task(self): """Create and return a single execution task. Returns None if data iterator end is reached. Raises NoTaskException if no task is available. """ try: x = self._exec_data_iterator.next() msg = self._exec_msg_iterator.next() target = self._exec_target_iterator.next() return ((x, msg, target), None) except StopIteration: return None def use_results(self, results): """Use the result from the scheduler. 
During parallel training this will start the next training phase. For parallel execution this will return the result, like a normal execute would. In addition it will join any forked nodes. results -- Iterable containing the results, normally the return value of scheduler.ResultContainer.get_results(). The individual results can be the return values of the tasks. """ if self.is_parallel_training: for result in results: self._flownode.join(result) # perform local stop_training with result check self._stop_training_hook() result = self._flownode.stop_training( self._stop_messages[self._i_train_node]) self._post_stop_training_hook() if (result is not None): target = result[2] # values of +1, -1 and EXIT_TARGET are tolerated if target not in [1, -1, EXIT_TARGET]: err = ("Target node not found in flow during " + "stop_training phase, last result: " + str(result)) raise BiFlowException(err) self._flownode.bi_reset() if self.verbose: print ("finished parallel training phase of node no. " + "%d in parallel flow" % (self._i_train_node+1)) if not self.flow[self._i_train_node].is_training(): self._i_train_node += 1 self._next_train_phase() elif self.is_parallel_executing: self._exec_data_iterator = None self._exec_msg_iterator = None self._exec_target_iterator = None y_results = [] msg_results = MessageResultContainer() # use internal flownode to join all biflownodes self._flownode = BiFlowNode(BiFlow(self.flow)) for result_tuple in results: result, forked_biflownode = result_tuple # consolidate results if isinstance(result, tuple) and (len(result) == 2): y, msg = result msg_results.add_message(msg) else: y = result if y is not None: try: y_results.append(y) except: err = "Some but not all y return values were None." raise BiFlowException(err) else: y_results = None # join biflownode if forked_biflownode is not None: self._flownode.join(forked_biflownode) # return results if y_results is not None: y_results = n.concatenate(y_results) return (y_results, msg_results.get_message()) else: err = "It seems that there are no results to retrieve." raise BiFlowException(err) class ParallelCheckpointBiFlow(mdp.parallel.ParallelCheckpointFlow, ParallelBiFlow, BiCheckpointFlow): """Parallel version of CheckpointFlow. Can be used for saving intermediate results. """ def train(self, data_iterables, checkpoints, msg_iterables=None, stop_messages=None, scheduler=None, train_callable_class=None, overwrite_result_container=True, **kwargs): """Train all trainable nodes in the flow. Same as the train method in ParallelFlow, but with additional support of checkpoint functions as in CheckpointFlow. 
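        Sketch of a call with checkpoints (illustrative; assumes the
        CheckpointFlow convention that each checkpoint callable receives the
        trained node, and uses a placeholder function):

            def report_dim(node):
                print node.output_dim   # any inspection or saving logic

            flow.train(data_iterables, checkpoints=[report_dim, None],
                       scheduler=scheduler)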
""" # this call goes via ParallelCheckpointFlow to ParallelBiFlow and then: # the train call in ParallelBiFlow then goes to BiCheckpointFlow # the setup_parallel_training goes to ParallelCheckpointBiFlow kwargs["checkpoints"] = checkpoints super(ParallelCheckpointBiFlow, self).train( data_iterables=data_iterables, scheduler=scheduler, train_callable_class=train_callable_class, overwrite_result_container=overwrite_result_container, msg_iterables=msg_iterables, **kwargs) def setup_parallel_training(self, data_iterables, checkpoints, msg_iterables=None, train_callable_class=BiFlowTrainCallable, **kwargs): """Checkpoint version of parallel training.""" # this call goes to ParallelCheckpointFlow and then ParallelBiFlow super(ParallelCheckpointBiFlow, self).setup_parallel_training( data_iterables=data_iterables, checkpoints=checkpoints, train_callable_class=train_callable_class, msg_iterables=msg_iterables, **kwargs) mdp-3.3/bimdp/parallel/parallelbihinet.py000066400000000000000000000052721203131624700205440ustar00rootroot00000000000000""" Parallel version of bihinet. """ import mdp from bimdp.hinet import CloneBiLayer class ParallelCloneBiLayer(CloneBiLayer, mdp.parallel.ParallelExtensionNode): """Parallel version of CloneBiLayer. This class also adds support for calling switch_to_instance during training, using the join method of the internal nodes. """ def _set_use_copies(self, use_copies): """Switch internally between using a single node instance or copies. In a normal CloneLayer a single node instance is used to represent all the horizontally aligned nodes. But in a BiMDP where the nodes store temporary data this may not work. Via this method one can therefore create copies of the single node instance. This method can also be triggered by the use_copies msg key. 
""" if use_copies and not self.use_copies: # switch to node copies self.nodes = [self.node.copy() for _ in range(len(self.nodes))] self.node = None # disable single node while copies are used self._uses_copies = True elif not use_copies and self.use_copies: # switch to a single node instance if self.is_training(): for forked_node in self.nodes[1:]: self.nodes[0].join(forked_node) elif self.is_bi_training(): for forked_node in self.nodes[1:]: self.nodes[0].bi_join(forked_node) self.node = self.nodes[0] self.nodes = (self.node,) * len(self.nodes) def _fork(self): """Fork the nodes in the layer to fork the layer.""" forked_node = ParallelCloneBiLayer( node=self.nodes[0].fork(), n_nodes=len(self.nodes), use_copies=False, node_id=self._node_id, dtype=self.get_dtype()) if self.use_copies: # simulate switch_to_copies forked_node.nodes = [node.fork() for node in self.nodes] forked_node.node = None return forked_node else: return forked_node def _join(self, forked_node): """Join the trained nodes from the forked layer.""" if self.use_copies: for i_node, layer_node in enumerate(self.nodes): layer_node.join(forked_node.nodes[i_node]) else: self.node.join(forked_node.node) def use_execute_fork(self): if self.use_copies: return any(node.use_execute_fork() for node in self.nodes) else: return self.node.use_execute_fork() mdp-3.3/bimdp/test/000077500000000000000000000000001203131624700142105ustar00rootroot00000000000000mdp-3.3/bimdp/test/__init__.py000066400000000000000000000005631203131624700163250ustar00rootroot00000000000000import os import mdp # wrap the mdp.test function and set the module path to bimdp path infodict = mdp.NodeMetaclass._function_infodict(mdp.test) idx = infodict["argnames"].index('mod_loc') defaults = list(infodict['defaults']) defaults[idx] = os.path.dirname(__file__) infodict['defaults'] = tuple(defaults) test = mdp.NodeMetaclass._wrap_function(mdp.test, infodict) mdp-3.3/bimdp/test/_tools.py000066400000000000000000000117301203131624700160630ustar00rootroot00000000000000""" Classes for tracing BiNode behavior in flows. """ import mdp from bimdp.nodes import IdentityBiNode class JumpBiNode(IdentityBiNode): """BiNode which can perform all kinds of jumps. This is useful for testing or flow control. It can also be used together with BiNodes as simple jump targets. """ def __init__(self, train_results=None, stop_train_results=None, execute_results=None, *args, **kwargs): """Initialize this BiNode. Note that this node has an internal variable self.loop_counter which is used by execute, message and stop_message (and incremented by each). train_results -- List of lists of results for the training phases. First index for training phase, second for loop counter. stop_train_results -- List of results for the training phases. execute_results -- Single result tuple starting at msg or list of results, which are used according to the loop counter. The list entries can also be None (then x is simply forwarded). stop_message_results -- Like execute_results. 
""" self.loop_counter = 0 # counter for execution phase self._train_results = train_results self._stop_train_results = stop_train_results self._execute_results = execute_results super(JumpBiNode, self).__init__(*args, **kwargs) def is_trainable(self): if self._train_results: return True else: return False @staticmethod def is_invertible(): return False def _get_train_seq(self): """Return a train_seq which returns the predefined values.""" # wrapper function for _train, using local scopes def get_train_function(i_phase): def train_function(x): self.loop_counter += 1 if self.loop_counter-1 >= len(self._train_results[i_phase]): return None return self._train_results[i_phase][self.loop_counter-1] return train_function # wrapper function for _stop_training def get_stop_training(i_phase): def stop_training(): return self._stop_train_results[i_phase] return stop_training # now wrap the training sequence train_seq = [] if not self._train_results: return train_seq for i_phase in range(len(self._train_results)): train_seq.append((get_train_function(i_phase), get_stop_training(i_phase))) return train_seq def _execute(self, x): """Return the predefined values for the current loop count value.""" self.loop_counter += 1 if not self._execute_results: return x if self.loop_counter-1 >= len(self._execute_results): return x result = self._execute_results[self.loop_counter-1] if result is None: return x else: return result def _bi_reset(self): """Reset the loop counter.""" self.loop_counter = 0 def is_bi_learning(self): return False class TraceJumpBiNode(JumpBiNode): """Node for testing, that logs when and how it is called.""" def __init__(self, tracelog, log_data=False, verbose=False, *args, **kwargs): """Initialize the node. tracelog -- list to which to append the log entries log_data -- if true the data will be logged as well """ self._tracelog = tracelog self._log_data = log_data self._verbose = verbose super(TraceJumpBiNode, self).__init__(*args, **kwargs) def train(self, x, msg=None): if self._log_data: self._tracelog.append((self._node_id, "train", x, msg)) else: self._tracelog.append((self._node_id, "train")) if self._verbose: print self._tracelog[-1] return super(TraceJumpBiNode, self).train(x, msg) def execute(self, x, msg=None): if self._log_data: self._tracelog.append((self._node_id, "execute", x, msg)) else: self._tracelog.append((self._node_id, "execute")) if self._verbose: print self._tracelog[-1] return super(TraceJumpBiNode, self).execute(x, msg) def stop_training(self, msg=None): self._tracelog.append((self._node_id, "stop_training")) if self._verbose: print self._tracelog[-1] return super(TraceJumpBiNode, self).stop_training(msg) def _bi_reset(self): self._tracelog.append((self._node_id, "bi_reset")) if self._verbose: print self._tracelog[-1] return super(TraceJumpBiNode, self)._bi_reset() class ParallelTraceJumpBiNode(TraceJumpBiNode): def _fork(self): return self.copy() def _join(self): pass class IdNode(mdp.Node): """Non-bi identity node for testing.""" @staticmethod def is_trainable(): return False mdp-3.3/bimdp/test/conftest.py000066400000000000000000000047301203131624700164130ustar00rootroot00000000000000# global hooks for py.test import tempfile import os import shutil import glob import mdp import py.test _err_str = """ IMPORTANT: some tests use random numbers. This could occasionally lead to failures due to numerical degeneracies. To rule this out, please run the tests more than once. If you get reproducible failures please report a bug! 
""" def pytest_configure(config): seed = config.getvalue("seed") # if seed was not set by the user, we set one now if seed is None or seed == ('NO', 'DEFAULT'): config.option.seed = int(mdp.numx_rand.randint(2**31-1)) def pytest_unconfigure(config): # remove garbage created during tests # note that usage of TemporaryDirectory is not enough to assure # that all garbage is removed, expacially because we use subprocesses shutil.rmtree(py.test.mdp_tempdirname, ignore_errors=True) # if pp was monkey-patched, remove any stale pp4mdp directories if hasattr(mdp.config, 'pp_monkeypatch_dirname'): monkey_dirs = os.path.join(mdp.config.pp_monkeypatch_dirname, mdp.parallel.pp_support.TEMPDIR_PREFIX) [shutil.rmtree(d, ignore_errors=True) for d in glob.glob(monkey_dirs+'*')] def pytest_runtest_setup(item): # set random seed before running each test # so that a failure in a test can be reproduced just running # that particular test. if this was not done, you would need # to run the whole test suite again mdp.numx_rand.seed(item.config.option.seed) def pytest_addoption(parser): """Add random seed option to py.test. """ parser.addoption('--seed', dest='seed', type=int, action='store', help='set random seed') def pytest_report_header(config): # report the random seed before and after running the tests return '%s\nRandom Seed: %d\n' % (mdp.config.info(), config.option.seed) def pytest_terminal_summary(terminalreporter): # add a note about error due to randomness only if an error or a failure # occured t = terminalreporter t.write_sep("=", "NOTE") t.write_line("%s\nRandom Seed: %d" % (mdp.config.info(), t.config.option.seed)) if 'failed' in t.stats or 'error' in t.stats: t.write_line(_err_str) def pytest_namespace(): # get temporary directory to put temporary files # will be deleted at the end of the test run dirname = tempfile.mkdtemp(suffix='.tmp', prefix='MDPtestdir_') return dict(mdp_tempdirname=dirname) mdp-3.3/bimdp/test/ide_run.py000066400000000000000000000003551203131624700162120ustar00rootroot00000000000000""" Helper script to run or debug the tests in an IDE as a simple .py file. 
""" import py #args_str = "" args_str = "-k parallel --maxfail 1 --tb native" #args_str = "--maxfail 1 --tb native" py.test.cmdline.main(args_str.split(" ")) mdp-3.3/bimdp/test/test_biflow.py000066400000000000000000000250641203131624700171120ustar00rootroot00000000000000import py.test import mdp from mdp import numx as np from bimdp import ( MessageResultContainer, BiFlow, BiFlowException, EXIT_TARGET, nodes ) from _tools import TraceJumpBiNode, IdNode class TestMessageResultContainer(object): """Test the behavior of the BetaResultContainer.""" def test_mixed_dict(self): """Test msg being a dict containing an array.""" rescont = MessageResultContainer() msg1 = { "f": 2, "a": np.zeros((10,3), 'int'), "b": "aaa", "c": 1, } msg2 = { "a": np.ones((15,3), 'int'), "b": "bbb", "c": 3, "d": 1, } rescont.add_message(msg1) rescont.add_message(msg2) combined_msg = rescont.get_message() a = np.zeros((25,3), 'int') a[10:] = 1 reference_msg = {"a": a, "c": 4, "b": "aaabbb", "d": 1, "f": 2} assert np.all(reference_msg["a"] == reference_msg["a"]) combined_msg.pop("a") reference_msg.pop("a") assert combined_msg == reference_msg def test_none_msg(self): """Test with one message being None.""" rescont = MessageResultContainer() msgs = [None, {"a": 1}, None, {"a": 2, "b": 1}, None] for msg in msgs: rescont.add_message(msg) msg = rescont.get_message() assert msg == {"a": 3, "b": 1} def test_incompatible_arrays(self): """Test with incompatible arrays.""" rescont = MessageResultContainer() msgs = [{"a": np.zeros((10,3))}, {"a": np.zeros((10,4))}] for msg in msgs: rescont.add_message(msg) py.test.raises(ValueError, rescont.get_message) class TestBiFlow(object): def test_normal_flow(self): """Test a BiFlow with normal nodes.""" flow = BiFlow([mdp.nodes.SFANode(output_dim=5), mdp.nodes.PolynomialExpansionNode(degree=3), mdp.nodes.SFANode(output_dim=20)]) data_iterables = [[np.random.random((20,10)) for _ in range(6)], None, [np.random.random((20,10)) for _ in range(6)]] flow.train(data_iterables) x = np.random.random([100,10]) flow.execute(x) def test_normal_multiphase(self): """Test training and execution with multiple training phases. The node with multiple training phases is a hinet.FlowNode. 
""" sfa_node = mdp.nodes.SFANode(input_dim=10, output_dim=8) sfa2_node = mdp.nodes.SFA2Node(input_dim=8, output_dim=6) flownode = mdp.hinet.FlowNode(mdp.Flow([sfa_node, sfa2_node])) flow = BiFlow([flownode, mdp.nodes.PolynomialExpansionNode(degree=2), mdp.nodes.SFANode(output_dim=5)]) data_iterables = [[np.random.random((30,10)) for _ in range(6)], None, [np.random.random((30,10)) for _ in range(6)]] flow.train(data_iterables) x = np.random.random([100,10]) flow.execute(x) def test_fda_binode(self): """Test using the FDABiNode in a BiFlow.""" samples = mdp.numx_rand.random((100,10)) labels = mdp.numx.arange(100) flow = BiFlow([mdp.nodes.PCANode(), nodes.FDABiNode()]) flow.train([[samples],[samples]], [None,[{"labels": labels}]]) def test_wrong_argument_handling(self): """Test correct error for additional arguments in Node instance.""" samples = mdp.numx_rand.random((100,10)) labels = mdp.numx.arange(100) # labels argument of FDANode is not supported in biflow flow = BiFlow([mdp.nodes.PCANode(), mdp.nodes.FDANode()]) # the iterables are passed as if this were a normal Flow py.test.raises(BiFlowException, flow.train, [[samples], [samples, labels]]) # messing up the data iterables further doesn't matter, this is # actually interpreted as three data chunks for the FDANode training, # since argument iterables are not supported by BiFlow py.test.raises(BiFlowException, flow.train, [[samples], [samples, labels, labels]]) def test_training_targets(self): """Test targeting during training and stop_training.""" tracelog = [] verbose = False node1 = TraceJumpBiNode( output_dim=1, tracelog=tracelog, node_id="node_1", train_results=[[None]], stop_train_results=[None], execute_results=[None, (None, {"b": 2}, "node_3"), (None, {"b": 2}, EXIT_TARGET),], verbose=verbose) node2 = TraceJumpBiNode( output_dim=1, tracelog=tracelog, node_id="node_2", train_results=[[None]], stop_train_results=[(None, {"b": 2}, "node_1")], execute_results=[None, (None, None, "node_1"), (None, None, "node_1")], verbose=verbose) node3 = TraceJumpBiNode( output_dim=1, tracelog=tracelog, node_id="node_3", train_results=[[(None, {"a": 1}, "node_2"), None]], stop_train_results=[(None, {"a": 1}, "node_2")], execute_results=[(None, {"b": 2}, EXIT_TARGET)], verbose=verbose) biflow = BiFlow([node1, node2, node3]) data_iterables = [[np.random.random((1,1)) for _ in range(2)], [np.random.random((1,1)) for _ in range(2)], [np.random.random((1,1)) for _ in range(2)]] biflow.train(data_iterables) # print ",\n".join(str(log) for log in tracelog) # tracelog reference reference = [ ('node_1', 'bi_reset'), ('node_2', 'bi_reset'), ('node_3', 'bi_reset'), ('node_1', 'train'), ('node_1', 'bi_reset'), ('node_2', 'bi_reset'), ('node_3', 'bi_reset'), ('node_1', 'train'), ('node_1', 'bi_reset'), ('node_2', 'bi_reset'), ('node_3', 'bi_reset'), ('node_1', 'stop_training'), ('node_1', 'bi_reset'), ('node_2', 'bi_reset'), ('node_3', 'bi_reset'), ('node_1', 'execute'), ('node_2', 'train'), ('node_1', 'bi_reset'), ('node_2', 'bi_reset'), ('node_3', 'bi_reset'), ('node_1', 'execute'), ('node_2', 'train'), ('node_1', 'bi_reset'), ('node_2', 'bi_reset'), ('node_3', 'bi_reset'), ('node_2', 'stop_training'), ('node_1', 'execute'), ('node_2', 'execute'), ('node_3', 'execute'), ('node_1', 'bi_reset'), ('node_2', 'bi_reset'), ('node_3', 'bi_reset'), ('node_1', 'execute'), ('node_2', 'execute'), ('node_3', 'train'), ('node_2', 'execute'), ('node_1', 'execute'), ('node_3', 'train'), ('node_1', 'bi_reset'), ('node_2', 'bi_reset'), ('node_3', 'bi_reset'), ('node_1', 
'execute'), ('node_2', 'execute'), ('node_3', 'train'), ('node_2', 'execute'), ('node_1', 'execute'), ('node_3', 'train'), ('node_1', 'bi_reset'), ('node_2', 'bi_reset'), ('node_3', 'bi_reset'), ('node_3', 'stop_training'), ('node_2', 'execute'), ('node_3', 'execute'), ('node_1', 'bi_reset'), ('node_2', 'bi_reset'), ('node_3', 'bi_reset') ] assert tracelog == reference def test_execute_jump(self): """Test jumping around during execution.""" tracelog = [] verbose = False node1 = TraceJumpBiNode( tracelog=tracelog, node_id="node_1", execute_results=[(None, None, "node_3"), (None, None, "node_2")], verbose=verbose) node2 = TraceJumpBiNode( tracelog=tracelog, node_id="node_2", execute_results=[(None, None, "node_1")], verbose=verbose) node3 = TraceJumpBiNode( tracelog=tracelog, node_id="node_3", execute_results=[(None, None, "node_1")], verbose=verbose) biflow = BiFlow([node1, node2, node3]) biflow.execute(None, {"a": 1}) # bimdp.show_execution(biflow, x=None, msg={"a": 1}, debug=True) # tracelog reference reference = [ ('node_1', 'bi_reset'), ('node_2', 'bi_reset'), ('node_3', 'bi_reset'), ('node_1', 'execute'), ('node_3', 'execute'), ('node_1', 'execute'), ('node_2', 'execute'), ('node_1', 'execute'), ('node_2', 'execute'), ('node_3', 'execute'), ('node_1', 'bi_reset'), ('node_2', 'bi_reset'), ('node_3', 'bi_reset'), ] assert tracelog == reference def test_msg_normal_node(self): """Test that the msg is passed over a normal node.""" node = IdNode() biflow = BiFlow([node]) msg = {"a": 1} result = biflow.execute(np.random.random((1,1)), msg) assert msg == result[1] def test_exit_target(self): """Test that the magic exit target works.""" tracelog = [] node1 = TraceJumpBiNode( tracelog=tracelog, execute_results=[(None, None, EXIT_TARGET)], verbose=False) node2 = IdNode() biflow = BiFlow([node1, node2]) biflow.execute(None, {"a": 1}) # bimdp.show_execution(biflow, x=None, msg={"a": 1}, debug=True) reference = [ (None, 'bi_reset'), (None, 'execute'), (None, 'bi_reset') ] assert tracelog == reference def test_append_node_copy(self): """Test that appending a node does not perform a deept copy.""" node1 = nodes.IdentityBiNode() node2 = nodes.IdentityBiNode() flow = BiFlow([node1]) flow += node2 assert flow[0] is node1 assert type(flow) is BiFlow mdp-3.3/bimdp/test/test_bihinet.py000066400000000000000000000152721203131624700172520ustar00rootroot00000000000000import mdp from mdp import numx as n from bimdp import BiFlow, MSG_ID_SEP, EXIT_TARGET from bimdp.hinet import BiFlowNode, CloneBiLayer, BiSwitchboard from bimdp.nodes import SFABiNode, IdentityBiNode class TestBiFlowNode(object): """Test the behavior of the BiFlowNode.""" def test_two_nodes1(self): """Test a TestBiFlowNode with two normal nodes.""" sfa_node = mdp.nodes.SFANode(input_dim=10, output_dim=8) sfa2_node = mdp.nodes.SFA2Node(input_dim=8, output_dim=6) flownode = BiFlowNode(BiFlow([sfa_node, sfa2_node])) for _ in range(2): for _ in range(6): flownode.train(n.random.random((30,10))) flownode.stop_training() x = n.random.random([100,10]) flownode.execute(x) def test_two_nodes2(self): """Test a TestBiFlowNode with two normal nodes using a normal Flow.""" sfa_node = mdp.nodes.SFANode(input_dim=10, output_dim=8) sfa2_node = mdp.nodes.SFA2Node(input_dim=8, output_dim=6) flownode = BiFlowNode(BiFlow([sfa_node, sfa2_node])) flow = mdp.Flow([flownode]) data_iterables = [[n.random.random((30,10)) for _ in range(6)]] flow.train(data_iterables) x = n.random.random([100,10]) flow.execute(x) def test_pretrained_nodes(self): """Test a TestBiFlowNode 
with two normal pretrained nodes.""" sfa_node = mdp.nodes.SFANode(input_dim=10, output_dim=8) sfa2_node = mdp.nodes.SFA2Node(input_dim=8, output_dim=6) flownode = BiFlowNode(BiFlow([sfa_node, sfa2_node])) flow = mdp.Flow([flownode]) data_iterables = [[n.random.random((30,10)) for _ in range(6)]] flow.train(data_iterables) pretrained_flow = flow[0]._flow biflownode = BiFlowNode(pretrained_flow) x = n.random.random([100,10]) biflownode.execute(x) class DummyBiNode(IdentityBiNode): """Dummy class for CloneBiLayer tests.""" def _execute(self, x, data1, data2): self.data1 = data1 self.data2 = data2 return x @staticmethod def is_trainable(): return False class TestCloneBiLayer(object): """Test the behavior of the BiCloneLayer.""" def test_clonelayer(self): """Test a simple clonelayer with three SFA Nodes.""" sfa_node = SFABiNode(input_dim=3, output_dim=2) clonelayer = CloneBiLayer(sfa_node, 3) x = n.random.random((100,9)) clonelayer.train(x) clonelayer.stop_training() clonelayer.execute(x) def test_use_copies_msg(self): """Test the correct reaction to an outgoing use_copies message.""" stop_result = ({"clonelayer" + MSG_ID_SEP + "use_copies": True}, 1) stop_sfa_node = SFABiNode(stop_result=stop_result, input_dim=10, output_dim=3) clonelayer = CloneBiLayer(node=stop_sfa_node, n_nodes=3, use_copies=False, node_id="clonelayer") x = n.random.random((100,30)) clonelayer.train(x) clonelayer.stop_training() assert clonelayer.use_copies is True def test_use_copies_msg_flownode(self): """Test the correct reaction to an outgoing use_copies message.""" stop_result = ({"clonelayer" + MSG_ID_SEP + "use_copies": True}, EXIT_TARGET) stop_sfa_node = SFABiNode(stop_result=stop_result, input_dim=10, output_dim=3) biflownode = BiFlowNode(BiFlow([stop_sfa_node])) clonelayer = CloneBiLayer(node=biflownode, n_nodes=3, use_copies=False, node_id="clonelayer") biflow = clonelayer + IdentityBiNode() x = n.random.random((100,30)) biflow.train(x) assert clonelayer.use_copies is True def test_message_splitting(self): """Test message array splitting and combination.""" node = DummyBiNode(input_dim=3) clonelayer = CloneBiLayer(node, 2, use_copies=True) x = n.random.random((10, 6)) data1 = n.random.random((10, 4)) # should be split data2 = n.random.random((10, 5)) # should not be touched msg = { "string": "blabla", "list": [1,2], "data1": data1, "data2": data2, } y, out_msg = clonelayer.execute(x, msg) node1, node2 = clonelayer.nodes assert n.all(x == y) assert out_msg["string"] == msg["string"] assert out_msg["list"] == msg["list"] assert n.all(out_msg["data1"] == data1) assert n.all(node1.data1 == data1[:,:2]) assert n.all(node2.data1 == data1[:,2:]) assert out_msg["data2"] is data2 assert n.all(node1.data2 is data2) assert n.all(node2.data2 is data2) class TestBiSwitchboardNode(object): """Test the behavior of the BiSwitchboardNode.""" def test_execute_routing(self): """Test the standard routing for messages.""" sboard = BiSwitchboard(input_dim=3, connections=[2,0,1]) x = n.array([[1,2,3],[4,5,6]]) msg = { "string": "blabla", "list": [1,2], "data": x.copy(), # should be mapped by switchboard "data2": n.zeros(3), # should not be modified "data3": n.zeros((3,4)), # should not be modified } y, out_msg = sboard.execute(x, msg) reference_y = n.array([[3,1,2],[6,4,5]]) assert (y == reference_y).all() assert out_msg["string"] == msg["string"] assert out_msg["list"] == msg["list"] assert n.all(out_msg["data"] == reference_y) assert out_msg["data2"].shape == (3,) assert out_msg["data3"].shape == (3,4) def 
test_inverse_message_routing(self): """Test the inverse routing for messages.""" sboard = BiSwitchboard(input_dim=3, connections=[2,0,1]) x = n.array([[1,2,3],[4,5,6]]) msg = { "string": "blabla", "method": "inverse", "list": [1,2], "data": x, # should be mapped by switchboard "data2": n.zeros(3), # should not be modified "data3": n.zeros((3,4)), # should not be modified "target": "test" } y, out_msg, target = sboard.execute(None, msg) assert y is None assert target == "test" reference_y = n.array([[2,3,1],[5,6,4]]) assert out_msg["string"] == msg["string"] assert out_msg["list"] == msg["list"] assert (out_msg["data"] == reference_y).all() assert out_msg["data2"].shape == (3,) assert out_msg["data3"].shape == (3,4) mdp-3.3/bimdp/test/test_binode.py000066400000000000000000000405121203131624700170630ustar00rootroot00000000000000import mdp n = mdp.numx import py.test from bimdp import BiNode, MSG_ID_SEP, BiFlow, BiClassifier, binode_coroutine from bimdp.nodes import ( IdentityBiNode, SFABiNode, FDABiNode, SignumBiClassifier ) from _tools import JumpBiNode class TestBiNode(object): def test_msg_parsing1(self): """Test the message parsing and recombination.""" class TestBiNode(BiNode): def _execute(self, x, a, b, d): self.a = a self.b = b self.d = d return x, {"g": 15, "z": 3} @staticmethod def is_trainable(): return False binode = TestBiNode(node_id="test") b_key = "test" + MSG_ID_SEP + "b" d_key = "test" + MSG_ID_SEP + "d" msg = {"c": 12, b_key: 42, "a": 13, d_key: "bla"} _, msg = binode.execute(None, msg) assert "a" in msg assert b_key not in msg assert d_key not in msg assert binode.a == 13 assert binode.b == 42 assert binode.d == "bla" # test the message combination assert msg["g"] == 15 assert msg["z"] == 3 def test_msg_parsing2(self): """Test that an adressed argument is not found.""" class TestBiNode(BiNode): def _execute(self, x, a, b): self.a = a self.b = b @staticmethod def is_trainable(): return False binode = TestBiNode(node_id="test") b_key = "test" + MSG_ID_SEP + "b" # check that the 'd' key which is not an arg gets removed d_key = "test" + MSG_ID_SEP + "d" msg = {"c": 12, b_key: 42, "a": 13, d_key: "bla"} _, out_msg = binode.execute(None, msg) assert d_key not in out_msg def test_msg_magic(self): """Test that the magic msg argument works.""" class TestBiNode(BiNode): def _execute(self, x, a, msg, b): self.a = a self.b = b del msg["c"] msg["f"] = 1 return x, msg @staticmethod def is_trainable(): return False binode = TestBiNode(node_id="test") b_key = "test" + MSG_ID_SEP + "b" msg = {"c": 12, b_key: 42, "a": 13} _, msg = binode.execute(None, msg) assert "a" in msg assert "c" not in msg # was deleted in _execute assert msg["f"] == 1 assert b_key not in msg assert binode.a == 13 assert binode.b == 42 def test_method_magic(self): """Test the magic method message key.""" class TestBiNode(BiNode): def _test(self, x, a, b): self.a = a self.b = b @staticmethod def is_trainable(): return False binode = TestBiNode(node_id="test") b_key = "test" + MSG_ID_SEP + "b" msg = {"c": 12, "a": 13, b_key: 42, "method": "test"} binode.execute(None, msg) assert "a" in msg assert b_key not in msg assert binode.b == 42 def test_target_magic(self): """Test the magic target message key.""" class TestBiNode(BiNode): def _execute(self, x, a, b): self.a = a self.b = b @staticmethod def is_trainable(): return False binode = TestBiNode(node_id="test") b_key = "test" + MSG_ID_SEP + "b" target_key = "test" + MSG_ID_SEP + "target" msg = {"c": 12, b_key: 42, "a": 13, target_key: "test2"} result = 
binode.execute(None, msg) assert len(result) == 3 assert result[2] == "test2" def test_inverse_magic1(self): """Test the magic inverse method argument.""" class TestBiNode(BiNode): def _inverse(self, x, a, b): self.a = a self.b = b y = n.zeros((len(x), self.input_dim)) return y @staticmethod def is_trainable(): return False binode = TestBiNode(node_id="test", input_dim=20, output_dim=10) b_key = "test" + MSG_ID_SEP + "b" msg = {"c": 12, "a": 13, b_key: 42, "method": "inverse"} x = n.zeros((5, binode.output_dim)) result = binode.execute(x, msg) assert len(result) == 3 assert result[2] == -1 assert result[0].shape == (5, 20) def test_inverse_magic2(self): """Test overriding the magic inverse target.""" class TestBiNode(BiNode): def _inverse(self, x, a, b): self.a = a self.b = b y = n.zeros((len(x), self.input_dim)) return y, None, "test2" @staticmethod def is_trainable(): return False binode = TestBiNode(node_id="test", input_dim=20, output_dim=10) b_key = "test" + MSG_ID_SEP + "b" msg = {"c": 12, "a": 13, b_key: 42, "method": "inverse"} x = n.zeros((5, binode.output_dim)) result = binode.execute(x, msg) assert result[2] == "test2" def test_stoptrain_result1(self): """Test that stop_result is handled correctly.""" stop_result = ({"test": 0}, 1) bi_sfa_node = SFABiNode(stop_result=stop_result, node_id="testing binode") assert bi_sfa_node.is_trainable() x = n.random.random((100,10)) train_result = bi_sfa_node.train(x) assert train_result == None assert bi_sfa_node.is_training() result = bi_sfa_node.stop_training() assert result == (None,) + stop_result assert bi_sfa_node.input_dim == 10 assert bi_sfa_node.output_dim == 10 assert bi_sfa_node.dtype == "float64" def test_stoptrain_result2(self): """Test that stop_result is handled correctly for multiple phases.""" stop_result = [({"test": 0}, 1), ({"test2": 0}, 2)] binode = FDABiNode(stop_result=stop_result, node_id="testing binode") x = n.random.random((100,10)) msg = {"labels": n.zeros(len(x))} binode.train(x, msg) result = binode.stop_training() assert result == (None,) + stop_result[0] binode.train(x, msg) result = binode.stop_training() assert result == (None,) + stop_result[1] def test_stop_training_execute(self): """Test the magic execute method argument for stop_training.""" class TestBiNode(BiNode): def _train(self, x): pass def _execute(self, x, a): self.a = a self.x = x y = n.zeros((len(x), self.output_dim)) return y binode = TestBiNode(input_dim=20, output_dim=10) x = n.ones((5, binode.input_dim)) binode.train(x) msg = {"x": x, "a": 13, "method": "execute"} result = binode.stop_training(msg) assert n.all(binode.x == x) assert binode.x.shape == (5, binode.input_dim) assert binode.a == 13 assert len(result) == 2 assert result[0].shape == (5, binode.output_dim) assert not n.any(result[0]) def test_stop_training_inverse(self): """Test the magic inverse method argument for stop_training.""" class TestBiNode(BiNode): def _train(self, x): pass def _inverse(self, x, a): self.a = a self.x = x y = n.zeros((len(x), self.input_dim)) return y binode = TestBiNode(input_dim=20, output_dim=10) binode.train(n.ones((5, binode.input_dim))) x = n.ones((5, binode.output_dim)) msg = {"x": x, "a": 13, "method": "inverse"} result = binode.stop_training(msg) assert n.all(binode.x == x) assert binode.x.shape == (5, binode.output_dim) assert binode.a == 13 assert len(result) == 3 assert result[2] == -1 assert result[0].shape == (5, binode.input_dim) assert not n.any(result[0]) def test_flow_from_sum(self): """Test the special addition method for BiNode.""" 
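# adding a plain mdp.Node to a BiNode should yield a BiFlow; further additions (and prepending a node) keep the BiFlow type and extend its length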
node1 = IdentityBiNode() node2 = mdp.Node() flow = node1 + node2 assert type(flow) is BiFlow node2 = IdentityBiNode() flow = node1 + node2 assert type(flow) is BiFlow assert len(flow) == 2 node3 = IdentityBiNode() flow = node1 + node2 + node3 assert type(flow) is BiFlow assert len(flow) == 3 node4 = IdentityBiNode() flow = node4 + flow assert type(flow) is BiFlow assert len(flow) == 4 class TestBiClassifierNode(object): def test_biclassifier(self): """Test the BiClassifier base class.""" class TestBiClassifier(BiClassifier): def _label(self, x): return "LABELS" def _prob(self, x): return "PROPS" @staticmethod def is_trainable(): return False node = TestBiClassifier() x = n.empty((5,2)) msg = {"return_labels": "test->", "return_probs": True} result = node.execute(x, msg) assert result[0] is x assert "labels" not in result[1] assert result[1]["probs"] == "PROPS" assert result[1][msg["return_labels"] + "labels"] == "LABELS" assert "rank" not in result[1] msg = {"return_labels": None} result = node.execute(x,msg) assert result[0] is x assert "labels" not in result[1] assert "prop" not in result[1] assert "rank" not in result[1] def test_autogen_biclassifier(self): """Test that the autogenerated classifiers work.""" node = SignumBiClassifier() msg = {"return_labels": True} # taken from the SignumClassifier unittest x = n.array([[1, 2, -3, -4], [1, 2, 3, 4]]) result = node.execute(x, msg) assert result[0] is x assert result[1]["labels"].tolist() == [-1, 1] class TestIdentityBiNode(object): def test_idnode(self): """Test the IdentityBiNode. Instantiation is tested and it should perform like an id node, but accept msg arguments. """ binode = IdentityBiNode(node_id="testing binode") x = n.random.random((10,5)) msg = {"some array": n.random.random((10,3))} # see if msg causes no problem y, msg = binode.execute(x, msg) assert n.all(x==y) # see if missing msg causes problem y = binode.execute(x) assert n.all(x==y) class TestJumpBiNode(object): def test_node(self): """Test the JumpBiNode.""" train_results = [[(0, "t1")], [None], [(3, "t3")]] stop_train_results = [None, (5, "st2"), (6, "st3")] execute_results = [(None, {}), None, (None, {}, "et4")] jumpnode = JumpBiNode(train_results=train_results, stop_train_results=stop_train_results, execute_results=execute_results) x = n.random.random((2,2)) assert jumpnode.is_trainable() # training rec_train_results = [] rec_stop_train_results = [] for _ in range(len(train_results)): rec_train_results.append([jumpnode.train(x)]) jumpnode.bi_reset() rec_stop_train_results.append(jumpnode.stop_training()) jumpnode.bi_reset() assert not jumpnode.is_training() assert rec_train_results == train_results assert rec_stop_train_results == rec_stop_train_results # execution rec_execute_results = [] for _ in range(4): # note that this is more then the execute_targets rec_execute_results.append(jumpnode.execute(x)) execute_results[1] = x execute_results.append(x) assert (rec_execute_results == execute_results) assert jumpnode.loop_counter == 4 class TestBiNodeCoroutine(object): """Test the coroutine decorator and the related BiNode functionality.""" def test_codecorator(self): """Test basic codecorator functionality.""" class CoroutineBiNode(BiNode): @staticmethod def is_trainable(): return False @binode_coroutine(["alpha", "beta"]) def _execute(self, x, alpha): """Blabla.""" x, alpha, beta = yield (x, {"alpha": alpha, "beta": 2}, self.node_id) x, alpha, beta = yield (x, {"alpha": alpha+1, "beta": beta+2}, self.node_id) yield x, {"alpha": alpha, "beta": beta} node = 
CoroutineBiNode(node_id="conode") flow = BiFlow([node]) x = n.random.random((3,2)) y, msg = flow.execute(x, {"alpha": 3}) assert msg["alpha"] == 4 assert msg["beta"] == 4 assert node.execute.__doc__ == """Blabla.""" def test_codecorator2(self): """Test codecorator functionality with StopIteration.""" class CoroutineBiNode(BiNode): @staticmethod def is_trainable(): return False @binode_coroutine(["alpha", "beta"]) def _execute(self, x, alpha): x, alpha, beta = yield (x, {"alpha": alpha, "beta": 2}, self.node_id) x, alpha, beta = yield (x, {"alpha": alpha+1, "beta": beta+2}, self.node_id) raise StopIteration(x, {"alpha": alpha, "beta": beta}) node = CoroutineBiNode(node_id="conode") flow = BiFlow([node]) x = n.random.random((3,2)) y, msg = flow.execute(x, {"alpha": 3}) assert msg["alpha"] == 4 assert msg["beta"] == 4 def test_codecorator_defaults(self): """Test codecorator argument default values.""" class CoroutineBiNode(BiNode): @staticmethod def is_trainable(): return False @binode_coroutine(["alpha", "beta"], defaults=(7,8)) def _execute(self, x): x, alpha, beta = yield (x, None, self.node_id) raise StopIteration(x, {"alpha": alpha, "beta": beta}) node = CoroutineBiNode(node_id="conode") flow = BiFlow([node]) x = n.random.random((3,2)) y, msg = flow.execute(x) assert msg["alpha"] == 7 assert msg["beta"] == 8 def test_codecorator_no_iteration(self): """Test codecorator corner case with no iterations.""" class CoroutineBiNode(BiNode): @staticmethod def is_trainable(): return False @binode_coroutine() def _execute(self, x): # at least one yield must be in a coroutine if False: yield None raise StopIteration(None, {"a": 1}, self.node_id) node1 = CoroutineBiNode() x = n.random.random((3,2)) result = node1.execute(x) assert result == (None, {"a": 1}, None) def test_codecorator_reset1(self): """Test that codecorator correctly resets after termination.""" class CoroutineBiNode(BiNode): @staticmethod def is_trainable(): return False @binode_coroutine() def _execute(self, x, a, msg=None): # note that the a argument is required, drop message for _ in range(2): x = yield x raise StopIteration(x) node1 = CoroutineBiNode() x = n.random.random((3,2)) # this inits the coroutine, a argument is needed node1.execute(x, {"a": 2}) node1.execute(x) node1.execute(x) assert node1._coroutine_instances == {} # couroutine should be reset, a argument is needed again py.test.raises(TypeError, node1.execute, x) def test_codecorator_reset2(self): """Test that codecorator correctly resets without yields.""" class CoroutineBiNode(BiNode): @staticmethod def is_trainable(): return False @binode_coroutine() def _execute(self, x, a, msg=None): if False: yield raise StopIteration(x) node1 = CoroutineBiNode() x = n.random.random((3,2)) node1.execute(x, {"a": 2}) assert node1._coroutine_instances == {} mdp-3.3/bimdp/test/test_gradient.py000066400000000000000000000226241203131624700174240ustar00rootroot00000000000000from __future__ import with_statement import mdp import bimdp from mdp import numx, numx_rand class TestGradientExtension(object): def test_sfa_gradient(self): """Test gradient for combination of SFA nodes.""" sfa_node1 = bimdp.nodes.SFABiNode(output_dim=8) sfa_node2 = bimdp.nodes.SFABiNode(output_dim=7) sfa_node3 = bimdp.nodes.SFABiNode(output_dim=5) flow = sfa_node1 + sfa_node2 + sfa_node3 x = numx_rand.random((300, 10)) flow.train(x) x = numx_rand.random((2, 10)) mdp.activate_extension("gradient") try: flow.execute(x, {"method": "gradient"}) finally: mdp.deactivate_extension("gradient") def 
test_gradient_product(self): """Test that the product of gradients is calculated correctly.""" sfa_node1 = bimdp.nodes.SFABiNode(output_dim=5) sfa_node2 = bimdp.nodes.SFABiNode(output_dim=3) flow = sfa_node1 + sfa_node2 x = numx_rand.random((300, 10)) flow.train(x) mdp.activate_extension("gradient") try: x1 = numx_rand.random((2, 10)) x2, msg = sfa_node1.execute(x1, {"method": "gradient"}) grad1 = msg["grad"] _, msg = sfa_node2.execute(x2, {"method": "gradient"}) grad2 = msg["grad"] grad12 = flow.execute(x1, {"method": "gradient"})[1]["grad"] # use a different way to calculate the product of the gradients, # this method is too memory intensive for large data ref_grad = numx.sum(grad2[:,:,numx.newaxis,:] * numx.transpose(grad1[:,numx.newaxis,:,:], (0,1,3,2)), axis=3) assert numx.amax(abs(ref_grad - grad12)) < 1E-9 finally: mdp.deactivate_extension("gradient") def test_quadexpan_gradient1(self): """Test validity of gradient for QuadraticExpansionBiNode.""" node = mdp.nodes.QuadraticExpansionNode() x = numx.array([[1, 3, 4]]) node.execute(x) mdp.activate_extension("gradient") try: result = node._gradient(x) grad = result[1]["grad"] reference = numx.array( [[[ 1, 0, 0], # x1 [ 0, 1, 0], # x2 [ 0, 0, 1], # x3 [ 2, 0, 0], # x1x1 [ 3, 1, 0], # x1x2 [ 4, 0, 1], # x1x3 [ 0, 6, 0], # x2x2 [ 0, 4, 3], # x2x3 [ 0, 0, 8]]]) # x3x3 assert numx.all(grad == reference) finally: mdp.deactivate_extension("gradient") def test_quadexpan_gradient2(self): """Test gradient with multiple data points.""" node = mdp.nodes.QuadraticExpansionNode() x = numx_rand.random((3,5)) node.execute(x) mdp.activate_extension("gradient") try: result = node._gradient(x) gradient = result[1]["grad"] assert gradient.shape == (3,20,5) finally: mdp.deactivate_extension("gradient") def test_sfa2_gradient(self): sfa2_node1 = bimdp.nodes.SFA2BiNode(output_dim=5) sfa2_node2 = bimdp.nodes.SFA2BiNode(output_dim=3) flow = sfa2_node1 + sfa2_node2 x = numx_rand.random((300, 6)) flow.train(x) x = numx_rand.random((2, 6)) mdp.activate_extension("gradient") try: flow.execute(x, {"method": "gradient"}) finally: mdp.deactivate_extension("gradient") def test_sfa2_gradient2(self): def _alt_sfa2_grad(self, x): """Reference grad method based on quadratic forms.""" # note that the H and f arrays are cached in the node and remain even # after the extension has been deactivated if not hasattr(self, "__gradient_Hs"): quad_forms = [self.get_quadratic_form(i) for i in range(self.output_dim)] self.__gradient_Hs = numx.vstack((quad_form.H[numx.newaxis] for quad_form in quad_forms)) self.__gradient_fs = numx.vstack((quad_form.f[numx.newaxis] for quad_form in quad_forms)) grad = (numx.dot(x, self.__gradient_Hs) + numx.repeat(self.__gradient_fs[numx.newaxis,:,:], len(x), axis=0)) return grad sfa2_node = bimdp.nodes.SFA2BiNode(output_dim=3) x = numx_rand.random((300, 6)) sfa2_node.train(x) sfa2_node.stop_training() x = numx_rand.random((2, 6)) mdp.activate_extension("gradient") try: result1 = sfa2_node.execute(x, {"method": "gradient"}) grad1 = result1[1]["grad"] grad2 = _alt_sfa2_grad(sfa2_node, x) assert numx.amax(abs(grad1 - grad2)) < 1E-9 finally: mdp.deactivate_extension("gradient") def test_layer_gradient(self): """Test gradient for a simple layer.""" node1 = mdp.nodes.SFA2Node(input_dim=4, output_dim=3) node2 = mdp.nodes.SFANode(input_dim=6, output_dim=2) layer = mdp.hinet.Layer([node1, node2]) x = numx_rand.random((100,10)) layer.train(x) layer.stop_training() mdp.activate_extension("gradient") try: x = numx_rand.random((7,10)) result = layer._gradient(x) 
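# the reference gradient built below is block-diagonal: each node of the layer contributes its own gradient on its own slice of the output and input dimensions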
grad = result[1]["grad"] # get reference result grad1 = node1._gradient(x[:, :node1.input_dim])[1]["grad"] grad2 = node2._gradient(x[:, node1.input_dim:])[1]["grad"] ref_grad = numx.zeros(((7,5,10))) ref_grad[:, :node1.output_dim, :node1.input_dim] = grad1 ref_grad[:, node1.output_dim:, node1.input_dim:] = grad2 assert numx.all(grad == ref_grad) finally: mdp.deactivate_extension("gradient") def test_clonebilayer_gradient(self): """Test gradient for a simple layer.""" layer = bimdp.hinet.CloneBiLayer( bimdp.nodes.SFA2BiNode(input_dim=5, output_dim=2), n_nodes=3) x = numx_rand.random((100,15)) layer.train(x) layer.stop_training() mdp.activate_extension("gradient") try: x = numx_rand.random((7,15)) result = layer._gradient(x) grad = result[1]["grad"] assert grad.shape == (7,6,15) finally: mdp.deactivate_extension("gradient") def test_switchboard_gradient1(self): """Test that gradient is correct for a tiny switchboard.""" sboard = mdp.hinet.Switchboard(input_dim=4, connections=[2,0]) x = numx_rand.random((2,4)) mdp.activate_extension("gradient") try: result = sboard._gradient(x) grad = result[1]["grad"] ref_grad = numx.array([[[0,0,1,0], [1,0,0,0]], [[0,0,1,0], [1,0,0,0]]], dtype=grad.dtype) assert numx.all(grad == ref_grad) finally: mdp.deactivate_extension("gradient") def test_switchboard_gradient2(self): """Test gradient for a larger switchboard.""" dim = 100 connections = [int(i) for i in numx.random.random((dim,)) * (dim-1)] sboard = mdp.hinet.Switchboard(input_dim=dim, connections=connections) x = numx.random.random((10, dim)) # assume a 5-dimensional gradient at this stage grad = numx.random.random((10, dim, 5)) # original reference implementation def _switchboard_grad(self, x): grad = numx.zeros((self.output_dim, self.input_dim)) grad[range(self.output_dim), self.connections] = 1 return numx.tile(grad, (len(x), 1, 1)) with mdp.extension("gradient"): result = sboard._gradient(x, grad) ext_grad = result[1]["grad"] tmp_grad = _switchboard_grad(sboard, x) ref_grad = numx.asarray([numx.dot(tmp_grad[i], grad[i]) for i in range(len(tmp_grad))]) assert numx.all(ext_grad == ref_grad) def test_network_gradient(self): """Test gradient for a small SFA network.""" sfa_node = bimdp.nodes.SFABiNode(input_dim=4*4, output_dim=5) switchboard = bimdp.hinet.Rectangular2dBiSwitchboard( in_channels_xy=8, field_channels_xy=4, field_spacing_xy=2) flownode = bimdp.hinet.BiFlowNode(bimdp.BiFlow([sfa_node])) sfa_layer = bimdp.hinet.CloneBiLayer(flownode, switchboard.output_channels) flow = bimdp.BiFlow([switchboard, sfa_layer]) train_gen = [numx_rand.random((10, switchboard.input_dim)) for _ in range(3)] flow.train([None, train_gen]) # now can test the gradient mdp.activate_extension("gradient") try: x = numx_rand.random((3, switchboard.input_dim)) result = flow(x, {"method": "gradient"}) grad = result[1]["grad"] assert grad.shape == (3, sfa_layer.output_dim, switchboard.input_dim) finally: mdp.deactivate_extension("gradient") mdp-3.3/bimdp/test/test_namespace_fixups.py000066400000000000000000000004661203131624700211610ustar00rootroot00000000000000from mdp.test.test_namespace_fixups import (generate_calls, test_exports) MODULES = ['bimdp', 'bimdp.nodes', 'bimdp.hinet', 'bimdp.parallel', ] def pytest_generate_tests(metafunc): generate_calls(MODULES, metafunc) mdp-3.3/bimdp/test/test_parallelbiflow.py000066400000000000000000000066501203131624700206270ustar00rootroot00000000000000import mdp from mdp import numx as n from bimdp.nodes import SFABiNode, SFA2BiNode from bimdp.parallel import ParallelBiFlow # TODO: 
maybe test the helper classes as well, e.g. the new callable class TestParallelBiNode(object): def test_stop_message_attribute(self): """Test that the stop_result attribute is present in forked node.""" stop_result = ({"test": "blabla"}, "node123") x = n.random.random([100,10]) node = SFABiNode(stop_result=stop_result) try: mdp.activate_extension("parallel") node2 = node.fork() node2.train(x) forked_result = node2.stop_training() assert forked_result == (None,) + stop_result # same with derived sfa2 node node = SFA2BiNode(stop_result=stop_result) mdp.activate_extension("parallel") node2 = node.fork() node2.train(x) forked_result = node2.stop_training() assert forked_result == (None,) + stop_result finally: mdp.deactivate_extension("parallel") class TestParallelBiFlow(object): def test_nonparallel_flow(self): """Test a ParallelBiFlow with standard nodes.""" flow = ParallelBiFlow([mdp.nodes.SFANode(output_dim=5), mdp.nodes.PolynomialExpansionNode(degree=3), mdp.nodes.SFANode(output_dim=20)]) data_iterables = [[n.random.random((20,10)) for _ in range(6)], None, [n.random.random((20,10)) for _ in range(6)]] scheduler = mdp.parallel.Scheduler() flow.train(data_iterables, scheduler=scheduler) x = n.random.random([100,10]) flow.execute(x) iterator = [n.random.random((20,10)) for _ in range(6)] flow.execute(iterator, scheduler=scheduler) scheduler.shutdown() def test_mixed_parallel_flow(self): """Test a ParallelBiFlow with both standard and BiNodes.""" flow = ParallelBiFlow([mdp.nodes.PCANode(output_dim=8), SFABiNode(output_dim=5), SFA2BiNode(output_dim=20)]) data_iterables = [[n.random.random((20,10)) for _ in range(6)]] * 3 scheduler = mdp.parallel.Scheduler() flow.train(data_iterables, scheduler=scheduler) x = n.random.random([100,10]) flow.execute(x) iterator = [n.random.random((20,10)) for _ in range(6)] flow.execute(iterator, scheduler=scheduler) scheduler.shutdown() def test_parallel_process(self): """Test training and execution with multiple training phases. The node with multiple training phases is a hinet.FlowNode. 
""" sfa_node = mdp.nodes.SFANode(input_dim=10, output_dim=8) sfa2_node = mdp.nodes.SFA2Node(input_dim=8, output_dim=6) flow = ParallelBiFlow([sfa_node, sfa2_node]) data_iterables = [[n.random.random((30,10)) for _ in range(6)], [n.random.random((30,10)) for _ in range(7)]] scheduler = mdp.parallel.ProcessScheduler(n_processes=2) flow.train(data_iterables, scheduler=scheduler) flow.execute(data_iterables[1], scheduler=scheduler) x = n.random.random([100,10]) flow.execute(x) iterator = [n.random.random((20,10)) for _ in range(6)] flow.execute(iterator, scheduler=scheduler) scheduler.shutdown() mdp-3.3/bimdp/test/test_parallelbihinet.py000066400000000000000000000022351203131624700207620ustar00rootroot00000000000000import mdp from mdp import numx as n from bimdp import BiFlow, MSG_ID_SEP, EXIT_TARGET from bimdp.parallel import ( ParallelBiFlow, ParallelCloneBiLayer ) from bimdp.nodes import SFABiNode from bimdp.hinet import BiFlowNode class TestCloneBiLayer(object): """Test the behavior of the BiCloneLayer.""" def test_use_copies_msg(self): """Test the correct reaction to an outgoing use_copies message.""" stop_result = ({"clonelayer" + MSG_ID_SEP + "use_copies": True}, EXIT_TARGET) stop_sfa_node = SFABiNode(stop_result=stop_result, input_dim=10, output_dim=3) biflownode = BiFlowNode(BiFlow([stop_sfa_node])) clonelayer = ParallelCloneBiLayer(node=biflownode, n_nodes=3, use_copies=False, node_id="clonelayer") data = [[n.random.random((100,30)) for _ in range(5)]] biflow = ParallelBiFlow([clonelayer]) biflow.train(data, scheduler=mdp.parallel.Scheduler()) assert clonelayer.use_copies is True mdp-3.3/cleandist000077500000000000000000000000501203131624700140250ustar00rootroot00000000000000rm -rf dist rm -rf build rm -f MANIFEST mdp-3.3/gendist000077500000000000000000000005621203131624700135240ustar00rootroot00000000000000#!/bin/bash ./cleandist python2.6 -V python2.6 ./setup.py sdist --manifest-only python2.6 ./setup.py sdist --formats=zip,gztar python2.6 ./setup.py bdist_wininst --plat-name=Python2 # for python 3 you need to do it under windows # C:\Python31\python.exe setup.py bdist_wininst --plat-name=Python3 # then copy the resulting installer from build/py3k/dist rm -rf MANIFEST mdp-3.3/mdp/000077500000000000000000000000001203131624700127165ustar00rootroot00000000000000mdp-3.3/mdp/__init__.py000066400000000000000000000212171203131624700150320ustar00rootroot00000000000000# Modular toolkit for Data Processing (MDP) """\ **The Modular toolkit for Data Processing (MDP)** package is a library of widely used data processing algorithms, and the possibility to combine them together to form pipelines for building more complex data processing software. MDP has been designed to be used as-is and as a framework for scientific data processing development. From the user's perspective, MDP consists of a collection of *units*, which process data. For example, these include algorithms for supervised and unsupervised learning, principal and independent components analysis and classification. These units can be chained into data processing flows, to create pipelines as well as more complex feed-forward network architectures. Given a set of input data, MDP takes care of training and executing all nodes in the network in the correct order and passing intermediate data between the nodes. This allows the user to specify complex algorithms as a series of simpler data processing steps. 
The number of available algorithms is steadily increasing and includes signal processing methods (Principal Component Analysis, Independent Component Analysis, Slow Feature Analysis), manifold learning methods ([Hessian] Locally Linear Embedding), several classifiers, probabilistic methods (Factor Analysis, RBM), data pre-processing methods, and many others. Particular care has been taken to make computations efficient in terms of speed and memory. To reduce the memory footprint, it is possible to perform learning using batches of data. For large data-sets, it is also possible to specify that MDP should use single precision floating point numbers rather than double precision ones. Finally, calculations can be parallelised using the ``parallel`` subpackage, which offers a parallel implementation of the basic nodes and flows. From the developer's perspective, MDP is a framework that makes the implementation of new supervised and unsupervised learning algorithms easy and straightforward. The basic class, ``Node``, takes care of tedious tasks like numerical type and dimensionality checking, leaving the developer free to concentrate on the implementation of the learning and execution phases. Because of the common interface, the node then automatically integrates with the rest of the library and can be used in a network together with other nodes. A node can have multiple training phases and even an undetermined number of phases. Multiple training phases mean that the training data is presented multiple times to the same node. This allows the implementation of algorithms that need to collect some statistics on the whole input before proceeding with the actual training, and others that need to iterate over a training phase until a convergence criterion is satisfied. It is possible to train each phase using chunks of input data if the chunks are given as an iterable. Moreover, crash recovery can be optionally enabled, which will save the state of the flow in case of a failure for later inspection. MDP is distributed under the open source BSD license. It has been written in the context of theoretical research in neuroscience, but it has been designed to be helpful in any context where trainable data processing algorithms are used. Its simplicity on the user's side, the variety of readily available algorithms, and the reusability of the implemented nodes also make it a useful educational tool. http://mdp-toolkit.sourceforge.net """ __docformat__ = "restructuredtext en" # The descriptions strings below are parsed with a regexp in setup.py. # Don't do anything fancy, keep strings triple quoted and verify that # the get_*_description functions continue to work. # __short_description__ must be one line, 200 characters maximum. # C.f. http://docs.python.org/distutils/setupscript.html?highlight=description#meta-data __short_description__ = """\ MDP is a Python library for building complex data processing software \ by combining widely used machine learning algorithms into pipelines \ and networks.""" __medium_description__ ="""\ **Modular toolkit for Data Processing (MDP)** is a Python data processing framework. From the user's perspective, MDP is a collection of supervised and unsupervised learning algorithms and other data processing units that can be combined into data processing sequences and more complex feed-forward network architectures. From the scientific developer's perspective, MDP is a modular framework, which can easily be expanded. The implementation of new algorithms is easy and intuitive. 
The new implemented units are then automatically integrated with the rest of the library. The base of available algorithms is steadily increasing and includes signal processing methods (Principal Component Analysis, Independent Component Analysis, Slow Feature Analysis), manifold learning methods ([Hessian] Locally Linear Embedding), several classifiers, probabilistic methods (Factor Analysis, RBM), data pre-processing methods, and many others. """ class MDPException(Exception): """Base class for exceptions in MDP.""" pass class MDPWarning(UserWarning): """Base class for warnings in MDP.""" pass class MDPDeprecationWarning(DeprecationWarning, MDPWarning): """Warn about deprecated MDP API.""" pass import configuration __version__ = '3.3' __revision__ = configuration.get_git_revision() __authors__ = 'MDP Developers' __copyright__ = '(c) 2003-2012 mdp-toolkit-devel@lists.sourceforge.net' __license__ = 'BSD License, see COPYRIGHT' __contact__ = 'mdp-toolkit-users@lists.sourceforge.net' __homepage__ = 'http://mdp-toolkit.sourceforge.net' configuration.set_configuration() config = configuration.config (numx_description, numx, numx_linalg, numx_fft, numx_rand, numx_version) = configuration.get_numx() # import the utils module (used by other modules) import utils # set symeig utils.symeig = configuration.get_symeig(numx_linalg) # import exceptions from nodes and flows from signal_node import (NodeException, InconsistentDimException, TrainingException, TrainingFinishedException, IsNotTrainableException, IsNotInvertibleException) from linear_flows import CrashRecoveryException, FlowException, FlowExceptionCR # import base nodes and flow classes from signal_node import (NodeMetaclass, Node, PreserveDimNode, Cumulator, VariadicCumulator) from linear_flows import (Flow, CheckpointFlow, CheckpointFunction, CheckpointSaveFunction) # import helper functions: from helper_funcs import pca, fastica # import extension mechanism from extension import (ExtensionException, extension_method, ExtensionNodeMetaclass, ExtensionNode, get_extensions, get_active_extensions, with_extension, activate_extension, deactivate_extension, activate_extensions, deactivate_extensions, extension) # import classifier node from classifier_node import (ClassifierNode, ClassifierCumulator) # import our modules import nodes import hinet import parallel from test import test # explicitly set __all__, mainly needed for epydoc __all__ = ['config', 'CheckpointFlow', 'CheckpointFunction', 'CheckpointSaveFunction', 'ClassifierCumulator', 'ClassifierNode', 'CrashRecoveryException', 'Cumulator', 'ExtensionNode', 'ExtensionNodeMetaclass', 'Flow', 'FlowException', 'FlowExceptionCR', 'IsNotInvertibleException', 'IsNotTrainableException', 'MDPException', 'MDPWarning', 'Node', 'NodeException', 'TrainingException', 'TrainingFinishedException', 'VariadicCumulator', 'activate_extension', 'activate_extensions', 'deactivate_extension', 'deactivate_extensions', 'extension', 'extension_method', 'get_extensions', 'graph', 'hinet', 'nodes', 'parallel', 'pca', 'fastica', 'utils', 'with_extension', ] if config.has_joblib: import caching __all__ += ['caching'] utils.fixup_namespace(__name__, __all__, ('signal_node', 'linear_flows', 'helper_funcs', 'classifier_node', 'configuration', 'repo_revision', 'extension', ),('extension', 'configuration')) mdp-3.3/mdp/caching/000077500000000000000000000000001203131624700143125ustar00rootroot00000000000000mdp-3.3/mdp/caching/__init__.py000066400000000000000000000005761203131624700164330ustar00rootroot00000000000000from 
caching_extension import (activate_caching, deactivate_caching, cache, set_cachedir, __doc__, __docformat__) from mdp.utils import fixup_namespace __all__ = ['activate_caching', 'deactivate_caching', 'cache', 'set_cachedir'] fixup_namespace(__name__, __all__,('caching_extension','fixup_namespace',)) mdp-3.3/mdp/caching/caching_extension.py000066400000000000000000000161431203131624700203610ustar00rootroot00000000000000"""MDP extension to cache the execution phase of nodes. This extension is based on the **joblib** library by Gael Varoquaux, available at http://packages.python.org/joblib/. At the moment, the extension is based on joblib v. 0.4.6. """ __docformat__ = "restructuredtext en" import joblib from ..utils import TemporaryDirectory from ..extension import ExtensionNode, activate_extension, deactivate_extension from ..signal_node import Node # -- global attributes for this extension _cachedir = None # If a temporary directory is used, a reference to the # TemporaryDirectory object is kept here. The directory will be # deleted when this object is destroyed, so either when this module is # destroyed, or when a new directory is set and it is temporary # directory again. _cacheobj = None # instance of joblib cache object (set with set_cachedir) _memory = None # True if the cache is active for *all* classes _cache_active_global = True _cached_classes = [] _cached_instances = [] _cached_methods = {} def set_cachedir(cachedir=None, verbose=0): """Set root directory for the joblib cache. :Parameters: cachedir the cache directory name; if ``None``, a temporary directory is created using `TemporaryDirectory` verbose an integer number, controls the verbosity of the cache (default is 0, i.e., not verbose) """ global _cachedir global _cacheobj global _cached_methods global _memory if cachedir is None: _cacheobj = TemporaryDirectory(prefix='mdp-joblib-cache.') cachedir = _cacheobj.name # only reset if the directory changes if cachedir != _cachedir: _cachedir = cachedir _memory = joblib.Memory(cachedir, verbose=verbose) # reset cached methods _cached_methods.clear() # initialize cache with temporary directory #set_cachedir() class CacheExecuteExtensionNode(ExtensionNode, Node): """MDP extension for caching execution results. The return value of the 'execute' method is cached if: 1) the extension is activated in global mode 2) the Node subclass is registered to be cached or 3) the instance is registered to be cached *Warning: this extension might break the algorithms if nodes rely on side effects.* See `activate_caching`, `deactivate_caching`, and the `cache` context manager to learn about how to activate the caching mechanism and its options. """ extension_name = 'cache_execute' def is_cached(self): """Return True if the node is cached.""" global _cache_active_global global _cached_classes global _cached_instances return (_cache_active_global or self.__class__ in _cached_classes or self in _cached_instances) def set_instance_cache(self, active=True): """Add or remove this instance from caching. The global caching and class caching options still have priority over the instance caching option.
""" # add to global dictionary global _cached_instances if active: _cached_instances.append(self) else: if self in _cached_instances: _cached_instances.remove(self) def execute(self, x, *args, **kwargs): global _cached_methods # cache is not active globally, for this class or instance: # call original execute method if not self.is_cached(): return self._non_extension_execute(x, *args, **kwargs) if self not in _cached_methods: global _memory _cached_methods[self] = _memory.cache( self._non_extension_execute.im_func) # execute pre-execution checks once so that all automatic # settings of things like dtype and input_dim are done, and # caching begins from first execution, not the second self._pre_execution_checks(x) return _cached_methods[self](self, x, *args, **kwargs) # ------- helper functions and context manager # TODO: check that classes and instances are Nodes def activate_caching(cachedir=None, cache_classes=None, cache_instances=None, verbose=0): """Activate caching extension. By default, the cache is activated globally (i.e., for all instances of Node). If cache_classes or cache_instances are specified, the cache is activated only for those classes and instances. :Parameters: cachedir The root of the joblib cache, or a temporary directory if None cache_classes A list of Node subclasses for which caching is activated. Default value: None cache_instances A list of Node instances for which caching is activated. Default value: None """ global _cache_active_global global _cached_classes global _cached_instances set_cachedir(cachedir=cachedir, verbose=verbose) _cache_active_global = (cache_classes is None and cache_instances is None) # active cache for specific classes and instances if cache_classes is not None: _cached_classes = list(cache_classes) if cache_instances is not None: _cached_instances = list(cache_instances) activate_extension('cache_execute') def deactivate_caching(cachedir=None): """De-activate caching extension.""" deactivate_extension('cache_execute') # reset global variables global _cache_active_global global _cached_classes global _cached_instances global _cached_methods _cache_active_global = True _cached_classes = [] _cached_instances = [] _cached_methods = {} class cache(object): """Context manager for the 'cache_execute' extension. This allows using the caching extension using a 'with' statement, as in: >>> with mdp.caching.cache(CACHEDIR): # doctest: +SKIP ... # 'node' is executed caching the results in CACHEDIR ... node.execute(x) If the argument to the context manager is not specified, caching is done in a temporary directory. """ def __init__(self, cachedir=None, cache_classes=None, cache_instances=None, verbose=0): """Activate caching extension. By default, the cache is activated globally (i.e., for all instances of Node). If cache_classes or cache_instances are specified, the cache is activated only for those classes and instances. :Parameters: cachedir The root of the joblib cache, or a temporary directory if None cache_classes A list of Node subclasses for which caching is activated. Default value: None cache_instances A list of Node instances for which caching is activated.
Default value: None """ self.cachedir = cachedir self.cache_classes = cache_classes self.cache_instances = cache_instances self.verbose = verbose def __enter__(self): activate_caching(self.cachedir, self.cache_classes, self.cache_instances, self.verbose) def __exit__(self, type, value, traceback): deactivate_caching() mdp-3.3/mdp/classifier_node.py000066400000000000000000000124521203131624700164250ustar00rootroot00000000000000import mdp from mdp import PreserveDimNode, numx, VariadicCumulator import operator class ClassifierNode(PreserveDimNode): """A ClassifierNode can be used for classification tasks that should not interfere with the normal execution flow. A reason for that is that the labels used for classification do not form a vector space, and so they don't make much sense in a flow. """ def __init__(self, execute_method=None, input_dim=None, output_dim=None, dtype=None): """Initialize classifier. execute_method -- Set to string value 'label', 'rank', or 'prob' to force the corresponding classification method being used instead of the standard identity execution (which is used when execute_method has the default value None). This can be used when the node is last in a flow, the return value from Flow.execute will then consist of the classification results. """ self.execute_method = execute_method super(ClassifierNode, self).__init__(input_dim=input_dim, output_dim=output_dim, dtype=dtype) ### Methods to be implemented by the subclasses def _label(self, x, *args, **kargs): raise NotImplementedError def _prob(self, x, *args, **kargs): raise NotImplementedError ### User interface to the overwritten methods def label(self, x, *args, **kwargs): """Returns an array with best class labels. By default, subclasses should overwrite _label to implement their label. The docstring of the '_label' method overwrites this docstring. """ self._pre_execution_checks(x) return self._label(self._refcast(x), *args, **kwargs) def prob(self, x, *args, **kwargs): """Returns the probability for each datapoint and label (e.g., [{1:0.1, 2:0.0, 3:0.9}, {1:1.0, 2:0.0, 3:0.0}, ...]) By default, subclasses should overwrite _prob to implement their prob. The docstring of the '_prob' method overwrites this docstring. """ self._pre_execution_checks(x) return self._prob(self._refcast(x), *args, **kwargs) def rank(self, x, threshold=None): """Returns ordered list with all labels ordered according to prob(x) (e.g., [[3 1 2], [2 1 3], ...]). The optional threshold parameter is used to exclude labels having equal or less probability. E.g. threshold=0 excludes all labels with zero probability. """ all_ranking = [] prob = self.prob(x) for p in prob: if threshold is None: ranking = p.items() else: ranking = ((k, v) for k, v in p.items() if v > threshold) result = [k for k, v in sorted(ranking, key=operator.itemgetter(1), reverse=True)] all_ranking.append(result) return all_ranking def _execute(self, x): if not self.execute_method: return x elif self.execute_method == "label": return self.label(x) elif self.execute_method == "rank": return self.rank(x) elif self.execute_method == "prob": return self.prob(x) # XXX are the _train and _stop_training functions necessary anymore? class ClassifierCumulator(VariadicCumulator('data', 'labels'), ClassifierNode): """A ClassifierCumulator is a Node whose training phase simply collects all input data and labels. In this way it is possible to easily implement batch-mode learning. The data is accessible in the attribute 'self.data' after the beginning of the '_stop_training' phase. 
'self.tlen' contains the number of data points collected. 'self.labels' contains the assigned label to each data point. """ def __init__(self, input_dim=None, output_dim=None, dtype=None): super(ClassifierCumulator, self).__init__(input_dim=input_dim, output_dim=output_dim, dtype=dtype) def _check_train_args(self, x, labels): super(ClassifierCumulator, self)._check_train_args(x, labels) if (isinstance(labels, (list, tuple, numx.ndarray)) and len(labels) != x.shape[0]): msg = ("The number of labels must be equal to the number of " "datapoints (%d != %d)" % (len(labels), x.shape[0])) raise mdp.TrainingException(msg) def _train(self, x, labels): """Cumulate all input data in a one dimensional list.""" self.tlen += x.shape[0] self.data.extend(x.ravel().tolist()) # if labels is a number, all x's belong to the same class if isinstance(labels, (list, tuple, numx.ndarray)): pass else: labels = [labels] * x.shape[0] self.labels.extend(labels.ravel().tolist()) def _stop_training(self, *args, **kwargs): """Transform the data and labels lists to array objects and reshape them.""" self.data = numx.array(self.data, dtype=self.dtype) self.data.shape = (self.tlen, self.input_dim) self.labels = numx.array(self.labels) self.labels.shape = (self.tlen) mdp-3.3/mdp/configuration.py000066400000000000000000000371331203131624700161460ustar00rootroot00000000000000from __future__ import with_statement import sys import os import tempfile import inspect import mdp from repo_revision import get_git_revision import cStringIO as StringIO __docformat__ = "restructuredtext en" class MetaConfig(type): """Meta class for config object to allow for pretty printing of class config (as we never instantiate it)""" def __str__(self): return self.info() def __repr__(self): return self.info() class config(object): """Provide information about optional dependencies. This class should not be instantiated, it serves as a namespace for dependency information. This information is encoded as a series of attributes called ``has_``. Dependency parameters are object which have a a boolean value (``True`` if the dependency is available). If False, they contain an error string which will be used in ``mdp.config.info()`` output. If ``True``, they contain information about the available version of the dependency. Those objects should be created by using the helper class methods `ExternalDepFound` and `ExternalDepFailed`. >>> bool(config.has_python) True Dependency parameters are numbered in the order of creation, so the output is predictable. The selection of the numerical backend (`numpy` or `scipy`) can be forced by setting the environment variable MDPNUMX. The loading of an optional dependency can be inhibited by setting the environment variables ``MDP_DISABLE_`` to a non-empty value. The following variables are defined: ``MDPNUMX`` either ``numpy`` or ``scipy``. By default the latter is used if available. ``MDP_DISABLE_PARALLEL_PYTHON`` inhibit loading of `mdp.parallel` based on parallel python (module ``pp``) ``MDP_DISABLE_SHOGUN`` inhibit loading of the shogun classifier ``MDP_DISABLE_LIBSVM`` inhibit loading of the svm classifier ``MDP_DISABLE_JOBLIB`` inhibit loading of the ``joblib`` module and `mdp.caching` ``MDP_DISABLE_SKLEARN`` inhibit loading of the ``sklearn`` module ``MDPNSDEBUG`` print debugging information during the import process ``MDP_PP_SECRET`` set parallel python (pp) secret. If not set, and no secret is known to pp, a default secret will be used. 
``MDP_DISABLE_MONKEYPATCH_PP`` disable automatic monkeypatching of parallel python worker script, otherwise a work around for debian bug #620551 is activated. """ __metaclass__ = MetaConfig _HAS_NUMBER = 0 class _ExternalDep(object): def __init__(self, name, version=None, failmsg=None): assert (version is not None) + (failmsg is not None) == 1 self.version = str(version) # convert e.g. exception to str self.failmsg = str(failmsg) if failmsg is not None else None global config self.order = config._HAS_NUMBER config._HAS_NUMBER += 1 setattr(config, 'has_' + name, self) def __nonzero__(self): return self.failmsg is None def __repr__(self): if self: return self.version else: return "NOT AVAILABLE: " + self.failmsg @classmethod def ExternalDepFailed(cls, name, failmsg): """Inform that an optional dependency was not found. A new `_ExternalDep` object will be created and stored in `config`. :Parameters: name identifier of the optional dependency. This should be a valid python identifier, because it will be accessible as ``mdp.config.has_`` attribute. failmsg an object convertible to ``str``, which will be displayed in ``mdp.config.info()`` output. This will usually be either an exception (e.g. ``ImportError``), or a message string. """ return cls._ExternalDep(name, failmsg=failmsg) @classmethod def ExternalDepFound(cls, name, version): """Inform that an optional dependency was found. A new `_ExternalDep` object will be created and stored in `config`. :Parameters: name identifier of the optional dependency. This should be a valid python identifier, because it will be accessible as ``mdp.config.has_`` attribute. version an object convertible to ``str``, which will be displayed in ``mdp.config.info()`` output. Something like ``'0.4.3'``. """ return cls._ExternalDep(name, version=version) @classmethod def info(cls): """Return nicely formatted info about MDP. >>> print mdp.config.info() # doctest: +SKIP python: 2.7.2.final.0 mdp: 3.3, MDP-3.2-9-g4bc7356+ parallel python: 1.6.1-monkey-patched shogun: v1.1.0_02ce3cd_2011-12-12_08:17_ libsvm: libsvm.so.3 joblib: 0.5.4 sklearn: 0.9 numx: scipy 0.9.0 symeig: scipy.linalg.eigh This function is used to provide the py.test report header and footer. """ listable_features = [(f[4:].replace('_', ' '), getattr(cls, f)) for f in dir(cls) if f.startswith('has_')] maxlen = max(len(f[0]) for f in listable_features) listable_features = sorted(listable_features, key=lambda f: f[1].order) return '\n'.join('%*s: %r' % (maxlen+1, f[0], f[1]) for f in listable_features) def get_numx(): # find out the numerical extension # To force MDP to use one specific extension module # set the environment variable MDPNUMX # Mainly useful for testing USR_LABEL = os.getenv('MDPNUMX') # check if the variable is properly set if USR_LABEL and USR_LABEL not in ('numpy', 'scipy'): err = ("Numerical backend '%s'" % USR_LABEL + "not supported. 
Supported backends: numpy, scipy.") raise ImportError(err) numx_description = None numx_exception = {} # if variable is not set or the user wants scipy if USR_LABEL is None or USR_LABEL == 'scipy': try: import scipy as numx from scipy import (linalg as numx_linalg, fftpack as numx_fft, random as numx_rand, version as numx_version) numx_description = 'scipy' config.ExternalDepFound('numx', 'scipy ' + numx_version.version) except ImportError, exc: if USR_LABEL: raise ImportError(exc) else: numx_exception['scipy'] = exc # if the user wants numpy or scipy was not available if USR_LABEL == 'numpy' or numx_description is None: try: import numpy as numx from numpy import (linalg as numx_linalg, fft as numx_fft, random as numx_rand, version as numx_version) numx_description = 'numpy' config.ExternalDepFound('numx', 'numpy ' + numx_version.version) except ImportError, exc: config.ExternalDepFailed('numx', exc) numx_exception['numpy'] = exc # fail if neither scipy nor numpy could be imported # the test is for numx_description, not numx, because numx could # be imported successfully, but e.g. numx_rand could later fail. if numx_description is None: msg = ([ "Could not import any of the numeric backends.", "Import errors:" ] + [ lab+': '+str(exc) for lab, exc in numx_exception.items() ] + ["sys.path: " + str(sys.path)]) raise ImportError('\n'.join(msg)) return (numx_description, numx, numx_linalg, numx_fft, numx_rand, numx_version) def get_symeig(numx_linalg): # if we have scipy, check if the version of # scipy.linalg.eigh supports the rich interface args = inspect.getargspec(numx_linalg.eigh)[0] if len(args) > 4: # if yes, just wrap it from utils._symeig import wrap_eigh as symeig config.ExternalDepFound('symeig', 'scipy.linalg.eigh') else: # either we have numpy, or we have an old scipy # we need to use our own rich wrapper from utils._symeig import _symeig_fake as symeig config.ExternalDepFound('symeig', 'symeig_fake') return symeig def _version_too_old(version, known_good): """Return True iff a version is smaller than a tuple of integers. This method will return True only if the version string can confidently be said to be smaller than ``known_good``. If the string cannot be parsed as dot-separated-integers, ``None`` (which is false) will be returned. The comparison is performed part by part, the first non-equal one wins. >>> _version_too_old('0.4.3', (0,4,3)) False >>> _version_too_old('0.4.2', (0,4,3)) True >>> _version_too_old('0.5.devel', (0,4,3)) False >>> _version_too_old('0.4.devel', (0,4,3)) """ for part,expected in zip(version.split('.'), known_good): try: p = int(part) except ValueError: return None if p < expected: return True if p > expected: break return False class _sys_stdout_replaced(object): "Replace systdout temporarily" def __enter__(self): self.sysstdout = sys.stdout sys.stdout = StringIO.StringIO() return sys.stdout def __exit__(self, *args): sys.stdout = self.sysstdout def _pp_needs_monkeypatching(): # only run this function the first time mdp is imported # otherwise reload(mdp) breaks if not hasattr(mdp, '_pp_needs_monkeypatching'): # check if we are on one of those broken system were # parallel python is affected by # http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=620551 # this is a minimal example to reproduce the problem # XXX IMPORTANT XXX # This function only works once, i.e. 
at import # if you attempt to call it again afterwards, # it does not work [pp does not print the error twice] # we need to hijack stdout here, because pp does not raise # exceptions: it writes to stdout directly!!! # pp stuff import pp server = pp.Server() with _sys_stdout_replaced() as capture: server.submit(lambda: None, (), (), ('numpy',))() server.destroy() # read error from hijacked stdout error = capture.getvalue() mdp._pp_needs_monkeypatching = 'ImportError' in error return mdp._pp_needs_monkeypatching def set_configuration(): # set python version config.ExternalDepFound('python', '.'.join([str(x) for x in sys.version_info])) version = mdp.__version__ if mdp.__revision__: version += ', ' + mdp.__revision__ config.ExternalDepFound('mdp', version) # parallel python dependency try: import pp # set pp secret if not there already # (workaround for debian patch to pp that disables pp's default password) pp_secret = os.getenv('MDP_PP_SECRET') or 'mdp-pp-support-password' # module 'user' has been deprecated since python 2.6 and deleted # completely as of python 3.0. # Basically pp can not work on python 3 at the moment. import user if not hasattr(user, 'pp_secret'): user.pp_secret = pp_secret except ImportError, exc: config.ExternalDepFailed('parallel_python', exc) else: if os.getenv('MDP_DISABLE_PARALLEL_PYTHON'): config.ExternalDepFailed('parallel_python', 'disabled') else: # even if we can import pp, starting the server may still fail # for example with: # OSError: [Errno 12] Cannot allocate memory try: server = pp.Server() server.destroy() except Exception, exc: # no idea what exception the pp server may raise # we need to catch all here... config.ExternalDepFailed('parallel_python', exc) else: if _pp_needs_monkeypatching(): if os.getenv('MDP_DISABLE_MONKEYPATCH_PP'): config.ExternalDepFailed('parallel_python', pp.version + ' broken on Debian') else: config.ExternalDepFound('parallel_python', pp.version + '-monkey-patched') config.pp_monkeypatch_dirname = tempfile.gettempdir() else: config.ExternalDepFound('parallel_python', pp.version) # shogun try: import shogun from shogun import (Kernel as sgKernel, Features as sgFeatures, Classifier as sgClassifier) except ImportError, exc: config.ExternalDepFailed('shogun', exc) else: if os.getenv('MDP_DISABLE_SHOGUN'): config.ExternalDepFailed('shogun', 'disabled') else: # From now on just support shogun >= 1.0 # Between 0.10 to 1.0 there are too many API changes... 
try: version = sgKernel.Version_get_version_release() except AttributeError: config.ExternalDepFailed('shogun', 'too old, upgrade to at least version 1.0') else: if not version.startswith('v1.'): config.ExternalDepFailed('shogun', 'too old, upgrade to at least version 1.0.') else: config.ExternalDepFound('shogun', version) # libsvm try: import svm as libsvm libsvm.libsvm except ImportError, exc: config.ExternalDepFailed('libsvm', exc) except AttributeError, exc: config.ExternalDepFailed('libsvm', 'libsvm version >= 2.91 required') else: if os.getenv('MDP_DISABLE_LIBSVM'): config.ExternalDepFailed('libsvm', 'disabled') else: config.ExternalDepFound('libsvm', libsvm.libsvm._name) # joblib try: import joblib except ImportError, exc: config.ExternalDepFailed('joblib', exc) else: version = joblib.__version__ if os.getenv('MDP_DISABLE_JOBLIB'): config.ExternalDepFailed('joblib', 'disabled') elif _version_too_old(version, (0,4,3)): config.ExternalDepFailed('joblib', 'version %s is too old' % version) else: config.ExternalDepFound('joblib', version) # sklearn try: try: import sklearn except ImportError: import scikits.learn as sklearn version = sklearn.__version__ except ImportError, exc: config.ExternalDepFailed('sklearn', exc) except AttributeError, exc: config.ExternalDepFailed('sklearn', exc) else: if os.getenv('MDP_DISABLE_SKLEARN'): config.ExternalDepFailed('sklearn', 'disabled') elif _version_too_old(version, (0,6)): config.ExternalDepFailed('sklearn', 'version %s is too old' % version) else: config.ExternalDepFound('sklearn', version) mdp-3.3/mdp/extension.py000066400000000000000000000445501203131624700153140ustar00rootroot00000000000000""" Extension Mechanism for nodes. The extension mechanism makes it possible to dynamically add class attributes, especially methods, for specific features to node classes (e.g. nodes need a _fork and _join method for parallelization). It is also possible for users to define new extensions to provide new functionality for MDP nodes without having to modify any MDP code. Without the extension mechanism extending nodes would be done by inheritance, which is fine unless one wants to use multiple inheritance at the same time (requiring multiple inheritance for every combination of extensions one wants to use). The extension mechanism does not depend on inheritance, instead it adds the methods to the node classes dynamically at runtime. This makes it possible to activate extensions just when they are needed, reducing the risk of interference between different extensions. However, since the extension mechanism provides a special Metaclass it is still possible to define the extension nodes as classes derived from nodes. This keeps the code readable and is compatible with automatic code checkers (like the background pylint checks in the Eclipse IDE with PyDev). """ from mdp import MDPException, NodeMetaclass # TODO: Register the node instances as well? # This would allow instance initialization when an extension is activated. # Implementing this should not be too hard via the metclass. # TODO: Add warning about overriding public methods with respect to # the docstring wrappers? 
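# A minimal usage sketch of the mechanism described in the module docstring
# (illustrative only: the extension name "toy" and the method "_toy_method"
# are made-up examples, not extensions shipped with MDP). A new extension is
# declared by subclassing ExtensionNode with an extension_name; an
# implementation for an individual node class can then be attached with the
# extension_method decorator and switched on with the extension context
# manager:
#
# >>> class ToyExtensionNode(mdp.ExtensionNode):        # doctest: +SKIP
# ...     extension_name = "toy"
# ...
# >>> @mdp.extension_method("toy", mdp.nodes.PCANode)   # doctest: +SKIP
# ... def _toy_method(self):
# ...     return self.output_dim
# ...
# >>> with mdp.extension("toy"):                        # doctest: +SKIP
# ...     mdp.nodes.PCANode(output_dim=3)._toy_method()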
# TODO: in the future could use ABC's to register nodes with extension nodes # name prefix used for the original attributes when they are shadowed ORIGINAL_ATTR_PREFIX = "_non_extension_" # prefix used to store the current extension name for an attribute EXTENSION_ATTR_PREFIX = "_extension_for_" # list of attribute names that are not affected by extensions, NON_EXTENSION_ATTRIBUTES = ["__module__", "__doc__", "extension_name"] # dict of dicts of dicts, contains a key for each extension, # the inner dict maps the node types to their extension node, # the innermost dict then maps attribute names to values # (e.g. a method name to the actual function) _extensions = dict() # set containing the names of the currently activated extensions _active_extensions = set() class ExtensionException(MDPException): """Base class for extension related exceptions.""" pass def _register_attribute(ext_name, node_cls, attr_name, attr_value): """Register an attribute as an extension attribute. ext_name -- String with the name of the extension. node_cls -- Node class for which the method should be registered. """ _extensions[ext_name][node_cls][attr_name] = attr_value def extension_method(ext_name, node_cls, method_name=None): """Returns a function to register a function as extension method. This function is intended to be used with the decorator syntax. :Parameters: ext_name String with the name of the extension. node_cls Node class for which the method should be registered. method_name Name of the extension method (default value is ``None``). If no value is provided then the name of the function is used. Note that it is possible to directly call other extension functions, call extension methods in other node classes or to use super in the normal way (the function will be called as a method of the node class). """ def register_function(func): _method_name = method_name if not _method_name: _method_name = func.__name__ if not ext_name in _extensions: err = ("No ExtensionNode base class has been defined for this " "extension.") raise ExtensionException(err) if not node_cls in _extensions[ext_name]: # register this node _extensions[ext_name][node_cls] = dict() _register_attribute(ext_name, node_cls, _method_name, func) return func return register_function class ExtensionNodeMetaclass(NodeMetaclass): """This is the metaclass for node extension superclasses. It takes care of registering extensions and the attributes in the extension. """ def __new__(cls, classname, bases, members): """Create new node classes and register extensions. If a concrete extension node is created then a corresponding mixin class is automatically created and registered. """ if classname == "ExtensionNode": # initial creation of ExtensionNode class return super(ExtensionNodeMetaclass, cls).__new__(cls, classname, bases, members) # check if this is a new extension definition, # in that case this node is directly derived from ExtensionNode if ExtensionNode in bases: ext_name = members["extension_name"] if not ext_name: err = "No extension name has been specified." 
raise ExtensionException(err) if ext_name not in _extensions: # creation of a new extension, add entry in dict _extensions[ext_name] = dict() else: err = ("An extension with the name '" + ext_name + "' has already been registered.") raise ExtensionException(err) # find the node that this extension node belongs to base_node_cls = None for base in bases: if type(base) is not ExtensionNodeMetaclass: if base_node_cls is None: base_node_cls = base else: err = ("Extension node derived from multiple " "normal nodes.") raise ExtensionException(err) if base_node_cls is None: # This new extension is not directly derived from another class, # so there is nothing to register (no default implementation). # We disable the doc method extension mechanism as this class # is not a node subclass and adding methods (e.g. _execute) would # cause problems. cls.DOC_METHODS = [] return super(ExtensionNodeMetaclass, cls).__new__(cls, classname, bases, members) ext_node_cls = super(ExtensionNodeMetaclass, cls).__new__( cls, classname, bases, members) ext_name = ext_node_cls.extension_name if not base_node_cls in _extensions[ext_name]: # register the base node _extensions[ext_name][base_node_cls] = dict() # Register methods from extension class hierarchy: iterate MRO in # reverse order and register all attributes starting from the # classes which are subclasses from ExtensionNode. extension_subtree = False for base in reversed(ext_node_cls.__mro__): # make sure we only inject methods in classes which have # ExtensionNode as superclass if extension_subtree and ExtensionNode in base.__mro__: for attr_name, attr_value in base.__dict__.items(): if attr_name not in NON_EXTENSION_ATTRIBUTES: # check if this attribute has not already been # extended in one of the base classes already_active = False for bb in ext_node_cls.__mro__: if (bb in _extensions[ext_name] and attr_name in _extensions[ext_name][bb] and _extensions[ext_name][bb][attr_name] == attr_value): already_active = True # only register if not yet active if not already_active: _register_attribute(ext_name, base_node_cls, attr_name, attr_value) if base == ExtensionNode: extension_subtree = True return ext_node_cls class ExtensionNode(object): """Base class for extensions nodes. A new extension node class should override the _extension_name. The concrete node implementations are then derived from this extension node class. To call an instance method from a parent class you have multiple options: - use super, but with the normal node class, e.g.: >>> super(mdp.nodes.SFA2Node, self).method() # doctest: +SKIP Here SFA2Node was given instead of the extension node class for the SFA2Node. If the extensions node class is used directly (without the extension mechanism) this can cause problems. In that case you have to be careful about the inheritance order and the effect on the MRO. - call it explicitly using the __func__ attribute [python version < 3]: >>> parent_class.method.__func__(self) # doctest: +SKIP or [python version >=3]: >>> parent_class.method(self) # doctest: +SKIP To call the original (pre-extension) method in the same class use you simply prefix the method name with '_non_extension_' (this is the value of the `ORIGINAL_ATTR_PREFIX` constant in this module). """ __metaclass__ = ExtensionNodeMetaclass # override this name in a concrete extension node base class extension_name = None def get_extensions(): """Return a dictionary currently registered extensions. 
Note that this is not a copy, so if you change anything in this dict the whole extension mechanism will be affected. If you just want the names of the available extensions use get_extensions().keys(). """ return _extensions def get_active_extensions(): """Returns a list with the names of the currently activated extensions.""" # use copy to protect the original set, also important if the return # value is used in a for-loop (see deactivate_extensions function) return list(_active_extensions) def activate_extension(extension_name, verbose=False): """Activate the extension by injecting the extension methods.""" if extension_name not in _extensions.keys(): err = "Unknown extension name: %s" + str(extension_name) raise ExtensionException(err) if extension_name in _active_extensions: if verbose: print 'Extension %s is already active!' % extension_name return _active_extensions.add(extension_name) try: for node_cls, attributes in _extensions[extension_name].items(): for attr_name, attr_value in attributes.items(): if verbose: print ("extension %s: adding %s to %s" % (extension_name, attr_name, node_cls.__name__)) ## store the original attribute / make it available ext_attr_name = EXTENSION_ATTR_PREFIX + attr_name if attr_name in dir(node_cls): if ext_attr_name in node_cls.__dict__: # two extensions override the same attribute err = ("Name collision for attribute '" + attr_name + "' between extension '" + getattr(node_cls, ext_attr_name) + "' and newly activated extension '" + extension_name + "'.") raise ExtensionException(err) # only overwrite the attribute if the extension is not # yet active on this class or its superclasses if ext_attr_name not in dir(node_cls): original_attr = getattr(node_cls, attr_name) if verbose: print ("extension %s: overwriting %s in %s" % (extension_name, attr_name, node_cls.__name__)) setattr(node_cls, ORIGINAL_ATTR_PREFIX + attr_name, original_attr) setattr(node_cls, attr_name, attr_value) # store to which extension this attribute belongs, this is also # used as a flag that this is an extension attribute setattr(node_cls, ext_attr_name, extension_name) except: # make sure that an incomplete activation is reverted deactivate_extension(extension_name) raise def deactivate_extension(extension_name, verbose=False): """Deactivate the extension by removing the injected methods.""" if extension_name not in _extensions.keys(): err = "Unknown extension name: " + str(extension_name) raise ExtensionException(err) if extension_name not in _active_extensions: return for node_cls, attributes in _extensions[extension_name].items(): for attr_name in attributes.keys(): original_name = ORIGINAL_ATTR_PREFIX + attr_name if verbose: print ("extension %s: removing %s from %s" % (extension_name, attr_name, node_cls.__name__)) if original_name in node_cls.__dict__: # restore the original attribute if verbose: print ("extension %s: restoring %s in %s" % (extension_name, attr_name, node_cls.__name__)) delattr(node_cls, attr_name) original_attr = getattr(node_cls, original_name) # Check if the attribute is defined by one of the super # classes and test if the overwritten method is not that # method, otherwise we would inject unwanted methods. # Note: '==' tests identity for .__func__ and .__self__, # but .im_class does not matter in Python 2.6. 
if all(map(lambda x:getattr(x, attr_name, None) != original_attr, node_cls.__mro__[1:])): setattr(node_cls, attr_name, original_attr) delattr(node_cls, original_name) else: try: # no original attribute to restore, so simply delete # might be missing if the activation failed delattr(node_cls, attr_name) except AttributeError: pass try: # might be missing if the activation failed delattr(node_cls, EXTENSION_ATTR_PREFIX + attr_name) except AttributeError: pass _active_extensions.remove(extension_name) def activate_extensions(extension_names, verbose=False): """Activate all the extensions for the given names. extension_names -- Sequence of extension names. """ try: for extension_name in extension_names: activate_extension(extension_name, verbose=verbose) except: # if something goes wrong deactivate all, otherwise we might be # in an inconsistent state (e.g. methods for active extensions might # have been removed) deactivate_extensions(get_active_extensions()) raise def deactivate_extensions(extension_names, verbose=False): """Deactivate all the extensions for the given names. extension_names -- Sequence of extension names. """ for extension_name in extension_names: deactivate_extension(extension_name, verbose=verbose) # TODO: add check that only extensions are deactivated that were # originally activcated by this extension (same in context manager) # also add test for this def with_extension(extension_name): """Return a wrapper function to activate and deactivate the extension. This function is intended to be used with the decorator syntax. The deactivation happens only if the extension was activated by the decorator (not if it was already active before). So this decorator ensures that the extensions is active and prevents unintended side effects. If the generated function is a generator, the extension will be in effect only when the generator object is created (that is when the function is called, but its body is not actually immediately executed). When the function body is executed (after ``next`` is called on the generator object), the extension might not be in effect anymore. Therefore, it is better to use the `extension` context manager with a generator function. """ def decorator(func): def wrapper(*args, **kwargs): # make sure that we don't deactive and extension that was # not activated by the decorator (would be a strange sideeffect) if extension_name not in get_active_extensions(): try: activate_extension(extension_name) result = func(*args, **kwargs) finally: deactivate_extension(extension_name) else: result = func(*args, **kwargs) return result # now make sure that docstring and signature match the original func_info = NodeMetaclass._function_infodict(func) return NodeMetaclass._wrap_function(wrapper, func_info) return decorator class extension(object): """Context manager for MDP extension. This allows you to use extensions using a ``with`` statement, as in: >>> with mdp.extension('extension_name'): ... # 'node' is executed with the extension activated ... node.execute(x) It is also possible to activate multiple extensions at once: >>> with mdp.extension(['ext1', 'ext2']): ... # 'node' is executed with the two extensions activated ... node.execute(x) The deactivation at the end happens only for the extensions that were activated by this context manager (not for those that were already active when the context was entered). This prevents unintended side effects. 
""" def __init__(self, ext_names): if isinstance(ext_names, str): ext_names = [ext_names] self.ext_names = ext_names self.deactivate_exts = [] def __enter__(self): already_active = get_active_extensions() self.deactivate_exts = [ext_name for ext_name in self.ext_names if ext_name not in already_active] activate_extensions(self.ext_names) def __exit__(self, type, value, traceback): deactivate_extensions(self.deactivate_exts) mdp-3.3/mdp/graph/000077500000000000000000000000001203131624700140175ustar00rootroot00000000000000mdp-3.3/mdp/graph/__init__.py000066400000000000000000000007221203131624700161310ustar00rootroot00000000000000from graph import ( Graph, GraphEdge, GraphException, GraphNode, GraphTopologicalException, is_sequence, recursive_map, recursive_reduce) __all__ = ['Graph', 'GraphEdge', 'GraphException', 'GraphNode', 'GraphTopologicalException', 'is_sequence', 'recursive_map', 'recursive_reduce'] from mdp.utils import fixup_namespace fixup_namespace(__name__, __all__, ('graph','fixup_namespace',)) mdp-3.3/mdp/graph/graph.py000066400000000000000000000313241203131624700154750ustar00rootroot00000000000000# inspired by some code by Nathan Denny (1999) # see http://www.ece.arizona.edu/~denny/python_nest/graph_lib_1.0.1.html try: # use reduce against BDFL's will even on python > 2.6 from functools import reduce except ImportError: pass class GraphException(Exception): """Base class for exception in the graph package.""" pass class GraphTopologicalException(GraphException): """Exception thrown during a topological sort if the graph is cyclical.""" pass def is_sequence(x): return isinstance(x, (list, tuple)) def recursive_map(func, seq): """Apply a function recursively on a sequence and all subsequences.""" def _func(x): if is_sequence(x): return recursive_map(func, x) else: return func(x) return map(_func, seq) def recursive_reduce(func, seq, *argv): """Apply reduce(func, seq) recursively to a sequence and all its subsequences.""" def _func(x, y): if is_sequence(y): return func(x, recursive_reduce(func, y)) else: return func(x, y) return reduce(_func, seq, *argv) class GraphNode(object): """Represent a graph node and all information attached to it.""" def __init__(self, data=None): self.data = data # edges in self.ein = [] # edges out self.eout = [] def add_edge_in(self, edge): self.ein.append(edge) def add_edge_out(self, edge): self.eout.append(edge) def remove_edge_in(self, edge): self.ein.remove(edge) def remove_edge_out(self, edge): self.eout.remove(edge) def get_edges_in(self, from_ = None): """Return a copy of the list of the entering edges. If from_ is specified, return only the nodes coming from that node.""" inedges = self.ein[:] if from_: inedges = [edge for edge in inedges if edge.head == from_] return inedges def get_edges_out(self, to_ = None): """Return a copy of the list of the outgoing edges. If to_ is specified, return only the nodes going to that node.""" outedges = self.eout[:] if to_: outedges = [edge for edge in outedges if edge.tail == to_] return outedges def get_edges(self, neighbor = None): """Return a copy of all edges. 
If neighbor is specified, return only the edges connected to that node.""" return ( self.get_edges_in(from_=neighbor) + self.get_edges_out(to_=neighbor) ) def in_degree(self): """Return the number of entering edges.""" return len(self.ein) def out_degree(self): """Return the number of outgoing edges.""" return len(self.eout) def degree(self): """Return the number of edges.""" return self.in_degree()+self.out_degree() def in_neighbors(self): """Return the neighbors down in-edges (i.e. the parents nodes).""" return map(lambda x: x.get_head(), self.ein) def out_neighbors(self): """Return the neighbors down in-edges (i.e. the parents nodes).""" return map(lambda x: x.get_tail(), self.eout) def neighbors(self): return self.in_neighbors() + self.out_neighbors() class GraphEdge(object): """Represent a graph edge and all information attached to it.""" def __init__(self, head, tail, data=None): # head node self.head = head # neighbors out self.tail = tail # arbitrary data slot self.data = data def get_ends(self): """Return the tuple (head_id, tail_id).""" return (self.head, self.tail) def get_tail(self): return self.tail def get_head(self): return self.head class Graph(object): """Represent a directed graph.""" def __init__(self): # list of nodes self.nodes = [] # list of edges self.edges = [] # node functions def add_node(self, data=None): node = GraphNode(data=data) self.nodes.append(node) return node def remove_node(self, node): # the node is not in this graph if node not in self.nodes: errstr = 'This node is not part of the graph (%s)' % node raise GraphException(errstr) # remove all edges containing this node for edge in node.get_edges(): self.remove_edge(edge) # remove the node self.nodes.remove(node) # edge functions def add_edge(self, head, tail, data=None): """Add an edge going from head to tail. head : head node tail : tail node """ # create edge edge = GraphEdge(head, tail, data=data) # add edge to head and tail node head.add_edge_out(edge) tail.add_edge_in(edge) # add to the edges dictionary self.edges.append(edge) return edge def remove_edge(self, edge): head, tail = edge.get_ends() # remove from head head.remove_edge_out(edge) # remove from tail tail.remove_edge_in(edge) # remove the edge self.edges.remove(edge) ### populate functions def add_nodes(self, data): """Add many nodes at once. data -- number of nodes to add or sequence of data values, one for each new node""" if not is_sequence(data): data = [None]*data return map(self.add_node, data) def add_tree(self, tree): """Add a tree to the graph. The tree is specified with a nested list of tuple, in a LISP-like notation. The values specified in the list become the values of the single nodes. Return an equivalent nested list with the nodes instead of the values. Example: >>> a=b=c=d=e=None >>> g.add_tree( (a, b, (c, d ,e)) ) corresponds to this tree structure, with all node values set to None: a / \ b c / \ d e """ def _add_edge(root, son): self.add_edge(root, son) return root nodes = recursive_map(self.add_node, tree) recursive_reduce(_add_edge, nodes) return nodes def add_full_connectivity(self, from_nodes, to_nodes): """Add full connectivity from a group of nodes to another one. Return a list of lists of edges, one for each node in 'from_nodes'. Example: create a two-layer graph with full connectivity. 
>>> g = Graph() >>> layer1 = g.add_nodes(10) >>> layer2 = g.add_nodes(5) >>> g.add_full_connectivity(layer1, layer2) """ edges = [] for from_ in from_nodes: edges.append(map(lambda x: self.add_edge(from_, x), to_nodes)) return edges ###### graph algorithms def topological_sort(self): """Perform a topological sort of the nodes. If the graph has a cycle, throw a GraphTopologicalException with the list of successfully ordered nodes.""" # topologically sorted list of the nodes (result) topological_list = [] # queue (fifo list) of the nodes with in_degree 0 topological_queue = [] # {node: in_degree} for the remaining nodes (those with in_degree>0) remaining_indegree = {} # init queues and lists for node in self.nodes: indegree = node.in_degree() if indegree == 0: topological_queue.append(node) else: remaining_indegree[node] = indegree # remove nodes with in_degree 0 and decrease the in_degree of their sons while len(topological_queue): # remove the first node with degree 0 node = topological_queue.pop(0) topological_list.append(node) # decrease the in_degree of the sons for son in node.out_neighbors(): remaining_indegree[son] -= 1 if remaining_indegree[son] == 0: topological_queue.append(son) # if not all nodes were covered, the graph must have a cycle # raise a GraphTopographicalException if len(topological_list)!=len(self.nodes): raise GraphTopologicalException(topological_list) return topological_list ### Depth-First sort def _dfs(self, neighbors_fct, root, visit_fct=None): # core depth-first sort function # changing the neighbors function to return the sons of a node, # its parents, or both one gets normal dfs, reverse dfs, or # dfs on the equivalent undirected graph, respectively # result list containing the nodes in Depth-First order dfs_list = [] # keep track of all already visited nodes visited_nodes = { root: None } # stack (lifo) list dfs_stack = [] dfs_stack.append(root) while len(dfs_stack): # consider the next node on the stack node = dfs_stack.pop() dfs_list.append(node) # visit the node if visit_fct != None: visit_fct(node) # add all sons to the stack (if not already visited) for son in neighbors_fct(node): if son not in visited_nodes: visited_nodes[son] = None dfs_stack.append(son) return dfs_list def dfs(self, root, visit_fct=None): """Return a list of nodes in some Depth First order starting from a root node. If defined, visit_fct is applied on each visited node. The returned list does not have to contain all nodes in the graph, but only the ones reachable from the root. """ neighbors_fct = lambda node: node.out_neighbors() return self._dfs(neighbors_fct, root, visit_fct=visit_fct) def undirected_dfs(self, root, visit_fct=None): """Perform Depth First sort. This function is identical to dfs, but the sort is performed on the equivalent undirected version of the graph.""" neighbors_fct = lambda node: node.neighbors() return self._dfs(neighbors_fct, root, visit_fct=visit_fct) ### Connected components def connected_components(self): """Return a list of lists containing the nodes of all connected components of the graph.""" visited = {} def visit_fct(node, visited=visited): visited[node] = None components = [] nodes = self.nodes for node in nodes: if node in visited: continue components.append(self.undirected_dfs(node, visit_fct)) return components def is_weakly_connected(self): """Return True if the graph is weakly connected.""" return len(self.undirected_dfs(self.nodes[0]))==len(self.nodes) ### Breadth-First Sort # BFS and DFS could be generalized to one function. 
I leave them # distinct for clarity. def _bfs(self, neighbors_fct, root, visit_fct=None): # core breadth-first sort function # changing the neighbors function to return the sons of a node, # its parents, or both one gets normal bfs, reverse bfs, or # bfs on the equivalent undirected graph, respectively # result list containing the nodes in Breadth-First order bfs_list = [] # keep track of all already visited nodes visited_nodes = { root: None } # queue (fifo) list bfs_queue = [] bfs_queue.append(root) while len(bfs_queue): # consider the next node in the queue node = bfs_queue.pop(0) bfs_list.append(node) # visit the node if visit_fct != None: visit_fct(node) # add all sons to the queue (if not already visited) for son in neighbors_fct(node): if son not in visited_nodes: visited_nodes[son] = None bfs_queue.append(son) return bfs_list def bfs(self, root, visit_fct=None): """Return a list of nodes in some Breadth First order starting from a root node. If defined, visit_fct is applied on each visited node. Note the returned list does not have to contain all nodes in the graph, but only the ones reachable from the root.""" neighbors_fct = lambda node: node.out_neighbors() return self._bfs(neighbors_fct, root, visit_fct=visit_fct) def undirected_bfs(self, root, visit_fct=None): """Perform Breadth First sort. This function is identical to bfs, but the sort is performed on the equivalent undirected version of the graph.""" neighbors_fct = lambda node: node.neighbors() return self._bfs(neighbors_fct, root, visit_fct=visit_fct) mdp-3.3/mdp/helper_funcs.py000066400000000000000000000017631203131624700157540ustar00rootroot00000000000000import mdp def pca(x, **kwargs): """Filters multidimensioanl input data through its principal components. Observations of the same variable are stored on rows, different variables are stored on columns. This is a shortcut function for the corresponding node `nodes.PCANode`. If any keyword arguments are specified, they are passed to its constructor. This is equivalent to ``mdp.nodes.PCANode(**kwargs)(x)`` """ return mdp.nodes.PCANode(**kwargs)(x) def fastica(x, **kwargs): """Perform Independent Component Analysis on input data using the FastICA algorithm by Aapo Hyvarinen. Observations of the same variable are stored on rows, different variables are stored on columns. This is a shortcut function for the corresponding node `nodes.FastICANode`. If any keyword arguments are specified, they are passed to its constructor. This is equivalent to ``mdp.nodes.FastICANode(**kwargs)(x)`` """ return mdp.nodes.FastICANode(**kwargs)(x) mdp-3.3/mdp/hinet/000077500000000000000000000000001203131624700140255ustar00rootroot00000000000000mdp-3.3/mdp/hinet/__init__.py000066400000000000000000000056051203131624700161440ustar00rootroot00000000000000"""Hierarchical Networks Package. This package makes it possible to construct graph-like Node structures, especially hierarchical networks. The most important building block is the new Layer node, which works as an horizontal version of flow. It encapsulates a list of Nodes, which are trained and executed in parallel. For example we can take two Nodes with 100 dimensional input to construct a layer with a 200 dimensional input. The first half of the input data is automatically fed into the first Node, the second half into the second Node. Since one might also want to use Flows (i.e. vertical stacks of Nodes) in a Layer, a wrapper class for Nodes is provided. The FlowNode class wraps any Flow into a Node, which can then be used like any other Node. 
Together with the Layer this allows you to combine Nodes both horizontally and vertically. Thereby one can in principle realize any feed-forward network topology. For a hierarchical networks one might want to route the different parts of the data to different Nodes in a Layer in complicated ways. This is done by a Switchboard that handles all the routing. Defining the routing manually can be quite tedious, so one can derive subclasses for special routing situations. One such subclass for 2d image data is provided. It maps the data according to rectangular overlapping 2d input areas. One can then feed the output into a Layer and each Node will get the correct input. """ from flownode import FlowNode from layer import Layer, SameInputLayer, CloneLayer from switchboard import ( Switchboard, SwitchboardException, MeanInverseSwitchboard, ChannelSwitchboard, Rectangular2dSwitchboard, Rectangular2dSwitchboardException, DoubleRect2dSwitchboard, DoubleRect2dSwitchboardException, DoubleRhomb2dSwitchboard, DoubleRhomb2dSwitchboardException ) from htmlvisitor import ( HiNetHTMLVisitor, HiNetXHTMLVisitor, NewlineWriteFile, show_flow ) from switchboard_factory import ( get_2d_image_switchboard, FactoryExtensionChannelSwitchboard, FactoryRectangular2dSwitchboard, FactoryDoubleRect2dSwitchboard, FactoryDoubleRhomb2dSwitchboard ) __all__ = ['FlowNode', 'Layer', 'SameInputLayer', 'CloneLayer', 'Switchboard', 'SwitchboardException', 'ChannelSwitchboard', 'Rectangular2dSwitchboard', 'Rectangular2dSwitchboardException', 'DoubleRect2dSwitchboard', 'DoubleRect2dSwitchboardException', 'DoubleRhomb2dSwitchboard', 'DoubleRhomb2dSwitchboardException', 'HiNetHTMLVisitor', 'HiNetXHTMLVisitor', 'NewlineWriteFile', 'show_flow', 'get_2d_image_switchboard' ] from mdp.utils import fixup_namespace fixup_namespace(__name__, __all__, ('flownode', 'layer', 'switchboard', 'hinet_Visitor', 'switchboard_factory', 'utils', 'fixup_namespace' )) mdp-3.3/mdp/hinet/flownode.py000066400000000000000000000205761203131624700162260ustar00rootroot00000000000000""" Module for the FlowNode class. """ import mdp import warnings as _warnings import copy as _copy class FlowNode(mdp.Node): """FlowNode wraps a Flow of Nodes into a single Node. This is handy if you want to use a flow where a Node is required. Additional args and kwargs for train and execute are supported. Note that for nodes in the internal flow the intermediate training phases will generally be closed, e.g. a CheckpointSaveFunction should not expect these training phases to be left open. All the read-only container slots are supported and are forwarded to the internal flow. """ def __init__(self, flow, input_dim=None, output_dim=None, dtype=None): """Wrap the given flow into this node. Pretrained nodes are allowed, but the internal flow should not be modified after the FlowNode was created (this will cause problems if the training phase structure of the internal nodes changes). If the node dimensions and dtype are not specified, they will be extracted from the internal nodes (late dimension setting is also supported). flow can have crash recovery enabled, but there is no special support for it. 
""" self._flow = flow # set properties if needed: if input_dim is None: input_dim = self._flow[0].input_dim if output_dim is None: output_dim = self._flow[-1].output_dim if dtype is None: dtype = self._flow[-1].dtype # store which nodes are pretrained up to what phase self._pretrained_phase = [node.get_current_train_phase() for node in flow] # check if all the nodes are already fully trained train_len = 0 for i_node, node in enumerate(self._flow): if node.is_trainable(): train_len += (len(node._get_train_seq()) - self._pretrained_phase[i_node]) if train_len: self._is_trainable = True else: self._is_trainable = False # remaining standard node initialisation super(FlowNode, self).__init__(input_dim=input_dim, output_dim=output_dim, dtype=dtype) @property def flow(self): """Read-only internal flow property. In general the internal flow should not be modified (see __init__ for more details). """ return self._flow def _set_input_dim(self, n): # try setting the input_dim of the first node self._flow[0].input_dim = n # let a consistency check run self._flow._check_nodes_consistency() # if we didn't fail here, go on self._input_dim = n def _set_output_dim(self, n): last_node = self._flow[-1] if len(self._flow) == 1: self._flow[-1].output_dim = n elif last_node.output_dim is None: self._fix_nodes_dimensions() # check if it worked if last_node.output_dim is None: if last_node.input_dim is None: err = ("FlowNode can't set the dimension of the last " "node, because its input_dim is undefined (" "which could lead to inconsistent dimensions).") raise mdp.InconsistentDimException(err) # now we can safely try to set the dimension last_node.output_dim = n # the last_node dim is now set if n != last_node.output_dim: err = (("FlowNode can't be set to output_dim %d" % n) + " because the last internal node already has " + "output_dim %d." % last_node.output_dim) raise mdp.InconsistentDimException(err) self._output_dim = n def _fix_nodes_dimensions(self): """Try to fix the dimensions of the internal nodes.""" if len(self._flow) > 1: prev_node = self._flow[0] for node in self._flow[1:]: if node.input_dim is None: node.input_dim = prev_node.output_dim prev_node = node self._flow._check_nodes_consistency() if self._flow[-1].output_dim is not None: # additional checks are performed here self.output_dim = self._flow[-1].output_dim def _set_dtype(self, t): # dtype can not be set for sure in arbitrary flows # but here we want to be sure that FlowNode *can* # offer a dtype that is consistent for node in self._flow: node.dtype = t self._dtype = t def _get_supported_dtypes(self): # we support the minimal common dtype set types = set(mdp.utils.get_dtypes('All')) for node in self._flow: types = types.intersection(node.get_supported_dtypes()) return list(types) def is_trainable(self): return self._is_trainable def is_invertible(self): return all(node.is_invertible() for node in self._flow) def _get_train_seq(self): """Return a training sequence containing all training phases.""" def get_train_function(_i_node, _node): # This internal function is needed to channel the data through # the nodes in front of the current nodes. 
# using nested scopes here instead of default args, see pep-0227 def _train(x, *args, **kwargs): if i_node > 0: _node.train(self._flow.execute(x, nodenr=_i_node-1), *args, **kwargs) else: _node.train(x, *args, **kwargs) return _train train_seq = [] for i_node, node in enumerate(self._flow): if node.is_trainable(): remaining_len = (len(node._get_train_seq()) - self._pretrained_phase[i_node]) train_seq += ([(get_train_function(i_node, node), node.stop_training)] * remaining_len) # try fix the dimension of the internal nodes and the FlowNode # after the last node has been trained def _get_stop_training_wrapper(self, node, func): def _stop_training_wrapper(*args, **kwargs): func(*args, **kwargs) self._fix_nodes_dimensions() return _stop_training_wrapper if train_seq: train_seq[-1] = (train_seq[-1][0], _get_stop_training_wrapper(self, self._flow[-1], train_seq[-1][1])) return train_seq def _execute(self, x, *args, **kwargs): return self._flow.execute(x, *args, **kwargs) def _inverse(self, x): return self._flow.inverse(x) def copy(self, protocol=None): """Return a copy of this node. The copy call is delegated to the internal node, which allows the use of custom copy methods for special nodes. The protocol parameter should not be used. """ if protocol is not None: _warnings.warn("protocol parameter to copy() is ignored", mdp.MDPDeprecationWarning, stacklevel=2) # Warning: If we create a new FlowNode with the copied internal # nodes then it will differ from the original one if some nodes # were trained in the meantime. Especially _get_train_seq would # return a shorter list in that case, possibly breaking stuff # outside of this FlowNode (e.g. if it is enclosed by another # FlowNode the _train_phase of this node will no longer fit the # result of _get_train_seq). # # copy the nodes by delegation old_nodes = self._flow[:] new_nodes = [node.copy() for node in old_nodes] # now copy the rest of this flownode via deepcopy self._flow.flow = None new_flownode = _copy.deepcopy(self) new_flownode._flow.flow = new_nodes self._flow.flow = old_nodes return new_flownode ## container methods ## def __len__(self): return len(self._flow) def __getitem__(self, key): return self._flow.__getitem__(key) def __contains__(self, item): return self._flow.__contains__(item) def __iter__(self): return self._flow.__iter__() mdp-3.3/mdp/hinet/hinet.css000066400000000000000000000021651203131624700156520ustar00rootroot00000000000000/* CSS for hinet representation. Warning: In nested tables the top table css overwrites the nested css if they are specified like 'table.flow td' (i.e. all td's below this table). So be careful about hiding/overriding nested td's. The tables "nodestruct" are used to separate the dimension values from the actual node text. 
*/ table.flow { border-collapse: separate; padding: 3px; border: 3px double; border-color: #003399; } table.flow table { width: 100%; padding: 0 3px; border-color: #003399; } table.flow td { padding: 1px; border-style: none; } table.layer { border-collapse: separate; border: 2px dashed; } table.flownode { border-collapse: separate; border: 1px dotted; } table.nodestruct { border-style: none; } table.node { border-collapse: separate; border: 1px solid; border-spacing: 2px; } td.nodename { font-size: normal; text-align: center; } td.nodeparams { font-size: xx-small; text-align: left; } td.dim { font-size: xx-small; text-align: center; color: #008ADC; } span.memorycolor { color: #CCBB77; } mdp-3.3/mdp/hinet/htmlvisitor.py000066400000000000000000000276641203131624700170020ustar00rootroot00000000000000""" Module to convert a flow into an HTML representation. This is especially useful for hinet structures. The code uses the visitor pattern to reach and convert all the nodes in a flow. """ from __future__ import with_statement import tempfile import os import webbrowser import cStringIO as StringIO import mdp import switchboard # TODO: use
   
for whitespaces? class NewlineWriteFile(object): """Decorator for file-like object. Adds a newline character to each line written with write(). """ def __init__(self, file_obj): """Wrap the given file-like object.""" self.file_obj = file_obj def write(self, str_obj): """Write a string to the file object and append a newline character.""" self.file_obj.write(str_obj + "\n") # forward all other methods def __getattr__(self, attr): return getattr(self.file_obj, attr) class HiNetHTMLVisitor(object): """Class to convert a hinet flow to HTML. This class implements the visitor pattern. It relies on the 'html' extension to get custom representations for normal node classes. """ def __init__(self, html_file, show_size=False): """Initialize the HMTL converter. html_file -- File object into which the representation is written (only the write method is used). show_size -- Show the approximate memory footprint of all nodes. """ self.show_size = show_size self._file = NewlineWriteFile(html_file) @mdp.with_extension("html") def convert_flow(self, flow): """Convert the flow into HTML and write it into the internal file.""" f = self._file self._open_node_env(flow, "flow") for node in flow: f.write('
') self._close_node_env(flow, "flow") _CSS_FILENAME = "hinet.css" @classmethod def hinet_css(cls): """Return the standard CSS string. The CSS should be embedded in the final HTML file. """ css_filename = os.path.join(os.path.split(__file__)[0], cls._CSS_FILENAME) with open(css_filename, 'r') as css_file: css = css_file.read() return css def _visit_node(self, node): """Translate a node and return the translation. Depending on the type of the node this can be delegated to more specific methods. """ if hasattr(node, "flow"): self._visit_flownode(node) elif isinstance(node, mdp.hinet.CloneLayer): self._visit_clonelayer(node) elif isinstance(node, mdp.hinet.SameInputLayer): self._visit_sameinputlayer(node) elif isinstance(node, mdp.hinet.Layer): self._visit_layer(node) else: self._visit_standard_node(node) def _visit_flownode(self, flownode): f = self._file self._open_node_env(flownode, "flownode") for node in flownode.flow: f.write('') self._close_node_env(flownode, "flownode") def _visit_layer(self, layer): f = self._file self._open_node_env(layer, "layer") f.write('') for node in layer: f.write('') f.write('') self._close_node_env(layer) def _visit_clonelayer(self, layer): f = self._file self._open_node_env(layer, "layer") f.write('') f.write('') self._close_node_env(layer) def _visit_sameinputlayer(self, layer): f = self._file self._open_node_env(layer, "layer") f.write('' % (len(layer), str(layer))) f.write('') for node in layer: f.write('') f.write('') self._close_node_env(layer) def _visit_standard_node(self, node): f = self._file self._open_node_env(node) f.write('') f.write('') self._close_node_env(node) # helper methods for decoration def _open_node_env(self, node, type_id="node"): """Open the HTML environment for the node internals. node -- The node itself. type_id -- The id string as used in the CSS. """ self._file.write('
') self._visit_node(node) f.write('
') self._visit_node(node) f.write('
') self._visit_node(node) f.write('
') f.write(str(layer) + '

') f.write('%d repetitions' % len(layer)) f.write('
') self._visit_node(layer.node) f.write('
%s
') self._visit_node(node) f.write('
') f.write(str(node)) f.write('
') f.write(node.html_representation()) f.write('
' % type_id) self._write_node_header(node, type_id) def _write_node_header(self, node, type_id="node"): """Write the header content for the node into the HTML file.""" f = self._file if not (type_id=="flow" or type_id=="flownode"): f.write('' % str(node.input_dim)) f.write('') if not (type_id=="flow" or type_id=="flownode"): f.write('') f.write('
in-dim: %s
') f.write('') def _close_node_env(self, node, type_id="node"): """Close the HTML environment for the node internals. node -- The node itself. type_id -- The id string as used in the CSS. """ f = self._file f.write('
') f.write('
out-dim: %s' % str(node.output_dim)) if self.show_size: f.write('  size: %s' % mdp.utils.get_node_size_str(node)) f.write('
') class HTMLExtensionNode(mdp.ExtensionNode, mdp.Node): """Extension node for custom HTML representations of individual nodes. This extension works together with the HiNetHTMLVisitor to allow the polymorphic generation of representations for node classes. """ extension_name = "html" def html_representation(self): """Return an HTML representation of the node.""" html_repr = self._html_representation() if type(html_repr) is str: return html_repr else: return "
\n".join(html_repr) # override this method def _html_representation(self): """Return either the final HTML code or a list of HTML lines.""" return "" @mdp.extension_method("html", switchboard.Rectangular2dSwitchboard, "_html_representation") def _rect2d_switchoard_html(self): lines = ['rec. field size (in channels): %d x %d = %d' % (self.field_channels_xy[0], self.field_channels_xy[1], self.field_channels_xy[0] * self.field_channels_xy[1]), '# of rec. fields (out channels): %d x %d = %d' % (self.out_channels_xy[0], self.out_channels_xy[1], self.output_channels), 'rec. field distances (in channels): ' + str(self.field_spacing_xy), 'channel width: %d' % self.in_channel_dim] if not all(self.unused_channels_xy): lines.append('unused channels: ' + str(self.unused_channels_xy)) return lines @mdp.extension_method("html", switchboard.DoubleRect2dSwitchboard, "_html_representation") def _double_rect2d_switchoard_html(self): lines = ['rec. field size (in channels): %d x %d = %d' % (self.field_channels_xy[0], self.field_channels_xy[1], self.field_channels_xy[0] * self.field_channels_xy[1]), '# of long row rec. fields (out channels): ' + str(self.long_out_channels_xy), 'total number of receptive fields: %d' % self.output_channels, 'channel width: %d' % self.in_channel_dim] if self.x_unused_channels or self.y_unused_channels: lines.append('unused channels: ' + str(self.unused_channels_xy)) return lines @mdp.extension_method("html", switchboard.DoubleRhomb2dSwitchboard, "_html_representation") def _double_rhomb2d_switchoard_html(self): lines = ['rec. field size: %d' % self.diag_field_channels, '# of rec. fields (out channels): %d x %d = %d' % (self.out_channels_xy[0], self.out_channels_xy[1], self.output_channels), 'channel width: %d' % self.in_channel_dim] return lines @mdp.extension_method("html", mdp.nodes.SFA2Node, "_html_representation") def _sfa_html(self): return 'expansion dim: ' + str(self._expnode.output_dim) @mdp.extension_method("html", mdp.nodes.NormalNoiseNode, "_html_representation") def _noise_html(self): return ['noise level: ' + str(self.noise_args[1]), 'noise offset: ' + str(self.noise_args[0])] @mdp.extension_method("html", mdp.nodes.CutoffNode, "_html_representation") def _cutoff_html(self): return ['lower bound: ' + str(self.lower_bound), 'upper bound: ' + str(self.upper_bound)] @mdp.extension_method("html", mdp.nodes.HistogramNode, "_html_representation") def _hist_html(self): return 'history data fraction: ' + str(self.hist_fraction) @mdp.extension_method("html", mdp.nodes.AdaptiveCutoffNode, "_html_representation") def _adap_html(self): return ['lower cutoff fraction: ' + str(self.lower_cutoff_fraction), 'upper cutoff fraction: ' + str(self.upper_cutoff_fraction), 'history data fraction: ' + str(self.hist_fraction)] class HiNetXHTMLVisitor(HiNetHTMLVisitor): """Modified converter to create valid XHTML.""" def convert_flow(self, flow): """Convert the flow into XHTML and write it into the internal file.""" # first write the normal HTML into a buffer orig_file = self._file html_file = StringIO.StringIO() self._file = NewlineWriteFile(html_file) super(HiNetXHTMLVisitor, self).convert_flow(flow) self._file = orig_file # now convert it to XHTML html_code = html_file.getvalue() html_code = html_code.replace('
', '
') html_code = html_code.replace(' ', ' ') self._file.write(html_code) ## Helper functions ## def show_flow(flow, filename=None, title="MDP flow display", show_size=False, browser_open=True): """Write a flow into a HTML file, open it in the browser and return the file name. flow -- The flow to be shown. filename -- Filename for the HTML file to be created. If None a temporary file is created. title -- Title for the HTML file. show_size -- Show the approximate memory footprint of all nodes. browser_open -- If True (default value) then the slideshow file is automatically opened in a webbrowser. """ if filename is None: (fd, filename) = tempfile.mkstemp(suffix=".html", prefix="MDP_") html_file = os.fdopen(fd, 'w') else: html_file = open(filename, 'w') html_file.write('\n\n%s\n' % title) html_file.write('\n\n\n') html_file.write('

%s

\n' % title) explanation = '(data flows from top to bottom)' html_file.write('%s\n' % explanation) html_file.write('


\n') converter = mdp.hinet.HiNetHTMLVisitor(html_file, show_size=show_size) converter.convert_flow(flow=flow) html_file.write('\n') html_file.close() if browser_open: webbrowser.open(os.path.abspath(filename)) return filename mdp-3.3/mdp/hinet/layer.py000066400000000000000000000305031203131624700155140ustar00rootroot00000000000000""" Module for Layers. Note that additional args and kwargs for train or execute are currently not supported. """ import mdp from mdp import numx # TODO: maybe turn self.nodes into a read only property with self._nodes # TODO: Find a better way to deal with additional args for train/execute? # Maybe split them by default, but can be disabled via switch? class Layer(mdp.Node): """Layers are nodes which consist of multiple horizontally parallel nodes. The incoming data is split up according to the dimensions of the internal nodes. For example if the first node has an input_dim of 50 and the second node 100 then the layer will have an input_dim of 150. The first node gets x[:,:50], the second one x[:,50:]. Any additional arguments are forwarded unaltered to each node. Warning: This might change in the next release (2.5). Since they are nodes themselves layers can be stacked in a flow (e.g. to build a layered network). If one would like to use flows instead of nodes inside of a layer one can use a FlowNode. """ def __init__(self, nodes, dtype=None): """Setup the layer with the given list of nodes. The input and output dimensions for the nodes must be already set (the output dimensions for simplicity reasons). The training phases for the nodes are allowed to differ. Keyword arguments: nodes -- List of the nodes to be used. """ self.nodes = nodes # check nodes properties and get the dtype dtype = self._check_props(dtype) # calculate the the dimensions self.node_input_dims = numx.zeros(len(self.nodes)) input_dim = 0 for index, node in enumerate(nodes): input_dim += node.input_dim self.node_input_dims[index] = node.input_dim output_dim = self._get_output_dim_from_nodes() super(Layer, self).__init__(input_dim=input_dim, output_dim=output_dim, dtype=dtype) def _get_output_dim_from_nodes(self): """Calculate the output_dim from the nodes and return it. If the output_dim of a node is not set the None is returned. """ output_dim = 0 for node in self.nodes: if node.output_dim is not None: output_dim += node.output_dim else: return None return output_dim def _check_props(self, dtype): """Check the compatibility of the properties of the internal nodes. Return the found dtype and check the dimensions. dtype -- The specified layer dtype. """ dtype_list = [] # the dtypes for all the nodes for i, node in enumerate(self.nodes): # input_dim for each node must be set if node.input_dim is None: err = ("input_dim must be set for every node. " + "Node #%d (%s) does not comply." % (i, node)) raise mdp.NodeException(err) if node.dtype is not None: dtype_list.append(node.dtype) # check that the dtype is None or the same for every node nodes_dtype = None nodes_dtypes = set(dtype_list) nodes_dtypes.discard(None) if len(nodes_dtypes) > 1: err = ("All nodes must have the same dtype (found: %s)." 
% nodes_dtypes) raise mdp.NodeException(err) elif len(nodes_dtypes) == 1: nodes_dtype = list(nodes_dtypes)[0] # check that the nodes dtype matches the specified dtype if nodes_dtype and dtype: if not numx.dtype(nodes_dtype) == numx.dtype(dtype): err = ("Cannot set dtype to %s: " % numx.dtype(nodes_dtype).name + "an internal node requires %s" % numx.dtype(dtype).name) raise mdp.NodeException(err) elif nodes_dtype and not dtype: dtype = nodes_dtype return dtype def _set_dtype(self, t): for node in self.nodes: node.dtype = t self._dtype = t def _get_supported_dtypes(self): # we supported the minimal common dtype set types = set(mdp.utils.get_dtypes('All')) for node in self.nodes: types = types.intersection(node.get_supported_dtypes()) return list(types) def is_trainable(self): return any(node.is_trainable() for node in self.nodes) def is_invertible(self): return all(node.is_invertible() for node in self.nodes) def _get_train_seq(self): """Return the train sequence. The length is set by the node with maximum length. """ max_train_length = 0 for node in self.nodes: node_length = len(node._get_train_seq()) if node_length > max_train_length: max_train_length = node_length return ([[self._train, self._stop_training]] * max_train_length) def _train(self, x, *args, **kwargs): """Perform single training step by training the internal nodes.""" start_index = 0 stop_index = 0 for node in self.nodes: start_index = stop_index stop_index += node.input_dim if node.is_training(): node.train(x[:, start_index : stop_index], *args, **kwargs) def _stop_training(self, *args, **kwargs): """Stop training of the internal nodes.""" for node in self.nodes: if node.is_training(): node.stop_training(*args, **kwargs) if self.output_dim is None: self.output_dim = self._get_output_dim_from_nodes() def _pre_execution_checks(self, x): """Make sure that output_dim is set and then perform normal checks.""" if self.output_dim is None: # first make sure that the output_dim is set for all nodes in_start = 0 in_stop = 0 for node in self.nodes: in_start = in_stop in_stop += node.input_dim node._pre_execution_checks(x[:,in_start:in_stop]) self.output_dim = self._get_output_dim_from_nodes() if self.output_dim is None: err = "output_dim must be set at this point for all nodes" raise mdp.NodeException(err) super(Layer, self)._pre_execution_checks(x) def _execute(self, x, *args, **kwargs): """Process the data through the internal nodes.""" in_start = 0 in_stop = 0 out_start = 0 out_stop = 0 y = None for node in self.nodes: out_start = out_stop out_stop += node.output_dim in_start = in_stop in_stop += node.input_dim if y is None: node_y = node.execute(x[:,in_start:in_stop], *args, **kwargs) y = numx.zeros([node_y.shape[0], self.output_dim], dtype=node_y.dtype) y[:,out_start:out_stop] = node_y else: y[:,out_start:out_stop] = node.execute(x[:,in_start:in_stop], *args, **kwargs) return y def _inverse(self, x, *args, **kwargs): """Combine the inverse of all the internal nodes.""" in_start = 0 in_stop = 0 out_start = 0 out_stop = 0 y = None for node in self.nodes: # compared with execute, input and output are switched out_start = out_stop out_stop += node.input_dim in_start = in_stop in_stop += node.output_dim if y is None: node_y = node.inverse(x[:,in_start:in_stop], *args, **kwargs) y = numx.zeros([node_y.shape[0], self.input_dim], dtype=node_y.dtype) y[:,out_start:out_stop] = node_y else: y[:,out_start:out_stop] = node.inverse(x[:,in_start:in_stop], *args, **kwargs) return y ## container methods ## def __len__(self): return len(self.nodes) 
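    # A minimal usage sketch of the splitting behaviour described in the class
    # docstring (the two PCANode instances and their dimensions are made-up
    # example values): two nodes with input_dim 50 and 100 form a layer with
    # input_dim 150; the first node receives x[:, :50], the second x[:, 50:]:
    #
    # >>> node1 = mdp.nodes.PCANode(input_dim=50, output_dim=5)    # doctest: +SKIP
    # >>> node2 = mdp.nodes.PCANode(input_dim=100, output_dim=10)  # doctest: +SKIP
    # >>> layer = mdp.hinet.Layer([node1, node2])                  # doctest: +SKIP
    # >>> (layer.input_dim, layer.output_dim)                      # doctest: +SKIP
    # (150, 15)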
def __getitem__(self, key): return self.nodes.__getitem__(key) def __contains__(self, item): return self.nodes.__contains__(item) def __iter__(self): return self.nodes.__iter__() class CloneLayer(Layer): """Layer with a single node instance that is used multiple times. The same single node instance is used to build the layer, so CloneLayer(node, 3) executes in the same way as Layer([node]*3). But Layer([node]*3) would have a problem when closing a training phase, so one has to use CloneLayer. A CloneLayer can be used for weight sharing in the training phase. It might also be useful for reducing the memory footprint during the execution phase (since only a single node instance is needed). """ def __init__(self, node, n_nodes=1, dtype=None): """Set up the layer with clones of the given node. Keyword arguments: node -- Node to be cloned. n_nodes -- Number of repetitions/clones of the given node. """ super(CloneLayer, self).__init__((node,) * n_nodes, dtype=dtype) self.node = node # attribute for convenience def _stop_training(self, *args, **kwargs): """Stop training of the internal node.""" if self.node.is_training(): self.node.stop_training(*args, **kwargs) if self.output_dim is None: self.output_dim = self._get_output_dim_from_nodes() class SameInputLayer(Layer): """SameInputLayer is a layer where all nodes receive the full input. So instead of splitting the input according to node dimensions, all nodes receive the complete input data. """ def __init__(self, nodes, dtype=None): """Set up the layer with the given list of nodes. The input dimensions for the nodes must all be equal, the output dimensions can differ (but must be set as well for simplicity reasons). Keyword arguments: nodes -- List of the nodes to be used. """ self.nodes = nodes # check node properties and get the dtype dtype = self._check_props(dtype) # check that the input dimensions are all the same input_dim = self.nodes[0].input_dim for node in self.nodes: if not node.input_dim == input_dim: err = "The nodes have different input dimensions."
raise mdp.NodeException(err) output_dim = self._get_output_dim_from_nodes() # intentionally use MRO above Layer, not SameInputLayer super(Layer, self).__init__(input_dim=input_dim, output_dim=output_dim, dtype=dtype) @staticmethod def is_invertible(): return False def _train(self, x, *args, **kwargs): """Perform single training step by training the internal nodes.""" for node in self.nodes: if node.is_training(): node.train(x, *args, **kwargs) def _pre_execution_checks(self, x): """Make sure that output_dim is set and then perform normal checks.""" if self.output_dim is None: # first make sure that the output_dim is set for all nodes for node in self.nodes: node._pre_execution_checks(x) self.output_dim = self._get_output_dim_from_nodes() if self.output_dim is None: err = "output_dim must be set at this point for all nodes" raise mdp.NodeException(err) # intentionally use MRO above Layer, not SameInputLayer super(Layer, self)._pre_execution_checks(x) def _execute(self, x, *args, **kwargs): """Process the data through the internal nodes.""" out_start = 0 out_stop = 0 y = None for node in self.nodes: out_start = out_stop out_stop += node.output_dim if y is None: node_y = node.execute(x, *args, **kwargs) y = numx.zeros([node_y.shape[0], self.output_dim], dtype=node_y.dtype) y[:,out_start:out_stop] = node_y else: y[:,out_start:out_stop] = node.execute(x, *args, **kwargs) return y mdp-3.3/mdp/hinet/switchboard.py000066400000000000000000000744011203131624700167160ustar00rootroot00000000000000""" Module for Switchboards. Note that additional args and kwargs for train or execute are currently not supported. """ import mdp from mdp import numx class SwitchboardException(mdp.NodeException): """Exception for routing problems in the Switchboard class.""" pass # TODO: deal with input_dim, output_dim and dtype correctly, # like in IdentityNode class Switchboard(mdp.Node): """Does the routing associated with the connections between layers. It may be directly used as a layer/node, routing all the data at once. If the routing/mapping is not injective the processed data may be quite large and probably contains many redundant copies of the input data. So in this case one may instead use nodes for individual output channels and put each in a MultiNode. SwitchboardLayer is the most general version of a switchboard layer, since there is no imposed rule for the connection topology. For practical applications one should often derive more specialized classes. """ def __init__(self, input_dim, connections): """Create a generic switchboard. The input and output dimension as well as dtype have to be fixed at initialization time. Keyword arguments: input_dim -- Dimension of the input data (number of connections). connections -- 1d array or sequence with an entry for each output connection, containing the corresponding index of the input connection. """ # check connections for inconsistencies if len(connections) == 0: err = "Received empty connection list."
raise SwitchboardException(err) if numx.nanmax(connections) >= input_dim: err = ("One or more switchboard connection " "indices exceed the input dimension.") raise SwitchboardException(err) # checks passed self.connections = numx.array(connections) output_dim = len(connections) super(Switchboard, self).__init__(input_dim=input_dim, output_dim=output_dim) # try to invert connections if (self.input_dim == self.output_dim and len(numx.unique(self.connections)) == self.input_dim): self.inverse_connections = numx.argsort(self.connections) else: self.inverse_connections = None def _execute(self, x): return x[:, self.connections] @staticmethod def is_trainable(): return False def is_invertible(self): if self.inverse_connections is None: return False else: return True def _inverse(self, x): if self.inverse_connections is None: raise SwitchboardException("Connections are not invertible.") else: return x[:, self.inverse_connections] def _get_supported_dtypes(self): """Return the list of dtypes supported by this node.""" return (mdp.utils.get_dtypes('AllFloat') + mdp.utils.get_dtypes('AllInteger') + mdp.utils.get_dtypes('Character')) class MeanInverseSwitchboard(Switchboard): """Variant of Switchboard with modified inverse. If the switchboard mapping is not injective, then the mean values are used for the inverse. Inputs that are discarded in the mapping are set to zero. You can use this class as a mixin for other switchboard classes. """ def _inverse(self, x): """Take the mean of overlapping values.""" n_y_cons = numx.bincount(self.connections) # n. connections to y_i y_cons = numx.argsort(self.connections) # x indices for y_i y = numx.zeros((len(x), self.input_dim)) i_x_counter = 0 # counter for processed x indices i_y = 0 # current y index while True: n_cons = n_y_cons[i_y] if n_cons > 0: y[:,i_y] = numx.sum(x[:,y_cons[i_x_counter: i_x_counter + n_cons]], axis=1) / n_cons i_x_counter += n_cons if i_x_counter >= self.output_dim: break i_y += 1 return y @staticmethod def is_invertible(): return True class ChannelSwitchboard(Switchboard): """Base class for Switchboards in which the data is bundled into channels. The dimensions of the input / output channels are constant. public attributes (in addition to inherited attributes): out_channel_dim in_channel_dim output_channels """ def __init__(self, input_dim, connections, out_channel_dim, in_channel_dim=1): """Initialize the switchboard. connections -- Connection sequence like for a standard switchboard (the indices do not correspond to whole channels, but single connections). out_channel_dim -- Number of connections per output channel. in_channel_dim -- Number of connections per input channel (default 1). All the components of an input channel are treated equally by the switchboard (i.e., they are routed to the same output channel). """ super(ChannelSwitchboard, self).__init__(input_dim, connections) # perform checks if self.output_dim % out_channel_dim: err = ("Output dim %d is not multiple of out_channel_dim %d." % (self.output_dim, out_channel_dim)) raise SwitchboardException(err) if input_dim % in_channel_dim: err = ("Input dim %d is not multiple of in_channel_dim %d." % (self.input_dim, in_channel_dim)) raise SwitchboardException(err) # finalize initialization self.out_channel_dim = out_channel_dim self.in_channel_dim = in_channel_dim self.output_channels = self.output_dim // out_channel_dim self.input_channels = self.input_dim // in_channel_dim def get_out_channel_input(self, channel): """Return the input connections for the given channel index. 
channel -- index of the requested channel (starting at 0) """ index = channel * self.out_channel_dim return self.connections[index : index+self.out_channel_dim] def get_out_channel_node(self, channel): """Return a Switchboard that does the routing for a single output channel. channel -- index of the requested channel (starting at 0) """ return Switchboard(self.input_dim, self.get_out_channel_input(channel)) def get_out_channels_input_channels(self, channels): """Return array of input channel indices for the given output channels. channels -- Sequence of the requested output channels or a single channel index (i.e. a number). The returned array contains the indices of all input channels which are connected to at least one of the given output channels. """ if isinstance(channels, int): channels = [channels] # create boolean arry to determine with active inputs channels_input = self.connections.reshape((-1, self.out_channel_dim)) channels_input = channels_input[channels].reshape(-1) covered = numx.zeros(self.input_dim, dtype="bool") covered[channels_input] = True # reshape to perform logical OR over the input channels covered = covered.reshape((-1, self.in_channel_dim)) covered = covered.sum(axis=1, dtype=bool) return covered.nonzero()[0] def to_2tuple(value): """Return value or (value, value) if value is not a tuple.""" if isinstance(value, tuple): return value if isinstance(value, list): return tuple(value) return (value, value) class Rectangular2dSwitchboardException(SwitchboardException): """Exception for routing problems in the Rectangular2dSwitchboard class.""" pass class Rectangular2dSwitchboard(ChannelSwitchboard): """Switchboard for a 2-dimensional topology. This is a specialized version of SwitchboardLayer that makes it easy to implement connection topologies which are based on a 2-dimensional network layers. The input connections are assumed to be grouped into so called channels, which are considered as lying in a two dimensional rectangular plane. Each output channel corresponds to a 2d rectangular field in the input plane. The fields can overlap. The coordinates follow the standard image convention (see the above CoordinateTranslator class). public attributes (in addition to init arguments and inherited attributes): unused_channels_xy out_channels_xy """ def __init__(self, in_channels_xy, field_channels_xy, field_spacing_xy=1, in_channel_dim=1, ignore_cover=False): """Calculate the connections. Keyword arguments: in_channels_xy -- 2-Tuple with number of input channels in the x- and y-direction (or a single number for both). This has to be specified, since the actual input is only one 1d array. field_channels_xy -- 2-Tuple with number of channels in each field in the x- and y-direction (or a single number for both). field_spacing_xy -- 2-Tuple with offset between two fields in the x- and y-direction (or a single number for both). in_channel_dim -- Number of connections per input channel. ignore_cover -- Boolean value defines if an Rectangular2dSwitchboardException is raised when the fields do not cover all input channels. Set this to True if you are willing to risk loosing input channels at the border. 
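Example (a minimal illustrative sketch using only the arguments described
above): a 4x4 grid of input channels covered by 2x2 fields with a spacing
of 2 channels gives a 2x2 grid of non-overlapping output channels:

    >>> sb = Rectangular2dSwitchboard(in_channels_xy=4,
    ...                               field_channels_xy=2,
    ...                               field_spacing_xy=2)
    >>> sb.out_channels_xy
    (2, 2)
    >>> sb.output_channels
    4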
""" in_channels_xy = to_2tuple(in_channels_xy) field_channels_xy = to_2tuple(field_channels_xy) field_spacing_xy = to_2tuple(field_spacing_xy) self.in_channels_xy = in_channels_xy self.field_channels_xy = field_channels_xy self.field_spacing_xy = field_spacing_xy # number of channels which are not covered out_channel_dim = (in_channel_dim * field_channels_xy[0] * field_channels_xy[1]) ## check parameters for inconsistencies for i, name in enumerate(["x", "y"]): if (field_channels_xy[i] > in_channels_xy[i]): err = ("Number of field channels exceeds the number of " "input channels in %s-direction. " "This would lead to an empty connection list." % name) raise Rectangular2dSwitchboardException(err) # number of output channels in x-direction x_out_channels = (((in_channels_xy[0] - field_channels_xy[0]) // field_spacing_xy[0]) + 1) x_unused_channels = in_channels_xy[0] - field_channels_xy[0] if x_unused_channels > 0: x_unused_channels %= field_spacing_xy[0] elif x_unused_channels < 0: x_unused_channels = in_channels_xy[0] # number of output channels in y-direction y_out_channels = (((in_channels_xy[1] - field_channels_xy[1]) // field_spacing_xy[1]) + 1) y_unused_channels = in_channels_xy[1] - field_channels_xy[1] if y_unused_channels > 0: y_unused_channels %= field_spacing_xy[1] elif y_unused_channels < 0: y_unused_channels = in_channels_xy[1] self.unused_channels_xy = (x_unused_channels, y_unused_channels) for i, name in enumerate(["x", "y"]): if self.unused_channels_xy[i] and not ignore_cover: err = ("Channel fields do not " "cover all input channels in %s-direction." % name) raise Rectangular2dSwitchboardException(err) self.out_channels_xy = (x_out_channels, y_out_channels) ## end of parameters checks # TODO: rewrite all this leveraging numpy? # this might make the code much simpler # use 3rd shape entry for channel width out_channels = x_out_channels * y_out_channels in_trans = CoordinateTranslator(*in_channels_xy) # input-output mapping of connections # connections has an entry for each output connection, # containing the index of the input connection. connections = numx.zeros([out_channels * out_channel_dim], dtype=numx.int32) first_out_con = 0 for y_out_chan in range(self.out_channels_xy[1]): for x_out_chan in range(self.out_channels_xy[0]): # inner loop over field x_start_chan = x_out_chan * field_spacing_xy[0] y_start_chan = y_out_chan * field_spacing_xy[1] for y_in_chan in range(y_start_chan, y_start_chan + field_channels_xy[1]): for x_in_chan in range(x_start_chan, x_start_chan + field_channels_xy[0]): first_in_con = (in_trans.image_to_index( x_in_chan, y_in_chan) * in_channel_dim) connections[first_out_con: first_out_con + in_channel_dim] = \ range(first_in_con, first_in_con + in_channel_dim) first_out_con += in_channel_dim super(Rectangular2dSwitchboard, self).__init__( input_dim=(in_channel_dim * in_channels_xy[0] * in_channels_xy[1]), connections=connections, out_channel_dim=out_channel_dim, in_channel_dim=in_channel_dim) class DoubleRect2dSwitchboardException(SwitchboardException): """Exception for routing problems in the DoubleRect2dSwitchboard class.""" pass class DoubleRect2dSwitchboard(ChannelSwitchboard): """Special 2d Switchboard where each inner point is covered twice. First the input is covered with non-overlapping rectangular fields. Then the input is covered with fields of the same size that are shifted in the x and y direction by half the field size (we call this the uneven fields). 
Note that the output of this switchboard cannot be interpreted as a rectangular grid, because the short rows are shifted. Instead it is a rhombic grid (it is not a hexagonal grid because the distances of the field centers do not satisfy the necessary relation). See http://en.wikipedia.org/wiki/Lattice_(group) Example for a 6x4 input and a field size of 2 in both directions: long row fields: 1 1 2 2 3 3 1 1 2 2 3 3 4 4 5 5 6 6 4 4 5 5 6 6 short row fields: * * * * * * * 7 7 8 8 * * 7 7 8 8 * * * * * * * Note that the short row channels come after all the long row connections in the connections sequence. public attributes (in addition to init arguments and inherited attributes): unused_channels_xy long_out_channels_xy -- Output channels in the long rows. """ # TODO: settle on 'long' or 'even' term? def __init__(self, in_channels_xy, field_channels_xy, in_channel_dim=1, ignore_cover=False): """Calculate the connections. Keyword arguments: in_channels_xy -- 2-Tuple with number of input channels in the x- and y-direction (or a single number for both). This has to be specified, since the actual input is only one 1d array. field_channels_xy -- 2-Tuple with number of channels in each field in the x- and y-direction (or a single number for both). Must be even numbers. in_channel_dim -- Number of connections per input channel ignore_cover -- Boolean value defines if an Rectangular2dSwitchboardException is raised when the fields do not cover all input channels. Set this to True if you are willing to risk loosing input channels at the border. """ in_channels_xy = to_2tuple(in_channels_xy) field_channels_xy = to_2tuple(field_channels_xy) ## count channels and stuff self.in_channels_xy = in_channels_xy self.field_channels_xy = field_channels_xy out_channel_dim = (in_channel_dim * field_channels_xy[0] * field_channels_xy[1]) ## check parameters for inconsistencies for i, name in enumerate(["x", "y"]): if field_channels_xy[i] % 2: err = ("%s_field_channels must be an even number, was %d" % (name, field_channels_xy[i])) raise Rectangular2dSwitchboardException(err) field_spacing_xy = (field_channels_xy[0] // 2, field_channels_xy[1] // 2) for i, name in enumerate(["x", "y"]): if (field_channels_xy[i] > in_channels_xy[i]): err = ("Number of field channels" "exceeds the number of input channels in %s-direction. " "This would lead to an empty connection list." 
% name) raise Rectangular2dSwitchboardException(err) # number of output channels in x-direction xl = in_channels_xy[0] // field_channels_xy[0] x_unused_channels = in_channels_xy[0] - field_channels_xy[0] if x_unused_channels > 0: x_unused_channels %= field_spacing_xy[0] elif x_unused_channels < 0: x_unused_channels = in_channels_xy[0] if x_unused_channels and not ignore_cover: err = ("Channel fields do not " "cover all input channels in x-direction.") raise Rectangular2dSwitchboardException(err) if (in_channels_xy[0] - xl * field_channels_xy[0]) >= (field_channels_xy[0] // 2): err = ("x short rows have same length as long rows.") raise Rectangular2dSwitchboardException(err) # number of output channels in y-direction yl = in_channels_xy[1] // field_channels_xy[1] y_unused_channels = in_channels_xy[1] - field_channels_xy[1] if y_unused_channels > 0: y_unused_channels %= field_spacing_xy[1] elif y_unused_channels < 0: y_unused_channels = in_channels_xy[1] if y_unused_channels and not ignore_cover: err = ("Channel fields do not " "cover all input channels in y-direction.") raise Rectangular2dSwitchboardException(err) if ((in_channels_xy[1] - yl * field_channels_xy[1]) >= (field_channels_xy[1] // 2)): err = ("y short rows have same length as long rows.") raise Rectangular2dSwitchboardException(err) # TODO: add check against n+1/2 size, long line length equals short one ## end of parameters checks self.long_out_channels_xy = (xl, yl) self.unused_channels_xy = (x_unused_channels, y_unused_channels) out_channels = xl * yl + (xl-1) * (yl-1) in_trans = CoordinateTranslator(*in_channels_xy) connections = numx.zeros([out_channels * out_channel_dim], dtype=numx.int32) first_out_con = 0 ## first create the even connections even_x_out_channels = in_channels_xy[0] // (2 * field_spacing_xy[0]) even_y_out_channels = in_channels_xy[1] // (2 * field_spacing_xy[1]) for y_out_chan in range(even_y_out_channels): for x_out_chan in range(even_x_out_channels): # inner loop over field x_start_chan = x_out_chan * (2 * field_spacing_xy[0]) y_start_chan = y_out_chan * (2 * field_spacing_xy[1]) for y_in_chan in range(y_start_chan, y_start_chan + self.field_channels_xy[1]): for x_in_chan in range(x_start_chan, x_start_chan + self.field_channels_xy[0]): first_in_con = (in_trans.image_to_index( x_in_chan, y_in_chan) * in_channel_dim) connections[first_out_con: first_out_con + in_channel_dim] = \ range(first_in_con, first_in_con + in_channel_dim) first_out_con += in_channel_dim ## create the uneven connections for y_out_chan in range(even_y_out_channels - 1): for x_out_chan in range(even_x_out_channels - 1): # inner loop over field x_start_chan = (x_out_chan * (2 * field_spacing_xy[0]) + field_spacing_xy[0]) y_start_chan = (y_out_chan * (2 * field_spacing_xy[1]) + field_spacing_xy[1]) for y_in_chan in range(y_start_chan, y_start_chan + self.field_channels_xy[1]): for x_in_chan in range(x_start_chan, x_start_chan + self.field_channels_xy[0]): first_in_con = (in_trans.image_to_index( x_in_chan, y_in_chan) * in_channel_dim) connections[first_out_con: first_out_con + in_channel_dim] = \ range(first_in_con, first_in_con + in_channel_dim) first_out_con += in_channel_dim super(DoubleRect2dSwitchboard, self).__init__( input_dim=in_channel_dim * in_channels_xy[0] * in_channels_xy[1], connections=connections, out_channel_dim=out_channel_dim, in_channel_dim=in_channel_dim) class DoubleRhomb2dSwitchboardException(SwitchboardException): """Exception for routing problems in the DoubleRhomb2dSwitchboard class.""" pass class 
DoubleRhomb2dSwitchboard(ChannelSwitchboard): """Rectangular lattice switchboard covering a rhombic lattice. All inner points of the rhombic lattice are covered twice. The rectangular fields are rotated by 45 degree. We assume that both the first and last row is a long row, e.g. * * * * * * * * * * * * * * * * * * The incoming data is expected to contain the long rows first, then the short rows. The alignment of the first field is chosen to minimize cutaway. public attributes (in addition to init arguments and inherited attributes): out_channels_xy """ def __init__(self, long_in_channels_xy, diag_field_channels, in_channel_dim=1): """Calculate the connections. Note that the incoming data will be interpreted as a rhombic grid, as it is produced by DoubleRect2dSwitchboard. Keyword arguments: long_in_channels_xy -- 2-Tuple with number of long input channels in the x- and y-direction (or a single number for both). diag_field_channels -- Field edge size (before the rotation). in_channel_dim -- Number of connections per input channel """ long_in_channels_xy = to_2tuple(long_in_channels_xy) self.long_in_channels_xy = long_in_channels_xy if long_in_channels_xy[0] < long_in_channels_xy[1]: started_in_short = 1 else: started_in_short = 0 ## check parameters for inconsistencies ## if diag_field_channels % 2: err = ("diag_field_channels must be even (for double cover)") raise DoubleRhomb2dSwitchboardException(err) self.diag_field_channels = diag_field_channels # helper variables for the field range _x_chan_field_range = (long_in_channels_xy[0] - (1 - started_in_short) - diag_field_channels) _y_chan_field_range = (long_in_channels_xy[1] - started_in_short - diag_field_channels) if (_x_chan_field_range % (diag_field_channels // 2) or _x_chan_field_range < 0): err = ("diag_field_channels value is not compatible with " "long_in_channels_xy[0]") raise DoubleRhomb2dSwitchboardException(err) if (_y_chan_field_range % (diag_field_channels // 2) or _y_chan_field_range < 0): err = ("diag_field_channels value is not compatible with " "long_in_channels_xy[1]") raise DoubleRhomb2dSwitchboardException(err) ## count channels and stuff self.in_channel_dim = in_channel_dim input_dim = ((2 * long_in_channels_xy[0] * long_in_channels_xy[1] - long_in_channels_xy[0] - long_in_channels_xy[1] + 1) * in_channel_dim) out_channel_dim = in_channel_dim * diag_field_channels**2 x_out_channels = (2 * _x_chan_field_range // diag_field_channels + 1) y_out_channels = (2 * _y_chan_field_range // diag_field_channels + 1) self.out_channels_xy = (x_out_channels, y_out_channels) ## prepare iteration over fields long_in_trans = CoordinateTranslator(*long_in_channels_xy) short_in_trans = CoordinateTranslator(long_in_channels_xy[0] - 1, long_in_channels_xy[1] - 1) short_in_offset = long_in_channels_xy[0] * long_in_channels_xy[1] connections = numx.zeros([x_out_channels * y_out_channels * out_channel_dim], dtype=numx.int32) first_out_con = 0 for y_out_chan in range(y_out_channels): for x_out_chan in range(x_out_channels): # inner loop over perceptive field x_start_chan = (1 + x_out_chan) * diag_field_channels // 2 y_start_chan = y_out_chan * diag_field_channels # set the initial field offset to minimize edge loss x_start_chan -= started_in_short y_start_chan += started_in_short # iterate over both long and short rows for iy, y_in_chan in enumerate(range(y_start_chan, y_start_chan + (2 * diag_field_channels - 1))): # half width of the field in the given row if iy <= (diag_field_channels - 1): field_width = iy + 1 else: field_width = 
(diag_field_channels - 1 - (iy % diag_field_channels)) for x_in_chan in range(x_start_chan - field_width // 2, x_start_chan + field_width // 2 + field_width % 2): # array index of the first input connection # for this input channel if not y_in_chan % 2: if started_in_short: x_in_chan += 1 first_in_con = ( long_in_trans.image_to_index( x_in_chan, y_in_chan // 2) * self.in_channel_dim) else: first_in_con = ( (short_in_trans.image_to_index( x_in_chan, y_in_chan // 2) + short_in_offset) * self.in_channel_dim) connections[first_out_con: first_out_con + self.in_channel_dim] = \ range(first_in_con, first_in_con + self.in_channel_dim) first_out_con += self.in_channel_dim super(DoubleRhomb2dSwitchboard, self).__init__( input_dim=input_dim, connections=connections, out_channel_dim=out_channel_dim, in_channel_dim=in_channel_dim) # utility class for Rectangular2dSwitchboard class CoordinateTranslator(object): """Translate between image (PIL) and numpy array coordinates. PIL image coordinates go from 0..width-1 . The first coordinate is x. Array coordinates also start from 0, but the first coordinate is the row. As depicted below we have x = column, y = row. The entry index numbers are also shown. +------> x | 1 2 | 3 4 y v array[y][x] """ def __init__(self, x_image_dim, y_image_dim): self.x_image_dim = x_image_dim self.y_image_dim = y_image_dim self._max_index = x_image_dim * y_image_dim - 1 def image_to_array(self, x, y): return y, x def image_to_index(self, x, y): if not 0 <= x < self.x_image_dim: raise Exception("x coordinate %d is outside the valid range." % x) if not 0 <= y < self.y_image_dim: raise Exception("y coordinate %d is outside the valid range." % y) return y * self.x_image_dim + x def array_to_image(self, row, col): return col, row def array_to_index(self, row, col): if not 0 <= row < self.y_image_dim: raise Exception("row index %d is outside the valid range." % row) if not 0 <= col < self.x_image_dim: raise Exception("column index %d is outside the valid range." % col) return row * self.x_image_dim + col def index_to_array(self, index): if not 0 <= index <= self._max_index: raise Exception("index %d is outside the valid range." % index) return index // self.x_image_dim, index % self.x_image_dim def index_to_image(self, index): if not 0 <= index <= self._max_index: raise Exception("index %d is outside the valid range." % index) return index % self.x_image_dim, index // self.x_image_dim mdp-3.3/mdp/hinet/switchboard_factory.py000066400000000000000000000131141203131624700204370ustar00rootroot00000000000000""" Extension for building switchboards in a 2d hierarchical network. """ # TODO: add unittests and maybe mention it in the tutorial # TODO: maybe integrate all this into the original switchboard classes? import mdp from mdp.hinet import ( ChannelSwitchboard, Rectangular2dSwitchboard, DoubleRect2dSwitchboard, DoubleRhomb2dSwitchboard ) def get_2d_image_switchboard(image_size_xy): """Return a Rectangular2dSwitchboard representing an image. This can then be used as the prev_switchboard. """ return Rectangular2dSwitchboard(in_channels_xy=image_size_xy, field_channels_xy=1, field_spacing_xy=1) class FactoryExtensionChannelSwitchboard(mdp.ExtensionNode, ChannelSwitchboard): """Extension node for the assembly of channel switchboards. data attributes: free_parameters -- List of parameters that do not depend on the previous layer. Note that there might still be restrictions imposed by the switchboard. 
By convention parameters that end with '_xy' can either be a single int or a 2-tuple (sequence) of ints. compatible_pre_switchboards -- List of compatible base classes for prev_switchboard. """ extension_name = "switchboard_factory" free_parameters = [] compatible_pre_switchboards = [ChannelSwitchboard] @classmethod def create_switchboard(cls, free_params, prev_switchboard, prev_output_dim): """Return a new instance of this switchboard. free_params -- Parameters as specified by free_parameters. prev_switchboard -- Instance of the previous switchboard. prev_output_dim -- Output dimension of the previous layer. This template method checks the compatibility of the prev_switchboard and sanitizes '_xy' in free_params. """ compatible = False for base_class in cls.compatible_pre_switchboards: if isinstance(prev_switchboard, base_class): compatible = True if not compatible: err = ("The prev_switchboard class '%s'" % prev_switchboard.__class__.__name__ + " is not compatible with this switchboard class" + " '%s'." % cls.__name__) raise mdp.hinet.SwitchboardException(err) kwargs = cls._get_switchboard_kwargs(free_params, prev_switchboard, prev_output_dim) return cls(**kwargs) @staticmethod def _get_switchboard_kwargs(free_params, prev_switchboard, prev_output_dim): """Return the kwargs for the cls '__init__' method. Reference implementation, merges input into one single channel. Override this method for other switchboard classes. """ in_channel_dim = prev_output_dim // prev_switchboard.output_channels return {"input_dim": prev_output_dim, "connections": range(prev_output_dim), "out_channel_dim": prev_output_dim, "in_channel_dim": in_channel_dim} class FactoryRectangular2dSwitchboard(FactoryExtensionChannelSwitchboard, Rectangular2dSwitchboard): free_parameters = ["field_channels_xy", "field_spacing_xy", "ignore_cover"] compatible_pre_switchboards = [Rectangular2dSwitchboard, DoubleRhomb2dSwitchboard] @staticmethod def _get_switchboard_kwargs(free_params, prev_switchboard, prev_output_dim): in_channel_dim = (prev_output_dim // prev_switchboard.output_channels) if not "ignore_cover" in free_params: free_params["ignore_cover"] = True return {"in_channels_xy": prev_switchboard.out_channels_xy, "field_channels_xy": free_params["field_channels_xy"], "field_spacing_xy": free_params["field_spacing_xy"], "in_channel_dim": in_channel_dim, "ignore_cover": free_params["ignore_cover"]} class FactoryDoubleRect2dSwitchboard(FactoryExtensionChannelSwitchboard, DoubleRect2dSwitchboard): free_parameters = ["field_channels_xy", "ignore_cover"] compatible_pre_switchboards = [Rectangular2dSwitchboard, DoubleRhomb2dSwitchboard] @staticmethod def _get_switchboard_kwargs(free_params, prev_switchboard, prev_output_dim): in_channel_dim = (prev_output_dim // prev_switchboard.output_channels) if not "ignore_cover" in free_params: free_params["ignore_cover"] = True return {"in_channels_xy": prev_switchboard.out_channels_xy, "field_channels_xy": free_params["field_channels_xy"], "in_channel_dim": in_channel_dim, "ignore_cover": free_params["ignore_cover"]} class FactoryDoubleRhomb2dSwitchboard(FactoryExtensionChannelSwitchboard, DoubleRhomb2dSwitchboard): free_parameters = ["field_size"] compatible_pre_switchboards = [DoubleRect2dSwitchboard] @staticmethod def _get_switchboard_kwargs(free_params, prev_switchboard, prev_output_dim): in_channel_dim = (prev_output_dim // prev_switchboard.output_channels) return {"long_out_channels_xy": prev_switchboard.long_out_channels_xy, "diag_field_channels": free_params["field_size"], 
"in_channel_dim": in_channel_dim} mdp-3.3/mdp/linear_flows.py000066400000000000000000000654161203131624700157700ustar00rootroot00000000000000from __future__ import with_statement import mdp import sys as _sys import os as _os import inspect as _inspect import warnings as _warnings import traceback as _traceback import cPickle as _cPickle import tempfile as _tempfile import copy as _copy from mdp import numx class CrashRecoveryException(mdp.MDPException): """Class to handle crash recovery """ def __init__(self, *args): """Allow crash recovery. Arguments: (error_string, crashing_obj, parent_exception) The crashing object is kept in self.crashing_obj The triggering parent exception is kept in self.parent_exception. """ errstr = args[0] self.crashing_obj = args[1] self.parent_exception = args[2] # ?? python 2.5: super(CrashRecoveryException, self).__init__(errstr) mdp.MDPException.__init__(self, errstr) def dump(self, filename=None): """ Save a pickle dump of the crashing object on filename. If filename is None, the crash dump is saved on a file created by the tempfile module. Return the filename. """ if filename is None: # This 'temporary file' should actually stay 'forever', i.e. until # deleted by the user. (fd, filename)=_tempfile.mkstemp(suffix=".pic", prefix="MDPcrash_") fl = _os.fdopen(fd, 'w+b', -1) else: fl = open(filename, 'w+b', -1) _cPickle.dump(self.crashing_obj, fl) fl.close() return filename class FlowException(mdp.MDPException): """Base class for exceptions in Flow subclasses.""" pass class FlowExceptionCR(CrashRecoveryException, FlowException): """Class to handle flow-crash recovery """ def __init__(self, *args): """Allow crash recovery. Arguments: (error_string, flow_instance, parent_exception) The triggering parent exception is kept in self.parent_exception. If flow_instance._crash_recovery is set, save a crash dump of flow_instance on the file self.filename""" CrashRecoveryException.__init__(self, *args) rec = self.crashing_obj._crash_recovery errstr = args[0] if rec: if isinstance(rec, str): name = rec else: name = None name = CrashRecoveryException.dump(self, name) dumpinfo = '\nA crash dump is available on: "%s"' % name self.filename = name errstr = errstr+dumpinfo Exception.__init__(self, errstr) class Flow(object): """A 'Flow' is a sequence of nodes that are trained and executed together to form a more complex algorithm. Input data is sent to the first node and is successively processed by the subsequent nodes along the sequence. Using a flow as opposed to handling manually a set of nodes has a clear advantage: The general flow implementation automatizes the training (including supervised training and multiple training phases), execution, and inverse execution (if defined) of the whole sequence. Crash recovery is optionally available: in case of failure the current state of the flow is saved for later inspection. A subclass of the basic flow class ('CheckpointFlow') allows user-supplied checkpoint functions to be executed at the end of each phase, for example to save the internal structures of a node for later analysis. Flow objects are Python containers. Most of the builtin 'list' methods are available. A 'Flow' can be saved or copied using the corresponding 'save' and 'copy' methods. 
""" def __init__(self, flow, crash_recovery=False, verbose=False): """ Keyword arguments: flow -- a list of Nodes crash_recovery -- set (or not) Crash Recovery Mode (save node in case a failure) verbose -- if True, print some basic progress information """ self._check_nodes_consistency(flow) self.flow = flow self.verbose = verbose self.set_crash_recovery(crash_recovery) def _propagate_exception(self, except_, nodenr): # capture exception. the traceback of the error is printed and a # new exception, containing the identity of the node in the flow # is raised. Allow crash recovery. (etype, val, tb) = _sys.exc_info() prev = ''.join(_traceback.format_exception(except_.__class__, except_,tb)) act = "\n! Exception in node #%d (%s):\n" % (nodenr, str(self.flow[nodenr])) errstr = ''.join(('\n', 40*'-', act, 'Node Traceback:\n', prev, 40*'-')) raise FlowExceptionCR(errstr, self, except_) def _train_node(self, data_iterable, nodenr): """Train a single node in the flow. nodenr -- index of the node in the flow """ node = self.flow[nodenr] if (data_iterable is not None) and (not node.is_trainable()): # attempted to train a node although it is not trainable. # raise a warning and continue with the next node. # wrnstr = "\n! Node %d is not trainable" % nodenr + \ # "\nYou probably need a 'None' iterable for"+\ # " this node. Continuing anyway." #_warnings.warn(wrnstr, mdp.MDPWarning) return elif (data_iterable is None) and node.is_training(): # None instead of iterable is passed to a training node err_str = ("\n! Node %d is training" " but instead of iterable received 'None'." % nodenr) raise FlowException(err_str) elif (data_iterable is None) and (not node.is_trainable()): # skip training if node is not trainable return try: train_arg_keys = self._get_required_train_args(node) train_args_needed = bool(len(train_arg_keys)) ## We leave the last training phase open for the ## CheckpointFlow class. ## Checkpoint functions must close it explicitly if needed! ## Note that the last training_phase is closed ## automatically when the node is executed. while True: empty_iterator = True for x in data_iterable: empty_iterator = False # the arguments following the first are passed only to the # currently trained node, allowing the implementation of # supervised nodes if (type(x) is tuple) or (type(x) is list): arg = x[1:] x = x[0] else: arg = () # check if the required number of arguments was given if train_args_needed: if len(train_arg_keys) != len(arg): err = ("Wrong number of arguments provided by " + "the iterable for node #%d " % nodenr + "(%d needed, %d given).\n" % (len(train_arg_keys), len(arg)) + "List of required argument keys: " + str(train_arg_keys)) raise FlowException(err) # filter x through the previous nodes if nodenr > 0: x = self._execute_seq(x, nodenr-1) # train current node node.train(x, *arg) if empty_iterator: if node.get_current_train_phase() == 1: err_str = ("The training data iteration for node " "no. %d could not be repeated for the " "second training phase, you probably " "provided an iterator instead of an " "iterable." % (nodenr+1)) raise FlowException(err_str) else: err_str = ("The training data iterator for node " "no. %d is empty." % (nodenr+1)) raise FlowException(err_str) self._stop_training_hook() if node.get_remaining_train_phase() > 1: # close the previous training phase node.stop_training() else: break except mdp.TrainingFinishedException, e: # attempted to train a node although its training phase is already # finished. raise a warning and continue with the next node. 
wrnstr = ("\n! Node %d training phase already finished" " Continuing anyway." % nodenr) _warnings.warn(wrnstr, mdp.MDPWarning) except FlowExceptionCR, e: # this exception was already propagated, # probably during the execution of a node upstream in the flow (exc_type, val) = _sys.exc_info()[:2] prev = ''.join(_traceback.format_exception_only(e.__class__, e)) prev = prev[prev.find('\n')+1:] act = "\nWhile training node #%d (%s):\n" % (nodenr, str(self.flow[nodenr])) err_str = ''.join(('\n', 40*'=', act, prev, 40*'=')) raise FlowException(err_str) except Exception, e: # capture any other exception occured during training. self._propagate_exception(e, nodenr) def _stop_training_hook(self): """Hook method that is called before stop_training is called.""" pass @staticmethod def _get_required_train_args(node): """Return arguments in addition to self and x for node.train. Argumentes that have a default value are ignored. """ train_arg_spec = _inspect.getargspec(node._train) train_arg_keys = train_arg_spec[0][2:] # ignore self, x if train_arg_spec[3]: # subtract arguments with a default value train_arg_keys = train_arg_keys[:-len(train_arg_spec[3])] return train_arg_keys def _train_check_iterables(self, data_iterables): """Return the data iterables after some checks and sanitizing. Note that this method does not distinguish between iterables and iterators, so this must be taken care of later. """ # verifies that the number of iterables matches that of # the signal nodes and multiplies them if needed. flow = self.flow # if a single array is given wrap it in a list of lists, # note that a list of 2d arrays is not valid if isinstance(data_iterables, numx.ndarray): data_iterables = [[data_iterables]] * len(flow) if not isinstance(data_iterables, list): err_str = ("'data_iterables' must be either a list of " "iterables or an array, and not %s" % type(data_iterables)) raise FlowException(err_str) # check that all elements are iterable for i, iterable in enumerate(data_iterables): if (iterable is not None) and (not hasattr(iterable, '__iter__')): err = ("Element number %d in the data_iterables" " list is not an iterable." % i) raise FlowException(err) # check that the number of data_iterables is correct if len(data_iterables) != len(flow): err_str = ("%d data iterables specified," " %d needed" % (len(data_iterables), len(flow))) raise FlowException(err_str) return data_iterables def _close_last_node(self): if self.verbose: print "Close the training phase of the last node" try: self.flow[-1].stop_training() except mdp.TrainingFinishedException: pass except Exception, e: self._propagate_exception(e, len(self.flow)-1) def set_crash_recovery(self, state = True): """Set crash recovery capabilities. When a node raises an Exception during training, execution, or inverse execution that the flow is unable to handle, a FlowExceptionCR is raised. If crash recovery is set, a crash dump of the flow instance is saved for later inspection. The original exception can be found as the 'parent_exception' attribute of the FlowExceptionCR instance. - If 'state' = False, disable crash recovery. - If 'state' is a string, the crash dump is saved on a file with that name. - If 'state' = True, the crash dump is saved on a file created by the tempfile module. """ self._crash_recovery = state def train(self, data_iterables): """Train all trainable nodes in the flow. 'data_iterables' is a list of iterables, one for each node in the flow. 
The iterators returned by the iterables must return data arrays that are then used for the node training (so the data arrays are the 'x' for the nodes). Note that the data arrays are processed by the nodes which are in front of the node that gets trained, so the data dimension must match the input dimension of the first node. If a node has only a single training phase then instead of an iterable you can alternatively provide an iterator (including generator-type iterators). For nodes with multiple training phases this is not possible, since the iterator cannot be restarted after the first iteration. For more information on iterators and iterables see http://docs.python.org/library/stdtypes.html#iterator-types . In the special case that 'data_iterables' is one single array, it is used as the data array 'x' for all nodes and training phases. Instead of a data array 'x' the iterators can also return a list or tuple, where the first entry is 'x' and the following are args for the training of the node (e.g. for supervised training). """ data_iterables = self._train_check_iterables(data_iterables) # train each Node successively for i in range(len(self.flow)): if self.verbose: print "Training node #%d (%s)" % (i, str(self.flow[i])) self._train_node(data_iterables[i], i) if self.verbose: print "Training finished" self._close_last_node() def _execute_seq(self, x, nodenr = None): # Filters input data 'x' through the nodes 0..'node_nr' included flow = self.flow if nodenr is None: nodenr = len(flow)-1 for i in range(nodenr+1): try: x = flow[i].execute(x) except Exception, e: self._propagate_exception(e, i) return x def execute(self, iterable, nodenr = None): """Process the data through all nodes in the flow. 'iterable' is an iterable or iterator (note that a list is also an iterable), which returns data arrays that are used as input to the flow. Alternatively, one can specify one data array as input. If 'nodenr' is specified, the flow is executed only up to node nr. 'nodenr'. This is equivalent to 'flow[:nodenr+1](iterable)'. """ if isinstance(iterable, numx.ndarray): return self._execute_seq(iterable, nodenr) res = [] empty_iterator = True for x in iterable: empty_iterator = False res.append(self._execute_seq(x, nodenr)) if empty_iterator: errstr = ("The execute data iterator is empty.") raise FlowException(errstr) return numx.concatenate(res) def _inverse_seq(self, x): #Successively invert input data 'x' through all nodes backwards flow = self.flow for i in range(len(flow)-1, -1, -1): try: x = flow[i].inverse(x) except Exception, e: self._propagate_exception(e, i) return x def inverse(self, iterable): """Process the data through all nodes in the flow backwards (starting from the last node up to the first node) by calling the inverse function of each node. Of course, all nodes in the flow must be invertible. 'iterable' is an iterable or iterator (note that a list is also an iterable), which returns data arrays that are used as input to the flow. Alternatively, one can specify one data array as input. 
Note that this is _not_ equivalent to 'flow[::-1](iterable)', which also executes the flow backwards but calls the 'execute' function of each node.""" if isinstance(iterable, numx.ndarray): return self._inverse_seq(iterable) res = [] empty_iterator = True for x in iterable: empty_iterator = False res.append(self._inverse_seq(x)) if empty_iterator: errstr = ("The inverse data iterator is empty.") raise FlowException(errstr) return numx.concatenate(res) def copy(self, protocol=None): """Return a deep copy of the flow. The protocol parameter should not be used. """ if protocol is not None: _warnings.warn("protocol parameter to copy() is ignored", mdp.MDPDeprecationWarning, stacklevel=2) return _copy.deepcopy(self) def save(self, filename, protocol=-1): """Save a pickled serialization of the flow to 'filename'. If 'filename' is None, return a string. Note: the pickled Flow is not guaranteed to be upward or backward compatible.""" if filename is None: return _cPickle.dumps(self, protocol) else: # if protocol != 0 open the file in binary mode mode = 'w' if protocol == 0 else 'wb' with open(filename, mode) as flh: _cPickle.dump(self, flh, protocol) def __call__(self, iterable, nodenr = None): """Calling an instance is equivalent to call its 'execute' method.""" return self.execute(iterable, nodenr=nodenr) ###### string representation def __str__(self): nodes = ', '.join([str(x) for x in self.flow]) return '['+nodes+']' def __repr__(self): # this should look like a valid Python expression that # could be used to recreate an object with the same value # eval(repr(object)) == object name = type(self).__name__ pad = len(name)+2 sep = ',\n'+' '*pad nodes = sep.join([repr(x) for x in self.flow]) return '%s([%s])' % (name, nodes) ###### private container methods def __len__(self): return len(self.flow) def _check_dimension_consistency(self, out, inp): """Raise ValueError when both dimensions are set and different.""" if ((out and inp) is not None) and out != inp: errstr = "dimensions mismatch: %d != %d" % (out, inp) raise ValueError(errstr) def _check_nodes_consistency(self, flow = None): """Check the dimension consistency of a list of nodes.""" if flow is None: flow = self.flow len_flow = len(flow) for i in range(1, len_flow): out = flow[i-1].output_dim inp = flow[i].input_dim self._check_dimension_consistency(out, inp) def _check_value_type_isnode(self, value): if not isinstance(value, mdp.Node): raise TypeError("flow item must be Node instance") def __getitem__(self, key): if isinstance(key, slice): flow_slice = self.flow[key] self._check_nodes_consistency(flow_slice) return self.__class__(flow_slice) else: return self.flow[key] def __setitem__(self, key, value): if isinstance(key, slice): [self._check_value_type_isnode(item) for item in value] else: self._check_value_type_isnode(value) # make a copy of list flow_copy = list(self.flow) flow_copy[key] = value # check dimension consistency self._check_nodes_consistency(flow_copy) # if no exception was raised, accept the new sequence self.flow = flow_copy def __delitem__(self, key): # make a copy of list flow_copy = list(self.flow) del flow_copy[key] # check dimension consistency self._check_nodes_consistency(flow_copy) # if no exception was raised, accept the new sequence self.flow = flow_copy def __contains__(self, item): return self.flow.__contains__(item) def __iter__(self): return self.flow.__iter__() def __add__(self, other): # append other to self if isinstance(other, Flow): flow_copy = list(self.flow).__add__(other.flow) # check dimension 
consistency self._check_nodes_consistency(flow_copy) # if no exception was raised, accept the new sequence return self.__class__(flow_copy) elif isinstance(other, mdp.Node): flow_copy = list(self.flow) flow_copy.append(other) # check dimension consistency self._check_nodes_consistency(flow_copy) # if no exception was raised, accept the new sequence return self.__class__(flow_copy) else: err_str = ('can only concatenate flow or node' ' (not \'%s\') to flow' % (type(other).__name__)) raise TypeError(err_str) def __iadd__(self, other): # append other to self if isinstance(other, Flow): self.flow += other.flow elif isinstance(other, mdp.Node): self.flow.append(other) else: err_str = ('can only concatenate flow or node' ' (not \'%s\') to flow' % (type(other).__name__)) raise TypeError(err_str) self._check_nodes_consistency(self.flow) return self ###### public container methods def append(self, x): """flow.append(node) -- append node to flow end""" self[len(self):len(self)] = [x] def extend(self, x): """flow.extend(iterable) -- extend flow by appending elements from the iterable""" if not isinstance(x, Flow): err_str = ('can only concatenate flow' ' (not \'%s\') to flow' % (type(x).__name__)) raise TypeError(err_str) self[len(self):len(self)] = x def insert(self, i, x): """flow.insert(index, node) -- insert node before index""" self[i:i] = [x] def pop(self, i = -1): """flow.pop([index]) -> node -- remove and return node at index (default last)""" x = self[i] del self[i] return x class CheckpointFlow(Flow): """Subclass of Flow class that allows user-supplied checkpoint functions to be executed at the end of each phase, for example to save the internal structures of a node for later analysis.""" def _train_check_checkpoints(self, checkpoints): if not isinstance(checkpoints, list): checkpoints = [checkpoints]*len(self.flow) if len(checkpoints) != len(self.flow): error_str = ("%d checkpoints specified," " %d needed" % (len(checkpoints), len(self.flow))) raise FlowException(error_str) return checkpoints def train(self, data_iterables, checkpoints): """Train all trainable nodes in the flow. In addition to the basic behavior (see 'Node.train'), calls the checkpoint function 'checkpoint[i]' when the training phase of node #i is over. A checkpoint function takes as its only argument the trained node. If the checkpoint function returns a dictionary, its content is added to the instance dictionary. The class CheckpointFunction can be used to define user-supplied checkpoint functions. """ data_iterables = self._train_check_iterables(data_iterables) checkpoints = self._train_check_checkpoints(checkpoints) # train each Node successively for i in range(len(self.flow)): node = self.flow[i] if self.verbose: print "Training node #%d (%s)" % (i, type(node).__name__) self._train_node(data_iterables[i], i) if (i <= len(checkpoints)) and (checkpoints[i] is not None): dic = checkpoints[i](node) if dic: self.__dict__.update(dic) if self.verbose: print "Training finished" self._close_last_node() class CheckpointFunction(object): """Base class for checkpoint functions. This class can be subclassed to build objects to be used as a checkpoint function in a CheckpointFlow. Such objects would allow to define parameters for the function and save informations for later use.""" def __call__(self, node): """Execute the checkpoint function. This is the method that is going to be called at the checkpoint. 
Overwrite it to match your needs.""" pass class CheckpointSaveFunction(CheckpointFunction): """This checkpoint function saves the node in pickle format. The pickle dump can be done either before the training phase is finished or right after that. In this way, it is for example possible to reload it in successive sessions and continue the training. """ def __init__(self, filename, stop_training=0, binary=1, protocol=2): """CheckpointSaveFunction constructor. 'filename' -- the name of the pickle dump file. 'stop_training' -- if set to 0 the pickle dump is done before closing the training phase if set to 1 the training phase is closed and then the node is dumped 'binary' -- sets binary mode for opening the file. When using a protocol higher than 0, make sure the file is opened in binary mode. 'protocol' -- is the 'protocol' argument for the pickle dump (see Pickle documentation for details) """ self.filename = filename self.proto = protocol self.stop_training = stop_training if binary or protocol > 0: self.mode = 'wb' else: self.mode = 'w' def __call__(self, node): with open(self.filename, self.mode) as fid: if self.stop_training: node.stop_training() _cPickle.dump(node, fid, self.proto) mdp-3.3/mdp/nodes/000077500000000000000000000000001203131624700140265ustar00rootroot00000000000000mdp-3.3/mdp/nodes/__init__.py000066400000000000000000000104541203131624700161430ustar00rootroot00000000000000# -*- coding:utf-8 -*- __docformat__ = "restructuredtext en" from pca_nodes import WhiteningNode, PCANode from sfa_nodes import SFANode, SFA2Node from ica_nodes import ICANode, CuBICANode, FastICANode, TDSEPNode from neural_gas_nodes import GrowingNeuralGasNode, NeuralGasNode from expansion_nodes import (QuadraticExpansionNode, PolynomialExpansionNode, RBFExpansionNode, GrowingNeuralGasExpansionNode, GeneralExpansionNode) from fda_nodes import FDANode from em_nodes import FANode from misc_nodes import (IdentityNode, HitParadeNode, TimeFramesNode, TimeDelayNode, TimeDelaySlidingWindowNode, EtaComputerNode, NoiseNode, NormalNoiseNode, CutoffNode, HistogramNode, AdaptiveCutoffNode) from isfa_nodes import ISFANode from rbm_nodes import RBMNode, RBMWithLabelsNode from regression_nodes import LinearRegressionNode from classifier_nodes import (SignumClassifier, PerceptronClassifier, SimpleMarkovClassifier, DiscreteHopfieldClassifier, KMeansClassifier, GaussianClassifier, NearestMeanClassifier, KNNClassifier) from jade import JADENode from nipals import NIPALSNode from lle_nodes import LLENode, HLLENode from xsfa_nodes import XSFANode, NormalizeNode # import internals for use in test_suites from misc_nodes import OneDimensionalHitParade as _OneDimensionalHitParade from expansion_nodes import expanded_dim as _expanded_dim __all__ = ['PCANode', 'WhiteningNode', 'NIPALSNode', 'FastICANode', 'CuBICANode', 'TDSEPNode', 'JADENode', 'SFANode', 'SFA2Node', 'ISFANode', 'XSFANode', 'FDANode', 'FANode', 'RBMNode', 'RBMWithLabelsNode', 'GrowingNeuralGasNode', 'LLENode', 'HLLENode', 'LinearRegressionNode', 'QuadraticExpansionNode', 'PolynomialExpansionNode', 'RBFExpansionNode','GeneralExpansionNode', 'GrowingNeuralGasExpansionNode', 'NeuralGasNode', '_expanded_dim', 'SignumClassifier', 'PerceptronClassifier', 'SimpleMarkovClassifier', 'DiscreteHopfieldClassifier', 'KMeansClassifier', 'NormalizeNode', 'GaussianClassifier', 'NearestMeanClassifier', 'KNNClassifier', 'EtaComputerNode', 'HitParadeNode', 'NoiseNode', 'NormalNoiseNode', 'TimeFramesNode', 'TimeDelayNode', 'TimeDelaySlidingWindowNode', 'CutoffNode', 
'AdaptiveCutoffNode', 'HistogramNode', 'IdentityNode', '_OneDimensionalHitParade'] # nodes with external dependencies from mdp import config, numx_description, MDPException if numx_description == 'scipy': from convolution_nodes import Convolution2DNode __all__ += ['Convolution2DNode'] if config.has_shogun: from shogun_svm_classifier import ShogunSVMClassifier __all__ += ['ShogunSVMClassifier'] if config.has_libsvm: from libsvm_classifier import LibSVMClassifier __all__ += ['LibSVMClassifier'] if config.has_sklearn: import scikits_nodes for name in scikits_nodes.DICT_: if name.endswith('Node'): globals()[name] = scikits_nodes.DICT_[name] __all__.append(name) del name from mdp import utils utils.fixup_namespace(__name__, __all__ + ['ICANode'], ('pca_nodes', 'sfa_nodes', 'ica_nodes', 'neural_gas_nodes', 'expansion_nodes', 'fda_nodes', 'em_nodes', 'misc_nodes', 'isfa_nodes', 'rbm_nodes', 'regression_nodes', 'classifier_nodes', 'jade', 'nipals', 'lle_nodes', 'xsfa_nodes', 'convolution_nodes', 'shogun_svm_classifier', 'svm_classifiers', 'libsvm_classifier', 'regression_nodes', 'classifier_nodes', 'utils', 'scikits_nodes', 'numx_description', 'config', )) mdp-3.3/mdp/nodes/classifier_nodes.py000066400000000000000000000601531203131624700177210ustar00rootroot00000000000000__docformat__ = "restructuredtext en" import mdp from mdp import ClassifierNode, utils, numx, numx_rand, numx_linalg # TODO: The GaussianClassifier and NearestMeanClassifier could be parallelized. class SignumClassifier(ClassifierNode): """This classifier node classifies as ``1`` if the sum of the data points is positive and as ``-1`` if the data point is negative""" def _get_supported_dtypes(self): """Return the list of dtypes supported by this node.""" return (mdp.utils.get_dtypes('Float') + mdp.utils.get_dtypes('Integer')) @staticmethod def is_trainable(): return False def _label(self, x): ret = [xi.sum() for xi in x] return numx.sign(ret) class PerceptronClassifier(ClassifierNode): """A simple perceptron with input_dim input nodes.""" def __init__(self, execute_method=None, input_dim=None, output_dim=None, dtype=None): super(PerceptronClassifier, self).__init__( execute_method=execute_method, input_dim=input_dim, output_dim=output_dim, dtype=dtype) self.weights = [] self.offset_weight = 0 self.learning_rate = 0.1 def _check_train_args(self, x, labels): if (isinstance(labels, (list, tuple, numx.ndarray)) and len(labels) != x.shape[0]): msg = ("The number of labels should be equal to the number of " "datapoints (%d != %d)" % (len(labels), x.shape[0])) raise mdp.TrainingException(msg) if (not isinstance(labels, (list, tuple, numx.ndarray))): labels = [labels] if (not numx.all(map(lambda x: abs(x) == 1, labels))): msg = "The labels must be either -1 or 1." raise mdp.TrainingException(msg) def _train(self, x, labels): """Update the internal structures according to the input data 'x'. x -- a matrix having different variables on different columns and observations on the rows. labels -- can be a list, tuple or array of labels (one for each data point) or a single label, in which case all input data is assigned to the same class. 
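For example (an illustrative sketch, not taken from the original docstring):

    >>> pc = mdp.nodes.PerceptronClassifier()
    >>> x = mdp.numx.array([[1., 2.], [-1., -2.]])
    >>> pc.train(x, [1, -1])   # one label per data point
    >>> pc.stop_training()
    >>> pc.label(mdp.numx.array([[2., 3.]]))   # returns an array with a single +1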
""" # if weights are not yet initialised, initialise them if not len(self.weights): self.weights = numx.ones(self.input_dim) for xi, labeli in mdp.utils.izip_stretched(x, labels): new_weights = self.weights new_offset = self.offset_weight rate = self.learning_rate * (labeli - self._label(xi)) for j in range(self.input_dim): new_weights[j] = self.weights[j] + rate * xi[j] # the offset corresponds to a node with input 1 all the time new_offset = self.offset_weight + rate * 1 self.weights = new_weights self.offset_weight = new_offset def _label(self, x): """Returns an array with class labels from the perceptron. """ return numx.sign(numx.dot(x, self.weights) + self.offset_weight) class SimpleMarkovClassifier(ClassifierNode): """A simple version of a Markov classifier. It can be trained on a vector of tuples the label being the next element in the testing data. """ def __init__(self, execute_method=None, input_dim=None, output_dim=None, dtype=None): super(SimpleMarkovClassifier, self).__init__( execute_method=execute_method, input_dim=input_dim, output_dim=output_dim, dtype=dtype) self.ntotal_connections = 0 self.features = {} self.labels = {} self.connections = {} def _get_supported_dtypes(self): """Return the list of dtypes supported by this node.""" return (mdp.utils.get_dtypes('Float') + mdp.utils.get_dtypes('AllInteger') + mdp.utils.get_dtypes('Character')) def _check_train_args(self, x, labels): if (isinstance(labels, (list, tuple, numx.ndarray)) and len(labels) != x.shape[0]): msg = ("The number of labels should be equal to the number of " "datapoints (%d != %d)" % (len(labels), x.shape[0])) raise mdp.TrainingException(msg) if (not isinstance(labels, (list, tuple, numx.ndarray))): labels = [labels] def _train(self, x, labels): """Update the internal structures according to the input data 'x'. x -- a matrix having different variables on different columns and observations on the rows. labels -- can be a list, tuple or array of labels (one for each data point) or a single label, in which case all input data is assigned to the same class. 
""" # if labels is a number, all x's belong to the same class for xi, labeli in mdp.utils.izip_stretched(x, labels): self._learn(xi, labeli) def _learn(self, feature, label): feature = tuple(feature) self.ntotal_connections += 1 if label in self.labels: self.labels[label] += 1 else: self.labels[label] = 1 if feature in self.features: self.features[feature] += 1 else: self.features[feature] = 1 connection = (feature, label) if connection in self.connections: self.connections[connection] += 1 else: self.connections[connection] = 1 def _prob(self, features): return [self._prob_one(feature) for feature in features] def _prob_one(self, feature): feature = tuple(feature) probabilities = {} try: n_feature_connections = self.features[feature] except KeyError: n_feature_connections = 0 # if n_feature_connections == 0, we get a division by zero # we could throw here, but maybe it's best to simply return # an empty dict object return {} for label in self.labels: conn = (feature, label) try: n_conn = self.connections[conn] except KeyError: n_conn = 0 try: n_label_connections = self.labels[label] except KeyError: n_label_connections = 0 p_feature_given_label = 1.0 * n_conn / n_label_connections p_label = 1.0 * n_label_connections / self.ntotal_connections p_feature = 1.0 * n_feature_connections / self.ntotal_connections prob = 1.0 * p_feature_given_label * p_label / p_feature probabilities[label] = prob return probabilities class DiscreteHopfieldClassifier(ClassifierNode): """Node for simulating a simple discrete Hopfield model""" # TODO: It is unclear if this belongs to classifiers or is a general node # because label space is a subset of feature space def __init__(self, execute_method=None, input_dim=None, output_dim=None, dtype='b'): super(DiscreteHopfieldClassifier, self).__init__( execute_method=execute_method, input_dim=input_dim, output_dim=output_dim, dtype=dtype) self._weight_matrix = 0 # assigning zero to ease addition self._num_patterns = 0 self._shuffled_update = True def _get_supported_dtypes(self): return ['b'] def _train(self, x): """Provide the hopfield net with the possible states. x -- a matrix having different variables on different columns and observations on rows. """ for pattern in x: self._train_one(pattern) def _train_one(self, pattern): pattern = mdp.utils.bool_to_sign(pattern) weights = numx.outer(pattern, pattern) self._weight_matrix += weights / float(self.input_dim) self._num_patterns += 1 @property def memory_size(self): """Returns the Hopfield net's memory size""" return self.input_dim @property def load_parameter(self): """Returns the load parameter of the Hopfield net. The quality of memory recall for a Hopfield net breaks down when the load parameter is larger than 0.14.""" return self._num_patterns / float(self.input_dim) def _stop_training(self): # remove self-feedback # we could use numx.fill_diagonal, but thats numpy 1.4 only for i in range(self.input_dim): self._weight_matrix[i][i] = 0 def _label(self, x, threshold = 0): """Retrieves patterns from the associative memory. 
""" threshold = numx.zeros(self.input_dim) + threshold return numx.array([self._label_one(pattern, threshold) for pattern in x]) def _label_one(self, pattern, threshold): pattern = mdp.utils.bool_to_sign(pattern) has_converged = False while not has_converged: has_converged = True iter_order = range(len(self._weight_matrix)) if self._shuffled_update: numx_rand.shuffle(iter_order) for row in iter_order: w_row = self._weight_matrix[row] thresh_row = threshold[row] new_pattern_row = numx.sign(numx.dot(w_row, pattern) - thresh_row) if new_pattern_row == 0: # Following McKay, Neural Networks, we do nothing # when the new pattern is zero pass elif pattern[row] != new_pattern_row: has_converged = False pattern[row] = new_pattern_row return mdp.utils.sign_to_bool(pattern) # TODO: Make it more efficient class KMeansClassifier(ClassifierNode): """Employs K-Means Clustering for a given number of centroids.""" def __init__(self, num_clusters, max_iter=10000, execute_method=None, input_dim=None, output_dim=None, dtype=None): """ :Arguments: num_clusters number of centroids to use = number of clusters max_iter if the algorithm does not reach convergence (for some numerical reason), stop after ``max_iter`` iterations """ super(KMeansClassifier, self).__init__(execute_method=execute_method, input_dim=input_dim, output_dim=output_dim, dtype=dtype) self._num_clusters = num_clusters self.data = [] self.tlen = 0 self._centroids = None self.max_iter = max_iter def _train(self, x): # append all data # we could use a Cumulator class here self.tlen += x.shape[0] self.data.extend(x.ravel().tolist()) def _stop_training(self): self.data = numx.array(self.data, dtype=self.dtype) self.data.shape = (self.tlen, self.input_dim) # choose initial centroids unless they are already given if not self._centroids: import random centr_idx = random.sample(xrange(self.tlen), self._num_clusters) #numx_rand.permutation(self.tlen)[:self._num_clusters] centroids = self.data[centr_idx] else: centroids = self._centroids for step in xrange(self.max_iter): # list of (sum_position, num_clusters) new_centroids = [(0., 0.)] * len(centroids) # cluster for x in self.data: idx = self._nearest_centroid_idx(x, centroids) # update position and count pos_count = (new_centroids[idx][0] + x, new_centroids[idx][1] + 1.) new_centroids[idx] = pos_count # get new centroid position new_centroids = numx.array([c[0] / c[1] if c[1]>0. else centroids[idx] for idx, c in enumerate(new_centroids)]) # check if we are stable if numx.all(new_centroids == centroids): self._centroids = centroids return centroids = new_centroids def _nearest_centroid_idx(self, data, centroids): dists = numx.array([numx.linalg.norm(data - c) for c in centroids]) return dists.argmin() def _label(self, x): """For a set of feature vectors x, this classifier returns a list of centroids. """ return [self._nearest_centroid_idx(xi, self._centroids) for xi in x] class GaussianClassifier(ClassifierNode): """Perform a supervised Gaussian classification. Given a set of labelled data, the node fits a gaussian distribution to each class. 
""" def __init__(self, execute_method=False, input_dim=None, output_dim=None, dtype=None): super(GaussianClassifier, self).__init__(execute_method=execute_method, input_dim=input_dim, output_dim=output_dim, dtype=dtype) self._cov_objs = {} # only stored during training # this list contains the square root of the determinant of the # corresponding covariance matrix self._sqrt_def_covs = [] # we are going to store the inverse of the covariance matrices # since only those are useful to compute the probabilities self.inv_covs = [] self.means = [] self.p = [] # number of observations self.labels = None @staticmethod def is_invertible(): return False def _check_train_args(self, x, labels): if isinstance(labels, (list, tuple, numx.ndarray)) and ( len(labels) != x.shape[0]): msg = ("The number of labels should be equal to the number of " "datapoints (%d != %d)" % (len(labels), x.shape[0])) raise mdp.TrainingException(msg) def _update_covs(self, x, lbl): if lbl not in self._cov_objs: self._cov_objs[lbl] = utils.CovarianceMatrix(dtype=self.dtype) self._cov_objs[lbl].update(x) def _train(self, x, labels): """ :Arguments: x data labels Can be a list, tuple or array of labels (one for each data point) or a single label, in which case all input data is assigned to the same class. """ # if labels is a number, all x's belong to the same class if isinstance(labels, (list, tuple, numx.ndarray)): labels_ = numx.asarray(labels) # get all classes from cl for lbl in set(labels_): x_lbl = numx.compress(labels_==lbl, x, axis=0) self._update_covs(x_lbl, lbl) else: self._update_covs(x, labels) def _stop_training(self): self.labels = self._cov_objs.keys() self.labels.sort() nitems = 0 for lbl in self.labels: cov, mean, p = self._cov_objs[lbl].fix() nitems += p self._sqrt_def_covs.append(numx.sqrt(numx_linalg.det(cov))) if self._sqrt_def_covs[-1] == 0.0: err = ("The covariance matrix is singular for at least " "one class.") raise mdp.NodeException(err) self.means.append(mean) self.p.append(p) self.inv_covs.append(utils.inv(cov)) for i in range(len(self.p)): self.p[i] /= float(nitems) del self._cov_objs def _gaussian_prob(self, x, lbl_idx): """Return the probability of the data points x with respect to a gaussian. Input arguments: x -- Input data S -- Covariance matrix mn -- Mean """ x = self._refcast(x) dim = self.input_dim sqrt_detS = self._sqrt_def_covs[lbl_idx] invS = self.inv_covs[lbl_idx] # subtract the mean x_mn = x - self.means[lbl_idx][numx.newaxis, :] # exponent exponent = -0.5 * (utils.mult(x_mn, invS)*x_mn).sum(axis=1) # constant constant = (2.*numx.pi)**(-dim/2.) 
/ sqrt_detS # probability return constant * numx.exp(exponent) def class_probabilities(self, x): """Return the posterior probability of each class given the input.""" self._pre_execution_checks(x) # compute the probability for each class tmp_prob = numx.zeros((x.shape[0], len(self.labels)), dtype=self.dtype) for i in range(len(self.labels)): tmp_prob[:, i] = self._gaussian_prob(x, i) tmp_prob[:, i] *= self.p[i] # normalize to probability 1 # (not necessary, but sometimes useful) tmp_tot = tmp_prob.sum(axis=1) tmp_tot = tmp_tot[:, numx.newaxis] return tmp_prob / tmp_tot def _prob(self, x): """Return the posterior probability of each class given the input in a dict.""" class_prob = self.class_probabilities(x) return [dict(zip(self.labels, prob)) for prob in class_prob] def _label(self, x): """Classify the input data using Maximum A-Posteriori.""" class_prob = self.class_probabilities(x) winner = class_prob.argmax(axis=-1) return [self.labels[winner[i]] for i in range(len(winner))] # TODO: Maybe extract some common elements form this class and # GaussianClassifier, like in _train. class NearestMeanClassifier(ClassifierNode): """Nearest-Mean classifier.""" def __init__(self, execute_method=None, input_dim=None, output_dim=None, dtype=None): super(NearestMeanClassifier, self).__init__( execute_method=execute_method, input_dim=input_dim, output_dim=output_dim, dtype=dtype) self.label_means = {} # not normalized during training self.n_label_samples = {} # initialized after training, used for vectorized execution: self.ordered_labels = [] self.ordered_means = None # will be array def _train(self, x, labels): """Update the mean information for the different classes. labels -- Can be a list, tuple or array of labels (one for each data point) or a single label, in which case all input data is assigned to the same class (computationally this is more efficient). """ if isinstance(labels, (list, tuple, numx.ndarray)): labels = numx.asarray(labels) for label in set(labels): x_label = numx.compress(labels==label, x, axis=0) self._update_mean(x_label, label) else: self._update_mean(x, labels) def _update_mean(self, x, label): """Update the mean with data for a single label.""" if label not in self.label_means: self.label_means[label] = numx.zeros(self.input_dim) self.n_label_samples[label] = 0 # TODO: use smarter summing to avoid rounding errors self.label_means[label] += numx.sum(x, axis=0) self.n_label_samples[label] += len(x) def _check_train_args(self, x, labels): if isinstance(labels, (list, tuple, numx.ndarray)) and ( len(labels) != x.shape[0]): msg = ("The number of labels should be equal to the number of " "datapoints (%d != %d)" % (len(labels), x.shape[0])) raise mdp.TrainingException(msg) def _stop_training(self): """Calculate the class means.""" ordered_means = [] for label in self.label_means: self.label_means[label] /= self.n_label_samples[label] self.ordered_labels.append(label) ordered_means.append(self.label_means[label]) self.ordered_means = numx.vstack(ordered_means) def _label(self, x): """Classify the data based on minimal distance to mean.""" n_labels = len(self.ordered_labels) differences = x[:,:,numx.newaxis].repeat(n_labels, 2). 
\ swapaxes(1,2) - self.ordered_means square_distances = (differences**2).sum(2) label_indices = square_distances.argmin(1) labels = [self.ordered_labels[i] for i in label_indices] return labels class KNNClassifier(ClassifierNode): """K-Nearest-Neighbour Classifier.""" def __init__(self, k=1, execute_method=None, input_dim=None, output_dim=None, dtype=None): """Initialize classifier. k -- Number of closest sample points that are taken into account. """ super(KNNClassifier, self).__init__(execute_method=execute_method, input_dim=input_dim, output_dim=output_dim, dtype=dtype) self.k = k self._label_samples = {} # temporary variable during training self.n_samples = None # initialized after training: self.samples = None # 2d array with all samples self.sample_label_indices = None # 1d array for label indices self.ordered_labels = [] def _train(self, x, labels): """Add the sampel points to the classes. labels -- Can be a list, tuple or array of labels (one for each data point) or a single label, in which case all input data is assigned to the same class (computationally this is more efficient). """ if isinstance(labels, (list, tuple, numx.ndarray)): labels = numx.asarray(labels) for label in set(labels): x_label = numx.compress(labels==label, x, axis=0) self._add_samples(x_label, label) else: self._add_samples(x, labels) def _add_samples(self, x, label): """Store x set for later neirest-neighbour calculation.""" if label not in self._label_samples: self._label_samples[label] = [] self._label_samples[label].append(x) def _check_train_args(self, x, labels): if isinstance(labels, (list, tuple, numx.ndarray)) and ( len(labels) != x.shape[0]): msg = ("The number of labels should be equal to the number of " "datapoints (%d != %d)" % (len(labels), x.shape[0])) raise mdp.TrainingException(msg) def _stop_training(self): """Organize the sample data.""" ordered_samples = [] for label in self._label_samples: ordered_samples.append( numx.concatenate(self._label_samples[label])) self.ordered_labels.append(label) del self._label_samples self.samples = numx.concatenate(ordered_samples) self.n_samples = len(self.samples) self.sample_label_indices = numx.concatenate( [numx.ones(len(ordered_samples[i]), dtype="int32") * i for i in range(len(self.ordered_labels))]) def _label(self, x): """Label the data by comparison with the reference points.""" square_distances = (x*x).sum(1)[:, numx.newaxis] \ + (self.samples*self.samples).sum(1) square_distances -= 2 * numx.dot(x, self.samples.T) min_inds = square_distances.argsort() win_inds = [numx.bincount(self.sample_label_indices[indices[0:self.k]]). argmax(0) for indices in min_inds] labels = [self.ordered_labels[i] for i in win_inds] return labels mdp-3.3/mdp/nodes/convolution_nodes.py000066400000000000000000000205531203131624700201540ustar00rootroot00000000000000__docformat__ = "restructuredtext en" from mdp import numx, numx_linalg, utils, NodeException import mdp import scipy.signal as signal # TODO automatic selection of convolution # TODO provide generators for standard filters # TODO look into Theano, define TheanoConvolutionNode class Convolution2DNode(mdp.Node): """Convolve input data with filter banks. The ``filters`` argument specifies a set of 2D filters that are convolved with the input data during execution. Convolution can be selected to be executed by linear filtering of the data, or in the frequency domain using a Discrete Fourier Transform. 
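A minimal usage sketch (random filters and images; the shapes are
illustrative only)::

    >>> import numpy as np
    >>> import mdp
    >>> filters = np.random.random((3, 5, 5))    # 3 filters of size 5x5
    >>> node = mdp.nodes.Convolution2DNode(filters, approach='fft',
    ...                                    mode='valid')
    >>> images = np.random.random((10, 20, 20))  # 10 images of size 20x20
    >>> y = node.execute(images)                 # shape (10, 3*16*16)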
Input data can be given as 3D data, each row being a 2D array to be convolved with the filters, or as 2D data, in which case the ``input_shape`` argument must be specified. This node depends on ``scipy``. """ def __init__(self, filters, input_shape = None, approach = 'fft', mode = 'full', boundary = 'fill', fillvalue = 0, output_2d = True, input_dim = None, dtype = None): """ Input arguments: input_shape -- Is a tuple (h,w) that corresponds to the height and width of the input 2D data. If the input data is given in a flattened format, it is first reshaped before convolution approach -- 'approach' is one of ['linear', 'fft'] 'linear': convolution is done by linear filtering; 'fft': convoltion is done using the Fourier Transform If 'approach' is 'fft', the 'boundary' and 'fillvalue' arguments are ignored, and are assumed to be 'fill' and 0, respectively. (*Default* = 'fft') mode -- Convolution mode, as defined in scipy.signal.convolve2d 'mode' is one of ['valid', 'same', 'full'] (*Default* = 'full') boundary -- Boundary condition, as defined in scipy.signal.convolve2d 'boundary' is one of ['fill', 'wrap', 'symm'] (*Default* = 'fill') fillvalue -- Value to fill pad input arrays with (*Default* = 0) output_2d -- If True, the output array is 2D; the first index corresponds to data points; every output data point is the result of flattened convolution results, with the output of each filter concatenated together. If False, the output array is 4D; the format is data[idx,filter_nr,x,y], with filter_nr: index of convolution filter idx: data point index x, y: 2D coordinates """ super(Convolution2DNode, self).__init__(input_dim=input_dim, dtype=dtype) self.filters = filters self._input_shape = input_shape if approach not in ['linear', 'fft']: raise NodeException("'approach' argument must be one of ['linear', 'fft']") self._approach = approach if mode not in ['valid', 'same', 'full']: raise NodeException("'mode' argument must be one of ['valid', 'same', 'full']") self._mode = mode self.boundary = boundary self.fillvalue = fillvalue self.output_2d = output_2d self._output_shape = None # ------- class properties def get_filters(self): return self._filters def set_filters(self, filters): if not isinstance(filters, numx.ndarray): raise NodeException("'filters' argument must be a numpy array") if filters.ndim != 3: raise NodeException('Filters must be specified in a 3-dim array, with each '+ 'filter on a different row') self._filters = filters filters = property(get_filters, set_filters) def get_boundary(self): return self._boundary def set_boundary(self, boundary): if boundary not in ['fill', 'wrap', 'symm']: raise NodeException( "'boundary' argument must be one of ['fill', 'wrap', 'symm']") self._boundary = boundary boundary = property(get_boundary, set_boundary) @property def input_shape(self): return self._input_shape @property def approach(self): return self._approach @property def mode(self): return self._mode @property def output_shape(self): return self._output_shape # ------- /class properties def is_trainable(self): return False def is_invertible(self): return False def _get_supported_dtypes(self): """Return the list of dtypes supported by this node. Support floating point types with size smaller or equal than 64 bits. This is because fftpack does not support floating point types larger than that. """ return [t for t in utils.get_dtypes('Float') if t.itemsize<=8] def _pre_execution_checks(self, x): """This method contains all pre-execution checks. 
It can be used when a subclass defines multiple execution methods. In this case, the output dimension depends on the type of convolution we use (padding, full, ...). Also, we want to to be able to accept 3D arrays. """ # check input rank if not x.ndim in [2,3]: error_str = "x has rank %d, should be 2 or 3" % (x.ndim) raise NodeException(error_str) # set 2D shape if necessary if self._input_shape is None: if x.ndim == 2: error_str = "Cannot infer 2D shape from 1D data points. " + \ "Data must have rank 3, or shape argument given." raise NodeException(error_str) else: self._input_shape = x.shape[1:] # set the input dimension if necessary if self.input_dim is None: self.input_dim = numx.prod(self._input_shape) # set the dtype if necessary if self.dtype is None: self.dtype = x.dtype # check the input dimension if not numx.prod(x.shape[1:]) == self.input_dim: error_str = "x has dimension %d, should be %d" % (x.shape[1], self.input_dim) raise NodeException(error_str) # set output_dim if necessary if self.output_dim is None: input_shape = self.input_shape filters_shape = self.filters.shape if self.mode == 'same': self._output_shape = input_shape elif self.mode == 'full': self._output_shape = (input_shape[0]+filters_shape[1]-1, input_shape[1]+filters_shape[2]-1) else: # mode == 'valid' self._output_shape = (input_shape[0]-filters_shape[1]+1, input_shape[1]-filters_shape[2]+1) self.output_dim = self.filters.shape[0]*numx.prod(self._output_shape) if x.shape[0] == 0: error_str = "x must have at least one observation (zero given)" raise NodeException(error_str) def _execute(self, x): is_2d = x.ndim==2 output_shape, input_shape = self._output_shape, self._input_shape filters = self.filters nfilters = filters.shape[0] # XXX depends on convolution y = numx.empty((x.shape[0], nfilters, output_shape[0], output_shape[1]), dtype=self.dtype) for n_im, im in enumerate(x): if is_2d: im = im.reshape(input_shape) for n_flt, flt in enumerate(filters): if self.approach == 'fft': y[n_im,n_flt,:,:] = signal.fftconvolve(im, flt, mode=self.mode) elif self.approach == 'linear': y[n_im,n_flt,:,:] = signal.convolve2d(im, flt, mode=self.mode, boundary=self.boundary, fillvalue=self.fillvalue) # reshape if necessary if self.output_2d: y.resize((y.shape[0], self.output_dim)) return y mdp-3.3/mdp/nodes/em_nodes.py000066400000000000000000000170201203131624700161710ustar00rootroot00000000000000__docformat__ = "restructuredtext en" import mdp from mdp import numx, numx_linalg, utils, NodeException from mdp.utils import mult, CovarianceMatrix import warnings sqrt, inv, det = numx.sqrt, utils.inv, numx_linalg.det normal = mdp.numx_rand.normal # decreasing likelihood message _LHOOD_WARNING = ('Likelihood decreased in FANode. This is probably due ' 'to some numerical errors.') warnings.filterwarnings('always', _LHOOD_WARNING, mdp.MDPWarning) class FANode(mdp.Node): """Perform Factor Analysis. The current implementation should be most efficient for long data sets: the sufficient statistics are collected in the training phase, and all EM-cycles are performed at its end. The ``execute`` method returns the Maximum A Posteriori estimate of the latent variables. The ``generate_input`` method generates observations from the prior distribution. 
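A minimal usage sketch (a toy linear-Gaussian generative model; the
mixing matrix and noise level are illustrative)::

    >>> import numpy as np
    >>> import mdp
    >>> latents = np.random.normal(0., 1., size=(1000, 2))
    >>> mixing = np.random.random((2, 10))
    >>> x = np.dot(latents, mixing) + np.random.normal(0., 0.1, size=(1000, 10))
    >>> fa = mdp.nodes.FANode(output_dim=2)
    >>> fa.train(x)
    >>> fa.stop_training()
    >>> estimated = fa.execute(x)         # MAP estimate of the latent variables
    >>> generated = fa.generate_input(5)  # 5 observations from the learned model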
**Internal variables of interest** ``self.mu`` Mean of the input data (available after training) ``self.A`` Generating weights (available after training) ``self.E_y_mtx`` Weights for Maximum A Posteriori inference ``self.sigma`` Vector of estimated variance of the noise for all input components More information about Factor Analysis can be found in Max Welling's classnotes: http://www.ics.uci.edu/~welling/classnotes/classnotes.html , in the chapter 'Linear Models'. """ def __init__(self, tol=1e-4, max_cycles=100, verbose=False, input_dim=None, output_dim=None, dtype=None): """ :Parameters: tol tolerance (minimum change in log-likelihood before exiting the EM algorithm) max_cycles maximum number of EM cycles verbose if true, print log-likelihood during the EM-cycles """ # Notation as in Max Welling's notes super(FANode, self).__init__(input_dim, output_dim, dtype) self.tol = tol self.max_cycles = max_cycles self.verbose = verbose self._cov_mtx = CovarianceMatrix(dtype, bias=True) def _train(self, x): # update the covariance matrix self._cov_mtx.update(x) def _stop_training(self): #### some definitions verbose = self.verbose typ = self.dtype tol = self.tol d = self.input_dim # if the number of latent variables is not specified, # set it equal to the number of input components if not self.output_dim: self.output_dim = d k = self.output_dim # indices of the diagonal elements of a dxd or kxk matrix idx_diag_d = [i*(d+1) for i in range(d)] idx_diag_k = [i*(k+1) for i in range(k)] # constant term in front of the log-likelihood const = -d/2. * numx.log(2.*numx.pi) ##### request the covariance matrix and clean up cov_mtx, mu, tlen = self._cov_mtx.fix() del self._cov_mtx cov_diag = cov_mtx.diagonal() ##### initialize the parameters # noise variances sigma = cov_diag # loading factors # Zoubin uses the determinant of cov_mtx^1/d as scale but it's # too slow for large matrices. Is the product of the diagonal a good # approximation? if d<=300: scale = det(cov_mtx)**(1./d) else: scale = numx.product(sigma)**(1./d) if scale <= 0.: err = ("The covariance matrix of the data is singular. " "Redundant dimensions need to be removed.") raise NodeException(err) A = normal(0., sqrt(scale/k), size=(d, k)).astype(typ) ##### EM-cycle lhood_curve = [] base_lhood = None old_lhood = -numx.inf for t in xrange(self.max_cycles): ## compute B = (A A^T + Sigma)^-1 B = mult(A, A.T) # B += diag(sigma), avoid computing diag(sigma) which is dxd B.ravel().put(idx_diag_d, B.ravel().take(idx_diag_d)+sigma) # this quantity is used later for the log-likelihood # abs is there to avoid numerical errors when det < 0 log_det_B = numx.log(abs(det(B))) # end the computation of B B = inv(B) ## other useful quantities trA_B = mult(A.T, B) trA_B_cov_mtx = mult(trA_B, cov_mtx) ##### E-step ## E_yyT = E(y_n y_n^T | x_n) E_yyT = - mult(trA_B, A) + mult(trA_B_cov_mtx, trA_B.T) # E_yyT += numx.eye(k) E_yyT.ravel().put(idx_diag_k, E_yyT.ravel().take(idx_diag_k)+1.) ##### M-step A = mult(trA_B_cov_mtx.T, inv(E_yyT)) sigma = cov_diag - (mult(A, trA_B_cov_mtx)).diagonal() ##### log-likelihood trace_B_cov = (B*cov_mtx.T).sum() # this is actually likelihood/tlen. lhood = const - 0.5*log_det_B - 0.5*trace_B_cov if verbose: print 'cycle', t, 'log-lhood:', lhood ##### convergence criterion if base_lhood is None: base_lhood = lhood else: # convergence criterion if (lhood-base_lhood)<(1.+tol)*(old_lhood-base_lhood): break if lhood < old_lhood: # this should never happen # it sometimes does, e.g. 
if the noise is extremely low, # because of numerical rounding effects warnings.warn(_LHOOD_WARNING, mdp.MDPWarning) old_lhood = lhood lhood_curve.append(lhood) self.tlen = tlen self.A = A self.mu = mu.reshape(1, d) self.sigma = sigma ## MAP matrix # compute B = (A A^T + Sigma)^-1 B = mult(A, A.T).copy() B.ravel().put(idx_diag_d, B.ravel().take(idx_diag_d)+sigma) B = inv(B) self.E_y_mtx = mult(B.T, A) self.lhood = lhood_curve def _execute(self, x): return mult(x-self.mu, self.E_y_mtx) @staticmethod def is_invertible(): return False def generate_input(self, len_or_y=1, noise=False): """ Generate data from the prior distribution. If the training phase has not been completed yet, call stop_training. :Arguments: len_or_y If integer, it specified the number of observation to generate. If array, it is used as a set of samples of the latent variables noise if true, generation includes the estimated noise """ self._if_training_stop_training() # set the output dimension if necessary if self.output_dim is None: # if the input_dim is not defined, raise an exception if self.input_dim is None: errstr = ("Number of input dimensions undefined. Inversion " "not possible.") raise NodeException(errstr) self.output_dim = self.input_dim if isinstance(len_or_y, int): size = (len_or_y, self.output_dim) y = self._refcast(mdp.numx_rand.normal(size=size)) else: y = self._refcast(len_or_y) self._check_output(y) res = mult(y, self.A.T)+self.mu if noise: ns = mdp.numx_rand.normal(size=(y.shape[0], self.input_dim)) ns *= numx.sqrt(self.sigma) res += self._refcast(ns) return res mdp-3.3/mdp/nodes/expansion_nodes.py000066400000000000000000000324761203131624700176100ustar00rootroot00000000000000__docformat__ = "restructuredtext en" import mdp from mdp import numx from mdp.utils import mult, matmult, invert_exp_funcs2 from mdp.nodes import GrowingNeuralGasNode def nmonomials(degree, nvariables): """Return the number of monomials of a given degree in a given number of variables.""" return int(mdp.utils.comb(nvariables+degree-1, degree)) def expanded_dim(degree, nvariables): """Return the size of a vector of dimension ``nvariables`` after a polynomial expansion of degree ``degree``.""" return int(mdp.utils.comb(nvariables+degree, degree))-1 class _ExpansionNode(mdp.Node): def __init__(self, input_dim = None, dtype = None): super(_ExpansionNode, self).__init__(input_dim, None, dtype) def expanded_dim(self, dim): return dim @staticmethod def is_trainable(): return False @staticmethod def is_invertible(): return False def _set_input_dim(self, n): self._input_dim = n self._output_dim = self.expanded_dim(n) def _set_output_dim(self, n): msg = "Output dim cannot be set explicitly!" 
raise mdp.NodeException(msg) class PolynomialExpansionNode(_ExpansionNode): """Perform expansion in a polynomial space.""" def __init__(self, degree, input_dim = None, dtype = None): """ Input arguments: degree -- degree of the polynomial space where the input is expanded """ self._degree = int(degree) super(PolynomialExpansionNode, self).__init__(input_dim, dtype) def _get_supported_dtypes(self): """Return the list of dtypes supported by this node.""" return (mdp.utils.get_dtypes('AllFloat') + mdp.utils.get_dtypes('AllInteger')) def expanded_dim(self, dim): """Return the size of a vector of dimension 'dim' after a polynomial expansion of degree 'self._degree'.""" return expanded_dim(self._degree, dim) def _execute(self, x): degree = self._degree dim = self.input_dim n = x.shape[1] # preallocate memory dexp = numx.zeros((self.output_dim, x.shape[0]), dtype=self.dtype) # copy monomials of degree 1 dexp[0:n, :] = x.T k = n prec_end = 0 next_lens = numx.ones((dim+1, )) next_lens[0] = 0 for i in range(2, degree+1): prec_start = prec_end prec_end += nmonomials(i-1, dim) prec = dexp[prec_start:prec_end, :] lens = next_lens[:-1].cumsum(axis=0) next_lens = numx.zeros((dim+1, )) for j in range(dim): factor = prec[lens[j]:, :] len_ = factor.shape[0] dexp[k:k+len_, :] = x[:, j] * factor next_lens[j+1] = len_ k = k+len_ return dexp.T class QuadraticExpansionNode(PolynomialExpansionNode): """Perform expansion in the space formed by all linear and quadratic monomials. ``QuadraticExpansionNode()`` is equivalent to a ``PolynomialExpansionNode(2)``""" def __init__(self, input_dim = None, dtype = None): super(QuadraticExpansionNode, self).__init__(2, input_dim = input_dim, dtype = dtype) class RBFExpansionNode(mdp.Node): """Expand input space with Gaussian Radial Basis Functions (RBFs). The input data is filtered through a set of unnormalized Gaussian filters, i.e.:: y_j = exp(-0.5/s_j * ||x - c_j||^2) for isotropic RBFs, or more in general:: y_j = exp(-0.5 * (x-c_j)^T S^-1 (x-c_j)) for anisotropic RBFs. """ def __init__(self, centers, sizes, dtype = None): """ :Arguments: centers Centers of the RBFs. The dimensionality of the centers determines the input dimensionality; the number of centers determines the output dimensionalities sizes Radius of the RBFs. ``sizes`` is a list with one element for each RBF, either a scalar (the variance of the RBFs for isotropic RBFs) or a covariance matrix (for anisotropic RBFs). If ``sizes`` is not a list, the same variance/covariance is used for all RBFs. 
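A minimal usage sketch (two isotropic RBFs with made-up centers and a
common variance)::

    >>> import numpy as np
    >>> import mdp
    >>> centers = np.array([[0., 0.], [1., 1.]])
    >>> node = mdp.nodes.RBFExpansionNode(centers, sizes=0.5)
    >>> y = node.execute(np.array([[0.1, -0.2], [0.9, 1.2]]))  # shape (2, 2)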
""" super(RBFExpansionNode, self).__init__(None, None, dtype) self._init_RBF(centers, sizes) @staticmethod def is_trainable(): return False @staticmethod def is_invertible(): return False def _init_RBF(self, centers, sizes): # initialize the centers of the RBFs centers = numx.array(centers, self.dtype) # define input/output dim self.set_input_dim(centers.shape[1]) self.set_output_dim(centers.shape[0]) # multiply sizes if necessary sizes = numx.array(sizes, self.dtype) if sizes.ndim==0 or sizes.ndim==2: sizes = numx.array([sizes]*self._output_dim) else: # check number of sizes correct if sizes.shape[0] != self._output_dim: msg = "There must be as many RBF sizes as centers" raise mdp.NodeException, msg if numx.isscalar(sizes[0]): # isotropic RBFs self._isotropic = True else: # anisotropic RBFs self._isotropic = False # check size if (sizes.shape[1] != self._input_dim or sizes.shape[2] != self._input_dim): msg = ("Dimensionality of size matrices should be the same " + "as input dimensionality (%d != %d)" % (sizes.shape[1], self._input_dim)) raise mdp.NodeException, msg # compute inverse covariance matrix for i in range(sizes.shape[0]): sizes[i,:,:] = mdp.utils.inv(sizes[i,:,:]) self._centers = centers self._sizes = sizes def _execute(self, x): y = numx.zeros((x.shape[0], self._output_dim), dtype = self.dtype) c, s = self._centers, self._sizes for i in range(self._output_dim): dist = x - c[i,:] if self._isotropic: tmp = (dist**2.).sum(axis=1) / s[i] else: tmp = (dist*matmult(dist, s[i,:,:])).sum(axis=1) y[:,i] = numx.exp(-0.5*tmp) return y class GrowingNeuralGasExpansionNode(GrowingNeuralGasNode): """ Perform a trainable radial basis expansion, where the centers and sizes of the basis functions are learned through a growing neural gas. positions of RBFs position of the nodes of the neural gas sizes of the RBFs mean distance to the neighbouring nodes. Important: Adjust the maximum number of nodes to control the dimension of the expansion. More information on this expansion type can be found in: B. Fritzke. Growing cell structures-a self-organizing network for unsupervised and supervised learning. Neural Networks 7, p. 1441--1460 (1994). """ def __init__(self, start_poss=None, eps_b=0.2, eps_n=0.006, max_age=50, lambda_=100, alpha=0.5, d=0.995, max_nodes=100, input_dim=None, dtype=None): """ For a full list of input arguments please check the documentation of GrowingNeuralGasNode. max_nodes (default 100) : maximum number of nodes in the neural gas, therefore an upper bound to the output dimension of the expansion. """ # __init__ is overwritten only to reset the default for # max_nodes. The default of the GrowingNeuralGasNode is # practically unlimited, possibly leading to very # high-dimensional expansions. super(GrowingNeuralGasExpansionNode, self).__init__( start_poss=start_poss, eps_b=eps_b, eps_n=eps_n, max_age=max_age, lambda_=lambda_, alpha=alpha, d=d, max_nodes=max_nodes, input_dim=input_dim, dtype=dtype) def _set_input_dim(self, n): # Needs to be overwritten because GrowingNeuralGasNode would # fix the output dim to n here. self._input_dim = n def _set_output_dim(self, n): msg = "Output dim cannot be set explicitly!" 
raise mdp.NodeException(msg) @staticmethod def is_trainable(): return True @staticmethod def is_invertible(): return False def _stop_training(self): super(GrowingNeuralGasExpansionNode, self)._stop_training() # set the output dimension to the number of nodes of the neural gas self._output_dim = self.get_nodes_position().shape[0] # use the nodes of the learned neural gas as centers for a radial # basis function expansion. centers = self.get_nodes_position() # use the mean distances to the neighbours as size of the RBF expansion sizes = [] for i,node in enumerate(self.graph.nodes): # calculate the size of the current RBF pos = node.data.pos sizes.append(numx.array([((pos-neighbor.data.pos)**2).sum() for neighbor in node.neighbors() ]).mean()) # initialize the radial basis function expansion with centers and sizes self.rbf_expansion = mdp.nodes.RBFExpansionNode(centers = centers, sizes = sizes) def _execute(self,x): return self.rbf_expansion(x) class GeneralExpansionNode(_ExpansionNode): """Expands the input signal x according to a list [f_0, ... f_k] of functions. Each function f_i should take the whole two-dimensional array x as input and output another two-dimensional array. Moreover the output dimension should depend only on the input dimension. The output of the node is [f_0[x], ... f_k[x]], that is, the concatenation of each one of the outputs f_i[x]. Original code contributed by Alberto Escalante. """ def __init__(self, funcs, input_dim = None, dtype = None): """ Short argument description: ``funcs`` list of functions f_i that realize the expansion """ self.funcs = funcs super(GeneralExpansionNode, self).__init__(input_dim, dtype) def expanded_dim(self, n): """The expanded dim is computed by directly applying the expansion functions f_i to a zero input of dimension n. """ return int(self.output_sizes(n).sum()) def output_sizes(self, n): """Return the individual output sizes of each expansion function when the input has lenght n""" sizes = numx.zeros(len(self.funcs)) x = numx.zeros((1,n)) for i, func in enumerate(self.funcs): outx = func(x) sizes[i] = outx.shape[1] return sizes @staticmethod def is_trainable(): return False @staticmethod def is_invertible(): return False def pseudo_inverse(self, x, use_hint=None): """Calculate a pseudo inverse of the expansion using scipy.optimize. ``use_hint`` when calculating the pseudo inverse of the expansion, the hint determines the starting point for the approximation This method requires scipy.""" try: app_x_2, app_ex_x_2 = invert_exp_funcs2(x, self.input_dim, self.funcs, use_hint=use_hint, k=0.001) return app_x_2.astype(self.dtype) except NotImplementedError, exc: raise mdp.MDPException(exc) def _execute(self, x): if self.input_dim is None: self.set_input_dim(x.shape[1]) num_samples = x.shape[0] sizes = self.output_sizes(self.input_dim) out = numx.zeros((num_samples, self.output_dim), dtype=self.dtype) current_pos = 0 for i, func in enumerate(self.funcs): out[:,current_pos:current_pos+sizes[i]] = func(x) current_pos += sizes[i] return out ### old weave inline code to perform a quadratic expansion # weave C code executed in the function QuadraticExpansionNode.execute ## _EXPANSION_POL2_CCODE = """ ## // first of all, copy the linear part ## for( int i=0; i deflation 'symm' --> symmetric g -- Nonlinearity to use. Possible values are: 'pow3' --> x^3 'tanh' --> tanh(fine_tanh*x) 'gaus' --> x*exp(-fine_gaus*x^2/2) 'skew' --> x^2 (for skewed signals) fine_g -- Nonlinearity for fine tuning. Possible values are the same as for 'g'. 
Set it to None to disable fine tuning. mu -- Step size. If mu != 1, a stabilization procedure is used: the value of mu can momentarily be halved if the algorithm is stuck between two points (this is called a stroke). Also if there is no convergence before half of the maximum number of iterations has been reached then mu will be halved for the rest of the rounds. sample_size -- Percentage of samples used in one iteration. If sample_size < 1, samples are chosen in random order. fine_tanh -- parameter for 'tanh' nonlinearity fine_gaus -- parameter for 'gaus' nonlinearity guess -- initial guess for the mixing matrix (ignored if None) max_it -- maximum number of iterations max_it_fine -- maximum number of iterations for fine tuning failures -- maximum number of failures to allow in deflation mode """ super(FastICANode, self).__init__(limit, False, verbose, whitened, white_comp, white_parm, input_dim, dtype) if approach in ['defl', 'symm']: self.approach = approach else: raise mdp.NodeException('%s approach method not known' % approach) if g in ['pow3', 'tanh', 'gaus', 'skew']: self.g = g else: raise mdp.NodeException('%s nonlinearity function not known' % g) if fine_g in ['pow3', 'tanh', 'gaus', 'skew', None]: self.fine_g = fine_g else: errmsg = '%s nonlinearity function not known' % fine_g raise mdp.NodeException(errmsg) if sample_size > 0 and sample_size <= 1: self.sample_size = sample_size else: raise mdp.NodeException('0 max_it//2): if verbose: print 'Taking long (reducing step size)...' lng = True mu = 0.5*mu if used_g % 2 == 0: used_g += 1 QOldF = QOld QOld = Q # Show the progress... if verbose: msg = ('Step no. %d,' ' convergence: %.3f' % (round+1,convergence[round])) print msg # First calculate the independent components (u_i's). # u_i = b_i' x = x' b_i. For all x:s simultaneously this is # non linearity if used_g == 10: u = mult(X.T, Q) Q = mult(X, u*u*u)/tlen - 3.*Q elif used_g == 11: u = mult(X.T, Q) Gpow3 = u*u*u Beta = (u*Gpow3).sum(axis=0) D = numx.diag((1/(Beta - 3*tlen))) Q = Q + mu * mult(Q, mult((mult(u.T, Gpow3) - numx.diag(Beta)), D)) elif used_g == 12: Xsub = self._get_rsamples(X) u = mult(Xsub.T, Q) Q = mult(Xsub, u*u*u)/Xsub.shape[1] - 3.*Q elif used_g == 13: Xsub = self._get_rsamples(X) u = mult(Xsub.T, Q) Gpow3 = u*u*u Beta = (u*Gpow3).sum(axis=0) D = numx.diag((1/(Beta - 3*Xsub.shape[1]))) Q = Q + mu * mult(Q, mult((mult(u.T, Gpow3) - numx.diag(Beta)), D)) elif used_g == 20: u = mult(X.T, Q) tang = numx.tanh(fine_tanh * u) temp = (1.-tang*tang).sum(axis=0)/tlen Q = mult(X, tang)/tlen - temp * Q * fine_tanh elif used_g == 21: u = mult(X.T, Q) tang = numx.tanh(fine_tanh * u) Beta = (u*tang).sum(axis=0) D = numx.diag(1/(Beta - fine_tanh*(1.-tang*tang).sum(axis=0))) Q = Q + mu * mult(Q, mult((mult(u.T, tang)- numx.diag(Beta)), D)) elif used_g == 22: Xsub = self._get_rsamples(X) u = mult(Xsub.T, Q) tang = numx.tanh(fine_tanh * u) temp = (1.-tang*tang).sum(axis=0)/Xsub.shape[1] Q = mult(Xsub, tang)/Xsub.shape[1] - temp * Q * fine_tanh elif used_g == 23: Xsub = self._get_rsamples(X) u = mult(Xsub.T, Q) tang = numx.tanh(fine_tanh * u) Beta = (u*tang).sum(axis=0) D = numx.diag(1/(Beta - fine_tanh*(1.-tang*tang).sum(axis=0))) Q = Q + mu * mult(Q, mult((mult(u.T, tang)- numx.diag(Beta)), D)) elif used_g == 30: u = mult(X.T, Q) u2 = u*u ex = numx.exp(-fine_gaus*u2*0.5) gauss = u*ex dgauss = (1. 
- fine_gaus*u2)*ex Q = (mult(X, gauss)-dgauss.sum(axis=0)*Q)/tlen elif used_g == 31: u = mult(X.T, Q) u2 = u*u ex = numx.exp(-fine_gaus*u2*0.5) gaus = u*ex Beta = (u*gaus).sum(axis=0) D = numx.diag(1/(Beta - ((1-fine_gaus*u2)*ex).sum(axis=0))) Q = Q + mu * mult(Q, mult((mult(u.T, gaus)- numx.diag(Beta)), D)) elif used_g == 32: Xsub = self._get_rsamples(X) u = mult(Xsub.T, Q) u2 = u*u ex = numx.exp(-fine_gaus*u2*0.5) gauss = u*ex dgauss = (1. - fine_gaus*u2)*ex Q = (mult(Xsub, gauss)-dgauss.sum(axis=0)*Q)/Xsub.shape[1] elif used_g == 33: Xsub = self._get_rsamples(X) u = mult(Xsub.T, Q) u2 = u*u ex = numx.exp(-fine_gaus*u2*0.5) gaus = u*ex Beta = (u*gaus).sum(axis=0) D = numx.diag(1/(Beta - ((1-fine_gaus*u2)*ex).sum(axis=0))) Q = Q + mu * mult(Q, mult((mult(u.T, gaus)- numx.diag(Beta)), D)) elif used_g == 40: u = mult(X.T, Q) Q = mult(X, u*u)/tlen elif used_g == 41: u = mult(X.T, Q) Gskew = u*u Beta = (u*Gskew).sum(axis=0) D = numx.diag(1/Beta) Q = Q + mu * mult(Q, mult((mult(u.T, Gskew)- numx.diag(Beta)), D)) elif used_g == 42: Xsub = self._get_rsamples(X) u = mult(Xsub.T, Q) Q = mult(Xsub, u*u)/Xsub.shape[1] elif used_g == 43: Xsub = self._get_rsamples(X) u = mult(Xsub.T, Q) Gskew = u*u Beta = (u*Gskew).sum(axis=0) D = numx.diag(1/Beta) Q = Q + mu * mult(Q, mult((mult(u.T, Gskew)- numx.diag(Beta)), D)) else: errstr = 'Nonlinearity not found: %i' % used_g raise mdp.NodeException(errstr) self.convergence = numx.array(convergence) self.convergence_fine = numx.array(convergence_fine) ret = convergence[-1] # DEFLATION APPROACH elif approach == 'defl': # adjust limit! #limit = 1 - limit*limit*0.5 # create array to store convergence convergence = [] convergence_fine = [] Q = numx.zeros((comp, comp), dtype=dtype) round = 0 nfail = 0 while round < comp: mu = self.mu used_g = gOrig stroke = 0 fine_tuned = False lng = False end_finetuning = 0 # Take a random initial vector of lenght 1 and orthogonalize it # with respect to the other vectors. w = guess[:, round] w -= mult(mult(Q, Q.T), w) w /= utils.norm2(w) wOld = numx.zeros(w.shape, dtype) wOldF = numx.zeros(w.shape, dtype) # This is the actual fixed-point iteration loop. i = 1 gabba = 1 #for i in range(max_it + 1): while i <= max_it + gabba: # Project the vector into the space orthogonal to the space # spanned by the earlier found basis vectors. Note that # we can do the projection with matrix Q, since the zero # entries do not contribute to the projection. w -= mult(mult(Q, Q.T), w) w /= utils.norm2(w) if not fine_tuned: if i == max_it + 1: err_msg = ('Component number %d did not' 'converge in %d iterations.' % (round, max_it)) if verbose: print err_msg if round == 0: raise mdp.NodeException(err_msg) nfail += 1 if nfail > failures: err = ('Too many failures to ' 'converge (%d). Giving up.' % nfail) raise mdp.NodeException(err) break else: if i >= end_finetuning: wOld = w # Test for termination condition. Note that the algorithm # has converged if the direction of w and wOld is the same. #conv = float(abs((w*wOld).sum())) conv = min(utils.norm2(w-wOld), utils.norm2(w+wOld)) convergence.append(conv) if conv < limit: if fine_tuning and (not fine_tuned): if verbose: print 'Initial convergence, fine-tuning...' fine_tuned = True gabba = max_it_fine wOld = numx.zeros(w.shape, dtype) wOldF = numx.zeros(w.shape, dtype) used_g = gFine mu = muK * self.mu end_finetuning = max_it_fine + i else: nfail = 0 convergence[round] = conv # Calculate ICA filter. Q[:, round] = w.copy() # Show the progress... 
if verbose: print 'IC %d computed ( %d steps )' % (round+1, i+1) break elif stabilization: conv_fine = min(utils.norm2(w-wOldF), utils.norm2(w+wOldF)) convergence_fine.append(conv_fine) if (stroke == 0) and conv_fine < limit: if verbose: print 'Stroke!' stroke = mu mu = 0.5*mu if used_g % 2 == 0: used_g += 1 elif (stroke != 0): mu = stroke stroke = 0 if (mu == 1) and (used_g % 2 != 0): used_g -= 1 elif (not lng) and (i > max_it//2): if verbose: print 'Taking long (reducing step size)...' lng = True mu = 0.5*mu if used_g % 2 == 0: used_g += 1 wOldF = wOld wOld = w if used_g == 10: u = mult(X.T, w) w = mult(X, u*u*u)/tlen - 3.*w elif used_g == 11: u = mult(X.T, w) EXGpow3 = mult(X, u*u*u)/tlen Beta = mult(w.T, EXGpow3) w = w - mu * (EXGpow3 - Beta*w)/(3-Beta) elif used_g == 12: Xsub = self._get_rsamples(X) u = mult(Xsub.T, w) w = mult(Xsub, u*u*u)/Xsub.shape[1] - 3.*w elif used_g == 13: Xsub = self._get_rsamples(X) u = mult(Xsub.T, w) EXGpow3 = mult(Xsub, u*u*u)/Xsub.shape[1] Beta = mult(w.T, EXGpow3) w = w - mu * (EXGpow3 - Beta*w)/(3-Beta) elif used_g == 20: u = mult(X.T, w) tang = numx.tanh(fine_tanh * u) temp = mult((1. - tang*tang).sum(axis=0), w) w = (mult(X, tang) - fine_tanh*temp)/tlen elif used_g == 21: u = mult(X.T, w) tang = numx.tanh(fine_tanh * u) Beta = mult(u.T, tang) temp = (1. - tang*tang).sum(axis=0) w = w-mu*((mult(X, tang)-Beta*w)/(fine_tanh*temp-Beta)) elif used_g == 22: Xsub = self._get_rsamples(X) u = mult(Xsub.T, w) tang = numx.tanh(fine_tanh * u) temp = mult((1. - tang*tang).sum(axis=0), w) w = (mult(Xsub, tang) - fine_tanh*temp)/Xsub.shape[1] elif used_g == 23: Xsub = self._get_rsamples(X) u = mult(Xsub.T, w) tang = numx.tanh(fine_tanh * u) Beta = mult(u.T, tang) w = w - mu * ((mult(Xsub, tang)-Beta*w) / (fine_tanh*(1. - tang*tang).sum(axis=0) - Beta)) elif used_g == 30: u = mult(X.T, w) u2 = u*u ex = numx.exp(-fine_gaus*u2*0.5) gauss = u*ex dgauss = (1. - fine_gaus *u2)*ex w = (mult(X, gauss)-mult(dgauss.sum(axis=0), w))/tlen elif used_g == 31: u = mult(X.T, w) u2 = u*u ex = numx.exp(-fine_gaus*u2*0.5) gauss = u*ex dgauss = (1. - fine_gaus *u2)*ex Beta = mult(u.T, gauss) w = w - mu*((mult(X, gauss)-Beta*w)/ (dgauss.sum(axis=0)-Beta)) elif used_g == 32: Xsub = self._get_rsamples(X) u = mult(Xsub.T, w) u2 = u*u ex = numx.exp(-fine_gaus*u2*0.5) gauss = u*ex dgauss = (1. - fine_gaus *u2)*ex w = (mult(Xsub, gauss)- mult(dgauss.sum(axis=0), w))/Xsub.shape[1] elif used_g == 33: Xsub = self._get_rsamples(X) u = mult(Xsub.T, w) u2 = u*u ex = numx.exp(-fine_gaus*u2*0.5) gauss = u*ex dgauss = (1. - fine_gaus *u2)*ex Beta = mult(u.T, gauss) w = w - mu*((mult(Xsub, gauss)-Beta*w)/ (dgauss.sum(axis=0)-Beta)) elif used_g == 40: u = mult(X.T, w) w = mult(X, u*u)/tlen elif used_g == 41: u = mult(X.T, w) EXGskew = mult(X, u*u) / tlen Beta = mult(w.T, EXGskew) w = w - mu * (EXGskew - mult(Beta, w))/(-Beta) elif used_g == 42: Xsub = self._get_rsamples(X) u = mult(Xsub.T, w) w = mult(Xsub, u*u)/Xsub.shape[1] elif used_g == 43: Xsub = self._get_rsamples(X) u = mult(Xsub.T, w) EXGskew = mult(Xsub, u*u) / Xsub.shape[1] Beta = mult(w.T, EXGskew) w = w - mu * (EXGskew - Beta*w)/(-Beta) else: errstr = 'Nonlinearity not found: %i' % used_g raise mdp.NodeException(errstr) # Normalize the new w. w /= utils.norm2(w) i += 1 round += 1 self.convergence = numx.array(convergence) self.convergence_fine = numx.array(convergence_fine) ret = convergence[-1] self.filters = Q return ret class TDSEPNode(ISFANode, ProjectMatrixMixin): """Perform Independent Component Analysis using the TDSEP algorithm. 
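A minimal usage sketch (two toy sources mixed with a random matrix;
all values are illustrative)::

    >>> import numpy as np
    >>> import mdp
    >>> t = np.linspace(0, 50, 5000)
    >>> sources = np.array([np.sin(2*t), np.cos(7*t)]).T
    >>> mixed = np.dot(sources, np.random.random((2, 2)))
    >>> node = mdp.nodes.TDSEPNode(lags=5)
    >>> node.train(mixed)
    >>> node.stop_training()
    >>> recovered = node.execute(mixed)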
Note that TDSEP, as implemented in this Node, is an online algorithm, i.e. it is suited to be trained on huge data sets, provided that the training is done sending small chunks of data for each time. Reference: Ziehe, Andreas and Muller, Klaus-Robert (1998). TDSEP an efficient algorithm for blind separation using time structure. in Niklasson, L, Boden, M, and Ziemke, T (Editors), Proc. 8th Int. Conf. Artificial Neural Networks (ICANN 1998). **Internal variables of interest** ``self.white`` The whitening node used for preprocessing. ``self.filters`` The ICA filters matrix (this is the transposed of the projection matrix after whitening). ``self.convergence`` The value of the convergence threshold. """ def __init__(self, lags=1, limit = 0.00001, max_iter=10000, verbose = False, whitened = False, white_comp = None, white_parm = None, input_dim = None, dtype = None): """ Input arguments: lags -- list of time-lags to generate the time-delayed covariance matrices. If lags is an integer, time-lags 1,2,...,'lags' are used. Note that time-lag == 0 (instantaneous correlation) is always implicitly used. whitened -- Set whitened is True if input data are already whitened. Otherwise the node will whiten the data itself. white_comp -- If whitened is False, you can set 'white_comp' to the number of whitened components to keep during the calculation (i.e., the input dimensions are reduced to white_comp by keeping the components of largest variance). white_parm -- a dictionary with additional parameters for whitening. It is passed directly to the WhiteningNode constructor. Ex: white_parm = { 'svd' : True } limit -- convergence threshold. max_iter -- If the algorithms does not achieve convergence within max_iter iterations raise an Exception. Should be larger than 100. """ super(TDSEPNode, self).__init__(lags=lags, sfa_ica_coeff=(0., 1.), icaweights=None, sfaweights=None, whitened=whitened, white_comp=white_comp, white_parm = None, eps_contrast=limit, max_iter=max_iter, RP=None, verbose=verbose, input_dim=input_dim, output_dim=None, dtype=dtype) def _stop_training(self, covs=None): super(TDSEPNode, self)._stop_training(covs) # set filters self.filters = self.RP # set convergence self.convergence = self.final_contrast mdp-3.3/mdp/nodes/isfa_nodes.py000066400000000000000000000763361203131624700165310ustar00rootroot00000000000000__docformat__ = "restructuredtext en" import sys as _sys import mdp from mdp import Node, NodeException, numx, numx_rand from mdp.nodes import WhiteningNode from mdp.utils import (DelayCovarianceMatrix, MultipleCovarianceMatrices, rotate, mult) # TODO: support floats of size different than 64-bit; will need to change SQRT_EPS_D # rename often used functions sum, cos, sin, PI = numx.sum, numx.cos, numx.sin, numx.pi SQRT_EPS_D = numx.sqrt(numx.finfo('d').eps) def _triu(m, k=0): """ returns the elements on and above the k-th diagonal of m. k=0 is the main diagonal, k > 0 is above and k < 0 is below the main diagonal.""" N = m.shape[0] M = m.shape[1] x = numx.greater_equal(numx.subtract.outer(numx.arange(N), numx.arange(M)),1-k) out = (1-x)*m return out ############# class ISFANode(Node): """ Perform Independent Slow Feature Analysis on the input data. **Internal variables of interest** ``self.RP`` The global rotation-permutation matrix. This is the filter applied on input_data to get output_data ``self.RPC`` The *complete* global rotation-permutation matrix. 
This is a matrix of dimension input_dim x input_dim (the 'outer space' is retained) ``self.covs`` A `mdp.utils.MultipleCovarianceMatrices` instance containing the current time-delayed covariance matrices of the input_data. After convergence the uppermost ``output_dim`` x ``output_dim`` submatrices should be almost diagonal. ``self.covs[n-1]`` is the covariance matrix relative to the ``n``-th time-lag Note: they are not cleared after convergence. If you need to free some memory, you can safely delete them with:: >>> del self.covs ``self.initial_contrast`` A dictionary with the starting contrast and the SFA and ICA parts of it. ``self.final_contrast`` Like the above but after convergence. Note: If you intend to use this node for large datasets please have a look at the ``stop_training`` method documentation for speeding things up. References: Blaschke, T. , Zito, T., and Wiskott, L. (2007). Independent Slow Feature Analysis and Nonlinear Blind Source Separation. Neural Computation 19(4):994-1021 (2007) http://itb.biologie.hu-berlin.de/~wiskott/Publications/BlasZitoWisk2007-ISFA-NeurComp.pdf """ def __init__(self, lags=1, sfa_ica_coeff=(1., 1.), icaweights=None, sfaweights=None, whitened=False, white_comp = None, white_parm = None, eps_contrast=1e-6, max_iter=10000, RP=None, verbose=False, input_dim=None, output_dim=None, dtype=None): """ Perform Independent Slow Feature Analysis. The notation is the same used in the paper by Blaschke et al. Please refer to the paper for more information. :Parameters: lags list of time-lags to generate the time-delayed covariance matrices (in the paper this is the set of \tau). If lags is an integer, time-lags 1,2,...,'lags' are used. Note that time-lag == 0 (instantaneous correlation) is always implicitly used. sfa_ica_coeff a list of float with two entries, which defines the weights of the SFA and ICA part of the objective function. They are called b_{SFA} and b_{ICA} in the paper. sfaweights weighting factors for the covariance matrices relative to the SFA part of the objective function (called \kappa_{SFA}^{\tau} in the paper). Default is [1., 0., ..., 0.] For possible values see the description of icaweights. icaweights weighting factors for the cov matrices relative to the ICA part of the objective function (called \kappa_{ICA}^{\tau} in the paper). Default is 1. Possible values are: - an integer ``n``: all matrices are weighted the same (note that it does not make sense to have ``n != 1``) - a list or array of floats of ``len == len(lags)``: each element of the list is used for weighting the corresponding matrix - ``None``: use the default values. whitened ``True`` if input data is already white, ``False`` otherwise (the data will be whitened internally). white_comp If whitened is false, you can set ``white_comp`` to the number of whitened components to keep during the calculation (i.e., the input dimensions are reduced to ``white_comp`` by keeping the components of largest variance). white_parm a dictionary with additional parameters for whitening. It is passed directly to the WhiteningNode constructor. Ex: white_parm = { 'svd' : True } eps_contrast Convergence is achieved when the relative improvement in the contrast is below this threshold. Values in the range [1E-4, 1E-10] are usually reasonable. max_iter If the algorithms does not achieve convergence within max_iter iterations raise an Exception. Should be larger than 100. RP Starting rotation-permutation matrix. It is an input_dim x input_dim matrix used to initially rotate the input components. 
If not set, the identity matrix is used. In the paper this is used to start the algorithm at the SFA solution (which is often quite near to the optimum). verbose print progress information during convergence. This can slow down the algorithm, but it's the only way to see the rate of improvement and immediately spot if something is going wrong. output_dim sets the number of independent components that have to be extracted. Note that if this is not smaller than input_dim, the problem is solved linearly and SFA would give the same solution only much faster. """ # check that the "lags" argument has some meaningful value if isinstance(lags, (int, long)): lags = range(1, lags+1) elif isinstance(lags, (list, tuple)): lags = numx.array(lags, "i") elif isinstance(lags, numx.ndarray): if not (lags.dtype.char in ['i', 'l']): err_str = "lags must be integer!" raise NodeException(err_str) else: pass else: err_str = ("Lags must be int, list or array. Found " "%s!" % (type(lags).__name__)) raise NodeException(err_str) self.lags = lags # sanity checks for weights if icaweights is None: self.icaweights = 1. else: if (len(icaweights) != len(lags)): err = ("icaweights vector length is %d, " "should be %d" % (str(len(icaweights)), str(len(lags)))) raise NodeException(err) self.icaweights = icaweights if sfaweights is None: self.sfaweights = [0]*len(lags) self.sfaweights[0] = 1. else: if (len(sfaweights) != len(lags)): err = ("sfaweights vector length is %d, " "should be %d" % (str(len(sfaweights)), str(len(lags)))) raise NodeException(err) self.sfaweights = sfaweights # store attributes self.sfa_ica_coeff = sfa_ica_coeff self.max_iter = max_iter self.verbose = verbose self.eps_contrast = eps_contrast # if input is not white, insert a WhiteningNode self.whitened = whitened if not whitened: if white_parm is None: white_parm = {} if output_dim is not None: white_comp = output_dim elif white_comp is not None: output_dim = white_comp self.white = WhiteningNode(input_dim=input_dim, output_dim=white_comp, dtype=dtype, **white_parm) # initialize covariance matrices self.covs = [ DelayCovarianceMatrix(dt, dtype=dtype) for dt in lags ] # initialize the global rotation-permutation matrix # if not set that we'll eventually be an identity matrix self.RP = RP # initialize verbose structure to print nice and useful progress info if verbose: info = { 'sweep' : max(len(str(self.max_iter)), 5), 'perturbe': max(len(str(self.max_iter)), 5), 'float' : 5+8, 'fmt' : "%.5e", 'sep' : " | "} f1 = "Sweep".center(info['sweep']) f1_2 = "Pertb". center(info['perturbe']) f2 = "SFA part".center(info['float']) f3 = "ICA part".center(info['float']) f4 = "Contrast".center(info['float']) header = info['sep'].join([f1, f1_2, f2, f3, f4]) info['header'] = header+'\n' info['line'] = len(header)*"-" self._info = info # finally call base class constructor super(ISFANode, self).__init__(input_dim, output_dim, dtype) def _get_supported_dtypes(self): """Return the list of dtypes supported by this node. Support floating point types with size larger or equal than 64 bits. 
""" return [t for t in mdp.utils.get_dtypes('Float') if t.itemsize>=8] def _set_dtype(self, dtype): # when typecode is set, we set the whitening node if needed and # the SFA and ICA weights self._dtype = dtype if not self.whitened and self.white.dtype is None: self.white.dtype = dtype self.icaweights = numx.array(self.icaweights, dtype) self.sfaweights = numx.array(self.sfaweights, dtype) def _set_input_dim(self, n): self._input_dim = n if not self.whitened and self.white.output_dim is not None: self._effective_input_dim = self.white.output_dim else: self._effective_input_dim = n def _train(self, x): # train the whitening node if needed if not self.whitened: self.white.train(x) # update the covariance matrices [self.covs[i].update(x) for i in range(len(self.lags))] def _execute(self, x): # filter through whitening node if needed if not self.whitened: x = self.white.execute(x) # rotate input return mult(x, self.RP) def _inverse(self, y): # counter-rotate input x = mult(y, self.RP.T) # invert whitening node if needed if not self.whitened: x = self.white.inverse(x) return x def _fmt_prog_info(self, sweep, pert, contrast, sfa = None, ica = None): # for internal use only! # format the progress information # don't try to understand this code: it Just Works (TM) fmt = self._info sweep_str = str(sweep).rjust(fmt['sweep']) pert_str = str(pert).rjust(fmt['perturbe']) if sfa is None: sfa_str = fmt['float']*' ' else: sfa_str = (fmt['fmt']%(sfa)).rjust(fmt['float']) if ica is None: ica_str = fmt['float']*' ' else: ica_str = (fmt['fmt'] % (ica)).rjust(fmt['float']) contrast_str = (fmt['fmt'] % (contrast)).rjust(fmt['float']) table_entry = fmt['sep'].join([sweep_str, pert_str, sfa_str, ica_str, contrast_str]) return table_entry def _get_eye(self): # return an identity matrix with the right dimensions and type return numx.eye(self._effective_input_dim, dtype=self.dtype) def _get_rnd_rotation(self, dim): # return a random rot matrix with the right dimensions and type return mdp.utils.random_rot(dim, self.dtype) def _get_rnd_permutation(self, dim): # return a random permut matrix with the right dimensions and type zero = numx.zeros((dim, dim), dtype=self.dtype) row = numx_rand.permutation(dim) for col in range(dim): zero[row[col], col] = 1. return zero def _givens_angle(self, i, j, covs, bica_bsfa=None, complete=0): # Return the Givens rotation angle for which the contrast function # is minimal if bica_bsfa is None: bica_bsfa = self._bica_bsfa if j < self.output_dim: return self._givens_angle_case1(i, j, covs, bica_bsfa, complete=complete) else: return self._givens_angle_case2(i, j, covs, bica_bsfa, complete=complete) def _givens_angle_case2(self, m, n, covs, bica_bsfa, complete=0): # This function makes use of the constants computed in the paper # # R -> R # m -> \mu # n -> \nu # # Note that the minus sign before the angle phi is there because # in the paper the rotation convention is the opposite of ours. 
ncovs = covs.ncovs covs = covs.covs icaweights = self.icaweights sfaweights = self.sfaweights R = self.output_dim bica, bsfa = bica_bsfa Cmm, Cmn, Cnn = covs[m, m, :], covs[m, n, :], covs[n, n, :] d0 = (sfaweights * Cmm*Cmm).sum() d1 = 4*(sfaweights * Cmn*Cmm).sum() d2 = 2*(sfaweights * (2*Cmn*Cmn + Cmm*Cnn)).sum() d3 = 4*(sfaweights * Cmn*Cnn).sum() d4 = (sfaweights * Cnn*Cnn).sum() e0 = 2*(icaweights * ((covs[:R, m, :]*covs[:R, m, :]).sum(axis=0) - Cmm*Cmm)).sum() e1 = 4*(icaweights * ((covs[:R, m, :]*covs[:R, n, :]).sum(axis=0) - Cmm*Cmn)).sum() e2 = 2*(icaweights * ((covs[:R, n, :]*covs[:R, n, :]).sum(axis=0) - Cmn*Cmn)).sum() s22 = 0.25 * bsfa*(d1+d3) + 0.5* bica*(e1) c22 = 0.5 * bsfa*(d0-d4) + 0.5* bica*(e0-e2) s24 = 0.125* bsfa*(d1-d3) c24 = 0.125* bsfa*(d0-d2+d4) # Compute the contrast function in a grid of angles to find a # first approximation for the minimum. Repeat two times # (effectively doubling the resolution). Note that we can do # that because we know we have a single minimum. # # npoints should not be too large otherwise the contrast # funtion appears to be constant. This is because we hit the # maximum resolution for the cosine function (ca. 1e-15) npoints = 100 left = -PI/2 - PI/(npoints+1) right = PI/2 + PI/(npoints+1) for iter in (1, 2): phi = numx.linspace(left, right, npoints+3) contrast = c22*cos(-2*phi)+s22*sin(-2*phi)+\ c24*cos(-4*phi)+s24*sin(-4*phi) minidx = contrast.argmin() left = phi[max(minidx-1, 0)] right = phi[min(minidx+1, len(phi)-1)] # The contrast is almost a parabola around the minimum. # To find the minimum we can therefore compute the derivative # (which should be a line) and calculate its root. # This step helps to overcome the resolution limit of the # cosine function and clearly improve the final result. der_left = 2*c22*sin(-2*left)- 2*s22*cos(-2*left)+\ 4*c24*sin(-4*left)- 4*s24*cos(-4*left) der_right = 2*c22*sin(-2*right)-2*s22*cos(-2*right)+\ 4*c24*sin(-4*right)-4*s24*cos(-4*right) if abs(der_left - der_right) < SQRT_EPS_D: minimum = phi[minidx] else: minimum = right - der_right*(right-left)/(der_right-der_left) dc = numx.zeros((ncovs,), dtype = self.dtype) for t in range(ncovs): dg = covs[:R, :R, t].diagonal() dc[t] = (dg*dg).sum(axis=0) dc = ((dc-Cmm*Cmm)*sfaweights).sum() ec = numx.zeros((ncovs, ), dtype = self.dtype) for t in range(ncovs): ec[t] = sum([covs[i, j, t]*covs[i, j, t] for i in range(R-1) for j in range(i+1, R) if i != m and j != m]) ec = 2*(ec*icaweights).sum() a20 = 0.125*bsfa*(3*d0+d2+3*d4+8*dc)+0.5*bica*(e0+e2+2*ec) minimum_contrast = a20+c22*cos(-2*minimum)+s22*sin(-2*minimum)+\ c24*cos(-4*minimum)+s24*sin(-4*minimum) if complete: # Compute the contrast between -pi/2 and pi/2 # (useful for testing purposes) npoints = 1000 phi = numx.linspace(-PI/2, PI/2, npoints+1) contrast = a20 + c22*cos(-2*phi) + s22*sin(-2*phi) +\ c24*cos(-4*phi) + s24*sin(-4*phi) return phi, contrast, minimum, minimum_contrast else: return minimum, minimum_contrast def _givens_angle_case1(self, m, n, covs, bica_bsfa, complete=0): # This function makes use of the constants computed in the paper # # R -> R # m -> \mu # n -> \nu # # Note that the minus sign before the angle phi is there because # in the paper the rotation convention is the opposite of ours. 
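# Sketch of the closed-form step below (names are the local variables
# defined in this method): restricted to the (m, n) rotation plane the
# contrast reduces to
#   f(phi) = a20 + c24*cos(-4*phi) + s24*sin(-4*phi)
# so the minimizer follows directly from phi4 = arctan2(s24, c24), shifted
# by pi/4 so that abs(minimum) < pi/4 (the period is pi/2).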
ncovs = covs.ncovs covs = covs.covs icaweights = self.icaweights sfaweights = self.sfaweights bica, bsfa = bica_bsfa Cmm, Cmn, Cnn = covs[m, m, :], covs[m, n, :], covs[n, n, :] d0 = (sfaweights * (Cmm*Cmm+Cnn*Cnn)).sum() d1 = 4*(sfaweights * (Cmm*Cmn-Cmn*Cnn)).sum() d2 = 2*(sfaweights * (2*Cmn*Cmn+Cmm*Cnn)).sum() e0 = 2*(icaweights * Cmn*Cmn).sum() e1 = 4*(icaweights * (Cmn*Cnn-Cmm*Cmn)).sum() e2 = (icaweights * ((Cmm-Cnn)*(Cmm-Cnn)-2*Cmn*Cmn)).sum() s24 = 0.25* (bsfa * d1 + bica * e1) c24 = 0.25* (bsfa *(d0-d2)+ bica *(e0-e2)) # compute the exact minimum # Note that 'arctan' finds always the first maximum # because s24sin(4p)+c24cos(4p)=const*cos(4p-arctan) # the minimum lies +pi/4 apart (period = pi/2). # In other words we want that: abs(minimum) < pi/4 phi4 = numx.arctan2(s24, c24) # use if-structure until bug in numx.sign is solved if phi4 >= 0: minimum = -0.25*(phi4-PI) else: minimum = -0.25*(phi4+PI) # compute all constants: R = self.output_dim dc = numx.zeros((ncovs, ), dtype = self.dtype) for t in range(ncovs): dg = covs[:R, :R, t].diagonal() dc[t] = (dg*dg).sum(axis=0) dc = ((dc-Cnn*Cnn-Cmm*Cmm)*sfaweights).sum() ec = numx.zeros((ncovs, ), dtype = self.dtype) for t in range(ncovs): triu_covs = _triu(covs[:R, :R, t], 1).ravel() ec[t] = ((triu_covs*triu_covs).sum() - covs[m, n, t]*covs[m, n, t]) ec = 2*(icaweights*ec).sum() a20 = 0.25*(bsfa*(4*dc+d2+3*d0)+bica*(4*ec+e2+3*e0)) minimum_contrast = a20+c24*cos(-4*minimum)+s24*sin(-4*minimum) npoints = 1000 if complete == 1: # Compute the contrast between -pi/2 and pi/2 # (useful for testing purposes) phi = numx.linspace(-PI/2, PI/2, npoints+1) contrast = a20 + c24*cos(-4*phi) + s24*sin(-4*phi) return phi, contrast, minimum, minimum_contrast elif complete == 2: phi = numx.linspace(-PI/4, PI/4, npoints+1) contrast = a20 + c24*cos(-4*phi) + s24*sin(-4*phi) return phi, contrast, minimum, minimum_contrast else: return minimum, minimum_contrast def _get_contrast(self, covs, bica_bsfa = None): if bica_bsfa is None: bica_bsfa = self._bica_bsfa # return current value of the contrast R = self.output_dim ncovs = covs.ncovs covs = covs.covs icaweights = self.icaweights sfaweights = self.sfaweights # unpack the bsfa and bica coefficients bica, bsfa = bica_bsfa sfa = numx.zeros((ncovs, ), dtype=self.dtype) ica = numx.zeros((ncovs, ), dtype=self.dtype) for t in range(ncovs): sq_corr = covs[:R, :R, t]*covs[:R, :R, t] sfa[t] = sq_corr.trace() ica[t] = 2*_triu(sq_corr, 1).ravel().sum() return (bsfa*sfaweights*sfa).sum(), (bica*icaweights*ica).sum() def _adjust_ica_sfa_coeff(self): # adjust sfa/ica ratio. 
ica and sfa term are scaled # differently because sfa accounts for the diagonal terms # whereas ica accounts for the off-diagonal terms ncomp = self.output_dim if ncomp > 1: bica = self.sfa_ica_coeff[1]/(ncomp*(ncomp-1)) bsfa = -self.sfa_ica_coeff[0]/ncomp else: bica = 0.#self.sfa_ica_coeff[1] bsfa = -self.sfa_ica_coeff[0] self._bica_bsfa = [bica, bsfa] def _fix_covs(self, covs=None): # fiv covariance matrices if covs is None: covs = self.covs if not self.whitened: white = self.white white.stop_training() proj = white.get_projmatrix(transposed=0) else: proj = None # fix and whiten the covariance matrices for i in range(len(self.lags)): covs[i], avg, avg_dt, tlen = covs[i].fix(proj) # send the matrices to the container class covs = MultipleCovarianceMatrices(covs) # symmetrize the cov matrices covs.symmetrize() self.covs = covs def _optimize(self): # optimize contrast function # save initial contrast sfa, ica = self._get_contrast(self.covs) self.initial_contrast = {'SFA': sfa, 'ICA': ica, 'TOT': sfa + ica} # info headers if self.verbose: print self._info['header']+self._info['line'] # initialize control variables # contrast contrast = sfa+ica # local rotation matrix Q = self._get_eye() # local copy of correlation matrices covs = self.covs.copy() # maximum improvement in the contrast function max_increase = self.eps_contrast # Number of sweeps sweep = 0 # flag for stopping sweeping sweeping = True # flag to check if we already perturbed the outer space # - negative means that we exit from this routine # because we hit numerical precision or because # there's no outer space to be perturbed (input_dim == outpu_dim) # - positive means the number of perturbations done # before finding no further improvement perturbed = 0 # size of the perturbation matrix psize = self._effective_input_dim-self.output_dim # if there is no outer space don't perturbe if self._effective_input_dim == self.output_dim: perturbed = -1 # local eye matrix eye = self._get_eye() # main loop # we'll keep on sweeping until the contrast has improved less # then self.eps_contrast part_sweep = 0 while sweeping: # update number of sweeps sweep += 1 # perform a single sweep max_increase, covs, Q, contrast = self._do_sweep(covs, Q, contrast) if max_increase < 0 or contrast == 0: # we hit numerical precision, exit! sweeping = False if perturbed == 0: perturbed = -1 else: perturbed = -perturbed if (max_increase < self.eps_contrast) and (max_increase) >= 0 : # rate of change is small for all pairs in a sweep if perturbed == 0: # perturbe the outer space one time with a random rotation perturbed = 1 elif perturbed >= 1 and part_sweep == sweep-1: # after the last pertubation no useful step has # been done. exit! sweeping = False elif perturbed < 0: # we can't perturbe anymore sweeping = False # keep track of the last sweep we perturbed part_sweep = sweep # perform perturbation if needed if perturbed >= 1 and sweeping is True: # generate a random rotation matrix for the external subspace PRT = eye.copy() rot = self._get_rnd_rotation(psize) # generate a random permutation matrix for the ext. 
subspace perm = self._get_rnd_permutation(psize) # combine rotation and permutation rot_perm = mult(rot, perm) # apply rotation+permutation PRT[self.output_dim:, self.output_dim:] = rot_perm covs.transform(PRT) Q = mult(Q, PRT) # increment perturbation counter perturbed += 1 # verbose progress information if self.verbose: table_entry = self._fmt_prog_info(sweep, perturbed, contrast) _sys.stdout.write(table_entry+len(table_entry)*'\b') _sys.stdout.flush() # if we made too many sweeps exit with error! if sweep == self.max_iter: err_str = ("Failed to converge, maximum increase= " "%.5e" % (max_increase)) raise NodeException(err_str) # if we land here, we have converged! # calculate output contrast sfa, ica = self._get_contrast(covs) contrast = sfa+ica # print final information if self.verbose: print self._fmt_prog_info(sweep, perturbed, contrast, sfa, ica) print self._info['line'] self.final_contrast = {'SFA': sfa, 'ICA': ica, 'TOT': sfa + ica} # finally return optimal rotation matrix return Q def _do_sweep(self, covs, Q, prev_contrast): # perform a single sweep # initialize maximal improvement in a single sweep max_increase = -1 # shuffle rotation order numx_rand.shuffle(self.rot_axis) # sweep through all axes combinations for (i, j) in self.rot_axis: # get the angle that minimizes the contrast # and the contrast value angle, contrast = self._givens_angle(i, j, covs) if contrast == 0: # we hit numerical precision in case when b_sfa == 0 # we can only break things from here on, better quit! max_increase = -1 break # relative improvement in the contrast function relative_diff = (prev_contrast-contrast)/abs(prev_contrast) if relative_diff < 0: # if rate of change is negative we hit numerical precision # or we already sit on the optimum for this pair of axis. # don't rotate anymore and go to the next pair continue # update the rotation matrix rotate(Q, angle, [i, j]) # rotate the covariance matrices covs.rotate(angle, [i, j]) # store maximum and previous rate of change max_increase = max(max_increase, relative_diff) prev_contrast = contrast return max_increase, covs, Q, contrast def _stop_training(self, covs=None): """Stop the training phase. If the node is used on large datasets it may be wise to first learn the covariance matrices, and then tune the parameters until a suitable parameter set has been found (learning the covariance matrices is the slowest part in this case). This could be done for example in the following way (assuming the data is already white): >>> covs=[mdp.utils.DelayCovarianceMatrix(dt, dtype=dtype) ... for dt in lags] >>> for block in data: ... [covs[i].update(block) for i in range(len(lags))] You can then initialize the ISFANode with the desired parameters, do a fake training with some random data to set the internal node structure and then call stop_training with the stored covariance matrices. For example: >>> isfa = ISFANode(lags, .....) >>> x = mdp.numx_rand.random((100, input_dim)).astype(dtype) >>> isfa.train(x) >>> isfa.stop_training(covs=covs) This trick has been used in the paper to apply ISFA to surrogate matrices, i.e. covariance matrices that were not learnt on a real dataset. 
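For completeness, a plain end-to-end usage sketch (the data below is only
illustrative -- a short moving average to give the input some temporal
structure -- and convergence is not guaranteed for arbitrary inputs):

>>> noise = mdp.numx_rand.random((5000, 4))
>>> x = noise[:-1] + noise[1:]
>>> isfa = ISFANode(lags=2, output_dim=2)
>>> isfa.train(x)
>>> isfa.stop_training()
>>> y = isfa.execute(x)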
""" # fix, whiten, symmetrize and weight the covariance matrices # the functions sets also the number of input components self.ncomp self._fix_covs(covs) # if output_dim were not set, set it to be the number of input comps if self.output_dim is None: self.output_dim = self._effective_input_dim # adjust b_sfa and b_ica self._adjust_ica_sfa_coeff() # initialize all possible rotation axes self.rot_axis = [(i, j) for i in range(0, self.output_dim) for j in range(i+1, self._effective_input_dim)] # initialize the global rotation-permutation matrix (RP): RP = self.RP if RP is None: RP = self._get_eye() else: # apply the global rotation matrix self.covs.transform(RP) # find optimal rotation Q = self._optimize() RP = mult(RP, Q) # rotate and permute the covariance matrices # we do it here in one step, to avoid the cumulative errors # of multiple rotations in _optimize self.covs.transform(Q) # keep the complete rotation-permutation matrix self.RPC = RP.copy() # Reduce dimension to match output_dim# RP = RP[:, :self.output_dim] # the variance for the derivative of a whitened signal is # 0 <= v <= 4, therefore the diagonal elements of the delayed # covariance matrice with time lag = 1 (covs[:,:,0]) are # -1 <= v' <= +1 # reorder the components to have them ordered by slowness d = (self.covs.covs[:self.output_dim, :self.output_dim, 0]).diagonal() idx = d.argsort()[::-1] self.RP = RP.take(idx, axis=1) # we could in principle clean up self.covs, as we do in SFANode or # PCANode, but this algorithm is not stable enough to rule out # possible problems. When these occcurs examining the covariance # matrices is often the only way to debug. #del self.covs mdp-3.3/mdp/nodes/jade.py000066400000000000000000000210501203131624700153010ustar00rootroot00000000000000__docformat__ = "restructuredtext en" import mdp from ica_nodes import ICANode numx, numx_rand, numx_linalg = mdp.numx, mdp.numx_rand, mdp.numx_linalg mult = mdp.utils.mult class JADENode(ICANode): """ Perform Independent Component Analysis using the JADE algorithm. Note that JADE is a batch-algorithm. This means that it needs all input data before it can start and compute the ICs. The algorithm is here given as a Node for convenience, but it actually accumulates all inputs it receives. Remember that to avoid running out of memory when you have many components and many time samples. JADE does not support the telescope mode. Main references: * Cardoso, Jean-Francois and Souloumiac, Antoine (1993). Blind beamforming for non Gaussian signals. Radar and Signal Processing, IEE Proceedings F, 140(6): 362-370. * Cardoso, Jean-Francois (1999). High-order contrasts for independent component analysis. Neural Computation, 11(1): 157-192. Original code contributed by: Gabriel Beckers (2008). History: - May 2005 version 1.8 for MATLAB released by Jean-Francois Cardoso - Dec 2007 MATLAB version 1.8 ported to Python/NumPy by Gabriel Beckers - Feb 15 2008 Python/NumPy version adapted for MDP by Gabriel Beckers """ def __init__(self, limit = 0.001, max_it=1000, verbose = False, whitened = False, white_comp = None, white_parm = None, input_dim = None, dtype = None): """ Input arguments: General: whitened -- Set whitened == True if input data are already whitened. Otherwise the node will whiten the data itself white_comp -- If whitened == False, you can set 'white_comp' to the number of whitened components to keep during the calculation (i.e., the input dimensions are reduced to white_comp by keeping the components of largest variance). 
white_parm -- a dictionary with additional parameters for whitening. It is passed directly to the WhiteningNode constructor. Ex: white_parm = { 'svd' : True } limit -- convergence threshold. Specific for JADE: max_it -- maximum number of iterations """ super(JADENode, self).__init__(limit, False, verbose, whitened, white_comp, white_parm, input_dim, dtype) self.max_it = max_it def core(self, data): # much of the code here is a more or less line by line translation of # the original matlab code by Jean-Francois Cardoso. append = numx.append arange = numx.arange arctan2 = numx.arctan2 array = numx.array concatenate = numx.concatenate cos = numx.cos sin = numx.sin sqrt = numx.sqrt dtype = self.dtype verbose = self.verbose max_it = self.max_it (T, m) = data.shape X = data if verbose: print "jade -> Estimating cumulant matrices" # Dim. of the space of real symm matrices dimsymm = (m*(m+1)) // 2 # number of cumulant matrices nbcm = dimsymm # Storage for cumulant matrices CM = numx.zeros((m, m*nbcm), dtype=dtype) R = numx.eye(m, dtype=dtype) # Temp for a cum. matrix Qij = numx.zeros((m, m), dtype=dtype) # Temp Xim = numx.zeros(m, dtype=dtype) # Temp Xijm = numx.zeros(m, dtype=dtype) # I am using a symmetry trick to save storage. I should write a short # note one of these days explaining what is going on here. # will index the columns of CM where to store the cum. mats. Range = arange(m) for im in xrange(m): Xim = X[:, im] Xijm = Xim*Xim # Note to myself: the -R on next line can be removed: it does not # affect the joint diagonalization criterion Qij = ( mult(Xijm*X.T, X) / float(T) - R - 2 * numx.outer(R[:,im], R[:,im]) ) CM[:, Range] = Qij Range += m for jm in xrange(im): Xijm = Xim*X[:, jm] Qij = ( sqrt(2) * mult(Xijm*X.T, X) / T - numx.outer(R[:,im], R[:,jm]) - numx.outer(R[:,jm], R[:,im]) ) CM[:, Range] = Qij Range += m # Now we have nbcm = m(m+1)/2 cumulants matrices stored in a big # m x m*nbcm array. 
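# For orientation, a usage sketch of the node as a whole (hypothetical
# data; JADE is a batch algorithm, so all training data is accumulated
# before the unmixing matrix is estimated):
#
#   >>> s = mdp.numx_rand.laplace(size=(5000, 3))  # super-Gaussian sources
#   >>> A = mdp.numx_rand.random((3, 3))           # random mixing matrix
#   >>> node = JADENode(limit=0.001)
#   >>> node.train(mdp.utils.mult(s, A))
#   >>> y = node.execute(mdp.utils.mult(s, A))     # estimated components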
# Joint diagonalization of the cumulant matrices # ============================================== V = numx.eye(m, dtype=dtype) Diag = numx.zeros(m, dtype=dtype) On = 0.0 Range = arange(m) for im in xrange(nbcm): Diag = numx.diag(CM[:, Range]) On = On + (Diag*Diag).sum(axis=0) Range += m Off = (CM*CM).sum(axis=0) - On # A statistically scaled threshold on `small" angles seuil = (self.limit*self.limit) / sqrt(T) # sweep number encore = True sweep = 0 # Total number of rotations updates = 0 # Number of rotations in a given seep upds = 0 g = numx.zeros((2, nbcm), dtype=dtype) gg = numx.zeros((2, 2), dtype=dtype) G = numx.zeros((2, 2), dtype=dtype) c = 0 s = 0 ton = 0 toff = 0 theta = 0 Gain = 0 # Joint diagonalization proper # ============================ if verbose: print "jade -> Contrast optimization by joint diagonalization" while encore: encore = False if verbose: print "jade -> Sweep #%3d" % sweep , sweep += 1 upds = 0 for p in xrange(m-1): for q in xrange(p+1, m): Ip = arange(p, m*nbcm, m) Iq = arange(q, m*nbcm, m) # computation of Givens angle g = concatenate([numx.atleast_2d(CM[p, Ip] - CM[q, Iq]), numx.atleast_2d(CM[p, Iq] + CM[q, Ip])]) gg = mult(g, g.T) ton = gg[0, 0] - gg[1, 1] toff = gg[0, 1] + gg[1, 0] theta = 0.5 * arctan2(toff, ton + sqrt(ton*ton+toff*toff)) Gain = (sqrt(ton * ton + toff * toff) - ton) / 4.0 # Givens update if abs(theta) > seuil: encore = True upds = upds + 1 c = cos(theta) s = sin(theta) G = array([[c, -s] , [s, c] ]) pair = array([p, q]) V[:, pair] = mult(V[:, pair], G) CM[pair, :] = mult(G.T, CM[pair, :]) CM[:, concatenate([Ip, Iq])]= append(c*CM[:, Ip]+ s*CM[:, Iq], -s*CM[:, Ip]+ c*CM[:, Iq], axis=1) On = On + Gain Off = Off - Gain if verbose: print "completed in %d rotations" % upds updates += upds if updates > max_it: err_msg = 'No convergence after %d iterations.' % max_it raise mdp.NodeException(err_msg) if verbose: print "jade -> Total of %d Givens rotations" % updates # A separating matrix # =================== # B is whitening matrix B = V.T # Permute the rows of the separating matrix B to get the most energetic # components first. Here the **signals** are normalized to unit # variance. Therefore, the sort is according to the norm of the # columns of A = pinv(B) if verbose: print "jade -> Sorting the components" A = numx_linalg.pinv(B) B = B[numx.argsort((A*A).sum(axis=0))[::-1], :] if verbose: print "jade -> Fixing the signs" b = B[:, 0] # just a trick to deal with sign == 0 signs = numx.sign(numx.sign(b)+0.1) B = mult(numx.diag(signs), B) self.filters = B.T return theta mdp-3.3/mdp/nodes/libsvm_classifier.py000066400000000000000000000115621203131624700201050ustar00rootroot00000000000000__docformat__ = "restructuredtext en" import mdp from mdp import numx from svm_classifiers import _SVMClassifier, _LabelNormalizer import svmutil as libsvmutil class LibSVMClassifier(_SVMClassifier): """ The ``LibSVMClassifier`` class acts as a wrapper around the LibSVM library for support vector machines. Information to the parameters can be found on http://www.csie.ntu.edu.tw/~cjlin/libsvm/ The class provides access to change kernel and svm type with a text string. Additionally ``self.parameter`` is exposed which allows to change all other svm parameters directly. This node depends on ``libsvm``. """ # The kernels and classifiers which LibSVM allows. 
kernels = ["RBF", "LINEAR", "POLY", "SIGMOID"] classifiers = ["C_SVC", "NU_SVC", "ONE_CLASS", "EPSILON_SVR", "NU_SVR"] def __init__(self, kernel=None, classifier=None, probability=True, params=None, input_dim=None, output_dim=None, dtype=None): """ kernel -- The kernel to use classifier -- The type of the SVM params -- a dict of parameters to be passed to the svm_parameter probability -- Must be set to True, if algorithms based on probability shall be used. """ if not params: params = {} # initialise the parameter and be quiet self.parameter = libsvmutil.svm_parameter("-q") if probability: # allow for probability estimates self.parameter.probability = 1 super(LibSVMClassifier, self).__init__(input_dim=input_dim, output_dim=output_dim, dtype=dtype) if kernel: self.set_kernel(kernel) if classifier: self.set_classifier(classifier) # set all other parameters for k, v in params.iteritems(): if not k in self.parameter._names: # check that the name is a valid parameter msg = "'{}' is not a valid parameter for libsvm".format(k) raise mdp.NodeException(msg) if hasattr(self.parameter, k): setattr(self.parameter, k, v) else: msg = "'svm_parameter' has no attribute {}".format(k) raise AttributeError(msg) def _get_supported_dtypes(self): """Return the list of dtypes selfupported by this node.""" # Support only float64 because of external library return ('float64',) def set_classifier(self, classifier): """ Sets the classifier. classifier -- A string with the name of the classifier which should be used. Possible values are in self.classifiers """ if classifier.upper() in self.classifiers: self.parameter.svm_type = getattr(libsvmutil, classifier.upper()) else: msg = "Classifier Type %s is unknown or not supported." % classifier raise TypeError(msg) def set_kernel(self, kernel): """ Sets the kernel. kernel -- A string with the name of the classifier which should be used. Possible values are in self.kernels """ if kernel.upper() in self.kernels: self.parameter.kernel_type = getattr(libsvmutil, kernel.upper()) else: msg = "Kernel Type %s is unknown or not supported." % kernel raise TypeError(msg) def _stop_training(self): super(LibSVMClassifier, self)._stop_training() self.normalizer = _LabelNormalizer(self.labels) labels = self.normalizer.normalize(self.labels.tolist()) features = self.data # Call svm training method. 
prob = libsvmutil.svm_problem(labels, features.tolist()) # Train self.model = libsvmutil.svm_train(prob, self.parameter) def _label(self, x): if isinstance(x, (list, tuple, numx.ndarray)): y = [0] * len(x) p_labs, p_acc, p_vals = libsvmutil.svm_predict(y, x.tolist(), self.model) return numx.array(p_labs) else: msg = "Data must be a sequence of vectors" raise mdp.NodeException(msg) def predict_probability(self, x): self._pre_execution_checks(x) if isinstance(x, (list, tuple, numx.ndarray)): return self._prob(x) else: return self._prob([x]) def _prob(self, x): y = [0] * len(x) p_labs, p_acc, p_vals = libsvmutil.svm_predict(y, x.tolist(), self.model, "-b 1") labels = self.model.get_labels() return [dict(zip(labels, ps)) for ps in p_vals] def _train(self, x, labels): super(LibSVMClassifier, self)._train(x, labels) mdp-3.3/mdp/nodes/lle_nodes.py000066400000000000000000000506341203131624700163540ustar00rootroot00000000000000__docformat__ = "restructuredtext en" from mdp import numx, numx_linalg, Cumulator, TrainingException, MDPWarning from mdp.utils import mult, nongeneral_svd, svd, sqrtm, symeig import warnings as _warnings # some useful functions sqrt = numx.sqrt # search XXX for locations where future work is needed ######################################################### # Locally Linear Embedding ######################################################### class LLENode(Cumulator): """Perform a Locally Linear Embedding analysis on the data. **Internal variables of interest** ``self.training_projection`` The LLE projection of the training data (defined when training finishes). ``self.desired_variance`` variance limit used to compute intrinsic dimensionality. Based on the algorithm outlined in *An Introduction to Locally Linear Embedding* by L. Saul and S. Roweis, using improvements suggested in *Locally Linear Embedding for Classification* by D. deRidder and R.P.W. Duin. References: Roweis, S. and Saul, L., Nonlinear dimensionality reduction by locally linear embedding, Science 290 (5500), pp. 2323-2326, 2000. Original code contributed by: Jake VanderPlas, University of Washington, """ def __init__(self, k, r=0.001, svd=False, verbose=False, input_dim=None, output_dim=None, dtype=None): """ :Arguments: k number of nearest neighbors to use r regularization constant; if ``None``, ``r`` is automatically computed using the method presented in deRidder and Duin; this method involves solving an eigenvalue problem for every data point, and can slow down the algorithm If specified, it multiplies the trace of the local covariance matrix of the distances, as in Saul & Roweis (faster) svd if true, use SVD to compute the projection matrix; SVD is slower but more stable verbose if true, displays information about the progress of the algorithm output_dim number of dimensions to output or a float between 0.0 and 1.0. In the latter case, ``output_dim`` specifies the desired fraction of variance to be explained, and the final number of output dimensions is known at the end of training (e.g., for ``output_dim=0.95`` the algorithm will keep as many dimensions as necessary in order to explain 95% of the input variance) """ if isinstance(output_dim, float) and output_dim <= 1: self.desired_variance = output_dim output_dim = None else: self.desired_variance = None super(LLENode, self).__init__(input_dim, output_dim, dtype) self.k = k self.r = r self.svd = svd self.verbose = verbose def _stop_training(self): Cumulator._stop_training(self) if self.verbose: msg = ('training LLE on %i points' ' in %i dimensions...' 
% (self.data.shape[0], self.data.shape[1])) print msg # some useful quantities M = self.data N = M.shape[0] k = self.k r = self.r # indices of diagonal elements W_diag_idx = numx.arange(N) Q_diag_idx = numx.arange(k) if k > N: err = ('k=%i must be less than or ' 'equal to number of training points N=%i' % (k, N)) raise TrainingException(err) # determines number of output dimensions: if desired_variance # is specified, we need to learn it from the data. Otherwise, # it's easy learn_outdim = False if self.output_dim is None: if self.desired_variance is None: self.output_dim = self.input_dim else: learn_outdim = True # do we need to automatically determine the regularization term? auto_reg = r is None # determine number of output dims, precalculate useful stuff if learn_outdim: Qs, sig2s, nbrss = self._adjust_output_dim() # build the weight matrix #XXX future work: #XXX for faster implementation, W should be a sparse matrix W = numx.zeros((N, N), dtype=self.dtype) if self.verbose: print ' - constructing [%i x %i] weight matrix...' % W.shape for row in range(N): if learn_outdim: Q = Qs[row, :, :] nbrs = nbrss[row, :] else: # ----------------------------------------------- # find k nearest neighbors # ----------------------------------------------- M_Mi = M-M[row] nbrs = numx.argsort((M_Mi**2).sum(1))[1:k+1] M_Mi = M_Mi[nbrs] # compute covariance matrix of distances Q = mult(M_Mi, M_Mi.T) # ----------------------------------------------- # compute weight vector based on neighbors # ----------------------------------------------- #Covariance matrix may be nearly singular: # add a diagonal correction to prevent numerical errors if auto_reg: # automatic mode: correction is equal to the sum of # the (d_in-d_out) unused variances (as in deRidder & # Duin) if learn_outdim: sig2 = sig2s[row, :] else: sig2 = svd(M_Mi, compute_uv=0)**2 r = numx.sum(sig2[self.output_dim:]) Q[Q_diag_idx, Q_diag_idx] += r else: # Roweis et al instead use "a correction that # is small compared to the trace" e.g.: # r = 0.001 * float(Q.trace()) # this is equivalent to assuming 0.1% of the variance is unused Q[Q_diag_idx, Q_diag_idx] += r*Q.trace() #solve for weight # weight is w such that sum(Q_ij * w_j) = 1 for all i # XXX refcast is due to numpy bug: floats become double w = self._refcast(numx_linalg.solve(Q, numx.ones(k))) w /= w.sum() #update row of the weight matrix W[nbrs, row] = w if self.verbose: msg = (' - finding [%i x %i] null space of weight matrix\n' ' (may take a while)...' % (self.output_dim, N)) print msg self.W = W.copy() #to find the null space, we need the bottom d+1 # eigenvectors of (W-I).T*(W-I) #Compute this using the svd of (W-I): W[W_diag_idx, W_diag_idx] -= 1. #XXX future work: #XXX use of upcoming ARPACK interface for bottom few eigenvectors #XXX of a sparse matrix will significantly increase the speed #XXX of the next step if self.svd: sig, U = nongeneral_svd(W.T, range=(2, self.output_dim+1)) else: # the following code does the same computation, but uses # symeig, which computes only the required eigenvectors, and # is much faster. However, it could also be more unstable... 
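# For orientation, a usage sketch of the node from user code (hypothetical
# data; k and output_dim are arbitrary illustrative values):
#
#   >>> lle = LLENode(k=10, output_dim=2)
#   >>> lle.train(data)                    # data: (n_points, n_dim) array
#   >>> lle.stop_training()
#   >>> emb = lle.training_projection      # embedding of the training data
#   >>> new_emb = lle.execute(new_points)  # map previously unseen points
#
# The symeig-based null-space computation follows.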
WW = mult(W, W.T) # regularizes the eigenvalues, does not change the eigenvectors: WW[W_diag_idx, W_diag_idx] += 0.1 sig, U = symeig(WW, range=(2, self.output_dim+1), overwrite=True) self.training_projection = U def _adjust_output_dim(self): # this function is called if we need to compute the number of # output dimensions automatically; some quantities that are # useful later are pre-calculated to spare precious time if self.verbose: print ' - adjusting output dim:' #otherwise, we need to compute output_dim # from desired_variance M = self.data k = self.k N, d_in = M.shape m_est_array = [] Qs = numx.zeros((N, k, k)) sig2s = numx.zeros((N, d_in)) nbrss = numx.zeros((N, k), dtype='i') for row in range(N): #----------------------------------------------- # find k nearest neighbors #----------------------------------------------- M_Mi = M-M[row] nbrs = numx.argsort((M_Mi**2).sum(1))[1:k+1] M_Mi = M_Mi[nbrs] # compute covariance matrix of distances Qs[row, :, :] = mult(M_Mi, M_Mi.T) nbrss[row, :] = nbrs #----------------------------------------------- # singular values of M_Mi give the variance: # use this to compute intrinsic dimensionality # at this point #----------------------------------------------- sig2 = (svd(M_Mi, compute_uv=0))**2 sig2s[row, :sig2.shape[0]] = sig2 #----------------------------------------------- # use sig2 to compute intrinsic dimensionality of the # data at this neighborhood. The dimensionality is the # number of eigenvalues needed to sum to the total # desired variance #----------------------------------------------- sig2 /= sig2.sum() S = sig2.cumsum() m_est = S.searchsorted(self.desired_variance) if m_est > 0: m_est += (self.desired_variance-S[m_est-1])/sig2[m_est] else: m_est = self.desired_variance/sig2[m_est] m_est_array.append(m_est) m_est_array = numx.asarray(m_est_array) self.output_dim = int( numx.ceil( numx.median(m_est_array) ) ) if self.verbose: msg = (' output_dim = %i' ' for variance of %.2f' % (self.output_dim, self.desired_variance)) print msg return Qs, sig2s, nbrss def _execute(self, x): #---------------------------------------------------- # similar algorithm to that within self.stop_training() # refer there for notes & comments on code #---------------------------------------------------- N = self.data.shape[0] Nx = x.shape[0] W = numx.zeros((Nx, N), dtype=self.dtype) k, r = self.k, self.r d_out = self.output_dim Q_diag_idx = numx.arange(k) for row in range(Nx): #find nearest neighbors of x in M M_xi = self.data-x[row] nbrs = numx.argsort( (M_xi**2).sum(1) )[:k] M_xi = M_xi[nbrs] #find corrected covariance matrix Q Q = mult(M_xi, M_xi.T) if r is None and k > d_out: sig2 = (svd(M_xi, compute_uv=0))**2 r = numx.sum(sig2[d_out:]) Q[Q_diag_idx, Q_diag_idx] += r if r is not None: Q[Q_diag_idx, Q_diag_idx] += r #solve for weights w = self._refcast(numx_linalg.solve(Q , numx.ones(k))) w /= w.sum() W[row, nbrs] = w #multiply weights by result of SVD from training return numx.dot(W, self.training_projection) @staticmethod def is_trainable(): return True @staticmethod def is_invertible(): return False ######################################################### # Hessian LLE ######################################################### # Modified Gram-Schmidt def _mgs(a): m, n = a.shape v = a.copy() r = numx.zeros((n, n)) for i in range(n): r[i, i] = numx_linalg.norm(v[:, i]) v[:, i] = v[:, i]/r[i, i] for j in range(i+1, n): r[i, j] = mult(v[:, i], v[:, j]) v[:, j] = v[:, j] - r[i, j]*v[:, i] # q is v return v, r class HLLENode(LLENode): """Perform a Hessian 
Locally Linear Embedding analysis on the data. **Internal variables of interest** ``self.training_projection`` the HLLE projection of the training data (defined when training finishes) ``self.desired_variance`` variance limit used to compute intrinsic dimensionality. Implementation based on algorithm outlined in Donoho, D. L., and Grimes, C., Hessian Eigenmaps: new locally linear embedding techniques for high-dimensional data, Proceedings of the National Academy of Sciences 100(10): 5591-5596, 2003. Original code contributed by: Jake Vanderplas, University of Washington """ #---------------------------------------------------- # Note that many methods ar inherited from LLENode, # including _execute(), _adjust_output_dim(), etc. # The main advantage of the Hessian estimator is to # limit distortions of the input manifold. Once # the model has been trained, it is sufficient (and # much less computationally intensive) to determine # projections for new points using the LLE framework. #---------------------------------------------------- def __init__(self, k, r=0.001, svd=False, verbose=False, input_dim=None, output_dim=None, dtype=None): """ :Keyword arguments: k number of nearest neighbors to use; the node will raise an MDPWarning if k is smaller than k >= 1 + output_dim + output_dim*(output_dim+1)/2, because in this case a less efficient computation must be used, and the ablgorithm can become unstable r regularization constant; as opposed to LLENode, it is not possible to compute this constant automatically; it is only used during execution svd if true, use SVD to compute the projection matrix; SVD is slower but more stable verbose if true, displays information about the progress of the algorithm output_dim number of dimensions to output or a float between 0.0 and 1.0. In the latter case, output_dim specifies the desired fraction of variance to be exaplained, and the final number of output dimensions is known at the end of training (e.g., for 'output_dim=0.95' the algorithm will keep as many dimensions as necessary in order to explain 95% of the input variance) """ LLENode.__init__(self, k, r, svd, verbose, input_dim, output_dim, dtype) def _stop_training(self): Cumulator._stop_training(self) k = self.k M = self.data N = M.shape[0] if k > N: err = ('k=%i must be less than' ' or equal to number of training points N=%i' % (k, N)) raise TrainingException(err) if self.verbose: print 'performing HLLE on %i points in %i dimensions...' % M.shape # determines number of output dimensions: if desired_variance # is specified, we need to learn it from the data. Otherwise, # it's easy learn_outdim = False if self.output_dim is None: if self.desired_variance is None: self.output_dim = self.input_dim else: learn_outdim = True # determine number of output dims, precalculate useful stuff if learn_outdim: Qs, sig2s, nbrss = self._adjust_output_dim() d_out = self.output_dim #dp = d_out + (d_out-1) + (d_out-2) + ... dp = d_out*(d_out+1)/2 if min(k, N) <= d_out: err = ('k=%i and n=%i (number of input data points) must be' ' larger than output_dim=%i' % (k, N, d_out)) raise TrainingException(err) if k < 1+d_out+dp: wrn = ('The number of neighbours, k=%i, is smaller than' ' 1 + output_dim + output_dim*(output_dim+1)/2 = %i,' ' which might result in unstable results.' % (k, 1+d_out+dp)) _warnings.warn(wrn, MDPWarning) #build the weight matrix #XXX for faster implementation, W should be a sparse matrix W = numx.zeros((N, dp*N), dtype=self.dtype) if self.verbose: print ' - constructing [%i x %i] weight matrix...' 
% W.shape for row in range(N): if learn_outdim: nbrs = nbrss[row, :] else: # ----------------------------------------------- # find k nearest neighbors # ----------------------------------------------- M_Mi = M-M[row] nbrs = numx.argsort((M_Mi**2).sum(1))[1:k+1] #----------------------------------------------- # center the neighborhood using the mean #----------------------------------------------- nbrhd = M[nbrs] # this makes a copy nbrhd -= nbrhd.mean(0) #----------------------------------------------- # compute local coordinates # using a singular value decomposition #----------------------------------------------- U, sig, VT = svd(nbrhd) nbrhd = U.T[:d_out] del VT #----------------------------------------------- # build Hessian estimator #----------------------------------------------- Yi = numx.zeros((dp, k), dtype=self.dtype) ct = 0 for i in range(d_out): Yi[ct:ct+d_out-i, :] = nbrhd[i] * nbrhd[i:, :] ct += d_out-i Yi = numx.concatenate([numx.ones((1, k), dtype=self.dtype), nbrhd, Yi], 0) #----------------------------------------------- # orthogonalize linear and quadratic forms # with QR factorization # and make the weights sum to 1 #----------------------------------------------- if k >= 1+d_out+dp: Q, R = numx_linalg.qr(Yi.T) w = Q[:, d_out+1:d_out+1+dp] else: q, r = _mgs(Yi.T) w = q[:, -dp:] S = w.sum(0) #sum along columns #if S[i] is too small, set it equal to 1.0 # this prevents weights from blowing up S[numx.where(numx.absolute(S)<1E-4)] = 1.0 #print w.shape, S.shape, (w/S).shape #print W[nbrs, row*dp:(row+1)*dp].shape W[nbrs, row*dp:(row+1)*dp] = w / S #----------------------------------------------- # To find the null space, we want the # first d+1 eigenvectors of W.T*W # Compute this using an svd of W #----------------------------------------------- if self.verbose: msg = (' - finding [%i x %i] ' 'null space of weight matrix...' % (d_out, N)) print msg #XXX future work: #XXX use of upcoming ARPACK interface for bottom few eigenvectors #XXX of a sparse matrix will significantly increase the speed #XXX of the next step if self.svd: sig, U = nongeneral_svd(W.T, range=(2, d_out+1)) Y = U*numx.sqrt(N) else: WW = mult(W, W.T) # regularizes the eigenvalues, does not change the eigenvectors: W_diag_idx = numx.arange(N) WW[W_diag_idx, W_diag_idx] += 0.01 sig, U = symeig(WW, range=(2, self.output_dim+1), overwrite=True) Y = U*numx.sqrt(N) del WW del W #----------------------------------------------- # Normalize Y # # Alternative way to do it: # we need R = (Y.T*Y)^(-1/2) # do this with an SVD of Y del VT # Y = U*sig*V.T # Y.T*Y = (V*sig.T*U.T) * (U*sig*V.T) # = V * (sig*sig.T) * V.T # = V * sig^2 V.T # so # R = V * sig^-1 * V.T # The code is: # U, sig, VT = svd(Y) # del U # S = numx.diag(sig**-1) # self.training_projection = mult(Y, mult(VT.T, mult(S, VT))) #----------------------------------------------- if self.verbose: print ' - normalizing null space...' C = sqrtm(mult(Y.T, Y)) self.training_projection = mult(Y, C) mdp-3.3/mdp/nodes/misc_nodes.py000066400000000000000000000675011203131624700165340ustar00rootroot00000000000000from __future__ import with_statement __docformat__ = "restructuredtext en" import mdp from mdp import numx, utils, Node, NodeException, PreserveDimNode import cPickle as pickle import pickle as real_pickle class IdentityNode(PreserveDimNode): """Execute returns the input data and the node is not trainable. This node can be instantiated and is for example useful in complex network layouts. 
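A minimal sketch (illustrative data):

>>> node = IdentityNode()
>>> x = mdp.numx_rand.random((4, 3))
>>> (node.execute(x) == x).all()
True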
""" def _get_supported_dtypes(self): """Return the list of dtypes supported by this node.""" return (mdp.utils.get_dtypes('AllFloat') + mdp.utils.get_dtypes('AllInteger') + mdp.utils.get_dtypes('Character')) @staticmethod def is_trainable(): return False class OneDimensionalHitParade(object): """ Class to produce hit-parades (i.e., a list of the largest and smallest values) out of a one-dimensional time-series. """ def __init__(self, n, d, real_dtype="d", integer_dtype="l"): """ Input arguments: n -- Number of maxima and minima to remember d -- Minimum gap between two hits real_dtype -- dtype of sequence items integer_dtype -- dtype of sequence indices Note: be careful with dtypes! """ self.n = int(n) self.d = int(d) self.iM = numx.zeros((n, ), dtype=integer_dtype) self.im = numx.zeros((n, ), dtype=integer_dtype) real_dtype = numx.dtype(real_dtype) if real_dtype in mdp.utils.get_dtypes('AllInteger'): max_num = numx.iinfo(real_dtype).max min_num = numx.iinfo(real_dtype).min else: max_num = numx.finfo(real_dtype).max min_num = numx.finfo(real_dtype).min self.M = numx.array([min_num]*n, dtype=real_dtype) self.m = numx.array([max_num]*n, dtype=real_dtype) self.lM = 0 self.lm = 0 def update(self, inp): """ Input arguments: inp -- tuple (time-series, time-indices) """ (x, ix) = inp rows = len(x) d = self.d M = self.M m = self.m iM = self.iM im = self.im lM = self.lM lm = self.lm for i in xrange(rows): k1 = M.argmin() k2 = m.argmax() if x[i] > M[k1]: if ix[i]-iM[lM] <= d and x[i] > M[lM]: M[lM] = x[i] iM[lM] = ix[i] elif ix[i]-iM[lM] > d: M[k1] = x[i] iM[k1] = ix[i] lM = k1 if x[i] < m[k2]: if ix[i]-im[lm] <= d and x[i] < m[lm]: m[lm] = x[i] im[lm] = ix[i] elif ix[i]-im[lm] > d: m[k2] = x[i] im[k2] = ix[i] lm = k2 self.M = M self.m = m self.iM = iM self.im = im self.lM = lM self.lm = lm def get_maxima(self): """ Return the tuple (maxima, time-indices). Maxima are sorted in descending order. """ iM = self.iM M = self.M sort = M.argsort() return M[sort[::-1]], iM[sort[::-1]] def get_minima(self): """ Return the tuple (minima, time-indices). Minima are sorted in ascending order. """ im = self.im m = self.m sort = m.argsort() return m[sort], im[sort] class HitParadeNode(PreserveDimNode): """Collect the first ``n`` local maxima and minima of the training signal which are separated by a minimum gap ``d``. This is an analysis node, i.e. the data is analyzed during training and the results are stored internally. Use the ``get_maxima`` and ``get_minima`` methods to access them. """ def __init__(self, n, d=1, input_dim=None, output_dim=None, dtype=None): """ Input arguments: n -- Number of maxima and minima to store d -- Minimum gap between two maxima or two minima """ super(HitParadeNode, self).__init__(input_dim=input_dim, output_dim=output_dim, dtype=dtype) self.n = int(n) self.d = int(d) self.itype = 'int64' self.hit = None self.tlen = 0 def _set_input_dim(self, n): self._input_dim = n self.output_dim = n def _get_supported_dtypes(self): """Return the list of dtypes supported by this node.""" return (mdp.utils.get_dtypes('Float') + mdp.utils.get_dtypes('AllInteger')) def _train(self, x): hit = self.hit old_tlen = self.tlen if hit is None: hit = [OneDimensionalHitParade(self.n, self.d, self.dtype, self.itype) for c in range(self.input_dim)] tlen = old_tlen + x.shape[0] indices = numx.arange(old_tlen, tlen) for c in range(self.input_dim): hit[c].update((x[:, c], indices)) self.hit = hit self.tlen = tlen def get_maxima(self): """ Return the tuple (maxima, indices). Maxima are sorted in descending order. 
If the training phase has not been completed yet, call stop_training. """ self._if_training_stop_training() cols = self.input_dim n = self.n hit = self.hit iM = numx.zeros((n, cols), dtype=self.itype) M = numx.ones((n, cols), dtype=self.dtype) for c in range(cols): M[:, c], iM[:, c] = hit[c].get_maxima() return M, iM def get_minima(self): """ Return the tuple (minima, indices). Minima are sorted in ascending order. If the training phase has not been completed yet, call stop_training. """ self._if_training_stop_training() cols = self.input_dim n = self.n hit = self.hit im = numx.zeros((n, cols), dtype=self.itype) m = numx.ones((n, cols), dtype=self.dtype) for c in range(cols): m[:, c], im[:, c] = hit[c].get_minima() return m, im class TimeFramesNode(Node): """Copy delayed version of the input signal on the space dimensions. For example, for ``time_frames=3`` and ``gap=2``:: [ X(1) Y(1) [ X(1) Y(1) X(3) Y(3) X(5) Y(5) X(2) Y(2) X(2) Y(2) X(4) Y(4) X(6) Y(6) X(3) Y(3) --> X(3) Y(3) X(5) Y(5) X(7) Y(7) X(4) Y(4) X(4) Y(4) X(6) Y(6) X(8) Y(8) X(5) Y(5) ... ... ... ... ... ... ] X(6) Y(6) X(7) Y(7) X(8) Y(8) ... ... ] It is not always possible to invert this transformation (the transformation is not surjective. However, the ``pseudo_inverse`` method does the correct thing when it is indeed possible. """ def __init__(self, time_frames, gap=1, input_dim=None, dtype=None): """ Input arguments: time_frames -- Number of delayed copies gap -- Time delay between the copies """ self.time_frames = time_frames super(TimeFramesNode, self).__init__(input_dim=input_dim, output_dim=None, dtype=dtype) self.gap = gap def _get_supported_dtypes(self): """Return the list of dtypes supported by this node.""" return (mdp.utils.get_dtypes('AllFloat') + mdp.utils.get_dtypes('AllInteger') + mdp.utils.get_dtypes('Character')) @staticmethod def is_trainable(): return False @staticmethod def is_invertible(): return False def _set_input_dim(self, n): self._input_dim = n self._output_dim = n*self.time_frames def _set_output_dim(self, n): msg = 'Output dim can not be explicitly set!' raise NodeException(msg) def _execute(self, x): gap = self.gap tf = x.shape[0] - (self.time_frames-1)*gap rows = self.input_dim cols = self.output_dim y = numx.zeros((tf, cols), dtype=self.dtype) for frame in range(self.time_frames): y[:, frame*rows:(frame+1)*rows] = x[gap*frame:gap*frame+tf, :] return y def pseudo_inverse(self, y): """This function returns a pseudo-inverse of the execute frame. y == execute(x) only if y belongs to the domain of execute and has been computed with a sufficently large x. If gap > 1 some of the last rows will be filled with zeros. """ self._if_training_stop_training() # set the output dimension if necessary if not self.output_dim: # if the input_dim is not defined, raise an exception if not self.input_dim: errstr = ("Number of input dimensions undefined. Inversion" "not possible.") raise NodeException(errstr) self.outputdim = self.input_dim # control the dimension of y self._check_output(y) # cast y = self._refcast(y) gap = self.gap exp_length = y.shape[0] cols = self.input_dim rest = (self.time_frames-1)*gap rows = exp_length + rest x = numx.zeros((rows, cols), dtype=self.dtype) x[:exp_length, :] = y[:, :cols] count = 1 # Note that if gap > 1 some of the last rows will be filled with zeros! 
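# For reference, a sketch of the forward transform this method inverts
# (illustrative numbers; see the class docstring for the layout):
#
#   >>> tf = TimeFramesNode(time_frames=3, gap=2)
#   >>> x = mdp.numx.arange(16.).reshape(8, 2)
#   >>> y = tf.execute(x)
#   >>> y.shape                  # 8 - (3-1)*2 rows, 2*3 columns
#   (4, 6)
#   >>> tf.pseudo_inverse(y).shape
#   (8, 2)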
block_sz = min(gap, exp_length) for row in range(max(exp_length, gap), rows, gap): x[row:row+block_sz, :] = y[-block_sz:, count*cols:(count+1)*cols] count += 1 return x class TimeDelayNode(TimeFramesNode): """ Copy delayed version of the input signal on the space dimensions. For example, for ``time_frames=3`` and ``gap=2``:: [ X(1) Y(1) [ X(1) Y(1) 0 0 0 0 X(2) Y(2) X(2) Y(2) 0 0 0 0 X(3) Y(3) --> X(3) Y(3) X(1) Y(1) 0 0 X(4) Y(4) X(4) Y(4) X(2) Y(2) 0 0 X(5) Y(5) X(5) Y(5) X(3) Y(3) X(1) Y(1) X(6) Y(6) ... ... ... ... ... ... ] X(7) Y(7) X(8) Y(8) ... ... ] This node provides similar functionality as the ``TimeFramesNode``, only that it performs a time embedding into the past rather than into the future. See ``TimeDelaySlidingWindowNode`` for a sliding window delay node for application in a non-batch manner. Original code contributed by Sebastian Hoefer. Dec 31, 2010 """ def __init__(self, time_frames, gap=1, input_dim=None, dtype=None): """ Input arguments: time_frames -- Number of delayed copies gap -- Time delay between the copies """ super(TimeDelayNode, self).__init__(time_frames, gap, input_dim, dtype) def _execute(self, x): gap = self.gap rows = x.shape[0] cols = self.output_dim n = self.input_dim y = numx.zeros((rows, cols), dtype=self.dtype) for frame in range(self.time_frames): y[gap*frame:, frame*n:(frame+1)*n] = x[:rows-gap*frame, :] return y def pseudo_inverse(self, y): raise NotImplementedError class TimeDelaySlidingWindowNode(TimeDelayNode): """ ``TimeDelaySlidingWindowNode`` is an alternative to ``TimeDelayNode`` which should be used for online learning/execution. Whereas the ``TimeDelayNode`` works in a batch manner, for online application a sliding window is necessary which yields only one row per call. Applied to the same data the collection of all returned rows of the ``TimeDelaySlidingWindowNode`` is equivalent to the result of the ``TimeDelayNode``. Original code contributed by Sebastian Hoefer. Dec 31, 2010 """ def __init__(self, time_frames, gap=1, input_dim=None, dtype=None): """ Input arguments: time_frames -- Number of delayed copies gap -- Time delay between the copies """ self.time_frames = time_frames self.gap = gap super(TimeDelaySlidingWindowNode, self).__init__(time_frames, gap, input_dim, dtype) self.sliding_wnd = None self.cur_idx = 0 self.slide = False def _init_sliding_window(self): rows = self.gap+1 cols = self.input_dim*self.time_frames self.sliding_wnd = numx.zeros((rows, cols), dtype=self.dtype) def _execute(self, x): assert x.shape[0] == 1 if self.sliding_wnd == None: self._init_sliding_window() gap = self.gap rows = self.sliding_wnd.shape[0] cols = self.output_dim n = self.input_dim new_row = numx.zeros(cols, dtype=self.dtype) new_row[:n] = x # Slide if self.slide: self.sliding_wnd[:-1, :] = self.sliding_wnd[1:, :] # Delay if self.cur_idx-gap >= 0: new_row[n:] = self.sliding_wnd[self.cur_idx-gap, :-n] # Add new row to matrix self.sliding_wnd[self.cur_idx, :] = new_row if self.cur_idx < rows-1: self.cur_idx = self.cur_idx+1 else: self.slide = True return new_row[numx.newaxis,:] class EtaComputerNode(Node): """Compute the eta values of the normalized training data. The delta value of a signal is a measure of its temporal variation, and is defined as the mean of the derivative squared, i.e. ``delta(x) = mean(dx/dt(t)^2)``. ``delta(x)`` is zero if ``x`` is a constant signal, and increases if the temporal variation of the signal is bigger. 
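As a small worked case (illustrative): for a zero-mean, unit-variance
signal that alternates between -1 and +1, the derivative is +-2 at every
step, so ``delta = 4``.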
The eta value is a more intuitive measure of temporal variation, defined as:: eta(x) = T/(2*pi) * sqrt(delta(x)) If ``x`` is a signal of length ``T`` which consists of a sine function that accomplishes exactly ``N`` oscillations, then ``eta(x)=N``. ``EtaComputerNode`` normalizes the training data to have unit variance, such that it is possible to compare the temporal variation of two signals independently from their scaling. Reference: Wiskott, L. and Sejnowski, T.J. (2002). Slow Feature Analysis: Unsupervised Learning of Invariances, Neural Computation, 14(4):715-770. Important: if a data chunk is tlen data points long, this node is going to consider only the first tlen-1 points together with their derivatives. This means in particular that the variance of the signal is not computed on all data points. This behavior is compatible with that of ``SFANode``. This is an analysis node, i.e. the data is analyzed during training and the results are stored internally. Use the method ``get_eta`` to access them. """ def __init__(self, input_dim=None, dtype=None): super(EtaComputerNode, self).__init__(input_dim, None, dtype) self._initialized = 0 def _set_input_dim(self, n): self._input_dim = n self.output_dim = n def _init_internals(self): input_dim = self.input_dim self._mean = numx.zeros((input_dim,), dtype='d') self._var = numx.zeros((input_dim,), dtype='d') self._tlen = 0 self._diff2 = numx.zeros((input_dim,), dtype='d') self._initialized = 1 def _train(self, data): # here SignalNode.train makes an automatic refcast if not self._initialized: self._init_internals() rdata = data[:-1] self._mean += rdata.sum(axis=0) self._var += (rdata*rdata).sum(axis=0) self._tlen += rdata.shape[0] td_data = utils.timediff(data) self._diff2 += (td_data*td_data).sum(axis=0) def _stop_training(self): var_tlen = self._tlen-1 # unbiased var = (self._var - self._mean*self._mean/self._tlen)/var_tlen # biased #var = (self._var - self._mean*self._mean/self._tlen)/self._tlen # old formula: wrong! is neither biased nor unbiased #var = (self._var/var_tlen) - (self._mean/self._tlen)**2 self._var = var delta = (self._diff2/self._tlen)/var self._delta = delta self._eta = numx.sqrt(delta)/(2*numx.pi) def get_eta(self, t=1): """Return the eta values of the data received during the training phase. If the training phase has not been completed yet, call stop_training. :Arguments: t Sampling frequency in Hz. The original definition in (Wiskott and Sejnowski, 2002) is obtained for ``t=self._tlen``, while for ``t=1`` (default), this corresponds to the beta-value defined in (Berkes and Wiskott, 2005). """ self._if_training_stop_training() return self._refcast(self._eta*t) class NoiseNode(PreserveDimNode): """Inject multiplicative or additive noise into the input data. Original code contributed by Mathias Franzius. """ def __init__(self, noise_func=mdp.numx_rand.normal, noise_args=(0, 1), noise_type='additive', input_dim=None, output_dim=None, dtype=None): """ Add noise to input signals. :Arguments: noise_func A function that generates noise. It must take a ``size`` keyword argument and return a random array of that size. Default is normal noise. noise_args Tuple of additional arguments passed to `noise_func`. Default is (0,1) for (mean, standard deviation) of the normal distribution. noise_type Either ``'additive'`` or ``'multiplicative'``. 'additive' returns ``x + noise``. 'multiplicative' returns ``x * (1 + noise)`` Default is ``'additive'``. 
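A minimal sketch (illustrative parameters):

>>> noise_node = NoiseNode(noise_args=(0., 0.1))
>>> y = noise_node.execute(mdp.numx.zeros((100, 5)))
>>> y.shape
(100, 5)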
""" super(NoiseNode, self).__init__(input_dim=input_dim, output_dim=output_dim, dtype=dtype) self.noise_func = noise_func self.noise_args = noise_args valid_noise_types = ['additive', 'multiplicative'] if noise_type not in valid_noise_types: err_str = '%s is not a valid noise type' % str(noise_type) raise NodeException(err_str) else: self.noise_type = noise_type def _get_supported_dtypes(self): """Return the list of dtypes supported by this node.""" return (mdp.utils.get_dtypes('Float') + mdp.utils.get_dtypes('AllInteger')) @staticmethod def is_trainable(): return False @staticmethod def is_invertible(): return False def _execute(self, x): noise_mat = self._refcast(self.noise_func(*self.noise_args, **{'size': x.shape})) if self.noise_type == 'additive': return x+noise_mat elif self.noise_type == 'multiplicative': return x*(1.+noise_mat) def save(self, filename, protocol = -1): """Save a pickled serialization of the node to 'filename'. If 'filename' is None, return a string. Note: the pickled Node is not guaranteed to be upward or backward compatible.""" if filename is None: # cPickle seems to create an error, probably due to the # self.noise_func attribute. return real_pickle.dumps(self, protocol) else: # if protocol != 0 open the file in binary mode mode = 'w' if protocol == 0 else 'wb' with open(filename, mode) as flh: real_pickle.dump(self, flh, protocol) class NormalNoiseNode(PreserveDimNode): """Special version of ``NoiseNode`` for Gaussian additive noise. Unlike ``NoiseNode`` it does not store a noise function reference but simply uses ``numx_rand.normal``. """ def __init__(self, noise_args=(0, 1), input_dim=None, output_dim=None, dtype=None): """Set the noise parameters. noise_args -- Tuple of (mean, standard deviation) for the normal distribution, default is (0,1). """ super(NormalNoiseNode, self).__init__(input_dim=input_dim, output_dim=output_dim, dtype=dtype) self.noise_args = noise_args @staticmethod def is_trainable(): return False @staticmethod def is_invertible(): return False def _execute(self, x): noise = self._refcast(mdp.numx_rand.normal(size=x.shape) * self.noise_args[1] + self.noise_args[0]) return x + noise class CutoffNode(PreserveDimNode): """Node to cut off values at specified bounds. Works similar to ``numpy.clip``, but also works when only a lower or upper bound is specified. """ def __init__(self, lower_bound=None, upper_bound=None, input_dim=None, output_dim=None, dtype=None): """Initialize node. :Parameters: lower_bound Data values below this are cut to the ``lower_bound`` value. If ``lower_bound`` is ``None`` no cutoff is performed. upper_bound Works like ``lower_bound``. """ super(CutoffNode, self).__init__(input_dim=input_dim, output_dim=output_dim, dtype=dtype) self.lower_bound = lower_bound self.upper_bound = upper_bound @staticmethod def is_trainable(): return False @staticmethod def is_invertible(): return False def _get_supported_dtypes(self): return (mdp.utils.get_dtypes('Float') + mdp.utils.get_dtypes('AllInteger')) def _execute(self, x): """Return the clipped data.""" # n.clip() does not work, since it does not accept None for one bound if self.lower_bound is not None: x = numx.where(x >= self.lower_bound, x, self.lower_bound) if self.upper_bound is not None: x = numx.where(x <= self.upper_bound, x, self.upper_bound) return x class HistogramNode(PreserveDimNode): """Node which stores a history of the data during its training phase. The data history is stored in ``self.data_hist`` and can also be deleted to free memory. 
Alternatively it can be automatically pickled to disk. Note that data is only stored during training. """ def __init__(self, hist_fraction=1.0, hist_filename=None, input_dim=None, output_dim=None, dtype=None): """Initialize the node. hist_fraction -- Defines the fraction of the data that is stored randomly. hist_filename -- Filename for the file to which the data history will be pickled after training. The data is pickled when stop_training is called and data_hist is then cleared (to free memory). If filename is None (default value) then data_hist is not cleared and can be directly used after training. """ super(HistogramNode, self).__init__(input_dim=input_dim, output_dim=output_dim, dtype=dtype) self._hist_filename = hist_filename self.hist_fraction = hist_fraction self.data_hist = None # stores the data history def _get_supported_dtypes(self): return (mdp.utils.get_dtypes('AllFloat') + mdp.utils.get_dtypes('AllInteger') + mdp.utils.get_dtypes('Character')) def _train(self, x): """Store the history data.""" if self.hist_fraction < 1.0: x = x[numx.random.random(len(x)) < self.hist_fraction] if self.data_hist is not None: self.data_hist = numx.concatenate([self.data_hist, x]) else: self.data_hist = x def _stop_training(self): """Pickle the histogram data to file and clear it if required.""" super(HistogramNode, self)._stop_training() if self._hist_filename: pickle_file = open(self._hist_filename, "wb") try: pickle.dump(self.data_hist, pickle_file, protocol=-1) finally: pickle_file.close( ) self.data_hist = None class AdaptiveCutoffNode(HistogramNode): """Node which uses the data history during training to learn cutoff values. As opposed to the simple ``CutoffNode``, a different cutoff value is learned for each data coordinate. For example if an upper cutoff fraction of 0.05 is specified, then the upper cutoff bound is set so that the upper 5% of the training data would have been clipped (in each dimension). The cutoff bounds are then applied during execution. This node also works as a ``HistogramNode``, so the histogram data is stored. When ``stop_training`` is called the cutoff values for each coordinate are calculated based on the collected histogram data. """ def __init__(self, lower_cutoff_fraction=None, upper_cutoff_fraction=None, hist_fraction=1.0, hist_filename=None, input_dim=None, output_dim=None, dtype=None): """Initialize the node. :Parameters: lower_cutoff_fraction Fraction of data that will be cut off after the training phase (assuming the data distribution does not change). If set to ``None`` (default value) no cutoff is performed. upper_cutoff_fraction Works like `lower_cutoff_fraction`. hist_fraction Defines the fraction of the data that is stored for the histogram. hist_filename Filename for the file to which the data history will be pickled after training. The data is pickled when `stop_training` is called and ``data_hist`` is then cleared (to free memory). If filename is ``None`` (default value) then ``data_hist`` is not cleared and can be directly used after training. 
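For example (a minimal sketch; the learned bounds depend on the training data)::

    import mdp
    import numpy as np

    node = mdp.nodes.AdaptiveCutoffNode(lower_cutoff_fraction=0.05,
                                        upper_cutoff_fraction=0.05)
    node.train(np.random.randn(1000, 2))
    node.stop_training()
    y = node.execute(np.random.randn(10, 2))   # clipped to the learned 5% bounds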
""" super(AdaptiveCutoffNode, self).__init__(hist_fraction=hist_fraction, hist_filename=hist_filename, input_dim=input_dim, output_dim=output_dim, dtype=dtype) self.lower_cutoff_fraction = lower_cutoff_fraction self.upper_cutoff_fraction = upper_cutoff_fraction self.lower_bounds = None self.upper_bounds = None def _get_supported_dtypes(self): return (mdp.utils.get_dtypes('Float') + mdp.utils.get_dtypes('AllInteger')) def _stop_training(self): """Calculate the cutoff bounds based on collected histogram data.""" if self.lower_cutoff_fraction or self.upper_cutoff_fraction: sorted_data = self.data_hist.copy() sorted_data.sort(axis=0) if self.lower_cutoff_fraction: index = self.lower_cutoff_fraction * len(sorted_data) self.lower_bounds = sorted_data[index] if self.upper_cutoff_fraction: index = (len(sorted_data) - self.upper_cutoff_fraction * len(sorted_data)) self.upper_bounds = sorted_data[index] super(AdaptiveCutoffNode, self)._stop_training() def _execute(self, x): """Return the clipped data.""" if self.lower_bounds is not None: x = numx.where(x >= self.lower_bounds, x, self.lower_bounds) if self.upper_bounds is not None: x = numx.where(x <= self.upper_bounds, x, self.upper_bounds) return x mdp-3.3/mdp/nodes/neural_gas_nodes.py000066400000000000000000000411201203131624700177060ustar00rootroot00000000000000__docformat__ = "restructuredtext en" from mdp import numx, numx_rand, utils, graph, Node class _NGNodeData(object): """Data associated to a node in a Growing Neural Gas graph.""" def __init__(self, pos, error=0.0, hits=0, label=None): # reference vector (spatial position) self.pos = pos # cumulative error self.cum_error = error self.hits = hits self.label = label class _NGEdgeData(object): """Data associated to an edge in a Growing Neural Gas graph.""" def __init__(self, age=0): self.age = age def inc_age(self): self.age += 1 class GrowingNeuralGasNode(Node): """Learn the topological structure of the input data by building a corresponding graph approximation. The algorithm expands on the original Neural Gas algorithm (see mdp.nodes NeuralGasNode) in that the algorithm adds new nodes are added to the graph as more data becomes available. Im this way, if the growth rate is appropriate, one can avoid overfitting or underfitting the data. More information about the Growing Neural Gas algorithm can be found in B. Fritzke, A Growing Neural Gas Network Learns Topologies, in G. Tesauro, D. S. Touretzky, and T. K. Leen (editors), Advances in Neural Information Processing Systems 7, pages 625-632. MIT Press, Cambridge MA, 1995. **Attributes and methods of interest** - graph -- The corresponding `mdp.graph.Graph` object """ def __init__(self, start_poss=None, eps_b=0.2, eps_n=0.006, max_age=50, lambda_=100, alpha=0.5, d=0.995, max_nodes=2147483647, input_dim=None, dtype=None): """Growing Neural Gas algorithm. :Parameters: start_poss sequence of two arrays containing the position of the first two nodes in the GNG graph. If unspecified, the initial nodes are chosen with a random position generated from a gaussian distribution with zero mean and unit variance. eps_b coefficient of movement of the nearest node to a new data point. Typical values are 0 < eps_b << 1 . Default: 0.2 eps_n coefficient of movement of the neighbours of the nearest node to a new data point. Typical values are 0 < eps_n << eps_b . Default: 0.006 max_age remove an edge after `max_age` updates. Typical values are 10 < max_age < lambda. Default: 50 `lambda_` insert a new node after `lambda_` steps. Typical values are O(100). 
Default: 100 alpha when a new node is inserted, multiply the error of the nodes from which it generated by 0 max_age: g.remove_edge(edge) if edge.head.degree() == 0: g.remove_node(edge.head) if edge.tail.degree() == 0: g.remove_node(edge.tail) def _insert_new_node(self): """Insert a new node in the graph where it is more necessary (i.e. where the error is the largest).""" g = self.graph # determine the node with the highest error errors = map(lambda x: x.data.cum_error, g.nodes) qnode = g.nodes[numx.argmax(errors)] # determine the neighbour with the highest error neighbors = qnode.neighbors() errors = map(lambda x: x.data.cum_error, neighbors) fnode = neighbors[numx.argmax(errors)] # new node, halfway between the worst node and the worst of # its neighbors new_pos = 0.5*(qnode.data.pos + fnode.data.pos) new_node = self._add_node(new_pos) # update edges edges = qnode.get_edges(neighbor=fnode) g.remove_edge(edges[0]) self._add_edge(qnode, new_node) self._add_edge(fnode, new_node) # update errors qnode.data.cum_error *= self.alpha fnode.data.cum_error *= self.alpha new_node.data.cum_error = 0.5*(qnode.data.cum_error+ fnode.data.cum_error) def get_nodes_position(self): return numx.array(map(lambda n: n.data.pos, self.graph.nodes), dtype = self.dtype) def _train(self, input): g = self.graph d = self.d if len(g.nodes)==0: # if missing, generate two initial nodes at random # assuming that the input data has zero mean and unit variance, # choose the random position according to a gaussian distribution # with zero mean and unit variance normal = numx_rand.normal self._add_node(self._refcast(normal(0.0, 1.0, self.input_dim))) self._add_node(self._refcast(normal(0.0, 1.0, self.input_dim))) # loop on single data points for x in input: self.tlen += 1 # step 2 - find the nearest nodes # dists are the squared distances of x from n0, n1 (n0, n1), dists = self._get_nearest_nodes(x) # step 3 - increase age of the emanating edges for e in n0.get_edges(): e.data.inc_age() # step 4 - update error n0.data.cum_error += numx.sqrt(dists[0]) # step 5 - move nearest node and neighbours self._move_node(n0, x, self.eps_b) # neighbors undirected neighbors = n0.neighbors() for n in neighbors: self._move_node(n, x, self.eps_n) # step 6 - update n0<->n1 edge if n1 in neighbors: # should be one edge only edges = n0.get_edges(neighbor=n1) edges[0].data.age = 0 else: self._add_edge(n0, n1) # step 7 - remove old edges self._remove_old_edges(n0.get_edges()) # step 8 - add a new node each lambda steps if not self.tlen % self.lambda_ and len(g.nodes) < self.max_nodes: self._insert_new_node() # step 9 - decrease errors for node in g.nodes: node.data.cum_error *= d def nearest_neighbor(self, input): """Assign each point in the input data to the nearest node in the graph. Return the list of the nearest node instances, and the list of distances. Executing this function will close the training phase if necessary.""" super(GrowingNeuralGasNode, self).execute(input) nodes = [] dists = [] for x in input: (n0, _), dist = self._get_nearest_nodes(x) nodes.append(n0) dists.append(numx.sqrt(dist[0])) return nodes, dists class NeuralGasNode(GrowingNeuralGasNode): """Learn the topological structure of the input data by building a corresponding graph approximation (original Neural Gas algorithm). The Neural Gas algorithm was originally published in Martinetz, T. and Schulten, K.: A "Neural-Gas" Network Learns Topologies. In Kohonen, T., Maekisara, K., Simula, O., and Kangas, J. (eds.), Artificial Neural Networks. Elsevier, North-Holland., 1991. 
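A minimal usage sketch (parameter values are only illustrative)::

    import mdp
    import numpy as np

    ng = mdp.nodes.NeuralGasNode(num_nodes=10, max_epochs=10)
    ng.train(np.random.randn(500, 2))
    ng.stop_training()
    positions = ng.get_nodes_position()   # array of shape (10, 2) with the node positions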
**Attributes and methods of interest** - graph -- The corresponding `mdp.graph.Graph` object - max_epochs - maximum number of epochs until which to train. """ def __init__(self, num_nodes = 10, start_poss=None, epsilon_i=0.3, # initial epsilon epsilon_f=0.05, # final epsilon lambda_i=30., # initial lambda lambda_f=0.01, # final lambda max_age_i=20, # initial edge lifetime max_age_f=200, # final edge lifetime max_epochs=100, n_epochs_to_train=None, input_dim=None, dtype=None): """Neural Gas algorithm. Default parameters taken from the original publication. :Parameters: start_poss sequence of arrays containing the positions of the initial nodes in the graph. If unspecified, the initial nodes are chosen with a random position generated from a gaussian distribution with zero mean and unit variance. num_nodes number of nodes to use. Ignored if start_poss is given. epsilon_i, epsilon_f initial and final values of epsilon. Fraction of the distance between the closest node and the presented data point by which the node moves towards the data point in an adaptation step. Epsilon decays during training by e(t) = e_i(e_f/e_i)^(t/t_max) with t being the epoch. lambda_i, lambda_f initial and final values of lambda. Lambda influences how the weight change of nodes in the ranking decreases with lower rank. It is sometimes called the "neighborhood factor". Lambda decays during training in the same manner as epsilon does. max_age_i, max_age_f Initial and final lifetime, after which an edge will be removed. Lifetime is measured in terms of adaptation steps, i.e., presentations of data points. It decays during training like epsilon does. max_epochs number of epochs to train. One epoch has passed when all data points from the input have been presented once. The default in the original publication was 40000, but since this has proven to be impractically high for many real-world data sets, we adopted a default value of 100. n_epochs_to_train number of epochs to train on each call. Useful for batch learning and for visualization of the training process. Default is to train once until max_epochs is reached.
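As an illustration of the decay schedule shared by epsilon, lambda and the edge lifetime (a sketch of the formula above, not library code)::

    e_i, e_f, max_epochs = 0.3, 0.05, 100.
    epsilon = [e_i * (e_f / e_i) ** (t / max_epochs) for t in range(100)]
    # epsilon[0] == 0.3 and the values decay smoothly towards e_f = 0.05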
""" self.graph = graph.Graph() if n_epochs_to_train is None: n_epochs_to_train = max_epochs #copy parameters self.num_nodes = num_nodes self.start_poss = start_poss self.epsilon_i = epsilon_i self.epsilon_f = epsilon_f self.lambda_i = lambda_i self.lambda_f = lambda_f self.max_age_i = max_age_i self.max_age_f = max_age_f self.max_epochs = max_epochs self.n_epochs_to_train = n_epochs_to_train super(GrowingNeuralGasNode, self).__init__(input_dim, None, dtype) if start_poss is not None: if self.num_nodes != len(start_poss): self.num_nodes = len(start_poss) if self.dtype is None: self.dtype = start_poss[0].dtype for node_ind in range(self.num_nodes): self._add_node(self._refcast(start_poss[node_ind])) self.epoch = 0 def _train(self, input): g = self.graph if len(g.nodes) == 0: # if missing, generate num_nodes initial nodes at random # assuming that the input data has zero mean and unit variance, # choose the random position according to a gaussian distribution # with zero mean and unit variance normal = numx_rand.normal for _ in range(self.num_nodes): self._add_node(self._refcast(normal(0.0, 1.0, self.input_dim))) epoch = self.epoch e_i = self.epsilon_i e_f = self.epsilon_f l_i = self.lambda_i l_f = self.lambda_f T_i = float(self.max_age_i) T_f = float(self.max_age_f) max_epochs = float(self.max_epochs) remaining_epochs = self.n_epochs_to_train while remaining_epochs > 0: # reset permutation of data points di = numx.random.permutation(input) if epoch < max_epochs: denom = epoch/max_epochs else: denom = 1. epsilon = e_i * ((e_f/e_i)**denom) lmbda = l_i * ((l_f/l_i)**denom) T = T_i * ((T_f/T_i)**denom) epoch += 1 for x in di: # Step 1 rank nodes according to their distance to random point ranked_nodes = self._rank_nodes_by_distance(x) # Step 2 move nodes for rank,node in enumerate(ranked_nodes): #TODO: cut off at some rank when using many nodes #TODO: check speedup by vectorizing delta_w = epsilon * numx.exp(-rank / lmbda) * \ (x - node.data.pos) node.data.pos += delta_w # Step 3 update edge weight for e in g.edges: e.data.inc_age() # Step 4 set age of edge between first two nodes to zero # or create it if it doesn't exist. n0 = ranked_nodes[0] n1 = ranked_nodes[1] nn = n0.neighbors() if n1 in nn: edges = n0.get_edges(neighbor=n1) edges[0].data.age = 0 # should only be one edge else: self._add_edge(n0, n1) # step 5 delete edges with age > max_age self._remove_old_edges(max_age=T) remaining_epochs -= 1 self.epoch = epoch def _rank_nodes_by_distance(self, x): """Return the nodes in the graph in a list ranked by their squared distance to x. """ #TODO: Refactor together with GNGNode._get_nearest_nodes # distance function def _distance_from_node(node): tmp = node.data.pos - x return utils.mult(tmp, tmp) # maps to mdp.numx.dot g = self.graph # distances of all graph nodes from x distances = numx.array(map(_distance_from_node, g.nodes)) ids = distances.argsort() ranked_nodes = [g.nodes[id] for id in ids] return ranked_nodes def _remove_old_edges(self, max_age): """Remove edges with age > max_age.""" g = self.graph for edge in self.graph.edges: if edge.data.age > max_age: g.remove_edge(edge) mdp-3.3/mdp/nodes/nipals.py000066400000000000000000000114651203131624700156750ustar00rootroot00000000000000__docformat__ = "restructuredtext en" from mdp import numx, NodeException, Cumulator from mdp.utils import mult from mdp.nodes import PCANode sqrt = numx.sqrt class NIPALSNode(Cumulator, PCANode): """Perform Principal Component Analysis using the NIPALS algorithm. 
This algorithm is particularly useful if you have more variables than observations, or in general when the number of variables is huge and calculating a full covariance matrix may be infeasible. It is also more efficient than the standard PCANode if you expect the number of significant principal components to be small. In this case setting output_dim to be a certain fraction of the total variance, say 90%, may be of some help. **Internal variables of interest** ``self.avg`` Mean of the input data (available after training). ``self.d`` Variance corresponding to the PCA components. ``self.v`` Transpose of the projection matrix (available after training). ``self.explained_variance`` When output_dim has been specified as a fraction of the total variance, this is the fraction of the total variance that is actually explained. Reference for NIPALS (Nonlinear Iterative Partial Least Squares): Wold, H. Nonlinear estimation by iterative least squares procedures. In David, F. (Editor), Research Papers in Statistics, Wiley, New York, pp 411-444 (1966). More information about Principal Component Analysis, a.k.a. discrete Karhunen-Loeve transform can be found among others in I.T. Jolliffe, Principal Component Analysis, Springer-Verlag (1986). Original code contributed by: Michael Schmuker, Susanne Lezius, and Farzad Farkhooi (2008). """ def __init__(self, input_dim=None, output_dim=None, dtype=None, conv = 1e-8, max_it = 100000): """ The number of principal components to be kept can be specified as 'output_dim' directly (e.g. 'output_dim=10' means 10 components are kept) or by the fraction of variance to be explained (e.g. 'output_dim=0.95' means that as many components as necessary will be kept in order to explain 95% of the input variance). Other Arguments: conv - convergence threshold for the residual error.
max_it - maximum number of iterations """ super(NIPALSNode, self).__init__(input_dim, output_dim, dtype) self.conv = conv self.max_it = max_it def _train(self, x): super(NIPALSNode, self)._train(x) def _stop_training(self, debug=False): # debug argument is ignored but needed by the base class super(NIPALSNode, self)._stop_training() self._adjust_output_dim() if self.desired_variance is not None: des_var = True else: des_var = False X = self.data conv = self.conv dtype = self.dtype mean = X.mean(axis=0) self.avg = mean max_it = self.max_it tlen = self.tlen # remove mean X -= mean var = X.var(axis=0).sum() self.total_variance = var exp_var = 0 eigenv = numx.zeros((self.input_dim, self.input_dim), dtype=dtype) d = numx.zeros((self.input_dim,), dtype = dtype) for i in range(self.input_dim): it = 0 # first score vector t is initialized to first column in X t = X[:, 0] # initialize difference diff = conv + 1 while diff > conv: # increase iteration counter it += 1 # Project X onto t to find corresponding loading p # and normalize loading vector p to length 1 p = mult(X.T, t)/mult(t, t) p /= sqrt(mult(p, p)) # project X onto p to find corresponding score vector t_new t_new = mult(X, p) # difference between new and old score vector tdiff = t_new - t diff = (tdiff*tdiff).sum() t = t_new if it > max_it: msg = ('PC#%d: no convergence after' ' %d iterations.'% (i, max_it)) raise NodeException(msg) # store ith eigenvector in result matrix eigenv[i, :] = p # remove the estimated principal component from X D = numx.outer(t, p) X -= D D = mult(D, p) d[i] = (D*D).sum()/(tlen-1) exp_var += d[i]/var if des_var and (exp_var >= self.desired_variance): self.output_dim = i + 1 break self.d = d[:self.output_dim] self.v = eigenv[:self.output_dim, :].T self.explained_variance = exp_var mdp-3.3/mdp/nodes/pca_nodes.py000066400000000000000000000306161203131624700163410ustar00rootroot00000000000000__docformat__ = "restructuredtext en" import mdp from mdp import numx from mdp.utils import (mult, nongeneral_svd, CovarianceMatrix, symeig, SymeigException) import warnings as _warnings class PCANode(mdp.Node): """Filter the input data through the most significatives of its principal components. **Internal variables of interest** ``self.avg`` Mean of the input data (available after training). ``self.v`` Transposed of the projection matrix (available after training). ``self.d`` Variance corresponding to the PCA components (eigenvalues of the covariance matrix). ``self.explained_variance`` When output_dim has been specified as a fraction of the total variance, this is the fraction of the total variance that is actually explained. More information about Principal Component Analysis, a.k.a. discrete Karhunen-Loeve transform can be found among others in I.T. Jolliffe, Principal Component Analysis, Springer-Verlag (1986). """ def __init__(self, input_dim=None, output_dim=None, dtype=None, svd=False, reduce=False, var_rel=1E-12, var_abs=1E-15, var_part=None): """The number of principal components to be kept can be specified as 'output_dim' directly (e.g. 'output_dim=10' means 10 components are kept) or by the fraction of variance to be explained (e.g. 'output_dim=0.95' means that as many components as necessary will be kept in order to explain 95% of the input variance). Other Keyword Arguments: svd -- if True use Singular Value Decomposition instead of the standard eigenvalue problem solver. 
Use it when PCANode complains about singular covariance matrices reduce -- Keep only those principal components which have a variance larger than 'var_abs' and a variance relative to the first principal component larger than 'var_rel' and a variance relative to total variance larger than 'var_part' (set var_part to None or 0 for no filtering). Note: when the 'reduce' switch is enabled, the actual number of principal components (self.output_dim) may be different from that set when creating the instance. """ # this must occur *before* calling super! self.desired_variance = None super(PCANode, self).__init__(input_dim, output_dim, dtype) self.svd = svd # set routine for eigenproblem if svd: self._symeig = nongeneral_svd else: self._symeig = symeig self.var_abs = var_abs self.var_rel = var_rel self.var_part = var_part self.reduce = reduce # empirical covariance matrix, updated during the training phase self._cov_mtx = CovarianceMatrix(dtype) # attributes that defined in stop_training self.d = None # eigenvalues self.v = None # eigenvectors, first index for coordinates self.total_variance = None self.tlen = None self.avg = None self.explained_variance = None def _set_output_dim(self, n): if n <= 1 and isinstance(n, float): # set the output dim after training, when the variances are known self.desired_variance = n else: self._output_dim = n def _check_output(self, y): # check output rank if not y.ndim == 2: error_str = "y has rank %d, should be 2" % (y.ndim) raise mdp.NodeException(error_str) if y.shape[1] == 0 or y.shape[1] > self.output_dim: error_str = ("y has dimension %d" ", should be 0= 1: # (eigenvalues sorted in ascending order) return (self.input_dim - self.output_dim + 1, self.input_dim) # otherwise, the number of principal components to keep has been # specified by the fraction of variance to be explained else: return None def _stop_training(self, debug=False): """Stop the training phase. Keyword arguments: debug=True if stop_training fails because of singular cov matrices, the singular matrices itselves are stored in self.cov_mtx and self.dcov_mtx to be examined. """ # request the covariance matrix and clean up self.cov_mtx, avg, self.tlen = self._cov_mtx.fix() del self._cov_mtx # this is a bit counterintuitive, as it reshapes the average vector to # be a matrix. in this way, however, we spare the reshape # operation every time that 'execute' is called. self.avg = avg.reshape(1, avg.shape[0]) # range for the eigenvalues rng = self._adjust_output_dim() # if we have more variables then observations we are bound to fail here # suggest to use the NIPALSNode instead. if debug and self.tlen < self.input_dim: wrn = ('The number of observations (%d) ' 'is larger than the number of input variables ' '(%d). You may want to use ' 'the NIPALSNode instead.' % (self.tlen, self.input_dim)) _warnings.warn(wrn, mdp.MDPWarning) # total variance can be computed at this point: # note that vartot == d.sum() vartot = numx.diag(self.cov_mtx).sum() ## compute and sort the eigenvalues # compute the eigenvectors of the covariance matrix (inplace) # (eigenvalues sorted in ascending order) try: d, v = self._symeig(self.cov_mtx, range=rng, overwrite=(not debug)) # if reduce=False and svd=False. 
we should check for # negative eigenvalues and fail if not (self.reduce or self.svd or (self.desired_variance is not None)): if d.min() < 0: raise mdp.NodeException( "Got negative eigenvalues: %s.\n" "You may either set output_dim to be smaller, " "or set reduce=True and/or svd=True" % str(d)) except SymeigException, exception: err = str(exception)+("\nCovariance matrix may be singular." "Try setting svd=True.") raise mdp.NodeException(err) # delete covariance matrix if no exception occurred if not debug: del self.cov_mtx # sort by descending order d = numx.take(d, range(d.shape[0]-1, -1, -1)) v = v[:, ::-1] if self.desired_variance is not None: # throw away immediately negative eigenvalues d = d[ d > 0 ] # the number of principal components to keep has # been specified by the fraction of variance to be explained varcum = (d / vartot).cumsum(axis=0) # select only the relevant eigenvalues # number of relevant eigenvalues neigval = varcum.searchsorted(self.desired_variance) + 1. #self.explained_variance = varcum[neigval-1] # cut d = d[0:neigval] v = v[:, 0:neigval] # define the new output dimension self.output_dim = int(neigval) # automatic dimensionality reduction if self.reduce: # remove entries that are smaller then var_abs and # smaller then var_rel relative to the maximum d = d[ d > self.var_abs ] # check that we did not throw away everything if len(d) == 0: raise mdp.NodeException('No eigenvalues larger than' ' var_abs=%e!'%self.var_abs) d = d[ d / d.max() > self.var_rel ] # filter for variance relative to total variance if self.var_part: d = d[ d / vartot > self.var_part ] v = v[:, 0:d.shape[0]] self._output_dim = d.shape[0] # set explained variance self.explained_variance = d.sum() / vartot # store the eigenvalues self.d = d # store the eigenvectors self.v = v # store the total variance self.total_variance = vartot def get_projmatrix(self, transposed=1): """Return the projection matrix.""" self._if_training_stop_training() if transposed: return self.v return self.v.T def get_recmatrix(self, transposed=1): """Return the back-projection matrix (i.e. the reconstruction matrix). """ self._if_training_stop_training() if transposed: return self.v.T return self.v def _execute(self, x, n=None): """Project the input on the first 'n' principal components. If 'n' is not set, use all available components.""" if n is not None: return mult(x-self.avg, self.v[:, :n]) return mult(x-self.avg, self.v) def _inverse(self, y, n=None): """Project 'y' to the input space using the first 'n' components. If 'n' is not set, use all available components.""" if n is None: n = y.shape[1] if n > self.output_dim: error_str = ("y has dimension %d," " should be at most %d" % (n, self.output_dim)) raise mdp.NodeException(error_str) v = self.get_recmatrix() if n is not None: return mult(y, v[:n, :]) + self.avg return mult(y, v) + self.avg class WhiteningNode(PCANode): """*Whiten* the input data by filtering it through the most significatives of its principal components. All output signals have zero mean, unit variance and are decorrelated. **Internal variables of interest** ``self.avg`` Mean of the input data (available after training). ``self.v`` Transpose of the projection matrix (available after training). ``self.d`` Variance corresponding to the PCA components (eigenvalues of the covariance matrix). ``self.explained_variance`` When output_dim has been specified as a fraction of the total variance, this is the fraction of the total variance that is actually explained. 
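A minimal usage sketch (illustrative only)::

    import mdp
    import numpy as np

    x = np.random.randn(1000, 4) * [1., 2., 5., 10.]   # differently scaled components
    node = mdp.nodes.WhiteningNode()
    node.train(x)
    y = node.execute(x)   # components are decorrelated, with ~zero mean and ~unit variance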
""" def _stop_training(self, debug=False): super(WhiteningNode, self)._stop_training(debug) ##### whiten the filters # self.v is now the _whitening_ matrix self.v = self.v / numx.sqrt(self.d) def get_eigenvectors(self): """Return the eigenvectors of the covariance matrix.""" self._if_training_stop_training() return numx.sqrt(self.d)*self.v def get_recmatrix(self, transposed=1): """Return the back-projection matrix (i.e. the reconstruction matrix). """ self._if_training_stop_training() v_inverse = self.v*self.d if transposed: return v_inverse.T return v_inverse mdp-3.3/mdp/nodes/rbm_nodes.py000066400000000000000000000342721203131624700163600ustar00rootroot00000000000000__docformat__ = "restructuredtext en" import mdp from mdp import numx from mdp.utils import mult random = mdp.numx_rand.random randn = mdp.numx_rand.randn exp = mdp.numx.exp # TODO: does it make sense to define the inverse of RBMNode as sampling # from the visible layer given an hidden state? # this and the other replication functions should go in mdp.utils def rrep(x, n): """Replicate x n-times on a new last dimension""" shp = x.shape + (1,) return x.reshape(shp).repeat(n, axis=-1) class RBMNode(mdp.Node): """Restricted Boltzmann Machine node. An RBM is an undirected probabilistic network with binary variables. The graph is bipartite into observed (*visible*) and hidden (*latent*) variables. By default, the ``execute`` method returns the *probability* of one of the hiden variables being equal to 1 given the input. Use the ``sample_v`` method to sample from the observed variables given a setting of the hidden variables, and ``sample_h`` to do the opposite. The ``energy`` method can be used to compute the energy of a given setting of all variables. The network is trained by Contrastive Divergence, as described in Hinton, G. E. (2002). Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1711-1800 **Internal variables of interest** ``self.w`` Generative weights between hidden and observed variables ``self.bv`` bias vector of the observed variables ``self.bh`` bias vector of the hidden variables For more information on RBMs, see Geoffrey E. Hinton (2007) Boltzmann machine. Scholarpedia, 2(5):1668 """ def __init__(self, hidden_dim, visible_dim = None, dtype = None): """ :Parameters: hidden_dim number of hidden variables visible_dim number of observed variables """ super(RBMNode, self).__init__(visible_dim, hidden_dim, dtype) self._initialized = False def _init_weights(self): # weights and biases are initialized to small random values to # break the simmetry that might lead to degenerate solutions during # learning self._initialized = True # weights self.w = self._refcast(randn(self.input_dim, self.output_dim)*0.1) # bias on the visibile (input) units self.bv = self._refcast(randn(self.input_dim)*0.1) # bias on the hidden (output) units self.bh = self._refcast(randn(self.output_dim)*0.1) # delta w, bv, bh used for momentum term self._delta = (0., 0., 0.) def _sample_h(self, v): # returns P(h=1|v,W,b) and a sample from it probs = 1./(1. + exp(-self.bh - mult(v, self.w))) h = (probs > random(probs.shape)).astype(self.dtype) return probs, h def _sample_v(self, h): # returns P(v=1|h,W,b) and a sample from it probs = 1./(1. 
+ exp(-self.bv - mult(h, self.w.T))) v = (probs > random(probs.shape)).astype(self.dtype) return probs, v def _train(self, v, n_updates=1, epsilon=0.1, decay=0., momentum=0., update_with_ph=True, verbose=False): """Update the internal structures according to the input data `v`. The training is performed using Contrastive Divergence (CD). :Parameters: v a binary matrix having different variables on different columns and observations on the rows n_updates number of CD iterations. Default value: 1 epsilon learning rate. Default value: 0.1 decay weight decay term. Default value: 0. momentum momentum term. Default value: 0. update_with_ph In his code, G.Hinton updates the hidden biases using the probability of the hidden unit activations instead of a sample from it. This is in order to speed up sequential learning of RBMs. Set this to False to use the samples instead. """ if not self._initialized: self._init_weights() # useful quantities n = v.shape[0] w, bv, bh = self.w, self.bv, self.bh # old gradients for momentum term dw, dbv, dbh = self._delta # first update of the hidden units for the data term ph_data, h_data = self._sample_h(v) # n updates of both v and h for the model term h_model = h_data.copy() for i in range(n_updates): pv_model, v_model = self._sample_v(h_model) ph_model, h_model = self._sample_h(v_model) # update w data_term = mult(v.T, ph_data) model_term = mult(v_model.T, ph_model) dw = momentum*dw + epsilon*((data_term - model_term)/n - decay*w) w += dw # update bv data_term = v.sum(axis=0) model_term = v_model.sum(axis=0) dbv = momentum*dbv + epsilon*((data_term - model_term)/n) bv += dbv # update bh if update_with_ph: data_term = ph_data.sum(axis=0) model_term = ph_model.sum(axis=0) else: data_term = h_data.sum(axis=0) model_term = h_model.sum(axis=0) dbh = momentum*dbh + epsilon*((data_term - model_term)/n) bh += dbh self._delta = (dw, dbv, dbh) self._train_err = float(((v-v_model)**2.).sum()) if verbose: print 'training error', self._train_err/v.shape[0] ph, h = self._sample_h(v) print 'energy', self._energy(v, ph).sum() def _stop_training(self): #del self._delta #del self._train_err pass # execution methods @staticmethod def is_invertible(): return False def _pre_inversion_checks(self, y): self._if_training_stop_training() # control the dimension of y self._check_output(y) def sample_h(self, v): """Sample the hidden variables given observations v. :Returns: a tuple ``(prob_h, h)``, where ``prob_h[n,i]`` is the probability that variable ``i`` is one given the observations ``v[n,:]``, and ``h[n,i]`` is a sample from the posterior probability. """ self._pre_execution_checks(v) return self._sample_h(v) def sample_v(self, h): """Sample the observed variables given hidden variable state h. :Returns: a tuple ``(prob_v, v)``, where ``prob_v[n,i]`` is the probability that variable ``i`` is one given the hidden variables ``h[n,:]``, and ``v[n,i]`` is a sample from that conditional probability. """ self._pre_inversion_checks(h) return self._sample_v(h) def _energy(self, v, h): return (-mult(v, self.bv) - mult(h, self.bh) - (mult(v, self.w)*h).sum(axis=1)) def energy(self, v, h): """Compute the energy of the RBM given observed variables state `v` and hidden variables state `h`. """ return self._energy(v, h) def _execute(self, v, return_probs=True): """If `return_probs` is True, returns the probability of the hidden variables h[n,i] being 1 given the observations v[n,:]. If `return_probs` is False, return a sample from that probability. 
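For example (a minimal sketch; the training parameters are only illustrative)::

    import mdp
    import numpy as np

    v = (np.random.random((100, 20)) > 0.5).astype('d')    # binary observations
    rbm = mdp.nodes.RBMNode(hidden_dim=5, visible_dim=20)
    rbm.train(v, n_updates=1, epsilon=0.1)
    probs = rbm.execute(v)                          # P(h=1|v), shape (100, 5)
    samples = rbm.execute(v, return_probs=False)    # binary samples instead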
""" probs, h = self._sample_h(v) if return_probs: return probs else: return h class RBMWithLabelsNode(RBMNode): """Restricted Boltzmann Machine with softmax labels. An RBM is an undirected probabilistic network with binary variables. In this case, the node is partitioned into a set of observed (*visible*) variables, a set of hidden (*latent*) variables, and a set of label variables (also observed), only one of which is active at any time. The node is able to learn associations between the visible variables and the labels. By default, the ``execute`` method returns the *probability* of one of the hiden variables being equal to 1 given the input. Use the ``sample_v`` method to sample from the observed variables (visible and labels) given a setting of the hidden variables, and ``sample_h`` to do the opposite. The ``energy`` method can be used to compute the energy of a given setting of all variables. The network is trained by Contrastive Divergence, as described in Hinton, G. E. (2002). Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1711-1800 Internal variables of interest: ``self.w`` Generative weights between hidden and observed variables ``self.bv`` bias vector of the observed variables ``self.bh`` bias vector of the hidden variables For more information on RBMs with labels, see * Geoffrey E. Hinton (2007) Boltzmann machine. Scholarpedia, 2(5):1668. * Hinton, G. E, Osindero, S., and Teh, Y. W. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18:1527-1554. """ def __init__(self, hidden_dim, labels_dim, visible_dim=None, dtype=None): super(RBMWithLabelsNode, self).__init__(None, None, dtype) self._labels_dim = labels_dim if visible_dim is not None: self.input_dim = visible_dim+labels_dim self.output_dim = hidden_dim self._initialized = False def _set_input_dim(self, n): self._input_dim = n self._visible_dim = n - self._labels_dim def _sample_v(self, h, sample_l=False, concatenate=True): # returns P(v=1|h,W,b), a sample from it, P(l=1|h,W,b), # and a sample from it ldim, vdim = self._labels_dim, self._visible_dim # activation a = self.bv + mult(h, self.w.T) av, al = a[:, :vdim], a[:, vdim:] # ## visible units: logistic activation probs_v = 1./(1. + exp(-av)) v = (probs_v > random(probs_v.shape)).astype('d') # ## label units: softmax activation # subtract maximum to regularize exponent exponent = al - rrep(al.max(axis=1), ldim) probs_l = exp(exponent) probs_l /= rrep(probs_l.sum(axis=1), ldim) if sample_l: # ?? todo: I'm sure this can be optimized l = numx.zeros((h.shape[0], ldim)) for t in range(h.shape[0]): l[t, :] = mdp.numx_rand.multinomial(1, probs_l[t, :]) else: l = probs_l.copy() if concatenate: probs = numx.concatenate((probs_v, probs_l), axis=1) x = numx.concatenate((v, l), axis=1) return probs, x else: return probs_v, probs_l, v, l # execution methods def sample_h(self, v, l): """Sample the hidden variables given observations `v` and labels `l`. :Returns: a tuple ``(prob_h, h)``, where ``prob_h[n,i]`` is the probability that variable ``i`` is one given the observations ``v[n,:]`` and the labels ``l[n,:]``, and ``h[n,i]`` is a sample from the posterior probability.""" x = numx.concatenate((v, l), axis=1) self._pre_execution_checks(x) return self._sample_h(x) def sample_v(self, h): """Sample the observed variables given hidden variable state `h`. 
:Returns: a tuple ``(prob_v, probs_l, v, l)``, where ``prob_v[n,i]`` is the probability that the visible variable ``i`` is one given the hidden variables ``h[n,:]``, and ``v[n,i]`` is a sample from that conditional probability. ``prob_l`` and ``l`` have similar interpretations for the label variables. Note that the labels are activated using a softmax function, so that only one label can be active at any time. """ self._pre_inversion_checks(h) probs_v, probs_l, v, l = self._sample_v(h, sample_l=True, concatenate=False) return probs_v, probs_l, v, l def energy(self, v, h, l): """Compute the energy of the RBM given observed variables state `v` and `l`, and hidden variables state `h`.""" x = numx.concatenate((v, l), axis=1) return self._energy(x, h) def execute(self, v, l, return_probs = True): """If `return_probs` is True, returns the probability of the hidden variables h[n,i] being 1 given the observations v[n,:] and l[n,:]. If `return_probs` is False, return a sample from that probability. """ x = numx.concatenate((v, l), axis=1) self._pre_execution_checks(x) probs, h = self._sample_h(self._refcast(x)) if return_probs: return probs else: return h @staticmethod def is_invertible(): return False def train(self, v, l, n_updates=1, epsilon=0.1, decay=0., momentum=0., verbose=False): """Update the internal structures according to the visible data `v` and the labels `l`. The training is performed using Contrastive Divergence (CD). :Parameters: v a binary matrix having different variables on different columns and observations on the rows l a binary matrix having different variables on different columns and observations on the rows. Only one value per row should be 1. n_updates number of CD iterations. Default value: 1 epsilon learning rate. Default value: 0.1 decay weight decay term. Default value: 0. momentum momentum term. Default value: 0. """ if not self.is_training(): errstr = "The training phase has already finished." raise mdp.TrainingFinishedException(errstr) x = numx.concatenate((v, l), axis=1) self._check_input(x) self._train_phase_started = True self._train_seq[self._train_phase][0](self._refcast(x), n_updates=n_updates, epsilon=epsilon, decay=decay, momentum=momentum, verbose=verbose) mdp-3.3/mdp/nodes/regression_nodes.py000066400000000000000000000103021203131624700177440ustar00rootroot00000000000000__docformat__ = "restructuredtext en" from mdp import numx, numx_linalg, utils, Node, NodeException, TrainingException from mdp.utils import mult # ??? For the future: add an optional second phase to compute # residuals, significance of the slope. class LinearRegressionNode(Node): """Compute least-square, multivariate linear regression on the input data, i.e., learn coefficients ``b_j`` so that:: y_i = b_0 + b_1 x_1 + ... b_N x_N , for ``i = 1 ... M``, minimizes the square error given the training ``x``'s and ``y``'s. This is a supervised learning node, and requires input data ``x`` and target data ``y`` to be supplied during training (see ``train`` docstring). **Internal variables of interest** ``self.beta`` The coefficients of the linear regression """ def __init__(self, with_bias=True, use_pinv=False, input_dim=None, output_dim=None, dtype=None): """ :Arguments: with_bias If true, the linear model includes a constant term - True: y_i = b_0 + b_1 x_1 + ... b_N x_N - False: y_i = b_1 x_1 + ... b_N x_N If present, the constant term is stored in the first column of ``self.beta``. 
use_pinv If true, uses the pseudo-inverse function to compute the linear regression coefficients, which is more robust in some cases """ super(LinearRegressionNode, self).__init__(input_dim, output_dim, dtype) self.with_bias = with_bias self.use_pinv = use_pinv # for the linear regression estimator we need two terms # the first one is X^T X self._xTx = None # the second one is X^T Y self._xTy = None # keep track of how many data points have been sent self._tlen = 0 # final regression coefficients # if with_bias=True, beta includes the bias term in the first column self.beta = None @staticmethod def is_invertible(): return False def _check_train_args(self, x, y): # set output_dim if necessary if self._output_dim is None: self._set_output_dim(y.shape[1]) # check output dimensionality self._check_output(y) if y.shape[0] != x.shape[0]: msg = ("The number of output points should be equal to the " "number of datapoints (%d != %d)" % (y.shape[0], x.shape[0])) raise TrainingException(msg) def _train(self, x, y): """ **Additional input arguments** y array of size (x.shape[0], output_dim) that contains the observed output to the input x's. """ # initialize internal vars if necessary if self._xTx is None: if self.with_bias: x_size = self._input_dim + 1 else: x_size = self._input_dim self._xTx = numx.zeros((x_size, x_size), self._dtype) self._xTy = numx.zeros((x_size, self._output_dim), self._dtype) if self.with_bias: x = self._add_constant(x) # update internal variables self._xTx += mult(x.T, x) self._xTy += mult(x.T, y) self._tlen += x.shape[0] def _stop_training(self): try: if self.use_pinv: invfun = utils.pinv else: invfun = utils.inv inv_xTx = invfun(self._xTx) except numx_linalg.LinAlgError, exception: errstr = (str(exception) + "\n Input data may be redundant (i.e., some of the " + "variables may be linearly dependent).") raise NodeException(errstr) self.beta = mult(inv_xTx, self._xTy) # remove junk del self._xTx del self._xTy def _execute(self, x): if self.with_bias: x = self._add_constant(x) return mult(x, self.beta) def _add_constant(self, x): """Add a constant term to the vector 'x'. x -> [1 x] """ return numx.concatenate((numx.ones((x.shape[0], 1), dtype=self.dtype), x), axis=1) mdp-3.3/mdp/nodes/scikits_nodes.py000066400000000000000000000444271203131624700172540ustar00rootroot00000000000000# -*- coding:utf-8; -*- """Wraps the algorithms defined in scikits.learn in MDP Nodes. This module is based on the 0.6.X branch of scikits.learn . 
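Each wrapped estimator is exposed as an MDP node class named ``<SklearnClassName>ScikitsLearnNode``. A minimal sketch (the node name below is only an example and assumes the corresponding sklearn estimator is installed and wrapped on this system)::

    import mdp

    node = mdp.nodes.PCAScikitsLearnNode(n_components=2)
    node.scikits_alg   # the underlying sklearn estimator instance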
""" __docformat__ = "restructuredtext en" try: import sklearn _sklearn_prefix = 'sklearn' except ImportError: import scikits.learn as sklearn _sklearn_prefix = 'scikits.learn' import inspect import re import mdp class ScikitsException(mdp.NodeException): """Base class for exceptions in nodes wrapping scikits algorithms.""" pass # import all submodules of sklearn (to work around lazy import) from mdp.configuration import _version_too_old if _version_too_old(sklearn.__version__, (0, 8)): scikits_modules = ['ann', 'cluster', 'covariance', 'feature_extraction', 'feature_selection', 'features', 'gaussian_process', 'glm', 'linear_model', 'preprocessing', 'svm', 'pca', 'lda', 'hmm', 'fastica', 'grid_search', 'mixture', 'naive_bayes', 'neighbors', 'qda'] elif _version_too_old(sklearn.__version__, (0, 9)): # package structure has been changed in 0.8 scikits_modules = ['svm', 'linear_model', 'naive_bayes', 'neighbors', 'mixture', 'hmm', 'cluster', 'decomposition', 'lda', 'covariance', 'cross_val', 'grid_search', 'feature_selection.rfe', 'feature_extraction.image', 'feature_extraction.text', 'pipelines', 'pls', 'gaussian_process', 'qda'] elif _version_too_old(sklearn.__version__, (0, 11)): # from release 0.9 cross_val becomes cross_validation and hmm is deprecated scikits_modules = ['svm', 'linear_model', 'naive_bayes', 'neighbors', 'mixture', 'cluster', 'decomposition', 'lda', 'covariance', 'cross_validation', 'grid_search', 'feature_selection.rfe', 'feature_extraction.image', 'feature_extraction.text', 'pipelines', 'pls', 'gaussian_process', 'qda', 'ensemble', 'manifold', 'metrics', 'preprocessing', 'tree'] else: scikits_modules = ['svm', 'linear_model', 'naive_bayes', 'neighbors', 'mixture', 'cluster', 'decomposition', 'lda', 'covariance', 'cross_validation', 'grid_search', 'feature_selection', 'feature_extraction', 'pipeline', 'pls', 'gaussian_process', 'qda', 'ensemble', 'manifold', 'metrics', 'preprocessing', 'semi_supervised', 'tree', 'hmm'] for name in scikits_modules: # not all modules may be available due to missing dependencies # on the user system. # we just ignore failing imports try: __import__(_sklearn_prefix + '.' + name) except ImportError: pass _WS_LINE_RE = re.compile(r'^\s*$') _WS_PREFIX_RE = re.compile(r'^(\s*)') _HEADINGS_RE = re.compile(r'''^(Parameters|Attributes|Methods|Examples|Notes)\n (----+|====+)''', re.M + re.X) _UNDERLINE_RE = re.compile(r'----+|====+') _VARWITHUNDER_RE = re.compile(r'(\s|^)([a-zA-Z_][a-zA-Z0-9_]*_)(\s|$|[,.])') _HEADINGS = set(['Parameters', 'Attributes', 'Methods', 'Examples', 'Notes', 'References']) _DOC_TEMPLATE = """ %s This node has been automatically generated by wrapping the ``%s.%s`` class from the ``sklearn`` library. The wrapped instance can be accessed through the ``scikits_alg`` attribute. 
%s """ def _gen_docstring(object, docsource=None): module = object.__module__ name = object.__name__ if docsource is None: docsource = object docstring = docsource.__doc__ if docstring is None: return None lines = docstring.strip().split('\n') for i,line in enumerate(lines): if _WS_LINE_RE.match(line): break header = [line.strip() for line in lines[:i]] therest = [line.rstrip() for line in lines[i+1:]] body = [] if therest: prefix = min(len(_WS_PREFIX_RE.match(line).group(1)) for line in therest if line) quoteind = None for i, line in enumerate(therest): line = line[prefix:] if line in _HEADINGS: body.append('**%s**' % line) elif _UNDERLINE_RE.match(line): body.append('') else: line = _VARWITHUNDER_RE.sub(r'\1``\2``\3', line) if quoteind: if len(_WS_PREFIX_RE.match(line).group(1)) >= quoteind: line = quoteind * ' ' + '- ' + line[quoteind:] else: quoteind = None body.append('') body.append(line) if line.endswith(':'): body.append('') if i+1 < len(therest): next = therest[i+1][prefix:] quoteind = len(_WS_PREFIX_RE.match(next).group(1)) return _DOC_TEMPLATE % ('\n'.join(header), module, name, '\n'.join(body)) # TODO: generalize dtype support # TODO: have a look at predict_proba for Classifier.prob # TODO: inverse <-> generate/rvs # TODO: deal with input_dim/output_dim # TODO: change signature of overwritten functions # TODO: wrap_scikits_instance # TODO: add sklearn availability to test info strings # TODO: which tests ? (test that particular algorithm are / are not trainable) # XXX: if class defines n_components, allow output_dim, otherwise throw exception # also for classifiers (overwrite _set_output_dim) # Problem: sometimes they call it 'k' (e.g., algorithms in sklearn.cluster) def apply_to_scikits_algorithms(current_module, action, processed_modules=None, processed_classes=None): """ Function that traverses a module to find scikits algorithms. 'sklearn' algorithms are identified by the 'fit' 'predict', or 'transform' methods. The 'action' function is applied to each found algorithm. action -- a function that is called with as action(class_), where class_ is a class that defines the 'fit' or 'predict' method """ # only consider modules and classes once if processed_modules is None: processed_modules = [] if processed_classes is None: processed_classes = [] if current_module in processed_modules: return processed_modules.append(current_module) for member_name, member in current_module.__dict__.items(): if not member_name.startswith('_'): # classes if (inspect.isclass(member) and member not in processed_classes): processed_classes.append(member) if ((hasattr(member, 'fit') or hasattr(member, 'predict') or hasattr(member, 'transform')) and not member.__module__.endswith('_')): action(member) # other modules elif (inspect.ismodule(member) and member.__name__.startswith(_sklearn_prefix)): apply_to_scikits_algorithms(member, action, processed_modules, processed_classes) return processed_classes _OUTPUTDIM_ERROR = """'output_dim' keyword not supported. Please set the output dimensionality using sklearn keyword arguments (e.g., 'n_components', or 'k'). See the docstring of this class for details.""" def wrap_scikits_classifier(scikits_class): """Wrap a sklearn classifier as an MDP Node subclass. 
The wrapper maps these MDP methods to their sklearn equivalents: - _stop_training -> fit - _label -> predict """ newaxis = mdp.numx.newaxis # create a wrapper class for a sklearn classifier class ScikitsNode(mdp.ClassifierCumulator): def __init__(self, input_dim=None, output_dim=None, dtype=None, **kwargs): if output_dim is not None: # output_dim and n_components cannot be defined at the same time if kwargs.has_key('n_components'): msg = ("Dimensionality set both by " "output_dim=%d and n_components=%d""") raise ScikitsException(msg % (output_dim, kwargs['n_components'])) super(ScikitsNode, self).__init__(input_dim=input_dim, output_dim=output_dim, dtype=dtype) self.scikits_alg = scikits_class(**kwargs) # ---- re-direct training and execution to the wrapped algorithm def _stop_training(self, **kwargs): super(ScikitsNode, self)._stop_training(self) return self.scikits_alg.fit(self.data, self.labels, **kwargs) def _label(self, x): return self.scikits_alg.predict(x)[:, newaxis] # ---- administrative details @staticmethod def is_invertible(): return False @staticmethod def is_trainable(): """Return True if the node can be trained, False otherwise.""" return hasattr(scikits_class, 'fit') # NOTE: at this point scikits nodes can only support up to # 64-bits floats because some call numpy.linalg.svd, which for # some reason does not support higher precisions def _get_supported_dtypes(self): """Return the list of dtypes supported by this node. The types can be specified in any format allowed by numpy.dtype.""" return ['float32', 'float64'] # modify class name and docstring ScikitsNode.__name__ = scikits_class.__name__ + 'ScikitsLearnNode' ScikitsNode.__doc__ = _gen_docstring(scikits_class) # change the docstring of the methods to match the ones in sklearn # methods_dict maps ScikitsNode method names to sklearn method names methods_dict = {'__init__': '__init__', 'stop_training': 'fit', 'label': 'predict'} if hasattr(scikits_class, 'predict_proba'): methods_dict['prob'] = 'predict_proba' for mdp_name, scikits_name in methods_dict.items(): mdp_method = getattr(ScikitsNode, mdp_name) scikits_method = getattr(scikits_class, scikits_name) if hasattr(scikits_method, 'im_func'): # some scikits algorithms do not define an __init__ method # the one inherited from 'object' is a # "" # which does not have a 'im_func' attribute mdp_method.im_func.__doc__ = _gen_docstring(scikits_class, scikits_method.im_func) if scikits_class.__init__.__doc__ is None: ScikitsNode.__init__.im_func.__doc__ = _gen_docstring(scikits_class) return ScikitsNode def wrap_scikits_transformer(scikits_class): """Wrap a sklearn transformer as an MDP Node subclass. 
The wrapper maps these MDP methods to their sklearn equivalents: _stop_training -> fit _execute -> transform """ # create a wrapper class for a sklearn transformer class ScikitsNode(mdp.Cumulator): def __init__(self, input_dim=None, output_dim=None, dtype=None, **kwargs): if output_dim is not None: raise ScikitsException(_OUTPUTDIM_ERROR) super(ScikitsNode, self).__init__(input_dim=input_dim, output_dim=output_dim, dtype=dtype) self.scikits_alg = scikits_class(**kwargs) # ---- re-direct training and execution to the wrapped algorithm def _stop_training(self, **kwargs): super(ScikitsNode, self)._stop_training(self) return self.scikits_alg.fit(self.data, **kwargs) def _execute(self, x): return self.scikits_alg.transform(x) # ---- administrative details @staticmethod def is_invertible(): return False @staticmethod def is_trainable(): """Return True if the node can be trained, False otherwise.""" return hasattr(scikits_class, 'fit') # NOTE: at this point scikits nodes can only support up to # 64-bits floats because some call numpy.linalg.svd, which for # some reason does not support higher precisions def _get_supported_dtypes(self): """Return the list of dtypes supported by this node. The types can be specified in any format allowed by numpy.dtype.""" return ['float32', 'float64'] # modify class name and docstring ScikitsNode.__name__ = scikits_class.__name__ + 'ScikitsLearnNode' ScikitsNode.__doc__ = _gen_docstring(scikits_class) # change the docstring of the methods to match the ones in sklearn # methods_dict maps ScikitsNode method names to sklearn method names methods_dict = {'__init__': '__init__', 'stop_training': 'fit', 'execute': 'transform'} for mdp_name, scikits_name in methods_dict.items(): mdp_method = getattr(ScikitsNode, mdp_name) scikits_method = getattr(scikits_class, scikits_name, None) if hasattr(scikits_method, 'im_func'): # some scikits algorithms do not define an __init__ method # the one inherited from 'object' is a # "" # which does not have a 'im_func' attribute mdp_method.im_func.__doc__ = _gen_docstring(scikits_class, scikits_method.im_func) if scikits_class.__init__.__doc__ is None: ScikitsNode.__init__.im_func.__doc__ = _gen_docstring(scikits_class) return ScikitsNode def wrap_scikits_predictor(scikits_class): """Wrap a sklearn transformer as an MDP Node subclass. The wrapper maps these MDP methods to their sklearn equivalents: _stop_training -> fit _execute -> predict """ # create a wrapper class for a sklearn predictor class ScikitsNode(mdp.Cumulator): def __init__(self, input_dim=None, output_dim=None, dtype=None, **kwargs): if output_dim is not None: raise ScikitsException(_OUTPUTDIM_ERROR) super(ScikitsNode, self).__init__(input_dim=input_dim, output_dim=output_dim, dtype=dtype) self.scikits_alg = scikits_class(**kwargs) # ---- re-direct training and execution to the wrapped algorithm def _stop_training(self, **kwargs): super(ScikitsNode, self)._stop_training(self) return self.scikits_alg.fit(self.data, **kwargs) def _execute(self, x): return self.scikits_alg.predict(x) # ---- administrative details @staticmethod def is_invertible(): return False @staticmethod def is_trainable(): """Return True if the node can be trained, False otherwise.""" return hasattr(scikits_class, 'fit') # NOTE: at this point scikits nodes can only support up to 64-bits floats # because some call numpy.linalg.svd, which for some reason does not # support higher precisions def _get_supported_dtypes(self): """Return the list of dtypes supported by this node. 
The types can be specified in any format allowed by numpy.dtype.""" return ['float32', 'float64'] # modify class name and docstring ScikitsNode.__name__ = scikits_class.__name__ + 'ScikitsLearnNode' ScikitsNode.__doc__ = _gen_docstring(scikits_class) # change the docstring of the methods to match the ones in sklearn # methods_dict maps ScikitsNode method names to sklearn method names methods_dict = {'__init__': '__init__', 'stop_training': 'fit', 'execute': 'predict'} for mdp_name, scikits_name in methods_dict.items(): mdp_method = getattr(ScikitsNode, mdp_name) scikits_method = getattr(scikits_class, scikits_name) if hasattr(scikits_method, 'im_func'): # some scikits algorithms do not define an __init__ method # the one inherited from 'object' is a # "" # which does not have a 'im_func' attribute mdp_method.im_func.__doc__ = _gen_docstring(scikits_class, scikits_method.im_func) if scikits_class.__init__.__doc__ is None: ScikitsNode.__init__.im_func.__doc__ = _gen_docstring(scikits_class) return ScikitsNode #list candidate nodes def print_public_members(class_): """Print methods of sklearn algorithm. """ print '\n', '-' * 15 print '%s (%s)' % (class_.__name__, class_.__module__) for attr_name in dir(class_): attr = getattr(class_, attr_name) #print attr_name, type(attr) if not attr_name.startswith('_') and inspect.ismethod(attr): print ' -', attr_name #apply_to_scikits_algorithms(sklearn, print_public_members) def wrap_scikits_algorithms(scikits_class, nodes_list): """NEED DOCSTRING.""" name = scikits_class.__name__ if (name[:4] == 'Base' or name == 'LinearModel' or name.startswith('EllipticEnvelop') or name.startswith('ForestClassifier')): return if issubclass(scikits_class, sklearn.base.ClassifierMixin) and \ hasattr(scikits_class, 'fit'): nodes_list.append(wrap_scikits_classifier(scikits_class)) # Some (abstract) transformers do not implement fit. elif hasattr(scikits_class, 'transform') and hasattr(scikits_class, 'fit'): nodes_list.append(wrap_scikits_transformer(scikits_class)) elif hasattr(scikits_class, 'predict') and hasattr(scikits_class, 'fit'): nodes_list.append(wrap_scikits_predictor(scikits_class)) scikits_nodes = [] apply_to_scikits_algorithms(sklearn, lambda c: wrap_scikits_algorithms(c, scikits_nodes)) # add scikit nodes to dictionary #scikits_module = new.module('scikits') DICT_ = {} for wrapped_c in scikits_nodes: #print wrapped_c.__name__ DICT_[wrapped_c.__name__] = wrapped_c mdp-3.3/mdp/nodes/sfa_nodes.py000066400000000000000000000273721203131624700163540ustar00rootroot00000000000000__docformat__ = "restructuredtext en" import mdp from mdp import numx, Node, NodeException, TrainingException from mdp.utils import (mult, pinv, CovarianceMatrix, QuadraticForm, symeig, SymeigException) class SFANode(Node): """Extract the slowly varying components from the input data. More information about Slow Feature Analysis can be found in Wiskott, L. and Sejnowski, T.J., Slow Feature Analysis: Unsupervised Learning of Invariances, Neural Computation, 14(4):715-770 (2002). **Instance variables of interest** ``self.avg`` Mean of the input data (available after training) ``self.sf`` Matrix of the SFA filters (available after training) ``self.d`` Delta values corresponding to the SFA components (generalized eigenvalues). [See the docs of the ``get_eta_values`` method for more information] **Special arguments for constructor** ``include_last_sample`` If ``False`` the `train` method discards the last sample in every chunk during training when calculating the covariance matrix. 
The last sample is in this case only used for calculating the covariance matrix of the derivatives. The switch should be set to ``False`` if you plan to train with several small chunks. For example we can split a sequence (index is time):: x_1 x_2 x_3 x_4 in smaller parts like this:: x_1 x_2 x_2 x_3 x_3 x_4 The SFANode will see 3 derivatives for the temporal covariance matrix, and the first 3 points for the spatial covariance matrix. Of course you will need to use a generator that *connects* the small chunks (the last sample needs to be sent again in the next chunk). If ``include_last_sample`` was True, depending on the generator you use, you would either get:: x_1 x_2 x_2 x_3 x_3 x_4 in which case the last sample of every chunk would be used twice when calculating the covariance matrix, or:: x_1 x_2 x_3 x_4 in which case you loose the derivative between ``x_3`` and ``x_2``. If you plan to train with a single big chunk leave ``include_last_sample`` to the default value, i.e. ``True``. You can even change this behaviour during training. Just set the corresponding switch in the `train` method. """ def __init__(self, input_dim=None, output_dim=None, dtype=None, include_last_sample=True): """ For the ``include_last_sample`` switch have a look at the SFANode class docstring. """ super(SFANode, self).__init__(input_dim, output_dim, dtype) self._include_last_sample = include_last_sample # init two covariance matrices # one for the input data self._cov_mtx = CovarianceMatrix(dtype) # one for the derivatives self._dcov_mtx = CovarianceMatrix(dtype) # set routine for eigenproblem self._symeig = symeig # SFA eigenvalues and eigenvectors, will be set after training self.d = None self.sf = None # second index for outputs self.avg = None self._bias = None # avg multiplied with sf self.tlen = None def time_derivative(self, x): """Compute the linear approximation of the time derivative.""" # this is faster than a linear_filter or a weave-inline solution return x[1:, :]-x[:-1, :] def _set_range(self): if self.output_dim is not None and self.output_dim <= self.input_dim: # (eigenvalues sorted in ascending order) rng = (1, self.output_dim) else: # otherwise, keep all output components rng = None self.output_dim = self.input_dim return rng def _check_train_args(self, x, *args, **kwargs): # check that we have at least 2 time samples to # compute the update for the derivative covariance matrix s = x.shape[0] if s < 2: raise TrainingException('Need at least 2 time samples to ' 'compute time derivative (%d given)'%s) def _train(self, x, include_last_sample=None): """ For the ``include_last_sample`` switch have a look at the SFANode class docstring. """ if include_last_sample is None: include_last_sample = self._include_last_sample # works because x[:None] == x[:] last_sample_index = None if include_last_sample else -1 # update the covariance matrices self._cov_mtx.update(x[:last_sample_index, :]) self._dcov_mtx.update(self.time_derivative(x)) def _stop_training(self, debug=False): ##### request the covariance matrices and clean up self.cov_mtx, self.avg, self.tlen = self._cov_mtx.fix() del self._cov_mtx # do not center around the mean: # we want the second moment matrix (centered about 0) and # not the second central moment matrix (centered about the mean), i.e. 
# the covariance matrix self.dcov_mtx, self.davg, self.dtlen = self._dcov_mtx.fix(center=False) del self._dcov_mtx rng = self._set_range() #### solve the generalized eigenvalue problem # the eigenvalues are already ordered in ascending order try: self.d, self.sf = self._symeig(self.dcov_mtx, self.cov_mtx, range=rng, overwrite=(not debug)) d = self.d # check that we get only *positive* eigenvalues if d.min() < 0: err_msg = ("Got negative eigenvalues: %s." " You may either set output_dim to be smaller," " or prepend the SFANode with a PCANode(reduce=True)" " or PCANode(svd=True)"% str(d)) raise NodeException(err_msg) except SymeigException, exception: errstr = str(exception)+"\n Covariance matrices may be singular." raise NodeException(errstr) if not debug: # delete covariance matrix if no exception occurred del self.cov_mtx del self.dcov_mtx # store bias self._bias = mult(self.avg, self.sf) def _execute(self, x, n=None): """Compute the output of the slowest functions. If 'n' is an integer, then use the first 'n' slowest components.""" if n: sf = self.sf[:, :n] bias = self._bias[:n] else: sf = self.sf bias = self._bias return mult(x, sf) - bias def _inverse(self, y): return mult(y, pinv(self.sf)) + self.avg def get_eta_values(self, t=1): """Return the eta values of the slow components learned during the training phase. If the training phase has not been completed yet, call `stop_training`. The delta value of a signal is a measure of its temporal variation, and is defined as the mean of the derivative squared, i.e. delta(x) = mean(dx/dt(t)^2). delta(x) is zero if x is a constant signal, and increases if the temporal variation of the signal is bigger. The eta value is a more intuitive measure of temporal variation, defined as eta(x) = t/(2*pi) * sqrt(delta(x)) If x is a signal of length 't' which consists of a sine function that accomplishes exactly N oscillations, then eta(x)=N. :Parameters: t Sampling frequency in Hz. The original definition in (Wiskott and Sejnowski, 2002) is obtained for t = number of training data points, while for t=1 (default), this corresponds to the beta-value defined in (Berkes and Wiskott, 2005). """ if self.is_training(): self.stop_training() return self._refcast(t / (2 * numx.pi) * numx.sqrt(self.d)) class SFA2Node(SFANode): """Get an input signal, expand it in the space of inhomogeneous polynomials of degree 2 and extract its slowly varying components. The ``get_quadratic_form`` method returns the input-output function of one of the learned unit as a ``QuadraticForm`` object. See the documentation of ``mdp.utils.QuadraticForm`` for additional information. More information about Slow Feature Analysis can be found in Wiskott, L. 
and Sejnowski, T.J., Slow Feature Analysis: Unsupervised Learning of Invariances, Neural Computation, 14(4):715-770 (2002).""" def __init__(self, input_dim=None, output_dim=None, dtype=None, include_last_sample=True): self._expnode = mdp.nodes.QuadraticExpansionNode(input_dim=input_dim, dtype=dtype) super(SFA2Node, self).__init__(input_dim, output_dim, dtype, include_last_sample) @staticmethod def is_invertible(): """Return True if the node can be inverted, False otherwise.""" return False def _set_input_dim(self, n): self._expnode.input_dim = n self._input_dim = n def _train(self, x, include_last_sample=None): # expand in the space of polynomials of degree 2 super(SFA2Node, self)._train(self._expnode(x), include_last_sample) def _set_range(self): if (self.output_dim is not None) and ( self.output_dim <= self._expnode.output_dim): # (eigenvalues sorted in ascending order) rng = (1, self.output_dim) else: # otherwise, keep all output components rng = None return rng def _stop_training(self, debug=False): super(SFA2Node, self)._stop_training(debug) # set the output dimension if necessary if self.output_dim is None: self.output_dim = self._expnode.output_dim def _execute(self, x, n=None): """Compute the output of the slowest functions. If 'n' is an integer, then use the first 'n' slowest components.""" return super(SFA2Node, self)._execute(self._expnode(x), n) def get_quadratic_form(self, nr): """ Return the matrix H, the vector f and the constant c of the quadratic form 1/2 x'Hx + f'x + c that defines the output of the component 'nr' of the SFA node. """ if self.sf is None: self._if_training_stop_training() sf = self.sf[:, nr] c = -mult(self.avg, sf) n = self.input_dim f = sf[:n] h = numx.zeros((n, n), dtype=self.dtype) k = n for i in range(n): for j in range(n): if j > i: h[i, j] = sf[k] k = k+1 elif j == i: h[i, j] = 2*sf[k] k = k+1 else: h[i, j] = h[j, i] return QuadraticForm(h, f, c, dtype=self.dtype) ### old weave inline code to perform the time derivative # weave C code executed in the function SfaNode.time_derivative ## _TDERIVATIVE_1ORDER_CCODE = """ ## for( int i=0; i 2: msg = "In dual mode only two labels can be given" raise mdp.NodeException(msg) t_label_norm = zip(self._labels, [1, -1]) self._set_label_dicts(t_label_norm) elif mode == "multi": # enumerate from zero to len t_label_norm = zip(self._labels, count()) self._set_label_dicts(t_label_norm) else: msg = "Remapping mode not known" raise mdp.NodeException(msg) def _set_label_dicts(self, t_label_norm): self._mapping = dict(t_label_norm) self._inverse = dict((norm, label) for label, norm in t_label_norm) # check that neither original nor normalised labels have occured more than once if not (len(self._mapping) == len(t_label_norm) == len(self._inverse)): msg = "Error in label normalisation." 
raise mdp.NodeException(msg) def normalize(self, labels): return map(self._mapping.get, labels) def revert(self, norm_labels): return map(self._inverse.get, norm_labels) def _id(self, labels): return labels class _SVMClassifier(ClassifierCumulator): """Base class for the SVM classifier nodes.""" def __init__(self, input_dim=None, output_dim=None, dtype=None): self.normalizer = None super(_SVMClassifier, self).__init__(input_dim=input_dim, output_dim=output_dim, dtype=dtype) @staticmethod def is_invertible(): return False mdp-3.3/mdp/nodes/xsfa_nodes.py000066400000000000000000000322771203131624700165440ustar00rootroot00000000000000__docformat__ = "restructuredtext en" import mdp class XSFANode(mdp.Node): """Perform Non-linear Blind Source Separation using Slow Feature Analysis. This node is designed to iteratively extract statistically independent sources from (in principle) arbitrary invertible nonlinear mixtures. The method relies on temporal correlations in the sources and consists of a combination of nonlinear SFA and a projection algorithm. More details can be found in the reference given below (once it's published). The node has multiple training phases. The number of training phases depends on the number of sources that must be extracted. The recommended way of training this node is through a container flow:: >>> flow = mdp.Flow([XSFANode()]) >>> flow.train(x) doing so will automatically train all training phases. The argument ``x`` to the ``Flow.train`` method can be an array or a list of iterables (see the section about Iterators in the MDP tutorial for more info). If the number of training samples is large, you may run into memory problems: use data iterators and chunk training to reduce memory usage. If you need to debug training and/or execution of this node, the suggested approach is to use the capabilities of BiMDP. For example:: >>> flow = mdp.Flow([XSFANode()]) >>> tr_filename = bimdp.show_training(flow=flow, data_iterators=x) >>> ex_filename, out = bimdp.show_execution(flow, x=x) this will run training and execution with bimdp inspection. Snapshots of the internal flow state for each training phase and execution step will be opened in a web brower and presented as a slideshow. References: Sprekeler, H., Zito, T., and Wiskott, L. (2009). An Extension of Slow Feature Analysis for Nonlinear Blind Source Separation. Journal of Machine Learning Research. http://cogprints.org/7056/1/SprekelerZitoWiskott-Cogprints-2010.pdf """ def __init__(self, basic_exp=None, intern_exp=None, svd=False, verbose=False, input_dim=None, output_dim=None, dtype=None): """ :Keywords: basic_exp a tuple ``(node, args, kwargs)`` defining the node used for the basic nonlinear expansion. It is assumed that the mixture is linearly invertible after this expansion. The higher the complexity of the nonlinearity, the higher are the chances of inverting the unknown mixture. On the other hand, high complexity of the nonlinear expansion increases the danger of numeric instabilities, which can cause singularities in the simulation or errors in the source estimation. The trade-off has to be evaluated carefully. Default: ``(mdp.nodes.PolynomialExpansionNode, (2, ), {})`` intern_exp a tuple ``(node, args, kwargs)`` defining the node used for the internal nonlinear expansion of the estimated sources to be removed from the input space. The same trade-off as for basic_exp is valid here. 
Default: ``(mdp.nodes.PolynomialExpansionNode, (10, ), {})`` svd enable Singular Value Decomposition for normalization and regularization. Use it if the node complains about singular covariance matrices. verbose show some progress during training. Default: False """ # set up basic expansion if basic_exp is None: self.basic_exp = mdp.nodes.PolynomialExpansionNode self.basic_exp_args = (2, ) self.basic_exp_kwargs = {} else: self.basic_exp = basic_exp[0] self.basic_exp_args = basic_exp[1] self.basic_exp_kwargs = basic_exp[2] # set up internal expansion if intern_exp is None: self.exp = mdp.nodes.PolynomialExpansionNode self.exp_args = (10, ) self.exp_kwargs = {} else: self.exp = intern_exp[0] self.exp_args = intern_exp[1] self.exp_kwargs = intern_exp[2] # number of sources already extracted self.n_extracted_src = 0 # internal network self._flow = None self.verbose = verbose self.svd = svd super(XSFANode, self).__init__(input_dim=input_dim, output_dim=output_dim, dtype=dtype) @property def flow(self): """Read-only internal flow property.""" return self._flow def _get_train_seq(self): #XXX: this is a hack # In order to enable the output_dim to be set automatically # after input_dim is known, instead of forcing the user to specify # it by hand, we need to initialize the internal flow just before # starting the first training (input_dim are known at that moment). # Problem is that when XSFANode is trained through a container flow, # which is the standard way of training this kind of nodes, # the flow checks that the data_iterators are *not* generators # for multiple phases nodes. To assess if a node has multiple phases # it checks that len(self._train_seq) > 1. But we still # don't know the number of training_phases at this point, because we # first need to know input_dim, which we will know after we receive the # first chunk of data. To avoid the flow to complain we just return # a bogus list of training phases: it should break anything else. if self._flow is None: # we still don't know the number of training_phases yet, # but we can assure that we will have more than 1: return [(None, None), (None, None)] else: return ([(self._train, self._stop_training)] * sum(self._training_phases)) def _set_input_dim(self, n): self._input_dim = n # set output_dim if thery are still not set if self.output_dim is None: self.output_dim = n def _check_train_args(self, x): # this method will be called before starting training. # it is the right moment to initialize the internal flow if self._flow is None: self._initialize_internal_flow() if self.verbose: print "Extracting source 1..." 
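# Illustrative sketch (not part of the original code): ``basic_exp`` and
# ``intern_exp`` above are plain ``(node_class, args, kwargs)`` tuples, so a
# cubic basic expansion could for example be requested as
#     XSFANode(basic_exp=(mdp.nodes.PolynomialExpansionNode, (3, ), {}))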
def _initialize_internal_flow(self): # create the initial flow if it's not there already # set input_dim is needed to correctly create the first # network layer self.basic_exp_kwargs['input_dim'] = self.input_dim exp = self.basic_exp(*self.basic_exp_args, **self.basic_exp_kwargs) # first element of the flow is the basic expansion node # after that the first source extractor module is appended self._flow = (exp + self._get_source_extractor(exp.output_dim, 0)) # set the training phases # set the total number of training phases training_phases = [] for S in range(self.output_dim): # get the number of training phases of every single # source extractor module mod = self._get_source_extractor(S+1, S) training_phases.append(len(mod._train_seq)) self._training_phases = training_phases # this is a list of the training phases the correspond to # completed training of a source extractor module self._training_phases_mods = [sum(training_phases[:i+1]) for i in range(len(training_phases[:-1]))] @staticmethod def is_invertible(): return False def _train(self, x): # train the last source extractor module in the flow self._flow[-1].train(self._flow[:-1](x)) def _stop_training(self): # stop the current training phase self._flow[-1].stop_training() # update the current training phase cur_tr_ph = self.get_current_train_phase() + 1 # if we finished to train the current source extractor module # and we still have to extract some sources # append a new source extractor module if (cur_tr_ph in self._training_phases_mods and self.n_extracted_src != (self.output_dim - 1)): self.n_extracted_src += 1 mod = self._get_source_extractor(self._flow[-1].output_dim, self.n_extracted_src) self._flow.append(mod) if self.verbose: print "Extracting source %d..." % (self.n_extracted_src+1) def _execute(self, x): return self._flow(x)[:,:self.output_dim] def _get_source_extractor(self, dim, nsources): # returns a module to extract the next source and remove its # projections in the data space S = nsources L = dim-S # sfa - extracts the next source sfa = mdp.nodes.SFANode(input_dim=L, output_dim=L) # identity - copies the new sources idn_new1 = mdp.nodes.IdentityNode(input_dim=S+1) # source expansion self.exp_kwargs['input_dim'] = S + 1 # N2 src_exp = mdp.hinet.FlowNode(self.exp(*self.exp_args, **self.exp_kwargs) + NormalizeNode() + mdp.nodes.WhiteningNode(svd=self.svd, reduce=True)) N2Layer = mdp.hinet.SameInputLayer((src_exp, idn_new1)) N2ContLayer = mdp.hinet.Layer((N2Layer, mdp.nodes.IdentityNode(input_dim=L-1))) if S == 0: # don't need to copy the current sources (there are none) N1 = mdp.hinet.FlowNode(sfa + N2ContLayer) elif S == self.output_dim - 1: # the last source does not need to be removed # take care of passing the sources down along the flow idn_old = mdp.nodes.IdentityNode(input_dim=S) return mdp.hinet.Layer((idn_old, mdp.nodes.SFANode(input_dim=L, output_dim=1))) else: # take care of passing the sources down along the flow idn_old = mdp.nodes.IdentityNode(input_dim=S) N1 = mdp.hinet.FlowNode(mdp.hinet.Layer((idn_old, sfa)) + N2ContLayer) # expanded sources projection proj = ProjectionNode(S, L-1) # use another identity node to copy the sources # we could in principle reuse the idn_new1 but using a new # node will make debugging much easier idn_new2 = mdp.nodes.IdentityNode(input_dim=S+1) # regularization after projection + new source copying reg_and_copy = mdp.hinet.Layer((idn_new2, mdp.nodes.WhiteningNode(input_dim=L-1, svd=self.svd, reduce=True))) # actual source removal flow src_rem = mdp.hinet.FlowNode( proj + 
reg_and_copy ) # return the actual source extraction module return mdp.hinet.FlowNode(N1 + src_rem) class ProjectionNode(mdp.Node): """Get expanded sources and input signals, and return the sources and the input signals projected into the space orthogonal to the expanded sources and their products.""" def __init__(self, S, L): #!! IMPORTANT!! # this node *must* return the sources together with the # projected input signals self.proj_mtx = None self.L = L super(ProjectionNode, self).__init__(output_dim=S+1+L) self._cov_mtx = mdp.utils.CrossCovarianceMatrix(self.dtype) def _train(self, x): # compute covariance between expanded sources # and input signals self._cov_mtx.update(x[:,:-self.output_dim], x[:,-self.L:]) def _stop_training(self): self.proj_mtx, avgx, avgy, self.tlen = self._cov_mtx.fix() def _execute(self, x): src = x[:, -self.output_dim:-self.L] exp = x[:, :-self.output_dim] inp = x[:, -self.L:] # result container result = mdp.numx.zeros((x.shape[0], self.output_dim)) # project input on the plane orthogonal to the expanded sources result[:, -self.L:] = inp - mdp.utils.mult(exp, self.proj_mtx) # copy the sources result[:, :-self.L] = src return result class NormalizeNode(mdp.PreserveDimNode): """Make input signal meanfree and unit variance""" def __init__(self, input_dim=None, output_dim=None, dtype=None): self._cov_mtx = mdp.utils.CovarianceMatrix(dtype) super(NormalizeNode, self).__init__(input_dim, output_dim, dtype) @staticmethod def is_trainable(): return True def _train(self, x): self._cov_mtx.update(x) def _stop_training(self): cov_mtx, avg, tlen = self._cov_mtx.fix() self.m = avg self.s = mdp.numx.sqrt(mdp.numx.diag(cov_mtx)) def _execute(self, x): return (x - self.m)/self.s def _inverse(self, y): return y*self.s + self.m mdp-3.3/mdp/parallel/000077500000000000000000000000001203131624700145125ustar00rootroot00000000000000mdp-3.3/mdp/parallel/__init__.py000066400000000000000000000060371203131624700166310ustar00rootroot00000000000000""" This is the MDP package for parallel processing. It is designed to work with nodes for which a large part of the computation is embaressingly parallel (like in :class:`~mdp.nodes.PCANode`). The hinet package is also fully supported, i.e., there are parallel versions of all hinet nodes. This package consists of two decoupled parts. The first part consists of parallel versions of the familiar MDP structures (nodes and flows). At the top there is the :class:`~ParallelFlow`, which generates tasks that are processed in parallel (this can be done automatically in the train or execute methods). The second part consists of the schedulers. They take tasks and process them in a more or less parallel way (e.g. in multiple processes). So they are designed to deal with the more technical aspects of the parallelization, but do not have to know anything about flows or nodes. 
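A minimal usage sketch (the node choice, the number of processes and the
training data ``data`` below are only illustrative)::

    >>> flow = mdp.parallel.ParallelFlow([mdp.nodes.PCANode(output_dim=5)])
    >>> scheduler = mdp.parallel.ProcessScheduler(n_processes=2)
    >>> flow.train([[data]], scheduler=scheduler)
    >>> scheduler.shutdown()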
""" from scheduling import ( ResultContainer, ListResultContainer, OrderedResultContainer, TaskCallable, SqrTestCallable, SleepSqrTestCallable, TaskCallableWrapper, Scheduler, cpu_count, MDPVersionCallable ) from process_schedule import ProcessScheduler from thread_schedule import ThreadScheduler from parallelnodes import ( ParallelExtensionNode, NotForkableParallelException, JoinParallelException, ParallelPCANode, ParallelSFANode, ParallelFDANode, ParallelHistogramNode ) from parallelclassifiers import ( ParallelGaussianClassifier, ParallelNearestMeanClassifier, ParallelKNNClassifier ) from parallelflows import ( _purge_flownode, FlowTaskCallable, FlowTrainCallable, FlowExecuteCallable, TrainResultContainer, ExecuteResultContainer, ParallelFlowException, NoTaskException, ParallelFlow, ParallelCheckpointFlow ) from parallelhinet import ( ParallelFlowNode, ParallelLayer, ParallelCloneLayer ) from mdp import config from mdp.utils import fixup_namespace if config.has_parallel_python: import pp_support # Note: the modules with the actual extension node classes are still available __all__ = [ "ResultContainer", "ListResultContainer", "OrderedResultContainer", "TaskCallable", "SqrTestCallable", "SleepSqrTestCallable", "TaskCallableWrapper", "Scheduler", "ProcessScheduler", "ThreadScheduler", "ParallelExtensionNode", "JoinParallelException", "NotForkableParallelException", "ParallelSFANode", "ParallelSFANode", "ParallelFDANode", "ParallelHistogramNode", "FlowTaskCallable", "FlowTrainCallable", "FlowExecuteCallable", "ExecuteResultContainer", "TrainResultContainer", "ParallelFlowException", "NoTaskException", "ParallelFlow", "ParallelCheckpointFlow", "ParallelFlowNode", "ParallelLayer", "ParallelCloneLayer"] import sys as _sys fixup_namespace(__name__, __all__, ('scheduling', 'process_schedule', 'thread_schedule', 'parallelnodes', 'parallelflows', 'parallelhinet', 'parallelclassifiers', 'config', 'fixup_namespace' )) mdp-3.3/mdp/parallel/parallelclassifiers.py000066400000000000000000000035301203131624700211110ustar00rootroot00000000000000""" Module for MDP classifiers that support parallel training. 
""" import mdp from mdp import numx from mdp.parallel import ParallelExtensionNode class ParallelGaussianClassifier(ParallelExtensionNode, mdp.nodes.GaussianClassifier): def _fork(self): return self._default_fork() def _join(self, forked_node): if not self._cov_objs: self.set_dtype(forked_node._dtype) self._cov_objs = forked_node._cov_objs else: for key, forked_cov in forked_node._cov_objs.items(): if key in self._cov_objs: self._join_covariance(self._cov_objs[key], forked_cov) else: self._cov_objs[key] = forked_cov class ParallelNearestMeanClassifier(ParallelExtensionNode, mdp.nodes.NearestMeanClassifier): def _fork(self): return self._default_fork() def _join(self, forked_node): for key in forked_node.label_means: if key in self.label_means: self.label_means[key] += forked_node.label_means[key] self.n_label_samples[key] += forked_node.n_label_samples[key] else: self.label_means[key] = forked_node.label_means[key] self.n_label_samples[key] = forked_node.n_label_samples[key] class ParallelKNNClassifier(ParallelExtensionNode, mdp.nodes.KNNClassifier): def _fork(self): return self._default_fork() def _join(self, forked_node): for key in forked_node._label_samples: if key in self._label_samples: self._label_samples[key] += forked_node._label_samples[key] else: self._label_samples[key] = forked_node._label_samples[key] mdp-3.3/mdp/parallel/parallelflows.py000066400000000000000000001020071203131624700177330ustar00rootroot00000000000000""" Module for parallel flows that can handle the parallel training / execution. Corresponding classes for task callables and ResultContainer are defined here as well. """ import mdp from mdp import numx as n from parallelnodes import NotForkableParallelException from scheduling import ( TaskCallable, ResultContainer, OrderedResultContainer, Scheduler ) from mdp.hinet import FlowNode ### Helper code for node purging before transport. ### class _DummyNode(mdp.Node): """Dummy node class for empty nodes.""" @staticmethod def is_trainable(): return False def _execute(self, x): err = "This is only a dummy created by 'parallel._purge_flownode'." raise mdp.NodeException(err) _DUMMY_NODE = _DummyNode() def _purge_flownode(flownode): """Replace nodes that are """ for i_node, node in enumerate(flownode._flow): if not (node._train_phase_started or node.use_execute_fork()): flownode._flow.flow[i_node] = _DUMMY_NODE ### Train task classes ### class FlowTaskCallable(TaskCallable): """Base class for all flow callables. It deals activating the required extensions. """ def __init__(self): """Store the currently active extensions.""" self._used_extensions = mdp.get_active_extensions() super(FlowTaskCallable, self).__init__() def setup_environment(self): """Activate the used extensions.""" # deactivate all active extensions for safety mdp.deactivate_extensions(mdp.get_active_extensions()) mdp.activate_extensions(self._used_extensions) class FlowTrainCallable(FlowTaskCallable): """Implements a single training phase in a flow for a data block. A FlowNode is used to simplify the forking process and to encapsulate the flow. You can also derive from this class to define your own callable class. """ def __init__(self, flownode, purge_nodes=True): """Store everything for the training. keyword arguments: flownode -- FlowNode containing the flow to be trained. purge_nodes -- If True nodes not needed for the join will be replaced with dummy nodes to reduce the footprint. 
""" self._flownode = flownode self._purge_nodes = purge_nodes super(FlowTrainCallable, self).__init__() def __call__(self, data): """Do the training and return only the trained node. data -- training data block (array or list if additional arguments are required) """ if type(data) is n.ndarray: self._flownode.train(data) else: self._flownode.train(*data) # note the local training in ParallelFlow relies on the flownode # being preserved, so derived classes should preserve it as well if self._purge_nodes: _purge_flownode(self._flownode) return self._flownode def fork(self): return self.__class__(self._flownode.fork(), purge_nodes=self._purge_nodes) class TrainResultContainer(ResultContainer): """Container for parallel nodes. Expects flownodes as results and joins them to save memory. A list containing one flownode is returned, so this container can replace the standard list container without any changes elsewhere. """ def __init__(self): super(TrainResultContainer, self).__init__() self._flownode = None def add_result(self, result, task_index): if not self._flownode: self._flownode = result else: self._flownode.join(result) def get_results(self): flownode = self._flownode self._flownode = None return [flownode,] ### Execute task classes ### class FlowExecuteCallable(FlowTaskCallable): """Implements data execution through a Flow. A FlowNode is used to simplify the forking process and to encapsulate the flow. """ def __init__(self, flownode, nodenr=None, purge_nodes=True): """Store everything for the execution. flownode -- FlowNode for the execution nodenr -- optional nodenr argument for the flow execute method purge_nodes -- If True nodes not needed for the join will be replaced with dummy nodes to reduce the footprint. """ self._flownode = flownode self._nodenr = nodenr self._purge_nodes = purge_nodes super(FlowExecuteCallable, self).__init__() def __call__(self, x): """Return the execution result. x -- data chunk If use_fork_execute is True for the flownode then it is returned in the result tuple. """ y = self._flownode.execute(x, nodenr=self._nodenr) if self._flownode.use_execute_fork(): if self._purge_nodes: _purge_flownode(self._flownode) return (y, self._flownode) else: return (y, None) def fork(self): return self.__class__(self._flownode.fork(), nodenr=self._nodenr, purge_nodes=self._purge_nodes) class ExecuteResultContainer(OrderedResultContainer): """Default result container with automatic restoring of the result order. This result container should be used together with BiFlowExecuteCallable. Both the execute result (x and possibly msg) and the forked BiFlowNode are stored. """ def __init__(self): """Initialize attributes.""" super(ExecuteResultContainer, self).__init__() self._flownode = None def add_result(self, result, task_index): """Remove the forked BiFlowNode from the result and join it.""" excecute_result, forked_flownode = result super(ExecuteResultContainer, self).add_result(excecute_result, task_index) if forked_flownode is not None: if self._flownode is None: self._flownode = forked_flownode else: self._flownode.join(forked_flownode) def get_results(self): """Return the ordered results. The joined BiFlowNode is returned in the first result list entry, for the following result entries BiFlowNode is set to None. This reduces memory consumption while staying transparent for the ParallelBiFlow. 
""" excecute_results = super(ExecuteResultContainer, self).get_results() flownode_results = ([self._flownode,] + ([None] * (len(excecute_results)-1))) return zip(excecute_results, flownode_results) ### ParallelFlow Class ### class ParallelFlowException(mdp.FlowException): """Standard exception for problems with ParallelFlow.""" pass class NoTaskException(ParallelFlowException): """Exception for problems with the task creation.""" pass class ParallelFlow(mdp.Flow): """A parallel flow provides the methods for parallel training / execution. Nodes in the flow which are not derived from ParallelNode are trained in the normal way. The training is also done normally if fork() raises a TrainingPhaseNotParallelException. This can be intentionally used by the node to request local training without forking. Parallel execution on the other hand should work for all nodes, since it only relies on the copy method of nodes. The stop_training method is always called locally, with no forking or copying involved. Both parallel training and execution can be done conveniently by providing a scheduler instance to the train or execute method. It is also possible to manage the tasks manually. This is done via the methods setup_parallel_training (or execution), get_task and use_results. The code of the train / execute method can serve as an example how to use these methods and process the tasks by a scheduler. """ def __init__(self, flow, verbose=False, **kwargs): """Initialize the internal variables. Note that the crash_recovery flag is is not supported, so it is disabled. """ kwargs["crash_recovery"] = False super(ParallelFlow, self).__init__(flow, verbose=verbose, **kwargs) self._train_data_iterables = None # all training data self._train_data_iterator = None # iterator for current training # index of currently trained node, also used as flag for training # takes value None for not training self._i_train_node = None self._flownode = None # used during training # iterable for execution data # also signals if parallel execution is underway self._exec_data_iterator = None self._next_task = None # buffer for next task self._train_callable_class = None self._execute_callable_class = None @mdp.with_extension("parallel") def train(self, data_iterables, scheduler=None, train_callable_class=None, overwrite_result_container=True, **kwargs): """Train all trainable nodes in the flow. If a scheduler is provided the training will be done in parallel on the scheduler. data_iterables -- A list of iterables, one for each node in the flow. The iterators returned by the iterables must return data arrays that are then used for the node training. See Flow.train for more details. If a custom train_callable_class is used to preprocess the data then other data types can be used as well. scheduler -- Value can be either None for normal training (default value) or a Scheduler instance for parallel training with the scheduler. If the scheduler value is an iterable or iterator then it is assumed that it contains a scheduler for each training phase. After a node has been trained the scheduler is shutdown. Note that you can e.g. use a generator to create the schedulers just in time. For nodes which are not trained the scheduler can be None. train_callable_class -- Class used to create training callables for the scheduler. By specifying your own class you can implement data transformations before the data is actually fed into the flow (e.g. from 8 bit image to 64 bit double precision). 
Note that the train_callable_class is only used if a scheduler was provided. By default NodeResultContainer is used. overwrite_result_container -- If set to True (default value) then the result container in the scheduler will be overwritten with an instance of NodeResultContainer (unless it is already an instance of NodeResultContainer). This improves the memory efficiency. """ # Warning: If this method is updated you also have to update train # in ParallelCheckpointFlow. if self.is_parallel_training: raise ParallelFlowException("Parallel training is underway.") if scheduler is None: if train_callable_class is not None: err = ("A train_callable_class was specified but no scheduler " "was given, so the train_callable_class has no effect.") raise ParallelFlowException(err) super(ParallelFlow, self).train(data_iterables, **kwargs) else: if train_callable_class is None: train_callable_class = FlowTrainCallable schedulers = None # do parallel training try: self.setup_parallel_training( data_iterables, train_callable_class=train_callable_class, **kwargs) # prepare scheduler if not isinstance(scheduler, Scheduler): # scheduler contains an iterable with the schedulers # self._i_train_node was set in setup_parallel_training schedulers = iter(scheduler) scheduler = schedulers.next() if self._i_train_node > 0: # dispose schedulers for pretrained nodes for _ in range(self._i_train_node): if scheduler is not None: scheduler.shutdown() scheduler = schedulers.next() elif self._i_train_node is None: # all nodes are already trained, dispose schedulers for _ in range(len(self.flow) - 1): if scheduler is not None: scheduler.shutdown() # the last scheduler will be shutdown in finally scheduler = schedulers.next() last_trained_node = self._i_train_node else: schedulers = None # check that the scheduler is compatible if ((scheduler is not None) and overwrite_result_container and (not isinstance(scheduler.result_container, TrainResultContainer))): scheduler.result_container = TrainResultContainer() ## train all nodes while self.is_parallel_training: while self.task_available: task = self.get_task() scheduler.add_task(*task) results = scheduler.get_results() if results == []: err = ("Could not get any training tasks or results " "for the current training phase.") raise Exception(err) else: self.use_results(results) # check if we have to switch to next scheduler if ((schedulers is not None) and (self._i_train_node is not None) and (self._i_train_node > last_trained_node)): # dispose unused schedulers for _ in range(self._i_train_node - last_trained_node): if scheduler is not None: scheduler.shutdown() scheduler = schedulers.next() last_trained_node = self._i_train_node # check that the scheduler is compatible if ((scheduler is not None) and overwrite_result_container and (not isinstance(scheduler.result_container, TrainResultContainer))): scheduler.result_container = TrainResultContainer() finally: # reset iterable references, which cannot be pickled self._train_data_iterables = None self._train_data_iterator = None if (schedulers is not None) and (scheduler is not None): scheduler.shutdown() def setup_parallel_training(self, data_iterables, train_callable_class=FlowTrainCallable): """Prepare the flow for handing out tasks to do the training. After calling setup_parallel_training one has to pick up the tasks with get_task, run them and finally return the results via use_results. tasks are available as long as task_available returns True. 
Training may require multiple phases, which are each closed by calling use_results. data_iterables -- A list of iterables, one for each node in the flow. The iterators returned by the iterables must return data arrays that are then used for the node training. See Flow.train for more details. If a custom train_callable_class is used to preprocess the data then other data types can be used as well. train_callable_class -- Class used to create training callables for the scheduler. By specifying your own class you can implement data transformations before the data is actually fed into the flow (e.g. from 8 bit image to 64 bit double precision). """ if self.is_parallel_training: err = "Parallel training is already underway." raise ParallelFlowException(err) self._train_callable_class = train_callable_class self._train_data_iterables = self._train_check_iterables(data_iterables) self._i_train_node = 0 self._flownode = FlowNode(mdp.Flow(self.flow)) self._next_train_phase() def _next_train_phase(self): """Find the next phase or node for parallel training. When it is found the corresponding internal variables are set. Nodes which are not derived from ParallelNode are trained locally. If a fork() fails due to a TrainingPhaseNotParallelException in a certain train phase, then the training is done locally as well (but fork() is tested again for the next phase). """ # find next node that can be forked, if required do local training while self._i_train_node < len(self.flow): current_node = self.flow[self._i_train_node] if not current_node.is_training(): self._i_train_node += 1 continue data_iterable = self._train_data_iterables[self._i_train_node] try: self._flownode.fork() # fork successful, prepare parallel training if self.verbose: print ("start parallel training phase of " + "node no. %d in parallel flow" % (self._i_train_node+1)) self._train_data_iterator = iter(data_iterable) first_task = self._create_train_task() # make sure that the iterator is not empty if first_task is None: if current_node.get_current_train_phase() == 1: err_str = ("The training data iteration for node " "no. %d could not be repeated for the " "second training phase, you probably " "provided an iterator instead of an " "iterable." % (self._i_train_node+1)) raise mdp.FlowException(err_str) else: err_str = ("The training data iterator for node " "no. %d is empty." % (self._i_train_node+1)) raise mdp.FlowException(err_str) task_data_chunk = first_task[0] # Only first task contains the new callable (enable caching). # A fork is not required here, since the callable is always # forked in the scheduler. self._next_task = (task_data_chunk, self._train_callable_class(self._flownode)) break except NotForkableParallelException, exception: if self.verbose: print ("could not fork node no. %d: %s" % (self._i_train_node+1, str(exception))) print ("start nonparallel training phase of " + "node no. %d in parallel flow" % (self._i_train_node+1)) self._local_train_phase(data_iterable) if self.verbose: print ("finished nonparallel training phase of " + "node no. %d in parallel flow" % (self._i_train_node+1)) self._stop_training_hook() self._flownode.stop_training() self._post_stop_training_hook() if not self.flow[self._i_train_node].is_training(): self._i_train_node += 1 else: # training is finished self._i_train_node = None def _local_train_phase(self, data_iterable): """Perform a single training phase locally. The internal _train_callable_class is used for the training. 
""" current_node = self.flow[self._i_train_node] task_callable = self._train_callable_class(self._flownode, purge_nodes=False) empty_iterator = True for i_task, data in enumerate(data_iterable): empty_iterator = False # Note: if x contains additional args assume that the # callable can handle this task_callable(data) if self.verbose: print (" finished nonparallel task no. %d" % (i_task+1)) if empty_iterator: if current_node.get_current_train_phase() == 1: err_str = ("The training data iteration for node " "no. %d could not be repeated for the " "second training phase, you probably " "provided an iterator instead of an " "iterable." % (self._i_train_node+1)) raise mdp.FlowException(err_str) else: err_str = ("The training data iterator for node " "no. %d is empty." % (self._i_train_node+1)) raise mdp.FlowException(err_str) def _post_stop_training_hook(self): """Hook method that is called after stop_training is called.""" pass def _create_train_task(self): """Create and return a single training task without callable. Returns None if data iterator end is reached. """ try: return (self._train_data_iterator.next(), None) except StopIteration: return None @mdp.with_extension("parallel") def execute(self, iterable, nodenr=None, scheduler=None, execute_callable_class=None, overwrite_result_container=True): """Train all trainable nodes in the flow. If a scheduler is provided the execution will be done in parallel on the scheduler. iterable -- An iterable or iterator that returns data arrays that are used as input to the flow. Alternatively, one can specify one data array as input. If a custom execute_callable_class is used to preprocess the data then other data types can be used as well. nodenr -- Same as in normal flow, the flow is only executed up to the nodenr. scheduler -- Value can be either None for normal execution (default value) or a Scheduler instance for parallel execution with the scheduler. execute_callable_class -- Class used to create execution callables for the scheduler. By specifying your own class you can implement data transformations before the data is actually fed into the flow (e.g. from 8 bit image to 64 bit double precision). Note that the execute_callable_class is only used if a scheduler was provided. If a scheduler is provided the default class used is NodeResultContainer. overwrite_result_container -- If set to True (default value) then the result container in the scheduler will be overwritten with an instance of OrderedResultContainer (unless it is already an instance of OrderedResultContainer). Otherwise the results might have a different order than the data chunks, which could mess up any subsequent analysis. 
""" if self.is_parallel_training: raise ParallelFlowException("Parallel training is underway.") if scheduler is None: if execute_callable_class is not None: err = ("A execute_callable_class was specified but no " "scheduler was given, so the execute_callable_class " "has no effect.") raise ParallelFlowException(err) return super(ParallelFlow, self).execute(iterable, nodenr) if execute_callable_class is None: execute_callable_class = FlowExecuteCallable # check that the scheduler is compatible if overwrite_result_container: if not isinstance(scheduler.result_container, ExecuteResultContainer): scheduler.result_container = ExecuteResultContainer() # do parallel execution self._flownode = FlowNode(mdp.Flow(self.flow)) try: self.setup_parallel_execution( iterable, nodenr=nodenr, execute_callable_class=execute_callable_class) while self.task_available: task = self.get_task() scheduler.add_task(*task) result = self.use_results(scheduler.get_results()) finally: # reset remaining iterator references, which cannot be pickled self._exec_data_iterator = None return result def setup_parallel_execution(self, iterable, nodenr=None, execute_callable_class=FlowExecuteCallable): """Prepare the flow for handing out tasks to do the execution. After calling setup_parallel_execution one has to pick up the tasks with get_task, run them and finally return the results via use_results. use_results will then return the result as if the flow was executed in the normal way. iterable -- An iterable or iterator that returns data arrays that are used as input to the flow. Alternatively, one can specify one data array as input. If a custom execute_callable_class is used to preprocess the data then other data types can be used as well. nodenr -- Same as in normal flow, the flow is only executed up to the nodenr. execute_callable_class -- Class used to create execution callables for the scheduler. By specifying your own class you can implement data transformations before the data is actually fed into the flow (e.g. from 8 bit image to 64 bit double precision). """ if self.is_parallel_training: raise ParallelFlowException("Parallel training is underway.") self._execute_callable_class = execute_callable_class if isinstance(iterable, n.ndarray): iterable = [iterable] self._exec_data_iterator = iter(iterable) first_task = self._create_execute_task() if first_task is None: errstr = ("The execute data iterator is empty.") raise mdp.FlowException(errstr) task_data_chunk = first_task[0] # Only first task contains the new callable (enable caching). # A fork is not required here, since the callable is always # forked in the scheduler. self._next_task = (task_data_chunk, self._execute_callable_class(self._flownode, purge_nodes=True)) def _create_execute_task(self): """Create and return a single execution task. Returns None if data iterator end is reached. """ try: # TODO: check if forked task is forkable before enforcing caching return (self._exec_data_iterator.next(), None) except StopIteration: return None def get_task(self): """Return a task either for either training or execution. A a one task buffer is used to make task_available work. tasks are available as long as need_result returns False or all the training / execution is done. If no tasks are available a NoTaskException is raised. 
""" if self._next_task is not None: task = self._next_task if self._i_train_node is not None: self._next_task = self._create_train_task() elif self._exec_data_iterator is not None: self._next_task = self._create_execute_task() else: raise NoTaskException("No data available for execution task.") return task else: raise NoTaskException("No task available for execution.") @property def is_parallel_training(self): """Return True if parallel training is underway.""" return self._i_train_node is not None @property def is_parallel_executing(self): """Return True if parallel execution is underway.""" return self._exec_data_iterator is not None @property def task_available(self): """Return True if tasks are available, otherwise False. If False is returned this can indicate that results are needed to continue training. """ return self._next_task is not None def use_results(self, results): """Use the result from the scheduler. During parallel training this will start the next training phase. For parallel execution this will return the result, like a normal execute would. results -- Iterable containing the results, normally the return value of scheduler.ResultContainer.get_results(). The individual results can be the return values of the tasks. """ if self.is_parallel_training: for result in results: # the flownode contains the original nodes self._flownode.join(result) if self.verbose: print ("finished parallel training phase of node no. " + "%d in parallel flow" % (self._i_train_node+1)) self._stop_training_hook() self._flownode.stop_training() self._post_stop_training_hook() if not self.flow[self._i_train_node].is_training(): self._i_train_node += 1 self._next_train_phase() elif self.is_parallel_executing: self._exec_data_iterator = None ys = [result[0] for result in results] if self._flownode.use_execute_fork(): flownodes = [result[1] for result in results] for flownode in flownodes: if flownode is not None: self._flownode.join(flownode) return n.concatenate(ys) class ParallelCheckpointFlow(ParallelFlow, mdp.CheckpointFlow): """Parallel version of CheckpointFlow. Note that train phases are always closed, so e.g. CheckpointSaveFunction should not expect open train phases. This is necessary since otherwise stop_training() would be called remotely. """ def __init__(self, flow, verbose=False, **kwargs): """Initialize the internal variables.""" self._checkpoints = None super(ParallelCheckpointFlow, self).__init__(flow=flow, verbose=verbose, **kwargs) def train(self, data_iterables, checkpoints, scheduler=None, train_callable_class=FlowTrainCallable, overwrite_result_container=True, **kwargs): """Train all trainable nodes in the flow. Same as the train method in ParallelFlow, but with additional support of checkpoint functions as in CheckpointFlow. 
""" super(ParallelCheckpointFlow, self).train( data_iterables=data_iterables, scheduler=scheduler, train_callable_class=train_callable_class, overwrite_result_container=overwrite_result_container, checkpoints=checkpoints, **kwargs) def setup_parallel_training(self, data_iterables, checkpoints, train_callable_class=FlowTrainCallable, **kwargs): """Checkpoint version of parallel training.""" self._checkpoints = self._train_check_checkpoints(checkpoints) super(ParallelCheckpointFlow, self).setup_parallel_training( data_iterables, train_callable_class=train_callable_class, **kwargs) def _post_stop_training_hook(self): """Check if we reached a checkpoint.""" super(ParallelCheckpointFlow, self)._post_stop_training_hook() i_node = self._i_train_node if self.flow[i_node].get_remaining_train_phase() == 0: if ((i_node <= len(self._checkpoints)) and self._checkpoints[i_node]): dict = self._checkpoints[i_node](self.flow[i_node]) # store result, just like in the original CheckpointFlow if dict: self.__dict__.update(dict) mdp-3.3/mdp/parallel/parallelhinet.py000066400000000000000000000054551203131624700177210ustar00rootroot00000000000000""" Parallel versions of hinet nodes. Note that internal nodes are referenced instead of copied, in order to save memory. """ import mdp.hinet as hinet import parallelnodes class ParallelFlowNode(hinet.FlowNode, parallelnodes.ParallelExtensionNode): """Parallel version of FlowNode.""" def _fork(self): """Fork nodes that require it, reference all other nodes. If a required fork() fails the exception is not caught here. """ node_list = [] found_train_node = False # set to True at the first training node for node in self._flow: if not found_train_node and node.is_training(): found_train_node = True node_list.append(node.fork()) elif node.use_execute_fork(): node_list.append(node.fork()) else: node_list.append(node) return self.__class__(self._flow.__class__(node_list)) def _join(self, forked_node): """Join the required nodes from the forked node into this FlowNode.""" found_train_node = False # set to True at the first training node for i_node, node in enumerate(forked_node._flow): if not found_train_node and node.is_training(): found_train_node = True self._flow[i_node].join(node) elif node.use_execute_fork(): self._flow[i_node].join(node) def use_execute_fork(self): return any(node.use_execute_fork() for node in self._flow) class ParallelLayer(hinet.Layer, parallelnodes.ParallelExtensionNode): """Parallel version of a Layer.""" def _fork(self): """Fork or copy all the nodes in the layer to fork the layer.""" forked_nodes = [] for node in self.nodes: if node.is_training(): forked_nodes.append(node.fork()) else: forked_nodes.append(node) return self.__class__(forked_nodes) def _join(self, forked_node): """Join the trained nodes from the forked layer.""" for i_node, layer_node in enumerate(self.nodes): if layer_node.is_training(): layer_node.join(forked_node.nodes[i_node]) def use_execute_fork(self): return any(node.use_execute_fork() for node in self.nodes) class ParallelCloneLayer(hinet.CloneLayer, parallelnodes.ParallelExtensionNode): """Parallel version of CloneLayer class.""" def _fork(self): """Fork the internal node in the clone layer.""" return self.__class__(self.node.fork(), n_nodes=len(self.nodes)) def _join(self, forked_node): """Join the internal node in the clone layer.""" self.node.join(forked_node.node) def use_execute_fork(self): return self.node.use_execute_fork() 
mdp-3.3/mdp/parallel/parallelnodes.py000066400000000000000000000225501203131624700177150ustar00rootroot00000000000000""" Module for MDP Nodes that support parallel training. This module contains both the parallel base class and some parallel implementations of MDP nodes. Note that such ParallelNodes are only needed for training, parallel execution works with any Node that can be pickled. """ # WARNING: There is a problem with unpickled arrays in NumPy < 1.1.x, see # http://projects.scipy.org/scipy/numpy/ticket/551 # To circumvent this, you can use a copy() of all unpickled arrays. import inspect import mdp from mdp import numx class NotForkableParallelException(mdp.NodeException): """Exception to signal that a fork is not possible. This exception is can be safely used and should be caught inside the ParallelFlow or the Scheduler. """ pass class JoinParallelException(mdp.NodeException): """Exception for errors when joining parallel nodes.""" pass class ParallelExtensionNode(mdp.ExtensionNode, mdp.Node): """Base class for parallel trainable MDP nodes. With the fork method new node instances are created which can then be trained. With the join method the trained instances are then merged back into a single node instance. This class defines default methods which raise a TrainingPhaseNotParallelException exception. """ extension_name = "parallel" # TODO: allow that forked nodes are not forkable themselves, # and are not joinable either # this implies that caching does not work for these def fork(self): """Return a new instance of this node class for remote training. This is a template method, the actual forking should be implemented in _fork. The forked node should be a ParallelNode of the same class as well, thus allowing recursive forking and joining. """ return self._fork() def join(self, forked_node): """Absorb the trained node from a fork into this parent node. This is a template method, the actual joining should be implemented in _join. """ # Warning: Use the properties / setters here. Otherwise we get problems # in certain situations (e.g., for FlowNode). if self.dtype is None: self.dtype = forked_node.dtype if self.input_dim is None: self.input_dim = forked_node.input_dim if self.output_dim is None: self.output_dim = forked_node.output_dim if forked_node._train_phase_started and not self._train_phase_started: self._train_phase_started = True self._join(forked_node) ## overwrite these methods ## def _fork(self): """Hook method for forking with default implementation. Overwrite this method for nodes that can be parallelized. You can use _default_fork, if that is compatible with your node class, typically the hard part is the joining. """ raise NotForkableParallelException("fork is not implemented " + "by this node (%s)" % str(self.__class__)) def _join(self, forked_node): """Hook method for joining, to be overridden.""" raise JoinParallelException("join is not implemented " + "by this node (%s)" % str(self.__class__)) @staticmethod def use_execute_fork(): """Return True if node requires a fork / join even during execution. The default output is False, overwrite this method if required. Note that the same fork and join methods are used as during training, so the distinction must be implemented in the custom _fork and _join methods. """ return False ## helper methods ## def _default_fork(self): """Default implementation of _fork. It uses introspection to determine the init kwargs and tries to fill them with attributes. 
These kwargs are then used to instantiate self.__class__ to create the fork instance. So you can use this method if all the required keys are also public attributes or have a single underscore in front. There are two reasons why this method does not simply replace _fork of ParallelExtensionNode (plus removing Node from the inheritance list): - If a node is not parallelized _fork raises an exception, as do nodes which cannot fork for some other reason. Without this behavior of _fork we would have to check with hasattr first if fork is present, adding more complexity at other places (mostly in container nodes). - This is a safeguard forcing users to think a little instead of relying on the inherited (but possibly incompatible) default implementation. """ args, varargs, varkw, defaults = inspect.getargspec(self.__init__) args.remove("self") if defaults: non_default_keys = args[:-len(defaults)] else: non_default_keys = [] kwargs = dict((key, getattr(self, key)) for key in args if hasattr(self, key)) # look for the key with an underscore in front for key in kwargs: args.remove(key) under_kwargs = dict((key, getattr(self, '_' + key)) for key in args if hasattr(self, '_' + key)) for key in under_kwargs: args.remove(key) kwargs.update(under_kwargs) # check that all the keys without default arguments are covered if non_default_keys: missing_defaults = set(non_default_keys) & set(args) if missing_defaults: err = ("could not find attributes for init arguments %s" % str(missing_defaults)) raise NotForkableParallelException(err) # create new instance return self.__class__(**kwargs) @staticmethod def _join_covariance(cov, forked_cov): """Helper method to join two CovarianceMatrix instances. cov -- Instance of CovarianceMatrix, to which the forked_cov instance is added in-place.
""" cov._cov_mtx += forked_cov._cov_mtx cov._avg += forked_cov._avg cov._tlen += forked_cov._tlen ## MDP parallel node implementations ## class ParallelPCANode(ParallelExtensionNode, mdp.nodes.PCANode): """Parallel version of MDP PCA node.""" def _fork(self): return self._default_fork() def _join(self, forked_node): """Combine the covariance matrices.""" if self._cov_mtx._cov_mtx is None: self.set_dtype(self._cov_mtx._dtype) self._cov_mtx = forked_node._cov_mtx else: self._join_covariance(self._cov_mtx, forked_node._cov_mtx) class ParallelSFANode(ParallelExtensionNode, mdp.nodes.SFANode): """Parallel version of MDP SFA node.""" def _fork(self): return self._default_fork() def _join(self, forked_node): """Combine the covariance matrices.""" if self._cov_mtx._cov_mtx is None: self.set_dtype(forked_node._cov_mtx._dtype) self._cov_mtx = forked_node._cov_mtx self._dcov_mtx = forked_node._dcov_mtx else: self._join_covariance(self._cov_mtx, forked_node._cov_mtx) self._join_covariance(self._dcov_mtx, forked_node._dcov_mtx) class ParallelFDANode(ParallelExtensionNode, mdp.nodes.FDANode): def _fork(self): if self.get_current_train_phase() == 1: forked_node = self.copy() # reset the variables that might contain data from this train phase forked_node._S_W = None forked_node._allcov = mdp.utils.CovarianceMatrix(dtype=self.dtype) else: forked_node = self._default_fork() return forked_node def _join(self, forked_node): if self.get_current_train_phase() == 1: if forked_node.get_current_train_phase() != 1: msg = ("This node is in training phase 1, but the forked node " "is not.") raise NotForkableParallelException(msg) if self._S_W is None: self.set_dtype(forked_node._allcov._dtype) self._allcov = forked_node._allcov self._S_W = forked_node._S_W else: self._join_covariance(self._allcov, forked_node._allcov) self._S_W += forked_node._S_W else: for lbl in forked_node.means: if lbl in self.means: self.means[lbl] += forked_node.means[lbl] self.tlens[lbl] += forked_node.tlens[lbl] else: self.means[lbl] = forked_node.means[lbl] self.tlens[lbl] = forked_node.tlens[lbl] class ParallelHistogramNode(ParallelExtensionNode, mdp.nodes.HistogramNode): """Parallel version of the HistogramNode.""" def _fork(self): return self._default_fork() def _join(self, forked_node): if (self.data_hist is not None) and (forked_node.data_hist is not None): self.data_hist = numx.concatenate([self.data_hist, forked_node.data_hist]) elif forked_node.data_hist != None: self.data_hist = forked_node.data_hist mdp-3.3/mdp/parallel/pp_slave_script.py000066400000000000000000000034361203131624700202670ustar00rootroot00000000000000""" Script to be called on a remote machine for starting a pp network server. This script calls pp_slave_wrapper in a new process and returns the pid. The ssh connection stays open and can be used to kill the server process. The python_executable and the paths are send via stdin. The first sys.argv argument ist the nice value. The other arguments and the paths are then used as arguments for the wrapper script. 
""" import sys import subprocess def main(): try: # receive sys_paths via stdin to be used in the wrapper python_executable = sys.stdin.readline()[:-1] # remove newline character sys_paths = [] while True: sys_path = sys.stdin.readline()[:-1] # remove newline character if sys_path == "_done_": break sys_paths.append(sys_path) # assemble the command line for the wrapper by forwarding the arguments and cmd = ("nice %s %s pp_slave_wrapper.py" % (sys.argv[1], python_executable)) for arg in sys.argv[2:]: cmd += " " + arg for sys_path in sys_paths: cmd += " " + sys_path # start the subprocess in which the slave process runs proc = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) # print status message from slave process sys.stdout.write(proc.stdout.readline()) sys.stdout.flush() # return the pid via stdout print proc.pid sys.stdout.flush() except Exception, e: print "Error while starting the server process." print e print -1 sys.stdout.flush() if __name__ == "__main__": main() mdp-3.3/mdp/parallel/pp_slave_wrapper.py000066400000000000000000000016511203131624700204400ustar00rootroot00000000000000""" Script to be called to start a network server. It differs from calling ppserver.py mainly in that it allows to add paths to sys.path. So it acts like a wrapper for the server initialization. The paths are passed via sys.argv[5:]. The first four arguments are port, timeout, secret, n_workers. """ import sys def main(): port, timeout, secret, n_workers = sys.argv[1:5] port = int(port) timeout = int(timeout) n_workers = int(n_workers) sys_paths = sys.argv[5:] for sys_path in sys_paths: sys.path.append(sys_path) import ppserver ## initialization code as in ppserver.py server = ppserver._NetworkServer(ncpus=n_workers, port=port, secret=secret, timeout=timeout) print "Server is ready." sys.stdout.flush() server.listen() if __name__ == "__main__": main() mdp-3.3/mdp/parallel/pp_support.py000066400000000000000000000327131203131624700173050ustar00rootroot00000000000000""" Adapters for the Parallel Python library (http://www.parallelpython.com). The PPScheduler class uses an existing pp scheduler and is a simple adapter. LocalPPScheduler includes the creation of a local pp scheduler. NetworkPPScheduler includes the management of the remote slaves via SSH. """ from __future__ import with_statement import sys import os import time import subprocess import signal import traceback import tempfile import scheduling import pp import mdp TEMPDIR_PREFIX='pp4mdp-monkeypatch.' def _monkeypatch_pp(container_dir): """Apply a hack for http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=620551. Importing numpy fails because the parent directory of the slave script (/usr/share/pyshared) is added to the begging of sys.path. This is a temporary fix until parallel python or the way it is packaged in debian is changed. This function monkey-patches the ppworker module and changes the path to the slave script. A temporary directory is created and the worker script is copied there. The temporary directory should be automatically removed when this module is destroyed. XXX: remove this when parallel python or the way it is packaged in debian is changed. 
""" import os.path, shutil # this part copied from pp.py, should give the same result hopefully ppworker = os.path.join(os.path.dirname(os.path.abspath(pp.__file__)), 'ppworker.py') global _ppworker_dir _ppworker_dir = mdp.utils.TemporaryDirectory(prefix=TEMPDIR_PREFIX, dir=container_dir) ppworker3 = os.path.join(_ppworker_dir.name, 'ppworker.py') shutil.copy(ppworker, ppworker3) mdp._pp_worker_command = pp._Worker.command[:] try: pp._Worker.command[pp._Worker.command.index(ppworker)] = ppworker3 except TypeError: # pp 1.6.0 compatibility pp._Worker.command = pp._Worker.command.replace(ppworker, ppworker3) if hasattr(mdp.config, 'pp_monkeypatch_dirname'): _monkeypatch_pp(mdp.config.pp_monkeypatch_dirname) class PPScheduler(scheduling.Scheduler): """Adaptor scheduler for the parallel python scheduler. This scheduler is a simple wrapper for a pp server. A pp server instance has to be provided. """ def __init__(self, ppserver, max_queue_length=1, result_container=None, verbose=False): """Initialize the scheduler. ppserver -- Parallel Python Server instance. max_queue_length -- How long the queue can get before add_task blocks. result_container -- ResultContainer used to store the results. ListResultContainer by default. verbose -- If True to get progress reports from the scheduler. """ if result_container is None: result_container = scheduling.ListResultContainer() super(PPScheduler, self).__init__(result_container=result_container, verbose=verbose) self.ppserver = ppserver self.max_queue_length = max_queue_length def _process_task(self, data, task_callable, task_index): """Non-blocking processing of tasks. Depending on the scheduler state this function is non-blocking or blocking. One reason for blocking can be a full task-queue. """ task = (data, task_callable.fork(), task_index) def execute_task(task): """Call the first args entry and return the return value.""" data, task_callable, task_index = task task_callable.setup_environment() return task_callable(data), task_index while True: if len(self.ppserver._Server__queue) > self.max_queue_length: # release lock for other threads and wait self._lock.release() time.sleep(0.5) self._lock.acquire() else: # release lock to enable result storage self._lock.release() # the inner tuple is a trick to prevent introspection by pp # this forces pp to simply pickle the object self.ppserver.submit(execute_task, args=(task,), callback=self._pp_result_callback) break def _pp_result_callback(self, result): """Calback method for pp to unpack the result and the task id. This method then calls the normal _store_result method. """ if result is None: result = (None, None) self._store_result(*result) def _shutdown(self): """Call destroy on the ppserver.""" self.ppserver.destroy() class LocalPPScheduler(PPScheduler): """Uses a local pp server to distribute the work across cpu cores. The pp server is created automatically instead of being provided by the user (in contrast to PPScheduler). """ def __init__(self, ncpus="autodetect", max_queue_length=1, result_container=None, verbose=False): """Create an internal pp server and initialize the scheduler. ncpus -- Integer or 'autodetect', specifies the number of processes used. max_queue_length -- How long the queue can get before add_task blocks. result_container -- ResultContainer used to store the results. ListResultContainer by default. verbose -- If True to get progress reports from the scheduler. 
""" ppserver = pp.Server(ncpus=ncpus) super(LocalPPScheduler, self).__init__(ppserver=ppserver, max_queue_length=max_queue_length, result_container=result_container, verbose=verbose) # default secret SECRET = "rosebud" class NetworkPPScheduler(PPScheduler): """Scheduler which can manage pp remote servers (requires SSH). The remote slave servers are automatically started and killed at the end. Since the slaves are started via SSH this schduler does not work on normal Windows systems. On such systems you can start the pp slaves manually and then use the standard PPScheduler. """ def __init__(self, max_queue_length=1, result_container=None, verbose=False, remote_slaves=None, source_paths=None, port=50017, secret=SECRET, nice=-19, timeout=3600, n_local_workers=0, slave_kill_filename=None, remote_python_executable=None): """Initialize the remote slaves and create the internal pp scheduler. result_container -- ResultContainer used to store the results. ListResultContainer by default. verbose -- If True to get progress reports from the scheduler. remote_slaves -- List of tuples, the first tuple entry is a string containing the name or IP adress of the slave, the second entry contains the number of processes (i.e. the pp ncpus parameter). The second entry can be None to use 'autodetect'. source_paths -- List of paths that will be appended to sys.path in the slaves. n_local_workers -- Value of ncpus for this machine. secret -- Secret password to secure the remote slaves. slave_kill_filename -- Filename (including path) where a list of the remote slave processes should be stored. Together with the 'kill_slaves' function this makes it possible to quickly all remote slave processes in case something goes wrong. If None, a tempfile is created. """ self._remote_slaves = remote_slaves self._running_remote_slaves = None # list of strings 'address:port' # list with processes for the ssh connections to the slaves self._ssh_procs = None self._remote_pids = None # list of the pids of the remote servers self._port = port if slave_kill_filename is None: slave_kill_file = tempfile.mkstemp(prefix='MDPtmp-')[1] self.slave_kill_file = slave_kill_file self._secret = secret self._slave_nice = nice self._timeout = timeout if not source_paths: self._source_paths = [] else: self._source_paths = source_paths if remote_python_executable is None: remote_python_executable = sys.executable self._python_executable = remote_python_executable module_file = os.path.abspath(__file__) self._script_path = os.path.dirname(module_file) self.verbose = verbose # start ppserver self._start_slaves() ppslaves = tuple(["%s:%d" % (address, self._port) for address in self._running_remote_slaves]) ppserver = pp.Server(ppservers=ppslaves, ncpus=n_local_workers, secret=self._secret) super(NetworkPPScheduler, self).__init__(ppserver=ppserver, max_queue_length=max_queue_length, result_container=result_container, verbose=verbose) def _shutdown(self): """Shutdown all slaves.""" for ssh_proc in self._ssh_procs: os.kill(ssh_proc.pid, signal.SIGQUIT) super(NetworkPPScheduler, self)._shutdown() if self.verbose: print "All slaves shut down." def start_slave(self, address, ncpus="autodetect"): """Start a single remote slave. The return value is a tuple of the ssh process handle and the remote pid. """ try: print "starting slave " + address + " ..." 
proc = subprocess.Popen(["ssh","-T", "%s" % address], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) proc.stdin.write("cd %s\n" % self._script_path) cmd = (self._python_executable + " pp_slave_script.py %d %d %d %s %d" % (self._slave_nice, self._port, self._timeout, self._secret, ncpus)) proc.stdin.write(cmd + "\n") # send additional information to the remote process proc.stdin.write(self._python_executable + "\n") for sys_path in self._source_paths: proc.stdin.write(sys_path + "\n") proc.stdin.write("_done_" + "\n") # print status message from slave sys.stdout.write(address + ": " + proc.stdout.readline()) # get PID for remote slave process pid = None if self.verbose: print "*** output from slave %s ***" % address while pid is None: # the slave process might first output some hello message try: value = proc.stdout.readline() if self.verbose: print value pid = int(value) except ValueError: pass if self.verbose: print "*** output end ***" return (proc, pid) except: print "Initialization of slave %s has failed." % address traceback.print_exc() return None def _start_slaves(self): """Start remote slaves. The slaves that could be started are stored in a textfile, in the form name:port:pid """ with open(self.slave_kill_file, 'w') as slave_kill_file: self._running_remote_slaves = [] self._remote_pids = [] self._ssh_procs = [] for (address, ncpus) in self._remote_slaves: ssh_proc, pid = self.start_slave(address, ncpus=ncpus) if pid is not None: slave_kill_file.write("%s:%d:%d\n" % (address, pid, ssh_proc.pid)) self._running_remote_slaves.append(address) self._remote_pids.append(pid) self._ssh_procs.append(ssh_proc) def kill_slaves(slave_kill_filename): """Kill all remote slaves which are stored in the given file. This functions is only meant for emergency situations, when something went wrong and the slaves have to be killed manually. """ with open(slave_kill_filename) as tempfile: for line in tempfile: address, pid, ssh_pid = line.split(":") pid = int(pid) ssh_pid = int(ssh_pid) # open ssh connection to to kill remote slave proc = subprocess.Popen(["ssh","-T", address], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) proc.stdin.write("kill %d\n" % pid) proc.stdin.flush() # kill old ssh connection try: os.kill(ssh_pid, signal.SIGKILL) except: pass # a kill might prevent the kill command transmission # os.kill(proc.pid, signal.SIGQUIT) print "killed slave " + address + " (pid %d)" % pid print "all slaves killed." if __name__ == "__main__": if len(sys.argv) == 2: kill_slaves(sys.argv[1]) else: sys.stderr.write("usage: %s slave_list.txt\n" % __file__) mdp-3.3/mdp/parallel/process_schedule.py000066400000000000000000000244331203131624700204240ustar00rootroot00000000000000""" Process based scheduler for distribution across multiple CPU cores. """ # TODO: use a queue instead of sleep? 
# http://docs.python.org/library/queue.html # TODO: use shared memory for data numpy arrays, but this also requires the # use of multiprocessing since the ctype objects can't be pickled # TODO: only return result when get_results is called, # this sends a special request to the processes to send their data, # we would have to add support for this to the callable, # might get too complicated # TODO: leverage process forks on unix systems, # might be very efficient due to copy-on-write, see # http://gael-varoquaux.info/blog/?p=119 # http://www.ibm.com/developerworks/aix/library/au-multiprocessing/ import sys import os import cPickle as pickle import threading import subprocess import time import traceback import warnings if __name__ == "__main__": # try to make sure that mdp can be imported by adding it to sys.path mdp_path = os.path.realpath(__file__) mdp_index = mdp_path.rfind("mdp") if mdp_index: mdp_path = mdp_path[:mdp_index-1] # mdp path goes after sys.path sys.path.append(mdp_path) # shut off warnings of any kinds warnings.filterwarnings("ignore", ".*") import mdp from mdp.parallel import Scheduler, cpu_count SLEEP_TIME = 0.1 # time spend sleeping when waiting for a free process class ProcessScheduler(Scheduler): """Scheduler that distributes the task to multiple processes. The subprocess module is used to start the requested number of processes. The execution of each task is internally managed by dedicated thread. This scheduler should work on all platforms (at least on Linux, Windows XP and Vista). """ def __init__(self, result_container=None, verbose=False, n_processes=1, source_paths=None, python_executable=None, cache_callable=True): """Initialize the scheduler and start the slave processes. result_container -- ResultContainer used to store the results. verbose -- Set to True to get progress reports from the scheduler (default value is False). n_processes -- Number of processes used in parallel. If None (default) then the number of detected CPU cores is used. source_paths -- List of paths that are added to sys.path in the processes to make the task unpickling work. A single path instead of a list is also accepted. If None (default value) then source_paths is set to sys.path. To prevent this you can specify an empty list. python_executable -- Python executable that is used for the processes. The default value is None, in which case sys.executable will be used. cache_callable -- Cache the task objects in the processes (default is True). Disabling caching can reduce the memory usage, but will generally be less efficient since the task_callable has to be pickled each time. """ super(ProcessScheduler, self).__init__( result_container=result_container, verbose=verbose) if n_processes: self._n_processes = n_processes else: self._n_processes = cpu_count() self._cache_callable = cache_callable if python_executable is None: python_executable = sys.executable # get the location of this module to start the processes module_path = os.path.dirname(mdp.__file__) module_file = os.path.join(module_path, "parallel", "process_schedule.py") # Note: -u argument is important on Windows to set stdout to binary # mode. Otherwise you might get a strange error message for # copy_reg. 
process_args = [python_executable, "-u", module_file] process_args.append(str(self._cache_callable)) if isinstance(source_paths, str): source_paths = [source_paths] if source_paths is None: source_paths = sys.path process_args += source_paths # list of processes not in use, start the processes now self._free_processes = [subprocess.Popen(args=process_args, stdout=subprocess.PIPE, stdin=subprocess.PIPE) for _ in range(self._n_processes)] # tag each process with its cached callable task_index, # this is compared with _last_callable_index to check if the cached # task_callable is still up to date for process in self._free_processes: process._callable_index = -1 if self.verbose: print ("scheduler initialized with %d processes" % self._n_processes) def _shutdown(self): """Shut down the slave processes. If a process is still running a task then an exception is raised. """ self._lock.acquire() if len(self._free_processes) < self._n_processes: raise Exception("some slave process is still working") for process in self._free_processes: pickle.dump("EXIT", process.stdin) process.stdin.flush() self._lock.release() if self.verbose: print "scheduler shutdown" def _process_task(self, data, task_callable, task_index): """Add a task, if possible without blocking. It blocks when the system is not able to start a new thread or when the processes are all in use. """ task_started = False while not task_started: if not len(self._free_processes): # release lock for other threads and wait self._lock.release() time.sleep(SLEEP_TIME) self._lock.acquire() else: try: process = self._free_processes.pop() self._lock.release() thread = threading.Thread(target=self._task_thread, args=(process, data, task_callable, task_index)) thread.start() task_started = True except thread.error: if self.verbose: print ("unable to create new task thread," " waiting 2 seconds...") time.sleep(2) def _task_thread(self, process, data, task_callable, task_index): """Thread function which cares for a single task. The task is pushed to the process via stdin, then we wait for the result on stdout, pass the result to the result container, free the process and exit. """ try: if self._cache_callable: # check if the cached callable is up to date if process._callable_index < self._last_callable_index: process._callable_index = self._last_callable_index else: task_callable = None # push the task to the process pickle.dump((data, task_callable, task_index), process.stdin, protocol=-1) process.stdin.flush() # wait for result to arrive result = pickle.load(process.stdout) except: traceback.print_exc() self._free_processes.append(process) sys.exit("failed to execute task %d in process:" % task_index) # store the result and clean up self._store_result(result, task_index) self._free_processes.append(process) def _process_run(cache_callable=True): """Run this function in a worker process to receive and run tasks. It waits for tasks on stdin, and sends the results back via stdout. 
""" # use sys.stdout only for pickled objects, everything else goes to stderr # NOTE: .buffer is the binary mode interface for stdin and out in py3k try: pickle_out = sys.stdout.buffer except AttributeError: pickle_out = sys.stdout try: pickle_in = sys.stdin.buffer except AttributeError: pickle_in = sys.stdin sys.stdout = sys.stderr exit_loop = False last_callable = None # cached callable while not exit_loop: task = None try: # wait for task to arrive task = pickle.load(pickle_in) if task == "EXIT": exit_loop = True else: data, task_callable, task_index = task if task_callable is None: if last_callable is None: err = ("No callable was provided and no cached " "callable is available.") raise Exception(err) task_callable = last_callable.fork() elif cache_callable: # store callable in cache last_callable = task_callable task_callable.setup_environment() task_callable = task_callable.fork() else: task_callable.setup_environment() result = task_callable(data) del task_callable # free memory pickle.dump(result, pickle_out, protocol=-1) pickle_out.flush() except Exception, exception: # return the exception instead of the result if task is None: print "unpickling a task caused an exception in a process:" else: print "task %d caused exception in process:" % task[2] print exception traceback.print_exc() sys.stdout.flush() sys.exit() if __name__ == "__main__": # first argument is cache_callable flag cache_callable = sys.argv[1] == "True" if len(sys.argv) > 2: # remaining arguments are code paths, # put them in front so that they take precedence over PYTHONPATH new_paths = [sys_arg for sys_arg in sys.argv[2:] if sys_arg not in sys.path] sys.path = new_paths + sys.path _process_run(cache_callable=cache_callable) mdp-3.3/mdp/parallel/scheduling.py000066400000000000000000000274721203131624700172250ustar00rootroot00000000000000""" This module contains the basic classes for task processing via a scheduler. """ import threading import time import os try: import multiprocessing except ImportError: # Python version < 2.6, have to use fallbacks pass class ResultContainer(object): """Abstract base class for result containers.""" def add_result(self, result_data, task_index): """Store a result in the container.""" pass def get_results(self): """Return results and reset container.""" pass class ListResultContainer(ResultContainer): """Basic result container using simply a list.""" def __init__(self): super(ListResultContainer, self).__init__() self._results = [] def add_result(self, result, task_index): """Store a result in the container.""" self._results.append(result) def get_results(self): """Return the list of results and reset this container. Note that the results are stored in the order that they come in, which can be different from the orginal task order. """ results = self._results self._results = [] return results class OrderedResultContainer(ListResultContainer): """Default result container with automatic restoring of the result order. In general the order of the incoming results in the scheduler can be different from the order of the tasks, since some tasks may finish quicker than other tasks. This result container restores the original order. """ def __init__(self): super(OrderedResultContainer, self).__init__() def add_result(self, result, task_index): """Store a result in the container. The task index is also stored and later used to reconstruct the original task order. 
""" self._results.append((result, task_index)) def get_results(self): """Sort the results into the original order and return them in list.""" results = self._results self._results = [] results.sort(key=lambda x: x[1]) return list(zip(*results))[0] class TaskCallable(object): """Abstract base class for task callables. This class encapsulates the task behavior and the related fixed data (data which stays constant over multiple tasks). """ def setup_environment(self): """This hook method is only called when the callable is first called in a different Python process / environment. It can be used for modifications in the Python environment that are required by this callable. """ pass def __call__(self, data): """Perform the computation and return the result. Override this method with a concrete implementation. """ return data # TODO: is 'fork' really a good name? # As an alternative one could have a separate CallableFactory class, # but this would make things more complicated for simple callables # (similar to why iterators implement the iterable interface). def fork(self): """Return a fork of this callable, e.g. by making a copy. This method is always called exactly once before a callable is called, so instead of the original callable a fresh fork is called. This ensures that the original callable is preserved when caching is used. If the callable is not modified by the call then it can simply return itself. """ return self class SqrTestCallable(TaskCallable): """Callable for testing.""" def __call__(self, data): """Return the squared data.""" return data**2 class SleepSqrTestCallable(TaskCallable): """Callable for testing.""" def __call__(self, data): """Return the squared data[0] after sleeping for data[1] seconds.""" time.sleep(data[1]) return data[0]**2 class MDPVersionCallable(TaskCallable): """Callable For testing MDP version. Should return a unique comparable object which includes version information and installed/used modules. """ def __call__(self, data): """Ignore input data and return mdp.info()""" import mdp return mdp.config.info() class TaskCallableWrapper(TaskCallable): """Wrapper to provide a fork method for simple callables like a function. This wrapper is applied internally in Scheduler. """ def __init__(self, task_callable): """Store and wrap the callable.""" self._callable = task_callable def __call__(self, data): """Call the internal callable with the data and return the result.""" return self._callable(data) # helper function def cpu_count(): """Return the number of CPU cores.""" try: return multiprocessing.cpu_count() # TODO: remove except clause once we support only python >= 2.6 except NameError: ## This code part is taken from parallel python. # Linux, Unix and MacOS if hasattr(os, "sysconf"): if "SC_NPROCESSORS_ONLN" in os.sysconf_names: # Linux & Unix n_cpus = os.sysconf("SC_NPROCESSORS_ONLN") if isinstance(n_cpus, int) and n_cpus > 0: return n_cpus else: # OSX return int(os.popen2("sysctl -n hw.ncpu")[1].read()) # Windows if "NUMBER_OF_PROCESSORS" in os.environ: n_cpus = int(os.environ["NUMBER_OF_PROCESSORS"]) if n_cpus > 0: return n_cpus # Default return 1 class Scheduler(object): """Base class and trivial implementation for schedulers. New tasks are added with add_task(data, callable). get_results then returns the results (and locks if tasks are pending). In this simple scheduler implementation the tasks are simply executed in the add_task method. """ def __init__(self, result_container=None, verbose=False): """Initialize the scheduler. 
result_container -- Instance of ResultContainer that is used to store the results (default is None, in which case a ListResultContainer is used). verbose -- If True then status messages will be printed to sys.stdout. """ if result_container is None: result_container = OrderedResultContainer() self.result_container = result_container self.verbose = verbose self._n_open_tasks = 0 # number of tasks that are currently running # count the number of submitted tasks, also used for the task index self._task_counter = 0 self._lock = threading.Lock() self._last_callable = None # last callable is stored # task index of the _last_callable, can be *.5 if updated between tasks self._last_callable_index = -1.0 ## public read only properties ## @property def task_counter(self): """This property counts the number of submitted tasks.""" return self._task_counter @property def n_open_tasks(self): """This property counts of submitted but unfinished tasks.""" return self._n_open_tasks ## main methods ## def add_task(self, data, task_callable=None): """Add a task to be executed. data -- Data for the task. task_callable -- A callable, which is called with the data. If it is None (default value) then the last provided callable is used. If task_callable is not an instance of TaskCallable then a TaskCallableWrapper is used. The callable together with the data constitutes the task. This method blocks if there are no free recources to store or process the task (e.g. if no free worker processes are available). """ self._lock.acquire() if task_callable is None: if self._last_callable is None: raise Exception("No task_callable specified and " + "no previous callable available.") self._n_open_tasks += 1 self._task_counter += 1 task_index = self.task_counter if task_callable is None: # use the _last_callable_index in _process_task to # decide if a cached callable can be used task_callable = self._last_callable else: if not hasattr(task_callable, "fork"): # not a TaskCallable (probably a function), so wrap it task_callable = TaskCallableWrapper(task_callable) self._last_callable = task_callable self._last_callable_index = self.task_counter self._process_task(data, task_callable, task_index) def set_task_callable(self, task_callable): """Set the callable that will be used if no task_callable is given. Normally the callables are provided via add_task, in which case there is no need for this method. task_callable -- Callable that will be used unless a new task_callable is given. """ self._lock.acquire() self._last_callable = task_callable # set _last_callable_index to half value since the callable is newer # than the last task, but not newer than the next incoming task self._last_callable_index = self.task_counter + 0.5 self._lock.release() def _store_result(self, result, task_index): """Store a result in the internal result container. result -- Result data task_index -- Task index. Can be None if an error occured. This function blocks to avoid any problems during result storage. """ self._lock.acquire() self.result_container.add_result(result, task_index) if self.verbose: if task_index is not None: print " finished task no. %d" % task_index else: print " task failed" self._n_open_tasks -= 1 self._lock.release() def get_results(self): """Get the accumulated results from the result container. This method blocks if there are open tasks. 
""" while True: self._lock.acquire() if self._n_open_tasks == 0: results = self.result_container.get_results() self._lock.release() return results else: self._lock.release() time.sleep(1) def shutdown(self): """Controlled shutdown of the scheduler. This method should always be called when the scheduler is no longer needed and before the program shuts down! Otherwise one might get error messages. """ self._shutdown() ## Context Manager interface ## def __enter__(self): """Return self.""" return self def __exit__(self, type, value, traceback): """Shutdown the scheduler. It is important that all the calculations have finished when this is called, otherwise the shutdown might fail. """ self.shutdown() ## override these methods in custom schedulers ## def _process_task(self, data, task_callable, task_index): """Process the task and store the result. You can override this method for custom schedulers. Warning: When this method is entered is has the lock, the lock must be released here. Warning: Note that fork has not been called yet, so the provided task_callable must not be called. Only a forked version can be called. """ # IMPORTANT: always call fork, since it must be called at least once! task_callable = task_callable.fork() result = task_callable(data) # release lock before store_result self._lock.release() self._store_result(result, task_index) def _shutdown(self): """Hook method for shutdown to be used in custom schedulers.""" pass mdp-3.3/mdp/parallel/thread_schedule.py000066400000000000000000000065521203131624700202170ustar00rootroot00000000000000""" Thread based scheduler for distribution across multiple CPU cores. """ import threading import time import cPickle as pickle from scheduling import Scheduler, cpu_count SLEEP_TIME = 0.1 # time spend sleeping when waiting for a thread to finish class ThreadScheduler(Scheduler): """Thread based scheduler. Because of the GIL this only makes sense if most of the time is spend in numpy calculations (or some other external non-blocking C code) or for IO, but can be more efficient than ProcessScheduler because of the shared memory. """ def __init__(self, result_container=None, verbose=False, n_threads=1, copy_callable=True): """Initialize the scheduler. result_container -- ResultContainer used to store the results. verbose -- Set to True to get progress reports from the scheduler (default value is False). n_threads -- Number of threads used in parallel. If None (default) then the number of detected CPU cores is used. copy_callable -- Use deep copies of the task callable in the threads. This is for example required if some nodes are stateful during execution (e.g., a BiNode using the coroutine decorator). """ super(ThreadScheduler, self).__init__( result_container=result_container, verbose=verbose) if n_threads: self._n_threads = n_threads else: self._n_threads = cpu_count() self._n_active_threads = 0 self.copy_callable = copy_callable def _process_task(self, data, task_callable, task_index): """Add a task, if possible without blocking. It blocks when the maximum number of threads is reached (given by n_threads) or when the system is not able to start a new thread. 
""" task_started = False while not task_started: if self._n_active_threads >= self._n_threads: # release lock for other threads and wait self._lock.release() time.sleep(SLEEP_TIME) self._lock.acquire() else: self._lock.release() task_callable = task_callable.fork() if self.copy_callable: # create a deep copy of the task_callable, # since it might not be thread safe # (but the fork is still required) as_str = pickle.dumps(task_callable, -1) task_callable = pickle.loads(as_str) try: thread = threading.Thread(target=self._task_thread, args=(data, task_callable, task_index)) thread.start() task_started = True except Exception: if self.verbose: print ("unable to create new thread," " waiting 2 seconds...") time.sleep(2) def _task_thread(self, data, task_callable, task_index): """Thread function which processes a single task.""" result = task_callable(data) self._store_result(result, task_index) self._n_active_threads -= 1 mdp-3.3/mdp/repo_revision.py000066400000000000000000000021001203131624700161440ustar00rootroot00000000000000import mdp import os from subprocess import Popen, PIPE, STDOUT def get_git_revision(): """When mdp is run from inside a git repository, this function returns the current revision that git-describe gives us. If mdp is installed (or git fails for some other reason), an empty string is returned. """ # TODO: Introduce some fallback method that takes the info from a file revision = '' try: # we need to be sure that we call from the mdp dir mdp_dir = os.path.dirname(mdp.__file__) # --tags ensures that most revisions have a name even without # annotated tags # --dirty=+ appends a plus if the working copy is modified command = ["git", "describe", "--tags", "--dirty=+"] proc = Popen(command, stdout=PIPE, stderr=STDOUT, cwd=mdp_dir, universal_newlines=True) exit_status = proc.wait() # only get the revision if command succeded if exit_status == 0: revision = proc.stdout.read().strip() except OSError: pass return revision mdp-3.3/mdp/signal_node.py000066400000000000000000000713041203131624700155570ustar00rootroot00000000000000from __future__ import with_statement __docformat__ = "restructuredtext en" import cPickle as _cPickle import warnings as _warnings import copy as _copy import inspect import mdp from mdp import numx class NodeException(mdp.MDPException): """Base class for exceptions in `Node` subclasses.""" pass class InconsistentDimException(NodeException): """Raised when there is a conflict setting the dimensionalities. Note that incoming data with conflicting dimensionality raises a normal `NodeException`. """ pass class TrainingException(NodeException): """Base class for exceptions in the training phase.""" pass class TrainingFinishedException(TrainingException): """Raised when the `Node.train` method is called although the training phase is closed.""" pass class IsNotTrainableException(TrainingException): """Raised when the `Node.train` method is called although the node is not trainable.""" pass class IsNotInvertibleException(NodeException): """Raised when the `Node.inverse` method is called although the node is not invertible.""" pass class NodeMetaclass(type): """A metaclass which copies docstrings from private to public methods. This metaclass is meant to overwrite doc-strings of methods like `Node.execute`, `Node.stop_training`, `Node.inverse` with the ones defined in the corresponding private methods `Node._execute`, `Node._stop_training`, `Node._inverse`, etc. 
This makes it possible for subclasses of `Node` to document the usage of public methods, without the need to overwrite the ancestor's methods. """ # methods that can overwrite docs: DOC_METHODS = ['_train', '_stop_training', '_execute', '_inverse', '_label', '_prob'] def __new__(cls, classname, bases, members): new_cls = super(NodeMetaclass, cls).__new__(cls, classname, bases, members) priv_infos = cls._select_private_methods_to_wrap(cls, members) # now add the wrappers for wrapper_name, priv_info in priv_infos.iteritems(): # Note: super works because we never wrap in the defining class orig_pubmethod = getattr(super(new_cls, new_cls), wrapper_name) priv_info['name'] = wrapper_name # preserve the last non-empty docstring if not priv_info['doc']: priv_info['doc'] = orig_pubmethod.__doc__ recursed = hasattr(orig_pubmethod, '_undecorated_') if recursed: undec_pubmethod = orig_pubmethod._undecorated_ priv_info.update(NodeMetaclass._get_infos(undec_pubmethod)) wrapper_method = cls._wrap_function(undec_pubmethod, priv_info) wrapper_method._undecorated_ = undec_pubmethod else: priv_info.update(NodeMetaclass._get_infos(orig_pubmethod)) wrapper_method = cls._wrap_method(priv_info, new_cls) wrapper_method._undecorated_ = orig_pubmethod setattr(new_cls, wrapper_name, wrapper_method) return new_cls @staticmethod def _get_infos(pubmethod): infos = {} wrapped_info = NodeMetaclass._function_infodict(pubmethod) # Preserve the signature if it still does not end with kwargs # (this is important for binodes). if wrapped_info['kwargs_name'] is None: infos['signature'] = wrapped_info['signature'] infos['argnames'] = wrapped_info['argnames'] infos['defaults'] = wrapped_info['defaults'] return infos @staticmethod def _select_private_methods_to_wrap(cls, members): """Select private methods that can overwrite the public docstring. Return a dictionary priv_infos[pubname], where the keys are the public name of the private method to be wrapped, and the values are dictionaries with the signature, doc, ... informations of the private methods (see `_function_infodict`). """ priv_infos = {} for privname in cls.DOC_METHODS: if privname in members: # get the name of the corresponding public method pubname = privname[1:] # If the public method has been overwritten in this # subclass, then keep it. # This is also important because we use super in the wrapper # (so the public method in this class would be missed). if pubname not in members: priv_infos[pubname] = cls._function_infodict(members[privname]) return priv_infos # The next two functions (originally called get_info, wrapper) # are adapted versions of functions in the # decorator module by Michele Simionato # Version: 2.3.1 (25 July 2008) # Download page: http://pypi.python.org/pypi/decorator # Note: Moving these functions to utils would cause circular import. 
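# Editorial note (illustrative, not in the original source): the net effect of
# the wrapping performed in __new__ above is that a Node subclass documenting
# only a private hook still exposes that docstring on the public method, e.g.
# for a hypothetical subclass:
#
#     class IdentityNode(mdp.Node):
#         def _execute(self, x):
#             """Return the input data unchanged."""
#             return x
#
#     # IdentityNode.execute.__doc__ is now "Return the input data unchanged."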
@staticmethod def _function_infodict(func): """ Returns an info dictionary containing: - name (the name of the function : str) - argnames (the names of the arguments : list) - defaults (the values of the default arguments : tuple) - signature (the signature without the defaults : str) - doc (the docstring : str) - module (the module name : str) - dict (the function __dict__ : str) - kwargs_name (the name of the kwargs argument, if present, else None) >>> def f(self, x=1, y=2, *args, **kw): pass >>> info = getinfo(f) >>> info["name"] 'f' >>> info["argnames"] ['self', 'x', 'y', 'args', 'kw'] >>> info["defaults"] (1, 2) >>> info["signature"] 'self, x, y, *args, **kw' >>> info["kwargs_name"] kw """ regargs, varargs, varkwargs, defaults = inspect.getargspec(func) argnames = list(regargs) if varargs: argnames.append(varargs) if varkwargs: argnames.append(varkwargs) signature = inspect.formatargspec(regargs, varargs, varkwargs, defaults, formatvalue=lambda value: "")[1:-1] return dict(name=func.__name__, signature=signature, argnames=argnames, kwargs_name=varkwargs, defaults=func.func_defaults, doc=func.__doc__, module=func.__module__, dict=func.__dict__, globals=func.func_globals, closure=func.func_closure) @staticmethod def _wrap_function(original_func, wrapper_infodict): """Return a wrapped version of func. :param original_func: The function to be wrapped. :param wrapper_infodict: The infodict to use for constructing the wrapper. """ src = ("lambda %(signature)s: _original_func_(%(signature)s)" % wrapper_infodict) wrapped_func = eval(src, dict(_original_func_=original_func)) wrapped_func.__name__ = wrapper_infodict['name'] wrapped_func.__doc__ = wrapper_infodict['doc'] wrapped_func.__module__ = wrapper_infodict['module'] wrapped_func.__dict__.update(wrapper_infodict['dict']) wrapped_func.func_defaults = wrapper_infodict['defaults'] return wrapped_func @staticmethod def _wrap_method(wrapper_infodict, cls): """Return a wrapped version of func. :param wrapper_infodict: The infodict to be used for constructing the wrapper. :param cls: Class to which the wrapper method will be added, this is used for the super call. """ src = ("lambda %(signature)s: super(_wrapper_class_, _wrapper_class_)." "%(name)s(%(signature)s)" % wrapper_infodict) wrapped_func = eval(src, {"_wrapper_class_": cls}) wrapped_func.__name__ = wrapper_infodict['name'] wrapped_func.__doc__ = wrapper_infodict['doc'] wrapped_func.__module__ = wrapper_infodict['module'] wrapped_func.__dict__.update(wrapper_infodict['dict']) wrapped_func.func_defaults = wrapper_infodict['defaults'] return wrapped_func class Node(object): """A `Node` is the basic building block of an MDP application. It represents a data processing element, like for example a learning algorithm, a data filter, or a visualization step. Each node can have one or more training phases, during which the internal structures are learned from training data (e.g. the weights of a neural network are adapted or the covariance matrix is estimated) and an execution phase, where new data can be processed forwards (by processing the data through the node) or backwards (by applying the inverse of the transformation computed by the node if defined). 
Nodes have been designed to be applied to arbitrarily long sets of data: if the underlying algorithms supports it, the internal structures can be updated incrementally by sending multiple batches of data (this is equivalent to online learning if the chunks consists of single observations, or to batch learning if the whole data is sent in a single chunk). It is thus possible to perform computations on amounts of data that would not fit into memory or to generate data on-the-fly. A `Node` also defines some utility methods, like for example `copy` and `save`, that return an exact copy of a node and save it in a file, respectively. Additional methods may be present, depending on the algorithm. `Node` subclasses should take care of overwriting (if necessary) the functions `is_trainable`, `_train`, `_stop_training`, `_execute`, `is_invertible`, `_inverse`, `_get_train_seq`, and `_get_supported_dtypes`. If you need to overwrite the getters and setters of the node's properties refer to the docstring of `get_input_dim`/`set_input_dim`, `get_output_dim`/`set_output_dim`, and `get_dtype`/`set_dtype`. """ __metaclass__ = NodeMetaclass def __init__(self, input_dim=None, output_dim=None, dtype=None): """If the input dimension and the output dimension are unspecified, they will be set when the `train` or `execute` method is called for the first time. If dtype is unspecified, it will be inherited from the data it receives at the first call of `train` or `execute`. Every subclass must take care of up- or down-casting the internal structures to match this argument (use `_refcast` private method when possible). """ # initialize basic attributes self._input_dim = None self._output_dim = None self._dtype = None # call set functions for properties self.set_input_dim(input_dim) self.set_output_dim(output_dim) self.set_dtype(dtype) # skip the training phase if the node is not trainable if not self.is_trainable(): self._training = False self._train_phase = -1 self._train_phase_started = False else: # this var stores at which point in the training sequence we are self._train_phase = 0 # this var is False if the training of the current phase hasn't # started yet, True otherwise self._train_phase_started = False # this var is False if the complete training is finished self._training = True ### properties def get_input_dim(self): """Return input dimensions.""" return self._input_dim def set_input_dim(self, n): """Set input dimensions. Perform sanity checks and then calls ``self._set_input_dim(n)``, which is responsible for setting the internal attribute ``self._input_dim``. Note that subclasses should overwrite `self._set_input_dim` when needed. """ if n is None: pass elif (self._input_dim is not None) and (self._input_dim != n): msg = ("Input dim are set already (%d) " "(%d given)!" % (self.input_dim, n)) raise InconsistentDimException(msg) else: self._set_input_dim(n) def _set_input_dim(self, n): self._input_dim = n input_dim = property(get_input_dim, set_input_dim, doc="Input dimensions") def get_output_dim(self): """Return output dimensions.""" return self._output_dim def set_output_dim(self, n): """Set output dimensions. Perform sanity checks and then calls ``self._set_output_dim(n)``, which is responsible for setting the internal attribute ``self._output_dim``. Note that subclasses should overwrite `self._set_output_dim` when needed. """ if n is None: pass elif (self._output_dim is not None) and (self._output_dim != n): msg = ("Output dim are set already (%d) " "(%d given)!" 
% (self.output_dim, n)) raise InconsistentDimException(msg) else: self._set_output_dim(n) def _set_output_dim(self, n): self._output_dim = n output_dim = property(get_output_dim, set_output_dim, doc="Output dimensions") def get_dtype(self): """Return dtype.""" return self._dtype def set_dtype(self, t): """Set internal structures' dtype. Perform sanity checks and then calls ``self._set_dtype(n)``, which is responsible for setting the internal attribute ``self._dtype``. Note that subclasses should overwrite `self._set_dtype` when needed. """ if t is None: return t = numx.dtype(t) if (self._dtype is not None) and (self._dtype != t): errstr = ("dtype is already set to '%s' " "('%s' given)!" % (t, self.dtype.name)) raise NodeException(errstr) elif t not in self.get_supported_dtypes(): errstr = ("\ndtype '%s' is not supported.\n" "Supported dtypes: %s" % (t.name, [numx.dtype(t).name for t in self.get_supported_dtypes()])) raise NodeException(errstr) else: self._set_dtype(t) def _set_dtype(self, t): t = numx.dtype(t) if t not in self.get_supported_dtypes(): raise NodeException('dtype %s not among supported dtypes (%s)' % (str(t), self.get_supported_dtypes())) self._dtype = t dtype = property(get_dtype, set_dtype, doc="dtype") def _get_supported_dtypes(self): """Return the list of dtypes supported by this node. The types can be specified in any format allowed by :numpy:`dtype`. """ # TODO: http://epydoc.sourceforge.net/manual-othermarkup.html#external-api-links for numpy return mdp.utils.get_dtypes('Float') def get_supported_dtypes(self): """Return dtypes supported by the node as a list of :numpy:`dtype` objects. Note that subclasses should overwrite `self._get_supported_dtypes` when needed.""" return [numx.dtype(t) for t in self._get_supported_dtypes()] supported_dtypes = property(get_supported_dtypes, doc="Supported dtypes") _train_seq = property(lambda self: self._get_train_seq(), doc="""\ List of tuples:: [(training-phase1, stop-training-phase1), (training-phase2, stop_training-phase2), ...] By default:: _train_seq = [(self._train, self._stop_training)] """) def _get_train_seq(self): return [(self._train, self._stop_training)] def has_multiple_training_phases(self): """Return True if the node has multiple training phases.""" return len(self._train_seq) > 1 ### Node states def is_training(self): """Return True if the node is in the training phase, False otherwise.""" return self._training def get_current_train_phase(self): """Return the index of the current training phase. The training phases are defined in the list `self._train_seq`.""" return self._train_phase def get_remaining_train_phase(self): """Return the number of training phases still to accomplish. If the node is not trainable then return 0. 
""" if self.is_trainable(): return len(self._train_seq) - self._train_phase else: return 0 ### Node capabilities @staticmethod def is_trainable(): """Return True if the node can be trained, False otherwise.""" return True @staticmethod def is_invertible(): """Return True if the node can be inverted, False otherwise.""" return True ### check functions def _check_input(self, x): # check input rank if not x.ndim == 2: error_str = "x has rank %d, should be 2" % (x.ndim) raise NodeException(error_str) # set the input dimension if necessary if self.input_dim is None: self.input_dim = x.shape[1] # set the dtype if necessary if self.dtype is None: self.dtype = x.dtype # check the input dimension if not x.shape[1] == self.input_dim: error_str = "x has dimension %d, should be %d" % (x.shape[1], self.input_dim) raise NodeException(error_str) if x.shape[0] == 0: error_str = "x must have at least one observation (zero given)" raise NodeException(error_str) def _check_output(self, y): # check output rank if not y.ndim == 2: error_str = "y has rank %d, should be 2" % (y.ndim) raise NodeException(error_str) # check the output dimension if not y.shape[1] == self.output_dim: error_str = "y has dimension %d, should be %d" % (y.shape[1], self.output_dim) raise NodeException(error_str) def _if_training_stop_training(self): if self.is_training(): self.stop_training() # if there is some training phases left we shouldn't be here! if self.get_remaining_train_phase() > 0: error_str = "The training phases are not completed yet." raise TrainingException(error_str) def _pre_execution_checks(self, x): """This method contains all pre-execution checks. It can be used when a subclass defines multiple execution methods. """ # if training has not started yet, assume we want to train the node if (self.get_current_train_phase() == 0 and not self._train_phase_started): while True: self.train(x) if self.get_remaining_train_phase() > 1: self.stop_training() else: break self._if_training_stop_training() # control the dimension x self._check_input(x) # set the output dimension if necessary if self.output_dim is None: self.output_dim = self.input_dim def _pre_inversion_checks(self, y): """This method contains all pre-inversion checks. It can be used when a subclass defines multiple inversion methods. """ if not self.is_invertible(): raise IsNotInvertibleException("This node is not invertible.") self._if_training_stop_training() # set the output dimension if necessary if self.output_dim is None: # if the input_dim is not defined, raise an exception if self.input_dim is None: errstr = ("Number of input dimensions undefined. Inversion" "not possible.") raise NodeException(errstr) self.output_dim = self.input_dim # control the dimension of y self._check_output(y) ### casting helper functions def _refcast(self, x): """Helper function to cast arrays to the internal dtype.""" return mdp.utils.refcast(x, self.dtype) ### Methods to be implemented by the user # this are the methods the user has to overwrite # they receive the data already casted to the correct type def _train(self, x): if self.is_trainable(): raise NotImplementedError def _stop_training(self, *args, **kwargs): pass def _execute(self, x): return x def _inverse(self, x): if self.is_invertible(): return x def _check_train_args(self, x, *args, **kwargs): # implemented by subclasses if needed pass ### User interface to the overwritten methods def train(self, x, *args, **kwargs): """Update the internal structures according to the input data `x`. 
`x` is a matrix having different variables on different columns and observations on the rows. By default, subclasses should overwrite `_train` to implement their training phase. The docstring of the `_train` method overwrites this docstring. Note: a subclass supporting multiple training phases should implement the *same* signature for all the training phases and document the meaning of the arguments in the `_train` method doc-string. Having consistent signatures is a requirement to use the node in a flow. """ if not self.is_trainable(): raise IsNotTrainableException("This node is not trainable.") if not self.is_training(): err_str = "The training phase has already finished." raise TrainingFinishedException(err_str) self._check_input(x) self._check_train_args(x, *args, **kwargs) self._train_phase_started = True self._train_seq[self._train_phase][0](self._refcast(x), *args, **kwargs) def stop_training(self, *args, **kwargs): """Stop the training phase. By default, subclasses should overwrite `_stop_training` to implement this functionality. The docstring of the `_stop_training` method overwrites this docstring. """ if self.is_training() and self._train_phase_started == False: raise TrainingException("The node has not been trained.") if not self.is_training(): err_str = "The training phase has already finished." raise TrainingFinishedException(err_str) # close the current phase. self._train_seq[self._train_phase][1](*args, **kwargs) self._train_phase += 1 self._train_phase_started = False # check if we have some training phase left if self.get_remaining_train_phase() == 0: self._training = False def execute(self, x, *args, **kwargs): """Process the data contained in `x`. If the object is still in the training phase, the function `stop_training` will be called. `x` is a matrix having different variables on different columns and observations on the rows. By default, subclasses should overwrite `_execute` to implement their execution phase. The docstring of the `_execute` method overwrites this docstring. """ self._pre_execution_checks(x) return self._execute(self._refcast(x), *args, **kwargs) def inverse(self, y, *args, **kwargs): """Invert `y`. If the node is invertible, compute the input ``x`` such that ``y = execute(x)``. By default, subclasses should overwrite `_inverse` to implement their `inverse` function. The docstring of the `inverse` method overwrites this docstring. """ self._pre_inversion_checks(y) return self._inverse(self._refcast(y), *args, **kwargs) def __call__(self, x, *args, **kwargs): """Calling an instance of `Node` is equivalent to calling its `execute` method.""" return self.execute(x, *args, **kwargs) ###### adding nodes returns flows def __add__(self, other): # check other is a node if isinstance(other, Node): return mdp.Flow([self, other]) elif isinstance(other, mdp.Flow): flow_copy = other.copy() flow_copy.insert(0, self) return flow_copy.copy() else: err_str = ('can only concatenate node' ' (not \'%s\') to node' % (type(other).__name__)) raise TypeError(err_str) ###### string representation def __str__(self): return str(type(self).__name__) def __repr__(self): # print input_dim, output_dim, dtype name = type(self).__name__ inp = "input_dim=%s" % str(self.input_dim) out = "output_dim=%s" % str(self.output_dim) if self.dtype is None: typ = 'dtype=None' else: typ = "dtype='%s'" % self.dtype.name args = ', '.join((inp, out, typ)) return name + '(' + args + ')' def copy(self, protocol=None): """Return a deep copy of the node. 
:param protocol: the pickle protocol (deprecated).""" if protocol is not None: _warnings.warn("protocol parameter to copy() is ignored", mdp.MDPDeprecationWarning, stacklevel=2) return _copy.deepcopy(self) def save(self, filename, protocol=-1): """Save a pickled serialization of the node to `filename`. If `filename` is None, return a string. Note: the pickled `Node` is not guaranteed to be forwards or backwards compatible.""" if filename is None: return _cPickle.dumps(self, protocol) else: # if protocol != 0 open the file in binary mode mode = 'wb' if protocol != 0 else 'w' with open(filename, mode) as flh: _cPickle.dump(self, flh, protocol) class PreserveDimNode(Node): """Abstract base class with ``output_dim == input_dim``. If one dimension is set then the other is set to the same value. If the dimensions are set to different values, then an `InconsistentDimException` is raised. """ def _set_input_dim(self, n): if (self._output_dim is not None) and (self._output_dim != n): err = "input_dim must be equal to output_dim for this node." raise InconsistentDimException(err) self._input_dim = n self._output_dim = n def _set_output_dim(self, n): if (self._input_dim is not None) and (self._input_dim != n): err = "output_dim must be equal to input_dim for this node." raise InconsistentDimException(err) self._input_dim = n self._output_dim = n def VariadicCumulator(*fields): """A VariadicCumulator is a `Node` whose training phase simply collects all input data. In this way it is possible to easily implement batch-mode learning. The data is accessible in the attributes given with the VariadicCumulator's constructor after the beginning of the `Node._stop_training` phase. ``self.tlen`` contains the number of data points collected. """ class Cumulator(Node): def __init__(self, *args, **kwargs): super(Cumulator, self).__init__(*args, **kwargs) self._cumulator_fields = fields for arg in self._cumulator_fields: if hasattr(self, arg): errstr = "Cumulator Error: Property %s already defined" raise mdp.MDPException(errstr % arg) setattr(self, arg, []) self.tlen = 0 def _train(self, *args): """Collect all input data in a list.""" self.tlen += args[0].shape[0] for field, data in zip(self._cumulator_fields, args): getattr(self, field).append(data) def _stop_training(self, *args, **kwargs): """Concatenate the collected data in a single array.""" for field in self._cumulator_fields: data = getattr(self, field) setattr(self, field, numx.concatenate(data, 0)) return Cumulator Cumulator = VariadicCumulator('data') Cumulator.__doc__ = """A specialized version of `VariadicCumulator` which only fills the field ``self.data``. """ mdp-3.3/mdp/test/000077500000000000000000000000001203131624700136755ustar00rootroot00000000000000mdp-3.3/mdp/test/__init__.py000066400000000000000000000054331203131624700160130ustar00rootroot00000000000000import os SCRIPT="run_tests.py" from mdp.configuration import _version_too_old def test(filename=None, keyword=None, seed=None, options='', mod_loc=None, script_loc=None): """Run tests. filename -- only run tests in filename. If not set run all tests. You do not need the full path, the relative path within the test directory is enough. keyword -- only run test items matching the given space separated keywords. precede a keyword with '-' to negate. Terminate the expression with ':' to treat a match as a signal to run all subsequent tests. 
    seed -- set random seed
    options -- options to be passed to the underlying py.test script
               (as a string)
    mod_loc -- don't use it, it's for internal usage
    script_loc -- don't use it, it's for internal usage
    """
    if mod_loc is None:
        mod_loc = os.path.dirname(__file__)
    if script_loc is None:
        script_loc = os.path.dirname(__file__)
    if filename is None:
        loc = mod_loc
    else:
        loc = os.path.join(mod_loc, os.path.basename(filename))
    args = []
    if keyword is not None:
        args.extend(('-k', str(keyword)))
    if seed is not None:
        args.extend(('--seed', str(seed)))
    # add --assert=reinterp option to work around permissions problem
    # with __pycache__ directory when MDP is installed on a normal
    # user non-writable directory
    options = "--assert=reinterp "+options
    args.extend(options.split())
    args.append(loc)
    _worker = get_worker(script_loc)
    return _worker(args)

def subtest(script, args):
    # run the auto-generated script in a subprocess
    import subprocess
    import sys
    subtest = subprocess.Popen([sys.executable, script]+args,
                               stdout = sys.stdout, stderr = sys.stderr)
    # wait for the subprocess to finish before returning the prompt
    subtest.wait()
    # ??? do we want to catch KeyboardInterrupt and send it to the
    # ??? subprocess?

def get_worker(loc):
    try:
        # use py.test module interface if it's installed
        import py.test
        # check that we have at least version 2.1.2
        try:
            py.test.__version__
        except AttributeError:
            raise ImportError
        if _version_too_old(py.test.__version__, (2,1,2)):
            raise ImportError
        else:
            return py.test.cmdline.main
    except ImportError:
        # try to locate the script
        script = os.path.join(loc, SCRIPT)
        if os.path.exists(script):
            return lambda args: subtest(script, args)
        else:
            raise Exception('Could not find self-contained py.test script in '
                            '"%s"' % script)
mdp-3.3/mdp/test/_tools.py000066400000000000000000000133011203131624700155440ustar00rootroot00000000000000"""Tools for the test- and benchmark functions."""
import sys
import time
import itertools
from functools import wraps

import py.test

import mdp
from mdp import numx, numx_rand, numx_fft, numx_linalg, utils
from numpy.testing import (assert_array_equal, assert_array_almost_equal,
                           assert_equal, assert_almost_equal)

mean = numx.mean
std = numx.std
normal = mdp.numx_rand.normal
uniform = mdp.numx_rand.random
testtypes = [numx.dtype('d'), numx.dtype('f')]
testtypeschar = [t.char for t in testtypes]
testdecimals = {testtypes[0]: 12, testtypes[1]: 6}
decimal = 7
mult = mdp.utils.mult

#### test tools

def assert_array_almost_equal_diff(x, y, digits, err_msg=''):
    x, y = numx.asarray(x), numx.asarray(y)
    msg = '\nArrays are not almost equal'
    assert 0 in [len(numx.shape(x)), len(numx.shape(y))] \
           or (len(numx.shape(x)) == len(numx.shape(y)) and \
               numx.alltrue(numx.equal(numx.shape(x), numx.shape(y)))),\
           msg + ' (shapes %s, %s mismatch):\n\t' \
           % (numx.shape(x), numx.shape(y)) + err_msg
    maxdiff = max(numx.ravel(abs(x-y)))/\
              max(max(abs(numx.ravel(x))), max(abs(numx.ravel(y))))
    if numx.iscomplexobj(x) or numx.iscomplexobj(y):
        maxdiff = maxdiff/2
    cond = maxdiff < 10**(-digits)
    msg = msg+'\n\t Relative maximum difference: %e' % (maxdiff)+'\n\t'+\
          'Array1: '+str(x)+'\n\t'+\
          'Array2: '+str(y)+'\n\t'+\
          'Absolute Difference: '+str(abs(y-x))
    assert cond, msg

def assert_type_equal(act, des):
    assert act == numx.dtype(des), \
           'dtype mismatch: "%s" (should be "%s") ' % (act, des)

def get_random_mix(mat_dim = None, type = "d", scale = 1,
                   rand_func = uniform, avg = 0,
                   std_dev = 1):
    if mat_dim is None:
        mat_dim = (500, 5)
    T = mat_dim[0]
    N = mat_dim[1]
    d = 0
    while d < 1E-3:
        #mat =
((rand_func(size=mat_dim)-0.5)*scale).astype(type) mat = rand_func(size=(T,N)).astype(type) # normalize mat -= mean(mat,axis=0) mat /= std(mat,axis=0) # check that the minimum eigenvalue is finite and positive d1 = min(mdp.utils.symeig(mdp.utils.mult(mat.T, mat), eigenvectors = 0)) if std_dev is not None: mat *= std_dev if avg is not None: mat += avg mix = (rand_func(size=(N,N))*scale).astype(type) matmix = mdp.utils.mult(mat,mix) matmix_n = matmix - mean(matmix, axis=0) matmix_n /= std(matmix_n, axis=0) d2 = min(mdp.utils.symeig(mdp.utils.mult(matmix_n.T,matmix_n), eigenvectors=0)) d = min(d1, d2) return mat, mix, matmix def verify_ICANode(icanode, rand_func = uniform, vars=3, N=8000, prec=3): dim = (N, vars) mat,mix,inp = get_random_mix(rand_func=rand_func,mat_dim=dim) icanode.train(inp) act_mat = icanode.execute(inp) cov = mdp.utils.cov2((mat-mean(mat,axis=0))/std(mat,axis=0), act_mat) maxima = numx.amax(abs(cov), axis=0) assert_array_almost_equal(maxima,numx.ones(vars), prec) def verify_ICANodeMatrices(icanode, rand_func=uniform, vars=3, N=8000): dim = (N, vars) mat,mix,inp = get_random_mix(rand_func=rand_func, mat_dim=dim, avg=0) icanode.train(inp) # test projection matrix act_mat = icanode.execute(inp) T = icanode.get_projmatrix() exp_mat = mdp.utils.mult(inp, T) assert_array_almost_equal(act_mat,exp_mat,6) # test reconstruction matrix out = act_mat.copy() act_mat = icanode.inverse(out) B = icanode.get_recmatrix() exp_mat = mdp.utils.mult(out, B) assert_array_almost_equal(act_mat,exp_mat,6) class BogusNode(mdp.Node): @staticmethod def is_trainable(): return False def _execute(self,x): return 2*x def _inverse(self,x): return 0.5*x class BogusNodeTrainable(mdp.Node): def _train(self, x): pass def _stop_training(self): self.bogus_attr = 1 class BogusExceptNode(mdp.Node): def _train(self,x): self.bogus_attr = 1 raise Exception, "Bogus Exception" def _execute(self,x): raise Exception, "Bogus Exception" class BogusMultiNode(mdp.Node): def __init__(self): super(BogusMultiNode, self).__init__() self.visited = [] def _get_train_seq(self): return [(self.train1, self.stop1), (self.train2, self.stop2)] def train1(self, x): self.visited.append(1) def stop1(self): self.visited.append(2) def train2(self, x): self.visited.append(3) def stop2(self): self.visited.append(4) #_spinner = itertools.cycle((' /\b\b', ' -\b\b', ' \\\b\b', ' |\b\b')) _spinner = itertools.cycle((' .\b\b', ' o\b\b', ' 0\b\b', ' O\b\b', ' 0\b\b', ' o\b\b')) #_spinner = itertools.cycle([" '\b\b"]*2 + [' !\b\b']*2 + [' .\b\b']*2 + # [' !\b\b']*2) def spinner(): sys.stderr.write(_spinner.next()) sys.stderr.flush() class skip_on_condition(object): """Skip a test if the eval(condition_str, namespace) returns True. namespace contains sys, os, and the mdp module. """ def __init__(self, condition_str, skipping_msg=None): self.condition_str = condition_str if skipping_msg is None: self.skipping_msg = "Condition %s not met." % condition_str else: self.skipping_msg = skipping_msg def __call__(self, f): import sys, os @wraps(f) def wrapped_f(*args, **kwargs): namespace = {'sys': sys, 'os': os, 'mdp': mdp} if eval(self.condition_str, namespace): py.test.skip(self.skipping_msg) f(*args, **kwargs) return wrapped_f mdp-3.3/mdp/test/benchmark_mdp.py000066400000000000000000000165271203131624700170540ustar00rootroot00000000000000"""These are some benchmark functions for MDP. 
""" import mdp #from mdp.utils import symeig from mdp.utils import matmult as mult numx = mdp.numx numx_rand = mdp.numx_rand numx_fft = mdp.numx_fft ####### benchmark function def matmult_c_MDP_benchmark(dim): """ This benchmark multiplies two contiguous matrices using the MDP internal matrix multiplication routine. First argument matrix dimensionality""" a = numx_rand.random((dim,dim)) b = numx_rand.random((dim,dim)) mult(a,b) def matmult_c_scipy_benchmark(dim): """ This benchmark multiplies two contiguous matrices using the scipy internal matrix multiplication routine. First argument matrix dimensionality""" a = numx_rand.random((dim,dim)) b = numx_rand.random((dim,dim)) numx.dot(a,b) def matmult_n_MDP_benchmark(dim): """ This benchmark multiplies two non-contiguous matrices using the MDP internal matrix multiplication routine. First argument matrix dimensionality""" a = numx_rand.random((dim,dim)).T b = numx_rand.random((dim,dim)).T mult(a,b) def matmult_n_scipy_benchmark(dim): """ This benchmark multiplies two non-contiguous matrices using the scipy internal matrix multiplication routine. First argument matrix dimensionality""" a = numx_rand.random((dim,dim)).T b = numx_rand.random((dim,dim)).T numx.dot(a,b) def matmult_cn_MDP_benchmark(dim): """ This benchmark multiplies a contiguous matrix with a non-contiguous matrix using the MDP internal matrix multiplication routine. First argument matrix dimensionality""" a = numx_rand.random((dim,dim)).T b = numx_rand.random((dim,dim)) mult(a,b) def matmult_cn_scipy_benchmark(dim): """ This benchmark multiplies a contiguous matrix with a non-contiguous matrix using the scipy internal matrix multiplication routine. First argument matrix dimensionality""" a = numx_rand.random((dim,dim)).T b = numx_rand.random((dim,dim)) numx.dot(a,b) def quadratic_expansion_benchmark(dim, len, times): """ This benchmark expands random data of shape (len, dim) 'times' times. Arguments: (dim,len,times).""" a = numx_rand.random((len,dim)) qnode = mdp.nodes.QuadraticExpansionNode() for i in xrange(times): qnode(a) def polynomial_expansion_benchmark(dim, len, degree, times): """ This benchmark expands random data of shape (len, dim) 'times' times in the space of polynomials of degree 'degree'. Arguments: (dim,len,degree,times).""" numx_rand.seed(4253529) a = numx_rand.random((len,dim)) pnode = mdp.nodes.PolynomialExpansionNode(degree) for i in xrange(times): pnode(a) # ISFA benchmark def _tobias_mix(src): mix = src.copy() mix[:,0]=(src[:,1]+3*src[:,0]+6)*numx.cos(1.5*numx.pi*src[:,0]) mix[:,1]=(src[:,1]+3*src[:,0]+6)*numx.sin(1.5*numx.pi*src[:,0]) return mix def _get_random_slow_sources(nsrc, distr_fun): # nsrc: number of sources # distr_fun: random numbers function src = distr_fun(size=(50000, nsrc)) fsrc = numx_fft.rfft(src, axis=0) # enforce different time scales for i in xrange(nsrc): fsrc[5000+(i+1)*1000:,i] = 0. 
src = numx_fft.irfft(fsrc,axis=0) return src def isfa_spiral_benchmark(): """ Apply ISFA to twisted data.""" numx_rand.seed(116599099) # create independent sources src = _get_random_slow_sources(2, numx_rand.laplace) # subtract mean and rescale between -1 and 1 src -= src.mean(axis=0) src /= abs(src).max() # apply nonlinear "twist" transformation exp_src = _tobias_mix(src) # train flow = mdp.Flow([mdp.nodes.PolynomialExpansionNode(5), mdp.nodes.SFANode(), mdp.nodes.ISFANode(lags=30, whitened=False, sfa_ica_coeff=[1.,300.], eps_contrast=1e-5, output_dim=2, verbose=False)]) flow.train(exp_src) def sfa_benchmark(): """ Apply SFA to twisted data.""" numx_rand.seed(424507) # create independent sources nsrc = 15 src = _get_random_slow_sources(nsrc, numx_rand.normal) src = src[:5000,:] src = mult(src, numx_rand.uniform(size=(nsrc, nsrc))) \ + numx_rand.uniform(size=nsrc) # train flow = mdp.Flow([mdp.nodes.PolynomialExpansionNode(3), mdp.nodes.PCANode(output_dim = 100), mdp.nodes.SFANode(output_dim = 30)]) #src = src.reshape(1000,5,nsrc) flow.train([None, [src], [src]]) #### benchmark tools # function used to measure time import time TIMEFUNC = time.time def timeit(func,*args,**kwargs): """Return function execution time in 1/100ths of a second.""" tstart = TIMEFUNC() func(*args,**kwargs) return (TIMEFUNC()-tstart)*100. def _random_seed(): import sys seed = int(numx_rand.randint(2**31-1)) numx_rand.seed(seed) sys.stderr.write("Random Seed: " + str(seed)+'\n') def run_benchmarks(bench_funcs, time_digits=15): results_str = '| %%s | %%%d.2f |' % time_digits label_str = '| %%s | %s |' % 'Time (sec/100)'.center(time_digits) tstart = TIMEFUNC() # loop over all benchmarks functions for func, args_list in bench_funcs: # number of combinations of arguments(cases) ncases = len(args_list) funcname = func.__name__[:-10] # loop over all cases for i in xrange(ncases): args = args_list[i] # format description string descr = funcname + str(tuple(args)) if i==0: # print summary table header descrlen = len(descr)+6 results_strlen = time_digits+descrlen+7 print '\nTiming results (%s, %d cases):' % (funcname, ncases) print func.__doc__ print '+'+'-'*(results_strlen-2)+'+' print label_str % 'Description'.center(descrlen) print '+'+'-'*(results_strlen-2)+'+' # execute function t = timeit(func, *args) # print summary table entry print results_str % (descr.center(descrlen), t) # print summary table tail print '+'+'-'*(results_strlen-2)+'+' print '\nTotal running time:', (TIMEFUNC()-tstart)*100. 
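
#### usage example (illustrative sketch, not part of the benchmark suite)
# `timeit` above wraps an arbitrary callable and returns the elapsed
# wall-clock time in 1/100ths of a second, while `run_benchmarks` expects
# a list of (function, list_of_argument_tuples) pairs, the same format as
# BENCH_FUNCS defined below.  The concrete argument values here are made
# up for illustration only:
#
#     t = timeit(polynomial_expansion_benchmark, 8, 100, 3, 10)
#     run_benchmarks([(polynomial_expansion_benchmark,
#                      [(8, 100, 3, 10), (16, 100, 3, 10)])])
#
# Each argument tuple follows the (dim, len, degree, times) signature
# documented in polynomial_expansion_benchmark's docstring.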
####### /benchmark function

POLY_EXP_ARGS = [(2**i, 100, j, 200) for j in xrange(2,5) for i in xrange(2,4)]

#if mdp.numx_description in ['symeig', 'scipy', 'numpy']:
#    MUL_MTX_DIMS = [[2**i] for i in xrange(4,11)]
#    # list of (benchmark function, list of arguments)
#    BENCH_FUNCS = [(matmult_c_MDP_benchmark, MUL_MTX_DIMS),
#                   (matmult_c_scipy_benchmark, MUL_MTX_DIMS),
#                   (matmult_n_MDP_benchmark, MUL_MTX_DIMS),
#                   (matmult_n_scipy_benchmark, MUL_MTX_DIMS),
#                   (matmult_cn_MDP_benchmark, MUL_MTX_DIMS),
#                   (matmult_cn_scipy_benchmark, MUL_MTX_DIMS),
#                   (polynomial_expansion_benchmark, POLY_EXP_ARGS)]
#else:
#    BENCH_FUNCS = [(polynomial_expansion_benchmark, POLY_EXP_ARGS)]
BENCH_FUNCS = [(polynomial_expansion_benchmark, POLY_EXP_ARGS),
               (isfa_spiral_benchmark, [[]]),
               (sfa_benchmark, [[]])]

def get_benchmarks():
    return BENCH_FUNCS

if __name__ == "__main__":
    print "Running benchmarks: "
    run_benchmarks(get_benchmarks())
mdp-3.3/mdp/test/conftest.py000066400000000000000000000047301203131624700161000ustar00rootroot00000000000000# global hooks for py.test
import tempfile
import os
import shutil
import glob
import mdp
import py.test

_err_str = """
IMPORTANT: some tests use random numbers. This could occasionally lead to
failures due to numerical degeneracies. To rule this out, please run the
tests more than once. If you get reproducible failures please report a bug!
"""

def pytest_configure(config):
    seed = config.getvalue("seed")
    # if seed was not set by the user, we set one now
    if seed is None or seed == ('NO', 'DEFAULT'):
        config.option.seed = int(mdp.numx_rand.randint(2**31-1))

def pytest_unconfigure(config):
    # remove garbage created during tests
    # note that usage of TemporaryDirectory is not enough to assure
    # that all garbage is removed, especially because we use subprocesses
    shutil.rmtree(py.test.mdp_tempdirname, ignore_errors=True)
    # if pp was monkey-patched, remove any stale pp4mdp directories
    if hasattr(mdp.config, 'pp_monkeypatch_dirname'):
        monkey_dirs = os.path.join(mdp.config.pp_monkeypatch_dirname,
                                   mdp.parallel.pp_support.TEMPDIR_PREFIX)
        [shutil.rmtree(d, ignore_errors=True)
         for d in glob.glob(monkey_dirs+'*')]

def pytest_runtest_setup(item):
    # set random seed before running each test
    # so that a failure in a test can be reproduced just running
    # that particular test. if this was not done, you would need
    # to run the whole test suite again
    mdp.numx_rand.seed(item.config.option.seed)

def pytest_addoption(parser):
    """Add random seed option to py.test.
    """
    parser.addoption('--seed', dest='seed', type=int, action='store',
                     help='set random seed')

def pytest_report_header(config):
    # report the random seed before and after running the tests
    return '%s\nRandom Seed: %d\n' % (mdp.config.info(), config.option.seed)

def pytest_terminal_summary(terminalreporter):
    # add a note about errors due to randomness only if an error or a failure
    # occurred
    t = terminalreporter
    t.write_sep("=", "NOTE")
    t.write_line("%s\nRandom Seed: %d" % (mdp.config.info(),
                                          t.config.option.seed))
    if 'failed' in t.stats or 'error' in t.stats:
        t.write_line(_err_str)

def pytest_namespace():
    # get temporary directory to put temporary files
    # will be deleted at the end of the test run
    dirname = tempfile.mkdtemp(suffix='.tmp', prefix='MDPtestdir_')
    return dict(mdp_tempdirname=dirname)
mdp-3.3/mdp/test/ide_run.py000066400000000000000000000003551203131624700156770ustar00rootroot00000000000000"""
Helper script to run or debug the tests in an IDE as a simple .py file.
""" import py #args_str = "" args_str = "-k hinet --maxfail 1 -s --tb native" #args_str = "--maxfail 1 --tb native" py.test.cmdline.main(args_str.split(" ")) mdp-3.3/mdp/test/run_tests.py000077500000000000000000005414331203131624700163120ustar00rootroot00000000000000#! /usr/bin/env python sources = """ eNrsvW17JEdyILZ3fpGvfTpJ9lm2z3e+2p7DVRWn0QMMudpVH5t7XHJGGolLzsOZkegHhJuF7gJQ i+qqnqrqAaAV9fi7f4Q/+oN/iH+A/4Z/xMVbvlZWd4Mvu/LzGOQA3VWZkZmRkZERkZER/9s//fbt T5I3f7y5ny7K+mq6WBRV0S0Wb//Jm78aj8cRPLsqqqvo45cvoiTeNPVqu8ybNo6yahXFy7pqt2v6 Dh+rfNnlq+hdkUU3+f1t3azaNAIgo9Hbf/rmD7CFtlu9/c9e/x//5Cc/Kdabuumi9r4djZZl1rbR q26V1Be/ARjpbBTBDza/zm7yNurqzXGZv8vLaHPfXddVtIZulPAie5cVZXZR5lEGX6oo67qmuNh2 +YQg4A83hEPorvN1BJUvi6btomy5zNt2qloa0YdVfhkpDCRtXl5KV/AHvwJ6VsUSXkZz7PpU+mFX vso77IXUn0RVts4tKF1zb77gzxpAQZPUS6hExXWB/G6Zb7roBb191jR141ZusqLNo4/VqKlEMgZM A6JnMCXbchVVdSdIiI7acXQUuU00ebdtAKOjEdSBvuA0pKO3//mbP8QJW9arfIq/3v4Xr//VUk/b 5n5kJnAS1e10k3XXo9HFtigB14sm3zQAC/+MRvi7LC7gO0CUEtMFIIJBJDEWiCdRLAXjVJHEJ9Bw nyZum2yzyZsoa+otEOFLJgnsZMRlW5rQ4HxOAGW3WNSaEnnC/aMBwxzKw0QVd8kAnuLw+N3w3FLZ y6LMEeWmAjSyUE9D5YE8y6LKq9qvYl4cR6f9mv1WnBaEmFxqCdHT6/uNIiUknszG7Sw6aoCIFF4m aWoTf/5W47mG5dbYWGY6M+ibcxH8YoOo8n0gsE9YQIMw1ZEK/XWLJCM1MyogI4k2dVExY6ijtt42 y5wGqmgHfzZMFFhrWtbLrExU/+05NMRRXFLvNtPldb68SVIXu4+ir776Clja/UWOtBJdZ80K6Lgs bnJkTtFtXjQr5LjF0qtXVFSg7YDrZlgGltMZksIyg5am280q6/jzebSq8/aXTn0cRajfPmI3jEjC EYy7qWGVdfcJfp9En9dVrn6PGY2X0KmitaljbFHD5bYsGa27Z0TW3CueAZmby7qhESMQNTnYbW6U J8qGpz9fNvU6UpxL8T0GYMoA0ElEPJxewJKrVlZXEU89BomVRqq29MhCknnqoMqZB/dnbI8Nds8u Az5F29QwTr83Pi24casar6vyPojMR3rVmoICKgMqzwC1MB/OXChScnphYVUPBffJ5qrdOZZuu4Ep vy2A2LDzUB6EjaqjHawNjWlgFNewbBjb10AOyy3jA3pAyx+7YW8G1mrxhyUL6F3WUBfOZvIAQMBm W3Xnaud63sD73tb1t+7Olam96xJLR9d1ucL+XC6I1bQkYl0ursr6Ar4RDOADt9fF8hp4+aYB8aUA +StaguADHCV/l5VbYAOr6bA4M+GmfKnG2/5o26WC08tFYPfTm5MqE9iUeCtTnbfK2sOxCsqQLZj0 ILTtUglrTYC8lSNZ+KQEzFiPbmqtWlheuDa9nS9It+NxGtzAPJAoMLjdEBxxbf3KZhj64YH8Ymyg EIdgmoEPmcMhkApkqml/jt57D6i19VaYohWU4Fd5rPYXC7PqJ8baIO03sMQ2HdBbVkbZalXIR5ql CMoUKIW3owBOWwIN1LotO8W+pX2AEWbihhwc8gC8b+6TtFdONsCERupPGGGEceES5UTXt/F3ly8X ByAQin0X5P2HXcj78VBhCd48wN34iCyEoFCu5DGbnfWYddxml4CNpKqr4yZfbpu2eAdtAFUf42JI YRk0yN5IN0D+G8s+FBy3WY9FPUXI1A/pgeld0YIisc2HOihQZHN40N5TFi2TK+5BbUSK2SRCuoWh YPcz2E/UjtQOjQHKw8ydnZt5wprNFRKN4SSqP5682NMnDNApbiTVKkmg3sSljDN4dJ6mTkUR/f86 vw8I/SzzwabFmyXLH7Cn1EuYRNhv8iratjhzL9v7Zd3bEqk/at973WTL/CJb3jyroPd93S2LEBJs 6jm+RyzALq/qGD0ct7eiusQ9BtniaIcyR4B6arp6weoQffR2nIaVJcXwecNWZafdxYJ3SkuKeHN1 PY2eTj+gffnp9GfRqrgEumwjUEFyxlNekRiQI6FbNdfA+gpaBWYvaKcwNBBNt1A3u6i3HUv4dblF 7jCJQEOzIIDggro8bPMo0CK7GNiS7REMbctNXlJf5k7dYwsxssEZhdOeAVyJffOIkMP4Q5cEoqN2 drT6CFVGHzzrFVYXHp+mB2zrD5F2t02DG6bZOu3lqYX43rj1xt7b/H8f273Ss8w64Rm2t/2Q4msL LB7aH6TngXrsq2KkJ7vCPDOf/X1wJDu9t+lOaEheT6Qk4AIk57wps3sSlRHk2NmtClx+IByH6OZL 85aHlBUlgjG4xqWtxJYMIIJSVuarCHlRs3YFFvjhdXuLyhB2n963tN3DN6whwrgvkWr2FhRFDV12 vPNOdf/SKW6im8Tl7ncWH1s4GBCF1KB/IpxkgUOfv4at0wUlBowCuDQaG0AEvptgP9LARuTbigxu GYM52h6rY9n2yWwUAThva3IRMo/ugrSjCjgkp/lTWC1+hMbev2INi0Vmy4AmStPxaQQ7aYZcwtKB lU00u0t28MRJdOIugb3auaY3s4qm35klFcocsFOhoe1Tg+9QZRfoAQRahudq5W4Pw0bOxUSV7jPH Jquu8gRe903asIrvAsJIr+HosbV9gSqfbxuQzIolWe26egO7cLsRdqDNFBF0Y2rVEmytc6CD6mqG j0BaAdFNSTvbFk8Tos8B4dQppy79WWDJRUTYIjaBDMRGrlshAuoq69voNo9WdRUDl8iAvZB1irm3 oZfUEQ4LFIoYbxYWCMfeItQEcFacTxthDFMo16LFJIlncWDVMn4LB6u6x02e3YS3vDMinRnUZlG2 bwDT43Gk8+titcqrHUsCWRCOxN6uxRyCRzqoHYHIobc+gJcvFh6hg9D0Tuy4CM7VwdY1kDhbrYhB oeIFsx0U2Hu03d++SJsc93o07hG4fdbR3rf5XcDS7tUJtv0ctj1HAG67gPTV6/llZW8guI+Hepgj rU8Dy5Cqx7/85S+NdianCz5ncgzJvW4oOTOwk5X+VmaUlIs6a1YvcLaabQ8texEnbY6h9z19chxF 
z9GMfNSAVIor7aj9uoroN4qol5UnkPL53YRgWoSNDw8Ut8RCqNEkaNTrhuE7so4U94UdJXR5mlaC mqqlYukX+jQMtJJs025LNPgg56pRbYmui6trPH3Ac1U9CD4V5ZVkS5tFbh2V4t9nol250v6QntZd eKsf3xZZWfxdzjzxqniXV+p0t/NG4O6RwCyANeAZa9JdTKIYFJ0qv+t8ZkdHEAmwlAATvL1GGkDt Fpg/yjv97VP93Bc56F00qazSIsRgSQQ3x99T6ZFHlG039W2xMABLGsphoc/2VoIqhg6X204e4wqf M/0w7coXS16RJ7Bkyu0q1xWGVCtDAEoW5JNXJEVl+qfdUxd0Ge/FPRL5OzJpZ9U9kO/6oqhIRsaq rG3IHk+Wbluygu3A5Ud0Zs8bA2382AHcM7r6+CI/1gKnIR3oGIjvebMGiCu3Z9TrrITtuUUMKucA aUSNLYgBR65QWzbrBh0bt7IWdYCkydf1O5ZJoMvbivaevKVCF0XX8hHKKs9KBxyddeBxBcmJylpK TA7w9kQPLw0bC6Ezd8q05JKSmPjvLNbUey8KoaslWssuIRVRiZsRNGZqzWlCU5zJ3upILJKza+PK U0tZQSI3gLKr4zSypDD9g1VU0SkVtIGnA+0LlVlN32mLyVxocKCqrTM49cM6AcKzvqZp0IDHgo/i 33fGXBU8bfC8SwrYQDU3uMlt2xdbG9st7CyJhs87Wjq1K2M1m6Fa+h68guUHWl3SlgV8P0n9QUgr 7A5DmxFAhIe9zpNdUPNikIVyZUm+rOZltr5YZdHdjOb0bqplxfQhDAmXyxL20QyIHsfWRrTw/BUP 4gwu+ehyWy2JAdHqQ5HVWCSVLXdiN/UCYLrLQJqeEM8Ss5wty5IdEFctdsey22UwOJe+xK5iTRTJ egwBkNJjp4DGDA9RiH/xOImPuWBeEBr4mBBNEYxWBxbMQm6E8xTtGO/ydJcd3lCrzKOSlFJXBV42 WXtNpLxD5geS6cg0wB3wDVs8OWWeGXQJqjQjZP6s603D0vyFFle5zz0/je7CM7irGv7wu4uz49Nz 28xE5xs1sHVQY3cMlQgByyheTmzjiTNZOOFNbmA6Xaqb4gp3TZhp6DRsvyA3NgV8Z2mRR2LqstG+ sSjNxgh5iMAQf/utq3JOjDk+r9CfD0+QvEHhD/AGqM36a8+Qy489awFpxGi66UCsRZ8YstpBm6bH WTcmPuZKPrgPkW9Lm3dC98yazs57Fquyz2Qv3X723pf1Ek+S++fMNl7IuQhLAl7KsHgIrV9O1SHU 5VTOGhf5221WDlt30YYuw6dBSicWp3P48PBqT+eqp6H9xiPlwsWzOkiyJ9Uci4aMSSM1PhIV1hsQ pZN4cES4IQ72Ow6ONf4lOgMiKo0n4DO18l9Ul3XYJbAlj0xgEQs0FAJXU+qPVnnMLF/n5YamuMre FVeZlgA9jqLWzoJU1Q5ketSK40EtZ7vRMjabK30B+1H0+a+mfHKovPDEXtwU72DR/TSKXm0vaMjw QpGgZ4xycHFM3mOqxjpDr7J3uRxNkUVbNzS12R/0NWw3xBdzD5X+gvPt14RiS4YAGGen55PoY+hU gz0lC0PIGmXsvuI3q+vG6/YqZibX17cCfQgTvtVAq4HvhodjUV+mpGiIPS2j0UTxwBpjgcihFHf8 syj2TikBw9I56Jj7DiUwsUXQmfPE2s5UvX4N20aD30FAw0eWxYxf6Q3XHByYHd7RMPce9znUGB2B YnNR5tWcz/yixOka6HtiDDRdSB3niyW6mKnl1NyTv9acxKLh/dVIBWQx5PMGV3gikSpWAGOxGeat MhlOtEXXAEOB0XWjVEcs7rRO2C2OHNSWHfJFA0N0SU/7tFtB7OYtLNRMNap12aStAyQERTwzIcm7 bEZB2Bc5CAzoSRgW6NiATfPediszD1PWYhe6aws9FT1iNJA1T5z+pi4qUrTa3lv8ow3UDhOSCenZ tamGtfa8tRVYgVZTZ5rI7BrnPZkOTTSG9JrGYmBMgICK/UZrl2Aqo5vURM8hYdprSC1wbM5aeOiE DsRBErW/+mShqKWu1o1eMe7BnvKEcBfe1BXXLWch2zKjxKgdMp7uJRT+Ej6jgfUz2OERKYkNDM2p 0tW070YiQBx1wnI+goHdstzA/ACPJe/LfD4u6+pq7AoS2UVL5i0p2F2wAjDnpY5KILrb7OIruFmk dLHEW6DK6GVNO+4T7pGm7uqMPqtjbnT99HQdtx4OaBbhgP6e5vHvq/rv0WL2zpJMuJSntvD4Zqjr 5cpeGyWsV/TOKiLyJdgOKHx4qoQtoOAWc+Oxtz7ZqVpDDZ1aKIYwzF6SUXADJVYTfsXLZKCaaufs 5NyYU/qF1W5DIv4aOcZzNt7lq2e8qSYWoZmPitrod5jY5K9FbuqDRXLqQ9+9fA0sEnfaXHUDV/ze c5/w0iZl1VV1egtzaFnjyeDEZStp3ywFywPVDbt/26rA7fEfTR+lP9JP5Zftz3ZPlZDVjhqB5fUh rh7qAP+52JFYufiLvEKVuW5ac1LxiJ2Q/EMUvkFQ1reLddbc5HhiMP6IayBs6+mzYR/uPaxQUySz u0O5H5/PadA4YfqLV4j5g8eJeHqlMdyK5aPndyfNo/YtH90C0nl4L5/cLRGNKZU69JADZ1u/uiyu 0KMOJ4uLsqs/HS95DhT961rqpDLg5UUyBTd3fJr+8IeWQV9Pt0O4YobcOnc13u/AUCes1XPiL6cP YNExFtLomOV4fYSbenILrVXLG0ZWr01syn9g2BnC8W8JH9IZdxmesVUu5JGGvRisXmqXYu053Fdu +s7Hnl/0QnsR90eofL0Ulbs9lyVgOQZbzsGsKsEH2vHYKmesMwPOwgqq7RnMgEQUVsNQcFPPEUsG TUwY+vwxtsFblS2RLZxpVQ5myJYXZOecH7OQp60fk2ivEnepmDJxU2KBq2i7UdNLSsU0qMRY+Nvj DmUcVrwbHuglkPb8B3gwUPzEbsB682F0Mhuq9XgeWexiyLvHKtJ37CnUwRSDDDhsN/llcacN3NaO 8hidHKKxu9h7R8sKZX31y5wI4i64zYebHke9hsT3Qoo8Ng5KvVKaUNm1wrHELJVhJ7zshbEra4i1 K3gYN22BrodtKce3hRIbpK2JwJzznwkRYVaKh2SPmxBMd1m4pgsf7AcGor8OQqSsxjeG/96Tr9Ye dyXuq03uGBtscaW2vMfzO9WGdGtAX5y2m7LokvjrKraIliQeG9+2nPJYOnd2OnOvOBAdIOPitmfD 02818DhyacE6qRHsBQz9dv88TIVnixi35a08EfUswLstRa4/gD77vsnv6SlKs4QEOTpgeQr5R70E /ST6Kczsfxz3605bvA6f9rYEsuoBICzTxwBvEnNp5gwLn4eWOtsGQalbLMQxrV0s4vDad2ZobFeA hj5U3z4a9y2pYU7DdPuanHCNswTHJ0Br+EXOTg/A9y/ue84fBgIZI5NUn+NO5JwT4JI9RCIITHET 
A4wNQFkV7dW2ILmZuMy7vEH3lIpkR1T6p2EDLyhgEtjA21I9o5jTGk47cnqpnMLO8fMT5f1gWYIG tFvHpTT0Q+5fE76mNIkwOsXQIZA7qUfHpydIrRQTQrzQdCcHxrJrcrVRG5vR4L/+mqzABH4Iqr4a P/xajJUbOkmUP4Ix7HSeredKG0QGd9sUIAkPijef8eIXY6XLGLTGtjCn2CLcuXKNo3UEtiy2D8KD gi9A0qF3d+GZfVyHb1929nxCfnQRpz+/j4yA3nMWNz70E8vP4QTdr39DjnzDTTrK/DE52g+3Y/zs d4h+Ri8FRpmMaVsb070RUIm9BUEvFc61sUvqeESzaURU/9z39ZadaUCCsmrKnSulLniTqvU8JZoo AbsnWltiBe/a9HuXtNPbSgc6u87bNrsip1lyiUWOwPPhhrcYZvAGgloKfIzG8oY+mgK2N3YxLLo+ kz9G+6GbTMbw4+2GRZnDPidceMDYbBMXmpylbx6iiAcIoAfOrVPXml4RKqR97wjd40HsTErTNdHE MrFAT+zByoyHxOTZ95B2P+iLtr2+2Z7J/JslWEuX1tNmXFFdYUpFbFHGnQNMH1U/9Eo6vciR2ZfU Vp84xBryxauBO65Kqw94NuIOXW1wb5a9msGnoYuyRK/VZhQCO7CXuFqB47ZvDmL1YunZ1RV/03Z+ D30GhmVu912eXJcln+gNPU3dk7Avt6AFyqWY1L+g4EE0rXv+UaYXtAtaJR0rsDq4cJg70HKTiVrp dJlvdDBf7HsvmfNl736RHLgn3BPZilKbXSGX6nEEOgayDdlBaUHJCTTN3r0mdqIX8UuD67FoQebc Q2/Id8IgZvzTn/4Ulq5yqULHZwrFlrTIdUUB+ffRpm4p9kA6PuzuDzID4xcgQ5iYlvWpiN5IfXHK Ps0ILQAsZBNwALWKa6XBA0dFte5xaZitOT5ETssTA9NcsCDf8azEqrN95zCt8Yg15yCu5Ab6HSga OD/kVYOK4IfR+wEL9BRvka/yJN52l8e/iPu2zINOXby4A90t89OinqqB/S1JyYlyAfOCvHR1J+WS zoqGhGDX2SZR7dasMcDO1evneCxOAebW6uFuJUetOUnPuujo5M7cIddex7CR8S6HBmgE6/UqMWZd Z5Mx9l0Tjq5GfwC28R5mWjcmdSmr6SswHNJYjnP24ThqyDcmaF5m2uuTq02N6WwwHsIQUc/8CADu snO+94uqxWYO6d3zoFwOw4jidN+y1UreWKEQJ2RXJBtbm2/m4+Nx7whKoGlDdr+afd5g0am4Dd3u HO0QYStDi9uSjtOheuXtvbfwYgMNbyZRX1SGt6QMC0Bndg1bDcws64A5ikDjRTR8EKh2kSBXDmHB 7LDWt5F3FdtsK/rzrvO/g2diUyOp8X1GG+t69/a72b/u6ipspNuFzdrYWNhSoacMxyp47hsm1NSN x/5tdR4F3TP19/f7Idoy9/4Fn7MgnbhlXGZA4kWAVHhvP4BWLFVDHUnxN+XgEFA3Zv0gI0g7bFDt EZd1tGt/7RfUB8zmSwAadwfPiU3HeofVrMfj30Pp0LLt+rqd3/bQdPpGboOckNKxEozZ/mzjZ3hy Mkx2vIwv6nIlp/YAZg7/3BqPhqiUt53ekO1ZGRq5vN7FJfcN+/AhHz5cewghi/wje1nrNTGJxmzr G2i3v7qdJvYtZ4tUZg+DP0Bg+6RMJU7gdWX+R9bV8deViFpmStJdoSdDgz28vHTe4U+OGeYwWcX2 DDRGGo/lKPOQ0sgHoh55Hon00pik5NOhHOJRpEP2olYojdTbbiOBI/MMIzC6DmaPJEZWVlkl11nH l5kwCEGUrwr0EYooQBSFctW11+2VUiBUZzWx4QDaK4pRSjPtXtzDw6TjUy+kNEGD32ezwtYAhCgx IlM7EzOfxrJz8X2CtVNnhsVoftjcMgnu3D4OnIxDGI7NRlyqVH4Mh/WavB28TsuGRhvZA3eYPptl q/ZlRg5e4+A5n4SY08PtAyHrGPYJJtI7vxk4h1IGrjbFI9Zc3LSxH/jgafQRYhAjzNwWK98453k/ UK3hO0b2THADwydRggcYywMOEQ/rhoH/GNAEuwAMM9DM7qb8jnoAdvdkDyLsDQJ+YONTZ6p8O28k Mb2W1/roNcnUTQDZI+XuFZ/dW/6L2069IvblXiCIdDtdLdFn41Y/NE42eNdQVZztDq6syzk2AmtI 9hU+7z5D7N7n07fRLM1cQ/EuE4ViWw6UxfWrHniv1GAFY7ODxiCFH9J5qbK712razAmePFHGje9G EXxD5CCioAmWoMSHEsVB+LdQeebTwPl0U6uLIaGp2I0qB7KaGQVypO4RSDT/+uI3dK9oqX17bDSR cGVFk7ZcR5XDgUFGyKAE1dA4VesYRXtC7kN56+YbdS4u1gtsLGafxJ1FsRy1dlDhg0uqEfhl+cLO km4vVisnegNXdHMUxH2vNrc4tAOwoJ1Uw5NzfpjLDcZ9KVrayxPXO9MOe+fO7VSBNJMsh1WBfUR1 5m5vz71ZvgtM+2j09r9888eLzX2Xt930N1sQK+7W5ds/eH31b3/yE6YuvjQBryUWMho4o796AyWP v/r1ZyIuTojmtq2ENvjL7apF53NADxL5ikJYXXGgQZAPGjQoT0ejX2Ut6FzkAkZhdZiIaTF/WYMs 9Fl2W+b30xHFV+2l8Khb9anJrbQe6iMe/8Ae9UjxhafTr6hD78NfXG/QmYuiLLr7kV4SaIa9bpI/ +1k6khWgA6vZBfAC8XXjViPD98fxroocyQRUAlMTLUHJ6VAlfEveYR2N42/zqMoxVkytPena7QUw eHVHv6hAkCpWujHy12wxNlLdrDjQGIDBSTudnliRFrhWIdEGN4aNrqZR9Jc5BawAPpyVS4rFNJJY u6t7kN4KpNZ7snnnGd5XpuwQ0DxdROgAwGvsJywL7g6WoPYAyhKKop/JLPoEPkWz2Tx6dPfn0d/D 74/p96fw++zR3dOTY/j88+fPz/n7s5MTfPL8+fNPz0dBJyMqdnrC5U5PoOTz89GiKMv8KisXPItk 9k1O7k5OJhH8/kU6ifDbr+jbJ/LtGX47/fOwVgoFPv0FV/8UWuAqzz/9lJ5gm6nVqEwGtKnIBeTg Y5CCU1SFhfLK+hagyBcMdBX0PcKlhkUnFAsrxZnzxzYKi6D1bfQhJwDK7qQb5yNVFRb+oqFELHoD TzRpnx2158DwjnZq27p0nLJe740eRrPKy35nnYdcdCSyJDGmRbZacSBy0KabVidXuWpquvXND9Fl gJ4kY6XmCPsq0JfG1JgacPHxsWJ6sG1ktFfOxy3ouznI3ytoez6Gd6hounf11rD1vMua+ZhfqRgn 8154XLyyPx8vmxyjmFFbxwBQ7qoJd6UMIRjYhaOJoevAnv6yo/Jgl60yA90GNrS/1wABJUPlFg2M h/Y1TmMBVGeGw7aDcerMmnJNpJME+CSzJvikMAz4eMpjm8pzJU/JV7OlSWmkU6j7WX0FnCyRUhMP 
loWA1AewKbdXRbXOquwK0/jkV0WL54kWeHcYIK0MDsQSQvgVygdMTCYCAA/GDATJ3Wptd/+2le4h 90zsBDJ+5x5YyKJRXyF1TWQO7YNWfoOiKKfBwmC1sEHCJg6SjvMIg/InUj71bVk9MBVIBOTqomp4 pjHlhs8fvNNbRDjU85wpxNrTtrmOstreFKC2rBw/QO3nbYrR0WXbL7VYbVXcY9ZcNf5qIHnsxhKE E3Ne5R4r8AqYywsY8SovVuKHPp7NLFsX51s5OUdFXD6iMbLM0MHxCSzBeBqnPbjkFUvGEcsTC8f0 2xjFmnjmD4L0Bqc3k2h8Mk6/7YE+uzPtTzf3aGUYpyYCm6xtoFhy905Sy9/bWvxqNDPbD0YZkHhi vThuui5KyaBGJScTu7Q1TFCHVOEY8TaeipXYwHBK64IghcMSmebtMtsI9ty7Z7hOCQfxUTuH7Xcc w95L8a7IKQc/oP0KkJivEhDnMSJVkqbnAzSqfZK/rj5UFBMdtXRIPo6k09Smc9FsWdYgtNkU5hvv +i18+ERV+MgOG86voaSKT7YGSfw9zx4oTi1lskY1vK+ToKUWBET9esqbaNp38Yk+nJNkM+ijpgWA R3dHJ0+/QoHBC9wTtluFqn/gVxeTJq+NM1dqmYIkjPRfTjwqgEoD/ttOTguEfZ7umQW8eH0UeXfL +N0CmdMQt7C51+O5n9XAZThS2Svikky/deR5uD3t6sDudh6JKR82MmEjqH3gsYnnhmCd81qkMb7D LozplhhXV5k0Awd2hmp7ExN/qPi6nCfMGfKx3K4jPXTDQYO2FexO5C9W3o+ffBQHzqwUMAfrfRJs czyvxTsIavDKbSLd1fv4Q8G76Sz1T56OPzpqP3wiXz4KxIzyED18gSaJQb6i4FJNE1LyxVGD9rwl isHJ+JNsA0sKht52q1D6TGvupPaO6EA7Jkxfi/wQ1Argl6DRfHSE4zZfzU0K7oP2VAkgVzbvwCrZ twSWNTCEZfd7Xgr7KUT6iYaQQ+hkkEbCWHFdYlj2IWQOoUtWyD8WdPVWv4Uuecfoki8/JLocZuHj i3D5PbDUGykB9BgH9A30WYcsqNjviii+PzH8uFuB4vdRn80O0IOFLdWLs5g6GJ/vimCtTtLNQT82 wFGSjdyvJqDn76XLOp4kr7iPs2gc4OEOfPPl7M9n57v3IIUlvKwCKjtbcvHh2IrbFgpXqNEK+8M+ HJoOTSLnIN5HRSB0zfdfg6KIN9tKyWTc6hChFpeqWyx1DRIei20+EeelAcAbUu9EQN5SfDbUk9AK GnDps5tiDuK3NSwT23XVprarp4K64bGq1a2ADKEXxyKYRQ1oEi0WlAiMjyhsfiBpWVAlneIvazaH UlI4sCip37ZzUomg2a4M5aM2yu4ZdkvU3HO3fcwGhr3qDU72kb10g6c5O2hnN2H4GPdlku82+f5W PTR/Kk2IvVXld+R/6DNxexPQamLWZT0lWtXfoxm52SNtdVjr7Io1jSP+pno7/sjlUjH+8ndHnQGF nveqmI3S0pNRt8cx9RHVSgZaJBQVEIQf9Xy6t0BqCyq4QALziH0AMNBw0V67kHEqCorLs22HV1Rx qY4SD7p2YWxvUgu152U7xS06sa1zoDrcxhNK84cHM/PeFY1ADgdj1zsYmkGe4K3ehNDmlsKHi1Ve EvX5FY/D82CsWts1Yl5fY1Fatq1N+BbPKV2zhp3zl2jAFSTPx6fTk7EZ05jGNP6lrVb69all7Fk0 XIiJfbyjBK9F2IULKAU0ay3QwTrCUPxaPN7BWshA/CrCVAbr0BhNHYXv4fIFjvdo+v4lGdf8KR6s Fw0imlyREpfvDM/JEz0pIYAkewQWrjovWrTb9TrT1+fVY+a4uc1G/VfcgQX68o6P0ah6xcHVgBiR yrB1JTE5aylNR2//K3MCT9MB9Pf2n73+rZzAt9sNHxIBQ8TXT0h81Z4drQk+pOK4maPyiQzQPh7/ PudqPKhy+HAKZAiWrh050Dmkopjj+qRKVbAOozg2XP80CopGvNhzPCbjOKb3dIIt1iGG5J7ekBl4 g+ZuGZwIImTtoQpz+q0dML96TqsokRnBd1N9p8n2u8F7VBK/NsOIMZuyWBZdRI4mXY0smQAQdJBM aGawY/yApWlgDRZInlkChnIRHdmjpoMzz6qhnnSdXJ7PDEXb0IG/e7EzrasfnABRDZNrpqM7d6BQ QUoovPwacPyM81/XzeB5E0uMVSBZO5nX51TAfSHBnyudhvo/qiRTuhFMMS9JOAYyV5FIqFQ8MoFp 2PZRHPf4oq7LQS91fMmVudVUMFrV1d/lTU24VCAMI7nNWnJrGABqOx0xomIQrWIn55xOCL0rP5o9 5oXU+eGj+qmgJHsjqvshgaEkRUF/dV912d1Q+HS2GJ1xmCi74rS+vESHlsfRB+jiO/5fx5PzUG19 LmK1M9P+Ly09HB8U/4F6cmBM6fd0T2dPvYiGFrNIxuw8qHLFUw5RiszOkuDX1XjYioq/KKnbcBHY RAZfHtlUrxOQwm6kb1qQf/zwnQkYByJh3ouVgHsAp1v3SZSPJes2nik81iDhxkA35gl8IW81PN9W h5e0YvmRldYDGJyS5agA+qgNLoaVyhqP1TBREPcQRGp/RQx5MvahkHOiAOpdDV5ZGAkvWHUGyrzD a8iOl6OwmQYvC3L1AS96udGCzni9i4nOBcU7iiZ1AERrzVNwEPwz5MduLXkmrnCaXYM56Sel0Fgt KNWITwCceQJEgPQ7OLs7a8+sMxXUSEXWH6dDI+IuDjfg4rt3hfLBSOzHLdhjCAi3LTSp/E9kW+Gy cT/cidoTJ3QM7vr+DGyqQjM3t3S1DgE4VV3gKABlnCevH/Z9U1qhgpKYhY5emgyxv2DxWSiuTW8H 3QSPw/Q9/sNPnsdAhyuKMTGjWxc68Se20WMD2EHXQU3ZzkhMS5CypV/9xNQsIQnRqoDMvQlgN2xY KyjyA+4c6UtgkEJXXMbay0iVnyrBwHFG1wpGost50yYn5bQwaaGyQDrQOgv6XIVyJC3oCQhKiA5G goOkzT17fqN9kT9biApAsMownKFWXFQbtqK8wZSSYVGkNThvuE5sGfUwgFGbYHUxpu4tkrblpu9y K1stGJ99/sXrL998fk5U58DxJiZEbRjASCyajkVJieH47YegQ6mcILypFSjLStBlXniBeJzxWopU Gp4Mxc2Exsy0DDILVWA2dJUYj7i9jn+XDvfMxDahQbfpEltiFNpe1KENKQ67zd9Ssn9YhX5QzDWZ QTkDoKanINr14NTbblmTejVWx6chdQKKjg7r5L6F85BF6CwwrUBJTKP9K89GQTCQ3d7R78L28Fr0 dhXVEzoLmg+fBXkdYlvdD94fb0KhcwJDzmMDh7HojhpsdeQBRJd9GJsEO/UMYJg09wpFL+Nz7bAs ergg+w0ZwRPn9KXXz/C58Z7TLrWZ3wlyQfG5o1+0IA8+39Ng2JqMEL6iXy8/fvVq7KEBs6/5qFC8 
5Am78o52WRoHbIwdpSLw3tkrrCP3CnhMtyisW+C245DChPKdiOSrGtnMTdFmQOMcscTHuuRJ6hXl PorVk++E/uWLz1/P6A5QfNzEES97ynqc5xySxQq67V3bGAs6IubQlLSCaMWS2h/tivnFoqgXZBNR geihuy0D+MLli0Vw5d6NewE3bxeCwt5EqdviLllpWF8FYTHeHwILZ/HyeQjYw/uFsNpXIVjqbHEn MHV12DZPTRcSwWI8xyXCoU3YmcWa0XE4EkfoTjxDtO/F4+o5ZMCycG4DCwdh6mQl5BzeK2EoXrFm PWifR3AeUIqxFGIfm1pcw+WIejiG+Pj5xy8+k5vwG7SapPZw7w4a791BQ7rzx3S3Y1B3Dx6V7Rgz sI+EEfCVwUBEKOh7HYvUMwrecdeAIq22iUHbweSeZSeY1BcMdmJSWKfBpO8nYGMy5EPwI2ESNyfM Y2iRk4sK2wDjm10cJcHkNlc3WrAOVbfVbXWFRb80GXpDl4x7xc+w8XMvap1nn3tkHVzpZ5yX96OP 0DTddisQ3ydRMiaYx+uiRdekBgnK1d3xm9zB9K6pix1rDY/mY+zeOB0eJPcaU9KLUJYowI75St+5 ZdxfoiWFXXDaRJ25yiGKQRsZ7d6hJ2+h3R0NLiSLMb7ve55Jkk+MkoFJ0nELeh+wQqU53bqR90K5 iXU+3XfaMbd0N1IqSa/p7pe+HjHrRxKgHCFcNJ2gIV+3r1IbWItz39azTx4yF4EcuSWW51a0A/HM c5D6yMg8QPYgs5iKvsjjijt9sWR8ZOQskl0mdO+1uLyPjhtKIAAk1uS8IVo3KTUAiuErtOGJPV5k rEsiJIwxFqIrW7bhkn7QKee4ekEbt9VxtXWPAx7pW9DKL1s3BpLwLNwOA605bPxwf0iPGQZ9Iftc 8NVfv3gZnR2tziN0UlzJkXsQeLJjLHgqP3rzRxgyxAp1+fa/fv30D37yk95FdDrdkNtvo5FEdVAp F2iVjiTBA/MeE7yzuWcQaBHdYFSRWAqaDNKvoHEM/5PYOSKsM+NW8i7jxWpyuqYcEpyKvlgXXSsp V9AqjGdRbfF3uSpr6Y6UCblaltsVXe20MrJUJlVLqw69V9tGpZ4m3dfJOq2C6IqZ9m7AuEx2QDzZ z92xTSWSn1T2AvN6qU7vMDXvu7w8oA17XlzAdikGiw8IuH+XqncE9Sh63bAW+C6rirLMqJvik3GD N/RhxdNsmGmgjKsq368fRpeso7pl/xzLph4gOz++8v6T3SWGTNRZmYfPdvGxHMzrC65YNVaZmZFY t9VNVd9W3s2bYFx01YzcFMiDIdADI+qPamhkXiu6c8E0nx+eHbV4AX+cKpJmswMsCWQdKpAKTNvJ 3dEd+RImwdZYPVbtAs2YKL86iTWF+71Ld3nbqWwrafSRhFPL7nClBlgpBTK+w+uTiV3y+P30yZOn /TOu35jybvHjIg0n7sPoa7Bnx9PpNEaZmvMSp8e/8bivCetMe7kVDxizRlEr86cfnPRD1WTEhI6J PaFtAmoS7o95SXAGSu3awlPBLOa55LFUYZgthxhgaXhzWdte0EWGM/ToTLjiCkiuM5lk03DTUNG6 jdVQ4kj8nTBNNixmFkGX6HDU1VEFfKcRZxqLQwLHRTRHkjVI+sdL/rbBfYrFlqypt5K7G7neE7VD qBoZhiZRkT0o6l7WrKKn0z+LkFE6DPcRDPFdkd9ag6GsRIq9qJQPeitJzWOiB0b7XM2a9xY3jIF3 NXQOIZ/+2Ykt3rX69g9Hh377z9/8qQq/NV3oWEwgu7z9w9f/97OhLVXC6YzIxUpk9UaF7yLftAnI vR0eYiBkmKyRE+lrqltSlX7FvNNPhL7Q/h7arDoaoerWXcMsXV1TloAe69W5WdkZc9AlCrW8ngP2 3dJOTD5RiRKGmTI7drD+gn9U638DE9/LBosPowvtfz3lve7FZfSJ7D2W0IBlJ0hlVfQJcKmIQ1Vg qU1T391rVkj0yrL4tY40dScBXiSDvAaKRbi6nMJ/ggxWlhO7n1xs8Wz+PdWV97DaJ5QCAIV8Y11t tiX05iIv61tsDNbou7pYkWK2bVX4No5PU0a0DLgXJJz0+5O4o/+EkmkJGhjbyANkeAFId4JM7VEi Dph5d12veKyXtLIpno5qFXlGtu1qFKo4Yk5D4XIqhIfgvqCVhG5vEpWgLFB6yDnjWGY1BpCgFFIs x07SjeBiEBq0cYgioEGLzJeaPiKHd0DAlDOcVo+Gx8ggjgYkDTWA3+qOCBYQloVzINJc87v+VC4W WBbA0B1uRpxKmsbmYqAhLgQaI5RjrEKff3WvnA4mzBCpIYBsNc4JqwgYxS1l5atYuvMd3V7XrdUV zJJLCPdnWVZMVUP95bUBQrngYIJVR7IG3gLClnm+svwygT+pnH42MWFGUMzmDLw6W1Pqs08SOinm 7YqieUVlXd/w7XTdLAOi/mMLuvvzKIF9ekJei5MIPvLxGWWVQ2edLlrVeVvFGGSmwkSl9xJaSlrA jF9hiAUaDhDgRM1TFdELHs4EPisc4U54313T3g20ZOPyE9kQEW2wZNtilTccw+oip2IyrWpVlXTR A7O3l/eM4SB5SQqvVUMiApBXVvFelPGWIdiyOdXEDlLlTvYEIdTv8Bx4xeKFJkEe46tcEqHJrEUi z2NwnFrypa3r1Val4CNfX3IbRUCecczGtBM7ucKnSVPXHXWNMC2KAPx57+Z25Yd5xdsZLB71avvX 7mQBUwX/la5EBfQ3z/IkhTVqdt32cvNd6MoaG2cA4TzgthpwzxsAxbSglkfiWLgsFdDGrygyJm5u 53iuPbLZbI+PY1Qb4E7XBXBoWPH3hCbmwLh12FCanNYXJwOW6jxNccvJT130Bhy51HxJJ+1RGPzv cIeW6gZvNgQK9GFATNi0p1CZexlF/aCGiOlZKH+oKgjjWDe1OyN9jUclHXUIQeyM4aSh2gvu7BMq SCKF6TX2Vep+MlVr7Hx0oK+fPubG1z5K7ewgPuFpDNpL03QKw0a3XYIJnszTpL/uUveEzBtb0OvI 8lkxA5aMmgGDmimDjMeqYRm7r+timVs5gSxK8WnE94aRuju81e3huoeTqGBKfbJcnwahSImzk/Od iR9BUbjA80VcdeTHgvxa6obA4r3pJP5lLJjTHZkAvz4880981CZHTRrr+z3OcC1TgL08Uzkd8Khj qZP60p0ffEEmd/gDNU1BZMHusRPGg9RySuXDZUgeadwXebmyK47MUyit443RXQJQpDuUFBOSlrW6 8TFrY7AN5DkpJ0rHviSlFE/+RT7WN1QsVUslSLOuQqhYwSqsUePeJxGcs+ezrqY9i/sVHoFcUpbH lDWIApWybcjyyqdut8Mbmvj0ggwwfaYrJe5k+uWnOh7/PIo/xO55F1cdVr2v8LImJVQ0XasXn8CT v+DbZnVDp1Ep8mB83HMOEg5K+JnKSWDfyGhrvg81MLJB5LkVhSUdhZ2ZPV6r3G0tsug9gqnjrmv7 
soCzzcTb6oFUgIdGOeoBDyCCX5O0l3BmSXzwqlt3yZk9o+fpPpKAru6eZG7l8AmWeb3Ll4vfycRq pFfAMhc7PMDV7f++oSXxJ9mEOMRwuInDdwSitZEh6j/XN8eEe3DezTAR6GzD8VFjspwnqTJU6HsZ +oosXUzbdyGLMQ9tk7Oitb64vfQHnosgA4TWufcHDv3/20PdvUU4YyVpyZpq1GfNYwcNSZ9BW7ep 3Bha/5gRtGMzbLd40PW5HlXKneNiwfS3ekn5oHbxbb7PKVYAjLfdHLSYpehBIxF+7IomgWsbxHlD gyQLiLK5sTfMU31jqt60g5Hg3cs2AWn7EZ8vLLdswEPp7TojW5iROdrQOTkpng4B2dtc+Jw8cJmI RuKjhp72N6WnQdwMzO2YXIyUy1bgTAxRbWpMevh1eb3Tmlqz4wXuEA1awBdlftlhg9ajpri67rB5 DXr/+aMrevTW5N4sUubH69ucRsyQvxsUGs6ccaOkmcDh6A4m8bBD0l0SmrWqqENqAX9crQ5ZvFDs 0IWrSMCL9Nu7t0giWVAM69N2QN4aomy7B9r3yaPdYBI0oSBr1kd7V7BVOLCC3dUbWHJxguewMR9O 8jVeu/voYBWnsZqqL5pDZuqL5v+fqB9lkgAtu+Zo9AgNHG8q9Iy2Tnvm89FNnm+ysniXM57J/N8q SzB82sDGkzfkZvVbOZoB0RdoDX5mUYxUZzEVuhs50eVeVO/QPRDKJf/glUql2LfGoW+kIzZzTz9u 0F8rRFV9ymIbghN82Kcvezhz8zE9gHgCm/teCgpMlmlUB8OOw8h72M9uwnzYxmT6+N22Ffxgb06/ +01lxHZeIWu1eA1FpbIaflUElsNh9P/xaiX0n/gyw+PeHptaC+LV9mKo4vHOir/elkMV39tZ8dPi 3VDFJ7tbrAfHeLSz4sv6Nm8Gujrc1zAf4Dn6vTAC6nCQEeCbtFd2kBHQMMOQGAP90g9hKtaK3btg g2wHOx9PZMDDbORgeDQCACgjseD9PvkSCc00T99faOaR/ePib9ZKMaasT7KyxDvWB2nAUta1dqio RLtMHdaJkIUq8TBCCGn8fY0XD9sV/V7MbV3292wGEV+qADMghy2nXJANDMvGmC+FouJY47+s4hnD 4uF/G5g/p3gSO7J2tiO1qhuVJWOD9F/z5aGALCvXipDcXIufdganV734JeEIGgYaneL4cFz8Zu4q zQb5KwxSmcktrByt0EaHx4WIYrcGPjmTauc0gLDUr/pbtNThAOfj+Xg8150A2X0Sh0wdPc0kG2bb A/FddGOYk+SonZARUvo4UT1ID2qcIXgABvi+TiffZc0ikFhdPQ6vEP06Ddd64LRivXjnZBrIgUm1 cPgeKmHD0xbEGtWxuh6aQIWu1QC+VnsQthrA2Oq7ogydgXajbHUwzr4T0qjSag/awvbD5KhN+9ZD 5rO25RCvGARUaXdWaBxT6BNfKoLO+/ZpxV75g5uyyELDrr1xn/UQ5GmXIf3YJ6liZiKcWWchTD5o g7Bt9yQ7hEz3jR5M/zx1j7AbY5S630om76+rb4nrNCBrTqLAgR4LQX8hDk4HyEBS9HdzChDcgKk0 c1PedaE7u4/H9hLJQcr57+QMvjeXMtKkb753Bm/fSuT4q8ZnDh1v8tb4ECt5ZMIeyJwdvqVMsDDm zr77pCcgidUBi4erCZ4JYMjWxWLM53eh3AZyrunPoqrZm8sdR3k4DH3ZTE+nEov74fYeNts/7HT7 fXWixJGd03r/e+IAZOj5Mj8u9Cplhw7o3bYsjQsG2X7UoQNd0zjo3IFKHuIDQvdbg8wC36ROuSCz eBS1xXpT4q3eWC5dk84R3V4DXcvnOXpKx/YcJAzQ4MS+CRtTLcCmqs2ppV0l3a9vtjxEfuqxdr84 3Z52H52d/mx2/PTcGhlfrLcuKmZtpEf5oVXV8lpxuR61sd+xR8FEGcLv1mjnUYrVgD/ikBWDOWHw 2s/v3lygqbq4qg6kaih5CFV//y1w75lJaBaByPEPRq7z9o2Qz9UxBfBm4tI5qzMa41pdlZBQBg4C jFLOhmDy9+nb6y2muq5Xu920oIlzt/wux6wDnLIAQsgnK7Ct2A5av2eRQAjy06JdZs1B57tS9B8v SfboUN2ix2k/YIBY7pDRkbMtlN11+knvexiAh2mvGAbHU+Nnl2CJl6Yyx6i2vdFSs9Oe850JCGoe Bs90UR/DezEcU9Bfv67FwqvGbrwSgqXedpKBYcyZoSiGNQmUeJfRcn/O1YVH1yxhwnsmrWDbDkyr Dmv7Rht+zLcm8YIHfU9O014BFfPlORWwaE0IlTyYsfEYiVGutMs21Qt57LoBc9mgPZHo0dgTHX4Q NCzuXutmnUs55+4pPe/dID2z1N0eVQXmObjtWoZLX73WQXgUBZhbtxJHKlLUMECtSPOPvvsPyJYf v3wRPYmeVYDfaFODENPCw+8OkKhRS6paopczq/a63pack0vipM/k0iHuCz0SEMISGDHy/ji1aELC GY2vAOkMYjyRD6O+cVf6oCLW32/ylkn6NXxMZ4eTvUOKcnfN4kLfh8bUbSafzB5E2hZB8t1tHZ3f 2l1VvH5CIYbrN6zPmqWeG3IyTnwindB9ZIrhWXSYzQHKksCCnL4fpXNMLbJnH95yzVcFKDCs5eLN 9S5aFSteRxiyM4peba+uUOutK+CPAXh4vR2VaOE41sWEi/yybnIlLOFLdF2Hzfz4uKrX2VWxTMeh dSxj5asVEqN73V4lEjHVcFaHuy05jIV/iUjFzjUEpQPjvkAKUECZooVIKcwfxQ/pLihmUndhF9hF nY+k92YRIgC1D/MOTbflVQhcTQtnyrxn7H5dAy1PtYqZTjHgwUZI5Y5urvlLHcoHVjtd1B0Ip3zn B8JniQuXJakbyVi3IlOTI4FUKugFhywDMKkTKetOz933FAUo6gpvvnbILgmY6SimlMYetEo7gRKW wxxkj+eYpOLDD5UDqNrP0wE5AcGwDdcKJNnldx2bgmcGjicn+OZkNDdBNUdtdhW6mVofsaPv37Fe etednf7ZzIlpiw9F2kJB73csd+zeLkI7xY/Isn2xYDQq6EYyzQaabmK8DFhUi0U8k5gjchXahL24 TPoXPn5mMgQE3r5vEvwkgTBRMYVYiU14+GQMbUTvISzs08/Gqf2OuG2S9h8ml+Lzj/WAeZ54ZS4Z 3JWuizF2PrBLFPi+BxvPIeEhVT5xX1mM4enj9x9/ALRV1lmHAJgCYdrGxHrcendqXKaUELWMDuii rjdtLNW4BGxekwjzppxOoqfhN9x5uykMCnSGEGHc5zSGD9y+xNd5WdbxGb4nErh2Wo2vtjd8HntN WIB3b//Fm3+BwVcwENqUbgy8/aPX/29JecxG9D3CV7ijl8SKmeGM8LUbS21CFkEMx4Y3oDF1oklq ZoK8KGD4HDbQNep7BfCG6v2nQu/ARDq8c0QheenFGCkgUXetKVUNkbnOmEFDrTorAhsUdSOshK+T 
V8FLiu6t7rrFcYnjSzRuuwUqXpXZ1ALxYzhAnV3TjyZDeJhTQT9Zs1UL7/5bX0179W3VT65FggFh LGRI+LzuXqhJzFeywX311VcR4zj1hbfNrbFm5iSmE2ekNIdTord8JYkhbldoSNncbouVWJLhUz8F BwJRV4FpAAhR8oYl9IATBqjfYy5UXxKaxnbuEspt5w3fClfF2fDINoTmaswjqEMv/fCYumo2B2IK SnJqviuNqat9mNKZ/RSm6IEm9Zd1W9y9BPwkvJym+PlXmVZxWEsHTAppYsStCQPFtbqcn3goXF5n 1VXOE9ReFxsM/GFCe1HALmLMlEnQwZ77Llpn9xjwQ2K6cEiZDMNRXuCkusZcfoXqq1z5X+L+jcJb rz+eC9Fy27QUemRqj0J/BkJkeRY7V6wS/GPwfaXeUo/hteDWiT6wnPViat5pI9I72Da6BPFYZuuL VRbdzaI7nnQUn24AvWkwuKZXKHxLKExFdTulGSXxFXbhSUQLz6GkQ2rS+rErW3EbM4xFdrNjifHp IKyuDLj2+qIuiyWKnDfuQpPCg71RDU2Ua0uDS93qyfoGX3e1UG9drjyejV3a4BqAId2DcHaNMdew DtKtCjHDlGV3bLBH0hvcaKQtC1lOx2DYTt/ESpZdtHUJeuH81F9YFKXLx5efFpLxmpBJhAJ2uqNI e4xMtedFrB4an/SaB8W6hY/8Yfcnik8mlwWZ3QgIL6ADjYlnxcgQbRQ/iSl6VXmb3WO0MgZBUL1V XRpV0IklI80B3ZSAd6zoX5cvV3xcKLbocDEOUEtFYRzbSlJPtvnGc7PNGmAOChq8Zr0oiadTED7S 9yqQBBLdWwoG/7BJ4AZ61C/6o82zRLtx9+a+TsvsHWjHIj7iy/NIb9NARwzz7KmjXOEz3bbLEZ3W 1X7Xb122Fbd5tYHpzQ/bZ7BeB+jhaPT81a+Yzhg6C4e4r+i9DgVCb7tT++FnSG+0HzIYK5alhJWq m4I2drYhXGZLig0pwYA5FBctN6ZcFB5ALLVToMKEZ1xfx9x1oztK+A3C1q+LlkKysBjBz/wkm1Zu 2hqZpUQYTsVho8k5BOBaQCHXXSywY6DbtTpwVUuZgJC6JJOM+DET7TWtkg/UdzsfJAXJQZnZ4/fD h7R2lBisSSGiQuezu9I4DoI30rCGbdKFg6JCfQ15WKqIu7zsnn32xRcvHw69HAA/MGgHjauiCSFR VYVK01eLF68+ffFlYuAk6XRNJgMHFE7wIbC+fPYXe2GBOtZ27S5oFgC3akgO4EOwvUjze/rZ53+d AMe1+mdHUaNgxQjLz2+Iy+MFqFdFVmJgU1yhOiwtL1LiCYoLTU3/6bkEnqNVi3f1RJ4FwRHD/eGM AVeoG0uAfEFR+MSAxGH07Bq3dXODW7OuSXEMs5u8MiBAh5Ag0k7f9Oa3zJoGQ+Dp7ZtG7tUH5bZm IGiBl0vpmbWlKpZGgQjJyeXejztPMe/uMijUojpk1ZpGb4DLAu6B0QBOcKPL8Ctw1s2ANO2anlnL DasdVg0amBUNbo5BigZZiqMR0yZqNGW1Q9oubYYuOISjp0/3w+0ty34kLKwW8AyU5pirAn6Wt6vE S9LTQ4cdcxYFD7YVp/ug0/CAFvBvor5jADt6gLIaiQY7I1Kz9vo3KCGJ0oqkT3ElNxSchbRqjx7H u2+HjMkxtTrO15vuXgKgt0guep+19OH+2as9ULa274v4d52114Mhu/BlMqApLBb5W22AoR3ctrmc yqmC51n5VB5z+b32AQZzOi3xmp7H7BjW0947Nc5TpPz2qcv3BjtsJSFPmI7nUsYGUHb7AOgxRx/a A9UwkCnojaYPxVL4SBnHpG9UJobHGHKbaIiiqiJ3QUjogRYjtLinqljyXHATCsvLQQJVS0T139Fo zUB71wE0HoMdcNwwVaRsf3WGoFvErFuw1el1/U6hGG0FpyAjX8Hqzhc03tY5lDZ4x1oYkg2xCuvQ 7DcJ3S4y3ymKGxsrQO2cOnRst6OMKpNIvt9eI2xqyd7RConFrgHB/smAVoMbAyuFZNMAMDjEoGkj ZFIRr7MaNT4Zdc1GCGJeJIKjZAyUs6pvgzFYgmTlsNvlNcgdyQcf/EKmIIUm62WHW+nJz09ORofZ XlR29OttV5TTZo2Yd9W38HU7d7qdb4fcORo2oKwB2Ycr8Lsw5WJpF3p2mHNw8gaNOXJirrn4hBg6 ekTMx+vVz8YgTFxvqxtKQ/Czpx88/cUvwqzoOr9bFVfikogg2GjCSQgw2HPP4NvTMoJqh+i3CBHj +mcUITik1FhqXNC4p4ZFZxjtdXY6DhOmKUfF+nsxu0UuFtwzYPaqhks0+NRK+wG1DGrTpOeWmLhK 2SQKq6VhmeLTGkNTYxi+6Br+gSigHCGOGmoUc5mpxu3kcJdKX6g3eZXEzUW8w+uSGdNpIKrHFuFc kuUw0eSSDsVlguJDiYq0RDHV9BRQ+qjAdrMCjT8BYNZwMK1V6TuLTpdl3eZ2kGgUsJnYMR70kE0Q poyN5O9ANRc3e6JlVy1QEv1lXYKcgSxb3Y3Mmqste+MTqHu8XFbUWwaArnJdO5uNvOFlsydtvc6f YJknXf0ke0JLBw/Z3YJ3dztkRArp3avg/TgViiZ4idX/seqiKH1wHcWmtk1+cD1VmVZJF9zkQJHo qxOufdJK2YZImaihTvQAJk63JuzrwEBg+V7c43GJJ/KMGZYCpev6gMaOtBOrV5Sf9+a2txHEdn0p RLFwAFSwTpgfFBUlgdax+9mNhH1jbm53bUYb9Aa6ubVz27l9cnF1UMqijkCe0RjOQ6w7HLGcFc22 PWArJh/djqwQlMzcc3CZxgNnOdy3eMq+9+4Cwx7rUeNNUEANl3IupMRCAmqyVB+wunp17kf6tt/h HVNnsQaCfLvzoapqSjaz4JVs8008ifp2dFgvAVVXq7ZO4+OjRDXTHiUIBv7o+W99khIejuHxjVJk VhFzXfzocV3MboyGDhQYbtG8EmOhWCxF/gmWFymAWKk+EuCBiQOWGfzIugpMSf8KzJIglxqwMeWy NYnTyFKg6aouB9qVe7uO/7pKEh2MJ8BXIYDEaIrjPhXKRWAawNnJeSArugEh0z4IxD0QYZB4TTc9 RJpVM8o+3FDTvt8bGpFeG+GlJX0KM4rhbii/IVVv2mDCDVrC4bKXEXkVHZ/OBrcThyfLmjff4zjM Gga7tw8kJv4ye8tZEci26OPS4bDDrSpiGWTAYcJBvrsXKBRKvwMihvcepKLgBuR222QMMPyC6Ffk M5O2w8sh6GRCQ85B3AydFmhAlO+nLKMYq8WoNzh2SJSPYeGDbDZVJ7bzU1TFt3jig1ersPvkIoUB 6KmIJH0hBirXrpCT4DnQoM04YG3pMwrb7GZM+Yo920zNsqNurCNQu5ZwtzP2QzCuEfj83O4LY5Sz vELvYzc9vWJvBaX0Q3cPSkcrDaTHp5OI/4VdKdSeUrQAWtU6K84HNmJrtLroUEEZoC74+HRghbnB 
Bin6S0Pmeqk68wNERBSrRPyaewfPu/T0R0hroHltK+30wMfrOeXe02aSaZ93e83GTwKczhTSAaqe 0Mbe7yIUmg0pXmpn1HdeEcDuCXmMRLYZ7XgNDXrSON/kwjOMQwWOnpUkJEWQispcgXJMx008wA4q KgvrmKx0njMH1n2IQwxA8oxJkXdYhqlJ1BnjBNuci0wB+nvo3AzLW4bCZV11yIgm6J3RFhdsXQPR ROX74n7DM8pq6BItSCa6Fra3wxhoX38pSt/PntVnTFYth0fPP/81HmyDhASP02HZy5aBdnjwEIo8 K5SXU/W6KFfbplQURLtAf48r6BRWnwRiLoOiTFTlNHSIpeOt6FKefydijhLMNHjtBL/t3qDQvLHD D0slB8bJ21arvCnvKakX2Yz5dDBAfyokEB4XU45I49fUFetd7VFOLbZWyE6LFXTCPlxq+xqkJmyr 4OZe6Fm5w2C6JiDEoFkci/P+iwe5VGFqN2g5XwuYSUTem/LtGFcUs6jthpKSr4bs2IjE+WnfgC2t WsbuAClo3yEprQWNaViekn5jiZ/OuY6XZ2JzT/aufOVga5eGzYcPy2STDp+3un6IB7k6No5GZbnF CM+9E8erfqBqcr7Kb+88tEjVcNkpYJg4dwqbCSZnFpTv9ao8HdhvCKjtv3en/RCTNKQ8WICDJBGa orsJtbMH3BDx6G66I7ZOlJCAXDrwlysxSm1DtFbLAz00JZmCe7wmbVpdWm3XG3WKiEnwLoqq5wK5 KZY3hi8VFfSJ+oYeDchL7I555uLbnebinYc13OoUOyh9u6TuPdyWu74xe+97AR1BzLn/Xl9XwmyJ evNFUgdBgRMtoTjseqQ6e5GXS3xweqhHPCWb/nnkxnSdb/py11dZlynB5tYXbKggFTET1DvJjS/I BoUggonObC+Ior247/I2QZDpIRZF47WAKTrbNqL64z3nWv1m8bLYUKsP7ychErHCHgNU6gGGBqlq N9jVuocTuvpypc1pebWsUa1MBs9P1k7Ey37UCrna7XbzUDpfMNsB0ur5jHGSVNUVzZpHjsMPF+Ey +9VPXcViir56KAXsjqU7Tp9PDnFi5OTqtKTDkWi1H+Gzr168eh3SxPAKKwrVq4ICNpDY9QQACidY 2XlOu2vcZZ4IUU8D0NBQUGawrgrlgEZSDLKQEAHvGXM40kjQ7UZ2mUPMIFyUvdIy1vSPhbERK2dP wyi50Ol/VY5jLegQltIpDuG+3ordBi+A+GdZZIekK7yxfwJWaXGTs/s2S52i1naMGDaQ7GS3nsUC 4AUsFlqAChNl4LzDkmTCdWzOtE/g2KiNMU71Ag51z9IkAh6+OKVfGlW2FRmdb5MHJYVXCspkh387 3QR09C9bhz2oI+UP0pNyd1favDMKz4SVn5AaDeUCSg87HCilmTdK2h0JTqz0Rg0psbInp4aKcYrj lhvHOtgYHquLuyk+tnxajWepBGZCUyBFRcCzO9Oyvke5ylEfRe45pK3rlvvRdfeJh1us2rNZ4MP9 SbceBjRBEyBD7sdZ0pz6xed/8/FnP0RrEnUQaSM17VpX6gKe3tb1Ocs3q2a3MMttucZEnb77sXvU XK76d18Out21wx3ItO7c9CrZLDao8VsW79vrYnlNRiTYpVpKtG7fZWqnQ5q/GYSyxzkNh5Zmdogh IluSgPhgE0TmmiD2ZYrlVGqSLNaxkVsg2i4AwbbUcNQIbClv1U1q1euXe3rttba539xcafQB/7uh UMAh1mWpIi/vu2u8YJItb7KrXB9iYOZrshmh1drdsRjslFzi5Yu2humE9AgaZ2Pk+XgIXW83sJWu WiGetkNfdU1CWaXvTk839z174+01bGzmvBDZGo3hmB2iMPyCK0h9aUKokIU6EjzR2S45BCEHxvgv KLIMiAZSxQuPjkMV0VaZROj0Mmkw5kGbc6yRvvu7i7agKuKIv3tMDHgtcJsPKTRFazCTCFhtbzo0 FZ3bKxKQYmuWUOCgng7dWtX4YwgPG4hi1wzkMIcMb6Coegjadzp/ed3zWjZLGxYsbqGgotlLzrUq yzFR4rJclNKjn0pgA3hzdnLuJb+WUFLShNz4VMXHwMp7obv4rKuiQKsnUMDZldB705xb7LzhvcFF wFkN3TvefGoBK54spXTk6BJ11eVXoNAUavcCVZ0UVnEFr9tjdRZLIDxdx7vfDd0gMWXvVe7+dUMe onvbMBjHhwZy1ETrbUscIKvUICiSDcFJv9u177AtdMcGTc6Z6uK2q8UfUktscN6xkL4zqOdebwcT UdmEen0uZW0RfP+ReDJDQ52OtwwJXePPo7SjeNvE3ROEySNNy5bT4+9QBAa1XbIuuVyC9ofFJdAO 7T1OndeYIZ5f0p0nmMvNtnuCzUJntxuaIFgjXKbdSUiW+hykHyNUfv7Fs89fe9ZstXBBuJVFyzgb T1zjxsBm0kfebBTmobzR6P0+DewvSg3GUQ3uMQ4V7LoU6XE7gW90177VC4q4XsqqjnFbseN6nJ0H DP3ZOm/t89z8rpvHsbrpfUAnFAT623PIGvCIadmHDj+hExL5a6udbsBnm+Fv6k3AWVhNKUCZjs2h 4kEu/o9QnFaiWbsE5X4iwtI6u8mR2crha/5Dzq5rxds1IqbDIZfanqO6VPRmHsSIVb1cLOLzHh07 R2PORAD1jwfPr7DhR9EtSHXkl07rGd1numvyymnx1RpzTg25FVFluldJU7VBi56aBoyW3NWwtJc3 pMgRdH/cdNQ6p4jE6lZ2gE/ii7PjD2bn2FYSw5iWlCxjc1+HPEYduFR35jvJ0UGSvLVCi/87DF+G ytahYP/8HDOIoFw30G0DXN+6yjdQx5mgsI3dndSf7p3UwaE/PR+FrGS8wNUdNik/eMhApUPX8jWx CgTJsj0K09sOE9/Oy/JmXzqTBh/ooUykep3RpbAliDL1OtI9X9Wo7LT5dlWLcjRwmcSJQ8qBsVFS UjgIsgBN2MFblLZw6g+QZ/HwW//W+QgG0+OriY7Yo3NDDN3/Hzh/ycv9MxA2mQcowBgQ71vsp2U6 f4em8w3aZhf1pmuHbAEYgJlDHZLjHwLZUsAXjA+D4SbYR0WdgU282zG5xBflm5ZyyYPC6CKfYmh9 7RjFJ3WoBvKClt6L6h0JUMqhXIK9mr60GFguLE1R99vthYbLl6hesjPTyxcvn9nO0O+QILINziml ynpnSb8aZ2cx44dd4d3HIJLRYwcwto3PyCJ+pukFGRu52lBD3rxYfozYFm62CJwjbS0pvM22QnNv 7l7mVQVus6Lzzr0Ch4kMvBc6gWY/eBqoe7P3PJC0lA6Z6knf6BDuCowvwBzNwIPdgXeHdMcRnNUh 13K9QlqcMpttcooP2NmazEOSE7ozNelfv6X3zuIkj26KLMFskjWrZoehLHMiK1zW22plG8nE7s7r wlXOLferlx+//kvXIZkUatKKuAe2vO7OXqdVG7UsYUWLlx7tWaxQddct2+Uy9GXIGtd4tswqsXbR 
CCZiEWt1IFunNJ4pzIA7FMQWYPIuMnTyQwgq2AXaDelsksPxhMaP4Zgx6BtWwfLL+ysoPKiDuW67 AQPVhknShKYMe+jtPSnTJ2Hf7V4sO9fqjRO6nVfviqauzmK02sbn6i7Hf4iDekYcs92iEkicE959 OLAZ0iSr/G2hawnD0XNofsidnHxgdZdf/S+vXj/79ZdffPE6Ph8KoDMsgQTvSe12fFDoO2vyKWwT SXz0ivr2JfTtKJ5YPRWb2m6OwFZYCmHBoM9DSMlWKOqcxTFuAc6MwcJ89tVrPWlChD0dceAa9g5S AMCGFGbhueJuJXE8SUdh69IANZBpfrVCcQEKMaQBdPcWzV1qNFEKT8cmIEwgSBCDYAYJ69BF55UX rjfby+5FsJfyySadHbRDDJqTdzKCnU4fH3/yybNXO67l2MvAjtwuyw93HvRwRlVsnXfXaJjlp6l7 W+26xlj0DQWmmQ0fmt55C/kvv/j1M2sJ71y6Xt0x1v30yxd/82zsrw98w6Tsjw26l4g7o9Vrb4TW GxnlI8X1H9F2mJUSlkxb/jAMBZK3H26Nl4ILQ2LTLeD1BjmGhzPvuIvhxCAR5yiTYsJ6fSrloCcR eRqKcayJDL5t2y36SmlPHttfMOxAaa05pWQJRBSd8DO5GjEYexgeEu1XrvMhPmVpBt+hc8wuOeal Jcc48SRBZs/bazrWPQA1eGoHeqfCQ71tSHIb2Ncl84MM3BFWucsDfgbykoJDTZ05TkOUGDaVa3wL rujW6hyFTYGvAijJew/x8tTGeXaTLzhkLbQhqxT2qSa/LO7moHzR+clx7E7IhFKMz9/fJdwCndws 8BSaNYHTnz/9xclJOiMtv7uto1V234amFXSSt1vbv4DdSVVc3SuaJbS/Z07QMtdold0V6+0aZDQ8 2kWlUGrjWU/bbtcsc/K9NK0kZpcImIc+7adozjcUBq2xOsdhZOzulXRaj31LoBPw8Bgrunulkn45 mMjwHZPvTk9OfDuK3o9znATuiZDTMhYgXKrYxduOA9+SHJKwsx5J5nIAxjhKnQ6TgFypQGZBGyoA unCSDAqcBwRNNDk4kovqDC/RKRjng+ESjevtDnlv5HopbjvGiKIkwYwQG1kUKOGJug/lZCzEDpk0 FVnbART/fITjf6CgMvNJF9qK8BLuaEgslLlXF4FCx9QMw5v64CEBFNx5suJ0CgP085cJ1vSEvEds ySdKAtYbZrjBqd0yKauBsc8qI5ISYTfS6uPT9IfyZA17repdQDzbA4AK5jvrPKvIcw0YDF082/L+ k12BEhnCtCaEueBz9gDTnKEirjs6SD5kzwebNrfobot7+A3foJLBONPFzncel4qyzqFxxhZlQbDH aDF9d3j4RqzeONnidEE98VSJ9f2GopZzSEIM0dvTiFVGXgV0EsXWdZfQuYAq6VyLIbrC1g661K9B yE2DgcrIcoHGF4y3haoVWqSP+FgGlvZNkkZt0W0lwTFdDlBeRxrZnPYnRNocUgwrEEIlzN1tIWyd CV3AAAuXpJchSEV7g6y/zXOSpa5zWJeOCAX/WlTYswYI/zkFT7wN2/H9XilaExcjGmNSTGEB3eay KwcAaUdQshY3dHezKkC4s3JQMcR0OnB3WlERWhJpxnZtLYdvRZoeJPrdAeF6H2Bz4AwjAPsK73k0 SYCkUmdlw6Tm5FZpCScjT4CZ/T62FDa6w7MP50ooio6pOwOqLwU1JSlimEkcpMZ3GM2yvFSe1nIv c7A0xrfUk3pYHVHmHVkXLRkXbdI9Pe5OMSrlME8c4uE0oe1NsXEETT4rR2j56jAFf5jcFMnxbXta e7zSUNCDja3V645jc1d1FJeX8cOnQFxhaYFwVMJdXQfl/qLOmhWln2q2m25PrIwdsGbEO6DrdDZO fjQOXibR33IUAfqGB+G7DSEjT8ihVBBWpR4WMNi8eBbY9oc3r559GZ/bLA4gbe8mEUYnLg+zdoRF qOH2Pv8YLSnYVijA5F6bqAU5FgE4Nvhom2Ukp6ZbsoqYjZAjaDfLsxn8UsFnjmM6toK/8FuBHkYj +q9XdHN15SRDl15/8SrQaYeZhiCKCJBAtyZREG4igCeRH0owkDwnDTTv6/RbJUz2NG5fR/ffq0y0 9vVo3e+ZZMeSMIEAi+NKApleZ+iUD8v5Cnd0OiKjwpc4YzQvfqBCB1WXMn+UQ+KQK6r7IhoSDXzP oIZhV17qqohlB4Qw5KwY1o2iXjHqq740SJlXbbdj6yhH3Kx6qWqpzNnJOR7+lJvrjLO8yUNOXheH r1tRSICR7YzFLl86Wsl4McZAS2kolDJnhJJ8MNg0btTp6O0fv/mXlJxMTi2Vj8rbP3mToAXgGnjk cZm/QxeC7cWxEjivYecuUQ5EPf/tf/PmjxBGUZvq/+2bf4PViwp9FGF7Q/XiOi83us6/fPPHiw1S Xje9rusbtJC+/e9e/18RpV+L8JF7iMiWUq4RbcrtVVFh6lI5JqSTeEz5N93ck1Qhh7mq5JRNKKNH 0fEP9QOwdGR+zkL9QwIfsWsrjnaRrVaEooQHs86q7EqHqoZhof2P1DAZLUgC2YovhqDJkyLmAAxE PWpzBCt6V2ToSoOxm7qaIDnQtXjJLbPrSEqXXJy+acexxPRHqA5dVZC/YJHjj8QYy/cU19kqj67K +oLMzNm7rChx+USiHpPkfj/FBp7IjOt2yGoHmzdRSNFGMniR+dF1IZPIcZTQtd4w8aBZmwTUlclX Yo9juUZ6zhec0NJBBTlrtL3hmfD75Mt7WVyJwXlCDYmSZOUAM/feQ21OL4umNbnvKBx3sIOw2KmP 3KbfOYmAKnhgHJj4qIIlRorq5rSHDCAWLsL5PS1Sw1M8eEWPjzkAk8ZwhQcFhTwVjHCKYSI2nCje ZoAGvvmGIU91U998QxDsFwAtAUEq/eab3XOG617QYaW8EcIgrsF+9mpKsLyHoSX7Kbw2lxRV1h8r 9SMbPpjlKC9vaBPvBPOaontu24o6h42Udb0Jzjhxqn0TroAPDo2t0s5A1GSQp9xFnlcO0SvXc1z0 whjVtHV0PAUN0WrjoyCqj7xkaNFsq/2dVMp4bpkrCjYPAdg+TMEfoi6RM4bDJ1Uqo8cKn3QmCkdQ bclXRclGkK9SZ2KsVoPz8sPuG1Z3iHB+tJ3DNBTApbp9wpSsewST1NXw3bviK/Vdcta19hOzKrlg DlWgC43q1CRSzIweD1O6QVsm5CmDwJTo6H8jvkLQ8yY/rptV3ugNhUADqR+TrDTtMRPTQyaRYeo7 oDe0rFRg6z6DlSD4AkJnk/FWkJ19AqOR4GW7qqPj4wJGxpSvYtxc1naXeL26GyVUQ/81Xj3IAuTA t3IMQ9BMgRNf02ihDUn/3OitbMlEa1hCcFAHUsRC25ZVEhm22A3ykSajMOHkLWrs0u7oeWQhUjXN Hdo/stAOdE0dypqGP8fbVtAXsm6F7sh/XN2TEIZ5skcc6ihfkeMqsWxXYqDNEJrEvbGlQze66yeS 
RuIE3U6oyxS0hDF56AN5hrRctdguYUQfOiXad/686qT2eTjdbnJlcqXqGavnCKhfl6aKi6Uir9pl S71J+JyDq9hXjRkXuHPG2shavJ86ow9biuBrEz9w1GqNDhbbdTlZ3frkNMj4eL6olxfdAFVbIuWt 14qfvyALxNdswflNXb/frrSLV/qWzMjm5HRF+J1OGxw7Tw6ySeoW0GElx1QJnKgaWA/Jcrm9Ql0k QiY36XsBdqBDXxzPkc6MWq/fpse32jszBqrvYO2fBz1OuIWOqvuPzhBk2qpJyFSmv3/gk2bKaQM2 r2zQz8YrXosyUWsiYWdreH/uiTikDusPI/ISwgrcIIHI8F8PYsNlDcmSoDd0qABlBsJosHw3Msi6 eF1GL7dNcUHxAAAyGuMBdNNKvk9pecxJN+fSHScAJmhCu4O0+0CKr5XH09BWX3yyEX8KIZNDaVUS QRQffowghSQvgY7IryBwsPh7jW1zwgSa6Wsm78SHesP9lThP1/oQc40uezrMq3Jr9joTbJ2csUsq rvve2t1Rmuore6J3v1+Gt8Fe4akKIKDKLFClxl2xxdxtRndnXYjX7g3rIBrE++B2ydTjrC+DVfZJ J6LWe2kU0Jy5Imqe8R62aNyVWINRhM3Z5Ba9B1X2avmxhsjAIEQ+sOIf2LDVq+2iWIs9v77BrpZ0 X31+y8cdPOl0ySWnCzIVobAUAAuO/gNLRHDAtjYBKpbYmk+wCegDOjSrGwS63L80uYM4C94gDp0P GNbuyphUjTvHTB598aeBWVBwxtkhRwZ3NfEdN+9QN/72JLtZ3k/IwnR/MpDY4HLTBS/RK+amn+L9 ymsZU54vSkJyPyD63N11xj4PpqhlJvmJfCgQndM3YOP6/U7yur3t1loupU7vzuBKKqsO0FXZPL27 TQ9zpWwJd1/zx8gypq9yY0l00QgVLo9YP45Fum6zHUp4HN1lImCvsXDYgks/KhMDxxPCv1yX4XTt uiHXVNASKiO49scZyzHSAl//J4/gmE4vJoD0gBCCtFzbY3GcPjwbeA7zcLcQ7VdqNilP2kxvDwAC l+Ddf3srRdoTFsUDGa6tTqdzp3MnQ1eC7BgsSczvyWx7dXVLhQFL7Qd6npBPM1c+ds5JcGQunF4H vmBEoLJZwe2GOPQaboQBh7YrD3uSQ0iYD4U44fnLkdrlK4jgDh5D1duxScga3lUuyU9Ig/TItgL4 IxaB7R0vJ2aCpcFS17v0mbl2KMT50qQVJKWVun2BnQK1jU0YbPH8NwSPZ2LwrqoE52y/c5cYHGOU ArnPTtwiCr90bgS5Ezz74IpPHiUfS+v11d2lqqrm6pJ/F3iCxGAXRzJ7oW7fGeJSG5A/OEIo6ho+ dUayrjt2CLpmYoqrbsv5SARrlAYwpa4DXxx6OaG6pnrBw4OmXzQ6RHvtrgacvBMVzqYOCUW6VuOH p02Kcc90PtyuZrCNa8YHVqAmM1iuucMqzYV3yJMyEi0Pi5ff+30bmDVaFwZXIRvEG/nT7EZgEm6Y O85HaZj0D4X1lxHuDuZsR4+SjJcM66uzgz0qAmeKxOLbZYuBRKCWgjo/btGyJ6WrZI82FD2V78gO RWXKlsx5ojo3TTbS2XMjlWRxlHQX6rb12Q11KcqBNw8aMJmUHxygL0CaHWDSn1sSAX0V5qJfZu/Z sbJpQUoG+rkzhwjOkoV/78yD61YywI+21GFAaWRqcu9litiZCYuPcrUYNXyqq4PvBuZmalm2zpQI J6y3tSOgsxtquYyDtHNO+Jj71y3OlYk972LPRvgZ6rt7shd7o4Ka/GrjuxZC7l+LkGT38CTJkQl1 h+VXpmb0z7dGXgMCKgaonBstNZDnhDkzXZQkpAOsACA5NNnHYo1ldBghCX+QKrvZzIbBbXve4k6I qrS+CKTVXLUjPsuob1E99o7DlorS3CPYAB52FDqFvlV13UcGMJEkvdZgFomcE5vNzRshuND0mTOn qnqBwX3pp51elSSiLhnaU9f70ZHKcNbZk/iMpt7jZHaQnMQM38Ecq5EAbpDQcWoWLdhE0emhNYP2 TQYFasDDRAAF+WQJOPZVs2liKR+gFt+YCW1X2HFBAME1oI001MspASby7KfNBgSwauYHvwaRSSQb 28CEjCDClZOpqJUEKNbCIrpfSsdMveO7a8oND9JC8xwDkHFrByHDxdwArfJ1T4IlK3wCNs5kK9Wd I0kkrI8eaJNMT2yUHB4inEYWb6AQR8JHiBKL7MCaMhQAPXfhcPaEnbRyrF3hZuLtzXG3arnJA/Pn fiBlWvvvT+gX2QKHyig88cyCOA4UGNBOSzEja1efvnkODLU4amR6V28C4csUxdF8KcJivVMME3wy DUTGOkn4uroE7/7aNNDCIw8B7RWFu2Jj17e0vy/op3JeNYczVwMD6YTFr9ZVTcDpMDugsTH7pwrj YLpQTDdb1k6q+I4IvLouzUG+xJjsDP3sGWO4q2h06cDLLHSwKWSYs3kBCliArwZjUMBvOy/RLWiN 8EilYxK4/UFH3r3DWXj3buicGUdYlmDpWHQlOWugOw1Yu5matjYsMSp3tXKHsHDLQFgOaqlmIyoD dBhyn3dBQwiDV80hFjOO8Ha9qgFzB2PXmUHynYbqZW8D8vIKLO6W2btq9g5rNzvUBQC4z7Vp3eI2 uFQDRQifmIXOfOISaGJoa9gHHMCRTVLeQDtnONnZogDk7XB+YKym/mGJRmsNyN48L0KRPD/OMhhG YONik/jjUEIYara/vcGQJay2Nv34WG046IgEkiZqjJBiUMG0bQk7yioNuy+xYINOlU6Z53Y2oBbv sKJzphUTOpJOjOkUamtRR7mM+W1at6DuP/OuTMzdNZay5YjZaI1iSVZUxV4nqllaOx1riA3Tw/FB ED4CKO4mimrbpnF7RzxVx5hV9hkDRGovDC5z6Vj2vja6oOgmI6qqM12gLQfYYx5/wUaJjlpkgzwl 3IT3nCx3OcHtiZXN5gqDwyG8ieNiwFWkPfbBcXZC564Q7psuXQ3/tWxPwWcDgEGhcFgpI0aEXaol C+vHOs3IdkFb2XA47Lu2HIP63PAkZMWGN0BJs5ruFQtoQcrh6d07h6NW1+8pTrtro469yaGMA92t GSl2ah1n3RCYy6XR2m+CHMdpUffOedomA9UiFDF5KRuUErLtRootapPWIggudVlmP3mwVilth/tK EX6dujAZaM7KugpiS2eKfc1AawBkyC6iuRKcBhKahC0REaX//O9Dccl5mKlad/qZMSCA8EhnKBkj fBnBdXUbRTuFmX0sO42n9eTFSnsgzJWe/98Iborn651Sq+H5ShYmI7fx/kqbh10qvmrMU8cF4Gkt TRCVZaoJ8s2r27YLS3Yb9XCYMThpAYD+o1H2ePgwub21tVKqa2tq2lULDXZJ0iaI84Ro4S7PjWxx UFucryfvZto9tKV94PqYaBwt+5xlRVJ5fUZLfV/UnW0N3FPJf9NkTmNBWV/Vlm61WWBD3+Rb0L16 
yQLyp85DoJ+vrqDO3DsI9luHn8AQE61sRUM0w1RPK7xYZnEd7/0Ue91HJ4KL2LK+G4tElGgSum81 tw0wYkSd0MvMkjSeM2TQnBXvYc2jBqhNItZPK5ut9lN0MAOKstKioLqU8y4bJv6GQoi2RDQHXtrt uadDrDnee6tlS+tcsz3JNrUBtIFwJZKecplnygSEv1EoJv4OcPz2LA+nKYBEtQga9CM8p9Ohiz/S uQf0bouIJ7uNwRaAAVmc2bmRwOfbBYwBHIPpVCcOiziqhfD9B1SdOxUvMWRLorImPLBRTnOwzUar 2xFywNG7iYaA/db880xghN6lXLCm67JgoOHC6idAphu6Ul4++fZ5DjIhnK5Te01sYEFNG2ReOX/4 m79tRctIWqTwwarwi7nng50lyUV2uVNqCGYUsuEBdyytmF6Ws4m7uWJ/JefEIxf4Rk4px13rvIkC DwA8BMo4BW5GAj7hzTNYPTl14oi/e4cVGen9T2zmd++kVvOaDiTwEusGMX85M7+kZvPC4bxH+PVe UZEKgJVBTU3Hi2Z73gBPXm6s/kAVJO28Rn3OuiSyQUYqHQupU+oeuapNvtKc3tGKwwj3H6t625gl Ryd7V45XDH4EGl/WR1aDY0vEdlCBbflh4CA/m3xyaLotynrv3klJ794NzC+kTXqkWVbHGFycaxwL NLAwo1EtZ6jWYWUaY1lli2peTm+ni1L2mpamySyOMhTmYOUhxDsc7KQs8xlKydVGwYTY37kWPSdS MRHpEZEjwSePFmzTq5Kh5M03Ttdl8d4cyb5W+kSTAloowQGUUD5w/kjYXIIwwYa7ldP3iwrgNFQz 2m9wzXyiWYNJdCrtOQtvKv+qvE3cUVLW8BratsYVRxCN4e5mLT93GqbQLY+Z0uoH8ERIK3Nm5aK1 Cymnmdw0py1Cu6ElV599Eu5FMxCKNNA7bU7n7ajWMC/AnQI29xrsJUrczyIdB3A4609vRapOhK+h z1m0j7JOEHQMcal8JNTKaNy3iiVeELfnII2rDXIGEcs6AQQ1J60Y7t3iDwSA71LwRqKfuXL4vHpV oE7ZcItgNA3XkAZj6WZzXiJKr3etP6sNXwZpAlnbbCZe2VbdHXMUzQP2IBeGotRhq8W70W8T7vSh PWWEpL9HolwS+wCU2sjJ0lpQ0j+MPS1F+hYKCiH8FrHdYP3SfQl9FINVKttPtM0SmBLeO81+tjWH 8Y+lUsTrmCrp7qbGBcWfBSqqQKNbzuL28LQ5jciOKoL4WvViRmqSkEnvUqlwWe1E1UJNGKpEXfNy 2wQ/P0D1mROkW6pkrzXUhRTwtmcEprYaEuSQ3fkRnuEVwrnLgu7uWCwJzR269LpSWBxIF+JZ5QYZ UdgIst18juqBO3azeyPiFt34I02tyUSoTQ8UfzxomFqVOjvCmm6X75foQUu7FamnqXq2N4dnsK7X 97NulxOvIf7t71Fw/Wo/OSZuuLATgov5xl43hfffEFO2uSxn+pozspTbs+s6UtOWarv6EJPutF6w NrlVsGo/D6U8RrzqA/s6fj1W7eH6x/zXM7nbY2bhX5Lg1sqGFWFDPU/NiCe2SSIulE4EEUZbgtI+ 0BbRfk/3sVg3DInWDqCoHSSD7aRFwtPGou4WTOU6a0UsVJus7Wg7WqFNkkZS3uGGqukl6ZLKQcbU fcoIow3zjjSvt8tZ+rqL7+JkUxv4Zd43hX6/DAIi0Z393QaoBGCbCePZpumnywCZqyc4NS6gU3a6 wduRzeVZD1sIsX1A1QIQ9sNuJwGXpW+QTAVsMZiKw6NCfjnlC9AcIWpdr4vVhNeftgrT73M5oU10 BRguiCuOP+SB4T+3Q1Y6X2lJMBQXzMIGKgHXjlESQ1AlgAiL+p4nEZU5zhllMFt5KtAXfdxc04eq Hop76O/XFflt4zGtXJ8DgJHYNbmw613+xJogf71HK30Hl2BnU2vW4HGBNJxqghMEvuA7eULID3bk Pev8PK4QcYRTr7tnvqY5XRYNBTeafgwnKHsGsNjJ0dzRQEMIpAlf8Bkw53wDJpokvgDPe7q/skrA rg8BHw0jYEZCLTNVDeJi26NY+gVG3nba3VUlHWQX67JsC8I5w37LyXQyMb8nE9AvdbupDpvPLW1T Tr4mUbtf72c4EO8fGa+guw1uB6YVrGqz0w5Dj4M+2AGyKz7FYhLtCMKZRBJFsKm2H5sQB2MZsZgQ H92ZN9t42oJzr+/EfFPoYQIwHQyAILTEYlPnXrtsQ8LPmvlbt+h8vpTN+P4j03kXiYW3EgqshhsW KK3R0QNf5c9vVnhdZdEPOA6w+XPv/bUKI4yXG3RXhKreggBOzs0IvX/gVKmgk+DafhOV3RFPPNoi i4yRAlyAeLLPvzZfWRN8PDfbMdABO+w9Hv4ZEHhxXn80ixDuRURL4tkpgRN20REsZwi9xaqA0chp mL/66itShvBY/G25rp9VEAreD7BMd+ZD+PPowUPK/wrRHdGs0MVUsN6EEHGxIHvrwhwSjs7LIz4k EIxz2Iq2BgwkPVTs1tSXHoYotO0rKq9OtAoEqPNqswYLRdtACi4ndkRhc1DdmJtdQyjt0YMbPRIH tn0OcVx3Nvrgcm7Gh3T/CRDBegYQhA0HaoDjrDmlEUYnsxS6qeV/qwXpZNFYER2cZofPTneeP+x3 9/fRGiRNLKzMZTWjMLYOBp+1Pv5lJs/Aa2zDUyLmeNWqEFfqAo3jWCM4incIhLgwgRsnr22Vm5JB VseS1uwMF4S67vg7Mb4ezifIPSPLHPPWeuFd+2H9BOiUOQhVx5cvJP37RbfrlQL0pTeo6MQu9TGA RbUIQdAU8wZCmHD/pUKIZ1oAoLepuR9gVsK9xqyaLXsn2VVpFpxNDYoPaatSxuLaLBZfB8WQnV3V ENKKGfj61nAMVDXrRccK1ogGkpqVXajD/pbMUOdMM+0jizjouEeAI7jaJ375BsJ/AMbkjm6nZs8b EyXcoNiC4A/UABGxfVk0wIuC4sC9wsgr71WDV3Cu6N1t8rvru02/Z7ZU7Q6ErkAkp/bIjr9PDt6D 7D0HjNWGtM+On2UvX51kr58cv3netVae/hKOwpYFVoHxEg+MA6MEpgfpHdeh1lgwat/pczIBg/iE 8iXIZV3SluW1SZycujSqI5fhVXmjentvs1LVtrJJfXG2ggARSbPjnZOR6ppzlDfNCNtfNc32XIFs WrSw25UYPibYcefD3779lxDMelFfGCny4sPpyb/4HzCqdue8aKqpYSMXF2IqShczsFsBxMMMzTcA MxfQXtYPALgZwE0MI7wsr0AwAn5jJNdFyRdIZhSefHc8ynIjC4Et7HYDgLyGk9ZZA/WAk+jt1/2O YsGghPkGP6XvRKCLY9N+aP6bk2ev3p4MWi4dz7cXhyTk+LVj3yUUcqGCqGtk00UNC/26Xi+0UTgk 4cyJVOkuYev52bJdtEhuOTwc3I2gH16k8wEGNbdOWg2gZu1d52JkQgwlXHvWrnNsE/oJNCKDtq/b 
gO68RZUqzPSqWOVmlxm4krRynDQHbWVA/IMM1XHdERfnNbjvqXZNLa2aXTJHpsrAAlhu8bAHfRuR 4DteDv6AwqnEA4cwk25XDiwKFvMaiK9lNmeck4b19rg0Zp5DEoQEo9XWY243EPtIMIxB8aSxMSub zazeKkAN87tcrwl6DADGNtNh9rZBrb/pDd3xwdkAAg0dPRo+ctbq+MAkg/FG6Algpmq6yWbvGYJ5 1ugCMuKPLZ9AO+f99MZPV6DQWbN1EmzmaThMl6+H6oHQ9ENRKkLi2AysTQg98Sa7SNthOVDr0nze TzIWN+CJ9z5dq83jpz38NoJZhVBldreBq4jMXwyTFMrEhCULDzzGH/PepAdiDXxIGbj5LvNhlENp kSjkxVYuaJJZaaTJ8iZG+6IyaAyX10+EFsUPumsUK1f29IChwVq9Bp0smEcygbN/mRn9dW0WF5iD CRnnTd+7ImP5NDGNiOBlcwUDH/ha7fCzArkWcwvr9otiBm3Fub+i999iE1rYexs7b1urcB23ERCz FlYZlaHPV4ye4QpB9/9Nubchw+miLNZ5f18yPrhRoT56lp4Bx2faTBrtCGCQELF+uzA8cmnzKmtA Ooqwxx4ArmDyqxoiCWyrDSo1bJGGxV4XC3DAtOgHF3i3EvljnmOIA/SNZL9HN93WMCfR1iLFiAiK CvpnWH4+N2sBkiKgrGk0cwxmUv2kzQ0i2sHSXxdLQ4Cg07fjOMgeDrKjR58awTiawlNr1jqqzs5S wY3TpjuiTPMCo+0mTIi63OMO9waWP8t3H/GlhX4GdvQDSjJZNBlR6HJ4qSbRv2S/w3CI1Q/EeGCb alrw11z1vu4kudUxQjBtodG+1/dtNxIbquUxOzZVW+wk2i2x2MB2WpExNDFpKK1QCsH+UttKs6YM zNcgOwp6tkgfWdfOgOKsFi47iZwYz2fC5lRSmeHBTaN3ANhiUmIHG3B2IWevl4HTg8I9sxHWjhbV e+qkrTa4wlAMG6JhuU7sYZh2tZ3R1SO+JA18uBzwhtiJtcB+OEnmBsNJoHoPJUE0y0lV3sNEPbd9 grqOZNMhjiVI/VjbfbzjwebEkpNpsLfH5XJ9rdZq6ypNljjck9WWz/uX2rlay9NJO94eyjmZRaUL cIkJsPApNwXgCsU5lyPi+ocOOG5amoBb+BUp+/F+CMmpL65LOtRECNQR4zAL/c8d4TsGFitpGegY UWn08oHs9apc9kKxG12Zxtm8VWxzVLhHaIMhiG2YoPyAyoyoSXTWAtJFmQCpZ7FtLpM25VQsfnfh dL4Dj63dM4NXVTAOFCoW29XQSQ9y75wbCzJMN94Ufj6hFZiVi+K2nE3InYGTZefb+Rxja0dh69lq BwuFozs+JEZRgOdgNvlnOJm2EpPOPodWpH77kqh+pgXzyo8t5V4GsugVada9XsDY9ooeEGHv2sWJ mIPOcGUNldyIQhn/dISZ8GYYZF2ssNvfMx6fRdzOdtdOyn6ahh6T5inYCHxidiTMXB0Yf8Dozat2 Rg+VPH/9+tMqART4g3cTWp1vbkH3uL8GtGDHtNmsKBHQepezPKDKgKaUdNj+udl+TKuFXVYyznr1 u8nxyxev/DlWqeTxl6dIRjijbvOf3KueAhpABWZAQbadwBc4GnSff/v89e+yJ988f32SPX19fJKZ 2cx+/+T1y+OXvwNV9/HT5xn0K3v2/Ldvf9cVOZQaSsWAAZ7pPVht4ItOdJQQVQDN4oCSObQqrwP8 lSWmPbh+6AHw4fu3/0qbuX04O/nuJWrHDZkAQIUot8mKFzyg2ANuta6nZmxHCJNnzdYGgltpRrpe MdyC1cu6J0gyEAM7eV03qLgdQK0de/FAgUIiUxBptcSo7A9NKg473wFs5wrNPhoVkwr1xWie0Xn+ h+OTyau/MqU+pOeT529O3rx4cvzN82fm5SN6efzyxEzp2+9O8OVj9fIlzPrrV6/N6y9oMTOoRTGb EZwHgG80VgqjH2CkiyHczfEHnQ1K0+YGtOjmb0kGVqYL4DRGeyOGWicQIvRNqJddt+/Bvce4i868 9kg3znvDe3Ct9vSv38CfyaxYT8FfuPejebj8qce0cSdoEI0dVsw14C1o17aGrRhw5vFSfwGLiV1Z hS5I36oLoDbCkMOxXrXztHvv3gPCyx9ubjY6j13GLsXqFgbJ/L6HeCTwmw7Md6g7F+t6uyLAuIaE SXyTdwnGZAG5DWEuBcXJrHCGyKLJEndozDVUk9g7uoHBOzoCekIdhPlZYNZxtzETVk4gNr3qWFo4 gVvkcdcW0o0SgEUQJWCAgM3iFq6fSO/B0CbrjEynyZu9y3coiUYfXRU3kLRHYaA/Futxd7m9iqv1 umJ6QfNlDi3YI2gzl6R6+HBX48n2nlptahTDIgRspogVQ7ksap00NnKFSXNgZt4MeX2VGA/Lxa0C lWyZJOiSyqBGhDpBIYspASHnD9jhW6HuXgVj7zVndcsu+qkWBLUZTm/9RFfrEqGYFKpQ0QBbAWur lWGEcGEybBmG7tFRdbGkOZR6SSTsKhIA9hi1gfIRmgJjySkIufzKTHl1xBBR/eGuaVjOiVf33EDL K0U+Pvq6bRsmSk3Goi5maCvMK78Hur2FYTwUDZUYUwVAeNV6P23h/SJZHqvquhQJlfYwuoO8EF5B ibfrYjcVwoXxprxa2c7Li7DrbV32eg6ZM8hdo7mY2x78sHQAhwZd1jsQgjcYerHnbqHmMYKe5YDV Pz5GNAkbZmVsnwZ4zB5ToPE3DEL/RoPQyzEeyuKix/zXyd66QTKApW9iLygAAc6l2QMo6i3YjW2q 247TbqkARZaTamMqnYDZFu7l2Jikgf6srtSJ9c17sy1swHhSiQog6FwAKrwI3UwkYxkd6ZVnrw/t A33GFjSGLGxYSGJUdKDoEUl7SafJWR2PoZO2VYGPwnI04jc3DdXweWt4AV3cY+XBWm0iDwUtZLIw 9hZk7UDORN1gx9dqn9fFenYMXG+9Xak5PDxIe6KD77ncSSUFS9SrscQXVFJ12zwpmU81+mc00Z9L cyI1YtTE2tnawFw+/R7cUhFE94dYe4OoOc9vLMXbGsJjZBdOAiC/j7Jpsb243GRvVmZjqLd4luRC fm3P2vacHTWURC32VcPNlOSX0QG9UwK5Rugm2vzKEOfoAGInb7+Q2gfBXYtUPY5b00/W/Wj/gt0u 00vW3tOENXk8fHo1A96jPYN8JWra3Uh7GO1wKoLTnVNo+6xuUwNiFl5/mR5X1oHTnfCoGSxMU2wh tCQXmFR31ZOYFydZpFlQIgsfKIEe4zz+PhOW7Y9Zuy+VLiVRnbgoOG8rf69R8mR0U2yt5XBIMUyO a0mL85ZCyUqNxESmKUcPTnK09ILAcOl412Kkz1XqzklSWS6MHplhTo9oyUBajRWJjnYgJUqq3uRR BjOHdnc4x1ZSRnjTOE8wctUgQQ+/wZnRHgcmVsClaKXJkryfZsTZM4wDj5r5SvmdccF2rbvUak5U 
wUO0Iprlp75G4qbvDGFdEWfeEKKUDR5tqjjR0/2lmerv1vXNbTuceJMYYofqjV9Bc0sIZnvRxA+y ggECjGI3a4AHAgT1leIExLRYTOSefgKpmjxwZUn5HCrAGFOb5BcHRNXRfsqI2bV2uCIIVAaRSFhH C5hf3E4ichJDBaAuV+OCbhlwp562DnG6OVJzubFxFjtUJhQy4KtQ/1zUdbZ0Cmj1rpHPwYOCxS10 g2PrIRW+jenpJXg9RbcyeLKgBHhTbw6cL1FPhgjDpT77bdYlQ5LY40EmJrjgxATIZ2bUFrN1afYH DC5GUCzs92nEghkcWXdokRPxujyk/JYoW3cAAkoHsrNQKoQzwEhNQfR2FRaG/P50eS69CxqJHRh2 kkFfwsLDsmLoxJ0LEpgVhxXS+IiqQD0xl3jFP0MAYZMLTC2SMSWdF68qX3svhy1HBZ6CVxTQTanP g6P0LXl8HmTXAEdr6dGXKBSNi0wkq1+5HcPrFXDE9LoPDXN/tBM8wr2K7bm+lZgCwYLu0gfm+4jC kUiE7znNsUB3xsnkE6d84UIwhClfOPAITFkl2/aisi07JpyIqEbzuhtEzSKjVGcjH3FzCrtwEZkX hsinEPcmycZCA380pZoUq8ow12XefTx8CGq7LUU1wOVzt/EDFpvPMWilY2qT1a1s+FfFe8LDAHtx BFoinDrqZaoYtUaQ98Qg6Q7B9RNsTE0XPIQLseS0TtNR9BB4bQPVdFi3TRsT8V1DsnRfjO1swUKF b9UsbNZuPCjK8ynBnXQ+QxwOtAbHf5YnN9obNXycbpdVOXEeruN+1h2NumJh7jjx3oBfidCfFOgK drB+clAoSlVypGQWVcwv+wpONN93Uohs6bBiuwKC/TPEyFIxmakN/X1B3fA20IW6FmS6nakmV+VV Xf1Q6tgqgsvlYt/0te5EPsPF56S8wYtPeadg18gZOWRULnu4i4BNMWdqNSvWPuHrEg9BOWdC10x5 fqSeH5/tCGgnjUkTnt90m3bXKm5UwJ488uLT7TdbdHmTONUFeQK7ajus7tIYnKgmATBMbICuB55L 6e+ovEl2HSKNx9K3h/kF573pZVElwd0lUjCbB4PgGsprTbZd4Z0AVNxJBCci4+NsXdcUWiWQdT3T AIx6DSU509zrS9isH8UgX4sw3C8AeAk70BRpdgHgFvItlSQG1QA8f7Mv5A8HNmP/gEijcWhlGDsH Th1zkNOboYV4vLEoS25W+meeobk+lLUw/kOPcl6xhDdG1GM2ai2+EIgc71DBvMhHMXnVHtT0KSzO L5KfhoFgLxIDfVXcb7XeLktrpmCXCmmaW3kmihq8iIys42cDY5PbOB6YRPb11U7z7WKB1QfbOZbQ XdTLi+6OWLwSBN7rQ6QnJ5dqttoVvTLEFzS9AKMEMLaHlTgrNgUFoa/l9JfrUthaJKs2X3u9gram jYTSfd6cUxaE6rus15tuAm2cUkiCQ2LySg5/0Hjmg25bVBhEONkbIw2QnsjvfJzoj/u6vyCcV/yX 14smJdP6kLREr2QP77kSk4CjulO9LJTGith8rLfRn3zgKYEP2VxuYU4hUgz71JkdCuKjOdWBVQbr tpCVeMqjGtFNlmzwEF1Ho3ZgU4CuqJBjAFt4BaFdpL2tGwmUYENsSW9Ro9rwta9sKv0IEgnPw8FZ I43dSrrWl/XmWAypyhlb+heGlM3C2+gznJ7BVoZCfeA4O+h5YRvCWYeh1WX6RgjpUc9IwBpNLkNW kheVpclzh9nqTaIhAxak/c4jvhdwPF6s/UC63DGNFlybwHQErh/xeKcEqovxQDmEmiKkvL8jEoQn 0/YmVtnRcxAwqJbmxcwFel62B24MiQjJpCMJD9mkUVH7ZeB4xBWpubLv/BDXOql9Hk63pLoftypX 4dyl8qrMCSg0XcnSr4W14lhZYN/XDynP74dfFrv5uLBdbxyPs0/9vWr0T1Y7Wu2Wr/jnITPb3M3N TYZGK8gl8M4GqRCUJKvbr8NQ5pRxCCpZP6D5PNsnGK7LhSuAgKlYpGyZQJOmFbfOfGuLhQ7YkQsw JqmbYVOuBln3gcZuoFDgb5T5CEUDtyOuFbwyyIEJQesFxj4NA4sIQJBac5g8jnU1LqRhGGpNicJR ZgMAXyhFWu0IMbVwMEBJZoYjmhgunJ92j6A9Bi2oy8rVRQ1ru/5emL+9i+VIdGjSTlsiJQZ1YSgA FBlBYAQBsugCUgWrY5c+ip7iQeoS/vpVBUYC5+YBjMVWizJZYuMLBLzfCTDpjkOErFdam7LLx+oy izH3SQqzRRj08gCVmeQREkoCrJLkq8gJw/1lH7Zgmd4AkkCETtwaO2EC5laTCfhZ8c193p2oojFT d5D9+FN/v1eqXWcUpEA6Y/buT/RCtQXtW0eqihiQMG5FfIcZDHpugW8l1yMC13I1PT7r9xMHHlXI IhXtVA7N8s3epX0Lx9dIZEVZDGS8GbAF65XMEZewFPRCkA94+vbWpViRxaud1626n48sp0JP3Opi CQBXGBnSZlOhhbXUM5kQEDiC+vRYtdT0DMki8urtF62XdvH1M24Db8QFINgCApRYf98NfTthDw52 iQBjnl3v2mMMrMsLI6CV+npjbHHdwXEFYyAHSJ5EHcpGydrnuTs1axUBvqeeXhm/o3zi7Avw5xBU Teh+HZsya907uwNYc3ttpZDwGNDoMJ7xi1j3pfbJvQYhJolvDeLliesUexQj3RAD5BGnH34D6N2Q xxbd1MhCXWBvRV9KCY0wbF2xB9npWdI7TM/WfW39iIbF1gBUU4oz+hC79khQkoxWq+/V89VYErSE OfOoBGy/2SEV7eLvzqxBPN7cJ8/8UaUa2UwmmoYJRdH0XPy/7L3pkhvZlSZY/8YM1jPVNt0/ZuaX J6JYAEjAk0tqKVQisyiSkmhFMWlclNIEoyAE4BEBBQAH4UAsKqkeYN5hXmCecs56d3cgmKmq7p5J kxhw97sv5557lu/Y2cIb33xr+Bm2n4mPWjVNkbC6bP0lBiqOdM7XySozHRdnrFkMrXbmwV1LgTWW QYXX8OznxYq4B96wvmFKMLv4qv7U57u9gUcP6mUUOa3MXxVpCzakn+gSQ2B3Lq3qp+AakLB54Zu4 U0FtyQAaNdVHxo0ulVTMNCzevZrWjrbT92EQGKZKS+gE8cqdAA/I0hLFbjuot82rQZBcEjmYbs4p VL2/t5Vibhmg3gP/jpcuR4SK3EdNks22ivHDKQT8KotjGWp6WUHoWwCJcOmkWA23EgXrpofGxGhf hvGsKaXHL1lTiTxF9I3AKloJa22ylxnvVFrCWOh3XRm1VUs+/jOCP3snc+BOpp5MZj7jSA4UfAwl UTYqUTKt4j8j1e2uSr7DIpIGhpSnqGv3Nj0NW0AlxgIr/0CMbNrxFN5yYASMZfBxxTDkVB5W3Yt8 j3VLJa8fMFo5iwf2iemVcunf45P4dr/OUdFQJEblTAyZCM7G1DmsP3v4siZGjhKWk/vRxXJ6SYOA 
gNLUCO7O2AxpWyVOmk0YZJ5jBaGtQACTx7sosY2U2rC2gc5dcaZqE62py1JPbuovUZ6OzD1GaqDa KS5rELBHblhyu4jzHWmIKuDM5zO+ScCulcOfTsCqTOeSKwh7Di0YBttI89GerIigSdMkVkkXr3Nf ZwzXrl5N5Lo9O1//O90Uk8vWobntQRYdYM6oa0TeJjItF1e+j2F8rYc9X/wGFxa4lk4vuzCCoxDs yYnVSyX0s/Z8BYzlfEaLlsMc4XjFugNls6gGxLDeIqjEiGXL2c0wu5F6UYIDFfcTet3pSGXqdCXo Z6dnrIyC1QBkOGCUmlcswSsELFzTso1JlAyGM2TS9FZ97XQhJtOnbk1r+OJWJVqlLYoWgwHJiFhc z8DQY2Ij0Z6BjHDY5sTwoEuIex4mTOFHrrG2y/vFfRIa2o35t8SZOzaDNPLk2Hxku5pmWh2fMyLW XN8bmx+px9y34Exwww6G1+Fo6N21tqK1hFX3am0g/Nv6washNfjGFbN2BlIeIHZW4DCBMbmCHUOO wjI7Nz6uiwJDB9TquKxYlDI55SjpnbzTO0FDulv+EIXBwUG9EeAJTHwAZt3ZjGBh2BccZUPz5TpH oD+RFxmRfqhkkMPsJYE51EgMk2Hr0kp5tOmaNVvGcHPz6aKsQhD5KLwrdub48QnK7LFDb/75V+Pn L9++ePb+u7e/j0sOBxxmHbvdhSHonRzQeM0P6U/CBQHv3BuZHk+pk8yxezLXaCaLDAcnYgXaCYrc SPA9xc282gaoexI0kqNj9vYYl7AXe4ofE/FEsIyDw1XuSVKXxrEBTq6X0MLoVcdoStpfArvG6pLo xHY0AswVSjkwdKdVINQTcu5Qi94hRiapvtPG5IhpNMBEs8gnP3MuBYcYm7il1Gatu4FQcLQsHmyE YGdPq1isA59buiNwyQVnb99Ip+Mj12WqbbZ2nOVOV3gqxLEyNs2pLRWRJEYZonBSwt6eBtooj4Nv MGYBZAcOjdt+5wsq1T3KHqZkpqoWCBp+PHwUm0Gwober0zx4GmQdGy2CQMxRsuTVUlO2EhpdL6iE MrKu9nkFNz9loTnDo6FLxPD2qFMYRl6Ta6bWH/NM1vpkRVE9ZImjdrImQGO6o4nmmBCJHqZZLWJq OtaK1yqrkfHV42juc4ggJm4JC2PiApqFMXE5e678F5OK1xfpMXzuJ2LI98kEyJZVzd5jKOy62XDl B85aP7456dsllrg9pvvgRZ3wrRMnU+Y7nlesrWa4FvJOa3d7bVaTky7NmLnVLTS3TvKKLvB80eHp 4QA8EgNeHRCsoiZIqHbSsd+Ng1L/WENXuygbRHTWBlurdkX9K5VskxbN97Bw6a6mbEuq1uft97gD WKqxv2Jn53gn8G0Q3++/ov4Pt8/P2CPzUOmefwe3M8wlNIQYTMmu7rLgPv3Lh/+CMVhEMJdPlzOE U/o0fp//TxyLpeUixQkQnD5C+ySjAyVHdi72iwYQfIOQkf3szcs3L8SLl6vqwl9XZ88rf7eaU3yq crfFWC3lmYaFRvBeyEHo/whmDVwUavK1/WJ/SzyAVJC/0IDSv2RNIb6UeEDzMw+BjrV6uRrQOEH3 MAgaVnJFwr5V1ik2m47j/yvWr3qyU2bkCwfSARoUp6pcXdrJNdYOFt/kgFIWfL0yVYprclasYFyg mi91hBiHujK4k+fGEEWTdnvc/NMCXZTh/kjVEj2WWuhS0s86H97/cvDzjm8NpC0bOc3MaTJx5mA5 YMCZhMU1tA8uIdVkgaGQUAifMstmJM6RWzKsj76ECQnf8w7EeCI4uAQ/KasWBhXHg2GK9f4Ag4Et QI0cCkWRE/86e4KeHojbdfsEhbEwWlkwkn3+/BhHpl5nqWDJOsgS8i05+kfZHxksDAMx8mkGR+B8 exfDppr6BMrUvEV4WZ5FkxvRT0fayS6Nno5bIndYj+PtREOu5VAUF6ccBKnZX45BktGca6BMzpTR 5wChKAv2b5dT9TP9S8tQV4Xnzg8v1ZonLASIHm3Q3LWvTpjUmMrI9zpdpW2wsULK/SAXgb2KjoIA zPgfnYp4ks1j4Ke+RPYL/g0CdtE0wb/+a14FNCC2p01Bh9rBiA3RRsJRrjk96ZsG9U0bBOuTSL9P SuFSXTDwJ3FqGEY8OH14Tjot75Fit3ktaplNGXzIfQuquIJOTS72GcY80gLZmF1/Z/YtoL5BEUXI 1j9wPDOx2Po0eT/53/7mb/SoxLhXpsGbAikT8c/6U+t6DW1w9j8bY3ISGPNPO9hQhS+Iij/j7Vre ihMIOjbDc0Y2NpXCMGgGC2nwVoOPhNmTThSeoTwXRYEAtUXjcadXY+DJqXM3bbdXj0ftFI6GV7Al EmWHIRtUxYiuB2aQetlyV5FN7MQ0I2X2K63S/tvBbTk2gHEyA+vhQa6ERKX4lLKbgtckfPkUf9oU S2YvRNqCJURVJAMUBma1fia8s6QyzK397NJha9z7Q/ZNKHnhjEll4pyRErSXx/OTtG7S7ea8TkaN hSUm/t22XL9kjyc0D/XHBsbsfHsxvpivtnuHyOm03bFoqjDCfxv2KXzt0kI49cKJGOZev0XCl5ta ebx3u/Z0Jk7TFmhMB/82NW2x+Lym4Ta8OVB94+lVTPNQ80mUjn/UNFKI5ni6XNOptsZYNEUx09DE +GZSmThCsLUdv3kq2PQPj+f1yISakV+yVUcPne5DwVKhp9Th2Gr1Gg5E4oBcSdsrbZMU3b3pZ7e9 OgkZDZfT8+4NObzfRrYvaYH13eppKp9cqjHeSLdgZ7gecDbyixaJPLhL5aThWm9HqF5HZAcf/o1a Q65cZsW6U6P5agtf5LggupKu19BMSdnz7WFpnYQJ5bWftnGVqOu2Dh0OY3fsjCuM5SLSQS3cnYMY VHBz/FOxwqhNo+BFzU7i+3exNTfvIBfwZa7JSatYYcwBBhY1v51W/GJSFW4EXe+5pg1hHpvelvsr QgCHMxNBKyGN91xTLp+6XspaZ9X36Atq2U6cJ5xD8QroU/zXSvxkgMigAAjfGRD1rbXf4ZgFdElU CFyNEP/sDV/tHuc/yQqMzIazWl5DPfLFFCJ3UYbd8nzQo9EIGFjTiarTajHEBLpaRE4HfQcFtJ/9 pliWm1thWL3igXNI3I/Rgrr7BAPSDwWar5hmaLKFZybHeoBV/I/0fjzCfxWYj5eZOiooqA5GlaXQ btfQRgc6GCNHsGco93yMjlMSTYs0k45KvtRA0H0j+Ei44LuwM5j0FKVeATGEDxyeOZ8VfIGNL7iJ IGFUXhzfjEtDLSr8iphHeOfcsuYVtad7E9+xnIputNluRur93nwUf95Dm5ovkY/pJoBeAgwUTUGM Neah8N0M4eKViAkJWPouZbbVO6qdLJOm4Y7lYZ5UebgcyfrmDNJhaHk1Ul+U0yqxZChJ+gQXr1bc HtRzjG7fDWymOPsoY6dXjWUfOkU5KRhKIJDoLDLPE5jWH2VLtksKpKotA7fWAELY8fam3YBjU5W7 
DfnBnKEweuLiRyWt2yFZaMAxLQXvjAy6qDw4WrBmogeEyL8FagJEdb4NdP5EN7rTUuaHp8aZQhP9 DrdUCkOJVgNtyLZGmUF/G3usCyEa6w0c4aIqS5eI2DBJsYKsBAmyD4YOSuQl/clnkUbIG5mfn7Hp afPeectL43+EbT9f0r7/j9/1db6gTWSgdxcJan15+JOKjFvMp1/3PjtiJFBSkdNYwmVxOeHdzGoS 9MR6cilHJqFTI8uXB8HgcO92so4X1xjekm6DKwqFBJiDv5BpLKV2BoHcLjofV36R8Lq2SM7iFomp XcJAC7yDf51CSNXq5qLvFGLPSqU9Tvk6abWEBA1hCdlg6BgPVBesl3Od9JrFPGrg69wNOUwH+gKg Y4HvyzLZjklNGOquTa1xS4Uv0JwJdZ8T9czV44bfqH/+Z6c5/m3ayQdzEpx1YyaEcvDYs04fDl2h xOTVLNDx2AA7XMxnzIt6IybnKDq2Jk8sOjoPOUonBH5Jn2tOUz4Za2tSL2SvnODM1eO1thBTQNgA HKTH3oDrWDdy0g9drYIC200xM6zXfrY9dSZp71gHsS+phI6TC8Whgd4E0jiHo7DzML/72tPYFlGt OrlauiZrhuiAQqltyFOju6mfHdXPipwPRwQf8d37riGWoza2Aamr3RQl/gjFdSuMQTET9apd5o6V VdUKzXMD46a6A2o85uINpoaDGo1TNCun3uTss8a1cSwSMIBm3u4bRLBPpx/+VmOubYopwr1+mr7/ v/4TB4tjKDBkbPAD6jcVvInjgikcbqHKkLrQcBT/rfSjIQjgFgrxqd7uBoXklYNqbfQJ30vtb6k9 hUW2YlW06LnpPl4VCsgqUvT72R/+gKcM6nfP4U7LFO4PfxgaockEuiL986Ola5bcFCRR3DE3/TRY Uzo+xiTzXVFkF9vtevjllzCHVc4Rn/Jyc/7lYn6K4X++1Dz5xXapwRnQndLCF6DtgDRM2jIvAryN OgryuP8z99LC0+C2kq+iixmj7iDlMu2RV8eOfaD5VlHsQIHqMcHHfYx4xLAifwYsJTTOTdSCtxDT EIdHpOWAbpN0w5j/qUA/cqdorvUa1g+UEK6RripzE6Vgllwfe34EjmLaHHdJEv5rx2If032iM8yC N3/RONTeW+IdgRtMMoUE38JkhVa2Qlz94Q+YK+Ik//AHiTU0Pz/HOZxkz6UyWAsyIP5ykdFfKni2 YIZ6C2NhLUBhWvDTuLhZL+ZTkrQpC+yVhMGU3XQdey3x3jfwwgs19PQZrrAFUQluO/c0L2jVZzcm bkM8tnk4bt5zTXpJFgeQ2pAEN7kGWskr+F3bsr89wsY6AAlHfEPxsh3OkzAv8JQWO6xWYcfvbbLZ fKbWS7MdkPh4SaMFAe2inrd3NyifFosNoQIzyVFroSEwg31D693A1Gh4tCrFCziwktQItCMtIrCv kOLofs8/A1QsG4Faf/oJuHJSZuCP+KN80h6H9K+my72aNVJDC5waq61vGU7Ks4vyWtJ3Dx5KT4vm VaC7Lpi9GFDj8Gk7ADiJTVwWs3Fzb9K2l+keUtUpNypz80x5BX8tcSEf5z/93EZGDQqsh4IS7Kkv u975VkMX/Nxueiv3KNcW53ckkxhcKiGNw3BvZMK1rr6QB0cDc2aNDX0tCDFO/eyaAvSoCqpr1lTC w7PScCfd69yOXAwiHKr2aYli5+a9O9+9HvYzJG7GNQhbyw7FdB2yzRUNiOGjGPdoc+vu3qM6nu7I 47BK4JA1ey6sayLZeCw/Ne14bFJbASu9CCiIaTUzcccOkLRyVxHWvE9kwuWUWKOtT7MP/1nvKDug ZPjjU/H+9L/yJWU2r6ZonnTLAcskhFmJBoGzgfDaWVsztgUWmIB4+LoShK8Obi3r2anHECYCWUwt TiJvPSuE1mrlQsZ7iCw9ux391jEmjCZ1JGKQZXgEC+i0nN0iDgAbgmK8iSvYOJo1fw//PJuIKMij d/MKE6FxjN0AdOmO8vYODOwYBYE06tRhrKT0Nei0D7Hi5LX4A3zQxnhgk6OpNeqXg89Lq6b/+MWR oNTgVmAQ2mJjjz5vIF7RR1fP4VzvOSfOo1b92iCw574Owlraa1KNFeP3LHD0TsZzwKttGLCLprAD yT+sqdudRBgFyldrbIBfuyH8pzusCv7GbertjyhR307M8Rxy/LWbalpm1kk0+rJWfumrE8ZWhEJ2 PYfEuIpA88K6GgFUa4NR5GMZs3JWhvaK3tBKuvmShCodOU7wbS9XvLSPsTcz93852VxyKrhdTyrE nHLLzbH2vYFGLLrZFJFMR16AGCjFBjAJQZtYG+53R8vpZ4ls0SDZ1B1q25gFQJ1adDlMnLtJbW8P WNtN9Wuew5oQpE62gsT7uJxk2Wnm+lAEk9lMlrAC902uY5jqI6D6GG4UGLEtzNF8snCZLVz/kBuO yWvUKs5QxAApYHGztUaxKK9difG13TQG8c+8REGmfep4DaoXkO6LyJsqpJnJTjL/yZcqBZFjwYGo JvGcE+f3fqoZvvGXdRNsv/7u/Yth9nLl2AFZQ6e3Cv4ukVnbtX5bbWB51ovJLUOPMkb/8OPq46qd boNsdNribVGXLXrkJ4c9G5HqKM6q9utOdjsH/awJtjRSudUUPtzfXoqDPIRT/nIFW6Vu8BoGa+ON K4wTs93O+mweiv1sUH1P/Tja/ggOU2NyYBBqCXUVwwd3xnabHZ/09EJtlqi16Z1JxAKfrNRQC67P oSzhspcif+kFOfiRCn13OV/HJdJR1WQCIINeYW5J3UrPCyapm5eokb5exWvoi5s1ObXuH4U7tJ9P 5oowVClLXS9umpbXXbrxYVVIR96xHmz/4BsmX6csmbEuA8KpHnDEGfUCg6I2syBd9piN4JMODe/j 84Z5c74wjHzeGL1BAx0YcDAyk88DDTbfEcdje0uEbvyTy7PBiiHpSSLONd1NxWMXr6h90iTYiNeO QRB/DhnWwEYSkeOVsgSBlqDc3BIuL2WIwKaGa5RHMBJbR2keI9UpL0J2H6GuEZiejbk+u28dqT6n 6s0VkOxVnCu73+tdcKE/NqWInk/wgDu+q4oQBr0FoCEZrDGJ7qJvc2Hj/WFGnBuzuMdzjTtTjanZ 4nTuXVetxccUNpgbeWZMjlxi94HPCKPkPp6GZiDOeJn89cYUdjd6nGFDYLp6NwWnVmxlc6WyBTnl iNChuqa9vUPb2dUinMHSgYkLCeY0HGU7wOkiEguAaQbN2X7+1dgROHshZ318CJ6VtNOsb0PQMfG7 P3QdzmCSRgbhG2ZwlpBnWZeDP5XrIid0+zOEWxcBHF3cTYylyib39+ycKYdkevlW2mCN/m0RiZu5 SS+HILRT2aZHvdansw9/SxgHhDZVLpfl6tP5+23O+AaO+DBAN3ACdeGQFRuhIOMZ4dSPS1TAYSc4 8HCHBdvAtXUW89Ul/p3NN/iHjONqpRAB/p/cugV/yw22AqVFvrp1UaxcKe6i+Ixss3KbzOnFjnVC 
5OR0y63QJ4Lw7FxztTQYqBrVbc7DvClwM3jP+Fr1TSG73pHvGsZz8hn9134lm147ClH9bEuMvbtb Qe6QEKiY4wyMIYUOKoyT+tkFlfGwAjSxXwSsfW5YUxkqWqHVnZssfknjAgms6nr62eV14JvIMjY5 q9AelCGQI7Q4wrtLAiqJGNPz76ulxAmZZwoZ6ABjY13gkPd4+OQE10UHVnun5nTU9ifBlBplHI3N Pn4yPEmfmQd2IWLeE1Le9FkeWukmi26vSkQCnzJ5zSZXcF6RST3OPd3xs64XMRnW0gHaYMQhlds3 hoacXJPNOba3B2/HsAanJfCj2TdhxFingC7huqGomjmPXvYvMk1NIFsJp9iDeKPTElhirgjqoScW k/+gaq30R7EuXn/34vV7FJXYF++fv3xbg0msmhJmsBKnXm1/NOfldX2zySqcrkiJWM13GloHR482 GRwUnjHG5zUOF8APbKDnEC3xitAH8S25IEaeiywF5nRvYK2jF6UFFFC1e3Ux2WBUIz26OGoSzhce 2IwXwpicnNWq2pWNgXHSny5XMptf7YuBTiuBhI/k+0aJesa+d1dQERjxlAvD+LK3Ty5rTtXAnMCc oQiMSQbqeATVx4OEiTm9rdbFtNvRrJ2eQijaAznTaGRdfedb6JoTVyx1ex7DVddY+fQZbZWctqla lNNSeeU3VF4m2rkGHr9pZPE7h6ejigT8g5p8SIvd4m2z3bdu2933fgfcL4leJKAosPEEuYeGsW67 s+58hREr1cwXucaD+gKl2S6wI5hpOTz6DYYX6VXhhOxp9gxyzNINPDVtbe2MBrmmd7ixiplvaos+ xog9cT5H52Lqu/FvqYls63R9VVybZT/qwBjR3k3bPjJzOZmpWR2cmaPOphP1aMIIjBu17CZHN/ad 4xjukELnJd4WKWPnr9HW+UlvmIal7+zwDvWhkzznKQU2NU2d8Qva9yEroBjGN1Ca6+FkbFbQi3Lp Af3VmfyfHeY/adwnvfElLDQ1rvIjGKQHWA3UKaNFkuOxRaSFDbJhdPL08UtLANcM6tumWJZoYGKy otcXQupNphdUajRFePZNA34QIe5siHcegc7mQycZdEQSK8D5x1WnCYfBn4O40CaxCbeEB/VAYUli YtAYJEV9utb0u+e4d8NOznar9Xx6udBxtYPS80Yz7NtpZ//6MvwZc8ZidS6qTK41xxb3s7O7r0Fc Cyqmh+1fhLEK8TsH0iZqsy0lWbRI5LW9kCbxmCyv+fL1b5++6nKu2BSzLXFRqHqJxQF1TxB20NLO EiXZWFejozNPBR2H0sVY4aKt+t3zF78dojJd3Qinm7KqBrPiaj4tcHckgqBMy/VtVLILYoRD7N5z Ocj2uD7oOfaTaaiv6+RzQuYiyWBg+q5et10jD8ZVl1Pqvne1xnrpM+JK4lATxIuRZSH1kVORnFW8 Ln4PJxKCyZlTqO+yuQppSaWy8EWYyWtcDESkggJl1FXENmwlvStHj9jkgSK1Gh/sKNVDk4pkS+y/ 2lXcRQLooFbFt2KKJpM5lcCLVByrS68l0frgNYb9xnRH8tvv0+/LHVn/IUsyP7u1wXn1IgzTR8GX yhWMHs/NBNn9YUzLkgEOvGgvfWl1D5vDLc7WwAZwaOIyHM7ESRBelC6vgab9q8o7s2H26C9J7kMv GV2xnDLSHliMdRIpiSwSItwxG+UsMOVqvkQPP+JBGR6aPMjIiXDQkbI6zoLjxSYfYLwnm8l0m1h2 95WBkELx1na7vQiB0L4NkiGnJtGYTdlehuOq+HQSZDAp+a7to78df8E5wgwanRPTmwwvzebj7jEo bSW7fCBouSWhL4qz5A5xbd3oBLCsESVWg79QQcalDinDJOvc72AyidfKGF3IdzhLELM5wz73G4bA swgFLg1Mtk8Tuw2jCO4GBvwc+7b1VkMjR/zL1xQToNh0dZF1A9WyK1uV6BV7qPX1xXx6IThCmIVM ecytUCmhPapKXp7EzHekik7etPkcXagJqMHB1lRE0NuHg3hvMxRyKK3WaFfMyZApjRTtiNcgrROP nYwXFlFQJicRhWU1j8eDRxTSRSD/gsC2TrYHNo21bNNQwn5xowOTPvZqdjvEUemNdLoXXkvghrBF AzHCj7+er548buNgqWQVFUQd8idUs1cW7IZyXQ5ET5R4BaWRMYPU3nMl+yHxTmXSUW9wbOCSjwkX 0mYYnviu75LMrd9N3fqskhUstu3IeJzw4ryVZkW1PXQfTdxdhJyJ8tZe+7ow68B9EOUhJE/DIfbw FVaIHuAX5IIZc2h0C3baSdNImXARa6wxyN8KT1Y6t0/RZh8IyBKv69oVzJ7eyHG8Ww5uo5XWTiws h+luE/Ihp451LmszuzTAKcQKTEvMrTSbXaIhP0Gar2blddWwqhLlYq2P3RYwyTydVBHO/mLGThMz dkxPJ5OhoCLjcVhJLfQ5J6F9V7d1L3sQAI2mJewrL164cY6lrYZjewJn/SqG9l+kZ8Z4lUqK4J5K VwLtMzRSDTVTm4sTfwaCjZ0ZRwToqBgV6ZINIWv2HQbWGbDnlsoXEEeUgnDgCawbah7E5FgvdpXl 8Pkull72Ktoa+duXRh/fOJ7pF8ilB7oY8ukfaSm+JMKULL9sVLwINIRKMQmHB4S5dM19JdvnAURa REfrw0LbNS1iD+SFnJZZCRH5kzAQv1FWn7Og42NDh5JwW5uCr0FuSemZkqH2dKQkfutnt8hj/kmM N3NeZT1puz7G7nk3eP7Wi2ywvtRc37RSCZ1b9GQ2q9VQOMO3Kq5d/obHrUMZOmjaaDhWwzceJD+l N/r0wFWAOE2cLteHNBFxR0Wv3R3AFQ3W2oNHvbz53HCAXOUk3zA/yNMhj58DikVFKlPUz5y+IZCj 6l7tvGMtTqcX29o+13XC7UD2td+Dz6SJ1Hgoy7betpHjuXITMaqrYgJP5RfHzBw52jmK3SpCVQre miKn5BRWEeXkqObkVuFJ9o0DkiOnMFIMvPR18bJqrkYYEknQ5Hr9KL6Ye8syLAmJeem0L7hB2Hix Jpc7PpGFrr3EQgMSo1jMeq3g9fTApgrWBdznNuWiAope4AT4djwShKzC03FKF8dUM/0W8LxQI1aO rwDrSKlO+MSJZtd4qVW4DTgRCELYPxFKE3Ncj7Aq657eajP6NJMWbpUiPIs9XDg2p2c4OyTzogmY ThCmekIHymx7IShExWSDfC3c4FCsz/Wm4nogxIhkyrPn/G6oSN8+iCOGS3Irpjew0vCoJlKHTtQo 2bdS0wXM/CJN+42a5be4Q+C2CFNNG0P3BO4D3gE9DEyUEvTGcX8xOQ2/ciSVFOGZf+Gb6Owwjlnw kfCT2CCyk7z/VAzgTEUfwAuaDK53FhBzJz5xA81W+SafvpviDFXpcpBgKYSjpmfOpGISV6uaU9o3 GnkEi1e2TEYt5kXDJNUh41KWGMn2jDwLrWgEnuvihFF9cQlyDUAAHxXleYUYWF2dWSrHmVjRdk1r 
ixUrexyuoW8iFU+zkw1xgELADsbq8F8LjRnJWPofve08gmEOwBjWW9p/jG7PMhCGGkO0byd2eM8L 4JY2/Ywd6GRXsyUouemjEWh0wjvdSJ2OLZeem3ubHRwos7JaKupQ93jNEkCGZNhG9wDX1sqLx05j 0JWTg86dEjfLtLvu9U7SAb3dMY6VwKz/wXZgO9OWSms3dlqXczSGTV+30tm1+9Llmqi0coRr7/Qd dLEuKPo6iur7H9JpIS9muw8bjJIDMbxygXT0jzzprHC+tJ3TK9vmc4txx0RlxhjPmiOd0x0fqOMg uBiqwYlrLttAECQ50lgW9RxxVqW/HKqyulp9G3JW0mKUdj/ItEn802u9VdqielZooLGdNUG5aTxb rU8XH/53hcKYKHQSbEdC+vw0f/9/f0926W/5RWaSZE/fvcfTRjGVVqieJB2mQulULnAf3pzUBJ+R E5Ch0VeQb1XqA0ZZ2JbAtpkXy7X+XE42cPlcWAt5A6yx3eymWwdmQ3+in0TVCuMBRj1WU//dFsgx uhB9z+Ioko1XGVsq4gIno40qu7+a39hvaLSYtwKhrSvZROFtWxbgm6fvfz1+9t1v3nz3GsocQ24M MM5BslalGEU6SJn702P1FEfhNiesj+mEdDM8iVu007lFFG/06YBf+HE8pubqEQid72ft82I73k7O TTt///7Fu/fj909/JbHd5XsXRV3tAX9uu3jXDreEOCft9e36duya1rR9AEE8/ihRu2XFwqHY+4+T q0k7zsYh7NopHBBJMV07Sa4IlCM09Yn72b5XDe5V8I90D01+scA+lkDRSvCvRm1G1zN47lOdrdab 3z8bv/jdeywmh07hME3bOCzj8aw43Z2Px9TUrF22Ke37py9fUWJM6jQDH6ikVuvti+/fvnz/Yvz6 xfevXr5+8S7RieMhKxi6j/vZz/i8S5k0Pelnjw1DZ0DSeF/DDeXXZXkZGXcyKGiGsTtFJC6EoRJC wMace3HBREhaVRx+w5PsMPYZO72hHvcvHi7EWHKZSFX0FBZ9tgKCxjwDfdc47QhCvpp327wSxmSQ Ggb2ts2SXx7i0EwiXwQQHduLBIi/V1zSEl1Ic9D/qrKxqLgFVkXEcVAqjipJfRrLyOMnP6HG2XWa TfdP1p4q+rTvP+DG7843bB4F67GfObDIKPby4oIPHllh6Tr2Z+Dj04NcMUGSjXojOEXXxsHIjTqO JdVZ9SeNsM5mjP+PV3qhWu4kak+EIUhZp9fi4NbNn7TzbNYcXuhsFkUvoF6sWTw/PX58EhaJ37gP b35PB8DLVy+eJ80OfTrOgRDGeNCNidq3a9iws5WMUZSje7a6izslFXS2Oh66K8OQdOjHF6Yf7777 8PbZixRW3XMKx42YDEBrJlu2M5o7VqVNs5Aw4sM2qQaT9B5rFBnyQkdRMRm04GLvwdgjrUaa7div reCwVMe3Wy6GMKC9sTnKXlbc0okgGAON+Ta+VsD+xavrfEt6gbNVMCPAbxSMvsUaRAIfm07gUr9b ZCQ7Py1YosN8Cp3mVC1HAZqsguIEwBVaNb2dLoo8Fd0vSY7rtxZr6Q2TzzS31uXCDJ/hPOGhwWnB o2BqU2FGVL1vYG83RIKPlSfNy7ZuO9daaiYOjKa7f3OXutcolCGbqzle0FHsPWNg0HR3j7L3ZNVB 8MMGPTxbwOlcZYv5ZeGuQRS/6GkNLHXO8accOesRCYqWZYVc9Tn6+fp2IhyidEimqCxTkvgqWqgr OjxSfqDPlF7zWBaUc+e4zicLaBqRGT+NU5qeXhKvfMLjhE28nVIZJmjmNcpEtc0UK4tDOpVnTnFw dKr8W3lk6Z9hk9WcCJuOv5FlhkErs8lVOZ+1vJ01vbzNcFax3JnaxV2jydecDYjIlAmuOeU1CbdX V5PNfLLaDnEC3WZNaElAVSRRXlxPbpGOIBzNotiyp+F8xn3+bi2hJNHsCAODyQi4U7Atl3NI+ua7 dy9/16nkOWPLUyy1ILJxAd28zYNgYiOmU8A3UsgcejlGU3oTlIahGfDugJKAgLTa3W58RdrOZaPt CVqo8AMOc6hheYkCJlNt8sT+7l3NaV1ECAHAuOd810wBAuBpK3eqF797+e59mmgcZS/mJHHFSXb6 6Mi3JwsUptyKsWXWDYXs7sIkXShhk6DoZr6FeTuFY+YS1sXpLSkpVgMccFRW5NnLVVZf2IIu9RkD qlwXncXC6CuIbMus4nJqHeRVaU5wGpr0PbRukL5beZ49tKiBzq5gPJDusf2XGbI+0a/FbU1hevhB pzZCEP40X3NIkmQWXdR1vo/BdD999uzFuxrUCZeIkzMEWf2Zlju0OtwCNSfVvrbVnlYeFhSvOsV7 NnWrBApYsScncpHEa6YDQzMNN6/ZXn2nVPfYeV1u5xoZgYMKnrlTgaMy8Eeln73sLLPzUhRpJJB1 CkReYuKQPzmd1C4a6Hq53mKEmDzP/WBfY6wMl7Adb/QfnXoEBlImrwzBXCrZr+UttEbeN4wQhEX0 7YBHFh51dXPH32zK08npAjf2u1s4K244lrgcGVvPZuuAS0eCkpLzI2L1jM25SsMVNpxGDSa87B3K wTDIsEyaJ1ty/aP9UXQv98e4rE4oYBtVnlJD+R4w8d3bYSClFL06Kw4uAgSF6/dldj2vLuDPtNwt ZtkfdxXHQaA7CVUkMeRmxGv3s9Mdhm1D8gJ0HS5XbvBNpEYomRCXhMUtUWUJu/kkf/ygL/cBKP+a 6jst6MzF4qflBneIQ+iOkMx7bbArAPoUgALpEGLDV8W1DpDf4egwhVS56Q6OP7qdRxjxutc3S2R+ tRdq64CrGnuXJ0rmNUFlu/PK5CaXgHe5DbjHuRhqLlQeDSNsp6j/rXqqqEspysIRe8wBSM2QW6+l IQqF/b6YXqwI+eaW+LoZ3Uz1esZ/Vckqyx/Ye9Sxdp/1eCX00e5+WrQMp41iP+Qp0WGaVgVHf6W1 BLsIFoGMk9zw8uzX5XVBwkUynuooHDdwt9tFIYBpGfoF0cZG3vVldgFLkiKdAvXUu+SEo8eiGb0U ES97+Kw8+zLPuhhqRo2+JMyysx+hxPKqyHnmllTTCL2jus6A5vReYeC8FWljRBIFal+ftj106Jff BfwcHNIHcHJ4pGPCwxiVIz5xLsh8kgxEVjjTaC45yZj9w157ojQY8m3K6uWojg2EKXG/QH0wzRSH 1uPqWvsd980iD0ZSArKpdJ4ucl2H/poUrB+BQ3962W1/vWj3eeKcpKJayWe75Zq2x9m6JhJIEIjT QxV4+xoF2h83H1ftnELawsGx254Nfg5zzJ8SHzQEWOqctcLo9zCk29J1SGVO6v7Z6r7rocoLFy6O LM02ftje0Nngoyv2kmxvgmX4YnU135QrXP/BegwPZGR29QDQqMoiLsfLKXI0j/OfiQiU7YTIWgku 8nCjhS49yR/1zapCfeAG7tIZQyaKqkl8aEUaBms9VAwkuiZXbnU0fvu6n71G3c3reDi2mwJzTDAa 
AdYuW9kbEIdhcUF/XxUiqsP4C7s1+8EoRwNDqyhaeY2weqIEiCquEX2EQ65rRaQOXWx/gtR48WEx Da4oa6fYdqJGN3fyJd7LbChuI55khedc3KA5xPTuXHofSUdQamTYTuYyXsrhQmJPNHrOzjh4GK31 bbFY7B036d/BIycO0bLrmlhFyyOGmxCJhMMD0naD1/ch/f08CCS2Tx8q8k/Vt24meEeE6VjsKnS7 mBDuJ5aOZwaDitBeUq8tOo9Z1uGUtxHEh6pkfk8EYSsxkCKiFlwgA/ZA+t9LqBihuXhg4M62ngv9 jL1AVOAyr4xE5jkOV0CXnMLEs5ikgiIQnLIlF7m1kaCKfd2oPKTGmexqFO3Zux0kHTNPjP8+EM0i 2XMR+ud6PnNPh/lZss9SShhjvFIfZk0gw6PUW+5lKia0JcB6eMNb51aot5XCQTPF646djd0VJQ57 b9Wx2FttuLhMTEXX1BwmxQ9Q1sh8bPYxH+H+CchmyGlH3JDPCBm+drKdOJGvfx6ZUNWfPnVXwiMG 0ZGofyw5hJ/oGw10ZYaKROJVvSWPyFKolsP29FBL83PIE7X0ePjViapwHDYjTCo8xm7lcBmU/avh CYKcYBHMdezvDVFuZUkIDeFs3WvwRcTFS6Ye+TNYHehs2IsZvglScYKEhLWHgexc8UJdSyzB3McM 8U4IjybgHt34k0Z2nhlPQYl2Wwl2iKGfgZIeoVjzzY5AQbCycTU5Q0ReiTYyL3N9UWPjkqNxixq6 GNx5ICMr9u2H+8s4fi37Wz4gFhjcsJ3vQPfm1bjc+MEM212kPF36QqSLHBflH3oW8Be3JDIs6bWl RjLgwu4QPPum6JZrcu5HkN8K0XDWGOYITYvHNjyKhG4iC19Mof4icKqcF7TUoZReLyynyeoR4TtF Pr4pqmijptGxJVMYn3gWyYWszgqj9cAE5W6f0zrlKRxoMMGjOD327nh+YocleEAnNd/qUcqq1V3r 8qdk7gzjuMEq3K0mm1ugCEhX/1WC2G7z11DWEPH0tmh10DfvXzKyIHz6N+/9h3e7U3w78N8+nc3w 7QN42/pLq3U6X5XrqK5fzLffbTDZn52s8PJ3Jb39F//t0xUV+ffO21fvLuZn1Kivv3ZevzWvv/nG ea1tcl5p451Xv0Ejenh333n3fH6Fr750Xv1yUZYbfe9++E1JtdxzXr34hG9GI+cVDDO//cJ9+4r7 4r15Qa/cVL/irnlvKNU3bqo3wJZiN9x+vKzw1dyb10pmnNeRN+P0euW3mt8yZgBNbUvtfVDFzpGM gI7bGHjTcjEuz86qwrFOegc3D/Lh0DxutFu08d1tKopkYSgqU7L5zb7CZW+0OUGboi8jQzI24rBA 8U1f6+IrYkm2ikNLsznoDqMPnqHw9GK+oLhvOKx4QozpzRgLqKiTwSlInac0yd63TJraAXJJACZq 1ZiUwWnFcz0rxHugN2y5gPsW18t3b/jlHBld9P1MnY/3If197/pPIqsEUAGepBjYbBgJONnlCG8x Zd5kng7XvhU3Ae91qoFWA4wJQz1s4dYrei6RE+G1bnKGmrjJypXzYtBm8i6vRDJ0ttsSGrYWaRsD zBsC9aJrKg4iPXYdUar8xUB7yJGPx8DJ/tP6dqzv271kbElbVrvW5LfNRRE6/mTT7jle+ySPGJtu hMcbXI8932azHR76gRURD1I4nXh+kCGNKiIv+BCDn0jizXpDFo+xPtdPngtyOlHr7SaFXgDVabQB SpxXKSVxW8w0n3/3+v1YZDC0oyF7nSr0vV0bqEuYzSsUyc5SQoG8wU4lhf6DA/xgRJZ10IBeNgh8 0GvmLYHNulCQ2eRAs1XcL4GtFDU1jBJ5cWXfZA/De4AMFqaRbgPj37brPWWOZhYLF+0T0Hr7Hlxz 0P9HQVhys3O43d1jWvUnStBGMV0bPUwqa0lRiXmJyPKuPHH1M7SEj6EZQ/i/aGawAe61jO3djcVs yyXz1FIoJ/Q+p2/xqUB86EzUXL3ITYNFHmfkXeGeCvQmeSCkHLMKPh2qOnMt9Gf24vaGzQAG3JxM Nn4qFdxrxLh1L3R8SBHHRQO3B/AWthnw+BWZjfNI543poRM5YXlKkDT2yKVaGyzM6rXzbrmCycCF Naau7/O793s6bFbQAdVVAhjDpzqvEmhoL20Z0Y1XgzYpSWt9A07mu0TUBrRnw+ZyaDGOpyQFGrVY S7WFRbsl7Pk2SzcTq+zVrVx3vHg9Oj7ZaKR1ukhB1v6K1EK47jWRx3EcZR/I19ZFzmIBDDqMsGJg PsOYeFAnO2mSVRn5DV94ZF8MOug05yFQCd5KAWlzbcOYEEgM5LHRt+t301NfPazMG+p+bYQlmKTz lQk1sg58Tn+Fktj7+P4+DgTa0LoDoE5Nbu1hbDHLx2mzZG9jvUSsMd6p+CfxOY0hZnq9E2lQ1INE Hoyw2u156LgU1662Y8/IV96TnqBIl1BJzdqthfK9KBYYm7OtWdtSha1fUrjeA/djYF5qxUQSG9Ne CY/tVg6rQsbadN7l1vxBMGwfbH3JYaALulIS2um1BaA8ndsZbGxmF0vj46FLHcGocC5OsdN5YUfT 9hskDTMKO1V+i7/CfUx83+u6pEj23zK+e/tgR8AtsJ81rCFHRDVGRLhlejWpKe+GvZruvm/dekxh 1bH5eUJhfNabCO3qHgnbbP0oRavaLiB3daECPELCvUmHOks3gJ1ywr08vdRt3JC150VwTzWB5W/j YCh9rB6pzmd5jL8Nfkv4XNf2xeby/Fgui1vDNcINoQvPPWJm4AfuSIX2wXRd51YkfUJjFlmVz+En Zq9kl2hOulUgPo0X/3kpuX4xX33HclUajL5KgRDexqmjlzwpOMHdVxz6REmYpjvTZKzz82jyebEq NvPp2AVBCVhT2Pi/JpMpj4PwTXFEcknaIGiMRyxEduBwCMz6GAbBabdZFHIGrjzMbQNrW/XjhWWp Qa4HDSTshUAvY+ZgNUANPYx9UZO8zJfVeay3IPMfY1U8UQfbXDgufGRQofk2qc84ltJP6k5kn50X KLfVbAwMzRwdHbp1HEYiY7jm+E7HHr2CmdZLUJNESXZ1Jj6maJpr8+c7LKuK9XS3mhLYvcONOGiY 67Gx6O+7ZN/wNrRida62HuAaXPTQQAjWorRZZ8meYHBVNLgGpm92QxfnWhkJv1GwDiRBJKZIB7wG HsRevTzrarF9qp9iuLraRrePbRksJCSuBsgMULFEZb4ev0gmnWSOKVF1blzrY7KvpThkcGlIvXJU sR4KTZaq855rFRbzAkYISUrTGm6guJlKJmJotCAYGmjISczPBKCcnu/tw5Psm1H2JIF4OpY6CL4J AwCGxcV3yaZ8YW6cTAP+R/m8pbgoJhuar3IDyyizGxYt0Iqt2rEtFeYojw5Wk2UYwGM7W7/hhIgj 9picvuBgSk0dZc7JY1L2nanFVvoTmsA6j3cBFe8OzS/nNyQGyla75SlMZe5JJ6vt0rq42dJCBsPR 
U2AOQ9RzlSzpcyg5j2F3tIrwzHhto70FnLNq+gXFWI1NyDkRpXPGAoIEa+W0clc7g30IY96mPCgu pkVft/ARZquhEAkvu6+Ucm3FciskZ0Zx5PJTW0bppKpEp6m0hq6d8xni/aEm+Bi71ufGnXgkTZfT y7MXN+sulihntx7SNKBGGmwKTt6Ra4/94KbHs/YLOOa+09jSrCz3whZV4yta7XVXZWIACQHTZ0po 2SM3iaOb3AOw3qNLnikNrnm8H2pIIuvpRy7DxI3PyzXn+G7Tiw4y5MJG0To22wnOt7GbJnHmkQwZ lwVKsKVG5pN9WfZBR/4bsZQitDG0zQcOso/m1pvtYDrfTHdE9VA9VBQz185eZJZXvrzSb07iitHc HplvMUPwGIgrP5VLypP8tyybiAVnXjOUJyZcd3Em5qsVMWMJuW3DXQAYCOQ4+k4ZAQtRz7LZLIHb HixKMZmv5xJcBiN0y0FpiUONoLnQJqfYBiK0p7uoUOpSYOt4XLOvZbnGA4xDRAhfVVq4Pq4LZigZ a/g9/BoP9L6JrOUEeSL3zqF3LZCt7s9inAy3eWvPSvDS0Gw5/GSK+eMN2O5nDhmj0dkt2bAoYGQb VpNXXf2l1dtoQiOb6P8iov86i3wAkHFMEnfKmM0c0y8gs+hHg4r0sXNNEyI2JjqiDzE90SLoe9PV 1l1nmqnv1tJLX3y13feyrtuKfnz8kRRDTj+01nFFTLfL0xJbbsx4julXTd8XxdlWRCH6M+g258aP Tqvn5xeazfxO5qOvtTeh7r0qo//1yKbatKAv3XBL3zfiPChOf7TbTiGbmpH3DPH8sTacWJ8oojPS 6BmG8d6BasO/iRHA9Dl+c9iHzTklDK70WBSKeeO3HKwtfo9W3qim1CQpJGd8LcLeHEsZHnJqUqze QNVGbbMY6FUvDNXMfdIU/myJdO+63MxMa+T5sBZJ4jwRDN6OkEuEJYPJCN9Tp3jUbic9CndHekH3 Lfmw/Tj68YjqvKQ64WXb0472/fqaubfD9CjUVqwh//ZUm6hXCmzfq7q6S81q72cYo48NTE1pziBj u0Iuwm4aXVd9Zxb7zhiat41SRKnCS+NtxYMEieYk2KEe3/b54+pf72GV+Osv1Hktvp/ZXyF1snTF lhfJKs3tQcSVWw9iNTa9xgT5dHtjT81ePcS2L/ulst1gSDsNr50gWlRPsNXSBNd2Qc1+MCsrrpJ3 px80NWviY705yXlawrWKKWtmynbaaW39kRCdBXJVluMAniKozMbLCh1OVHrNWYtFhkctM0tWGhTc aflahZTVGl9TMWSB7Rz06FmqOGT20nPlzRUVRhc0r1pHKIzXovpiPLFUUzlznEzfYtw2moc2J/ED /WKza4wRVfmUKZSM3y6DN2J/ji918E/CCylyCKjMGQvThc2eb0MSeyMzR7/imXMLiG6n2ARjJSAl BZel26XHsqWZNe2je9i9IxzIZa8X3f54uzB/lWCvnK7EeaMqEhdE2VabQI5kFzn0Au+I2uOTgySJ 7tXcWW3H85MTs5M3QUvS+yoxZ4EticaL930BzsjzEFVBGpk+9y9UcqZ5l6nQjaLd32tWhD19v1uj KQzMqX8bukNmu7E/uwhx8vjM3MbHI0n0MRyLu7uzb0JQQj5diIVnsR4tgRXpZG3nGiX6VIJN64Ib HqpZrLvOipji0x8//C+wXNjrvrpaXU8/Xb7/L/8Pofa24HkAy3+JxAOhYWaIb+jGaSeX03e7U1Fq ZN+Xm8v56vxZuebA0OSD9+5q9f0zKYYiQ2uEOQS84uAHkM6F+8XIZIQBi+5euDvojgK7aLJxoHgV rXd3Kk6O7LikvckMYDCCJLVaR4PP/691lD2bcNAKFAFUW4qjQHbp6JSIpJKCMczo/YBCMUCe7rmr K4FeVxhPwwS+UTuFuTeosCKOWozjjN612XI7QAOhH9Z+sbInLAFeY4g/IpJisc8gqFfzZNHa5QVm EIRVLuwtdPXFyjg8JXC3dxviUq54IoEmxowFJEF5xiZAsYdMJBG78l+bYuCj+e0CyFbbRGBaNXTB uUEc03+Eo4PPjq62oW+q7Qc19dzuPrPjh6EU0DQbXzMaHaxvxK1Go7FpCTuimLmrhNcEUZgVIWXM N2blVDlFuwSeeD4jGD6oHD0vOcDZVHzBKT4GASg5UAVm8YkXAUbCChxYpbBR9vghxb9BQV4lPgaM M3BNJhDn5IoPTCY8kkM5lkfBjLSSA6F6qdfMr9hgUahIOzj1eretX0IJ7FxcLg2QuV7kTrOE0riw wQrL2f+25TFXuDdujZ6POhD5IdDCHnFSXGEJgDn6lF7S6bTxrjDn/nozB0rSRn8fbA+KPTHLHiP4 +Mjhro3s5u6mNnEqSAUMghGUYEY/kTaQU8jWMK10jHuciXdlUuV1MCGfMR8uhICZl7RFO9b3dTRD DyxtgB1V47AhHeWVRyqj5bKYIcxO9mK7Wd3WTo3rHamt69uZb+1PO3gU0jx93TKUiGm6IWbkeHyE EQPgIF3LgYl+WU9fvfru+xfPx89+/fQt4oa3x9ngy48fR3+X/9uDe+3saDKbORGH0Tx7VeAhjPYC hIezJZjRVn0sUh4/v54H8HXY9isf//q7d4iKHqTMOv807IizLywb6IFwIV34Ozo+kYn1HHJlVBiM 3XPchzUZ4nZcCXQvMxf5dDlDwI9uG8dq8CkbDKQ+B57yCtFB5q6hIRbSyUWYBJ8JIhte9BB33UlW bHTzXEX39ivpZVFNJ+tiLKw5snLaR/gpUSHorToESxAHb/y/gPbQ+DtRXsP8BvSl83coCfv48e86 nuceJlI3a7QlQAZzfDohi69NRWVQ8KnyupB3I2/yHG/rKW3brdcdlQvm82qyWO2W3WCPIh87X/mu 0VN2sXGq3JPHxR2KkJNaJuQ89Io7ZUkF8HUX68seCbI+7dBKrEI91eRUj+ziBjYFnGEb2HLAup7v 5rMyu86/VTZqWyJ5mzPfI0uiPUQHXsVNx7nDdOT6hJgLTgiDixI1WZBf0KXhly6rLzse8voRg01O DbADdIb2bKUAOwgkRRVNS8bvrTxsh+Tscv3xDu1lH8P4jcn80vKgiHTY59+iXEviPmvjse2CVUPU rUcUbPzD/lMmD+4qGoi6yxeH3ASmtnwfPgZcO42v5A4Zehs2C0OuwP77suNwUue48BY1PKsbXIu+ Mb/M4IHb267kdrXV/ayNiYgfxCsH3OHoctfu7WWUU7GM8e5fVKab3flqutjN+MvVgK2gevvigrk1 X0yqi1oeHT964SCdRmP4SuYN7t+/vA6aPWXryQm6PTAitt5JdSBoELKnGKj8qoNagt0yDMU6X83m 0wnFpyB3HeV7fbNYH55ZVUBaYMUtYKO5csf18t4aDlvBIX6x3a5h4+OWQmHgl3hKf4kZviSgFSSz foY/11zw/ux4ojJ+7F5BR1Dkn7Mo/NABObleIHXFnXObEtxeuuuoPP0jou0wDNt4jCoRXjZWhNhz 
Ewt7fHmNKDNdmmZ7rfNTTnZEOjUpPmpa/N1rBYPZN4PT9zqL4nRjhoJ78fQWvQWCUOhtLcVk88qA InyE645+6iCtu7yOeNmOm18SIb3tQFHJPGmKqm5mChrFO5xp6+V1k2BqfcqjB9cNCfnY9dvkD1Mo h93GubHlNJqR5QwmxyszHrfw22XikY9K3KuofPhGSj9/B11eH9vRRY8Y6Amnsq4cfsNk7qBx8qtX lxKIuy4ijPblzqhfbYCMBevRhE9Ebxytp0LxxBr/mHGseGoaZqa+tCCrRmU//aN/FskKloA58DMg tCglsGEHiK65SN15lnWAFHY40iqfIV4DSfqFR+sET0PUflAxSiNzEqpwsRyvlGO5BnSaeSxgbiiA 9GZWbLQR3mLOfYJ7W+4IJZrT3AaUnGm0n+OOBPpHJ8+fS5x/KGluJMyb0HODp2vkHfnCk8bbwbPs holQ5rUf7mUnaFB0rdJrOiWBza67NB0l1XU1M3HsqdEUcybUMS3cks22bSyaCxs8OjkkcKsWSTpd yRcLIVBvJgnzDQUSRHKXtBBMBRY0c5M4ruxzp3MHCPmmoo6HCBVlnubDk6RYRUfVOyzqwqvY0a09 S+L5woNkb4HROdPccT46USEsJ6cTbIiinvCZmXSURqtOh/stPh0SUH0rwXmJLrJa4cqCaYuNMsWE SXHeNvi5H4835X7fteL2kcQtxweNAxp+8MPDr4razhiMH46Kagqx2WkTCj8f+zTbewgy4nSr6hrF CUFnK3PeEyE2RTNa4pXb76SeC9gKKocpPkfIwLOq6oQxJtLBpcV2JWVxFiDQixkSksQtqRKP4Ulo mCGIrikbJj7pRfTUuwnBTRdYFUzpGiLJQe+zxSlK14tUHD6nqxPGRTqieLhqWqls7IXt4yavthQl noccA1yuzF3VDx5NFium4VpJcgdRUr9BJhp30BZSaJFPvlTKIef8qrWVbuVBdHWD2kgpnZCC8z+l AB4yxxUdk+gAECRrMCikFq29LzM8eY6F2FoZA7S5WtLT8N1X+HjCEnUasrdSqqYlIq7qYrKhgAaL dbYsthflzCVjLIlUe6HlLN74gbAS07SkbJEbzhez5eQGlqPbs6NgVUGK+XK3tGouFjhgv6iEKuu6 pIq2qHyxQokjbteVjRZ9Zd9b4BvSf6oSYSzhqUnxzgPktNAWCA3s0pbCNDlLIdAWWi6cR95JcOWO AKx31iVr35rGwdE6D8xYYNMd0YbXXZYTReKjIzE5S9AMI0aSltv0R6gRRK8XoMaLEjjl68niEl3/ kCsxWscBNk4Pq7kJRHlkUYkeBSNoRPNHPjX1xpX65i2XXi/OtMY6UdrCUSATCSYmRCRG2KE4ekE5 ogroCohvPwsk/zm97iWabJVqR3UKySOflzB1tDmoCB4ss2JbbJbkW1hc62xn3myzwRMdNLWriwWZ 5ONXbCqVYuqzs1MZ4CRSyiZRTEOSQeModOMSV4Ew4Z1UiK31bU5I53lzXExv312Xm8vK1bvSnvE+ Htpu2+A45HyylRRCubaZKfSzFOPoBTZwe4HE+McbdCyt88P747WQI4/90Db+kEal1wId/4jzP56s N2M6FVk5q7tybpTom/DOlLonxSIxuxupHtlwWknLWghsbE149WkJ2wbMp2B4k64aSa/bTNjv934/ uLcc3Ju9v/fr4b3fDO+9a/uqNcy2vKRMtjxjhPIGeBV05CSUEEL0sFqJSYZvgVSwChZ54rNiS7G8 2QsRSBtMzLurldp0qRE2nJWLyZ/mi1sP6dS35WEW9LK4Zas1h4zMSTzrJT7u3shZQmTrhmSSkvUk gB5wKLN7t8Dgy9tiaYpERJVhxD5K5anELt/OyZMGHx4jSqtXmVGvEKenVVyZWGDXsa6862/IXbpQ RiJUjKcUs1JM59Wz8dNXr0bPso67VuDyjqp7iggD7B9q+narS+KNJKJCVS6uCnuLRKYA2FHVjOCr T7uS/VqrClZI6+WrVy9+9fSV0fp37md/zj5mX2bD7Ovsm+zb7OM2+7jKPt48PMV/ptnHTUcFOBns NOhUWeHNA2fcK4w75b0CRmxZXhVdztFrvXz3/cvXz7/7XsOUuzYDMjQtYK3Ox6TnHc/m1SWZw+Qa o2PT+Re4ag3+dPJx+PFj79vjfxmePEANNiR52XP11XT8k3pJ5mKxKM4nyDF5DTwWKUa1VtbB5aWg r6bFjuKai9K+dYadCHg+6EPOAW6r9T4VaIcmUsPcCQQeRfxcoGZu2JOqGNuXNaXV2tep42sGThbh bE72KhTlyckmvdjXIB03C113j7JjQzuoocUPGE9ESPf2Yrwtx2eVGf9+NpnNJtsRnpLS/WiKmqeA 8tNSpq/IeH3hU3nK2rlX/dO9itpUrfsmrYbn0IISuX794ulzzeeR6mrN3YJdxbGvw1XF/ZR2Rx2n c5cLxE1YsLUJ2mtAgYv5aU5vG1Yay39GNcuJ63LErtoY/mFNPD5+RBuPL/1lSmXk55tyt+4+Ctal Kanz5b1KxtRPnyh8v+E1dVeafYym1X6ZvaGHPBMH7NZWueWkoq80JBSJW2oR2U7bhcTv4sWUrM1b SpLTW07EyQ2//NIvvOdYJjzdweJhfahz7AsdgL1HAiXUbLpA5mSXYC20G074XVVsNLx9VaFSWyN6 YqG0Rfsc/wK2+vyqcDetteeVQtAwRX6Gxz2XTduCfwbQUaZKRC43D34ipxkcL0WfHKnJ5LKAm1tJ 4RUiZnbngk5qS+26bbPdU7vjCuWgtTPLKXDbG7NsCH3dUZSgHSJWFMkPVTDdGQy0MaM2MJ+0FHZh PFhap9iapnK0hbYczhMUpBJaZ9ybSl2VA0wyoNSddEnOdDQXtRo4STsR99RRJ8wN4t8cauX9tewU s/5G96osz/NvrL23LvQe2kXejE8XvBY8TuJjdb/7cfagR3/fPehl3fw+HrB2O3pODQ3WQuvYJAh4 tLOCwckpCNGXvuSuJHvMa3amgA2+nheOTPrlloDARSqXVfPlfIFwTRKaareacqBvOIclUC8dj346 RxpKfTBKXKx5upgj5KBnRs6mS8yq+ToANMuYop/N9RQrG7EREtGMwFKbNQGhSQfk9daR4xzKJQIZ WoQQKhi+dsqyIZVgIZs4TeFryMrg9L4yeirUWcoiDt1LNa+xcUStCmc64Giz/6XMIA8zeZOmisub DkwIyWg6ILarplPaWD+5UFmPvkbaDGtOYMzOYMOczibZzZC0Sze22l5giSZGZPjJ3HOv0iXdsNgA NifRltHDHusqvPJUGuYbszUo1Vx5gjM4I7RjIFBrygCU23zsREC0NSZ68zNTCNfdGUMqzZ6+bYqP jSN2j03pxyy7JYEy/jwSs8KuvhFhs2fVFENWmrL6bhjdMepReVbSkGKYo/V5pobfPyP8fW9uPlNT YOSTGE5UZtZEaTKEdT69XGhAPMZopNlVcLU6fYeZao5WigU7Kp6rVWKuHVG8em2psaeRwn//bEDh 
B3ylYf2ES3m6TaVinWRvgaMRZv2JR2573XsbhjHwrCezo6xv5/XgQ1QVyM7GdrIiZ4Uko4a1UghA ZkXrvXJc4wN3ejBfHnBvbvX4Hbps11SftKLhYSMKWPpDwVMN5jRedB2mzIXZ4KEKO9ir2SrYCqqf FdhOG4P2hQpu3FjHHTxqmduHRCdhw9khXjcQrqyeFwGFdMa2RA1H4ANiLAy/pVygV6q3VlzDcfSn OZvfiO8igwMW2SmQZqDPqA5iGH4intd4bJGs0nGnl1AhjtQLoUeyNjN0i4Zw40kRM8Ioj0gO9psX 7949/dWLd7HhykW5mDGLUnD4wzwpxSObAJPmGL6jHWDnWVwge80lXEBC+kkXvTC0n7MdsGVpy5K4 IZj2DqYpGOI8KKQVy9yTmqzA8SsVJDsM0MixtAuU4edoQ7GJTbI4Vc7C9zMORoLxAnerWacX3qp9 1idQDjBdieL7xBWwukJLT8WTifMoHPLd8rYl+gQbm5tY3W3J3tvfM4oNnuhZMhA9LEDnYLqek5gw 4R2HpxN/Ru6ezS4+vH3ln0S88ZV4djg9cCvHUNaJQ7uI2y1dB0zh6IHfCGksyR8kPQcxRn6AwsEj mqhw1V4zYtgncZZMnhSOGtv1EfY1rEjPxVApXFy4YJq9oGpjkgROZWRL1XmUP0mZHGMz0XWNJDzt BinV/Ky+3IZi783waA+dANNUwT3OOoNNR2Dd2XgrlYgYg5pVMi07fLo6C2S3xnCwsjxwUXRIrtUo G8KFwvnca+0tH2ZmzRp/i6xL8zr4JsOie/4KglOZVhB2jhpwEsEL1QkgMKuBhYokEO3aYdihZTdm doaBAMINZA7Z+FA8umLUue4EXWcwcbUEIk6VdwjFo76tcBSup4n9allVrs6ryAtUwiYNe+3p8DQ3 dIvt4eTir3xts4FSYGPmsNLSgvsS1oPu/46dmXMlWKG/HKSPOEcEGLGGWb6ZhpF7UTQEIjpo2w7X xMA1gRO4tcRhH+qz0+flpW8lEF2JSUlOFbgjzo5JibHnpOxcP2HObYB8kFq+8yGUdU9vM/EmgPuc D2pGKwU2AvQArdjVgn0S+R+RGQZPL23CTui7tDK2+hR5vtxMoRm4B0/RfcquDZrVpCHkWmeIeLn7 PseJcyTjG4ts9IsQv2I2Yi1ITMXW+WQ2i/Bw+cLke1YQKSM3HrQ/6WcP0/hh65olYdbcOrHgGhfT WvZku+2+85ttmmzF0pdqc5Pep8Jb/H16w8qqIUtY6nhoodponepOWLTI49PEXnI6lLjTazZ7pUhQ s1nq7oze4iUG9lqcqXgz5k2oJkjZcaiFqFH1sJmOHvV5zY4eRQQOU8pOQZbAXczAwxU5uv5NOxzW Fda+vzjPVyUaPOJNEYgsAnHQ4+J6cluxPXZXrz/lmc+jrCDt4hbPNHKjL5aT1XY+rbEiFkENtKRP N3e8SXFwaGo+HklO7Nl2WlQfbKLgsOUrHNkqzxBqQQa8O1ndLqGT3wJ1/uOu0ip96unJDGkiVZPd awLWOFtMEmwdTVSgpsOEjhKAknR6qZXA9cKOvk+ZXBYVWAdZEtsJ4paEewhZC6JwKOymFEEcrLRX P+VTXDFrRd9nB3WuqectzpWF2U+2hFNYK02nQdkhLaI1IaN0l5bB/F2m9mFFoejwKwZKnS52uMx6 GrNsU1SwSaEmj93aWVNpwxBhCZ1e5JYjizQCxzjKgEwTGAaidbDlI9+MCg2AnhCFm+DoXSjw+Kuh e0vZreq6uFtxJ9UkdEEhK7jPe/vFxaZ7Bhk850PIMOz0/todReCc1W6dFkgyxVvdUu8qvoHVTiRj SlUXZOJ2ikbhN8iAkHx3cfvFF1/Uy2T4AsZD3gvkCyH7VTmG4giDt6v0Jkn8fzV6yIT8IbkQoSZu UXlsmMOtAr9L0U9pkb6jwlTca8SusW37kbrtwQydluUlULDZ4BSGkVz46M3Fdrk4Qtf46cXgyaCC Agdf5U/yR04Z7n+PHz98xD8e/cNjffnH3TLjiBP+ELd851Xu4T5VD06NnAQwHXRHlcHrZe1mBVO7 XJl66EJVZbeF61Icn+xHj/LHivdSDW0rURA2GPBZODBvQ/NSJ3HHv5JPQ9Zj6qVJIdxNuU7v3Ou0 gkU7KwuKDE+XR6RW6ONRWasG+evEzRVKlBj/o6gTqR570gleuIFkgl9S/l1TF52ETrHRFgOyj0l4 0rPBFVD9m+UiI407Ny9T0EtS5ifXhNTVZ/bCdMc/ulOEj7QunyU3TLT737/FtJTKciutGGXfP3tn SU8vR8LIQluksKwRaQRedMv63W9e3ak4Ncg3ZbjX9LMzR3CSkKYZtzdMGl7NWZd/PkEdn3UMQNFX V+6Nobe1GAWQdxBWVsOTpmRyImDDXRTL5Vz5UXuwyYyAqtd8vmKvjESpSdpJNhmN+CTk64UDhB52 Yr4AXD7MZiroUleHi6B3lxhTjuN599LYE07jqeHUHs9EBcFB/UGjNFgmDiZxC2gYSy+8KyEQL0kU 6P5Uzok892QWZhRY366pRjrgx52JdhmiEduW9W39IeTOkkxgxBxGrAhNxtj1GtbF8hCUCRiINYwm Bc26R55byNiyzY2h6+LLut+Sos3x6IFx0aiVCABMgxHB+F5pXFYMirZkS8Kqm0Ix1qXcna+2JCnV nNT1lE8vZOLFOsYh4yUVmhDl95/R+21hfaIyNirKxSz5+Xfvn7561XNuNphBSMSyOh91OnLtja44 VCMJAhS4jVzZ3HNUUlUJNnCene8o2hEqAunqavjCGYpeTwuM2ZFh8Mhvv/i2FVB7qX2wRCTmtl5Q BovynK1Bq/OUXVw/uihEHAOW/wAqyAavO62DyX90mKJWjKxISOdOmtRILfbPxW3iOCP+1Wf6413C TbETL5sF0iYlJLioltaQ1XdlrTzf2r7C2KdEQ3iN8Rxbkbyz/0K5SnkCUj2hJRAKfjCWTbdh/FDC N2NpUYcqCEQ/KvpKhK5waKZ0r6Ndw+s+eivxrXXdM3rzRh3qzAogDndKPo+Gqn6EnCgAtt3npt2p 3U+wEegli7dEGL7JfIF7aFVcI8Hw2wlrsb6d8LHYFj+sqVDGj9RU41YtN7S6o3cJ5JLuuGeBozUu SPOGnY7y1ku6GSAvwSbEJIN2+Bzjs6TFAmPPEMokudoR1AF9gcICcWfiypEYIWIVBm/T8ks09hgb kkne4NgmMw4HIHQnqgvq0gMfMSjEXebjqibN8Y1KH6zzFH07fjQ8OUl1wfMK43bzCe+Kqq5sDOH0 5GICa+2Bhoerc2WszGqcB5M5QyejliOxjCTSqSli1smT9VHt3hwlR7smZ6fxjP4fAz3u/wePuwNG EamG/NXj2sYGyktcHLQsGhZfffY9ilKTrkEpGoKh1KoS/zuFRTlgDqxuyun8jzC0ZMxkPFzYtvhR zaiuMvSQxbNyN92iypb56ytCSb2aozLF8a1JWnpqHaxJMjxoruxKLzZWOCsPsICTa5RH+zBrp87P 
+gDZzWF2XyrKzK0J1RvRHtPE+xYdRudWb+3VbPHV55pqcJ6cVrSlAaR/MwQbLSQ25aL9Y9fuG2jZ i9S7377OHuVPyCVD5qhEA9oZ2sqhoAZu8hwyfob3mC5DYcDlCe++QXmyDB9+gYodCjgP6ci1t5+d 7giYH9b9Dv19S61srtUGZSHrRI3I8zwyieIchs1AC6ROyvbNLjw195MFwBNvNJBG4dA53BLOHXOu o5cylReXdeNs09W0QX+fkWXeBhbJ5BRBjyXqDQYlgRaX1xXtZZwCdrnBASILMLj+RmYKB+Jmu04r uMeJsH0RUra7L8F23fAO29mDxjOyjfLWL0bqJmJa1Q/aFMXE9e/K4n7Qii6y8M7qGAVNg9sMPIfY CVblZtso2qyKT7tiNSV0IqQklQPTKIVysAtFuJ+jmTHGxUBRH6v2Vfpnw2pws1CMQ1eTVehyNb0o 59Oi/hBzXCegL3RHDR1f52iMKI5ev3z9G7z0w56A171AurJbkXGOmuQAa4NtosPkFU7BGweNxEPe gIlHyu44EYeWJJjTAAjiokThoaNY4IuTJ5fEW4Q9eOeRF4ROJFfei0UBh5+6sZCQukPQc1mXLIHn FGUe/sDrlOEPLQgF/osMA2Bt0S2VkvFai61taE2YdQrXUeWt2Dxv06k9/jEtyhaRRhu+io0TMW/S Ls6Ub5HaAuu8fbiqaecUNxemT8L7+KKdGpSfBJRHyuyN/EhcOI4kio764cfsLwUUVJp1ODyPwanS 6hvwY7oBweyHptW9u+D2/LvwSkk2qdP7YlTLnNS11yv8bufxZ9QUMzwH4hyx/cm51XNR4NOttSIf w9ZGOy5o7incSyIbwKSa51V5/kLCvAhoTYB/1jI1aXwxehCkehG+WzWZ8Zedb4xuTNqm+b3YR2Fe NkxGwBPpRiDgwgISbYaulgwcKgg9aAAsspaZL94SSzHH4qtHBwxFBQPGsKIDbiPDALzEfO16EOhg jDJvYMiwGqlxm2zVxXaev3u5CwlKb4akJmfh2M+z2lArHmlKAYXjIkdZygEEvpZrioTabhQAmWSo dKyGwueYSs368kKr4PRIPp0s6sfginrhVMnOSlDAle+85JzvhLdzCmwh8iSG1bezBueg1FXjTGbL wmN31ZerQV8vDezDtMZj5UnXadCDGuOU9H/kXQbbUOwIVBOfGTSAO5WmI993h7NvOtu/W2E1bmes XHJ7fIdeHNDILBFYglgOHGqcLxNJMH9Dhzr67iUJJ8/YyM3w8s2L2rQwqwemvSgWC0baMN8dFshf JyNuOMr+lsBwouixGyZm5Y9x/t2WFADSFHRLltNC2IAlL5F3dt0dgW+dz8pl/8UNjBmding1oMCK MB/dRje+Ao9LKSAn/8B3bDPB1Uf2JraOfbBDK1ELKG3eo5dn/vfcRAkDmruFs5PPI7IMpkPgGWJM 5oQ0+Rr4twTigBaSr+D7+9s1IU6bly9evfgNsCTj1989f5EEC3cUzXoydDV3b68A+/8r2LOHRokJ WG7/juJCHCPwLHPNasbDBaLnvkZ87nZU8t/pd8hqGrXWMHxni/kUNYGd3UoOaXxQO6VOvI07rNKj ZKgMGtuCsRA8+lf0kwyfxiYWb6qo+QrFGFgc5kDIx+W8Il0zPovJeofBCy75l6jdZ7E3a69VB/yj YBJqkkT3F/tAh9cmheWRB/gZB8Xw5LKRNNCPGJCCaAz/aKWjBlBKnb3wIuMiMrDe+di1p50sFo6n FMkqmGsL1EIzG/n0LvUrFL1grHFclsvrY3x5ElMFLFZv5edR03s1Pr/HmAWFNI88j/JZflnchu5O 0MFAj5Hju9hHZaHQzyjAYNFjNUW1LDC7InVElqfAANLsFfEY7rETZGpPi+11AUeoAX9Sn8ojgY28 gMvKFYYbxSs1SdE4Vhtpe7mMOWdXPTLWRCLSVWerkNQF+wqesqIOvlclhq8BkropERB/2LUWOcZ6 LwD1eYD2N38e9OjXuwf0N3/wLfz918f9vyjGjy4Wx9APduukT0Z9n7VdIt2N0iJjz4y221gJ8Dyd dNiNpIFj0CJtjLbDTrMQHN572Dr/fETzLJgDbIGroR6m7L4wsQqP4yUahdqj6ZOwmMID4cRz0ASy Qog8REjxjsc4fj4e/vyENdrHPw/iShzJ/W1aLnZL37R++rA/fdSfPu5Pn/SnX/WnP+nf/LQ//Rny 9ViDXwwGVbrfUU17aNOPPCI3n7K2+xQVrctuKYRKU231Jf4OhNOIu/gQy+58+7uXCfHx2Uo6KgPP 6+hRnXABykKB/bc1YS4MTbYrg3VrZ3DVmJxWo0e9tDDALK9cjillVkLoIE8hI6353R1aYyWJtbJs J3WgIbS9qMddIqmkU0Qsm0x0Ws/0u/T65V9vDuR0D1tTv9v8NautxFX3b190CNvzK2rzu05ieUuE k3JrArwXM7Hf3BTTYn6FQlFY7rJppw+DliwdkpQ7BFgs43hTHGZBiu3+GbX0fs3o0n7BIpNhgX7M fRDwaPuWRi310/sD7nFfcBdacw+bJIMeCWdT1Wqr1BpyAsHQIflRGienjeNf0ull39SKE5l1IC9F 0p2juzOc17OSzEjzPEfXlovJukJF5vVkhV9rCqq2fL4vSYq3LVxNKvkuSk/gHOlj7OHN/PxiW1MW CtvmWxKbsVxvW64HC+BHFtZtBu0FxVnyej4takrqlqi1guo0Xz/TN3An3SxhfDJzTyBXnF5NSdaV lFoE7BQpkiXUZhX489xtLo+yy6JAU7/b0BsgbaAdYp6LpbYezr2DZMAR49HnbVpjdn3XzXkkwlBJ KuLQVvpk/E2CbqTy480UzxEM0DhD7THblnuOwxyuTmZUr9O4nGNbdYdw6J2viWC458hTJtBP6OFB Jxs2FU7r9NCSn3cay5LL6qGlPWsuTe/Lhxb3b83FuRfeQ4v8orlIe6M+tMC3zQXqfXtvcQTZ/bCe a/bYL9UHNBaa3Ig/8BzHfj+q3UROGz3RRlM71YGPQMNKvASi7x4jjBq/PfYziFrymFryijfHT+jh n5ubxYKQpvY0sxd3OPzTcKRYsqVpe5ZOKB9JU5KktCRFFwLZSeKMtwzE8EDehyu3D/tve7H7G/Fs 5iZNgeYn2/mUAWy79osB13U97vSHbJkf/1bOh14HS+pkXaha0euMQdeWjSIJ3GHbY/ieSVaZ63vy ts7rHq/Wxqmx7yGvVBdwxWJ2Y0hshJOVTh3LA1iTwz4projpoDRnuwV/x9bOz1wEv4uC0ZWuJ2SQ TOwJuQeZiw4wZK53ITIhpVvErJgsjN0KKVopSgQ2HoaDLigUOmKbDfgzuXMhn+UUYj1tcf9MNi77 JN7KE2QIoR8OG+UqlCxHVa4KE7q+8qQnVakNzM6gDhKmzLH9f33piapIsrvrSGbltEZFgqvxYAXJ frOEiOlDBxzXsW2HZvTkDQ1tQp3QC/Gi/MXt+8k5Rr40VxUf9Fsy1rnPBmSEE2O8U6zjqQa0JCv+ 
UJNDWwdVI8WCBFO17aJEnQgGithLKSCojWB+A7clam6x8PMkarueDjgtXLce+qNMq1mXmBYYVI1p OvHh4WYdoZ7AmHIkDqla5llsstJ327vLd5I3DGJmguaa2+Fhbd0j/6mX/bj9S0t/DpP8fIbU5+Cx UK3MX2HaakRCn99Uq176a7T2IFa7XoQlARvT2yhBL9I7iXV5meDmOh/gJEzHdMZaOw9j/ZnhxDrf xh8N75X6SLCHozDWcpqBFzm50qBt6R8hdsjIh30/ueKEMa1SVljKSbODpsua7LBZiAel0/B9zOeO W0Xq8KG0NRUh7pkpa1grFqa4SASdbBLDsDga/2GjbbV05wG0lQwA6DhNK+iSEx41Y0+XsRSauI+N zTqo/dSioOWtu2/3VkKk4uwrNEc26m7cc2lxSzehNJ/PPt5RABOdotAOVcqi7jauRHjLoafMj5N5 h6+js49TqryDE1prgVTVSG2UHPkJ/kK+QXYY+86QhkO+pcCDqim21Lbvdb8XZWq8vJKf9j7GhRLF 5NbNKzrvLve2bwa89/mShP/uLtmubI2pAP4JnVVg8c0WBQV4rYSdVNwSNGZcliTxPisDh2Wdmmov 2XdLjifNFpQYO4cTtulSx8fG43c3zQxvgmS7+WnZGLsbQ0N6wx+qogldOC8PGT3GgIusv4pVV0ro fYYQ6scUkIROUcM60x5xlnLRr9AIbBjCln14+2qoDsUYPLKCq/plviq2iKH2JTpDkWPxdgPU8MvZ vNo67/yS3uLKmxPp/vDh5fNhdjZ7OPvZ6dnjwezs9KeDh08ePRz8fPbk0eD0Z8X0rPiHn04ms4mX XxRh2eNHP3Hx2PCEy/55Dp21p4Pz+R0cMrPdohiKqMP59Art057JEfKU9i10dn1ZlwSagLU/fFiX 4DksOUjx8OGTAfTm8c/g5/CrJ8NHX2UPHkK2rPsblNTA++/gMMNkrv3wG8ZHmBcVF/qBVvBMy3sE Q5Q9+mr41c+GX/3cKw/evy6vpLwmOyW15VAvvx/fmsOGPPUtFzrDDhouhGkhEfxrlIsGGibDzR5s NC2V/iYVvDMNlXDlMWA1oIUEID877mBongMxYFha4unIXtf4V7QDYXcoaOlntVlFBB/bzXFoZ2wz 8mr41DnR+NriWktSQMI7Ri7LS5mywUNVMSQy7HpdCF4P/BUykJwkTExGpm68XDJM9YRCHbQIFQ6L 8BJQfpOYHIZdmI29VgZ5T2pLlitBXeGYcmyOa79gyXpSVzSx3nUFLyXCM0eivp7iQU1Wsn4dVMZJ AhxHsjtl3c9gl+N/nxHUajxGtBKOfkbpzBs3XrbTSj9itjXlraA82OwUUQ7ly0DHp8D5f3j/zFrv ojh3gpf6z6B+DC+mBiEdtMMbyP8z+P9Q/t/LuscPBif0K78PBMILvh2bjcT6bMnAJmYBxFhdNG+u 5k/o4RLprI9Qe4UlCNdmUhIIOwIW9b14zw6UFgze3SODZ+nI4OgVsZpNNrR+zpd+dHANeJkCsrme IqvRHKWOj4rmNJvixrenbDtHWbnKOmQ9OWz3oqXlw/yI1+7gGxe2xkL8mMVm8XAsDk58pOGSuJH4 61iJPQ7Z055KcSIvzVd6ZpGPTfeh4+zAAHiyRH3bh7S90n5jCsGSGpBZYdKqQoivXXQur+0YWOMk NkjgsCK1Znd8H5xBkChYCCT08MRDMoYLaig+l9KCoUqex6Zm47UrL6LodCYlrPYlWvFcTK4Kjg2k sFGwlr5wYLFxRo95EPDE94COVG9jSvW2C2Vt8c6wyhiG/zg+sTHY6U1EWumt4cszyJrPUKVEBanG xv9O871BiTI0S1NalU3L+tlLpK7jhOboJNjy2Arh+dVlpJbXN64kwzoewLiq1InxfO3LYk0+g4HD jCmk0VMGs/puMvSm2UfGy/ialh+KcPmwTl8Ffa8Bzu1489TL6UggnqhtWZ3XVGXS2/LrBW58ulfn d2tUvVw4UW5CvFjXKeJFaoz26CB/+LPB4394Dwf5w58MHz3Kf/IPP//pk5/9n8kMcmDdvWMc1IWF IsyVTNabsceTHNwhcvFvWhLiFxRQw8j1Ir3Cqb7a5R1KwKKlvj5gqdc2WIkoXtPZRYyK6/UODQfZ +fqV+rqh+QPwE2L7cK8iWRT8/SZ2nVRK0Xd3VN/OGTpRfVp8+F/H61u88OcYrRMFnvPzT8v3//P/ 8Td/g6e9YvAgr9nPMEkG81pNzpHibzeTKbu/Y67dRiCU6LgXarm+tb9IrCBPJco3V8husdNii4iu NmWKpqKS8mrCTjnC+1KC8WQmcSSZZ1LWl85bXYsbpIkM0dmZFae7c26mXE7pQ27L6QwG0lcEMybO ZtQmm9Uxhgpp+4wUDsSoPZsD1zK5lUbBsXpqxgsPZumACy/Vdit3etEeXLThBB0MsOB2ugGwUqrt qM0pEq1B6xNvhjTyiZ0baktdGzqDtdN1XrOm1vVidw7zRc8czgh3YcRdLovtBCZs1MYpa0efuaHF ZLO4HSzKyUxwOLjwrLtEX/zBhEHLev5geTOFC6+Q6YwqaRg7J5/tCYMV1LSVcpjwMRMOw12e0bDS Ul3fssM/tLZf11xafXdqKOU4sIlsWE1Bz9AARzclFeEuPnIc4KVK33LE9sVB9nbWdEko1WPaQt3x mOYELiqL8Vg2GY8fzL/3MUeUjZ3xV56fSbqchyGnKochiyzRzHPgfMmFrx20zxHGUJhxdOFmFhEj sjkRz6mqMeUjN9lRZs2ezjSqk7FN5FoG9yoEG5E/QB1X8PPjano9G+FfikCLPz6uMEJLEFSHJn88 liLRPXV96z+3cwmNCvegLt0H0aJHE5AuIDB5KMl3G+rv9vqmT+Vmfk7AeVF3aW3mdImoii31cdOV zjoSHqhUYMNkGPAPhfC2Y+2tk22ZYbdVlOXzr1R1q/VPMgLLyeYSGnKLUhJ3Ge1WSngowh38svz5 xaQiXRe/xyDbZt7cS0s0qfl0UVaeQ3yia6hn2d+xViAPDSryrzpNwy1xxVM7aDmBuQ/77m8KWQxD Lw6bL7MzKw0bBnuwofPvxVwOCqDzSM8iXN98FqGpMJ6zqVHoJte0s+qYSguGG6xTfpbEZKQmnfUC x5lMAfCWI9SuS5PsJWV6kLU/rtqRvytL99nU2xtnJJTRwuJql5MVHJUbuAuOoxVrxxqOViwj+mAq pgUQJuMqt9c8qfMyf19slgiI/T0vHpFXXdtIkLQOhXuBtssvXsEUTl3dgCATjkQ39XQEj1Wx7mbt ETIVQq+JLMLCxa1ftf187WOe/BOYjvlAYE3ULHMFl9sAx1/IdA6p/7wtb+gvFA3n3vSMaxq2w5a1 Qp/roLt48w28r7HDwGUCD+6yHYl8JFMgf2x37WHGdAxa+jLK2ixTdILWrSl4TRv46u69ioOvs8M5 5uiFIqd2lt0bPP7KxPBaY8ATbLRjICr9Z6dXeELz3uv5bHuhHuxmhLJ/rJlHmEaXBvvZ2i4nIjMX 
D74ue0zIFroj8iMvZuGqG2syHNOc0uqa08mTe4BMIdE8XTIP/FqGiRGDIXv0kyqTMbPlHThmzcsd 1alhm4BRpXRdYX7GwssSyoJiDjIHahDPkTntwOl8whu7idoNfcmmyC1dWpmkN9KKMWpeDTBjDcFk eZ3GZqGNti3LRQUL4hyyU1hD6dSw7Qt8sPi+dq+B1i7IblmtUDgVnsxy7HTEFZo/hFCTYsJDGwdY qGyyVbRKrB7NQdCie8xzTa9kHFH0NE1Appu+IvzlA63AY0QonXfuooHEBqnkBI6LxtlJM6fkkuK9 dq4LdZOxQ7/3oWUo4RdcAwfU/b3Hq88x9hwVMLy840ErOYaJwRThpKTwhzs9Hgd0nOMrp5advAqi DwnRSe8FXBmP+Z2SnAimw13FAXEJ+Mp49aaMO/BQ5ho12SHOWKRPsBuhcenS6fCwGnqkbtNLLePW UTb6nP8g39VkMScRnAxPdbvaTm7o7n9RlpfVZxft7i0hVpbedGXudFZkhNm+hSeawvw5E8L9pjZZ jQ7OLr3Ck3eOoRt4SeC7fKxfXJ4AX4mNUNetRNPqNuKzfV46zNfN9uV3XSt1e4N2ud0wLEsy4q4W pqx2Fl7t+IbmYnX5GYhd5R5PCHPWIjSyPkdGzlbLy4VfozKJv6PWBle/izknQBZkkonsDCPN+EC4 58UKtv8UB6mbwKGJTEIC0Bsa3KSpiR4Z3D7YdExCJgvMAsdGXTRoHvw2a4rEIJOqGbbDyCeGEvDI ESxl/TZNDAe3rRUIquDVWMKkEf6sChqlK73IWqzjCTzImMApJu6i81ED3gRF+FVg76ExNNYBN6uf pb2mqfiuF12sINXexvEyd+c3nc6ZKwN+fG/D+KdqoYd8cBe+RgG7wgKm291kgXsPtVxkEcakkS84 8N6MfXM5WjEPmYm0IcU1w2n3de9qn2vqSq0297/TTTG5jCBnCA8ZctbjzZAqGsUvutijznvxlqOy RIebYNXwC7FqqcOJ9dpv6Mtv+ciACRAV971q+HElLBvTHUO/oB4ygehSMGLWW6ZLMSFvHWMBOZOw FxiBxeRgkwDW5NbRJjkhV6IgavvcO8F8emb0mNDVgWuGs91qCvM8HrfFqsI9NcrTP0ZHl3M0YSA1 ua4inLZNrVuIv6eqbYex+Lj8Y8ly4swhFNvXojzuQPJws+06wR75gwQzKNNn+SZIlOui6DsxLUjp IjIFXP+IRn1fX2OT5B3Vws2Bgfu0+vC3yK86wQ0/le//8z+Qmqi1LNFQU3Y2Otq6oVxJ1itenYhz PtPg1Oqp1ZqyzSthg3byjARZXCRcJuZLRvbHOLEavIPiMxCLMz+/KDYtDKWwRMAQRuMlYThQG3ZC ZdfbyWYxt0ElJO6Uq46qbiu2GcK7jqO/IokZXK71jVooIOiVfuRx0SSMJ5/+RpDY6BcLP1LpcrIQ 2G2B+EiOX+zmi9m0rLZPKcrBM/zez57CJjh/xtYEz1/84sOvWEegW/Td1UoMid8QNJ5WlsMHfPOL iTmV2WOaW+ii628R7KU8O4MBcwIsdNdlVc0xHgMbtfecmZYVKdbC80IsfxjCbVFtiiuOozJK9gmY qRtUckK+0aPHP+9pNvSQMxltt73kDx8+hGN+ciP2bqOfPswfeuiIq+J6PO5O0WM7dConB8MEFCLq G2jR5k72Xk18AS50GtmRs6UGTWzohoD16jf8nXJ25d3GemTbODOZxOOcTmbTiwkCx3uueG4JGzby 6XzZCS0/uegQHb4J7FGa7bXYjQu0V6mNEIam/BjJXHXeEkD13qaHqm23tY1o0kHmfiYFpLBFKbit bS9uzTlGnL0S8xsEC06GbRRdF4YXQGIk5Kuf8d0vK3ebDNlZtFQWoGwGHA/iU5hhSAopPZNQbBsL iW2bGsZBw/92BpuO7exJ9oBZXT/7YZX540Sff7xRcgdmQQ3HHBxlGso/ieIadxQumltMMU8L8rji EHuEk4B+gZDe5l6oLMQfAOMqHJaK8MnFDFdSz7X3P5vfEGS8BtYrDPgTIqJQ1Bqy272meDAoVw3u hSQJFTZ04ZEUIuWh4R4xvTiaZABL57x6jzjWnm4gKYl6gJu0GyZNxWaO8ujMHjSlWQehKmCy3Glk 6G9LqxzodRTpLWfRTiDKUhuf2sSR7UB2kz5HE1n4FEOxeyta+sYxZm2v9XWqu5HzgWBv+3ElfkCY 9f0YDxjXyY078UsUFLy4gR1caRSIuhBPppIw+2SBgahvoaluMQcEd6JYHEG8M5Nu7+IiXPPPWFp9 C7GKwRBob2MHCGYWsfb/211zawWcd1fc2gld44+BjpzECQgzOrQobSTmEcIELD6VQrzUGCHCyA4t GTxGDng7PLjD6zuP9zAPPYUb4vJtuamQUEqoPua73NDJf7WgPjDX+NI4s7Y3KATYfGj3e/tsvymf iDskJncxo3sWfun3eg2hPcQ5GqEPcT1PttJ+tNrOiuV6y5ZD9qj4fNbA7q6O1pUOp7DHYjG5fDyp Ti3L1dQWCuzxeS2y9ovJtrnrBy7oDgz4QTE3cdVYv3S+p555CEr0Ds/pINYmLbrJ6lYt2DCZkUOl A5ItTFQkbry4PPCrYu2xAeSY2su+zr5KrVBLlF++/u3TVxpiDu/WSstI1NJ2582WCkz3V/Vz2By0 s3n+MeiqRjIbdTq9msLEB4fj4iIlV9kLG/Nn5EhwilIXVW7MrNR+eWkj1DVPsoQA/vv0bPOkykSi iNa5OVWIr0V241E4X4rCWHBUZUZ7UHPL9KxzmjHbd3MzCRyASkenCewOgm+tb8mAFPpccsgFZ/a4 J26QVTuwqVQOC9+hClDXPVh2+k5zwjhswdHgFKYbKw5oYRPZbYixsGR6UJRUbIGcV+ejDryfM+pa 1NOQzlPUUeotRnSiQhQ0gmT6xsAVCs4DSq8yNa2bojv3iYb26g/69L7SMtp6gYNlQYbJy92W5N5R LDufi2YCiJ0ZLIn82X+AEHah9c3uWTHh6wXEkFvoKZ5Sk8nJ3Il0oojjZk1MWJu/zGrWZkhPSUxq QncdMnFZzcxxU37IvAk9/CGzhtoaM2uDAbCd08KfveaZw7n90aaPTfbjXVib2JtfUjzZKPGPZD/y h8M2JKcFmsgMy8Yho12K3WmfKUgQx4hH5gdnv3XAtnX5nX/nqd4sve2pOzMxe3edJzMJxQ1yjrrJ yoDh5tsPyXoJhpKuLJmlquZGShnVFxVHiS5xKCDAeHEl+pMuWDAxr/qW2wwslx3rUhPFLnl0SX2h Raq0v14k002s+wS4W016Kb7fO7nb+ghEXkJISMaHci/8fMBikHlgyRPCGfBJSwdtPz4AubHORK8Q J/0g7oSTspR9woKqgfAkNOx8Qc+6sD/V9nHiw4bzduzl2cuz7LbcsfMyBqmNmRYEFqCgOL6DmQlr qAimROdmJiSLzxvXxtn9D6Dccrg0cEPIVvkcF/NBD33TIvJBlvMRvbIqjDxYbKqChit2g7xRpPJY 
YuJrhBOccjI+tGEyMNw4qdXhVkF8LynLvmh7F4Mb5N45S9IcQ5uH8w3/r5Xr4Ojgvd5tzfvnL992 b+hC78zLO36b4vlvHFIhzLY2DvbdYlu6+UyE45FJHdyAKECK4ym9hUsyz6IfEhc4WvzUGKqOs+ay H7VCtD7kcYnEPCMOTCg6txS/VnNxbctGFrmsT8q42hpkSz+ttJiWso5VrybnTZCsTlRdS+s8qWXt eebM3hmarS9qRll4jJg60lLQ6x7J/M/iW58jHGPP+nS4rE1RuRJu1dR0JFMntBqyMfeOh4NHJxSB YjPHIAqTOZ2ScFmlSEV+/aQfiSRs9VVj+o5vGc4xqioX0iD6fnyjjveWCrEj/6PhyUkk2zMSTc9d XpzEMZvDbJAsORHez0rPEGsKR2M33VJUD1GqD6A/V3OMmOG6LHrEHigkW/r6rI5ntIaZx1Xxif0w IXk+lnDkY/3s5DhdaWm6mhOWn2QLK/bZWnw6SBtQx9NVbURjY2hdT46FfbaLImp7/dBCq3YFAq2U 3JoBtdseoqKuxs+kYHZHFuvCoJ5ou3xIjO26Nbkgu/bBVScZbPuvGRWbgcjRPnS1Zd+RydaJ59kY tbp2KvpcRY0beaTdIPMrNCn869cngCDkPjCxVg/NccB/hHo/rMjyDVg0FLT+9Wrz4waJHSrxJ+yf e5eanz579uJdc81hFpK/J9LuI7AJKhSgBZDrWaXOZwGYjCuDreLAcdaKiozzBbRPLG26nKMW0BYz sa07warmnWEmyoFH+U9wa852GDYRPiDNqOplQ27/VHXdtaUzvezVj0kYLjKwttNkP5pG6LD7vatH sMZCyLqXG6KM3RobDlEl9Q/UK1h+ncntwcoNT6VV36zPa4zTHHP2yNEjGwcue2KxRCFOE3EvyJvJ HjfiNoMmT2qzxaAYKPOxCjN4R8aC/t0S7wb/b3Pf1uTGkaX35HAY6107/GxHlKrNQJUIFNnU6rII gRqNJM7SnpFkDRljBwSD1ajq7hoCKDQK6Is0sv3oJ/8V/0XnueQ9swBSe7F2VmoAec+TJ0+ey3dk LYoUiz7/DDco0Ra+vZ0b7BJdaVmEefHtH8AFFmKVm1VUdOAljwsObM8wq+WBVPSGADG1+vCg/rUE YYkjBOUAh4tQ/MTJxUjlANA3v8EMGyA8N04xrvPjy7auK7QT/cZFERc5WGacdFc0sjQnl64DsWrW AOGi76i0EPvnrqZEEAAKxliOCNCTtPE3Q3JatVdK7Xi7QA9Zpl/4LHgWqCI5rfvUDROwkqvKKD2J l6TUVqiw0FrnYqB6Aj7DOUV2GOyufLAy1qagoqzlfNaQcBTOE7UALxWuj4mwY3X//psvvxZVKPoK pgG1KPWw0qwExowuqhDwAKEfy2vCj5CA2OSXGrcq58kZ8FtI0dKhw+aO16BGBwljU+RKTBNrVWAL kKZTGD66Uavfrdo1YlYa6xGpKX4dmARn9DyVRdkLjtr0zij8Im4P8IpKe5mxKoYW5Qm7PKsOFWXl ZiAdbAzXk9uEkxjf4hSMLu/XK3QymSZRc7Yg6mQ8FgXBoq2N2idy+4ynMDLHNUpsk7Z+/xzBKaO3 QV27CWcynod5QjsJ2WLDgUlYIRcvCZo1YaB6gcEkepiFDSa/7MdMooFJGUY/c+WpUfhmeehhL6oO BmdPz5999Lcff/LpZ393wl+ffDqAYIxnzz7+hKNqtm9lw+effExwvH+bnH86+fhjhcZWbB8GlPyp 27Yyd9TvDmLFR5jj8bz4qHgK4Yfi8gV/aXgAlavmaoO5LFEt2LHBuKo/+OADHML5R+fPkj+315vN g7Eg5588+zT5Q/mQPP0YIIM/eoawzouqXra7UtzqHY7Fxoy2EKMpTdTw6RfDRCJhwRfrpgKgywad T8Q91pChB7jq3XUNHihYTCHZNh23RsjXQKM1acgw9nvFCa1XANsCTvw2jKTeq+F/Tz7Mvvj+c0H4 zxEA9DF8Iiyq55ClW3zx9AsqA6izWCj/IrH11EP8HRwBnv949zh5/GP187NfksezH6vJXLYJXPR5 8WH+H4d5FE6vsUSmM5VgqoT4fgjKojh1PHh03DsJoV0UhR7T2QL36lzsFf7z58Na/vQ0+U+Hldjc 5PzjybPPxOYLnn/9RANSgugjxRu1ekF8SswcPqUahCSJ2J+eDYp0qlB6RpKJbzjBQpCjCcWXJ8NJ SPdXagxbKg9qM78gYxmjNr2HWVtlcVh69fA3cVxQ4YbxdwxGicP7fjg4BimMc3WghEMIwVjOQwY2 MY+pCHwYzlnSk+3Tl/iaeTrogQuGDwvQvyzWDaZBXzzU5Y4bcSGD/4nhggdni/f4RzCYMzznIFyI lwG+696zKQ1YHFknH7y43JSrh59qSuwLq4OMDA9lCUjGVwx/Cswr5VMqLvMBW7TwRU2osggVDO7Y iI0Gv0GXRt7gIfdOJFeuL5qr9sCuQFIOk0E9DNZL3SxAfGPg3yvcQwlxskcrEv8mmmYtBSMmiyfZ tQ8YjFX4DIyS4aOLoVK4VeXD8fKVKP+MyqPAOk2sIoLT4bwx1vqwmwhp4bAnUdHSLQp2kU5S1IqI Vo74RmqaxrbdMUILMLT/Zicqgv5JXjmgtYghe0g7qVufWBVGuny4E0BXFv18NPl47o0KdgpGoEWm hRKHMig0ol0ZwVKPrP5GydMR/p/16lT1n1Pj9jpht2Mh4g5+VV8RhGjZng7FlJiSUQxZwpqcRON+ ooifBdmUsuHrVy/Gn7mRQwRepxqw4Wfpx2EebUK5X3MrmDA5BCffbh/g4C+s0dqdyTJjygEY7dPs 12o3gDplldEXT2/3cB+B08fN9vXfaCBLxNO8efV/fkN4mjJigvgXacxGiLqErjUSsVEC+rC+XvA8 DqXzYDUxarHtwc+kYN0/EK6CA8YZB/6zADoY3sCB//sK/7BL5gMTYAtb0lZ5B0LuFLQ4eKTADWTi RS1Xdbk5GHheUCAzTgTgGOnbhD5Tii5z8bW3rg5SDByfA3gssd6Cd0NsWyhesFiUm3bzsG4xec53 uLe/Q/jRdHno9mJ7eL/TEQOUTm19DzVCmOH2W40DClT3Og5FfHBK4YAhMyf81/mNobLs/CLqpwDW OElqzWbP36tV0t0DxiULKWb0tyV+6tLe61Eh8LgXTmjeGXehx1FW1aZVoWjwp7cr8KWCZcESurpC iNUmZ4TgXO6aLYFzAhirOI7BPMPggQL+b2zlh5y86DFJo0xw8wt98Uzg10ly2DQ3h5qUjextzhUI OlSXNwYyEQ8xBPVU3yBFE2CsxNDXNXHIE9UHioXUPFA0RE60IA5UNUajRVoxFVFQntB1m41FqN6G 0iSkKhWnHDPHYlH1owTvNU+OtyGRo9MoUUIOthmJBnG09Uac813JRCLHnQcGrodN63fEw8dsrmg2 oKnLmsfnI5qJp7Cg2ZqUa52eD8Un9BYDVyrXWUwUprAXggQcd/uHlSQb20LvcCIDiNbpwDjLyKdp 
FI6jGramgAgRgVX//YcH2irisI7yWrExi6s9ZmW2HuA8TGK9xCVh7vy0nIJQAFcBS5lnRTZdWA4G KrJvu1fI0Q8AN6mmXJjEqOZOFJlH24FFl/BzmTXkYB1G3tOVaLUy2Vw4vkxXZOR1sXPZDPwe7w2f Eox/dfd60dV7i/bIUdHj35SPpOWDxw6WnCXIvtptdw3KmYJmBd2AepVKhC9X3GMsDO7NaMmbP/py mgdJXF4WB9fAkXxzSxzlABPno6VEMObFTlSZsgQZVGUfPFbVE+jkJmFFagoCJtgvIOQMxg3/BfkD v8vD9/NMAoRkAQTMPHJxqxsOvaBYGDLIN/pACF96zGZDcg5riqyjRP5Axombmi16/EQCM87drFXM WuiPfmbJaUeOMUwG/kVVH51Fa9taOdzwyc8CPbm8Vo5tocygkoi7ayFsH8S+7NjAZYiZ7zSlf6pR os9raJC6linrBWY4cfPr6BI+AjCIso3cpWKBRYEHhh0yxA/iIY+PxTHlS4NvzudF07GzSp83ihnW icWXEC6HXUqCFG/rendbV6n/UN0ar4oAyRa2PCzF1BDZy3OqRFk+qs5tGt7o781sB6HHvvm7IUE7 58oIBoh2UgQfQhIlFj/ZTg5wfwG/4ktMUrwcKYMY19tGG4bpu701YvnaMEdOTxBzU/ALZzvAMPjj hmEVZikUmSB+qGHoxnrznBGkg/Uh1Dv0u7x83dsIHo+0hV/xyz0j3BhDpYlqSdJcYqJEApda8tue rrlO6p8Rukh8bXhX35W7qlOaAYWNJJGD8Evq1FFehhKIbkgJwFcjNLo8gJdF+FELF9gzKLS20LF0 AdkeSIf8p1NCjhubWrdVqBU9DEYJpQ+aFwkBAVHjylVYVMVlhgCkFpyVK4yOALM+2F/Aox0USGTz bAEWMBHi3tWqvShXvPo2c4KFlcPuGJFKqsLBYnToDuAPjSE34jm1uKgvIYmDG5FC0RdAxKjC0TtI Nv91+YA3VUgr5Lnj3IFG/NBRhq1NC5W2q3pfg4NOeVkndyUmYqvEV4AybhIQw0wurxNSTvOzU6lA MMyC0PkQ2GkFBhdyMYyMBRuDWgQr/tAB3gplu0mSP9XJnyGEF7YD8npDS+AIbj2lMFfuDphhlRRF JHiWo7DdEKTcvBjBwDQea5IZ2u9AOPU7cK3IwF2IAP8cYI/t/hwz9O7w51kzzwNainMT5E18zmN3 1DkaYrb7eMopOY7kedLE8wjBc4QXQAchzcQDc06Z4uiuDnpwBoYLHG2aRm5IrytoYAbjhIk+Pp/M veR0Pad3e+TdbEBU4ptCPAq583kU0xfRJYd6SsMc6VB8bWHvjcfpqQlvmS/4axycKmD9YQUV/pOc wYkk/NKuXfNJICYcU2SZrHQG3HYuLzrI+8G8gZl7Rv35K34mCGg/ZDcimZkG3QQ+RL/RINSnOfaY U5oVEsU1pAtiiu2n/Xmz7k/owFqRwKzv83eOq+KX11PxshJsERJ0AINDDot86YvUUvi5PQZBUIL+ ZyqJhn8RswuWRoMonFhK14d/iY1Og5SBuRFPS2zcXKrYPd+lNoJpQtZXFVUI3YLRwEl8XG0lepwf 5mFOYOa5UYqaH1DVwFGUTMLlGyGq5cLoiwqT/GBqfkUO+AisB6Fm9EuMwQkaCbDKWFo+U2CRq4Ds wcy2kEYdvc3qNhhM/EzgcipHKFgcojbZVmY2GoCsxfqzydM+jlJtQ0FfAdoDrgjtGa4yF4juWW8g Ee0kKW/bpmKXRRZDURLaJcCNxH1v1CwvLwH5U0guF0LaWQkxpeOjRUihCJAgJRJCCLb6lmYjmqBh g9jZsWJ0hl0hdg2g1DIDOXmMi2rkh8CTznRtT9VECcmtLu26J/SPHCK+K3a//G3BUagOED4Iz81G FjopTIiAZGEVFFQ3fPAhtpnPKNNmJDbIu0yJw0j2JPVPKhrNpmJGvjJpeeKq0ALHJ7frxNmqjfrm vDtm5hcnMtjt2yvmAtawtg/8Q+bJCbJGlCUvKGJ0wQgMC3FL0YZmdheijIrwHIRlrvjswMkCX1rO sGk/nFEzh3dLhxl/o6CWtC3I4iH+jC/5AYbIXl2kWlgv7F508jJgrs8joaBlnMJUji7Oa5nO1hzQ RAeqixbHwTFnXrfBwNlt2+15Zb0S+ghaOHCqgoI8C1lO5Ys65LEnf7O7NLtj5LgItYn/GDG21lGq 6hUGADKjmXHR+aD3vOAtK3Uh6+o7toB42pDrdgXaDHI4kG9MwEQuJe/pih4FRgBQwXKEk9khTOyq E7Kspp/rMSePds8xpZPZ7ig3FT3NlT8xDZNsp90kDYMEZmdfCSJlAkinrBa9ahurrnvLnE1COgTV r9RWlBVE9942ZaJtTqyyFE/w3GwOz5OdRRWKODXw1u9Cek5wENEkkMcUkqzttKibNI0p/meSPNru 2qtkxlQyT2aYrKXdiU3ZeZ/EmOZO2lHDacN1J9AFramXzj55ICmk+LXKTJ06Yvksx5sMMko5T3by BcUEo9NAm2bCQZB0OJGbZzRSj5GpoYCUOj6bU8iiThOceCIwBPjFtzdJMG5fiUeuOfpVgP/6DR4a IwkHqoOFzAjHCs2WiMvNhgb4Mqjcg6cuxqyLtT10NUbBwsXSHS54SyXguCnGo6uSaD/zs4VuBTtQ dswR25g6U29tlHaNmMxjjEEHcEXsujJXUij5pL3s8lXPyQ7E+Mx3seNHpfiFu8uOsEW83GWaSGRg ESkr42HLeP6PdsAE7S9HLgpSZOjew4X7t/EWpGOPsirZl6BU/2CYzJBfrKz7Ed8F3IbY6iQrGhxp pGrkIU+jfby8/Ivtr3oC4ljKdB68udKw0bmi7Wql1MnW65eSgrmyXEGcOwtm1TLMGYENUAnDuizv VdZI2ccc2tzr77E32MAThqfrRQRSGyYSsFLiO0TqnPczcbFsEZxCOnIfdjLTOPI6aCXJzpquO9R/ 95ETM6XcILkNtk98RR+lK6Q6geVWrB1wT+v3rJfMZSasjGoL4uTOuMDQxAu/a1YreEAfNkbCPmAP VEVI5cMuMX0gc2dwBXj9rmsuz30WSH6U40ewMiJawzGjDx4n9FR0ttmwt/hw8UE4GNSlH8RpqXeo V6VRdocOJGc59hxjiq6uEAAGM3agPeMKaHHvPMkcoZdSwYoeOKmW+CsPldDJYj30AAKr1vSIM1TO IyGhEq67yytK1iL+4LtiZmRABY8OToIKf6o8qOk8LPUU7LAy5AsH8Drqe8GHg2Yg+BX6HB5pbK0y BEINsP+s2Zhzc2gwQeSDmRR4aPPh2vUB09ehA4OlhABcN5syGumo0fnQbLOJqZmCuimXTfMwPBu+ /Hm42Wl3jnWjn1ZFZ9NcYKDhtgUL8Yn91ZtbT7QNsLijo6laxx1Ebu/xtVUCXOH6ccu1ILYy1Xs7 dRM6WCvtPI6IruTW0WlgAFuT4igPtom4jb/aY6WG6BcOi0ldtfP6gUr56TJjNaAzrPR5chty2pSx 
9tjea3hUkKIqjOYFYcXVRB4bK1U8KyfVN8M0BgNqrpVyhleQFfwD7FJ7aa1jBDaCSowCa5Kf5sZ5 Rj+yeUI+vLX5V5A8e0ixyMyJb4wGvoXE6gjpQ+XAuWoj+MtFLW3gLSAfbdkUhfnlGWzI1e+58pni cKMAxFqqeqIpGIPHRLPrFkP3ueuvzOF7L6V211yxF2OA3/QxD8sPRSYRCRzkBRbwsDGtNlwXTGK3 huCZuy5UfmY+BDVklZR04kGUw+Vd5SHEmlM2DWG212TY+BXQY3AkHrpMopUNASj317WVRru5BLRn dBUUew0xeBd1vTEzFu+vd+3h6hqe2whg+uYNOSHRjfbmjSEDCfEpkzoMjgPkHHeC1A3jFuhc8bhX Rb/1zVITyxetk3OxVy0cqomJGVfa5ZlW2FffatOC/ZSI7oc3fMsh3s2PbhFbJDt679x8k6HMzxmg hgk/EtEC0idzWjYYYoHvMijQOvMUoxBCxuJyWbeJeEp4o+5wGNqumZ9bHrMSSv/eScB6ajmtRC6F PKJmX4WRq3b1Sqre+eB316v6ni9H8pf2Z6YyHFVbMmRyM4YnSZBGV3YCATVr9GQOHqnooCItKV/o YGuzPZnWKTXwNluV64uqTO4nicJPZOuelA4g0W6OOzQ/7jJgkkPkiJrWv4Xx9F3IjT9qAfROAtrf xAaAZi+gDDjNPtl/gmn41rREQ0E4Z9A2yAzYIVpc+bAhPHb0rOJpBJUwBjAUlxvZnYeQkbjtqfxL vH3RZTJLn6ROdTttht8CT1lRPI0g7DulTgiX85QZqzAdhLd/RIj3IZ9o/CF0meAPqtkgmGmAUizi MG93ytp6jDiNq/7NGyj25g1zavCTrCvMdcHekP7j1Ahuy6paCFBLzNUB01SZ0NAZ0whyQ7claNwI /YKfZZ7VMlEZci1vybCHYOxmvwrp8+w1PcHebS27uel6Qe2d9neg3XVvm+1J+2AtoZ4HbSQhf0Gw oZCCTAduUtaIPujU8e4RergtBS0WqOy9KJdvr8XzFRAj7PTGoTvbeLTrgbjMiEkbih9xRHLcBCJi 0WmWeWPuGXh+PdpJARWc8kyhhCyhlgZHXMTSuC5fShJB/t4On0JjLIZVQfTBB4A+lRI/DIrnshXH Y3U+CLg+WrU8ziy+s20KlOAHqx7HKkfnVz1JqGfO2d8gkFjgZ2KXRtk85Ju1jeOgE0Bmo6wvYlbq Y/Fy08jA8aj/1JAeukMeM9fs6mUk7M9D+eUqM9nQ3Iw///kXzqcsGNNhi571GfuAjhRKuQx2jzgg 8/c0g7tr8Pw0MM08d1p/JeMrKOWoYGv+9QwNgSDFJUJthdypB4Ob3et/LRXlm7arb7pX//5fEkjB 7rDBR3zSHRrx3wSUqfuaIo6hqItFsEcbicYk4GTSBkSBFe0vmsf/AgwL4Obu9tlisQZYPeBti8UI s3eOkNPx4vxRnPFXZONUnF27J5A+CkYmNVGjZCirWMopsVrye/tSFl0B2jHiRCIIg/EFgDQaooxs IHdzvJNbbne46PbN/rCvOcc1NUp5pmyG7Uqd0Okz1JyLzhcLNJsuFiz0TmyGBxzJGiRJ2TktW3F3 XTsJ4K0pUh/P5GfBIH/DpLAud28LcREAtEpo01BfSolWJwMfAJR2LuPGXnDERG6LQE75gmDClX7r d5APHsDOnBW+qsFKY9ToM/yJwoIIFkAU4k8c9TDkGg4UR/KBONaiZIHJ2IdcIcT5jNFDeWfwL/m3 CF/0+2OM9P5uoaA1F/fq5tnbzeNSWQ27JIuqFTb2rsRdskqo9QYVOjtOJyeTt1OJENU6fQbnFKKm fS3Yb3u3OUZQLj1NTp647MGd+7FR63oDE99VkpdFt3EyO8N/BfqyakS6k45Y5uCsetaSAjcVT4PV SuzUgtkqf1QnyV5a9WvvybMHr+qENtcuigWMu5QhxDUHVwUcDk0FzUV8yeQIUQ8bRMW5WNXrjqDI JZutVawviZpiIgRrWG6MproGklusHpLmatPuOI2zxtOkvv1UFnjYbvav/628MinIqlzdHF7lv6Vr U36V0OpD2+zWfHkQx8xTiw9eKfjBq7aFKBKEXjSD1EpKNHhb7hqAldENk8MYYsVGr2G4d/nP1r6B tbHJChhVIAiO41fqzwwzqBofCCMlvUICWrHmncKtjd4gLwFkJxgz/OkQjSoYcr4UMvs+9ewf4H4x Tbl4qgP5n1KQ/zRtNoC+IrabyjT7hyJlM0qg+xvq/ubQYMaQkzrHwqGuq5qT3pzU9W5otC877sRJ omlBT7Sgy2ux/KmNWTACwiwFGYix0s+6LRoMItyRGZlFN/FcR2RXkGg6Qyd/8ZBgG0l2mZfiNFRi K52Zp1mXg5SxhR/Fw+eSy2X/NQcfz7oq4ju8oiWG8aCs3EXnLZ6jBz15XcGYO76qw3OlsohvKl+1 YkpV04FjGM5SOuz0DHZMS37q1sR3BQ3Z3kCNFz3i7iTjXd4zmP3F0GgSMV58ogyPb3/BxdUAhwAY NPSqIyiGIKK2WUJp+qObzri44OjXuCCJkKfh36DuwU/lvrmth3OvQZqp2gNKV08JlzNo8wk2+ATa eUKNPNm0vXsC3BLbG/ZP3qYfVauHfMwRVy1op5YHSrhl0ZC2M4An74EoyMJKC6OV2e5+Emx6PHV+ QIbC8LoQ1AtRsuL2h84Q2htXkDDLxW3VIJrwVQnXtsTkLQHEX0bfiDvvxdckm5GOjJxkNFiEdpwZ uG51HXjVwQWOjx5llMXCI8JtFWTQp8ze1HeXcKm3XVEBjhY56FBFNxXVGdFGCs4+WE9sFf7XVfiE gCC9MK4A5qycuBjMZYX5xLH5Ef9SYNLuxAntlO6g+3a5gv2aJsZi0blHj4RXfAn+wF/x3svGbei7 sLOYbA0FPmpNfjVU4o9NLFV9cbhCvZ/1NdIrfTWxcgitH9D1CRLgoheP7QiAIg6mIk4TBkoAs4p4 THJRVyVHgyN3qgWc4SydGX3PEV/ByibM4zScl4WMKD09dxk7Tk3YmXmkBpyfhghoUK9c/6HcOkvy ZlqDH0hEcfbZ+0IR7SBCGQX+O8uVNpGWB9xXrSGqrzuNaC8+SQOovZFUeKDDx8mqamp4ITGI4AcF zmiRpV9/8/0P33z15atvvp7wpYIu6nWJ+dXknSZRVZwbBAMRbcZgeckFByH1iWInkTERZotldxul Ya9aLD+Vf0mjXehdrQqLRWPhI40AuugFfgyF03AUvNngPYkwpzV4nxr7SLJSeNvwN7l3xlcTS20L X5GtLFBAWYXKnXTI1aMJmcfMkUKtgYWJL3+1X9v49QI9fMXT69AxI9JHij4XJNzpXlf1nnhfWtCS 4LpyYd6jUOnOL03rHyr8IrWzR2Bx0F+hrh3elM6u6aqXqT/7QvDiZQuPSio3cr4vEItI44W6bD2K rSPZDZ6gEFSKDjcgz3zrR/VGkAaVoLDgoPQKOfdaHGvtf6dbeT51UcmhNDlin1oaxDN6laJK0K3j 
VllsDmt+/9cVAi66oMLlPoQQszzsAmk8bR02gicEbUf4S0yW0WPb37GloS3kjv6JLkJMkuuNCLQp arUsQzmnlTLPfojXOyE0ZQeaU3w+ywHpL7VZUHylwx2geVPTco2I1T8PmVkNJ8nwHkVyOmnwuRv+ gtpuKDvC+l5YCHMbdxp6DHSZ0+QZT9oKW4AT09kqNl6pD6aB9QvBNjvrS3/YsqBNqkQjxQUgppNr ihdUYO41Ok8618iZdI15zybJZZyH9RgEpHwQKbSzEm7SenKYpXjvXTb3MmYPP4zoXY6weoHYRYVJ bS0aIChg7cnJCxBcemqkb8ZUwjqNON5JXyUsYUzn+EDGz9w02QvOwOq6HAcX5FcuA55we8c0W9+j FvfDD+mwes7uatJ+UZcI1HQo2UiwUXSgILghO+EH9uYsTYAE8XundQOhoDYn956D0FNOf9yBhB/t jqbd1dLDAdM475v9SiJvxlb1+DypUdlcsHeWbxAzTTB8TKYm00zeLyHaN7dFMZn1EeYtS5ieYwEq M58+L7999c0P3375+29++OG7H54ncmU8HnzuDZEehAvt/mpF9YaI//hzz3jMff/71797+a3hXTuh fNSMzj5yOCXmMkI/7voW1AuAQ3aN/k8y0Ar1+k+kVQLTLjhNaEyxA2QL3h82JWKRcaBPl0CmB47e At19tPpVubtY2eXJfwjeNs4W9G4PPkPdha9qUQwlFl5xRBnw8yEI0QUeqqz7yYa6nriBZ/NcOn1T da8bZclaYGjUAvJWGdRoydvxXlOsk1KHyh8Nq3o9ShOa6AkRsBQEelU31QjVo46p7Iz5rnTjrxV0 EL4sQWmKKHaWL/dZcg5IbnXHiW7ZWEigW+ApYAHNKX5Lw5BHazJJrSwgkshdIXTipXrVmSB5PrjT Uk75UM0yRhj2rUxsLE0dZ1RHfI4yAVtkkoOA5no2h01xEToQX6Cro4WaaOR2ZjZgBvjEXnPTnekN KdZFv4Du2h3l8uuOECDWsonPi0bg55eEYIK2XbMyGOjKC4zZgAelw036XFTkbuhnyOfiodEb4ruD O2IIBMdqzN7t7N1S0VQht5Wm6UDkBBGjnCvTrehXss2wsIDikjtsVyGnAfpVvS3g4wlDooe09563 bg7Z5M9DQaP1ZjgBG+cvYVUKLozzgo80toOHypGmPNVBpK2HGgy8oebivKEjkuB9zAWTgE9hRtFD Sb1UE+IqtFFaSPFg+xwy6SvfQ2dRuSkQNd8/n5C0N3vUgUr3Ee4SVCyuxK1+V4p3dZUfI/xjK+B2 FpCfTn5BKEMMqiMCcYtmLl71Do9xdsDWoIaE6FMURSKu4It2Vbl42Hav/Zw9rvp6v/vfPURBFVxP 01Kl2ts4SjeODyz3xYmsbN51r/2ewIFn3qcwejxF5FMWoFyGH9mjsz5ejYvL7Hr4zTAgBPBdyYPI rHeS9YtMsS1kN89pHzcF0xTWm8xYXfQ6xB9JRDTeMLTWkSpSneNUQi2Y6D8oBKV6HftS5DqlgZxT +833mB59/v4gJjYRgKWQpdkHugFNefIkeVRxEWAb9Je1uSESdepL0hQN8J+nkkd4zewuPJxvIgy+ rJF3G2e9x7roPDeIc8U50kKIys3lA6GaOdwpRJxhXtMBRDgg32gRn7/y9BRmUZmhkrSmmDWPkxSG GKRWNPf6DxsrAI/zdIrQB+htQl3zsyANrugtJBtCRXNauLZHUOvKoGYoNJt8ZB4NfuVuV+UeIOMh 8fJ4nHz/sL8WfXIWaGhCFhjJzvIQmAymTBtuH7YPC7NP9448Nl6vAWfQcuBAhDMoPH7Uif/NcbTc eKylj+b25JGQxZRV7Dk2Ih2CzYDwHkn6efI0QSOep15QRuYfB5bTqRFDY1WA5Su7fX3RqNj/SXDe MGYE1GeH7fq+Xh724KGTD3qf8eYxPuExRMRrQh0UrpoeW4rAakq10OUKshFs8Gh3+a88/oIxNd11 9MyGFT3Sv3LjgqvIGHXxUFcsW7IWbLpwLlbrna0vHzYt5D3PGDzbH8DZ1rNJoB6odQKotxQSqDOD ndQbP36LfSs9ITI5gDwWmHseCpR4+ms42hmnCoO947gcJ10Yb48HqBBSDyIrRGH1kVLAYUIwbj63 QVdCWxnSUJ2BZxC5M4B7C8loJUXVNQjTBzo85ZnbmchDdbIHl6JlSSj8qxrAdqAV6acuWnpbJllu Qf0uS9C9kZsRsxbKEXDVoscswvknu6ZiB1eAA3gQr4QjxC09oj53X/S4C5BBAAJ+YHo+vZBCSfr/ 06d4IVY/qdDRySTLJxMgaVBFxUkf95Eq57034R5c3azA2GZTUWyNkzWvZ16buq7qaqF3Tk4P4oqX 16W4cfLZ+WQObsvgN0HY5jA8N7MDaeTcuFK5EzTYqd/fbILyKfyez8Nn1AZft+aP2YVzbyNF8zBf vzPd1yTQGTXJ7xJRK/CCRueolUrSl2Z5wJ/jLAhS7O1SpkczPs+TD8U1laRHCCN91PGhxlbAdrTK oyKbxf/r+4a1daPECnYyzrn1PV+VcQtQmjomO9kBZh97OkoE33kWusPYM3hBsnrIeCZLSJ4fNLD5 d7Fk5Auun7k+bm7SRmvUYkOfha48vuHf1g8Xbbmr0NazO2w9CGOMXYEKXsnFul63g+AMDetAHi6B l1nmb7LsZaEHpAxPKHm60nl4WIISZQAXORnsMkgzgzBN4QeB6Rnn6zwcXJ1Ir8P8HVdaX1vRQs5Y 2LqmdHRHdoUkfK4EfHu33JXddbEWZ8nLY2s8QUBMsQRGyGH+n7mvl7IvzGkuik0iyaDNS0r5Efus RQ7vqMwS1t/5sws2pNfa1muytZoCfKTaQ2mrRZGNeE5U7bpsbKOPXQXi+CVkP5pdZDKfa8BfTtIn 6XgDWblWzU81IW0H/EBA+KA/9RX7449wvT5Jc0zTYPXpXPZSnyfYafL5WL7fwjPL/d7DRiJB8yHv Dki3g0vTjxrDZR5PHRnT0iRM5EuGSvuo6Lj2PaqIiXoL8TadoMmZSSKYpx4iBbYLjio2NAWzbRB/ DcoBU06QVZCWWHbjsoYwgZHNSmnVQ3I5FeyZoYTdNhUIKdmmv3CwNuCwRKcSBXdiQwAYGO1z904Y EH1pFdwu8tnk46fz98mqYOSSw1PLnv50CzWcYcYSLIxyljtyF8aQCsC43AdebBSON5vnvSa3e7ha ttUFvLI2wz68ofs+HBVPzog5Atm+EBTqgt6hm9bDDcI1MFAr5Kqot2hoYlwoti8xRYDWgr348uXv X//wzR/TfBB5Gke76J+lxEiKeC5bdjDrqOzqCMBCWJHh2+piyXHoqlY9utzmlH5x3Rbu1e1LJOKh KppT8iQ27VEPy7H/qLRDWv5/HNJBZ6IewtEndMbjCLye3mdTgtZPcLyO6WuULNFDFzSdxDRAUIRI 3BqMvt7o9Q9uAunkeOvlnuPU28t3aF76H5zYg3J+6unkZJo+gZ7dX0N33VGNGV0TS2Lj7K/I3CcC 
oGKr/cbpSNb3qJ2bm41BJTGlZKnh/ED0xJblJ+PzeY/3JhcLnGx6e3mmE7wEF5WEIQxZOZJxxCai XdbFgwTjc+haUJ4qbLnTD8OUnZjsNRa1ndsT2sv8axMKclwHlPDXC76Vt6X4Ozezr+87/9bmnv3G DLAmfZ+LYi44rwZrCmAAQZ9yNOmjipWYoCsR5fMRdJvnrhcpcDEykGB9U5cIjoaCF2CkvWvOeYSq ikfFM2DVggoq6gstcu4u99o1PA+hEJ8Vnb6noc+t6dGp42No30KmJ6FNMsfHDDtAKQN0IxDZTJCb vtRlGZtnZs/zfBS6D8V2gtdGbs+P7eM7wDq4biU6cXZrRr0ABfkBG7dAgY4tz88uJIXdR12B/yPD GLwCb80EQ9mrhy3JyyMjItRP3ANS+C0PW1lwuJiSc1cxBBohy2agb5UuWAEYrQfTNnQfuBwfmnpV JQ89NEUl7geDm9vX/8bJT3Bz9+rr/0AQEtt6N6b3EMa4PKHoPPapRcj8GsJ+m27djZI3b8T3YrXf vMEHOX68rMQnlahYITxzaqE4VsSvBIg4ivsw5rkaoAs6gN2IqDeBOCgMnhA50lE8cv2ywiCZh46j 1o0YdQr1VqtqriNijACYDSYLrVTYd/4X0dJfNm1PhHrnzmKBuWkoxhRi0YVoKcPSed6pOyaMiofo cyAwtTxT6he3Ano8gAKLJDP62yDr0DX/U7PNZqkoCsxDFE/nTkX7KvfkXyUjyDsgSzkrRgUkSYfU 7Tg/MXT3LHHSX3Cs9IN0OhcCFziQ6xSbCaSlkTeHytMRjLFWaU+ycCYOMpLIzBm0/c+4pF4Gnle5 1fkrnLJg9AAbTebeciqpBTlIg0v9XQng6oAhgIHlDdjugAlDyD5HRKMG3GipBcyaO4CvAQDN9iCY RtWSj0vzE8k7iMskMcoAhUWIwlmeFmbOEJl1Q4c5ftt+ZU4Wry8QikQb7qWlouzpHoDMI8eK1Pve EnI0kcRrQKypyq1mZWGJRGN62nV7kygIUVcFWAWxZ1sIBXTrXxoCJBcpXtXAFcXV/gIq6J0GNAEV ZlgdqD3Q+QuSBzqdpq9fvRh/ZrzlLlXQuDNpaMoZIdj4NldNG1kmGdx4v3/5XeZkIZLZVTiXIC6G feIl+hJIvFUUull08Md9xVvw4uts095FQDSArXB2MWt1c0z/EvzFasJxaDSGJ9jvaeM7fXRqZb3R 6V9OHF1Aa8BYy3JUfaJlD7S6dzWhkCf30kk6Fcz6E0z9ABKouLAUmKm4OFLzukl9LFLzaIoC9lJE oEP0DT+Fi1jlloo2LIjwBN0w0EJo6m5Itptuy4PR8Di9hUgWLNszvKBO2G3NwWlOOVWcEggIBzhk tOiFvbW6spbRO+UqRsPfNEhfXxdXRfJnFPHDy0LL7+wJ5/FzLgie1QKvRu3GYiaSka0qJRVvJXo8 yJ3EDwHTjzkLggQcklzj6qoI+g9/Aiv/EGTCYU6uMOVyeVgfwK2FhSK4iCEOCioFsbbtvFveWbR/ jjNf1yqrDrs7ep8/LMsNaRmhJ3G6V7u6rB4MdoHAosgrPDJh/ZAqa+ZnIBkndHmi3kBO1b5lda/T xMjnqEBUt+EQ/lAvM/o4J0HLUmE7q22jNm4LJbb0MVooyKKL9WC3cpFpOg3iOVQ1CPm3Jbies00+ y99rWx3aV+XcIb/LjtiLHjdtqrNgi4Vh34nwyODBLBapp+3MOHaAkv04UX+PonDN1mGdnRu1zudB aHD6NbQDvSxBwkLrzmISqKQTf985WvcBvnAYm8OddBkA0OL64SGdkBhJ78hCtxU7laoI7bb/paNK VK8dLkJJReTwC1kxnJgbps244vjk4vc36iDCSLvugJSzlxpAHqrFvxUL8oE3jnPogDqyc3AZTMfm yBoHMpoFmooePPXadGpMeiYY3LSAj527LPw4rI8fatWq51QUAo+l1XGQuV1E2d6bXYPFyutdf+Pe 8Z5I1XP/nogWx4cb1A7Lw65rbmvZ/whf7Lv6ssRswmGuZ3rfEtAsnJk1BEivIHHlXY3pK43GvWYC rgnA4485+gWzaypey+5pdlZNzUqCWiOX4YnfiAZc2O/LZscwg1Hs76B05+8WiYJY7D26Wkq9xDv2 ZJXyucP7DkfhVL/v5ONQAeFz5scLRE+Z9P+XZ8xmXCcco38GKiXTrJOT6d0o9/18Md9FagtP4d2J x8hzENxrL9vBiRLoadvUuyP2odEeDcHI8SMb5NVOyIfhqAHee60Nwo+5sOyGBGGsOi/UYkEySQaZ PHW2hjRN602JeCP6MdMyjg3qajXC2BONj4gvadQXDbRUATMkvJLytmxWmJD+timVXaaApxqvWP7m jTzBsFPdQOOf8DygXibTJ0NxNEcVctQm0J3Uz9FMM1cvlkfX47J65+VA+DWZbxBCFM5xLZ79f7Ii jvBtKDiCOZLgPS1WIZEiFwQpdIxdy1rb09b4xde5o7DmknH4QCqGdTzkQKW2Nktp/aap/tORoJFW zAeyrqakxKM+49JSF+KGsgs2OoQFTqk/M9QkatfDqm2nbU0jug1Sowd1/4YAfHP/+q/FVi3gAV0C 6795eP2pMnoOtg8hIEKs4CfAu/np9V9JS+22urj5+dX//Suy0uI1g5wZjsbFgQw/qIj5/uvfjlCl xMGpX+PP9e5Yyhw3R84/vAFWTOGd8LQPXS2qnAimjSRHqjRjbXgNKl4D0LaRs56Lp40ZqbbgRq4g dClVE+SXoKxlwwkXFiuc5YX6/pdTgLm1OlilMZPzM9+KvdjN31cXLze37Vs0cQxF1QY/DRUfUMNT rDX5vqsPVQtEQZhLYqT1DtkqLJUg9FUiGtJZ2jBWzICjVM8lD8FOzd89EkSfMDLsVpXThCp2dNdu t5heYvOQvPxOs347X9zlDgOt8EZEp0L4LNb+cgFA6dYlrfK6wt86xphHPxg4mqT4w1kZeQ2tcNTS m9qWXseFUt4c2pQbFUffRZsUUDKHlF8jX7llSUsoeEcxS91yIZxD6bL3HJRX9l5nL78b6/scjhLc zJeXubFEbPME+hOkbR6pDDdanlBB6OotQyuhCL3grUf1eeitqPK4W+86/1vlajmV/fkJqUjEPqKz OKKtsF46ehrmMdNE69xRfXK29xQKt01HuP+FEFtjxROY2yh2NPkHeJDoF0g8OvG014aRjkklG3Nx BfqSq/E0QFYrvpE5ffJ3bAIJ+0IQ9m+ri/9yaPZhfTA8HMyoMUTdTaWzLHvGdUerkmWpgwe2BtYr L9pbSErEyyMND3UVUBKR+8gd3AdjCLuG+i4o9bBT2YUILT8Qro14jl0NweGgsbptG0gpJEYlvRMg My6kIhHHoD2AzG80Qqn9IIp8LWRsAnJciVMJl1SHmV4tpnUSd3bDP9McnH4Hca5mcTSVkSPNY+7P puOzVbcGIQRmLA6QyfJaIH4Yy9rOnbfgP2bP5nZnKsjFzbfmPT6NpiW9ibrat0R8oCtdnt6MOW9u 
52+A5M8YliwhiYQ46Uri8LM4tMQtoY0iWMtDyoknZ4W21uU9hA3ruOhk7KbK0KrthpD2oNysmYtb T0gFlJEHjVWpl2tWbIuLq2QNYDw9Dwdx0VxpQcUo8a6iT9YrZFsoyVPIu4Q9K5bg5i+v/0al7Fpv hXR/88ur//2vSI7vDlsUvoH4t7v2tsFTspfOP4kojay7oacwpSJm30rpUulL9BhaKftctxvBSbaQ 0TjhosZXGoMeh/b3Yk1XRxHo3wl2nuSBqZ2VA3eIVsPEriRoOZi+BC1B55gRrMP0/IRkxf7KPRAc BjyNbNlF8Lgr8UTYcB+QoQq+E+NrVpgkTbwnxAq/xCw8h+4A16vVwgWhUIptkTxzSDMa+u6vpFaw M8ngdgP6hXiTrLf7h+SwaW4O9Vj6io5BiKboRj0bWx0mDoAY3NWh3JWC9GpM4HNRU3PhbNVnghms 2ivxMts2d+VO8Mjn58U5cCycBI7fH34azgMudhHyyuJ+5YwLmJlbJv7f2N31W2NnZULaUbI5rC/A IdNNISybNqKhdG/udS8bcbL1cEJhqFOs34rhBBLh+qb7rftAL1DSk30soB1CXpyqaQSiKuotZGoD xAxYCzkOBMZ9u4BQEHARC+jKSQROabnAS8db/a2V5VutSuAhxhUwb3PgcBQnpTInJYfs5l1CV909 tN2cvBZ5Q+X3PqM22rM1KPKHYIyZIoHehNGqlHjOCdkpYPN0SOmUiPvYgOMUxRmjxyEwwYXR3N6g bb8k09CmvlOFUu9GlkxTU5OBGOIyeW6RShgM4QTlB47XuHR45faY0sq8d6zMFsqJTUiBm8NWwl/P xPofNlUrZlPQYFhiWoPv+N7Ny4TNX1PzQ1FnhB5O4CnLek2nqrxGh/ougmqF/ui1ENR1U8e+sltp lqNXFaW6NZTRcP/g1QDsvS7FN5YckOh7grgQ5YSsEoRT6A4XRgecDdRmB7T7miUkr1A1JB/tlHkX k4S+sWj4zYLkHj1oWzNOKDkJL4GQZLVjB/xklmERFVTHh4ssnf34pzncSSCzaWZ9bzSmEmOZ+1vw DRO6V0yl3j1oeFesZB3c/I/X/w4UsDixpVje+rBvVjf/89W/+GsU0ihJKjvVQ8ABJMJL8BG6vwZV 8bgrL8H2vgTlIzxe4MosSUQbDL5crZKv4LcuuS5vWYEmuHa7g6xaFcXH4Z8SsrmqIUHs7oGC8QZk oiC8LfICkMeMZAM0btHbc9+Ue04oy+NB6Y6ytBLgBqAyshiIfwsiuao5MJFEwd+WXbPEEWe0q3lU HBQCO4xUSKfT82efuRxD/0qyPX+wC213hw0AnsGTbrPPjDpjo86Tz1yXo6pZ7h0ne3wuhhX0ULqg 3x1HZlppmg6sd9gygA3MxO9zEz/t4NYGP3zxhdc9TlLsnKBc8tnIQtPB9gGDBtowfYxWXjc2B+pB otA+OHoGJzn5gn2cOgnElYOPqCWCSGI2hihJumdwOC3Tn8Ag+5MGaTcQO0/OXhx0N3K/3D3Hbymf vYWX5+5iQODCQohvJ+kYrTJULaF6ttS1QRGbj4kKoMT9ssVcXe751D1hLv59fQvno1wu213FgHk4 qWHHY/BcltE7L6OZUxE6EFHnSQ4F5jVUgaV01thhb+53U3SoZBy4mGjA3Y0pjm324HmDYo3nbtit HJcxAYV8N5tgpXkMWUCU+8OrpGv2B+LdlNWbuHmyRk+rC8Af3NRxvAiTdftyAvNXoM1l2+2/XEL8 FXFazXS1qJB8SWVfCeb8hAqPMU80bGjoujFUZNc1rwHlzKDMj+IaA+3agYUDzL+xE/faEkoNbGDA w3pcYpd1N24vx+WYmvgQb43xvh3jERuLNsbGOYF/XmG6cPEVkz50IzgvZoQWsgsNi8AckUptrGV9 dRlXAdyz3XW7wududxAv5CVA++n5vgCYQ2stkstVfd9ciLe8eFmvKaeIeGljtKUWsFD4k1vLowHk hRKnaYszZ3RhKrGLH/7okiemR5YOXhVR0HBxXJzAHMl+OZWXsXFAKKqea5gu55sqWJ552J9wnevq K6aYb5AuRWNwYIXMAv0JmmSiDJc+dv/jNo9gUWlXvYtPZhwlErEDzYk8JfNWbYCbjfxbLyGZTEN3 O9d67DUUvsj1SOTYxBEQBLXHtTnoY/olWCp7DidKhUSpyKQ6ZPr0haZboCUEPm0TQb42QR0VqPAz AxNMz58WT825wynI9CApuD4vVIO6qdwTy7hNEsv4w6likX3n+lIFYCriJSJkhPttA4mmQn4ctpDr R6DIC9/+3byf3+18xU8RTQdX0jgj6lwBXpq9bLHhGHTzTscHlwmZFvTo0vcpR8huQS+M3gLn5Oz7 OMdeyRd8kG7+16H4f3bpRO4= """ import sys import base64 import zlib import imp class DictImporter(object): def __init__(self, sources): self.sources = sources def find_module(self, fullname, path=None): if fullname in self.sources: return self if fullname + '.__init__' in self.sources: return self return None def load_module(self, fullname): # print "load_module:", fullname from types import ModuleType try: s = self.sources[fullname] is_pkg = False except KeyError: s = self.sources[fullname + '.__init__'] is_pkg = True co = compile(s, fullname, 'exec') module = sys.modules.setdefault(fullname, ModuleType(fullname)) module.__file__ = "%s/%s" % (__file__, fullname) module.__loader__ = self if is_pkg: module.__path__ = [fullname] do_exec(co, module.__dict__) return sys.modules[fullname] def get_source(self, name): res = self.sources.get(name) if res is None: res = self.sources.get(name + '.__init__') return res if __name__ == "__main__": if sys.version_info >= (3, 0): exec("def do_exec(co, loc): exec(co, loc)\n") import pickle sources = sources.encode("ascii") # ensure bytes sources = pickle.loads(zlib.decompress(base64.decodebytes(sources))) else: 
import cPickle as pickle exec("def do_exec(co, loc): exec co in loc\n") sources = pickle.loads(zlib.decompress(base64.decodestring(sources))) importer = DictImporter(sources) sys.meta_path.append(importer) entry = "import py; raise SystemExit(py.test.cmdline.main())" do_exec(entry, locals()) mdp-3.3/mdp/test/test_AdaptiveCutoffNode.py000066400000000000000000000026301203131624700210210ustar00rootroot00000000000000from _tools import * def test_AdaptiveCutoffNode_smalldata(): """Test AdaptiveCutoffNode thoroughly on a small data set.""" # values from 0.1 to 0.6 and 0.2 to 0.7 x1 = numx.array([[0.1, 0.3], [0.3, 0.5], [0.5, 0.7]]) x2 = numx.array([[0.4, 0.6], [0.2, 0.4], [0.6, 0.2]]) x = numx.concatenate([x1, x2]) node = mdp.nodes.AdaptiveCutoffNode(lower_cutoff_fraction= 0.2, # clip first upper_cutoff_fraction=0.4) # last two node.train(x1) node.train(x2) node.stop_training() assert numx.all(x == node.data_hist) # test bound values assert numx.all(node.lower_bounds == numx.array([0.2, 0.3])) assert numx.all(node.upper_bounds == numx.array([0.4, 0.5])) # test execute x_test = (numx.array([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]])) x_clip = node.execute(x_test) x_goal = numx.array([[0.2, 0.3], [0.3, 0.4], [0.4, 0.5]]) assert (x_clip == x_goal).all() def test_AdaptiveCutoffNode_randomdata(): """Test AdaptiveCutoffNode on a large random data.""" node = mdp.nodes.AdaptiveCutoffNode(lower_cutoff_fraction= 0.2, upper_cutoff_fraction=0.4, hist_fraction=0.5) x1 = numx_rand.random((1000, 3)) x2 = numx_rand.random((500, 3)) x = numx.concatenate([x1, x2]) node.train(x1) node.train(x2) node.stop_training() node.execute(x) mdp-3.3/mdp/test/test_Convolution2DNode.py000066400000000000000000000115431203131624700206250ustar00rootroot00000000000000from __future__ import with_statement from _tools import * import py.test requires_signal = skip_on_condition( "not hasattr(mdp.nodes, 'Convolution2DNode')", "This test requires the 'scipy.signal' module.") @requires_signal def testConvolution2Dsimple(): # copied over from convolution_nodes.py im = numx_rand.rand(4, 3,3) node = mdp.nodes.Convolution2DNode(numx.array([[[1.]]])) node.execute(im) @requires_signal def testConvolution2DNodeFunctionality(): filters = numx.empty((3,1,1)) filters[:,0,0] = [1.,2.,3.] x = numx_rand.random((10,3,4)) for mode in ['valid', 'same', 'full']: for boundary in ['fill', 'wrap', 'symm']: node = mdp.nodes.Convolution2DNode(filters, approach='linear', mode=mode, boundary=boundary, output_2d=False) y = node.execute(x) assert_equal(y.shape, (x.shape[0], 3, x.shape[1], x.shape[2])) for n_flt in range(3): assert_array_equal(x*(n_flt+1.), y[:,n_flt,:,:]) @requires_signal def testConvolution2DNode_2D3Dinput(): filters = numx.empty((3,1,1)) filters[:,0,0] = [1.,2.,3.] # 1) input 2D/3D x = numx_rand.random((10,12)) node = mdp.nodes.Convolution2DNode(filters, approach='linear', input_shape=(3,4), output_2d=False) y = node.execute(x) assert_equal(y.shape, (x.shape[0], 3, 3, 4)) x = numx.random.random((10,3,4)) node = mdp.nodes.Convolution2DNode(filters, output_2d=False) y = node.execute(x) assert_equal(y.shape, (x.shape[0], 3, 3, 4)) # 2) output 2D/3D x = numx.random.random((10,12)) node = mdp.nodes.Convolution2DNode(filters, approach='linear', input_shape=(3,4), output_2d=True) y = node.execute(x) assert_equal(y.shape, (x.shape[0], 3*3*4)) for i in range(3): assert_array_equal(x*(i+1.), y[:,i*12:(i+1)*12]) @requires_signal def testConvolution2DNode_fft(): filters = numx.empty((3,1,1)) filters[:,0,0] = [1.,2.,3.] 
x = numx.random.random((10,3,4)) for mode in ['valid', 'same', 'full']: node = mdp.nodes.Convolution2DNode(filters, approach='fft', mode=mode, output_2d=False) y = node.execute(x) assert_equal(y.shape, (x.shape[0], 3, x.shape[1], x.shape[2])) for n_flt in range(3): assert_array_almost_equal(x*(n_flt+1.), y[:,n_flt,:,:], 6) # with random filters x = numx.random.random((10,30,20)) filters = numx.random.random((3,5,4)) for mode in ['valid', 'same', 'full']: node_fft = mdp.nodes.Convolution2DNode(filters, approach='fft', mode=mode, output_2d=False) node_lin = mdp.nodes.Convolution2DNode(filters, approach='linear', mode=mode, boundary='fill', output_2d=False) y_fft = node_fft.execute(x) y_lin = node_lin.execute(x) assert_array_almost_equal(y_fft, y_lin, 6) @requires_signal def testConvolution2DNode_in_Flow(): filters = numx.empty((3,1,1)) filters[:,0,0] = [1.,2.,3.] # with 3D input x = numx.random.random((10,3,4)) node = mdp.nodes.Convolution2DNode(filters, output_2d=True) flow = mdp.Flow([node, mdp.nodes.PCANode(output_dim=3)]) flow.train(x) flow.execute(x) # with 2D input x = numx.random.random((10,12)) node = mdp.nodes.Convolution2DNode(filters, input_shape=(3,4), output_2d=True) flow = mdp.Flow([mdp.nodes.IdentityNode(), node, mdp.nodes.PCANode(output_dim=3)]) flow.train(x) flow.execute(x) @requires_signal def testConvolution2DNode_arguments(): # filters must be 3D filters = numx.random.random((5,4)) py.test.raises(mdp.NodeException, "mdp.nodes.Convolution2DNode(filters)") filters = numx.random.random((5,4,2,2)) py.test.raises(mdp.NodeException, "mdp.nodes.Convolution2DNode(filters)") # filters must be array filters = [[[2.]]] py.test.raises(mdp.NodeException, "mdp.nodes.Convolution2DNode(filters)") filters = numx.random.random((1,1,1)) with py.test.raises(mdp.NodeException): mdp.nodes.Convolution2DNode(filters, approach='bug') with py.test.raises(mdp.NodeException): mdp.nodes.Convolution2DNode(filters, mode='bug') with py.test.raises(mdp.NodeException): mdp.nodes.Convolution2DNode(filters, boundary='bug') @requires_signal def testConvolution2DNode_shape_mismatch(): x = numx.random.random((10,60)) filters = numx.random.random((3,5,4)) node = mdp.nodes.Convolution2DNode(filters, input_shape=(3,2)) with py.test.raises(mdp.NodeException): node.execute(x) mdp-3.3/mdp/test/test_CutoffNode.py000066400000000000000000000004051203131624700173410ustar00rootroot00000000000000from _tools import * def test_CutoffNode(): node = mdp.nodes.CutoffNode(-1.5, 1.2) x = numx.array([[0.1, 0, -2, 3, 1.2, -1.5, -3.33]]) y_ref = numx.array([[0.1, 0, -1.5, 1.2, 1.2, -1.5, -1.5]]) y = node.execute(x) assert numx.all(y==y_ref) mdp-3.3/mdp/test/test_EtaComputerNode.py000066400000000000000000000006271203131624700203510ustar00rootroot00000000000000import mdp from _tools import * def testEtaComputerNode(): tlen = 1e5 t = numx.linspace(0,2*numx.pi,tlen) inp = numx.array([numx.sin(t), numx.sin(5*t)]).T # create node to be tested ecnode = mdp.nodes.EtaComputerNode() ecnode.train(inp) # etas = ecnode.get_eta(t=tlen) # precision gets better with increasing tlen assert_array_almost_equal(etas, [1, 5], decimal=4) mdp-3.3/mdp/test/test_FANode.py000066400000000000000000000046601203131624700164100ustar00rootroot00000000000000from _tools import * def test_FANode(): d = 10 N = 5000 k = 4 mu = uniform((1, d))*3.+2. 
sigma = uniform((d,))*0.01 #A = utils.random_rot(d)[:k,:] A = numx_rand.normal(size=(k,d)) # latent variables y = numx_rand.normal(0., 1., size=(N, k)) # observations noise = numx_rand.normal(0., 1., size=(N, d)) * sigma x = mult(y, A) + mu + noise fa = mdp.nodes.FANode(output_dim=k, dtype='d') fa.train(x) fa.stop_training() # compare estimates to real parameters assert_array_almost_equal(fa.mu[0,:], mean(x, axis=0), 5) assert_array_almost_equal(fa.sigma, std(noise, axis=0)**2, 2) # FA finds A only up to a rotation. here we verify that the # A and its estimation span the same subspace AA = numx.concatenate((A,fa.A.T),axis=0) u,s,vh = utils.svd(AA) assert sum(s/max(s)>1e-2)==k, \ 'A and its estimation do not span the same subspace' y = fa.execute(x) fa.generate_input() fa.generate_input(10) fa.generate_input(y) fa.generate_input(y, noise=True) # test that noise has the right mean and variance est = fa.generate_input(numx.zeros((N, k)), noise=True) est -= fa.mu assert_array_almost_equal(numx.diag(numx.cov(est, rowvar=0)), fa.sigma, 3) assert_almost_equal(numx.amax(abs(numx.mean(est, axis=0)), axis=None), 0., 3) est = fa.generate_input(100000) assert_array_almost_equal_diff(numx.cov(est, rowvar=0), mdp.utils.mult(fa.A, fa.A.T), 1) def test_FANode_indim(): # FANode uses two slightly different initialization for input_dims # larger or smaller than 200 x = numx_rand.normal(size=(5000, 10)) mdp.nodes.FANode(output_dim=1)(x) x = numx_rand.normal(size=(5000, 500)) mdp.nodes.FANode(output_dim=1)(x) def test_FANode_singular_cov(): x = numx.array([[ 1., 1., 0., 0., 0.], [ 0., 1., 1., 0., 0.], [ 0., 1., 0., 0., 0.], [ 0., 1., 1., 0., 0.], [ 0., 1., 0., 0., 1.], [ 0., 1., 0., 1., 0.], [ 0., 1., 0., 0., 0.], [ 1., 1., 0., 0., 0.], [ 1., 1., 0., 0., 0.], [ 0., 1., 0., 1., 1.]]) fanode = mdp.nodes.FANode(output_dim=3) fanode.train(x) # the matrix x is singular py.test.raises(mdp.NodeException, "fanode.stop_training()") mdp-3.3/mdp/test/test_FDANode.py000066400000000000000000000033261203131624700165120ustar00rootroot00000000000000import py.test from _tools import * def test_FDANode(): mean1 = [0., 2.] mean2 = [0., -2.] 
std_ = numx.array([1., 0.2]) npoints = 50000 rot = 45 # input data: two distinct gaussians rotated by 45 deg def distr(size): return normal(0, 1., size=(size)) * std_ x1 = distr((npoints,2)) + mean1 utils.rotate(x1, rot, units='degrees') x2 = distr((npoints,2)) + mean2 utils.rotate(x2, rot, units='degrees') x = numx.concatenate((x1, x2), axis=0) # labels cl1 = numx.ones((x1.shape[0],), dtype='d') cl2 = 2.*numx.ones((x2.shape[0],), dtype='d') classes = numx.concatenate((cl1, cl2)) # shuffle the data perm_idx = numx_rand.permutation(classes.shape[0]) x = numx.take(x, perm_idx, axis=0) classes = numx.take(classes, perm_idx) flow = mdp.Flow([mdp.nodes.FDANode()]) py.test.raises(mdp.TrainingException, flow[0].train, x, numx.ones((2,))) flow.train([[(x, classes)]]) fda_node = flow[0] assert fda_node.tlens[1] == npoints assert fda_node.tlens[2] == npoints m1 = numx.array([mean1]) m2 = numx.array([mean2]) utils.rotate(m1, rot, units='degrees') utils.rotate(m2, rot, units='degrees') assert_array_almost_equal(fda_node.means[1], m1, 2) assert_array_almost_equal(fda_node.means[2], m2, 2) y = flow.execute(x) assert_array_almost_equal(mean(y, axis=0), [0., 0.], decimal-2) assert_array_almost_equal(std(y, axis=0), [1., 1.], decimal-2) assert_almost_equal(mult(y[:,0], y[:,1].T), 0., decimal-2) v1 = fda_node.v[:,0]/fda_node.v[0,0] assert_array_almost_equal(v1, [1., -1.], 2) v1 = fda_node.v[:,1]/fda_node.v[0,1] assert_array_almost_equal(v1, [1., 1.], 2) mdp-3.3/mdp/test/test_GaussianClassifier.py000066400000000000000000000044011203131624700210640ustar00rootroot00000000000000from _tools import * def testGaussianClassifier_train(): nclasses = 10 dim = 4 npoints = 10000 covs = [] means = [] node = mdp.nodes.GaussianClassifier() for i in xrange(nclasses): cov = utils.symrand(uniform((dim,))*dim+1) mn = uniform((dim,))*10. x = normal(0., 1., size=(npoints, dim)) x = mult(x, utils.sqrtm(cov)) + mn x = utils.refcast(x, 'd') cl = numx.ones((npoints,))*i mn_estimate = mean(x, axis=0) means.append(mn_estimate) covs.append(numx.cov(x, rowvar=0)) node.train(x, cl) try: node.train(x, numx.ones((2,))) assert False, 'No exception despite wrong number of labels' except mdp.TrainingException: pass node.stop_training() for i in xrange(nclasses): lbl_idx = node.labels.index(i) assert_array_almost_equal_diff(means[i], node.means[lbl_idx], decimal-1) assert_array_almost_equal_diff(utils.inv(covs[i]), node.inv_covs[lbl_idx], decimal-2) def testGaussianClassifier_labellistbug(): gc = mdp.nodes.GaussianClassifier() # this was failing as of MDP-2.5-309-gefa0f9d! gc.train(mdp.numx_rand.random((50, 3)), [+1] * 50) def testGaussianClassifier_label(): mean1 = [0., 2.] mean2 = [0., -2.] 
std_ = numx.array([1., 0.2]) npoints = 100 rot = 45 # input data: two distinct gaussians rotated by 45 deg def distr(size): return normal(0, 1., size=(size)) * std_ x1 = distr((npoints,2)) + mean1 utils.rotate(x1, rot, units='degrees') x2 = distr((npoints,2)) + mean2 utils.rotate(x2, rot, units='degrees') x = numx.concatenate((x1, x2), axis=0) # labels cl1 = numx.ones((x1.shape[0],), dtype='i') cl2 = 2*numx.ones((x2.shape[0],), dtype='i') classes = numx.concatenate((cl1, cl2)) # shuffle the data perm_idx = numx_rand.permutation(classes.shape[0]) x = numx.take(x, perm_idx, axis=0) classes = numx.take(classes, perm_idx, axis=0) node = mdp.nodes.GaussianClassifier() node.train(x, classes) classification = node.label(x) assert_array_equal(classes, classification) mdp-3.3/mdp/test/test_GeneralExpansionNode.py000066400000000000000000000047751203131624700213730ustar00rootroot00000000000000import mdp from _tools import * requires_scipy = skip_on_condition( "not mdp.numx_description == 'scipy'", "This test requires 'scipy'") def dumb_quadratic_expansion(x): dim_x = x.shape[1] return numx.asarray([(x[i].reshape(dim_x,1) * x[i].reshape(1,dim_x)).flatten() for i in range(len(x))]) def testGeneralExpansionNode(): samples = 2 input_dim = 10 funcs = [lambda x:x, lambda x: x**2, dumb_quadratic_expansion] cen = mdp.nodes.GeneralExpansionNode(funcs) input = numx.random.normal(size=(samples, input_dim)) out = cen.execute(input) assert_array_almost_equal(out[:, 0:input_dim], input, 6, "incorrect constant expansion") assert_array_almost_equal(out[:, input_dim:2*input_dim], input ** 2, 6, "incorrect constant expansion") assert_array_almost_equal(out[:, 2*input_dim:], dumb_quadratic_expansion(input), 6, "incorrect constant expansion") assert cen.expanded_dim(input_dim) == 2 * input_dim + input_dim**2, "expanded_dim failed" assert_array_almost_equal(cen.output_sizes(input_dim), numx.array([input_dim, input_dim, input_dim*input_dim]), 6, "output_sizes failed") @requires_scipy def testGeneralExpansionNode_inverse(): samples = 2 input_dim = 10 funcs = [lambda x:x, lambda x: x**2, dumb_quadratic_expansion] cen = mdp.nodes.GeneralExpansionNode(funcs) input = numx.random.normal(size=(samples, input_dim)) out = cen.execute(input) app_input = cen.pseudo_inverse(out, use_hint=True) assert_array_almost_equal_diff(input, app_input, 6, 'inversion not good enough by use_hint=True') # ??? # testing with use_hint = False is tricky, it often fails. 
# we try 20 times in a row and hope for the best trials = 20 for trial in range(trials): cen = mdp.nodes.GeneralExpansionNode(funcs) input = numx.random.normal(size=(samples, input_dim)) out = cen.execute(input) app_input = cen.pseudo_inverse(out, use_hint=False) maxdiff = max(numx.ravel(abs(app_input-input)))/\ max(max(abs(numx.ravel(app_input))),max(abs(numx.ravel(input)))) cond = maxdiff < 10** (-4) if cond: break assert cond, 'inversion not good enough by use_hint=False' mdp-3.3/mdp/test/test_GrowingNeuralGasNode.py000066400000000000000000000042151203131624700213340ustar00rootroot00000000000000from _tools import * def _uniform(min_, max_, dims): return uniform(dims)*(max_-min_)+min_ def test_GrowingNeuralGasNode(): ### test 1D distribution in a 10D space # line coefficients dim = 10 npoints = 1000 const = _uniform(-100,100,[dim]) dir = _uniform(-1,1,[dim]) dir /= utils.norm2(dir) x = _uniform(-1,1,[npoints]) data = numx.outer(x, dir)+const # train the gng network gng = mdp.nodes.GrowingNeuralGasNode(start_poss=[data[0,:],data[1,:]]) gng.train(data) gng.stop_training() # control that the nodes in the graph lie on the line poss = gng.get_nodes_position()-const norms = numx.sqrt(numx.sum(poss*poss, axis=1)) poss = (poss.T/norms).T assert max(numx.minimum(numx.sum(abs(poss-dir),axis=1), numx.sum(abs(poss+dir),axis=1)))<1e-7, \ 'At least one node of the graph does lies out of the line.' # check that the graph is linear (no additional branches) # get a topological sort of the graph topolist = gng.graph.topological_sort() deg = map(lambda n: n.degree(), topolist) assert_equal(deg[:2],[1,1]) assert_array_equal(deg[2:], [2 for i in xrange(len(deg)-2)]) # check the distribution of the nodes' position is uniform # this node is at one of the extrema of the graph x0 = numx.outer(numx.amin(x, axis=0), dir)+const x1 = numx.outer(numx.amax(x, axis=0), dir)+const linelen = utils.norm2(x0-x1) # this is the mean distance the node should have dist = linelen/poss.shape[0] # sort the node, depth first nodes = gng.graph.undirected_dfs(topolist[0]) poss = numx.array(map(lambda n: n.data.pos, nodes)) dists = numx.sqrt(numx.sum((poss[:-1,:]-poss[1:,:])**2, axis=1)) assert_almost_equal(dist, mean(dists), 1) # # test the nearest_neighbor function start_poss = [numx.asarray([2.,0]), numx.asarray([-2.,0])] gng = mdp.nodes.GrowingNeuralGasNode(start_poss=start_poss) x = numx.asarray([[2.,0]]) gng.train(x) nodes, dists = gng.nearest_neighbor(numx.asarray([[1.,0]])) assert_equal(dists[0],1.) 
assert_array_equal(nodes[0].data.pos,numx.asarray([2,0])) mdp-3.3/mdp/test/test_HistogramNode.py000066400000000000000000000012111203131624700200440ustar00rootroot00000000000000from _tools import * def testHistogramNode_nofraction(): """Test HistogramNode with fraction set to 1.0.""" node = mdp.nodes.HistogramNode() x1 = numx.array([[0.1, 0.2], [0.3, 0.5]]) x2 = numx.array([[0.3, 0.6], [0.2, 0.1]]) x = numx.concatenate([x1, x2]) node.train(x1) node.train(x2) assert numx.all(x == node.data_hist) def testHistogramNode_fraction(): """Test HistogramNode with fraction set to 0.5.""" node = mdp.nodes.HistogramNode(hist_fraction=0.5) x1 = numx_rand.random((1000, 3)) x2 = numx_rand.random((500, 3)) node.train(x1) node.train(x2) assert len(node.data_hist) < 1000 mdp-3.3/mdp/test/test_HitParadeNode.py000066400000000000000000000043101203131624700177530ustar00rootroot00000000000000import mdp from _tools import * def testHitParadeNode(): signal = uniform((300,3)) gap = 5 signal[10,0], signal[120,1], signal[230,2] = 4,3,2 signal[11,0], signal[121,1], signal[231,2] = -4,-3,-2 hit = mdp.nodes.HitParadeNode(1,gap,3) hit.train(signal[:100,:]) hit.train(signal[100:200,:]) hit.train(signal[200:300,:]) maxima, max_ind = hit.get_maxima() minima, min_ind = hit.get_minima() assert_array_equal(maxima,numx.array([[4,3,2]])) assert_array_equal(max_ind,numx.array([[10,120,230]])) assert_array_equal(minima,numx.array([[-4,-3,-2]])) assert_array_equal(min_ind,numx.array([[11,121,231]])) # test integer type: signal = (uniform((300,3))*10).astype('i') gap = 5 signal[10,0], signal[120,1], signal[230,2] = 40,30,20 signal[11,0], signal[121,1], signal[231,2] = -40,-30,-20 hit = mdp.nodes.HitParadeNode(1,gap,3) hit.train(signal[:100,:]) hit.train(signal[100:200,:]) hit.train(signal[200:300,:]) maxima, max_ind = hit.get_maxima() minima, min_ind = hit.get_minima() assert_array_equal(maxima,numx.array([[40,30,20]])) assert_array_equal(max_ind,numx.array([[10,120,230]])) assert_array_equal(minima,numx.array([[-40,-30,-20]])) assert_array_equal(min_ind,numx.array([[11,121,231]])) def testOneDimensionalHitParade(): signal = (uniform(300)-0.5)*2 gap = 5 # put some maxima and minima signal[0] , signal[10] , signal[50] = 1.5, 1.4, 1.3 signal[1] , signal[11] , signal[51] = -1.5, -1.4, -1.3 # put two maxima and two minima within the gap signal[100], signal[103] = 2, 3 signal[110], signal[113] = 3.1, 2 signal[120], signal[123] = -2, -3.1 signal[130], signal[133] = -3, -2 hit = mdp.nodes._OneDimensionalHitParade(5,gap) hit.update((signal[:100],numx.arange(100))) hit.update((signal[100:200],numx.arange(100,200))) hit.update((signal[200:300],numx.arange(200,300))) maxima,ind_maxima = hit.get_maxima() minima,ind_minima = hit.get_minima() assert_array_equal(maxima,[3.1,3,1.5,1.4,1.3]) assert_array_equal(ind_maxima,[110,103,0,10,50]) assert_array_equal(minima,[-3.1,-3,-1.5,-1.4,-1.3]) assert_array_equal(ind_minima,[123,130,1,11,51]) mdp-3.3/mdp/test/test_ICANode.py000066400000000000000000000040451203131624700165130ustar00rootroot00000000000000from _tools import * def verify_ICANode(icanode, rand_func = uniform, vars = 3, N=8000, prec = 3): dim = (N,vars) mat,mix,inp = get_random_mix(rand_func=rand_func,mat_dim=dim) icanode.train(inp) act_mat = icanode.execute(inp) cov = utils.cov2((mat-mean(mat,axis=0))/std(mat,axis=0), act_mat) maxima = numx.amax(abs(cov), axis=0) assert_array_almost_equal(maxima,numx.ones(vars),prec) def verify_ICANodeMatrices(icanode, rand_func = uniform, vars = 3, N=8000): dim = (N,vars) mat,mix,inp = 
get_random_mix(rand_func=rand_func, mat_dim=dim, avg = 0) icanode.train(inp) # test projection matrix act_mat = icanode.execute(inp) T = icanode.get_projmatrix() exp_mat = mult(inp, T) assert_array_almost_equal(act_mat,exp_mat,6) # test reconstruction matrix out = act_mat.copy() act_mat = icanode.inverse(out) B = icanode.get_recmatrix() exp_mat = mult(out, B) assert_array_almost_equal(act_mat,exp_mat,6) def rand_with_timestruct(size=None): T, N = size # do something special only if T!=N, otherwise # we were asked to generate a mixing matrix if T == N: return uniform(size=size) # create independent sources src = uniform((T,N))*2-1 fsrc = numx_fft.rfft(src,axis=0) # enforce different speeds for i in xrange(N): fsrc[(i+1)*(T//20):,i] = 0. src = numx_fft.irfft(fsrc,axis=0) return src def test_CuBICANode_batch(): ica = mdp.nodes.CuBICANode(limit=10**(-decimal)) ica2 = ica.copy() verify_ICANode(ica) verify_ICANodeMatrices(ica2) def test_CuBICANode_telescope(): ica = mdp.nodes.CuBICANode(limit=10**(-decimal), telescope=1) ica2 = ica.copy() verify_ICANode(ica) verify_ICANodeMatrices(ica2) def test_TDSEPNode(): ica = mdp.nodes.TDSEPNode(lags=20, limit=1e-10) ica2 = ica.copy() verify_ICANode(ica, rand_func=rand_with_timestruct, vars=2, N=2**14, prec=2) verify_ICANodeMatrices(ica2, rand_func=rand_with_timestruct, vars=2, N=2**14) mdp-3.3/mdp/test/test_ISFANode.py000066400000000000000000000214031203131624700166360ustar00rootroot00000000000000from _tools import * def _std(x): return x.std(axis=0) # standard deviation without bias mx = mean(x, axis=0) mx2 = mean(x*x, axis=0) return numx.sqrt((mx2-mx)/(x.shape[0]-1)) def _cov(x,y=None): #return covariance matrix for x and y if y is None: y = x.copy() x = x - mean(x,0) x = x / _std(x) y = y - mean(y,0) y = y / _std(y) #return mult(numx.transpose(x),y)/(x.shape[0]-1) return mult(numx.transpose(x),y)/(x.shape[0]) def testISFANodeGivensRotations(): ncovs = 5 dim = 7 ratio = uniform(2).tolist() covs = [uniform((dim,dim)) for j in xrange(ncovs)] covs= mdp.utils.MultipleCovarianceMatrices(covs) covs.symmetrize() i = mdp.nodes.ISFANode(range(1, ncovs+1),sfa_ica_coeff=ratio, icaweights=uniform(ncovs), sfaweights=uniform(ncovs), output_dim = dim-1, dtype="d") i._adjust_ica_sfa_coeff() ratio = i._bica_bsfa # case 2: only one axis within output space # get contrast using internal function phi, cont1, min_, dummy =\ i._givens_angle_case2(dim-2,dim-1,covs,ratio,complete=1) # get contrast using explicit rotations cont2 = [] for angle in phi: cp = covs.copy() cp.rotate(angle,[dim-2,dim-1]) cont2.append(numx.sum(i._get_contrast(cp,ratio))) assert_array_almost_equal(cont1,cont2,decimal) # case 1: both axes within output space # get contrast using internal function phi,cont1, min_ , dummy =\ i._givens_angle_case1(0,1,covs,ratio,complete = 1) # get contrast using explicit rotations cont2 = [] for angle in phi: cp = covs.copy() cp.rotate(angle,[0,1]) cont2.append(numx.sum(i._get_contrast(cp,ratio))) assert abs(min_) < numx.pi/4, 'Estimated Minimum out of bounds' assert_array_almost_equal(cont1,cont2,decimal) def testISFANode_SFAPart(): # create independent sources mat = uniform((100000,3))*2-1 fmat = numx_fft.rfft(mat,axis=0) # enforce different speeds for i in xrange(3): fmat[(i+1)*5000:,i] = 0. mat = numx_fft.irfft(fmat,axis=0) _sfanode = mdp.nodes.SFANode() _sfanode.train(mat) src = _sfanode.execute(mat) # test with unmixed signals (i.e. 
the node should make nothing at all) out = mdp.nodes.ISFANode(lags=1, whitened=True, sfa_ica_coeff=[1.,0.])(src) max_cv = numx.diag(abs(_cov(out,src))) assert_array_almost_equal(max_cv, numx.ones((3,)),5) # mix linearly the signals mix = mult(src,uniform((3,3))*2-1) out = mdp.nodes.ISFANode(lags=1, whitened=False, sfa_ica_coeff=[1.,0.])(mix) max_cv = numx.diag(abs(_cov(out,src))) assert_array_almost_equal(max_cv, numx.ones((3,)),5) def testISFANode_ICAPart(): # create independent sources src = uniform((100000,3))*2-1 fsrc = numx_fft.rfft(src,axis=0) # enforce different speeds for i in xrange(3): fsrc[(i+1)*5000:,i] = 0. src = numx_fft.irfft(fsrc,axis=0) # enforce time-lag-1-independence src = mdp.nodes.ISFANode(lags=1, sfa_ica_coeff=[1.,0.])(src) out = mdp.nodes.ISFANode(lags=1, whitened=True, sfa_ica_coeff=[0.,1.])(src) max_cv = numx.diag(abs(_cov(out,src))) assert_array_almost_equal(max_cv, numx.ones((3,)),5) # mix linearly the signals mix = mult(src,uniform((3,3))*2-1) out = mdp.nodes.ISFANode(lags=1, whitened=False, sfa_ica_coeff=[0.,1.])(mix) max_cv = numx.diag(abs(_cov(out,src))) assert_array_almost_equal(max_cv, numx.ones((3,)),5) def testISFANode_3Complete(): # test transition from ica to sfa behavior of isfa # use ad hoc sources lag = 25 src = numx.zeros((1001,3),"d") idx = [(2,4),(80,1),(2+lag,6)] for i in xrange(len(idx)): i0, il = idx[i] src[i0:i0+il,i] = 1. src[i0+il:i0+2*il,i] = -1. src[:,i] -= mean(src[:,i]) src[:,i] /= std(src[:,i]) # test extreme cases # case 1: ICA out = mdp.nodes.ISFANode(lags=[1,lag], icaweights=[1.,1.], sfaweights=[1.,0.], output_dim=2, whitened=True, sfa_ica_coeff=[1E-4,1.])(src) cv = abs(_cov(src,out)) idx_cv = numx.argmax(cv,axis=0) assert_array_equal(idx_cv,[2,1]) max_cv = numx.amax(cv,axis=0) assert_array_almost_equal(max_cv, numx.ones((2,)),5) # case 2: SFA out = mdp.nodes.ISFANode(lags=[1,lag], icaweights=[1.,1.], sfaweights=[1.,0.], output_dim=2, whitened=True, sfa_ica_coeff=[1.,0.])(src) cv = abs(_cov(src,out)) idx_cv = numx.argmax(cv,axis=0) assert_array_equal(idx_cv,[2,0]) max_cv = numx.amax(cv,axis=0) assert_array_almost_equal(max_cv, numx.ones((2,)),5) def _ISFA_analytical_solution( nsources, nmat, dim, ica_ambiguity): # build a sequence of random diagonal matrices matrices = [numx.eye(dim, dtype='d')]*nmat # build first matrix: # - create random diagonal with elements # in [0, 1] diag = uniform(dim) # - sort it in descending order (in absolute value) # [large first] diag = numx.take(diag, numx.argsort(abs(diag)))[::-1] # - save larger elements [sfa solution] sfa_solution = diag[:nsources].copy() # - modify diagonal elements order to allow for a # different solution for isfa: # create index array idx = range(0,dim) # take the second slowest element and put it at the end idx = [idx[0]]+idx[2:]+[idx[1]] diag = numx.take(diag, idx) # - save isfa solution isfa_solution = diag[:nsources] # - set the first matrix matrices[0] = matrices[0]*diag # build other matrices diag_dim = nsources+ica_ambiguity for i in xrange(1,nmat): # get a random symmetric matrix matrices[i] = mdp.utils.symrand(dim) # diagonalize the subspace diag_dim tmp_diag = (uniform(diag_dim)-0.5)*2 matrices[i][:diag_dim,:diag_dim] = numx.diag(tmp_diag) # put everything in MultCovMat matrices = mdp.utils.MultipleCovarianceMatrices(matrices) return matrices, sfa_solution, isfa_solution def _ISFA_unmixing_error( nsources, goal, estimate): check = mult(goal[:nsources,:], estimate[:,:nsources]) error = (abs(numx.sum(numx.sum(abs(check),axis=1)-1))+ 
abs(numx.sum(numx.sum(abs(check),axis=0)-1))) error /= nsources*nsources return error def testISFANode_AnalyticalSolution(): nsources = 2 # number of time lags nmat = 20 # degree of polynomial expansion deg = 3 # sfa_ica coefficient sfa_ica_coeff = [1., 1.] # how many independent subspaces in addition to the sources ica_ambiguity = 2 # dimensions of expanded space dim = mdp.nodes._expanded_dim(deg, nsources) assert (nsources+ica_ambiguity) < dim, 'Too much ica ambiguity.' trials = 20 for trial in xrange(trials): # get analytical solution: # prepared matrices, solution for sfa, solution for isf covs,sfa_solution,isfa_solution=_ISFA_analytical_solution( nsources,nmat,dim,ica_ambiguity) # get contrast of analytical solution # sfasrc, icasrc = _get_matrices_contrast(covs, nsources, dim, # sfa_ica_coeff) # set rotation matrix R = mdp.utils.random_rot(dim) covs_rot = covs.copy() # rotate the analytical solution covs_rot.transform(R) # find the SFA solution to initialize ISFA eigval, SFARP = mdp.utils.symeig(covs_rot.covs[:,:,0]) # order SFA solution by slowness SFARP = SFARP[:,-1::-1] # run ISFA isfa = mdp.nodes.ISFANode(lags = covs_rot.ncovs, whitened=True, sfa_ica_coeff = sfa_ica_coeff, eps_contrast = 1e-7, output_dim = nsources, max_iter = 500, verbose = False, RP = SFARP) isfa.train(uniform((100,dim))) isfa.stop_training(covs = covs_rot.copy()) # check that the rotation matrix found by ISFA is R # up to a permutation matrix. # Unmixing error as in Tobias paper error = _ISFA_unmixing_error(nsources, R, isfa.RPC) if error < 1E-4: break assert error < 1E-4, 'None out of the %d trials succeded.' % trials mdp-3.3/mdp/test/test_KNNClassifier.py000066400000000000000000000033471203131624700177500ustar00rootroot00000000000000from _tools import * # These tests are basically taken from the GaussianClassifier. def testKNNClassifier_train(): nclasses = 10 dim = 4 npoints = 10000 covs = [] means = [] node = mdp.nodes.KNNClassifier() for i in xrange(nclasses): cov = utils.symrand(uniform((dim,))*dim+1) mn = uniform((dim,))*10. x = normal(0., 1., size=(npoints, dim)) x = mult(x, utils.sqrtm(cov)) + mn x = utils.refcast(x, 'd') cl = numx.ones((npoints,))*i mn_estimate = mean(x, axis=0) means.append(mn_estimate) covs.append(numx.cov(x, rowvar=0)) node.train(x, cl) try: node.train(x, numx.ones((2,))) assert False, 'No exception despite wrong number of labels' except mdp.TrainingException: pass node.stop_training() def testKNNClassifier_label(): mean1 = [0., 2.] mean2 = [0., -2.] 
std_ = numx.array([1., 0.2]) npoints = 100 rot = 45 # input data: two distinct gaussians rotated by 45 deg def distr(size): return normal(0, 1., size=(size)) * std_ x1 = distr((npoints,2)) + mean1 utils.rotate(x1, rot, units='degrees') x2 = distr((npoints,2)) + mean2 utils.rotate(x2, rot, units='degrees') x = numx.concatenate((x1, x2), axis=0) # labels cl1 = numx.ones((x1.shape[0],), dtype='i') cl2 = 2*numx.ones((x2.shape[0],), dtype='i') classes = numx.concatenate((cl1, cl2)) # shuffle the data perm_idx = numx_rand.permutation(classes.shape[0]) x = numx.take(x, perm_idx, axis=0) classes = numx.take(classes, perm_idx, axis=0) node = mdp.nodes.KNNClassifier() node.train(x, classes) classification = node.label(x) assert_array_equal(classes, classification) mdp-3.3/mdp/test/test_LinearRegressionNode.py000066400000000000000000000047201203131624700213720ustar00rootroot00000000000000import py.test from _tools import * INDIM, OUTDIM, TLEN = 5, 3, 10000 def train_LRNode(inp, out, with_bias): lrnode = mdp.nodes.LinearRegressionNode(with_bias) for i in xrange(len(inp)): lrnode.train(inp[i], out[i]) lrnode.stop_training() return lrnode def test_LinearRegressionNode(): # 1. first, without noise # 1a without bias term # regression coefficients beta = numx_rand.uniform(-10., 10., size=(INDIM, OUTDIM)) # input data x = numx_rand.uniform(-20., 20., size=(TLEN, INDIM)) # output of the linear model y = mult(x, beta) # train lrnode = train_LRNode([x], [y], False) # test results assert_array_almost_equal(lrnode.beta, beta, decimal) res = lrnode(x) assert_array_almost_equal(res, y, decimal) def test_LinearRegressionNode_with_bias(): # 1b with bias beta = numx_rand.uniform(-10., 10., size=(INDIM+1, OUTDIM)) x = numx_rand.uniform(-20., 20., size=(TLEN, INDIM)) y = mult(x, beta[1:,:]) + beta[0,:] lrnode = train_LRNode([x], [y], True) assert_array_almost_equal(lrnode.beta, beta, decimal) res = lrnode(x) assert_array_almost_equal(res, y, decimal) def test_LinearRegressionNode_with_noise(): # 2. with noise, multiple sets of input beta = numx_rand.uniform(-10., 10., size=(INDIM+1, OUTDIM)) x = numx_rand.uniform(-20., 20., size=(TLEN, INDIM)) y = mult(x, beta[1:,:]) + beta[0,:] inp = [numx_rand.uniform(-20., 20., size=(TLEN, INDIM)) for i in xrange(5)] out = [mult(x, beta[1:,:]) + beta[0,:] + numx_rand.normal(size=y.shape)*0.1 for x in inp] lrnode = train_LRNode(inp, out, True) assert_array_almost_equal(lrnode.beta, beta, 2) res = lrnode(inp[0]) assert_array_almost_equal_diff(res, out[0], 2) def test_LinearRegressionNode_raises_on_linearly_dependent_input(): # 3. test error for linearly dependent input beta = numx_rand.uniform(-10., 10., size=(INDIM, OUTDIM)) x = numx.linspace(-20,20,TLEN) x = mdp.utils.rrep(x, INDIM) x[:,-1] = 2.*x[:,0] y = mult(x, beta) py.test.raises(mdp.NodeException, train_LRNode, [x], [y], False) def test_LinearRegressionNode_raises_on_wrong_output_size(): # 4. test wrong output size beta = numx_rand.uniform(-10., 10., size=(INDIM, OUTDIM)) x = numx_rand.uniform(-20., 20., size=(TLEN, INDIM)) x[:,-1] = 2.*x[:,0] y = mult(x, beta) y = y[:10,:] py.test.raises(mdp.TrainingException, train_LRNode, [x], [y], False) mdp-3.3/mdp/test/test_NearestMeanClassifier.py000066400000000000000000000037601203131624700215230ustar00rootroot00000000000000from _tools import * # These tests are basically taken from the GaussianClassifier. 
def testNearestMeanClassifier_train(): nclasses = 10 dim = 4 npoints = 10000 covs = [] means = [] node = mdp.nodes.NearestMeanClassifier() for i in xrange(nclasses): cov = utils.symrand(uniform((dim,))*dim+1) mn = uniform((dim,))*10. x = normal(0., 1., size=(npoints, dim)) x = mult(x, utils.sqrtm(cov)) + mn x = utils.refcast(x, 'd') cl = numx.ones((npoints,))*i mn_estimate = mean(x, axis=0) means.append(mn_estimate) covs.append(numx.cov(x, rowvar=0)) node.train(x, cl) try: node.train(x, numx.ones((2,))) assert False, 'No exception despite wrong number of labels' except mdp.TrainingException: pass node.stop_training() for i in xrange(nclasses): lbl_idx = node.ordered_labels.index(i) assert_array_almost_equal_diff(means[i], node.label_means[lbl_idx], decimal-1) def testNearestMeanClassifier_label(): mean1 = [0., 2.] mean2 = [0., -2.] std_ = numx.array([1., 0.2]) npoints = 100 rot = 45 # input data: two distinct gaussians rotated by 45 deg def distr(size): return normal(0, 1., size=(size)) * std_ x1 = distr((npoints,2)) + mean1 utils.rotate(x1, rot, units='degrees') x2 = distr((npoints,2)) + mean2 utils.rotate(x2, rot, units='degrees') x = numx.concatenate((x1, x2), axis=0) # labels cl1 = numx.ones((x1.shape[0],), dtype='i') cl2 = 2*numx.ones((x2.shape[0],), dtype='i') classes = numx.concatenate((cl1, cl2)) # shuffle the data perm_idx = numx_rand.permutation(classes.shape[0]) x = numx.take(x, perm_idx, axis=0) classes = numx.take(classes, perm_idx, axis=0) node = mdp.nodes.NearestMeanClassifier() node.train(x, classes) classification = node.label(x) assert_array_equal(classes, classification) mdp-3.3/mdp/test/test_NeuralGasNode.py000066400000000000000000000045641203131624700200060ustar00rootroot00000000000000from _tools import * def _uniform(min_, max_, dims): return uniform(dims)*(max_-min_)+min_ def test_NeuralGasNode(): ### test 1D distribution in a 10D space # line coefficients dim = 10 npoints = 1000 const = _uniform(-100,100,[dim]) dir = _uniform(-1,1,[dim]) dir /= utils.norm2(dir) x = _uniform(-1,1,[npoints]) data = numx.outer(x, dir)+const # train the ng network num_nodes = 10 ng = mdp.nodes.NeuralGasNode(start_poss=[data[n,:] for n in range(num_nodes)], max_epochs=10) ng.train(data) ng.stop_training() # control that the nodes in the graph lie on the line poss = ng.get_nodes_position()-const norms = numx.sqrt(numx.sum(poss*poss, axis=1)) poss = (poss.T/norms).T assert max(numx.minimum(numx.sum(abs(poss-dir),axis=1), numx.sum(abs(poss+dir),axis=1))) < 1e-7, \ 'At least one node of the graph does lies out of the line.' 
# check that the graph is linear (no additional branches) # get a topological sort of the graph topolist = ng.graph.topological_sort() deg = numx.asarray(map(lambda n: n.degree(), topolist)) idx = deg.argsort() deg = deg[idx] assert_equal(deg[:2],[1,1]) assert_array_equal(deg[2:], [2 for i in xrange(len(deg)-2)]) # check the distribution of the nodes' position is uniform # this node is at one of the extrema of the graph x0 = numx.outer(numx.amin(x, axis=0), dir)+const x1 = numx.outer(numx.amax(x, axis=0), dir)+const linelen = utils.norm2(x0-x1) # this is the mean distance the node should have dist = linelen / poss.shape[0] # sort the node, depth first nodes = ng.graph.undirected_dfs(topolist[idx[0]]) poss = numx.array(map(lambda n: n.data.pos, nodes)) dists = numx.sqrt(numx.sum((poss[:-1,:]-poss[1:,:])**2, axis=1)) assert_almost_equal(dist, mean(dists), 1) def test_NeuralGasNode_nearest_neighbor(): # test the nearest_neighbor function start_poss = [numx.asarray([2.,0]), numx.asarray([-2.,0])] ng = mdp.nodes.NeuralGasNode(start_poss=start_poss, max_epochs=4) x = numx.asarray([[2.,0]]) ng.train(x) nodes, dists = ng.nearest_neighbor(numx.asarray([[3.,0]])) assert_almost_equal(dists[0], 1., 7) assert_almost_equal(nodes[0].data.pos, numx.asarray([2., 0.]), 7) mdp-3.3/mdp/test/test_NoiseNode.py000066400000000000000000000014301203131624700171670ustar00rootroot00000000000000from _tools import * def testNoiseNode(): def bogus_noise(mean, size=None): return numx.ones(size)*mean node = mdp.nodes.NoiseNode(bogus_noise, (1.,)) out = node.execute(numx.zeros((100,10),'d')) assert_array_equal(out, numx.ones((100,10),'d')) node = mdp.nodes.NoiseNode(bogus_noise, (1.,), 'multiplicative') out = node.execute(numx.zeros((100,10),'d')) assert_array_equal(out, numx.zeros((100,10),'d')) def testNormalNoiseNode(): node = mdp.nodes.NormalNoiseNode(noise_args=(2.1, 0.1)) x = numx.zeros((20000, 10)) y = node.execute(x) assert numx.allclose(y.mean(0), 2.1, atol=1e-02) assert numx.allclose(y.std(0), 0.1, atol=1e-02) def testNoiseNodePickling(): node = mdp.nodes.NoiseNode() node.copy() node.save(None) mdp-3.3/mdp/test/test_PCANode.py000066400000000000000000000203231203131624700165170ustar00rootroot00000000000000from _tools import * def testPCANode(): line_x = numx.zeros((1000,2),"d") line_y = numx.zeros((1000,2),"d") line_x[:,0] = numx.linspace(-1,1,num=1000,endpoint=1) line_y[:,1] = numx.linspace(-0.2,0.2,num=1000,endpoint=1) mat = numx.concatenate((line_x,line_y)) des_var = std(mat,axis=0) utils.rotate(mat,uniform()*2*numx.pi) mat += uniform(2) pca = mdp.nodes.PCANode() pca.train(mat) act_mat = pca.execute(mat) assert_array_almost_equal(mean(act_mat,axis=0),\ [0,0],decimal) assert_array_almost_equal(std(act_mat,axis=0),\ des_var,decimal) # test that the total_variance attribute makes sense est_tot_var = ((des_var**2)*2000/1999.).sum() assert_almost_equal(est_tot_var, pca.total_variance, decimal) assert_almost_equal(1, pca.explained_variance, decimal) # test a bug in v.1.1.1, should not crash pca.inverse(act_mat[:,:1]) ## # test segmentation fault with symeig, see ## # http://projects.scipy.org/scipy/numpy/ticket/551 ## def testPCANode_pickled(): ## for i in xrange(2,100): ## mat, mix, inp = get_random_mix(mat_dim=(200, i)) ## pca = mdp.nodes.PCANode() ## pca.train(mat) ## s = cPickle.dumps(pca) ## pca = cPickle.loads(s) ## act_mat = pca.execute(mat) def testPCANode_total_variance(): mat, mix, inp = get_random_mix(mat_dim=(1000, 3)) des_var = ((std(mat, axis=0)**2)*1000/999.).sum() pca = mdp.nodes.PCANode(output_dim=2) 
    pca.train(mat)
    pca.execute(mat)
    assert_almost_equal(des_var, pca.total_variance, decimal)

def testPCANode_desired_variance():
    mat, mix, inp = get_random_mix(mat_dim=(1000, 3))
    # first make them white
    pca = mdp.nodes.WhiteningNode()
    pca.train(mat)
    mat = pca.execute(mat)
    # set the variances
    mat *= [0.6,0.3,0.1]
    #mat -= mat.mean(axis=0)
    pca = mdp.nodes.PCANode(output_dim=0.8)
    pca.train(mat)
    out = pca.execute(mat)
    # check that we got exactly two output_dim:
    assert pca.output_dim == 2, '%s'%pca.output_dim
    assert out.shape[1] == 2
    # check that explained variance is > 0.8 and < 1
    assert (pca.explained_variance > 0.8 and pca.explained_variance < 1)

def testPCANode_desired_variance_after_train():
    mat, mix, inp = get_random_mix(mat_dim=(1000, 3))
    # first make them white
    pca = mdp.nodes.WhiteningNode()
    pca.train(mat)
    mat = pca.execute(mat)
    # set the variances
    mat *= [0.6,0.3,0.1]
    #mat -= mat.mean(axis=0)
    pca = mdp.nodes.PCANode()
    pca.train(mat)
    # this was not working before the bug fix
    pca.output_dim = 0.8
    out = pca.execute(mat)
    # check that we got exactly two output_dim:
    assert pca.output_dim == 2
    assert out.shape[1] == 2
    # check that explained variance is > 0.8 and < 1
    assert (pca.explained_variance > 0.8 and pca.explained_variance < 1)

def testPCANode_range_argument():
    node = mdp.nodes.PCANode()
    x = numx.random.random((100,10))
    node.train(x)
    node.stop_training()
    y = node.execute(x, n=5)
    assert y.shape[1] == 5

def testPCANode_SVD():
    # it should pass at least the same test as PCANode
    line_x = numx.zeros((1000,2),"d")
    line_y = numx.zeros((1000,2),"d")
    line_x[:,0] = numx.linspace(-1,1,num=1000,endpoint=1)
    line_y[:,1] = numx.linspace(-0.2,0.2,num=1000,endpoint=1)
    mat = numx.concatenate((line_x,line_y))
    des_var = std(mat,axis=0)
    utils.rotate(mat,uniform()*2*numx.pi)
    mat += uniform(2)
    pca = mdp.nodes.PCANode(svd=True)
    pca.train(mat)
    act_mat = pca.execute(mat)
    assert_array_almost_equal(mean(act_mat,axis=0), [0,0], decimal)
    assert_array_almost_equal(std(act_mat,axis=0), des_var, decimal)
    # Now a more difficult test: create singular cov matrices
    # and test that plain PCANode crashes whereas the SVD version doesn't
    mat, mix, inp = get_random_mix(mat_dim=(1000, 100), avg=1E+15)
    # now create a degenerate input
    for i in xrange(1,100):
        inp[:,i] = inp[:,1].copy()
    # check that standard PCA fails
    pca = mdp.nodes.PCANode()
    pca.train(inp)
    try:
        pca.stop_training()
        raise Exception, "PCANode didn't catch singular covariance matrix: degenerate"
    except mdp.NodeException:
        pass
    # now try the SVD version
    pca = mdp.nodes.PCANode(svd=True)
    pca.train(inp)
    pca.stop_training()
    # now check the undetermined case
    mat, mix, inp = get_random_mix(mat_dim=(500, 2))
    inp = inp.T
    pca = mdp.nodes.PCANode()
    pca.train(inp)
    try:
        pca.stop_training()
        raise Exception, "PCANode didn't catch singular covariance matrix: undetermined"
    except mdp.NodeException:
        pass
    # now try the SVD version
    pca = mdp.nodes.PCANode(svd=True)
    pca.train(inp)
    pca.stop_training()
    # try using the automatic dimensionality reduction function
    mat, mix, inp = get_random_mix(mat_dim=(1000, 3))
    # first make them decorrelated
    pca = mdp.nodes.PCANode()
    pca.train(mat)
    mat = pca.execute(mat)
    mat *= [1E+5,1E-3, 1E-4]
    mat -= mat.mean(axis=0)
    pca = mdp.nodes.PCANode(svd=True,reduce=True, var_rel=1E-2)
    pca.train(mat)
    out = pca.execute(mat)
    # check that we got the only large dimension
    assert_array_almost_equal(mat[:,0].mean(axis=0),out.mean(axis=0), decimal)
    assert_array_almost_equal(mat[:,0].std(axis=0),out.std(axis=0), decimal)
    # second test for automatic dimensionality reduction
    # try using the
    # automatic dimensionality reduction function
    mat, mix, inp = get_random_mix(mat_dim=(1000, 3))
    # first make them decorrelated
    pca = mdp.nodes.PCANode()
    pca.train(mat)
    mat = pca.execute(mat)
    mat *= [1E+5,1E-3, 1E-18]
    mat -= mat.mean(axis=0)
    pca = mdp.nodes.PCANode(svd=True,reduce=True, var_abs=1E-8, var_rel=1E-30)
    pca.train(mat)
    out = pca.execute(mat)
    # check that we got the only large dimension
    assert_array_almost_equal(mat[:,:2].mean(axis=0),out.mean(axis=0), decimal)
    assert_array_almost_equal(mat[:,:2].std(axis=0),out.std(axis=0), decimal)

def mock_symeig(x, range=None, overwrite=False):
    if range is None:
        N = x.shape[0]
    else:
        N = range[1]-range[0] + 1
    y = numx.zeros((N,))
    z = numx.zeros((N,N))
    y[0] = -1
    y[-1] = 1
    return y, z

def testPCANode_negative_eigenvalues():
    # should throw an Exception if reduce=False and
    # svd=False and output_dim=None
    pca = mdp.nodes.PCANode(output_dim=None, svd=False, reduce=False)
    pca._symeig = mock_symeig
    pca.train(uniform((10,10)))
    try:
        pca.stop_training()
        assert False, "PCA did not catch negative eigenvalues!"
    except mdp.NodeException, e:
        if "Got negative eigenvalues" in str(e):
            pass
        else:
            raise Exception("PCA did not catch negative eigenvalues!\n"+str(e))
    # if reduce=True, should not throw any Exception,
    # and return output_dim = 1
    pca = mdp.nodes.PCANode(output_dim=None, svd=False, reduce=True)
    pca._symeig = mock_symeig
    pca.train(uniform((10,10)))
    pca.stop_training()
    assert pca.output_dim == 1, 'PCA did not remove non-positive eigenvalues!'
    # if svd=True, should not throw any Exception,
    # and return output_dim = 10
    pca = mdp.nodes.PCANode(output_dim=None, svd=True, reduce=False)
    pca._symeig = mock_symeig
    pca.train(uniform((10,10)))
    pca.stop_training()
    assert pca.output_dim == 10, 'PCA did not remove non-positive eigenvalues!'
    # if output_dim is set, should not throw any Exception,
    # and return the right output_dim
    pca = mdp.nodes.PCANode(output_dim=1, svd=False, reduce=False)
    pca._symeig = mock_symeig
    pca.train(uniform((10,10)))
    pca.stop_training()
    assert pca.output_dim == 1, 'PCA did not remove non-positive eigenvalues!'
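# Illustrative sketch of the automatic dimensionality reduction exercised in
# the tests above; `data` is assumed to be any (n_samples, n_features) float
# array supplied by the caller (this helper is not invoked by the test suite).
def _example_pca_auto_reduction(data):
    # svd=True together with reduce=True drops directions whose eigenvalues
    # fall below the var_abs / var_rel thresholds
    pca = mdp.nodes.PCANode(svd=True, reduce=True)
    pca.train(data)
    pca.stop_training()
    # the number of retained components ends up in pca.output_dim
    return pca.execute(data), pca.output_dim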
def test_PCANode_no_eigenvalues_left(): mat = numx.zeros((100,4), dtype='d') pca = mdp.nodes.PCANode(svd=True, reduce=True) pca.train(mat) py.test.raises(mdp.NodeException, 'pca.stop_training()') mdp-3.3/mdp/test/test_PolynomialExpansionNode.py000066400000000000000000000032121203131624700221220ustar00rootroot00000000000000from _tools import * def hardcoded_expansion(x, degree): nvars = x.shape[1] exp_dim = mdp.nodes._expanded_dim(degree, nvars) exp = numx.zeros((x.shape[0], exp_dim), 'd') # degree 1 exp[:,:nvars] = x.copy() # degree 2 k = nvars if degree>=2: for i in xrange(nvars): for j in xrange(i,nvars): exp[:,k] = x[:,i]*x[:,j] k += 1 # degree 3 if degree>=3: for i in xrange(nvars): for j in xrange(i,nvars): for l in xrange(j,nvars): exp[:,k] = x[:,i]*x[:,j]*x[:,l] k += 1 # degree 4 if degree>=4: for i in xrange(nvars): for j in xrange(i,nvars): for l in xrange(j,nvars): for m in xrange(l,nvars): exp[:,k] = x[:,i]*x[:,j]*x[:,l]*x[:,m] k += 1 # degree 5 if degree>=5: for i in xrange(nvars): for j in xrange(i,nvars): for l in xrange(j,nvars): for m in xrange(l,nvars): for n in xrange(m,nvars): exp[:,k] = \ x[:,i]*x[:,j]*x[:,l]*x[:,m]*x[:,n] k += 1 return exp def test_expansion(): for degree in xrange(1,6): for dim in xrange(1,5): expand = mdp.nodes.PolynomialExpansionNode(degree=degree) mat,mix,inp = get_random_mix((10,dim)) des = hardcoded_expansion(inp, degree) exp = expand.execute(inp) assert_array_almost_equal(exp, des, decimal) mdp-3.3/mdp/test/test_PreseverDimNode.py000066400000000000000000000016451203131624700203470ustar00rootroot00000000000000import py.test from _tools import * class DummyPreserveDimNode(mdp.PreserveDimNode): """Non-abstract dummy version of PreserveDimNode.""" def is_trainable(self): return False def testPreserveDimNode(): """Test the different dimension setting options.""" dim = 3 node = DummyPreserveDimNode(input_dim=dim, output_dim=dim) assert node.output_dim == dim assert node.input_dim == dim node = DummyPreserveDimNode(input_dim=3) assert node.output_dim == dim assert node.input_dim == dim node = DummyPreserveDimNode(output_dim=3) assert node.output_dim == dim assert node.input_dim == dim node = DummyPreserveDimNode(output_dim=3) node.input_dim = dim assert node.output_dim == dim assert node.input_dim == dim def get_node(): DummyPreserveDimNode(input_dim=dim, output_dim=dim+1) py.test.raises(mdp.InconsistentDimException, get_node) mdp-3.3/mdp/test/test_RBFExpansionNode.py000066400000000000000000000035571203131624700204240ustar00rootroot00000000000000import mdp from _tools import * def testRBFExpansionNode(): rrep = mdp.utils.rrep dim, n = 2, 10 centers = numx_rand.random((n, dim)) # grid of points to numerically compute the integral grid = numx.meshgrid(numx.linspace(-3., 4., 100), numx.linspace(-3., 4., 100)) grid = numx.array([grid[0].flatten(), grid[1].flatten()]).T # compute covariance for each point of the grid grid_cov = numx.zeros((grid.shape[0], dim, dim)) for i in xrange(dim): for j in xrange(dim): grid_cov[:,i,j] = grid[:,i]*grid[:,j] def check_mn_cov(rbf, real_covs): y = rbf(grid) # verify means, sizes for i in xrange(n): p = y[:,i]/y[:,i].sum() # check mean mn = (rrep(p,dim)*grid).sum(0) assert_array_almost_equal(mn, centers[i,:], 2) # check variance vr = ((rrep(rrep(p,2),2)*grid_cov).sum(0) - numx.outer(mn, mn)) assert_array_almost_equal(vr, real_covs[i], 2) def scalar_to_covs(x, n): if numx.isscalar(x): x = [x]*n return [numx.array([[x[i],0],[0,x[i]]]) for i in xrange(n)] # 1: sizes is a scalar sizes = 0.32 rbf = 
mdp.nodes.RBFExpansionNode(centers, sizes) check_mn_cov(rbf, scalar_to_covs(sizes, n)) # 2: sizes is many scalars sizes = 0.3 + numx_rand.random(n)*0.2 rbf = mdp.nodes.RBFExpansionNode(centers, sizes) check_mn_cov(rbf, scalar_to_covs(sizes, n)) # 3: sizes is one covariance sizes = mdp.utils.symrand(numx.array([0.2, 0.4])) rbf = mdp.nodes.RBFExpansionNode(centers, sizes) check_mn_cov(rbf, [sizes]*n) # 4: sizes is many covariances sizes = [mdp.utils.symrand(numx.array([0.2, 0.4])) for i in xrange(n)] rbf = mdp.nodes.RBFExpansionNode(centers, sizes) check_mn_cov(rbf, sizes) mdp-3.3/mdp/test/test_RBM.py000066400000000000000000000204231203131624700157270ustar00rootroot00000000000000import mdp from _tools import * def test_RBM_sample_h(): # number of visible and hidden units I, J = 2, 4 # create RBM node bm = mdp.nodes.RBMNode(J, I) # fake training to initialize internals bm.train(numx.zeros((1,I))) # init to deterministic model bm.w[0,:] = [1,0,1,0] bm.w[1,:] = [0,1,0,1] bm.w *= 2e4 bm.bv *= 0. bm.bh *= 0. # ### test 1 v = numx.array([[0,0],[1,0],[0,1],[1,1.]]) h = [] for n in xrange(1000): prob, sample = bm.sample_h(v) h.append(sample) # check inferred probabilities expected_probs = numx.array([[0.5, 0.5, 0.5, 0.5], [1.0, 0.5, 1.0, 0.5], [0.5, 1.0, 0.5, 1.0], [1.0, 1.0, 1.0, 1.0]]) assert_array_almost_equal(prob, expected_probs, 8) # check sampled units h = numx.array(h) for n in xrange(4): distr = h[:,n,:].mean(axis=0) assert_array_almost_equal(distr, expected_probs[n,:], 1) # ### test 2, with bias bm.bh -= 1e2 h = [] for n in xrange(100): prob, sample = bm.sample_h(v) h.append(sample) # check inferred probabilities expected_probs = numx.array([[0., 0., 0., 0.], [1.0, 0., 1.0, 0.], [0., 1.0, 0., 1.0], [1.0, 1.0, 1.0, 1.0]]) assert_array_almost_equal(prob, expected_probs, 8) # check sampled units h = numx.array(h) for n in xrange(4): distr = h[:,n,:].mean(axis=0) assert_array_almost_equal(distr, expected_probs[n,:], 1) def test_RBM_sample_v(): # number of visible and hidden units I, J = 4, 2 # create RBM node bm = mdp.nodes.RBMNode(J, I) # fake training to initialize internals bm.train(numx.zeros((1,I))) # init to deterministic model bm.w[:,0] = [1,0,1,0] bm.w[:,1] = [0,1,0,1] bm.w *= 2e4 bm.bv *= 0 bm.bh *= 0 # test 1 h = numx.array([[0,0],[1,0],[0,1],[1,1.]]) v = [] for n in xrange(1000): prob, sample = bm.sample_v(h) v.append(sample) # check inferred probabilities expected_probs = numx.array([[0.5, 0.5, 0.5, 0.5], [1.0, 0.5, 1.0, 0.5], [0.5, 1.0, 0.5, 1.0], [1.0, 1.0, 1.0, 1.0]]) assert_array_almost_equal(prob, expected_probs, 8) # check sampled units v = numx.array(v) for n in xrange(4): distr = v[:,n,:].mean(axis=0) assert_array_almost_equal(distr, expected_probs[n,:], 1) # test 2, with bias bm.bv -= 1e2 v = [] for n in xrange(1000): prob, sample = bm.sample_v(h) v.append(sample) # check inferred probabilities expected_probs = numx.array([[0., 0., 0., 0.], [1.0, 0., 1.0, 0.], [0., 1.0, 0., 1.0], [1.0, 1.0, 1.0, 1.0]]) assert_array_almost_equal(prob, expected_probs, 8) # check sampled units v = numx.array(v) for n in xrange(4): distr = v[:,n,:].mean(axis=0) assert_array_almost_equal(distr, expected_probs[n,:], 1) def test_RBM_stability(): # number of visible and hidden units I, J = 8, 2 # create RBM node bm = mdp.nodes.RBMNode(J, I) bm._init_weights() # init to random model bm.w = mdp.utils.random_rot(max(I,J), dtype='d')[:I, :J] bm.bv = numx_rand.randn(I) bm.bh = numx_rand.randn(J) # save original weights real_w = bm.w.copy() real_bv = bm.bv.copy() real_bh = bm.bh.copy() # Gibbs sample to 
reach the equilibrium distribution N = 1e4 v = numx_rand.randint(0,2,(N,I)).astype('d') for k in xrange(100): if k%5==0: spinner() p, h = bm._sample_h(v) p, v = bm._sample_v(h) # see that w remains stable after learning for k in xrange(100): if k%5==0: spinner() bm.train(v) bm.stop_training() assert_array_almost_equal(real_w, bm.w, 1) assert_array_almost_equal(real_bv, bm.bv, 1) assert_array_almost_equal(real_bh, bm.bh, 1) def test_RBM_learning(): # number of visible and hidden units I, J = 4, 2 bm = mdp.nodes.RBMNode(J, I) bm.w = mdp.utils.random_rot(max(I,J), dtype='d')[:I, :J] # the observations consist of two disjunct patterns that # never appear together N = 1e4 v = numx.zeros((N,I)) for n in xrange(int(N)): r = numx_rand.random() if r>0.666: v[n,:] = [0,1,0,1] elif r>0.333: v[n,:] = [1,0,1,0] for k in xrange(1500): if k%5==0: spinner() if k>5: mom = 0.9 else: mom = 0.5 bm.train(v, epsilon=0.3, momentum=mom) if bm._train_err/N<0.1: break #print '-------', bm._train_err assert bm._train_err / N < 0.1 def _generate_data(bm, I, N): data = [] h = numx.ones(I, dtype='d') for t in range(N): prob, v = bm._sample_v(h) prob, h = bm._sample_h(v) if (t > 500): data.append(v) return numx.asarray(data, dtype='d') def test_RBM_bv_learning(): # number of visible and hidden units I, J = 4, 4 bm = mdp.nodes.RBMNode(J, I) bm._init_weights() # init to random biases, unit generation matrix bm.w = numx.eye(I, dtype='d') bm.bh *= 0.0 bm.bv = numx.linspace(0.1, 0.9, I) * 5 #### generate training data data = _generate_data(bm, I, 5000) #### learn from generated data train_bm = mdp.nodes.RBMNode(J, I) train_bm.train(data) train_bm.w = numx.eye(I, dtype='d') N = data.shape[0] for k in xrange(5000): if k%5==0: spinner() train_bm.train(data, epsilon=0.6, momentum=0.7) if abs(train_bm.bv - bm.bv).max() < 0.5: break # bv, bh, and w are dependent, so we need to keep one of them clamped train_bm.w = numx.eye(I, dtype='d') assert abs(train_bm.bv - bm.bv).max() < 0.5 def _test_RBM_bh_learning(): # This one is tricky, as hidden biases are a very indirect parameter # of the input. 
We need to keep the rest of the weights clamped or there # would be alternative ways to explain the data # number of visible and hidden units I, J = 4, 4 bm = mdp.nodes.RBMNode(J, I) bm._init_weights() # init to random biases, unit generation matrix bm.w = numx.eye(I, dtype='d') bm.bv *= 0.0 bm.bh = numx.linspace(0.1, 0.9, I) * 5 #### generate training data data = _generate_data(bm, I, 10000) #### learn from generated data train_bm = mdp.nodes.RBMNode(J, I) train_bm.train(data) train_bm.w = bm.w.copy() train_bm.bv *= 0.0 N = data.shape[0] for k in xrange(5000): if k%5==0: spinner() train_bm.train(data, epsilon=3.0, momentum=0.8, update_with_ph=False) if abs(train_bm.bh - bm.bh).max() < 0.75: break # keep other weights clamped train_bm.w = bm.w.copy() train_bm.bv *= 0.0 assert abs(train_bm.bh - bm.bh).max() < 0.75 def test_RBMWithLabelsNode(): I, J, L = 4, 4, 2 bm = mdp.nodes.RBMWithLabelsNode(J,L,I) assert bm.input_dim == I+L # generate input data N = 2500 v = numx.zeros((2*N,I)) l = numx.zeros((2*N,L)) for n in xrange(N): r = numx_rand.random() if r>0.1: v[n,:] = [1,0,1,0] l[n,:] = [1,0] for n in xrange(N): r = numx_rand.random() if r>0.1: v[n,:] = [0,1,0,1] l[n,:] = [1,0] x = numx.concatenate((v, l), axis=1) for k in xrange(2500): if k%5==0: spinner() if k>200: mom = 0.9 eps = 0.7 else: mom = 0.5 eps = 0.2 bm.train(v, l, epsilon=eps, momentum=mom) ph, sh = bm._sample_h(x) pv, pl, sv, sl = bm._sample_v(sh, concatenate=False) v_train_err = float(((v-sv)**2.).sum()) #print '-------', k, v_train_err/(2*N) if v_train_err / (2*N) < 0.1: break # visible units are reconstructed assert v_train_err / (2*N) < 0.1 # units with 0 input have 50/50 labels idxzeros = v.sum(axis=1)==0 nzeros = idxzeros.sum() point5 = numx.zeros((nzeros, L)) + 0.5 assert_array_almost_equal(pl[idxzeros], point5, 2) mdp-3.3/mdp/test/test_SFA2Node.py000066400000000000000000000044231203131624700166120ustar00rootroot00000000000000from _tools import * def test_basic_training(): dim = 10000 freqs = [2*numx.pi*100.,2*numx.pi*500.] t = numx.linspace(0,1,num=dim) mat = numx.array([numx.sin(freqs[0]*t),numx.sin(freqs[1]*t)]).T mat += normal(0., 1e-10, size=(dim, 2)) mat = (mat - mean(mat[:-1,:],axis=0))\ /std(mat[:-1,:],axis=0) des_mat = mat.copy() mat = mult(mat,uniform((2,2))) + uniform(2) sfa = mdp.nodes.SFA2Node() sfa.train(mat) out = sfa.execute(mat) assert out.shape[1]==5, "Wrong output_dim" correlation = mult(des_mat[:-1,:].T, numx.take(out[:-1,:], (0,2), axis=1))/(dim-2) assert_array_almost_equal(abs(correlation), numx.eye(2), 3) for nr in xrange(sfa.output_dim): qform = sfa.get_quadratic_form(nr) outq = qform.apply(mat) assert_array_almost_equal(outq, out[:,nr], decimal-1) sfa = mdp.nodes.SFANode(output_dim = 2) sfa.train(mat) out = sfa.execute(mat) assert out.shape[1]==2, 'Wrong output_dim' correlation = mult(des_mat[:-1,:1].T,out[:-1,:1])/(dim-2) assert_array_almost_equal(abs(correlation), numx.eye(1), 3) def test_range_argument(): node = mdp.nodes.SFA2Node() x = numx.random.random((100,10)) node.train(x) node.stop_training() y = node.execute(x, n=5) assert y.shape[1] == 5 def test_input_dim_bug(): dim = 10000 freqs = [2*numx.pi*100.,2*numx.pi*500.] 
t = numx.linspace(0,1,num=dim) mat = numx.array([numx.sin(freqs[0]*t),numx.sin(freqs[1]*t)]).T mat += normal(0., 1e-10, size=(dim, 2)) mat = (mat - mean(mat[:-1,:],axis=0))\ /std(mat[:-1,:],axis=0) mat = mult(mat,uniform((2,2))) + uniform(2) sfa = mdp.nodes.SFA2Node(input_dim=2) sfa.train(mat) sfa.execute(mat) def test_output_dim_bug(): dim = 10000 freqs = [2*numx.pi*100.,2*numx.pi*500.] t = numx.linspace(0,1,num=dim) mat = numx.array([numx.sin(freqs[0]*t),numx.sin(freqs[1]*t)]).T mat += normal(0., 1e-10, size=(dim, 2)) mat = (mat - mean(mat[:-1,:],axis=0)) \ / std(mat[:-1,:],axis=0) mat = mult(mat,uniform((2,2))) + uniform(2) sfa = mdp.nodes.SFA2Node(output_dim=3) sfa.train(mat) out = sfa.execute(mat) assert out.shape[1] == 3 mdp-3.3/mdp/test/test_SFANode.py000066400000000000000000000122711203131624700165300ustar00rootroot00000000000000from __future__ import with_statement from _tools import * mult = mdp.utils.mult def testSFANode(): dim=10000 freqs = [2*numx.pi*1, 2*numx.pi*5] t = numx.linspace(0,1,num=dim) mat = numx.array([numx.sin(freqs[0]*t), numx.sin(freqs[1]*t)]).T mat = ((mat - mean(mat[:-1,:], axis=0)) / std(mat[:-1,:],axis=0)) des_mat = mat.copy() mat = mult(mat,uniform((2,2))) + uniform(2) sfa = mdp.nodes.SFANode() sfa.train(mat) out = sfa.execute(mat) correlation = mult(des_mat[:-1,:].T,out[:-1,:])/(dim - 2) assert sfa.get_eta_values(t=0.5) is not None, 'get_eta is None' assert_array_almost_equal(abs(correlation), numx.eye(2), decimal-3) sfa = mdp.nodes.SFANode(output_dim = 1) sfa.train(mat) out = sfa.execute(mat) assert out.shape[1]==1, 'Wrong output_dim' correlation = mult(des_mat[:-1,:1].T,out[:-1,:])/(dim - 2) assert_array_almost_equal(abs(correlation), numx.eye(1), decimal - 3) def testSFANode_range_argument(): node = mdp.nodes.SFANode() x = numx.random.random((100,10)) node.train(x) node.stop_training() y = node.execute(x, n=5) assert y.shape[1] == 5 def testSFANode_one_time_samples(): # when training with x.shape = (1, n), stop_training # was failing with a ValueError: array must not contain infs or NaNs # because with only one samples no time difference can be computed and # the covmatrix is updated with zeros! node = mdp.nodes.SFANode() x = numx.random.random((1,5)) with py.test.raises(mdp.TrainingException): node.train(x) def testSFANode_include_last_sample(): # check that the default behaviour is True node = mdp.nodes.SFANode() x = numx.random.random((100,10)) node.train(x) node.stop_training() assert node.tlen == 100 assert node.dtlen == 99 # check that you can set it explicitly node = mdp.nodes.SFANode(include_last_sample=True) x = numx.random.random((100,10)) node.train(x) node.stop_training() assert node.tlen == 100 assert node.dtlen == 99 # check the old behaviour node = mdp.nodes.SFANode(include_last_sample=False) x = numx.random.random((100,10)) node.train(x) node.stop_training() assert node.tlen == 99 assert node.dtlen == 99 # check that we can change it during training node = mdp.nodes.SFANode(include_last_sample=False) x = numx.random.random((100,10)) node.train(x, include_last_sample=True) node.stop_training() assert node.tlen == 100 assert node.dtlen == 99 def testSFANode_derivative_bug1D(): # one dimensional worst case scenario T = 100 x = numx.zeros((T,1)) x[0,:] = -1. x[-1,:] = +1. 
x /= x.std(ddof=1) sfa = mdp.nodes.SFANode(include_last_sample=True) sfa.train(x) sfa.stop_training(debug=True) xdot = sfa.time_derivative(x) tlen = xdot.shape[0] correct_dcov_mtx = (xdot*xdot).sum()/(tlen-1) sfa_dcov_mtx = sfa.dcov_mtx # quantify the error error = abs(correct_dcov_mtx-sfa_dcov_mtx)[0,0] assert error < 10**(-decimal) # the bug was that we were calculating the covariance matrix # of the derivative, i.e. # sfa_dcov-mtx = (xdot*xdot).sum()/(tlen-1) - xdot.sum()**2/(tlen*(tlen-1)) # so that the error in the estimated matrix was exactly # xdot.sum()**2/(tlen*(tlen-1)) def testSFANode_derivative_bug2D(): T = 100 x = numx.zeros((T,2)) x[0,0] = -1. x[-1,0] = +1. x[:,1] = numx.arange(T) x -= x.mean(axis=0) x /= x.std(ddof=1, axis=0) sfa = mdp.nodes.SFANode(include_last_sample=True) sfa.train(x) sfa.stop_training(debug=True) xdot = sfa.time_derivative(x) tlen = xdot.shape[0] correct_dcov_mtx = mdp.utils.mult(xdot.T, xdot)/(tlen-1) sfa_dcov_mtx = sfa.dcov_mtx # the bug was that we were calculating the covariance matrix # of the derivative, i.e. # sfa_dcov_mtx = mdp.utils.mult(xdot.T, xdot)/(tlen-1) - \ # numx.outer(xdot.sum(axis=0), # xdot.sum(axis=0))/(tlen*(tlen-1))) # so that the error in the estimated matrix was exactly # numx.outer(xdot.sum(axis=0),xdot.sum(axis=0))/(tlen*(tlen-1)) error = abs(correct_dcov_mtx-sfa_dcov_mtx) assert_array_almost_equal(numx.zeros(error.shape), error, decimal) def testSFANode_derivative_bug2D_eigen(): # this is a copy of the previous test, where we # quantify the error in the estimated eigenvalues # and eigenvectors T = 100 x = numx.zeros((T,2)) x[0,0] = -1. x[-1,0] = +1. x[:,1] = numx.arange(T) x -= x.mean(axis=0) x /= x.std(ddof=1, axis=0) sfa = mdp.nodes.SFANode(include_last_sample=True) sfa.train(x) sfa.stop_training(debug=True) xdot = sfa.time_derivative(x) tlen = xdot.shape[0] correct_dcov_mtx = mdp.utils.mult(xdot.T, xdot)/(tlen-1) eigvalues, eigvectors = sfa._symeig(correct_dcov_mtx, sfa.cov_mtx, range=None, overwrite=False) assert_array_almost_equal(eigvalues, sfa.d, decimal) assert_array_almost_equal(eigvectors, sfa.sf, decimal) mdp-3.3/mdp/test/test_TimeDelayNodes.py000066400000000000000000000046141203131624700201610ustar00rootroot00000000000000from _tools import * from mdp.nodes import TimeDelayNode, TimeDelaySlidingWindowNode def test_TimeDelayNodes(): x = numx.array( [ [1, 2, 3] , [2, 4, 4], [3, 8, 5], [4, 16, 6], [5, 32, 7] ]) ################################################ node = TimeDelayNode(time_frames=3, gap=2) slider = TimeDelaySlidingWindowNode(time_frames=3, gap=2) real_res = [ [1, 2, 3, 0, 0, 0, 0, 0, 0], [2, 4, 4, 0, 0, 0, 0, 0, 0], [3, 8, 5, 1, 2, 3, 0, 0, 0], [4, 16,6, 2, 4, 4, 0, 0, 0], [5, 32,7, 3, 8, 5, 1, 2, 3] ] real_res = numx.array(real_res) res = node.execute(x) assert_array_equal(real_res, res) # test sliding window slider_res = numx.zeros_like(real_res) for row_nr in range(x.shape[0]): slider_res[row_nr, :] = slider.execute(x[[row_nr], :]) assert_array_equal(real_res, slider_res) ################################################ node = TimeDelayNode(time_frames=2, gap=3) slider = TimeDelaySlidingWindowNode(time_frames=2, gap=3) real_res = [ [1, 2, 3, 0, 0, 0], [2, 4, 4, 0, 0, 0], [3, 8, 5, 0, 0, 0], [4, 16,6, 1, 2, 3], [5, 32,7, 2, 4, 4] ] real_res = numx.array(real_res) res = node.execute(x) assert_array_equal(real_res, res) # test sliding window slider_res = numx.zeros_like(real_res) for row_nr in range(x.shape[0]): slider_res[row_nr, :] = slider.execute(x[[row_nr], :]) assert_array_equal(real_res, slider_res) 
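    # General rule illustrated by the hand-written matrices above (informal
    # sketch): for TimeDelayNode(time_frames=m, gap=g), output row t is the
    # concatenation [x[t], x[t-g], x[t-2*g], ..., x[t-(m-1)*g]], with zeros
    # filling every slot whose index t-k*g would fall before the first sample.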
################################################ node = TimeDelayNode(time_frames=4, gap=1) slider = TimeDelaySlidingWindowNode(time_frames=4, gap=1) real_res = [ [1, 2, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0], [2, 4, 4, 1, 2, 3, 0, 0, 0, 0, 0, 0], [3, 8, 5, 2, 4, 4, 1, 2, 3, 0, 0, 0], [4, 16,6, 3, 8, 5, 2, 4, 4, 1, 2, 3], [5, 32,7, 4, 16,6, 3, 8, 5, 2, 4, 4] ] real_res = numx.array(real_res) res = node.execute(x) assert_array_equal(real_res, res) # test sliding window slider_res = numx.zeros_like(real_res) for row_nr in range(x.shape[0]): slider_res[row_nr, :] = slider.execute(x[[row_nr], :]) assert_array_equal(real_res, slider_res) mdp-3.3/mdp/test/test_TimeFrameNode.py000066400000000000000000000015601203131624700177670ustar00rootroot00000000000000import mdp from _tools import * def test_TimeFramesNode(): length = 14 gap = 6 time_frames = 3 inp = numx.array([numx.arange(length), -numx.arange(length)]).T # create node to be tested tf = mdp.nodes.TimeFramesNode(time_frames,gap) out = tf.execute(inp) # check last element assert_equal(out[-1,-1], -length+1) # check horizontal sequence for i in xrange(1,time_frames): assert_array_equal(out[:,2*i],out[:,0]+i*gap) assert_array_equal(out[:,2*i+1],out[:,1]-i*gap) # check pseudo-inverse rec = tf.pseudo_inverse(out) assert_equal(rec.shape[1], inp.shape[1]) block_size = min(out.shape[0], gap) for i in xrange(0,length,gap): assert_array_equal(rec[i:i+block_size], inp[i:i+block_size]) def test_TimeFramesNodeBugInputDim(): mdp.nodes.TimeFramesNode(time_frames=10, gap=1, input_dim=1) mdp-3.3/mdp/test/test_VariadicCumulator.py000066400000000000000000000020661203131624700207300ustar00rootroot00000000000000import mdp from _tools import * def test_VariadicCumulator(): # create random data ONELEN = 101 NREP = 7 x = [numx_rand.rand(ONELEN, 3) for _ in range(NREP)] y = [numx_rand.rand(ONELEN, 3) for _ in range(NREP)] ABCumulator = mdp.VariadicCumulator('a', 'b') class TestABCumulator(ABCumulator): def _stop_training(self, *args, **kwargs): super(TestABCumulator, self)._stop_training(*args, **kwargs) # verify that the attributes are there assert hasattr(self, 'a') assert hasattr(self, 'b') # test tlen tlen = ONELEN*NREP assert self.tlen == tlen assert self.a.shape == (tlen, 3) assert self.b.shape == (tlen, 3) # test content for i in range(NREP): assert numx.all(self.a[i*ONELEN:(i+1)*ONELEN,:] == x[i]) assert numx.all(self.b[i*ONELEN:(i+1)*ONELEN,:] == y[i]) ab = TestABCumulator() for i in range(NREP): ab.train(x[i], y[i]) ab.stop_training() mdp-3.3/mdp/test/test_WhiteningNode.py000066400000000000000000000015551203131624700200560ustar00rootroot00000000000000from _tools import * def testWhiteningNode(): vars = 5 dim = (10000,vars) mat,mix,inp = get_random_mix(mat_dim=dim, avg=uniform(vars)) w = mdp.nodes.WhiteningNode() w.train(inp) out = w.execute(inp) assert_array_almost_equal(mean(out, axis=0), numx.zeros(dim[1]), decimal) assert_array_almost_equal(std(out, axis=0), numx.ones(dim[1]), decimal - 3) def testWhiteningNode_SVD(): vars = 5 dim = (10000,vars) mat,mix,inp = get_random_mix(mat_dim=dim, avg=uniform(vars)) w = mdp.nodes.WhiteningNode(svd=True) w.train(inp) out = w.execute(inp) assert_array_almost_equal(mean(out, axis=0), numx.zeros(dim[1]), decimal) assert_array_almost_equal(std(out, axis=0), numx.ones(dim[1]), decimal - 3) mdp-3.3/mdp/test/test_caching.py000066400000000000000000000154401203131624700167060ustar00rootroot00000000000000"""Test caching extension.""" from __future__ import with_statement import tempfile from _tools import * requires_joblib = 
skip_on_condition( "not mdp.config.has_joblib", "This test requires the 'joblib' module.") _counter = 0 class _CounterNode(mdp.Node): def __init__(self): super(_CounterNode, self).__init__() def is_trainable(self): return False def _execute(self, x): """The execute method has the side effect of increasing a global counter by one.""" global _counter _counter += 1 return x @requires_joblib def test_caching_extension(): """Test that the caching extension is working at the global level.""" global _counter _counter = 0 node = _CounterNode() # before decoration the global counter is incremented at every call k = 0 for i in range(3): x = mdp.numx.array([[i]], dtype='d') for j in range(2): k += 1 assert mdp.numx.all(node.execute(x) == x) assert _counter == k # reset counter _counter = 0 # activate the extension cachedir = tempfile.mkdtemp(prefix='mdp-tmp-joblib-cache.', dir=py.test.mdp_tempdirname) mdp.caching.activate_caching(cachedir=cachedir) assert mdp.get_active_extensions() == ['cache_execute'] # after decoration the global counter is incremented for each new 'x' for i in range(3): x = mdp.numx.array([[i]], dtype='d') for _ in range(2): assert mdp.numx.all(node.execute(x) == x) assert _counter == i + 1 # after deactivation mdp.caching.deactivate_caching() assert mdp.get_active_extensions() == [] # reset counter _counter = 0 k = 0 for i in range(3): x = mdp.numx.array([[i]], dtype='d') for j in range(2): k += 1 assert mdp.numx.all(node.execute(x) == x) assert _counter == k @requires_joblib def test_different_instances_same_content(): global _counter x = mdp.numx.array([[100.]], dtype='d') cachedir = tempfile.mkdtemp(prefix='mdp-tmp-joblib-cache.', dir=py.test.mdp_tempdirname) mdp.caching.activate_caching(cachedir=cachedir) node = _CounterNode() _counter = 0 # add attribute to make instance unique node.attr = 'unique' # cache x node.execute(x) assert _counter == 1 # should be cached now node.execute(x) assert _counter == 1 # create new instance, make it also unique and check that # result is still cached _counter = 0 node = _CounterNode() node.attr = 'unique and different' node.execute(x) assert _counter == 1 mdp.caching.deactivate_caching() @requires_joblib def test_caching_context_manager(): global _counter node = _CounterNode() _counter = 0 assert mdp.get_active_extensions() == [] cachedir = tempfile.mkdtemp(prefix='mdp-tmp-joblib-cache.', dir=py.test.mdp_tempdirname) with mdp.caching.cache(cachedir=cachedir): assert mdp.get_active_extensions() == ['cache_execute'] for i in range(3): x = mdp.numx.array([[i]], dtype='d') for _ in range(2): assert mdp.numx.all(node.execute(x) == x) assert _counter == i + 1 assert mdp.get_active_extensions() == [] @requires_joblib def test_class_caching(): """Test that we can cache individual classes.""" cached = mdp.nodes.PCANode() notcached = mdp.nodes.SFANode() with mdp.caching.cache(cache_classes=[mdp.nodes.PCANode]): assert cached.is_cached() assert not notcached.is_cached() @requires_joblib def test_class_caching_functionality(): """Test that cached classes really cache.""" global _counter x = mdp.numx.array([[210]], dtype='d') node = _CounterNode() # here _CounterNode is not cached _counter = 0 with mdp.caching.cache(cache_classes=[mdp.nodes.PCANode]): node.execute(x) assert _counter == 1 node.execute(x) assert _counter == 2 # here _CounterNode is cached _counter = 0 with mdp.caching.cache(cache_classes=[_CounterNode]): node.execute(x) assert _counter == 1 node.execute(x) assert _counter == 1 @requires_joblib def test_instance_caching(): """Test that we 
can cache individual instances.""" cached = mdp.nodes.PCANode() notcached = mdp.nodes.PCANode() with mdp.caching.cache(cache_instances=[cached]): assert cached.is_cached() assert not notcached.is_cached() @requires_joblib def test_instance_caching_functionality(): """Test that cached instances really cache.""" global _counter x = mdp.numx.array([[130]], dtype='d') node = _CounterNode() othernode = _CounterNode() # here _CounterNode is not cached _counter = 0 with mdp.caching.cache(cache_instances=[othernode]): node.execute(x) assert _counter == 1 node.execute(x) assert _counter == 2 # here _CounterNode is cached _counter = 0 with mdp.caching.cache(cache_instances=[node]): node.execute(x) assert _counter == 1 node.execute(x) assert _counter == 1 @requires_joblib def test_preexecution_problem(): """Test that automatic setting of e.g. input_dim does not stop the caching extension from caching on the first run.""" global _counter x = mdp.numx.array([[102.]]) node = _CounterNode() # here _CounterNode is cached _counter = 0 with mdp.caching.cache(): # on the first execution, input_dim and dtype are set ... node.execute(x) assert _counter == 1 # ... yet the result is cached node.execute(x) assert _counter == 1 @requires_joblib def test_switch_cache(): """Test changing cache directory while extension is active.""" global _counter dir1 = tempfile.mkdtemp(prefix='mdp-tmp-joblib-cache.', dir=py.test.mdp_tempdirname) dir2 = tempfile.mkdtemp(prefix='mdp-tmp-joblib-cache.', dir=py.test.mdp_tempdirname) x = mdp.numx.array([[10]], dtype='d') mdp.caching.activate_caching(cachedir=dir1) node = _CounterNode() _counter = 0 node.execute(x) assert _counter == 1 node.execute(x) assert _counter == 1 # now change path mdp.caching.set_cachedir(cachedir=dir2) node.execute(x) assert _counter == 2 node.execute(x) assert _counter == 2 mdp.caching.deactivate_caching() @requires_joblib def test_execute_magic(): """Test calling execute with magic while caching.""" x = mdp.numx_rand.rand(100, 10) node = mdp.nodes.PCANode() with mdp.caching.cache(): y = node(x) y2 = node(x) assert_array_equal(y, y2) mdp-3.3/mdp/test/test_classifier.py000066400000000000000000000143541203131624700174410ustar00rootroot00000000000000# -*- coding: utf-8 -*- """These are test functions for MDP classifiers. """ from _tools import * from mdp import ClassifierNode from mdp.nodes import (SignumClassifier, PerceptronClassifier, SimpleMarkovClassifier, DiscreteHopfieldClassifier, KMeansClassifier) def _sigmoid(t): return 1.0 / (1.0 + numx.exp(-t)) class _BogusClassifier(ClassifierNode): @staticmethod def is_trainable(): return False def _label(self, x): return [r[0] for r in self.rank(x)] def _prob(self, x): return [{-1: _sigmoid(sum(xi)), \ 1: 1 - _sigmoid(sum(xi))} for xi in x] def testClassifierNode_ranking(): bc = _BogusClassifier() test_data = numx_rand.random((30, 20)) - 0.5 for r, p in zip(bc.rank(test_data), bc.prob(test_data)): # check that the ranking order is correct assert p[r[0]] >= p[r[1]], "Rank returns labels in incorrect order" # check that the probabilities sum up to 100 assert 0.999 < p[r[0]] + p[r[1]] < 1.001 def testClassifier_execute_method(): """Test that the execute result has the correct format when execute_method is used. 
""" bc = _BogusClassifier(execute_method="label") data = numx_rand.random((5, 20)) - 0.5 result = bc.execute(data) assert isinstance(result, list) assert isinstance(result[0], int) bc.execute_method = "prob" result = bc.execute(data) assert isinstance(result, list) assert isinstance(result[0], dict) bc.execute_method = "rank" result = bc.execute(data) assert isinstance(result, list) assert isinstance(result[0], list) def testSignumClassifier(): c = SignumClassifier() res = c.label(mdp.numx.array([[1, 2, -3, -4], [1, 2, 3, 4]])) assert c.input_dim == 4 assert res.tolist() == [-1, 1] def testPerceptronClassifier(): or_Classifier = PerceptronClassifier() for i in xrange(100): or_Classifier.train(mdp.numx.array([[0., 0.]]), -1) or_Classifier.train(mdp.numx.array([[0., 1.], [1., 0.], [1., 1.]]), 1) assert or_Classifier.input_dim == 2 res = or_Classifier.label(mdp.numx.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])) assert res.tolist() == [-1, 1, 1, 1] and_Classifier = PerceptronClassifier() for i in xrange(100): and_Classifier.train(mdp.numx.array([[0., 0.], [0., 1.], [1., 0.]]), -1) and_Classifier.train(mdp.numx.array([[1., 1.]]), 1) res = and_Classifier.label(mdp.numx.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])) assert res.tolist() == [-1, -1, -1, 1] xor_Classifier = PerceptronClassifier() for i in xrange(100): xor_Classifier.train(mdp.numx.array([[0., 0.], [1., 1.]]), -1) xor_Classifier.train(mdp.numx.array([[0., 1.], [1., 0.]]), 1) res = xor_Classifier.label(mdp.numx.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])) assert res.tolist() != [-1, 1, 1, -1], \ "Something must be wrong here. XOR is impossible in a single-layered perceptron." def testSimpleMarkovClassifier(): mc = SimpleMarkovClassifier(dtype="c") text = "after the letter e follows either space or the letters r t or i" for word in text.split(): word = word.lower() features = zip(" " + word) labels = list(word + " ") mc.train(mdp.numx.array(features), labels) assert mc.input_dim == 1 num_transitions = 0 features = mc.features for feature, count in features.items(): if count: prob = mc.prob(mdp.numx.array([feature])) prob_sum = 0 for p in prob: for k, v in p.items(): prob_sum += v if v: num_transitions += 1 assert abs(prob_sum - 1.0) < 1e-5 # calculate the number of transitions (the negative set deletes the artefact of two spaces) trans = len(set((zip(" ".join(text.split()) + " ", \ " " + " ".join(text.split())))) - set([(' ', ' ')])) assert num_transitions == trans letters_following_e = [' ', 'r', 't', 'i'] letters_prob = mc.prob(mdp.numx.array([['e']]))[0] prob_sum = 0 for letter, prob in letters_prob.items(): prob_sum += prob if prob > 1e-5: assert letter in letters_following_e assert abs(prob_sum - 1.0) < 1e-5 def testDiscreteHopfieldClassifier(): h = DiscreteHopfieldClassifier() memory_size = 100 patterns = numx.array( [numx.sin(numx.linspace(0, 100 * numx.pi, memory_size)) > 0, numx.sin(numx.linspace(0, 50 * numx.pi, memory_size)) > 0, numx.sin(numx.linspace(0, 20 * numx.pi, memory_size)) > 0, numx.sin(numx.linspace(0, 15 * numx.pi, memory_size)) > 0, numx.sin(numx.linspace(0, 10 * numx.pi, memory_size)) > 0, numx.sin(numx.linspace(0, 5 * numx.pi, memory_size)) > 0, numx.sin(numx.linspace(0, 2 * numx.pi, memory_size)) > 0 ]) h.train(patterns) h.input_dim = memory_size for p in patterns: # check if patterns are fixpoints assert numx.all(p == h.label(numx.array([p]))) for p in patterns: # check, if a noisy pattern is recreated noisy = numx.array(p) for i in xrange(len(noisy)): if numx.random.random() > 0.95: noisy[i] = not 
noisy[i] retrieved = h.label(numx.array([noisy])) # Hopfield nets are blind for inversion, need to check either case assert numx.all(retrieved == p) or numx.all(retrieved != p) def testKMeansClassifier(): num_centroids = 3 k = KMeansClassifier(num_centroids) a = numx.random.rand(50, 2) k.train(a) res = k.label(a) # check that the number of centroids is correct assert len(set(res)) == num_centroids k = KMeansClassifier(2) a1 = numx.random.rand(50, 2) - 1 a2 = numx.random.rand(50, 2) + 1 k.train(a1) k.train(a2) res1 = k.label(a1) res2 = k.label(a2) # check that both clusters are completely identified and different assert (len(set(res1)) == 1 and len(set(res2)) == 1 and set(res1) != set(res2) ), ("Error in K-Means classifier. " "This might be a bug or just a local minimum.") mdp-3.3/mdp/test/test_config.py000066400000000000000000000024171203131624700165570ustar00rootroot00000000000000"""Test the configuration object.""" from mdp import config class TestConfig(object): def teardown_method(self, method): delattr(config, 'has_test_property') def test_config_depfound(self): s = config.ExternalDepFound('test_property', 0.777) assert bool(s) == True assert config.has_test_property info = config.info() assert 'test property' in info assert '0.777' in info def test_config_depfound_string(self): s = config.ExternalDepFound('test_property', '0.777') assert bool(s) == True assert config.has_test_property info = config.info() assert 'test property' in info assert '0.777' in info def test_config_depfailed_exc(self): s = config.ExternalDepFailed('test_property', ImportError('GOOGOO')) assert bool(s) == False assert not config.has_test_property info = config.info() assert 'test property' in info assert 'GOOGOO' in info def test_config_depfailed_string(self): s = config.ExternalDepFailed('test_property', 'GOOGOO') assert bool(s) == False assert not config.has_test_property info = config.info() assert 'test property' in info assert 'GOOGOO' in info mdp-3.3/mdp/test/test_contrib.py000066400000000000000000000155361203131624700167600ustar00rootroot00000000000000"""These are test functions for MDP contributed nodes. 
""" from _tools import * from test_ICANode import verify_ICANode, verify_ICANodeMatrices requires_joblib = skip_on_condition( "not mdp.config.has_caching", "This test requires the 'joblib' module.") def _s_shape(theta): """ returns x,y a 2-dimensional S-shaped function for theta ranging from 0 to 1 """ t = 3*numx.pi * (theta-0.5) x = numx.sin(t) y = numx.sign(t)*(numx.cos(t)-1) return x,y def _s_shape_1D(n): t = numx.linspace(0., 1., n) x, z = _s_shape(t) y = numx.linspace(0., 5., n) return x, y, z, t def _s_shape_2D(nt, ny): t, y = numx.meshgrid(numx.linspace(0., 1., nt), numx.linspace(0., 2., ny)) t = t.flatten() y = y.flatten() x, z = _s_shape(t) return x, y, z, t def _compare_neighbors(orig, proj, k): n = orig.shape[0] err = numx.zeros((n,)) # compare neighbors indices for i in xrange(n): # neighbors in original space dist = orig - orig[i,:] orig_nbrs = numx.argsort((dist**2).sum(1))[1:k+1] orig_nbrs.sort() # neighbors in projected space dist = proj - proj[i,:] proj_nbrs = numx.argsort((dist**2).sum(1))[1:k+1] proj_nbrs.sort() for idx in orig_nbrs: if idx not in proj_nbrs: err[i] += 1 return err def test_JADENode(): trials = 3 for i in xrange(trials): try: ica = mdp.nodes.JADENode(limit = 10**(-decimal)) ica2 = ica.copy() verify_ICANode(ica, rand_func=numx_rand.exponential) verify_ICANodeMatrices(ica2) return except Exception: if i == trials - 1: raise def test_NIPALSNode(): line_x = numx.zeros((1000,2),"d") line_y = numx.zeros((1000,2),"d") line_x[:,0] = numx.linspace(-1,1,num=1000,endpoint=1) line_y[:,1] = numx.linspace(-0.2,0.2,num=1000,endpoint=1) mat = numx.concatenate((line_x,line_y)) des_var = std(mat,axis=0) utils.rotate(mat,uniform()*2*numx.pi) mat += uniform(2) pca = mdp.nodes.NIPALSNode(conv=1E-15, max_it=1000) pca.train(mat) act_mat = pca.execute(mat) assert_array_almost_equal(mean(act_mat,axis=0),\ [0,0],decimal) assert_array_almost_equal(std(act_mat,axis=0),\ des_var,decimal) # test a bug in v.1.1.1, should not crash pca.inverse(act_mat[:,:1]) # try standard PCA on the same data and compare the eigenvalues pca2 = mdp.nodes.PCANode() pca2.train(mat) pca2.stop_training() assert_array_almost_equal(pca2.d, pca.d, decimal) def test_NIPALSNode_desired_variance(): mat, mix, inp = get_random_mix(mat_dim=(1000, 3)) # first make them white pca = mdp.nodes.WhiteningNode() pca.train(mat) mat = pca.execute(mat) # set the variances mat *= [0.6,0.3,0.1] #mat -= mat.mean(axis=0) pca = mdp.nodes.NIPALSNode(output_dim=0.8) pca.train(mat) out = pca.execute(mat) # check that we got exactly two output_dim: assert pca.output_dim == 2 assert out.shape[1] == 2 # check that explained variance is > 0.8 and < 1 assert (pca.explained_variance > 0.8 and pca.explained_variance < 1) def test_LLENode(): # 1D S-shape in 3D n, k = 50, 2 x, y, z, t = _s_shape_1D(n) data = numx.asarray([x,y,z]).T res = mdp.nodes.LLENode(k, output_dim=1, svd=False)(data) # check that the neighbors are the same err = _compare_neighbors(data, res, k) assert err.max() == 0 # with svd=True res = mdp.nodes.LLENode(k, output_dim=1, svd=True)(data) err = _compare_neighbors(data, res, k) assert err.max() == 0 return #TODO: fix this test! 
# 2D S-shape in 3D nt, ny = 40, 15 n, k = nt*ny, 8 x, y, z, t = _s_shape_2D(nt, ny) data = numx.asarray([x,y,z]).T res = mdp.nodes.LLENode(k, output_dim=2, svd=True)(data) res[:,0] /= res[:,0].std() res[:,1] /= res[:,1].std() # test alignment yval = y[::nt] tval = t[:ny] for yv in yval: idx = numx.nonzero(y==yv)[0] err = abs(res[idx,1]-res[idx[0],1]).max() assert err<0.01,\ 'Projection should be aligned as original space: %s'%(str(err)) for tv in tval: idx = numx.nonzero(t==tv)[0] err = abs(res[idx,0]-res[idx[0],0]).max() assert err<0.01,\ 'Projection should be aligned as original space: %s'%(str(err)) def test_LLENode_outputdim_float_bug(): # 1D S-shape in 3D, output_dim n, k = 50, 2 x, y, z, t = _s_shape_1D(n) data = numx.asarray([x,y,z]).T res = mdp.nodes.LLENode(k, output_dim=0.9, svd=True)(data) # check that the neighbors are the same err = _compare_neighbors(data, res, k) assert err.max() == 0 def test_HLLENode(): # 1D S-shape in 3D n, k = 250, 4 x, y, z, t = _s_shape_1D(n) data = numx.asarray([x,y,z]).T res = mdp.nodes.HLLENode(k, r=0.001, output_dim=1, svd=False)(data) # check that the neighbors are the same err = _compare_neighbors(data, res, k) assert err.max() == 0 # with svd=True res = mdp.nodes.HLLENode(k, r=0.001, output_dim=1, svd=True)(data) err = _compare_neighbors(data, res, k) assert err.max() == 0 # 2D S-shape in 3D nt, ny = 40, 15 n, k = nt*ny, 8 x, y, z, t = _s_shape_2D(nt, ny) data = numx.asarray([x,y,z]).T res = mdp.nodes.HLLENode(k, r=0.001, output_dim=2, svd=False)(data) res[:,0] /= res[:,0].std() res[:,1] /= res[:,1].std() # test alignment yval = y[::nt] tval = t[:ny] for yv in yval: idx = numx.nonzero(y==yv)[0] assert numx.all(res[idx,1]-res[idx[0],1]<1e-2),\ 'Projection should be aligned as original space' for tv in tval: idx = numx.nonzero(t==tv)[0] assert numx.all(res[idx,0]-res[idx[0],0]<1e-2),\ 'Projection should be aligned as original space' def test_XSFANode(): T = 5000 N = 3 src = numx_rand.random((T, N))*2-1 # create three souces with different speeds fsrc = numx_fft.rfft(src, axis=0) for i in xrange(N): fsrc[(i+1)*(T/10):, i] = 0. 
src = numx_fft.irfft(fsrc,axis=0) src -= src.mean(axis=0) src /= src.std(axis=0) #mix = sigmoid(numx.dot(src, mdp.utils.random_rot(3))) mix = src flow = mdp.Flow([mdp.nodes.XSFANode()]) # let's test also chunk-mode training flow.train([[mix[:T/2, :], mix[T/2:, :]]]) out = flow(mix) #import bimdp #tr_filename = bimdp.show_training(flow=flow, # data_iterators=[[mix[:T/2, :], mix[T/2:, :]]]) #ex_filename, out = bimdp.show_execution(flow, x=mix) corrs = mdp.utils.cov_maxima(mdp.utils.cov2(out, src)) assert min(corrs) > 0.8, ('source/estimate minimal' ' covariance: %g' % min(corrs)) mdp-3.3/mdp/test/test_copying.py000066400000000000000000000007631203131624700167640ustar00rootroot00000000000000import mdp def test_Node_deepcopy_lambda(): """Copying a node with a lambda member function should not throw an Exception""" generic_node = mdp.Node() generic_node.lambda_function = lambda: 1 generic_node.copy() def test_Flow_deepcopy_lambda(): """Copying a Flow with a lambda member function should not throw an Exception""" generic_node = mdp.Node() generic_node.lambda_function = lambda: 1 generic_flow = mdp.Flow([generic_node]) generic_flow.copy() mdp-3.3/mdp/test/test_extension.py000066400000000000000000000301321203131624700173210ustar00rootroot00000000000000from __future__ import with_statement import mdp import sys import py.test def teardown_function(function): """Deactivate all extensions and remove testing extensions.""" mdp.deactivate_extensions(mdp.get_active_extensions()) for key in mdp.get_extensions().copy(): if key.startswith("__test"): del mdp.get_extensions()[key] def testSimpleExtension(): """Test for a single new extension.""" class TestExtensionNode(mdp.ExtensionNode): extension_name = "__test" def _testtest(self): pass _testtest_attr = 1337 class TestSFANode(TestExtensionNode, mdp.nodes.SFANode): def _testtest(self): return 42 _testtest_attr = 1338 sfa_node = mdp.nodes.SFANode() mdp.activate_extension("__test") assert sfa_node._testtest() == 42 assert sfa_node._testtest_attr == 1338 mdp.deactivate_extension("__test") assert not hasattr(mdp.nodes.SFANode, "_testtest") def testContextDecorator(): """Test the with_extension function decorator.""" class Test1ExtensionNode(mdp.ExtensionNode): extension_name = "__test1" def _testtest(self): pass @mdp.with_extension("__test1") def test(): return mdp.get_active_extensions() # check that the extension is activated assert mdp.get_active_extensions() == [] active = test() assert active == ["__test1"] assert mdp.get_active_extensions() == [] # check that it is only deactiveted if it was activated there mdp.activate_extension("__test1") active = test() assert active == ["__test1"] assert mdp.get_active_extensions() == ["__test1"] def testContextManager1(): """Test that the context manager activates extensions.""" class Test1ExtensionNode(mdp.ExtensionNode): extension_name = "__test1" def _testtest(self): pass class Test2ExtensionNode(mdp.ExtensionNode): extension_name = "__test2" def _testtest(self): pass assert mdp.get_active_extensions() == [] with mdp.extension('__test1'): assert mdp.get_active_extensions() == ['__test1'] assert mdp.get_active_extensions() == [] # with multiple extensions with mdp.extension(['__test1', '__test2']): active = mdp.get_active_extensions() assert '__test1' in active assert '__test2' in active assert mdp.get_active_extensions() == [] mdp.activate_extension("__test1") # Test that only activated extensions are deactiveted. 
    with mdp.extension(['__test1', '__test2']):
        active = mdp.get_active_extensions()
        assert '__test1' in active
        assert '__test2' in active
    assert mdp.get_active_extensions() == ["__test1"]

def testDecoratorExtension():
    """Test extension decorator with a single new extension."""
    class TestExtensionNode(mdp.ExtensionNode):
        extension_name = "__test"
        def _testtest(self):
            pass
    @mdp.extension_method("__test", mdp.nodes.SFANode, "_testtest")
    def _sfa_testtest(self):
        return 42
    @mdp.extension_method("__test", mdp.nodes.SFA2Node)
    def _testtest(self):
        return 42 + _sfa_testtest(self)
    sfa_node = mdp.nodes.SFANode()
    sfa2_node = mdp.nodes.SFA2Node()
    mdp.activate_extension("__test")
    assert sfa_node._testtest() == 42
    assert sfa2_node._testtest() == 84
    mdp.deactivate_extension("__test")
    assert not hasattr(mdp.nodes.SFANode, "_testtest")
    assert not hasattr(mdp.nodes.SFA2Node, "_testtest")

def testDecoratorInheritance():
    """Test inheritance with decorators for a single new extension."""
    class TestExtensionNode(mdp.ExtensionNode):
        extension_name = "__test"
        def _testtest(self):
            pass
    @mdp.extension_method("__test", mdp.nodes.SFANode, "_testtest")
    def _sfa_testtest(self):
        return 42
    @mdp.extension_method("__test", mdp.nodes.SFA2Node)
    def _testtest(self):
        return 42 + super(mdp.nodes.SFA2Node, self)._testtest()
    sfa_node = mdp.nodes.SFANode()
    sfa2_node = mdp.nodes.SFA2Node()
    mdp.activate_extension("__test")
    assert sfa_node._testtest() == 42
    assert sfa2_node._testtest() == 84

def testExtensionInheritance():
    """Test inheritance of extension nodes."""
    class TestExtensionNode(mdp.ExtensionNode):
        extension_name = "__test"
        def _testtest(self):
            pass
    class TestSFANode(TestExtensionNode, mdp.nodes.SFANode):
        def _testtest(self):
            return 42
        _testtest_attr = 1337
    class TestSFA2Node(TestSFANode, mdp.nodes.SFA2Node):
        def _testtest(self):
            if sys.version_info[0] < 3:
                return TestSFANode._testtest.im_func(self)
            else:
                return TestSFANode._testtest(self)
    sfa2_node = mdp.nodes.SFA2Node()
    mdp.activate_extension("__test")
    assert sfa2_node._testtest() == 42
    assert sfa2_node._testtest_attr == 1337

def testExtensionInheritance2():
    """Test inheritance of extension nodes, using super."""
    class TestExtensionNode(mdp.ExtensionNode):
        extension_name = "__test"
        def _testtest(self):
            pass
    class TestSFANode(TestExtensionNode, mdp.nodes.SFANode):
        def _testtest(self):
            return 42
    class TestSFA2Node(mdp.nodes.SFA2Node, TestSFANode):
        def _testtest(self):
            return super(mdp.nodes.SFA2Node, self)._testtest()
    sfa2_node = mdp.nodes.SFA2Node()
    mdp.activate_extension("__test")
    assert sfa2_node._testtest() == 42

def testExtensionInheritance3():
    """Test explicit use of extension nodes and inheritance."""
    class TestExtensionNode(mdp.ExtensionNode):
        extension_name = "__test"
        def _testtest(self):
            pass
    class TestSFANode(TestExtensionNode, mdp.nodes.SFANode):
        def _testtest(self):
            return 42
    # Note the inheritance order, otherwise this would not work.
class TestSFA2Node(mdp.nodes.SFA2Node, TestSFANode): def _testtest(self): return super(mdp.nodes.SFA2Node, self)._testtest() sfa2_node = TestSFA2Node() assert sfa2_node._testtest() == 42 def testMultipleExtensions(): """Test behavior of multiple extensions.""" class Test1ExtensionNode(mdp.ExtensionNode, mdp.Node): extension_name = "__test1" def _testtest1(self): pass class Test2ExtensionNode(mdp.ExtensionNode, mdp.Node): extension_name = "__test2" def _testtest2(self): pass mdp.activate_extension("__test1") node = mdp.Node() node._testtest1() mdp.activate_extension("__test2") node._testtest2() mdp.deactivate_extension("__test1") assert not hasattr(mdp.nodes.SFANode, "_testtest1") mdp.activate_extension("__test1") node._testtest1() mdp.deactivate_extensions(["__test1", "__test2"]) assert not hasattr(mdp.nodes.SFANode, "_testtest1") assert not hasattr(mdp.nodes.SFANode, "_testtest2") def testExtCollision(): """Test the check for method name collision.""" class Test1ExtensionNode(mdp.ExtensionNode, mdp.Node): extension_name = "__test1" def _testtest(self): pass class Test2ExtensionNode(mdp.ExtensionNode, mdp.Node): extension_name = "__test2" def _testtest(self): pass py.test.raises(mdp.ExtensionException, mdp.activate_extensions, ["__test1", "__test2"]) # none of the extension should be active after the exception assert not hasattr(mdp.Node, "_testtest") def testExtensionInheritanceInjection(): """Test the injection of inherited methods""" class TestNode(object): def _test1(self): return 0 class TestExtensionNode(mdp.ExtensionNode): extension_name = "__test" def _test1(self): return 1 def _test2(self): return 2 def _test3(self): return 3 class TestNodeExt(TestExtensionNode, TestNode): def _test2(self): return "2b" @mdp.extension_method("__test", TestNode) def _test4(self): return 4 test_node = TestNode() mdp.activate_extension("__test") assert test_node._test1() == 1 assert test_node._test2() == "2b" assert test_node._test3() == 3 assert test_node._test4() == 4 mdp.deactivate_extension("__test") assert test_node._test1() == 0 assert not hasattr(test_node, "_test2") assert not hasattr(test_node, "_test3") assert not hasattr(test_node, "_test4") def testExtensionInheritanceInjectionNonExtension(): """Test non_extension method injection.""" class TestExtensionNode(mdp.ExtensionNode): extension_name = "__test" def _execute(self): return 0 class TestNode(mdp.Node): # no _execute method pass class ExtendedTestNode(TestExtensionNode, TestNode): pass test_node = TestNode() mdp.activate_extension('__test') assert hasattr(test_node, "_non_extension__execute") mdp.deactivate_extension('__test') assert not hasattr(test_node, "_non_extension__execute") assert not hasattr(test_node, "_extension_for__execute") # test that the non-native _execute has been completely removed assert "_execute" not in test_node.__class__.__dict__ def testExtensionInheritanceInjectionNonExtension2(): """Test non_extension method injection.""" class TestExtensionNode(mdp.ExtensionNode): extension_name = "__test" def _execute(self): return 0 class TestNode(mdp.Node): def _execute(self): return 1 class ExtendedTestNode(TestExtensionNode, TestNode): pass test_node = TestNode() mdp.activate_extension('__test') # test that non-extended attribute has been added as well assert hasattr(test_node, "_non_extension__execute") mdp.deactivate_extension('__test') assert not hasattr(test_node, "_non_extension__execute") assert not hasattr(test_node, "_extension_for__execute") # test that the native _execute has been preserved assert "_execute" 
in test_node.__class__.__dict__ def testExtensionInheritanceTwoExtensions(): """Test non_extension injection for multiple extensions.""" class Test1ExtensionNode(mdp.ExtensionNode): extension_name = "__test1" def _execute(self): return 1 class Test2ExtensionNode(mdp.ExtensionNode): extension_name = "__test2" class Test3ExtensionNode(mdp.ExtensionNode): extension_name = "__test3" def _execute(self): return "3a" class TestNode1(mdp.Node): pass class TestNode2(TestNode1): pass class ExtendedTest1Node2(Test1ExtensionNode, TestNode2): pass class ExtendedTest2Node1(Test2ExtensionNode, TestNode1): def _execute(self): return 2 class ExtendedTest3Node1(Test3ExtensionNode, TestNode1): def _execute(self): return "3b" test_node = TestNode2() mdp.activate_extension('__test2') assert test_node._execute() == 2 mdp.deactivate_extension('__test2') # in this order TestNode2 should get execute from __test1, # the later addition by __test1 to TestNode1 doesn't matter mdp.activate_extensions(['__test1', '__test2']) assert test_node._execute() == 1 mdp.deactivate_extensions(['__test2', '__test1']) # now activate in inverse order # TestNode2 already gets _execute from __test2, but that is still # overriden by __test1, thats how its registered in _extensions mdp.activate_extensions(['__test2', '__test1']) assert test_node._execute() == 1 mdp.deactivate_extensions(['__test2', '__test1']) ## now the same with extension 3 mdp.activate_extension('__test3') assert test_node._execute() == "3b" mdp.deactivate_extension('__test3') # __test3 does not override, since the _execute slot for Node2 # was first filled by __test1 mdp.activate_extensions(['__test3', '__test1']) assert test_node._execute() == 1 mdp.deactivate_extensions(['__test3', '__test1']) # inverse order mdp.activate_extensions(['__test1', '__test3']) assert test_node._execute() == 1 mdp.deactivate_extensions(['__test2', '__test1']) mdp-3.3/mdp/test/test_fastica.py000066400000000000000000000034201203131624700167170ustar00rootroot00000000000000import mdp from _tools import * uniform = mdp.numx_rand.random def pytest_generate_tests(metafunc): _fastica_test_factory(metafunc) def _fastica_test_factory(metafunc): # generate FastICANode testcases fica_parm = {'approach': ['symm', 'defl'], 'g': ['pow3', 'tanh', 'gaus', 'skew'], 'fine_g': [None, 'pow3', 'tanh', 'gaus', 'skew'], 'sample_size': [1, 0.99999], 'mu': [1, 0.999999], } for parms in mdp.utils.orthogonal_permutations(fica_parm): # skew nonlinearity works only with skewed input data if parms['g'] != 'skew' and parms['fine_g'] == 'skew': continue if parms['g'] == 'skew' and parms['fine_g'] != 'skew': continue funcargs = dict(parms=parms) theid = fastICA_id(parms) metafunc.addcall(funcargs, id=theid) def fastICA_id(parms): app = 'AP:'+parms['approach'] nl = 'NL:'+parms['g'] fine_nl = 'FT:'+str(parms['fine_g']) if parms['sample_size'] == 1: compact = 'SA:01 ' else: compact = 'SA:<1 ' if parms['mu'] == 1: compact += 'S:01' else: compact += 'S:<1' desc = ' '.join([app, nl, fine_nl, compact]) return desc def test_FastICA(parms): if parms['g'] == 'skew': rand_func = mdp.numx_rand.exponential else: rand_func = uniform # try three times just to clear failures due to randomness for exc in (Exception, Exception, ()): try: ica = mdp.nodes.FastICANode(limit=10**(-decimal),**parms) ica2 = ica.copy() verify_ICANode(ica, rand_func=rand_func, vars=2) verify_ICANodeMatrices(ica2, rand_func=rand_func, vars=2) except exc: pass 
mdp-3.3/mdp/test/test_flows.py000066400000000000000000000270371203131624700164510ustar00rootroot00000000000000from __future__ import with_statement import tempfile import pickle import cPickle import os from _tools import * uniform = numx_rand.random def _get_default_flow(flow_class=mdp.Flow, node_class=BogusNode): flow = flow_class([node_class(),node_class(),node_class()]) return flow # CheckpointFunction used in testCheckpointFunction class _CheckpointCollectFunction(mdp.CheckpointFunction): def __init__(self): self.classes = [] # collect the classes of the nodes it checks def __call__(self, node): self.classes.append(node.__class__) def testFlow(): inp = numx.ones((100,3)) flow = _get_default_flow() for i in xrange(len(flow)): assert not flow.flow[i].is_training(), \ 'Training of node #%d has not been closed.' % i out = flow(inp) assert_array_equal(out,(2**len(flow))*inp) rec = flow.inverse(out) assert_array_equal(rec,inp) def testFlow_copy(): dummy_list = [1,2,3] flow = _get_default_flow() flow[0].dummy_attr = dummy_list copy_flow = flow.copy() assert flow[0].dummy_attr == copy_flow[0].dummy_attr, \ 'Flow copy method did not work' copy_flow[0].dummy_attr[0] = 10 assert flow[0].dummy_attr != copy_flow[0].dummy_attr, \ 'Flow copy method did not work' def test_Flow_copy_with_lambda(): generic_node = mdp.Node() generic_node.lambda_function = lambda: 1 generic_flow = mdp.Flow([generic_node]) generic_flow.copy() def testFlow_save(): dummy_list = [1,2,3] flow = _get_default_flow() flow[0].dummy_attr = dummy_list # test string save copy_flow_pic = flow.save(None) copy_flow = cPickle.loads(copy_flow_pic) assert flow[0].dummy_attr == copy_flow[0].dummy_attr, \ 'Flow save (string) method did not work' copy_flow[0].dummy_attr[0] = 10 assert flow[0].dummy_attr != copy_flow[0].dummy_attr, \ 'Flow save (string) method did not work' # test file save dummy_file = tempfile.mktemp(prefix='MDP_', suffix=".pic", dir=py.test.mdp_tempdirname) flow.save(dummy_file, protocol=1) dummy_file = open(dummy_file, 'rb') copy_flow = cPickle.load(dummy_file) assert flow[0].dummy_attr == copy_flow[0].dummy_attr, \ 'Flow save (file) method did not work' copy_flow[0].dummy_attr[0] = 10 assert flow[0].dummy_attr != copy_flow[0].dummy_attr, \ 'Flow save (file) method did not work' def testFlow_container_privmethods(): mat,mix,inp = get_random_mix(mat_dim=(100,3)) flow = _get_default_flow() # test __len__ assert_equal(len(flow), len(flow.flow)) # test __?etitem__, integer key for i in xrange(len(flow)): assert flow[i]==flow.flow[i], \ '__getitem__ returned wrong node %d' % i new_node = BogusNode() flow[i] = new_node assert flow[i]==new_node, '__setitem__ did not set node %d' % i # test __?etitem__, normal slice -> this fails for python < 2.2 and # if Flow is a subclassed from builtin 'list' flowslice = flow[0:2] assert isinstance(flowslice,mdp.Flow), \ '__getitem__ slice is not a Flow instance' assert len(flowslice) == 2, '__getitem__ returned wrong slice size' new_nodes_list = [BogusNode(), BogusNode()] flow[:2] = new_nodes_list assert (flow[0] == new_nodes_list[0]) and \ (flow[1] == new_nodes_list[1]), '__setitem__ did not set slice' # test__?etitem__, extended slice flowslice = flow[:2:1] assert isinstance(flowslice,mdp.Flow), \ '__getitem__ slice is not a Flow instance' assert len(flowslice) == 2, '__getitem__ returned wrong slice size' new_nodes_list = [BogusNode(), BogusNode()] flow[:2:1] = new_nodes_list assert (flow[0] == new_nodes_list[0]) and \ (flow[1] == new_nodes_list[1]), '__setitem__ did not set slice' # test 
__delitem__, integer key copy_flow = mdp.Flow(flow[:]) del copy_flow[0] assert len(copy_flow) == len(flow)-1, '__delitem__ did not del' for i in xrange(len(copy_flow)): assert copy_flow[i] == flow[i+1], '__delitem__ deleted wrong node' # test __delitem__, normal slice copy_flow = mdp.Flow(flow[:]) del copy_flow[:2] assert len(copy_flow) == len(flow)-2, \ '__delitem__ did not del normal slice' assert copy_flow[0] == flow[2], \ '__delitem__ deleted wrong normal slice' # test __delitem__, extended slice copy_flow = mdp.Flow(flow[:]) del copy_flow[:2:1] assert len(copy_flow) == len(flow)-2, \ '__delitem__ did not del extended slice' assert copy_flow[0] == flow[2], \ '__delitem__ deleted wrong extended slice' # test __add__ newflow = flow + flow assert len(newflow) == len(flow)*2, '__add__ did not work' def testFlow_container_listmethods(): # for all methods try using a node with right dimensionality # and one with wrong dimensionality flow = _get_default_flow() length = len(flow) # we test __contains__ and __iter__ with the for loop for node in flow: node.input_dim = 10 node.output_dim = 10 # append newnode = BogusNode(input_dim=10, output_dim=10) flow.append(newnode) assert_equal(len(flow), length+1) length = len(flow) try: newnode = BogusNode(input_dim=11) flow.append(newnode) raise Exception, 'flow.append appended inconsistent node' except ValueError: assert_equal(len(flow), length) # extend newflow = flow.copy() flow.extend(newflow) assert_equal(len(flow), 2*length) length = len(flow) try: newflow = _get_default_flow() for idx in xrange(len(newflow)): if idx == 0: newflow[idx].input_dim = 11 else: newflow[idx].input_dim = 10 newflow[idx].output_dim = 10 flow.extend(newflow) raise Exception, 'flow.extend appended inconsistent flow' except ValueError: assert_equal(len(flow), length) # insert newnode = BogusNode(input_dim=10, output_dim=None) flow.insert(2, newnode) assert_equal(len(flow), length+1) length = len(flow) try: newnode = BogusNode(output_dim=11) flow.insert(2, newnode) raise Exception, 'flow.insert inserted inconsistent node' except ValueError: assert_equal(len(flow), length) # pop oldnode = flow[5] popnode = flow.pop(5) assert oldnode == popnode, 'flow.pop popped wrong node out' assert_equal(len(flow), length-1) # pop - test Flow._check_nodes_consistency flow = _get_default_flow() + _get_default_flow() length = len(flow) flow[3].output_dim = 2 flow[4].input_dim = 2 flow[4].output_dim = 3 flow[5].input_dim = 3 flow._check_nodes_consistency(flow.flow) try: nottobepopped = flow.pop(4) raise Exception, 'flow.pop left inconsistent flow' except ValueError: assert_equal(len(flow), length) def testFlow_append_node_copy(): # when appending a node to a flow, # we don't want the flow to be a copy! 
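    # i.e. ``flow += node`` must store a reference to the original node
    # object rather than a copy; the identity check (``is``) below verifies
    # exactly that.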
node1 = BogusNode() node2 = BogusNode() flow = mdp.Flow([node1]) flow += node2 assert flow[0] is node1 def testFlow_iadd(): # check that in-place adding to flow does not return new flow node1 = BogusNode() node2 = BogusNode() node3 = BogusNode() flow = mdp.Flow([node1]) oldflow = flow flow += node2 assert oldflow is flow flow += mdp.Flow([node3]) assert oldflow is flow def testFlow_as_sum_of_nodes(): node1 = BogusNode() node2 = BogusNode() flow = node1+node2 assert type(flow) is mdp.Flow assert len(flow) == 2 node3 = BogusNode() flow = node1+node2+node3 assert type(flow) is mdp.Flow assert len(flow) == 3 node4 = BogusNode() flow = node4 + flow assert type(flow) is mdp.Flow assert len(flow) == 4 def testFlowWrongItarableException(): samples = mdp.numx_rand.random((100,10)) labels = mdp.numx.arange(100) flow = mdp.Flow([mdp.nodes.PCANode(), mdp.nodes.FDANode()]) try: flow.train([[samples], [samples, labels]]) # correct line would be (note the second iterable): # flow.train([[[samples]], [[samples, labels]]]) # should trigger exception for missing train argument for FDANode err = "Flow did not raise FlowException for wrong iterable." raise Exception(err) except mdp.FlowException: pass try: # try to give one argument too much! flow.train([[[samples]], [[samples, labels, labels]]]) err = "Flow did not raise FlowException for wrong iterable." raise Exception(err) except mdp.FlowException: pass def testCheckpointFlow(): lst = [] # checkpoint function, it collects a '1' for each call def cfunc(node, lst = lst): lst.append(1) mat,mix,inp = get_random_mix(mat_dim=(100,3)) flow = _get_default_flow(flow_class = mdp.CheckpointFlow, node_class = BogusNodeTrainable) flow.train(inp, cfunc) # assert len(lst)==len(flow), \ 'The checkpoint function has been called %d times instead of %d times.' 
% (len(lst), len(flow)) def testCheckpointFunction(): cfunc = _CheckpointCollectFunction() mat,mix,inp = get_random_mix(mat_dim=(100,3)) flow = _get_default_flow(flow_class = mdp.CheckpointFlow, node_class = BogusNodeTrainable) flow.train(inp, cfunc) # for i in xrange(len(flow)): assert flow[i].__class__==cfunc.classes[i], 'Wrong class collected' def testCrashRecovery(): flow = mdp.Flow([BogusExceptNode()]) flow.set_crash_recovery(1) try: flow.train(mdp.numx.zeros((1,2), 'd')) except Exception, e: assert isinstance(e,mdp.FlowExceptionCR) with open(e.filename, 'rb') as fl: pic_flow = pickle.load(fl) os.remove(e.filename) assert flow[0].bogus_attr == pic_flow[0].bogus_attr flow.set_crash_recovery(0) try: flow.execute([None]) except Exception, e: assert isinstance(e,mdp.FlowExceptionCR) assert not hasattr(e,'filename') def testCrashRecoveryException(): a = 3 try: raise mdp.CrashRecoveryException('bogus errstr', a, StandardError()) except mdp.CrashRecoveryException, e: filename1 = e.dump() filename2 = e.dump(tempfile.mkstemp(prefix='MDP_', dir=py.test.mdp_tempdirname)[1]) assert isinstance(e.parent_exception, StandardError) for fname in filename1, filename2: fl = open(fname, 'rb') obj = pickle.load(fl) fl.close() try: os.remove(fname) except Exception: pass assert obj == a def testMultiplePhases(): # test basic multiple phase sequence flow = mdp.Flow([BogusMultiNode()]) flow.train(mdp.numx.zeros((1,2), 'd')) assert flow[0].visited == [1,2,3,4] # try to use an iterable to train it, check for rewinds class TestIterable: def __init__(self): self.used = 0 def __iter__(self): self.used += 1 yield mdp.numx.zeros((1,2), 'd') flow = mdp.Flow([BogusMultiNode()]) iterable = TestIterable() flow.train([iterable]) assert iterable.used == 2 # should not work with an iterator def testgenerator(): yield mdp.numx.zeros((1,2), 'd') flow = mdp.Flow([BogusMultiNode()]) try: flow.train([testgenerator()]) raise Exception('Expected mdp.FlowException') except mdp.FlowException: pass mdp-3.3/mdp/test/test_graph.py000066400000000000000000000121431203131624700164100ustar00rootroot00000000000000from mdp import graph def testAddNode(): # add_node g = graph.Graph() nnodes = 5 for i in xrange(nnodes): g.add_node() assert len(g.nodes)==nnodes, "Wrong number of nodes, expected: %d, got :%d" % (nnodes, len(g.nodes)) # add nodes g = graph.Graph() g.add_nodes(5) assert len(g.nodes)==nnodes, "Wrong number of nodes, expected: %d, got :%d" % (nnodes, len(g.nodes)) g = graph.Graph() g.add_nodes([None] * nnodes) assert len(g.nodes)==nnodes, "Wrong number of nodes, expected: %d, got :%d" % (nnodes, len(g.nodes)) def testAddEdge(): g = graph.Graph() nnodes = 5 nds = [g.add_node() for i in xrange(nnodes)] eds = [g.add_edge(nds[i], nds[i+1]) for i in xrange(nnodes-1)] assert len(g.edges)==nnodes-1, "Wrong number of edges, expected: %d, got :%d" % (nnodes-1, len(g.edges)) # the last nnodes-1 nodes should have in_degree==1, # and the first nnodes-1 out_degree==1 for i in xrange(nnodes): if i>0: assert nds[i].in_degree()==1, "Wrong in_degree, expected: 1, got: %d." 
% nds[i].in_degree() if i 0: flownode.train(x) flownode.stop_training() flownode.execute(x) def test_FlowNode_trainability(): flow = mdp.Flow([mdp.nodes.PolynomialExpansionNode(degree=2)]) flownode = mh.FlowNode(flow) assert flownode.is_trainable() is False flow = mdp.Flow([mdp.nodes.PolynomialExpansionNode(degree=2), mdp.nodes.PCANode(output_dim=15), mdp.nodes.PolynomialExpansionNode(degree=2), mdp.nodes.PCANode(output_dim=3)]) flownode = mh.FlowNode(flow) assert flownode.is_trainable() is True def test_FlowNode_invertibility(): flow = mdp.Flow([mdp.nodes.PolynomialExpansionNode(degree=2)]) flownode = mh.FlowNode(flow) assert flownode.is_invertible() is False flow = mdp.Flow([mdp.nodes.PCANode(output_dim=15), mdp.nodes.SFANode(), mdp.nodes.PCANode(output_dim=3)]) flownode = mh.FlowNode(flow) assert flownode.is_invertible() is True def test_FlowNode_pretrained_node(): x = numx_rand.random([100,10]) pretrained_node = mdp.nodes.PCANode(output_dim=6) pretrained_node.train(x) pretrained_node.stop_training() flow = mdp.Flow([pretrained_node, mdp.nodes.PolynomialExpansionNode(degree=2), mdp.nodes.PCANode(output_dim=3)]) flownode = mh.FlowNode(flow) while flownode.get_remaining_train_phase() > 0: flownode.train(x) flownode.stop_training() flownode.execute(x) def test_FlowNode_fix_nodes_dimensions1(): x = numx_rand.random([100,10]) last_node = mdp.nodes.IdentityNode() flow = mdp.Flow([mdp.nodes.PCANode(output_dim=3), mdp.nodes.IdentityNode(), last_node]) flownode = mh.FlowNode(flow) flownode.train(x) flownode.stop_training() # check that the dimensions of NoiseNode and FlowNode where all set # by calling _fix_nodes_dimensions assert flownode.output_dim == 3 assert last_node.input_dim == 3 assert last_node.output_dim == 3 def test_FlowNode_fix_nodes_dimensions2(): flow = mdp.Flow([mdp.nodes.IdentityNode(), mdp.nodes.IdentityNode()]) flownode = mh.FlowNode(flow) # this should fail, since the internal nodes don't have fixed dims py.test.raises(mdp.InconsistentDimException, lambda: flownode.set_output_dim(10)) x = numx_rand.random([100,10]) flownode.execute(x) assert flownode.output_dim == 10 def test_FlowNode_fix_nodes_dimensions3(): flow = mdp.Flow([mdp.nodes.IdentityNode()]) flownode = mh.FlowNode(flow) # for a single node this should not raise an Exception flownode.set_output_dim(10) x = numx_rand.random([100,10]) flownode.execute(x) def test_FlowNode_pretrained_flow(): flow = mdp.Flow([mdp.nodes.PolynomialExpansionNode(degree=2), mdp.nodes.PCANode(output_dim=15, reduce=True), mdp.nodes.PolynomialExpansionNode(degree=2), mdp.nodes.PCANode(output_dim=3, reduce=True)]) flownode = mh.FlowNode(flow) x = numx_rand.random([300,20]) while flownode.get_remaining_train_phase() > 0: flownode.train(x) flownode.stop_training() # build new flownode with the trained nodes flownode = mh.FlowNode(flow) assert not flownode.is_training() flownode.execute(x) def test_FlowNode_copy1(): flow = mdp.Flow([mdp.nodes.PCANode(), mdp.nodes.SFANode()]) flownode = mh.FlowNode(flow) flownode.copy() def test_FlowNode_copy2(): # Test that the FlowNode copy method delegates to internal nodes. 
class CopyFailException(Exception): pass class CopyFailNode(mdp.Node): def copy(self, protocol=None): raise CopyFailException() flow = mdp.Flow([mdp.Node(), CopyFailNode()]) flownode = mh.FlowNode(flow) py.test.raises(CopyFailException, flownode.copy) def _pca_nodes(input_dims, output_dims): return [mdp.nodes.PCANode(input_dim=ind, output_dim=outd) for ind, outd in zip(input_dims, output_dims)] def test_Layer(): layer = mh.Layer(_pca_nodes([10, 17, 3], [5, 3, 1])) x = numx_rand.random([100,30]).astype('f') layer.train(x) y = layer.execute(x) assert layer.dtype == numx.dtype('f') assert y.dtype == layer.dtype def test_Layer_invertibility(): layer = mh.Layer(_pca_nodes([10, 17, 3], [10, 17, 3])) x = numx_rand.random([100,30]).astype('f') layer.train(x) y = layer.execute(x) x_inverse = layer.inverse(y) assert numx.all(numx.absolute(x - x_inverse) < 0.001) def test_Layer_invertibility2(): # reduce the dimensions, so input_dim != output_dim layer = mh.Layer(_pca_nodes([10, 17, 3], [8, 12, 3])) x = numx_rand.random([100,30]).astype('f') layer.train(x) y = layer.execute(x) layer.inverse(y) def test_SameInputLayer(): layer = mh.SameInputLayer(_pca_nodes([10, 10, 10], [5, 3, 1])) x = numx_rand.random([100,10]).astype('f') layer.train(x) y = layer.execute(x) assert layer.dtype == numx.dtype('f') assert y.dtype == layer.dtype def test_CloneLayer(): node = mdp.nodes.PCANode(input_dim=10, output_dim=5) x = numx_rand.random([10,70]).astype('f') layer = mh.CloneLayer(node, 7) layer.train(x) y = layer.execute(x) assert layer.dtype == numx.dtype('f') assert y.dtype == layer.dtype def test_SwitchboardInverse1(): sboard = mh.Switchboard(input_dim=3, connections=[2,0,1]) assert sboard.is_invertible() y = numx.array([[2,3,4],[5,6,7]]) x = sboard.inverse(y) assert numx.all(x == numx.array([[3,4,2],[6,7,5]])) def testSwitchboardInverse2(): sboard = mh.Switchboard(input_dim=3, connections=[2,1,1]) assert not sboard.is_invertible() ## Tests for MeanInverseSwitchboard ## def test_MeanInverseSwitchboard1(): sboard = mh.MeanInverseSwitchboard(input_dim=3, connections=[0,0,2]) assert sboard.is_invertible() y = numx.array([[2,4,3],[1,1,7]]) x = sboard.inverse(y) assert numx.all(x == numx.array([[3,0,3],[1,0,7]])) def test_MeanInverseSwitchboard2(): sboard = mh.MeanInverseSwitchboard(input_dim=3, connections=[1,1,1,2,2]) assert sboard.is_invertible() y = numx.array([[2,4,0,1,1],[3,3,3,2,4]]) x = sboard.inverse(y) assert numx.all(x == numx.array([[0,2,1],[0,3,3]])) ## Tests for ChannelSwitchboard ## def testOutChannelInput(): sboard = mh.ChannelSwitchboard(input_dim=6, connections=[5,5, 0,1], out_channel_dim=2, in_channel_dim=2) assert numx.all(sboard.get_out_channel_input(0) == numx.array([5,5])) assert numx.all(sboard.get_out_channel_input(1) == numx.array([0,1])) def testOutChannelsInputChannels(): sboard = mh.ChannelSwitchboard(input_dim=6, connections=[5,5, # out chan 1 0,1], # out chan 2 out_channel_dim=2, in_channel_dim=2) # note that there are 3 input channels assert numx.all(sboard.get_out_channels_input_channels(0) == numx.array([2])) assert numx.all(sboard.get_out_channels_input_channels(1) == numx.array([0])) assert numx.all(sboard.get_out_channels_input_channels([0,1]) == numx.array([0,2])) ## Tests for Rectangular2dSwitchboard ## def testRect2dRouting1(): sboard = mh.Rectangular2dSwitchboard(in_channels_xy=(3,2), in_channel_dim=2, field_channels_xy=(2,1), field_spacing_xy=1) assert numx.all(sboard.connections == numx.array([0, 1, 2, 3, 2, 3, 4, 5, 6, 7, 8, 9, 8, 9, 10, 11])) x = numx.array([range(0, 
sboard.input_dim), range(101, 101+sboard.input_dim)]) sboard.execute(x) # test generated switchboard channel_sboard = sboard.get_out_channel_node(0) channel_sboard.execute(x) def testRect2dRouting2(): sboard = mh.Rectangular2dSwitchboard(in_channels_xy=(2,4), in_channel_dim=1, field_channels_xy=(1,2), field_spacing_xy=(1,2)) assert numx.all(sboard.connections == numx.array([0, 2, 1, 3, 4, 6, 5, 7])) x = numx.array([range(0, sboard.input_dim), range(101, 101+sboard.input_dim)]) sboard.execute(x) # test generated switchboard channel_sboard = sboard.get_out_channel_node(0) channel_sboard.execute(x) def testRect2dRouting3(): sboard = mh.Rectangular2dSwitchboard(in_channels_xy=(2,4), in_channel_dim=1, field_channels_xy=2, field_spacing_xy=(1,2)) assert (sboard.connections == numx.array([0, 1, 2, 3, 4, 5, 6, 7])).all() def testRect2dRouting4(): sboard = mh.Rectangular2dSwitchboard(in_channels_xy=4, in_channel_dim=1, field_channels_xy=(3,2), field_spacing_xy=(1,2)) assert (sboard.connections == numx.array([0, 1, 2, 4, 5, 6, 1, 2, 3, 5, 6, 7, 8, 9, 10, 12, 13, 14, 9, 10, 11, 13, 14, 15])).all() def testRect2d_get_out_channel_node(): sboard = mh.Rectangular2dSwitchboard(in_channels_xy=(5,4), in_channel_dim=2, field_channels_xy=(3,2), field_spacing_xy=(1,2)) x = numx.array([range(0, sboard.input_dim), range(101, 101+sboard.input_dim)]) y = sboard.execute(x) # routing layer nodes = [sboard.get_out_channel_node(index) for index in xrange(sboard.output_channels)] layer = mh.SameInputLayer(nodes) layer_y = layer.execute(x) assert (y == layer_y).all() def test_Rect2d_exception_1(): bad_args = dict(in_channels_xy=(12,8), # 3 is the problematic value: field_channels_xy=(4,3), field_spacing_xy=2, in_channel_dim=3, ignore_cover=False) with py.test.raises(mh.Rectangular2dSwitchboardException): mh.Rectangular2dSwitchboard(**bad_args) def test_Rect2d_exception_2(): bad_args = dict(in_channels_xy=(12,8), # 9 is the problematic value: field_channels_xy=(4,9), field_spacing_xy=2, in_channel_dim=3, ignore_cover=False) with py.test.raises(mh.Rectangular2dSwitchboardException): mh.Rectangular2dSwitchboard(**bad_args) def test_Rect2d_exception_3(): bad_args = dict(in_channels_xy=(12,8), # 9 is the problematic value: field_channels_xy=(4,9), field_spacing_xy=2, in_channel_dim=3, ignore_cover=True) with py.test.raises(mh.Rectangular2dSwitchboardException): mh.Rectangular2dSwitchboard(**bad_args) ## Tests for DoubleRect2dSwitchboard ## def test_Rect_double_routing_1(): sboard = mh.DoubleRect2dSwitchboard(in_channels_xy=4, field_channels_xy=2, in_channel_dim=1) assert (sboard.connections == numx.array([0,1,4,5, 2,3,6,7, 8,9,12,13, 10,11,14,15, # uneven fields 5,6,9,10])).all() x = numx.array([range(0, sboard.input_dim), range(101, 101+sboard.input_dim)]) sboard.execute(x) def test_Rect_double_routing_2(): sboard = mh.DoubleRect2dSwitchboard(in_channels_xy=(6,4), field_channels_xy=(2,2), in_channel_dim=1) assert (sboard.connections == numx.array([0,1,6,7, 2,3,8,9, 4,5,10,11, 12,13,18,19, 14,15,20,21, 16,17,22,23, # uneven fields 7,8,13,14, 9,10,15,16])).all() x = numx.array([range(0, sboard.input_dim), range(101, 101+sboard.input_dim)]) sboard.execute(x) def test_Rect_double_routing_3(): sboard = mh.DoubleRect2dSwitchboard(in_channels_xy=(4,6), field_channels_xy=2, in_channel_dim=1) assert (sboard.connections == numx.array([0,1,4,5, 2,3,6,7, 8,9,12,13, 10,11,14,15, 16,17,20,21, 18,19,22,23, # uneven fields 5,6,9,10, 13,14,17,18])).all() x = numx.array([range(0, sboard.input_dim), range(101, 101+sboard.input_dim)]) 
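    # smoke test: route the index-valued toy input through the switchboard
    # (checks that the connection list above is consistent with the input dim)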
sboard.execute(x) ## Tests for DoubleRhomb2dSwitchboard ## def test_DoubleRhomb_routing_1(): sboard = mh.DoubleRhomb2dSwitchboard(long_in_channels_xy=(3,2), diag_field_channels=2, in_channel_dim=1) assert (sboard.connections == numx.array([1,6,7,4])).all() x = numx.array([range(0, sboard.input_dim), range(101, 101+sboard.input_dim)]) sboard.execute(x) def test_DoubleRhomd_routing_2(): sboard = mh.DoubleRhomb2dSwitchboard(long_in_channels_xy=(2,3), diag_field_channels=2, in_channel_dim=1) assert (sboard.connections == numx.array([6,2,3,7])).all() x = numx.array([range(0, sboard.input_dim), range(101, 101+sboard.input_dim)]) sboard.execute(x) def test_DoubleRhomd_routing_3(): sboard = mh.DoubleRhomb2dSwitchboard(long_in_channels_xy=(4,2), diag_field_channels=2, in_channel_dim=1) assert (sboard.connections == numx.array([1,8,9,5, 2,9,10,6])).all() x = numx.array([range(0, sboard.input_dim), range(101, 101+sboard.input_dim)]) sboard.execute(x) def test_DoubleRhomd_routing_4(): sboard = mh.DoubleRhomb2dSwitchboard(long_in_channels_xy=(2,4), diag_field_channels=2, in_channel_dim=1) assert (sboard.connections == numx.array([8,2,3,9, 9,4,5,10])).all() x = numx.array([range(0, sboard.input_dim), range(101, 101+sboard.input_dim)]) sboard.execute(x) def test_DoubleRhomd_routing_5(): sboard = mh.DoubleRhomb2dSwitchboard(long_in_channels_xy=4, diag_field_channels=2, in_channel_dim=1) assert (sboard.connections == numx.array([1,16,17,5, 2,17,18,6, 5,19,20,9, 6,20,21,10, 9,22,23,13, 10,23,24,14])).all() x = numx.array([range(0, sboard.input_dim), range(101, 101+sboard.input_dim)]) sboard.execute(x) def test_DoubleRhomd_routing_6(): sboard = mh.DoubleRhomb2dSwitchboard(long_in_channels_xy=(7,4), diag_field_channels=4, in_channel_dim=1) x = numx.array([range(0, sboard.input_dim), range(101, 101+sboard.input_dim)]) sboard.execute(x) def test_DoubleRhomd_routing_7(): sboard = mh.DoubleRhomb2dSwitchboard(long_in_channels_xy=(4,7), diag_field_channels=4, in_channel_dim=1) x = numx.array([range(0, sboard.input_dim), range(101, 101+sboard.input_dim)]) sboard.execute(x) def test_DoubleRhomd_routing_8(): sboard = mh.DoubleRhomb2dSwitchboard(long_in_channels_xy=(6,7), diag_field_channels=4, in_channel_dim=1) x = numx.array([range(0, sboard.input_dim), range(101, 101+sboard.input_dim)]) sboard.execute(x) def test_hinet_simple_net(): switchboard = mh.Rectangular2dSwitchboard(in_channels_xy=(12,8), field_channels_xy=4, field_spacing_xy=2, in_channel_dim=3) node = mdp.nodes.PCANode(input_dim=4*4*3, output_dim=5) flownode = mh.FlowNode(mdp.Flow([node,])) layer = mh.CloneLayer(flownode, switchboard.output_channels) flow = mdp.Flow([switchboard, layer]) x = numx_rand.random([5, switchboard.input_dim]) flow.train(x) def pytest_funcarg__noisenode(request): return mdp.nodes.NoiseNode(input_dim=20*20, noise_args=(0, 0.0001)) def test_SFA_net(noisenode): sfa_node = mdp.nodes.SFANode(input_dim=20*20, output_dim=10, dtype='f') switchboard = mh.Rectangular2dSwitchboard(in_channels_xy=100, field_channels_xy=20, field_spacing_xy=10) flownode = mh.FlowNode(mdp.Flow([noisenode, sfa_node])) sfa_layer = mh.CloneLayer(flownode, switchboard.output_channels) flow = mdp.Flow([switchboard, sfa_layer]) train_gen = numx.cast['f'](numx_rand.random((3, 10, 100*100))) flow.train([None, train_gen]) def testHiNetHTML(noisenode): # create some flow for testing sfa_node = mdp.nodes.SFANode(input_dim=20*20, output_dim=10) switchboard = mh.Rectangular2dSwitchboard(in_channels_xy=100, field_channels_xy=20, field_spacing_xy=10) flownode = 
mh.FlowNode(mdp.Flow([noisenode, sfa_node])) sfa_layer = mh.CloneLayer(flownode, switchboard.output_channels) flow = mdp.Flow([switchboard, sfa_layer]) # create dummy file to write the HTML representation html_file = StringIO.StringIO() hinet_html = mdp.hinet.HiNetHTMLVisitor(html_file) hinet_html.convert_flow(flow) html_file.close() def testHiNetXHTML(): # create some flow for testing sfa_node = mdp.nodes.SFANode(input_dim=20*20, output_dim=10) flow = mdp.Flow([sfa_node]) # create dummy file to write the HTML representation html_file = StringIO.StringIO() hinet_html = mdp.hinet.HiNetXHTMLVisitor(html_file) hinet_html.convert_flow(flow) html_file.close() mdp-3.3/mdp/test/test_hinet_generic.py000066400000000000000000000026041203131624700201130ustar00rootroot00000000000000from __future__ import with_statement import mdp.hinet as mh from _tools import * from test_nodes_generic import ( generic_test_factory, test_dtype_consistency, # this test fails due to the checks in _set_output_dim #test_outputdim_consistency, test_dimdtypeset, #test_inverse, # ???: test_inverse is not on the list, because it would # skip all the nodes in NODES anyway, because they're not # always invertible ) def _get_new_flow(): return mdp.Flow([mdp.nodes.NoiseNode(), mdp.nodes.SFANode()]) def _get_new_nodes(): return [mdp.nodes.CuBICANode(input_dim=1, whitened=True), mdp.nodes.CuBICANode(input_dim=2, whitened=True), mdp.nodes.CuBICANode(input_dim=1, whitened=True)] def _get_single_node(): return mdp.nodes.CuBICANode(input_dim=2, whitened=True) def hinet_get_random_mix(): return get_random_mix(type='d', mat_dim=(500,4))[2] NODES = [dict(klass=mh.FlowNode, inp_arg_gen=hinet_get_random_mix, init_args = [_get_new_flow]), dict(klass=mh.Layer, inp_arg_gen=hinet_get_random_mix, init_args = [_get_new_nodes]), dict(klass=mh.CloneLayer, inp_arg_gen=hinet_get_random_mix, init_args = [_get_single_node, 2]), ] def pytest_generate_tests(metafunc): generic_test_factory(NODES, metafunc) mdp-3.3/mdp/test/test_metaclass_and_extensions.py000066400000000000000000000061261203131624700223700ustar00rootroot00000000000000from __future__ import with_statement import mdp import inspect import py.test X = mdp.numx_rand.random(size=(500,5)) def get_signature(func): regargs, varargs, varkwargs, defaults = inspect.getargspec(func) return inspect.formatargspec(regargs, varargs, varkwargs, defaults, formatvalue=lambda value: "")[1:-1] def teardown_function(function): """Deactivate all extensions and remove testing extensions.""" mdp.deactivate_extensions(mdp.get_active_extensions()) for key in mdp.get_extensions().copy(): if key.startswith("__test"): del mdp.get_extensions()[key] def test_signatures_same_no_arguments(): class AncestorNode(mdp.Node): def _train(self, x, foo2=None): self.foo2 = None class ChildNode(AncestorNode): def _train(self, x, foo=None): self.foo = foo cnode = ChildNode() assert get_signature(cnode.train) == 'self, x, foo' assert get_signature(cnode._train) == 'self, x, foo' cnode.train(X, foo=42) assert cnode.foo == 42 py.test.raises(AttributeError, 'cnode.foo2') def test_signatures_more_arguments(): class AncestorNode(mdp.Node): def _train(self, x): self.foo2 = None class ChildNode(AncestorNode): def _train(self, x, foo=None): self.foo = foo cnode = ChildNode() assert get_signature(cnode.train) == 'self, x, foo' assert get_signature(cnode.train._undecorated_) == 'self, x, *args, **kwargs' assert get_signature(cnode._train) == 'self, x, foo' # next two lines should give the same: cnode.train._undecorated_(cnode, X, foo=42) 
cnode.train(X, foo=42) assert cnode.foo == 42 py.test.raises(AttributeError, 'cnode.foo2') def test_signatures_less_arguments(): class AncestorNode(mdp.Node): def _train(self, x, foo=None): self.foo = None class ChildNode(AncestorNode): def _train(self, x): self.moo = 3 cnode = ChildNode() assert get_signature(cnode.train) == 'self, x' assert get_signature(cnode.train._undecorated_) == 'self, x, *args, **kwargs' assert get_signature(cnode._train) == 'self, x' # next two lines should give the same: cnode.train._undecorated_(cnode, X) cnode.train(X) assert cnode.moo == 3 py.test.raises(AttributeError, 'cnode.foo') def test_simple_extension(): class TestExtensionNode(mdp.ExtensionNode, mdp.nodes.IdentityNode): extension_name = "__test" def execute(self, x): self.foo = 42 return self._non_extension_execute(x) class Dummy(mdp.nodes.IdentityNode): def _execute(self, x): return 42 node = mdp.nodes.IdentityNode() assert mdp.numx.all(node.execute(X) == X) assert not hasattr(node,'foo') with mdp.extension("__test"): assert mdp.numx.all(node.execute(X) == X) assert hasattr(node,'foo') node = Dummy() assert not hasattr(node,'foo') assert node.execute(X) == 42 with mdp.extension("__test"): assert node.execute(X) == 42 assert hasattr(node,'foo') mdp-3.3/mdp/test/test_namespace_fixups.py000066400000000000000000000023211203131624700206360ustar00rootroot00000000000000import sys from _tools import * def _list_module(module): try: names = module.__all__ except AttributeError: names = dir(module) for name in names: if name.startswith('_'): continue item = getattr(module, name) try: modname = getattr(item, '__module__') except AttributeError: continue if hasattr(item, '__module__'): yield modname, name, item MODULES = ['mdp', 'mdp.nodes', 'mdp.hinet', 'mdp.parallel', 'mdp.graph', 'mdp.utils', ] def pytest_generate_tests(metafunc): generate_calls(MODULES, metafunc) def generate_calls(modules, metafunc): for module in modules: metafunc.addcall(funcargs=dict(parentname=module), id=module) def test_exports(parentname): rootname = parentname.split('.')[-1] module = sys.modules[parentname] for modname, itemname, item in _list_module(module): parts = modname.split('.') assert (parts[0] != rootname or modname == parentname), \ '%s.%s.__module_ == %s != %s' % ( parentname, itemname, item.__module__, parentname) mdp-3.3/mdp/test/test_node_covariance.py000066400000000000000000000154451203131624700204360ustar00rootroot00000000000000from _tools import * TESTYPES = [numx.dtype('d'), numx.dtype('f')] def testCovarianceMatrix(): mat,mix,inp = get_random_mix() des_cov = numx.cov(inp, rowvar=0) des_avg = mean(inp,axis=0) des_tlen = inp.shape[0] act_cov = utils.CovarianceMatrix() act_cov.update(inp) act_cov,act_avg,act_tlen = act_cov.fix() assert_array_almost_equal(act_tlen,des_tlen, decimal) assert_array_almost_equal(act_avg,des_avg, decimal) assert_array_almost_equal(act_cov,des_cov, decimal) def testDelayCovarianceMatrix(): dt = 5 mat,mix,inp = get_random_mix() des_tlen = inp.shape[0] - dt des_avg = mean(inp[:des_tlen,:],axis=0) des_avg_dt = mean(inp[dt:,:],axis=0) des_cov = utils.cov2(inp[:des_tlen,:], inp[dt:,:]) act_cov = utils.DelayCovarianceMatrix(dt) act_cov.update(inp) act_cov,act_avg,act_avg_dt,act_tlen = act_cov.fix() assert_array_almost_equal(act_tlen,des_tlen, decimal-1) assert_array_almost_equal(act_avg,des_avg, decimal-1) assert_array_almost_equal(act_avg_dt,des_avg_dt, decimal-1) assert_array_almost_equal(act_cov,des_cov, decimal-1) def testCrossCovarianceMatrix(): mat,mix,inp1 = get_random_mix(mat_dim=(500,5)) 
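    # second input stream: same number of samples (500), but a different
    # dimensionality (3 instead of 5)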
mat,mix,inp2 = get_random_mix(mat_dim=(500,3)) des_tlen = inp1.shape[0] des_avg1 = mean(inp1, axis=0) des_avg2 = mean(inp2, axis=0) des_cov = utils.cov2(inp1, inp2) act_cov = utils.CrossCovarianceMatrix() act_cov.update(inp1, inp2) act_cov, act_avg1, act_avg2, act_tlen = act_cov.fix() assert_almost_equal(act_tlen,des_tlen, decimal-1) assert_array_almost_equal(act_avg1,des_avg1, decimal-1) assert_array_almost_equal(act_avg2,des_avg2, decimal-1) assert_array_almost_equal(act_cov,des_cov, decimal-1) def testdtypeCovarianceMatrix(): for type in TESTYPES: mat,mix,inp = get_random_mix(type='d') cov = utils.CovarianceMatrix(dtype=type) cov.update(inp) cov,avg,tlen = cov.fix() assert_type_equal(cov.dtype,type) assert_type_equal(avg.dtype,type) def testdtypeDelayCovarianceMatrix(): for type in TESTYPES: dt = 5 mat,mix,inp = get_random_mix(type='d') cov = utils.DelayCovarianceMatrix(dt=dt, dtype=type) cov.update(inp) cov,avg,avg_dt,tlen = cov.fix() assert_type_equal(cov.dtype,type) assert_type_equal(avg.dtype,type) assert_type_equal(avg_dt.dtype,type) def testdtypeCrossCovarianceMatrix(): for type in TESTYPES: mat,mix,inp = get_random_mix(type='d') cov = utils.CrossCovarianceMatrix(dtype=type) cov.update(inp, inp) cov,avg1,avg2,tlen = cov.fix() assert_type_equal(cov.dtype,type) assert_type_equal(avg1.dtype,type) assert_type_equal(avg2.dtype,type) def testRoundOffWarningCovMatrix(): import warnings warnings.filterwarnings("error",'.*',mdp.MDPWarning) for type in ['f','d']: inp = uniform((1,2)) cov = utils.CovarianceMatrix(dtype=type) cov._tlen = int(1e+15) cov.update(inp) try: cov.fix() assert False, 'RoundOff warning did not work' except mdp.MDPWarning: pass # hope to reset the previous state... warnings.filterwarnings("once",'.*',mdp.MDPWarning) def testMultipleCovarianceMatricesDtypeAndFuncs(): for type in TESTYPES: dec = testdecimals[type] res_type = _MultipleCovarianceMatrices_funcs(type,dec) assert_type_equal(type,res_type) def _MultipleCovarianceMatrices_funcs(dtype, decimals): def assert_all(des,act, dec=decimals): # check list of matrices equals multcov array for x in xrange(nmat): assert_array_almost_equal_diff(des[x],act.covs[:,:,x],dec) def rotate(mat,angle,indices): # perform a givens rotation of a single matrix [i,j] = indices c, s = numx.cos(angle), numx.sin(angle) mat_i, mat_j = mat[:,i].copy(), mat[:,j].copy() mat[:,i], mat[:,j] = c*mat_i-s*mat_j, s*mat_i+c*mat_j mat_i, mat_j = mat[i,:].copy(), mat[j,:].copy() mat[i,:], mat[j,:] = c*mat_i-s*mat_j, s*mat_i+c*mat_j return mat.copy() def permute(mat,indices): # permute rows and cols of a single matrix [i,j] = indices mat_i, mat_j = mat[:,i].copy(), mat[:,j].copy() mat[:,i], mat[:,j] = mat_j, mat_i mat_i, mat_j = mat[i,:].copy(), mat[j,:].copy() mat[i,:], mat[j,:] = mat_j, mat_i return mat.copy() dim = 7 nmat = 13 # create mult cov mat covs = [uniform((dim,dim)).astype(dtype) for x in xrange(nmat)] mult_cov = mdp.utils.MultipleCovarianceMatrices(covs) assert_equal(nmat,mult_cov.ncovs) # test symmetrize sym_covs = [0.5*(x+x.T) for x in covs] mult_cov.symmetrize() assert_all(sym_covs,mult_cov) # test weight weights = uniform(nmat) w_covs = [weights[x]*sym_covs[x] for x in xrange(nmat)] mult_cov.weight(weights) assert_all(w_covs,mult_cov) # test rotate angle = uniform()*2*numx.pi idx = numx_rand.permutation(dim)[:2] rot_covs = [rotate(x,angle,idx) for x in w_covs] mult_cov.rotate(angle,idx) assert_all(w_covs,mult_cov) # test permute per_covs = [permute(x,idx) for x in rot_covs] mult_cov.permute(idx) assert_all(per_covs,mult_cov) # test 
transform trans = uniform((dim,dim)) trans_covs = [mult(mult(trans.T,x),trans) for x in per_covs] mult_cov.transform(trans) assert_all(trans_covs,mult_cov) # test copy cp_mult_cov = mult_cov.copy() assert_array_equal(mult_cov.covs,cp_mult_cov.covs) # check that we didn't got a reference mult_cov[0][0,0] = 1000 assert int(cp_mult_cov[0][0,0]) != 1000 # return dtype return mult_cov.covs.dtype def testMultipleCovarianceMatricesTransformations(): def get_mult_covs(inp,nmat): # return delayed covariance matrices covs = [] for delay in xrange(nmat): tmp = mdp.utils.DelayCovarianceMatrix(delay) tmp.update(inp) cov,avg,avg_dt,tlen = tmp.fix() covs.append(cov) return mdp.utils.MultipleCovarianceMatrices(covs) dim = 7 nmat = 13 angle = uniform()*2*numx.pi idx = numx_rand.permutation(dim)[:2] inp = uniform((100*dim,dim)) rot_inp, per_inp = inp.copy(), inp.copy() # test if rotating or permuting the cov matrix is equivalent # to rotate or permute the sources. mdp.utils.rotate(rot_inp,angle,idx) mdp.utils.permute(per_inp,idx,rows=0,cols=1) mcov = get_mult_covs(inp, nmat) mcov2 = mcov.copy() mcov_rot = get_mult_covs(rot_inp, nmat) mcov_per = get_mult_covs(per_inp, nmat) mcov.rotate(angle,idx) mcov2.permute(idx) assert_array_almost_equal_diff(mcov.covs, mcov_rot.covs, decimal) assert_array_almost_equal_diff(mcov2.covs, mcov_per.covs, decimal) mdp-3.3/mdp/test/test_node_metaclass.py000066400000000000000000000100211203131624700202610ustar00rootroot00000000000000from __future__ import with_statement import mdp import inspect X = mdp.numx_rand.random(size=(500,5)) def get_signature(func): regargs, varargs, varkwargs, defaults = inspect.getargspec(func) return inspect.formatargspec(regargs, varargs, varkwargs, defaults, formatvalue=lambda value: "")[1:-1] def test_docstrings(): # first try on a subclass of Node if # the docstring is exported to the public method class AncestorNode(mdp.Node): def _train(self, x): """doc ancestor""" self.foo = 42 anode = AncestorNode() assert anode.train.__doc__ == "doc ancestor" anode.train(X) assert anode.foo == 42 assert get_signature(anode.train) == 'self, x' # now try on a subclass of it class ChildNode(AncestorNode): def _train(self, x): """doc child""" self.foo2 = 42 cnode = ChildNode() assert cnode.train.__doc__ == "doc child" cnode.train(X) assert cnode.foo2 == 42 assert get_signature(cnode.train) == 'self, x' def test_signatures_no_doc(): # first try on a subclass of Node if # the signature is exported to the public method class AncestorNode(mdp.Node): def _train(self, x, foo=None): self.foo = 42 anode = AncestorNode() anode.train(X, foo='abc') assert anode.foo == 42 assert get_signature(anode.train) == 'self, x, foo' # now try on a subclass of it class ChildNode(AncestorNode): def _train(self, x, foo2=None): self.foo2 = 42 cnode = ChildNode() cnode.train(X, foo2='abc') assert cnode.foo2 == 42 assert get_signature(cnode.train) == 'self, x, foo2' def test_signatures_with_doc_in_both(): # first try on a subclass of Node if # the signature and the docstring are exported to # the public method class AncestorNode(mdp.Node): def _train(self, x, foo=None): """doc ancestor""" self.foo = 42 anode = AncestorNode() assert anode.train.__doc__ == "doc ancestor" anode.train(X, foo='abc') assert anode.foo == 42 assert get_signature(anode.train) == 'self, x, foo' # now try on a subclass of it class ChildNode(AncestorNode): def _train(self, x, foo2=None): """doc child""" self.foo2 = 42 cnode = ChildNode() assert cnode.train.__doc__ == "doc child" cnode.train(X, foo2='abc') assert 
cnode.foo2 == 42 assert get_signature(cnode.train) == 'self, x, foo2' def test_signatures_with_doc_in_ancestor(): # first try on a subclass of Node if # the signature and the docstring are exported to # the public method class AncestorNode(mdp.Node): def _train(self, x, foo=None): """doc ancestor""" self.foo = 42 anode = AncestorNode() assert anode.train.__doc__ == "doc ancestor" anode.train(X, foo='abc') assert anode.foo == 42 assert get_signature(anode.train) == 'self, x, foo' # now try on a subclass of it class ChildNode(AncestorNode): def _train(self, x, foo2=None): self.foo2 = 42 cnode = ChildNode() assert cnode.train.__doc__ == "doc ancestor" cnode.train(X, foo2='abc') assert cnode.foo2 == 42 assert get_signature(cnode.train) == 'self, x, foo2' def test_signatures_with_doc_in_child(): # first try on a subclass of Node if # the signature and the docstring are exported to # the public method class AncestorNode(mdp.Node): def _train(self, x, foo=None): self.foo = 42 anode = AncestorNode() anode.train(X, foo='abc') assert anode.foo == 42 assert get_signature(anode.train) == 'self, x, foo' # now try on a subclass of it class ChildNode(AncestorNode): def _train(self, x, foo2=None): """doc child""" self.foo2 = 42 cnode = ChildNode() assert cnode.train.__doc__ == "doc child" cnode.train(X, foo2='abc') assert cnode.foo2 == 42 assert get_signature(cnode.train) == 'self, x, foo2' mdp-3.3/mdp/test/test_node_operations.py000066400000000000000000000052601203131624700205010ustar00rootroot00000000000000from __future__ import with_statement import tempfile import cPickle import mdp from _tools import BogusMultiNode, BogusNodeTrainable import py.test uniform = mdp.numx_rand.random MAT_DIM = (500,5) def test_Node_copy(): test_list = [1,2,3] generic_node = mdp.Node() generic_node.dummy_attr = test_list copy_node = generic_node.copy() assert generic_node.dummy_attr == copy_node.dummy_attr,\ 'Node copy method did not work' copy_node.dummy_attr[0] = 10 assert generic_node.dummy_attr != copy_node.dummy_attr,\ 'Node copy method did not work' def test_Node_copy_with_arrays_and_subnodes(): node = mdp.Node() node.node = mdp.Node() node.node.x = mdp.numx.zeros((2,2)) node2 = node.copy() assert hasattr(node2, 'node') assert mdp.numx.all(node2.node.x == node.node.x) def test_Node_copy_with_lambdas(): generic_node = mdp.Node() generic_node.lambda_function = lambda: 1 generic_node.copy() def test_Node_save(): test_list = [1,2,3] generic_node = mdp.Node() generic_node.dummy_attr = test_list # test string save copy_node_pic = generic_node.save(None) copy_node = cPickle.loads(copy_node_pic) assert generic_node.dummy_attr == copy_node.dummy_attr,\ 'Node save (string) method did not work' copy_node.dummy_attr[0] = 10 assert generic_node.dummy_attr != copy_node.dummy_attr,\ 'Node save (string) method did not work' # test file save dummy_file = tempfile.mktemp(prefix='MDP_', suffix=".pic", dir=py.test.mdp_tempdirname) generic_node.save(dummy_file, protocol=1) dummy_file = open(dummy_file, 'rb') copy_node = cPickle.load(dummy_file) assert generic_node.dummy_attr == copy_node.dummy_attr,\ 'Node save (file) method did not work' copy_node.dummy_attr[0] = 10 assert generic_node.dummy_attr != copy_node.dummy_attr,\ 'Node save (file) method did not work' def test_Node_multiple_training_phases(): x = uniform(size=MAT_DIM) node = BogusMultiNode() phases = node.get_remaining_train_phase() for i in xrange(phases): assert node.get_current_train_phase() == i assert not node._train_phase_started node.train(x) assert 
node._train_phase_started node.stop_training() assert not node.is_training() def test_Node_execution_without_training(): x = uniform(size=MAT_DIM) # try execution without training: single train phase node = BogusNodeTrainable() node.execute(x) assert hasattr(node, 'bogus_attr') # multiple train phases node = BogusMultiNode() node.execute(x) assert node.visited == [1, 2, 3, 4] mdp-3.3/mdp/test/test_nodes_generic.py000066400000000000000000000344501203131624700201200ustar00rootroot00000000000000from __future__ import with_statement import py.test import inspect from mdp import (config, nodes, ClassifierNode, PreserveDimNode, InconsistentDimException) from _tools import * uniform = numx_rand.random def _rand_labels(x): return numx_rand.randint(0, 2, size=(x.shape[0],)) def _rand_labels_array(x): return numx_rand.randint(0, 2, size=(x.shape[0], 1)) def _rand_classification_labels_array(x): labels = numx_rand.randint(0, 2, size=(x.shape[0],)) labels[labels==0] = -1 return labels def _dumb_quadratic_expansion(x): dim_x = x.shape[1] return numx.asarray([(x[i].reshape(dim_x,1) * x[i].reshape(1,dim_x)).flatten() for i in range(len(x))]) def _rand_array_halfdim(x): return uniform(size=(x.shape[0], x.shape[1]//2)) class Iter(object): pass def _rand_array_single_rows(): x = uniform((500,4)) class _Iter(Iter): def __iter__(self): for row in range(x.shape[0]): yield x[numx.newaxis,row,:] return _Iter() def _contrib_get_random_mix(): return get_random_mix(type='d', mat_dim=(100, 3))[2] def _positive_get_random_mix(): return abs(get_random_mix()[2]) def _train_if_necessary(inp, node, sup_arg_gen): if node.is_trainable(): while True: if sup_arg_gen is not None: # for nodes that need supervision node.train(inp, sup_arg_gen(inp)) else: # support generators if isinstance(inp, Iter): for x in inp: node.train(x) else: node.train(inp) if node.get_remaining_train_phase() > 1: node.stop_training() else: break def _stop_training_or_execute(node, inp): if node.is_trainable(): node.stop_training() else: if isinstance(inp, Iter): for x in inp: node.execute(x) else: node.execute(inp) def pytest_generate_tests(metafunc): generic_test_factory(NODES, metafunc) def generic_test_factory(big_nodes, metafunc): """Generator creating a test for each of the nodes based upon arguments in a list of nodes in big_nodes. Format of big_nodes: each item in the list can be either a - class name, in this case the class instances are initialized without arguments and default arguments are used during the training and execution phases. - dict containing items which can override the initialization arguments, provide extra arguments for training and/or execution. Available keys in the configuration dict: `klass` Mandatory. The type of Node. `init_args=()` A sequence used to provide the initialization arguments to node constructor. Before being used, the items in this sequence are executed if they are callable. This allows one to create fresh instances of nodes before each Node initalization. `inp_arg_gen=...a call to get_random_mix('d')` Used to construct the `inp` data argument used for training and execution. It can be an iterable. `sup_arg_gen=None` A function taking a single argument (`inp`) Used to contruct extra arguments passed to `train`. `execute_arg_gen=None` A function similar to `sup_arg_gen` but used during execution. The return value is unpacked and used as additional arguments to `execute`. 
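    For example, an entry can be a bare class, or a dict like the ones
    used in the NODES list further below::

        dict(klass='HitParadeNode', init_args=[2, 5])
        dict(klass='FDANode', sup_arg_gen=_rand_labels)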
""" for nodetype in big_nodes: if not isinstance(nodetype, dict): nodetype = dict(klass=nodetype) funcargs = dict( init_args=(), inp_arg_gen=lambda: get_random_mix(type='d')[2], sup_arg_gen=None, execute_arg_gen=None) funcargs.update(nodetype) if hasattr(metafunc.function, 'only_if_node_condition'): # A TypeError can be thrown by the condition checking # function (e.g. when nodetype.is_trainable() is not a staticmethod). condition = metafunc.function.only_if_node_condition try: if not condition(nodetype['klass']): continue except TypeError: continue theid = nodetype['klass'].__name__ metafunc.addcall(funcargs, id=theid) def only_if_node(condition): """Execute the test only if condition(nodetype) is True. If condition(nodetype) throws TypeError, just assume False. """ def f(func): func.only_if_node_condition = condition return func return f def call_init_args(init_args): return [item() if hasattr(item, '__call__') else item for item in init_args] def test_dtype_consistency(klass, init_args, inp_arg_gen, sup_arg_gen, execute_arg_gen): args = call_init_args(init_args) supported_types = klass(*args).get_supported_dtypes() for dtype in supported_types: inp = inp_arg_gen() args = call_init_args(init_args) node = klass(dtype=dtype, *args) _train_if_necessary(inp, node, sup_arg_gen) extra = [execute_arg_gen(inp)] if execute_arg_gen else [] # support generators if isinstance(inp, Iter): for x in inp: out = node.execute(x, *extra) else: out = node.execute(inp, *extra) assert out.dtype == dtype def test_outputdim_consistency(klass, init_args, inp_arg_gen, sup_arg_gen, execute_arg_gen): args = call_init_args(init_args) inp = inp_arg_gen() # support generators if isinstance(inp, Iter): for x in inp: pass output_dim = x.shape[1] // 2 else: output_dim = inp.shape[1] // 2 extra = [execute_arg_gen(inp)] if execute_arg_gen else [] def _test(node): _train_if_necessary(inp, node, sup_arg_gen) # support generators if isinstance(inp, Iter): for x in inp: out = node.execute(x) else: out = node.execute(inp, *extra) assert out.shape[1] == output_dim assert node._output_dim == output_dim # check if the node output dimension can be set or must be determined # by the node if (not issubclass(klass, PreserveDimNode) and 'output_dim' in inspect.getargspec(klass.__init__)[0]): # case 1: output dim set in the constructor node = klass(output_dim=output_dim, *args) _test(node) # case 2: output_dim set explicitly node = klass(*args) node.output_dim = output_dim _test(node) else: if issubclass(klass, PreserveDimNode): # check that constructor allows to set output_dim assert 'output_dim' in inspect.getargspec(klass.__init__)[0] # check that setting the input dim, then incompatible output dims # raises an appropriate error # case 1: both in the constructor py.test.raises(InconsistentDimException, 'klass(input_dim=inp.shape[1], output_dim=output_dim, *args)') # case 2: first input_dim, then output_dim node = klass(input_dim=inp.shape[1], *args) py.test.raises(InconsistentDimException, 'node.output_dim = output_dim') # case 3: first output_dim, then input_dim node = klass(output_dim=output_dim, *args) node.output_dim = output_dim py.test.raises(InconsistentDimException, 'node.input_dim = inp.shape[1]') # check that output_dim is set to whatever the output dim is node = klass(*args) _train_if_necessary(inp, node, sup_arg_gen) # support generators if isinstance(inp, Iter): for x in inp: out = node.execute(x, *extra) else: out = node.execute(inp, *extra) assert out.shape[1] == node.output_dim def test_dimdtypeset(klass, init_args, 
inp_arg_gen, sup_arg_gen, execute_arg_gen): init_args = call_init_args(init_args) inp = inp_arg_gen() node = klass(*init_args) _train_if_necessary(inp, node, sup_arg_gen) _stop_training_or_execute(node, inp) assert node.output_dim is not None assert node.dtype is not None assert node.input_dim is not None @only_if_node(lambda nodetype: nodetype.is_invertible()) def test_inverse(klass, init_args, inp_arg_gen, sup_arg_gen, execute_arg_gen): args = call_init_args(init_args) inp = inp_arg_gen() # take the first available dtype for the test dtype = klass(*args).get_supported_dtypes()[0] args = call_init_args(init_args) node = klass(dtype=dtype, *args) _train_if_necessary(inp, node, sup_arg_gen) extra = [execute_arg_gen(inp)] if execute_arg_gen else [] out = node.execute(inp, *extra) # compute the inverse rec = node.inverse(out) # cast inp for comparison! inp = inp.astype(dtype) assert_array_almost_equal_diff(rec, inp, decimal-3) assert rec.dtype == dtype def SFA2Node_inp_arg_gen(): freqs = [2*numx.pi*100.,2*numx.pi*200.] t = numx.linspace(0, 1, num=1000) mat = numx.array([numx.sin(freqs[0]*t), numx.sin(freqs[1]*t)]).T inp = mat.astype('d') return inp def NeuralGasNode_inp_arg_gen(): return numx.asarray([[2.,0,0],[-2,0,0],[0,0,0]]) def LinearRegressionNode_inp_arg_gen(): return uniform(size=(1000, 5)) def _rand_1d(x): return uniform(size=(x.shape[0],)) NODES = [ dict(klass='NeuralGasNode', init_args=[3,NeuralGasNode_inp_arg_gen()], inp_arg_gen=NeuralGasNode_inp_arg_gen), dict(klass='SFA2Node', inp_arg_gen=SFA2Node_inp_arg_gen), dict(klass='PolynomialExpansionNode', init_args=[3]), dict(klass='RBFExpansionNode', init_args=[[[0.]*5, [0.]*5], [1., 1.]]), dict(klass='GeneralExpansionNode', init_args=[[lambda x:x, lambda x: x**2, _dumb_quadratic_expansion]]), dict(klass='HitParadeNode', init_args=[2, 5]), dict(klass='TimeFramesNode', init_args=[3, 4]), dict(klass='TimeDelayNode', init_args=[3, 4]), dict(klass='TimeDelaySlidingWindowNode', init_args=[3, 4], inp_arg_gen=_rand_array_single_rows), dict(klass='FDANode', sup_arg_gen=_rand_labels), dict(klass='GaussianClassifier', sup_arg_gen=_rand_labels), dict(klass='NearestMeanClassifier', sup_arg_gen=_rand_labels), dict(klass='KNNClassifier', sup_arg_gen=_rand_labels), dict(klass='RBMNode', init_args=[5]), dict(klass='RBMWithLabelsNode', init_args=[5, 1], sup_arg_gen=_rand_labels_array, execute_arg_gen=_rand_labels_array), dict(klass='LinearRegressionNode', sup_arg_gen=_rand_array_halfdim), dict(klass='Convolution2DNode', init_args=[mdp.numx.array([[[1.]]]), (5,1)]), dict(klass='JADENode', inp_arg_gen=_contrib_get_random_mix), dict(klass='NIPALSNode', inp_arg_gen=_contrib_get_random_mix), dict(klass='XSFANode', inp_arg_gen=_contrib_get_random_mix, init_args=[(nodes.PolynomialExpansionNode, (1,), {}), (nodes.PolynomialExpansionNode, (1,), {}), True]), dict(klass='LLENode', inp_arg_gen=_contrib_get_random_mix, init_args=[3, 0.001, True]), dict(klass='HLLENode', inp_arg_gen=_contrib_get_random_mix, init_args=[10, 0.001, True]), dict(klass='KMeansClassifier', init_args=[2, 3]), dict(klass='PerceptronClassifier', sup_arg_gen=_rand_classification_labels_array), dict(klass='SimpleMarkovClassifier', sup_arg_gen=_rand_classification_labels_array), dict(klass='ShogunSVMClassifier', sup_arg_gen=_rand_labels_array, init_args=["libsvmmulticlass", (), None, "GaussianKernel"]), dict(klass='LibSVMClassifier', sup_arg_gen=_rand_labels_array, init_args=["LINEAR","C_SVC"]), dict(klass='MultinomialNBScikitsLearnNode', inp_arg_gen=_positive_get_random_mix, 
sup_arg_gen=_rand_labels), dict(klass='NeighborsScikitsLearnNode', sup_arg_gen=_rand_1d), ] # LabelSpreadingScikitsLearnNode is broken in sklearn version 0.11 # It works fine in version 0.12 EXCLUDE_NODES = ['ICANode', 'LabelSpreadingScikitsLearnNode'] def generate_nodes_list(nodes_dicts): nodes_list = [] # append nodes with additional arguments or supervised if they exist visited = [] excluded = [] for dct in nodes_dicts: klass = dct['klass'] if type(klass) is str: # some of the nodes on the list may be optional if not hasattr(nodes, klass): continue # transform class name into class (needed by automatic tests) klass = getattr(nodes, klass) dct['klass'] = klass # only append to list if the node is present in MDP # in case some of the nodes in NODES are optional if hasattr(nodes, klass.__name__): nodes_list.append(dct) visited.append(klass) for node_name in EXCLUDE_NODES: if hasattr(nodes, node_name): excluded.append(getattr(nodes, node_name)) # append sklearn nodes if supported # XXX # remove all non classifier nodes from the scikits nodes # they do not have a common API that would allow # automatic testing # XXX for node_name in mdp.nodes.__dict__: node = mdp.nodes.__dict__[node_name] if (inspect.isclass(node) and node_name.endswith('ScikitsLearnNode') and (node not in visited) and (node not in excluded)): if issubclass(node, ClassifierNode): nodes_list.append(dict(klass=node, sup_arg_gen=_rand_labels)) visited.append(node) else: excluded.append(node) # append all other nodes in mdp.nodes for attr in dir(nodes): if attr[0] == '_': continue attr = getattr(nodes, attr) if (inspect.isclass(attr) and issubclass(attr, mdp.Node) and attr not in visited and attr not in excluded): nodes_list.append(attr) return nodes_list NODES = generate_nodes_list(NODES) mdp-3.3/mdp/test/test_parallelclassifiers.py000066400000000000000000000056741203131624700213460ustar00rootroot00000000000000import mdp.parallel as parallel from _tools import * def test_ParallelGaussianClassifier(): """Test ParallelGaussianClassifier.""" precision = 6 xs = [numx_rand.random([4,5]) for _ in range(8)] labels = [1,2,1,1,2,3,2,3] node = mdp.nodes.GaussianClassifier() pnode = parallel.ParallelGaussianClassifier() for i, x in enumerate(xs): node.train(x, labels[i]) node.stop_training() pnode1 = pnode.fork() pnode2 = pnode.fork() for i, x in enumerate(xs): if i % 2: pnode1.train(x, labels[i]) else: pnode2.train(x, labels[i]) pnode.join(pnode1) pnode.join(pnode2) pnode.stop_training() # check that results are the same for all object classes for i in range(3): assert_array_almost_equal(node.inv_covs[i], pnode.inv_covs[i], precision) assert_array_almost_equal(node.means[i], pnode.means[i], precision) assert node.p[i] == pnode.p[i] assert node.p[i] == pnode.p[i] assert node.labels[i] == pnode.labels[i] def test_ParallelNearestMeanClassifier(): """Test ParallelGaussianClassifier.""" precision = 6 xs = [numx_rand.random([4,5]) for _ in range(8)] labels = [1,2,1,1,2,3,2,3] node = mdp.nodes.NearestMeanClassifier() pnode = parallel.ParallelNearestMeanClassifier() for i, x in enumerate(xs): node.train(x, labels[i]) node.stop_training() pnode1 = pnode.fork() pnode2 = pnode.fork() for i, x in enumerate(xs): if i % 2: pnode1.train(x, labels[i]) else: pnode2.train(x, labels[i]) pnode.join(pnode1) pnode.join(pnode2) pnode.stop_training() # check that results are the same for all object classes assert_array_almost_equal(node.ordered_means, pnode.ordered_means, precision) for key in node.label_means: assert_array_almost_equal(node.label_means[key], 
pnode.label_means[key], precision) assert node.n_label_samples[key] == pnode.n_label_samples[key] def test_ParallelKNNClassifier(): """Test ParallelGaussianClassifier.""" precision = 6 xs = [numx_rand.random([3,2]) for _ in range(8)] labels = [1,2,1,1,2,3,2,3] node = mdp.nodes.KNNClassifier() pnode = parallel.ParallelKNNClassifier() for i, x in enumerate(xs): node.train(x, labels[i]) node.stop_training() pnode1 = pnode.fork() pnode2 = pnode.fork() for i, x in enumerate(xs): if i < 4: pnode1.train(x, labels[i]) else: pnode2.train(x, labels[i]) pnode.join(pnode1) pnode.join(pnode2) pnode.stop_training() # check that results are the same for all object classes assert_array_almost_equal(node.samples, pnode.samples, precision) assert node.n_samples == pnode.n_samples mdp-3.3/mdp/test/test_parallelflows.py000066400000000000000000000265111203131624700201620ustar00rootroot00000000000000from _tools import * import mdp.parallel as parallel n = numx def test_tasks(): """Test parallel training and execution by running the tasks.""" flow = parallel.ParallelFlow([ mdp.nodes.SFANode(output_dim=5), mdp.nodes.PolynomialExpansionNode(degree=3), mdp.nodes.SFANode(output_dim=20)]) data_iterables = [[n.random.random((30,10))*n.arange(1,11) for _ in xrange(6)], None, [n.random.random((30,10))*n.arange(1,11) for _ in xrange(6)]] scheduler = parallel.Scheduler() flow.train(data_iterables, scheduler=scheduler) # parallel execution iterable = [n.random.random((20,10)) for _ in xrange(6)] flow.execute(iterable, scheduler=scheduler) def test_non_iterator(): """Test parallel training and execution with a single array.""" flow = parallel.ParallelFlow([ mdp.nodes.SFANode(output_dim=5), mdp.nodes.PolynomialExpansionNode(degree=3), mdp.nodes.SFANode(output_dim=20)]) data_iterables = n.random.random((200,10))*n.arange(1,11) scheduler = parallel.Scheduler() flow.train(data_iterables, scheduler=scheduler) # test execution x = n.random.random((100,10)) flow.execute(x) def test_multiple_schedulers(): """Test parallel flow training with multiple schedulers.""" flow = parallel.ParallelFlow([ mdp.nodes.SFANode(output_dim=5), mdp.nodes.PolynomialExpansionNode(degree=3), mdp.nodes.SFANode(output_dim=20)]) data_iterables = [[n.random.random((30,10))*n.arange(1,11) for _ in xrange(6)], None, [n.random.random((30,10))*n.arange(1,11) for _ in xrange(6)]] schedulers = [parallel.Scheduler(), None, parallel.Scheduler()] flow.train(data_iterables, scheduler=schedulers) # parallel execution iterable = [n.random.random((20,10)) for _ in xrange(6)] flow.execute(iterable, scheduler=parallel.Scheduler()) def test_multiple_schedulers2(): """Test parallel flow training with multiple schedulers (part 2).""" # now the first node is untrainable as well flow = parallel.ParallelFlow([ mdp.nodes.PolynomialExpansionNode(degree=2), mdp.nodes.SFANode(output_dim=5), mdp.nodes.PolynomialExpansionNode(degree=3), mdp.nodes.SFANode(output_dim=20)]) data_iterables = [None, [n.random.random((30,10))*n.arange(1,11) for _ in xrange(6)], None, [n.random.random((30,10))*n.arange(1,11) for _ in xrange(6)]] schedulers = [None, parallel.Scheduler(), None, parallel.Scheduler()] flow.train(data_iterables, scheduler=schedulers) # parallel execution iterable = [n.random.random((20,10)) for _ in xrange(6)] flow.execute(iterable, scheduler=parallel.Scheduler()) def test_multiphase(): """Test parallel training and execution for nodes with multiple training phases. 
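    The flow starts with a FlowNode wrapping an SFANode followed by an
    SFA2Node, so its first member exposes more than one training phase.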
""" sfa_node = mdp.nodes.SFANode(input_dim=10, output_dim=8) sfa2_node = mdp.nodes.SFA2Node(input_dim=8, output_dim=6) flownode = mdp.hinet.FlowNode(mdp.Flow([sfa_node, sfa2_node])) flow = parallel.ParallelFlow([ flownode, mdp.nodes.PolynomialExpansionNode(degree=2), mdp.nodes.SFANode(output_dim=5)]) data_iterables = [[n.random.random((30,10))*n.arange(1,11) for _ in xrange(6)], None, [n.random.random((30,10))*n.arange(1,11) for _ in xrange(6)]] scheduler = parallel.Scheduler() flow.train(data_iterables, scheduler=scheduler) # test normal execution x = n.random.random([100,10]) flow.execute(x) # parallel execution iterable = [n.random.random((20,10)) for _ in xrange(6)] flow.execute(iterable, scheduler=scheduler) def test_firstnode(): """Test special case in which the first node is untrainable. This tests the proper initialization of the internal variables. """ flow = parallel.ParallelFlow([ mdp.nodes.PolynomialExpansionNode(degree=2), mdp.nodes.SFANode(output_dim=20)]) data_iterables = [None, n.random.random((6,20,10))] scheduler = parallel.Scheduler() flow.train(data_iterables, scheduler=scheduler) def test_multiphase_checkpoints(): """Test parallel checkpoint flow.""" sfa_node = mdp.nodes.SFANode(input_dim=10, output_dim=8) sfa2_node = mdp.nodes.SFA2Node(input_dim=8, output_dim=6) flownode = mdp.hinet.FlowNode(mdp.Flow([sfa_node, sfa2_node])) flow = parallel.ParallelCheckpointFlow([ flownode, mdp.nodes.PolynomialExpansionNode(degree=2), mdp.nodes.SFANode(output_dim=5)]) data_iterables = [[n.random.random((30,10)) for _ in xrange(6)], None, [n.random.random((30,10)) for _ in xrange(6)]] checkpoint = mdp.CheckpointFunction() scheduler = parallel.Scheduler() flow.train(data_iterables, scheduler=scheduler, checkpoints=checkpoint) def test_nonparallel1(): """Test training for mixture of parallel and non-parallel nodes.""" sfa_node = mdp.nodes.SFANode(input_dim=10, output_dim=8) # TODO: use a node with no parallel here sfa2_node = mdp.nodes.CuBICANode(input_dim=8) flownode = mdp.hinet.FlowNode(mdp.Flow([sfa_node, sfa2_node])) flow = parallel.ParallelFlow([ flownode, mdp.nodes.PolynomialExpansionNode(degree=2), mdp.nodes.SFANode(output_dim=5)]) data_iterables = [[n.random.random((30,10))*n.arange(1,11) for _ in xrange(6)], None, [n.random.random((30,10))*n.arange(1,11) for _ in xrange(6)]] scheduler = parallel.Scheduler() flow.train(data_iterables, scheduler=scheduler) # test execution x = n.random.random([100,10]) flow.execute(x) def test_nonparallel2(): """Test training for mixture of parallel and non-parallel nodes.""" # TODO: use a node with no parallel here sfa_node = mdp.nodes.SFANode(input_dim=10, output_dim=8) sfa2_node = mdp.nodes.SFA2Node(input_dim=8, output_dim=6) flownode = mdp.hinet.FlowNode(mdp.Flow([sfa_node, sfa2_node])) flow = parallel.ParallelFlow([ flownode, mdp.nodes.PolynomialExpansionNode(degree=2), mdp.nodes.SFANode(output_dim=5)]) data_iterables = [[n.random.random((30,10))*n.arange(1,11) for _ in xrange(6)], None, [n.random.random((30,10))*n.arange(1,11) for _ in xrange(6)]] scheduler = parallel.Scheduler() flow.train(data_iterables, scheduler=scheduler) # test execution x = n.random.random([100,10]) flow.execute(x) def test_nonparallel3(): """Test training for non-parallel nodes.""" # TODO: use a node with no parallel here sfa_node = mdp.nodes.SFANode(input_dim=10, output_dim=8) # TODO: use a node with no parallel here sfa2_node = mdp.nodes.SFA2Node(input_dim=8, output_dim=6) flow = parallel.ParallelFlow([sfa_node, sfa2_node]) data_iterables = 
[[n.random.random((30,10))*n.arange(1,11) for _ in xrange(6)], [n.random.random((30,10))*n.arange(1,11) for _ in xrange(6)]] scheduler = parallel.Scheduler() flow.train(data_iterables, scheduler=scheduler) while flow.is_parallel_training: results = [] while flow.task_available(): task = flow.get_task() results.append(task()) flow.use_results(results) # test execution x = n.random.random([100,10]) flow.execute(x) def test_train_purge_nodes(): """Test that FlowTrainCallable correctly purges nodes.""" sfa_node = mdp.nodes.SFANode(input_dim=10, output_dim=8) sfa2_node = mdp.nodes.SFA2Node(input_dim=8, output_dim=6) flownode = mdp.hinet.FlowNode(mdp.Flow([sfa_node, mdp.nodes.IdentityNode(), sfa2_node])) data = n.random.random((30,10)) mdp.activate_extension("parallel") try: clbl = mdp.parallel.FlowTrainCallable(flownode) flownode = clbl(data) finally: mdp.deactivate_extension("parallel") assert flownode._flow[1].__class__.__name__ == "_DummyNode" def test_execute_fork(): """Test the forking of a node based on use_execute_fork.""" class _test_ExecuteForkNode(mdp.nodes.IdentityNode): # Note: The explicit signature is important to preserve the dim # information during the fork. def __init__(self, input_dim=None, output_dim=None, dtype=None): self.n_forks = 0 self.n_joins = 0 super(_test_ExecuteForkNode, self).__init__(input_dim=input_dim, output_dim=output_dim, dtype=dtype) class Parallel_test_ExecuteForkNode(parallel.ParallelExtensionNode, _test_ExecuteForkNode): def _fork(self): self.n_forks += 1 return self._default_fork() def _join(self, forked_node): self.n_joins += forked_node.n_joins + 1 def use_execute_fork(self): return True try: n_chunks = 6 ## Part 1: test execute fork during flow training data_iterables = [[n.random.random((30,10)) for _ in xrange(n_chunks)], None, [n.random.random((30,10)) for _ in xrange(n_chunks)], None] flow = parallel.ParallelFlow([mdp.nodes.PCANode(output_dim=5), _test_ExecuteForkNode(), mdp.nodes.SFANode(), _test_ExecuteForkNode()]) scheduler = parallel.Scheduler() flow.train(data_iterables, scheduler=scheduler) for node in flow: if isinstance(node, _test_ExecuteForkNode): assert node.n_forks == 2 * n_chunks + 2 assert node.n_joins == 2 * n_chunks # reset the counters to prepare the execute test node.n_forks = 0 node.n_joins = 0 ## Part 2: test execute fork during flow execute data_iterable = [n.random.random((30,10)) for _ in xrange(n_chunks)] flow.execute(data_iterable, scheduler=scheduler) for node in flow: if isinstance(node, _test_ExecuteForkNode): assert node.n_forks == n_chunks assert node.n_joins == n_chunks finally: # unregister the testing class del mdp.get_extensions()["parallel"][_test_ExecuteForkNode] scheduler.shutdown() mdp-3.3/mdp/test/test_parallelhinet.py000066400000000000000000000072351203131624700201410ustar00rootroot00000000000000from _tools import * import mdp.parallel as parallel import mdp.hinet as hinet n = numx class TestParallelHinetNodes(): """Tests for ParallelFlowNode.""" def setup_method(self, method): if "parallel" in mdp.get_active_extensions(): self.set_parallel = False else: mdp.activate_extension("parallel") self.set_parallel = True def teardown_method(self, method): if self.set_parallel: mdp.deactivate_extension("parallel") def test_flownode(self): """Test ParallelFlowNode.""" flow = mdp.Flow([mdp.nodes.SFANode(output_dim=5), mdp.nodes.PolynomialExpansionNode(degree=2), mdp.nodes.SFANode(output_dim=3)]) flownode = mdp.hinet.FlowNode(flow) x = n.random.random([100,50]) chunksize = 25 chunks = [x[i*chunksize : (i+1)*chunksize] 
for i in xrange(len(x)//chunksize)] while flownode.get_remaining_train_phase() > 0: for chunk in chunks: forked_node = flownode.fork() forked_node.train(chunk) flownode.join(forked_node) flownode.stop_training() # test execution flownode.execute(x) def test_flownode_forksingle(self): """Test that ParallelFlowNode forks only the first training node.""" flow = mdp.Flow([mdp.nodes.SFANode(output_dim=5), mdp.nodes.PolynomialExpansionNode(degree=2), mdp.nodes.SFANode(output_dim=3)]) flownode = mdp.hinet.FlowNode(flow) forked_flownode = flownode.fork() assert flownode._flow[0] is not forked_flownode._flow[0] assert flownode._flow[1] is forked_flownode._flow[1] assert flownode._flow[2] is forked_flownode._flow[2] # Sabotage joining for the second SFANode, which should not be joined, # causing AttributeError: 'NoneType' ... when it is joined. flownode._flow[2]._cov_mtx = None flownode.join(forked_flownode) def test_parallelnet(self): """Test a simple parallel net with big data. Includes ParallelFlowNode, ParallelCloneLayer, ParallelSFANode and training via a ParallelFlow. """ noisenode = mdp.nodes.NormalNoiseNode(input_dim=20*20, noise_args=(0,0.0001)) sfa_node = mdp.nodes.SFANode(input_dim=20*20, output_dim=10) switchboard = hinet.Rectangular2dSwitchboard(in_channels_xy=100, field_channels_xy=20, field_spacing_xy=10) flownode = mdp.hinet.FlowNode(mdp.Flow([noisenode, sfa_node])) sfa_layer = mdp.hinet.CloneLayer(flownode, switchboard.output_channels) flow = parallel.ParallelFlow([switchboard, sfa_layer]) data_iterables = [None, [n.random.random((10, 100*100)) for _ in xrange(3)]] scheduler = parallel.Scheduler() flow.train(data_iterables, scheduler=scheduler) def test_layer(self): """Test Simple random test with three nodes.""" node1 = mdp.nodes.SFANode(input_dim=10, output_dim=5) node2 = mdp.nodes.SFANode(input_dim=17, output_dim=3) node3 = mdp.nodes.SFANode(input_dim=3, output_dim=1) layer = mdp.hinet.Layer([node1, node2, node3]) flow = parallel.ParallelFlow([layer]) data_iterables = [[n.random.random((10, 30)) for _ in xrange(3)]] scheduler = parallel.Scheduler() flow.train(data_iterables, scheduler=scheduler) mdp-3.3/mdp/test/test_parallelnodes.py000066400000000000000000000142571203131624700201440ustar00rootroot00000000000000import mdp.parallel as parallel from _tools import * def test_PCANode(): """Test Parallel PCANode""" precision = 6 x = numx_rand.random([100,10]) x_test = numx_rand.random([20,10]) # set different variances (avoid numerical errors) x *= numx.arange(1,11) x_test *= numx.arange(1,11) pca_node = mdp.nodes.PCANode() parallel_pca_node = parallel.ParallelPCANode() chunksize = 25 chunks = [x[i*chunksize : (i+1)*chunksize] for i in xrange(len(x)//chunksize)] for chunk in chunks: pca_node.train(chunk) forked_node = parallel_pca_node.fork() forked_node.train(chunk) parallel_pca_node.join(forked_node) assert_array_almost_equal(pca_node._cov_mtx._cov_mtx, parallel_pca_node._cov_mtx._cov_mtx, precision) pca_node.stop_training() y1 = pca_node.execute(x_test) parallel_pca_node.stop_training() y2 = parallel_pca_node.execute(x_test) assert_array_almost_equal(abs(y1), abs(y2), precision) def test_SFANode(): """Test Parallel SFANode""" precision = 6 x = numx_rand.random([100,10]) x_test = numx_rand.random([20,10]) # set different variances (avoid numerical errors) x *= numx.arange(1,11) x_test *= numx.arange(1,11) sfa_node = mdp.nodes.SFANode() parallel_sfa_node = parallel.ParallelSFANode() chunksize = 25 chunks = [x[i*chunksize : (i+1)*chunksize] for i in xrange(len(x)//chunksize)] for 
chunk in chunks: sfa_node.train(chunk) forked_node = parallel_sfa_node.fork() forked_node.train(chunk) parallel_sfa_node.join(forked_node) assert_array_almost_equal(sfa_node._cov_mtx._cov_mtx, parallel_sfa_node._cov_mtx._cov_mtx, precision) sfa_node.stop_training() y1 = sfa_node.execute(x_test) parallel_sfa_node.stop_training() y2 = parallel_sfa_node.execute(x_test) assert_array_almost_equal(abs(y1), abs(y2), precision) def test_FDANode(): """Test Parallel FDANode.""" # this test code is an adaption of the FDANode test precision = 4 mean1 = [0., 2.] mean2 = [0., -2.] std_ = numx.array([1., 0.2]) npoints = 50000 rot = 45 # input data: two distinct gaussians rotated by 45 deg def distr(size): return numx_rand.normal(0, 1., size=(size)) * std_ x1 = distr((npoints,2)) + mean1 utils.rotate(x1, rot, units='degrees') x2 = distr((npoints,2)) + mean2 utils.rotate(x2, rot, units='degrees') # labels cl1 = numx.ones((x1.shape[0],), dtype='d') cl2 = 2.*numx.ones((x2.shape[0],), dtype='d') flow = parallel.ParallelFlow([parallel.ParallelFDANode()]) flow.train([[(x1, cl1), (x2, cl2)]], scheduler=parallel.Scheduler()) fda_node = flow[0] assert fda_node.tlens[1] == npoints assert fda_node.tlens[2] == npoints m1 = numx.array([mean1]) m2 = numx.array([mean2]) utils.rotate(m1, rot, units='degrees') utils.rotate(m2, rot, units='degrees') assert_array_almost_equal(fda_node.means[1], m1, 2) assert_array_almost_equal(fda_node.means[2], m2, 2) y = flow.execute([x1, x2], scheduler=parallel.Scheduler()) assert_array_almost_equal(numx.mean(y, axis=0), [0., 0.], precision) assert_array_almost_equal(numx.std(y, axis=0), [1., 1.], precision) assert_almost_equal(utils.mult(y[:,0], y[:,1].T), 0., precision) v1 = fda_node.v[:,0]/fda_node.v[0,0] assert_array_almost_equal(v1, [1., -1.], 2) v1 = fda_node.v[:,1]/fda_node.v[0,1] assert_array_almost_equal(v1, [1., 1.], 2) def test_ParallelHistogramNode_nofraction(): """Test HistogramNode with fraction set to 1.0.""" node = parallel.ParallelHistogramNode() x1 = numx.array([[0.1, 0.2], [0.3, 0.5]]) x2 = numx.array([[0.3, 0.6], [0.2, 0.1]]) x = numx.concatenate([x1, x2]) chunks = [x1, x2] for chunk in chunks: forked_node = node.fork() forked_node.train(chunk) node.join(forked_node) assert numx.all(x == node.data_hist) node.stop_training() def test_ParallelHistogramNode_fraction(): """Test HistogramNode with fraction set to 0.5.""" node = parallel.ParallelHistogramNode(hist_fraction=0.5) x1 = numx.random.random((1000, 3)) x2 = numx.random.random((500, 3)) chunks = [x1, x2] for chunk in chunks: forked_node = node.fork() forked_node.train(chunk) node.join(forked_node) assert len(node.data_hist) < 1000 class TestDerivedParallelMDPNodes(object): """Test derived nodes that use the parallel node classes.""" def setup_method(self, method): if "parallel" in mdp.get_active_extensions(): self.set_parallel = False else: mdp.activate_extension("parallel") self.set_parallel = True def teardown_method(self, method): if self.set_parallel: mdp.deactivate_extension("parallel") def test_WhiteningNode(self): """Test Parallel WhiteningNode""" x = numx_rand.random([100,10]) x_test = numx_rand.random([20,10]) # set different variances (avoid numerical errors) x *= numx.arange(1,11) x_test *= numx.arange(1,11) node = mdp.nodes.WhiteningNode() chunksize = 25 chunks = [x[i*chunksize : (i+1)*chunksize] for i in xrange(len(x)//chunksize)] for chunk in chunks: forked_node = node.fork() forked_node.train(chunk) node.join(forked_node) node.stop_training() node.execute(x_test) def test_SFA2Node(self): """Test 
Parallel SFA2Node""" x = numx_rand.random([100,10]) x_test = numx_rand.random([20,10]) # set different variances (avoid numerical errors) x *= numx.arange(1,11) x_test *= numx.arange(1,11) node = mdp.nodes.SFA2Node() chunksize = 25 chunks = [x[i*chunksize : (i+1)*chunksize] for i in xrange(len(x)//chunksize)] for chunk in chunks: forked_node = node.fork() forked_node.train(chunk) node.join(forked_node) node.stop_training() node.execute(x_test) mdp-3.3/mdp/test/test_pp_local.py000066400000000000000000000050651203131624700171050ustar00rootroot00000000000000import mdp.parallel as parallel from _tools import * requires_parallel_python = skip_on_condition( "not mdp.config.has_parallel_python", "This test requires Parallel Python") @requires_parallel_python def test_reverse_patching(): # revert pp patching # XXX This is needed to avoid failures of the other # XXX pp tests when run more then once in the same interpreter # XXX session if hasattr(mdp.config, 'pp_monkeypatch_dirname'): import pp pp._Worker.command = mdp._pp_worker_command[:] parallel.pp_support._monkeypatch_pp(mdp.config.pp_monkeypatch_dirname) @requires_parallel_python def test_simple(): """Test local pp scheduling.""" scheduler = parallel.pp_support.LocalPPScheduler(ncpus=2, max_queue_length=0, verbose=False) # process jobs for i in range(50): scheduler.add_task(i, parallel.SqrTestCallable()) results = scheduler.get_results() scheduler.shutdown() # check result results.sort() results = numx.array(results[:6]) assert numx.all(results == numx.array([0,1,4,9,16,25])) @requires_parallel_python def test_scheduler_flow(): """Test local pp scheduler with real Nodes.""" precision = 10**-6 node1 = mdp.nodes.PCANode(output_dim=20) node2 = mdp.nodes.PolynomialExpansionNode(degree=1) node3 = mdp.nodes.SFANode(output_dim=10) flow = mdp.parallel.ParallelFlow([node1, node2, node3]) parallel_flow = mdp.parallel.ParallelFlow(flow.copy()[:]) scheduler = parallel.pp_support.LocalPPScheduler(ncpus=3, max_queue_length=0, verbose=False) input_dim = 30 scales = numx.linspace(1, 100, num=input_dim) scale_matrix = mdp.numx.diag(scales) train_iterables = [numx.dot(mdp.numx_rand.random((5, 100, input_dim)), scale_matrix) for _ in range(3)] parallel_flow.train(train_iterables, scheduler=scheduler) x = mdp.numx.random.random((10, input_dim)) # test that parallel execution works as well # note that we need more chungs then processes to test caching parallel_flow.execute([x for _ in range(8)], scheduler=scheduler) scheduler.shutdown() # compare to normal flow flow.train(train_iterables) assert parallel_flow[0].tlen == flow[0].tlen y1 = flow.execute(x) y2 = parallel_flow.execute(x) assert_array_almost_equal(abs(y1 - y2), precision) mdp-3.3/mdp/test/test_pp_remote.py000066400000000000000000000012501203131624700172760ustar00rootroot00000000000000## import mdp.parallel as parallel ## from _tools import * ## from test_pp_local import requires_parallel_python ## remote_slaves = [("localhost", 2)] ## @requires_parallel_python ## def test_simple(): ## scheduler = parallel.pp_support.NetworkPPScheduler( ## remote_slaves=remote_slaves, ## timeout=60, ## verbose=False) ## # process jobs ## for i in range(30): ## scheduler.add_task(i, parallel.SqrTestCallable()) ## results = scheduler.get_results() ## scheduler.shutdown() ## # check result ## results.sort() ## results = numx.array(results) ## assert numx.all(results[:6] == numx.array([0,1,4,9,16,25])) 
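# A minimal sketch, not taken from the test suite itself, of the ParallelFlow
# + Scheduler training pattern that the tests above exercise.  The node
# choices, chunk sizes and dimensions are illustrative assumptions.
def _example_parallelflow_training():
    import mdp
    import mdp.parallel as parallel
    import numpy as np

    flow = parallel.ParallelFlow([mdp.nodes.PCANode(output_dim=5),
                                  mdp.nodes.SFANode(output_dim=3)])
    # one iterable of training chunks per node (use None for untrainable nodes)
    data_iterables = [[np.random.random((50, 10)) for _ in range(4)],
                      [np.random.random((50, 10)) for _ in range(4)]]
    scheduler = parallel.Scheduler()
    try:
        flow.train(data_iterables, scheduler=scheduler)
    finally:
        scheduler.shutdown()
    # after training, the flow executes like any ordinary mdp.Flow
    return flow.execute(np.random.random((20, 10)))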
mdp-3.3/mdp/test/test_process_schedule.py000066400000000000000000000102221203131624700206350ustar00rootroot00000000000000from __future__ import with_statement from _tools import * import mdp.parallel as parallel n = numx def test_process_scheduler_shutdown(): """Test that we can properly shutdown the subprocesses""" scheduler = parallel.ProcessScheduler(verbose=False, n_processes=1, source_paths=None, cache_callable=False) scheduler.shutdown() def test_process_scheduler_order(): """Test the correct result order in process scheduler.""" scheduler = parallel.ProcessScheduler(verbose=False, n_processes=3, source_paths=None) max_i = 8 for i in xrange(max_i): scheduler.add_task((n.arange(0,i+1), (max_i-1-i)*1.0/4), parallel.SleepSqrTestCallable()) results = scheduler.get_results() scheduler.shutdown() # check result results = n.concatenate(results) assert n.all(results == n.concatenate([n.arange(0,i+1)**2 for i in xrange(max_i)])) def test_process_scheduler_no_cache(): """Test process scheduler with caching turned off.""" scheduler = parallel.ProcessScheduler(verbose=False, n_processes=2, source_paths=None, cache_callable=False) for i in xrange(8): scheduler.add_task(i, parallel.SqrTestCallable()) results = scheduler.get_results() scheduler.shutdown() # check result results = n.array(results) assert n.all(results == n.array([0,1,4,9,16,25,36,49])) def test_process_scheduler_manager(): """Test process scheduler with context manager itnerface.""" with parallel.ProcessScheduler(n_processes=2, source_paths=None) as scheduler: for i in xrange(8): scheduler.add_task(i, parallel.SqrTestCallable()) results = scheduler.get_results() # check result results = n.array(results) assert n.all(results == n.array([0,1,4,9,16,25,36,49])) def test_process_scheduler_flow(): """Test process scheduler with real Nodes.""" precision = 6 node1 = mdp.nodes.PCANode(output_dim=20) node2 = mdp.nodes.PolynomialExpansionNode(degree=1) node3 = mdp.nodes.SFANode(output_dim=10) flow = mdp.parallel.ParallelFlow([node1, node2, node3]) parallel_flow = mdp.parallel.ParallelFlow(flow.copy()[:]) input_dim = 30 scales = n.linspace(1, 100, num=input_dim) scale_matrix = mdp.numx.diag(scales) train_iterables = [n.dot(mdp.numx_rand.random((5, 100, input_dim)), scale_matrix) for _ in xrange(3)] x = mdp.numx.random.random((10, input_dim)) with parallel.ProcessScheduler(verbose=False, n_processes=3, source_paths=None) as scheduler: parallel_flow.train(train_iterables, scheduler=scheduler) # test that parallel execution works as well # note that we need more chungs then processes to test caching parallel_flow.execute([x for _ in xrange(8)], scheduler=scheduler) # compare to normal flow flow.train(train_iterables) assert parallel_flow[0].tlen == flow[0].tlen y1 = flow.execute(x) y2 = parallel_flow.execute(x) assert_array_almost_equal(abs(y1), abs(y2), precision) def test_process_scheduler_mdp_version(): """Test that we are running the same mdp in subprocesses""" scheduler = parallel.ProcessScheduler(verbose=False, n_processes=2, source_paths=None, cache_callable=False) for i in xrange(2): scheduler.add_task(i, parallel.MDPVersionCallable()) out = scheduler.get_results() scheduler.shutdown() # check that we get 2 identical dictionaries assert out[0] == out[1], 'Subprocesses did not run '\ 'the same MDP as the parent:\n%s\n--\n%s'%(out[0], out[1]) mdp-3.3/mdp/test/test_reload.py000066400000000000000000000001601203131624700165510ustar00rootroot00000000000000try: reload except NameError: from imp import reload def test_reload(): import mdp 
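# A minimal sketch of the ProcessScheduler context-manager usage exercised by
# test_process_scheduler_manager above; the squaring task reuses
# parallel.SqrTestCallable from these tests, everything else is illustrative.
def _example_process_scheduler():
    import mdp.parallel as parallel

    with parallel.ProcessScheduler(n_processes=2, source_paths=None) as scheduler:
        for i in range(8):
            scheduler.add_task(i, parallel.SqrTestCallable())
        results = scheduler.get_results()
    return sorted(results)  # expected: [0, 1, 4, 9, 16, 25, 36, 49]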
reload(mdp) mdp-3.3/mdp/test/test_schedule.py000066400000000000000000000055421203131624700171100ustar00rootroot00000000000000from __future__ import with_statement from _tools import * import mdp.parallel as parallel n = numx # TODO: add test that the callable is forked exactly once before a call? def test_scheduler(): """Test scheduler with 6 tasks.""" scheduler = parallel.Scheduler() for i in xrange(6): scheduler.add_task(i, lambda x: x**2) results = scheduler.get_results() scheduler.shutdown() # check result results = n.array(results) assert n.all(results == n.array([0,1,4,9,16,25])) def test_scheduler_manager(): """Test context manager interface for scheduler.""" with parallel.Scheduler() as scheduler: for i in xrange(6): scheduler.add_task(i, lambda x: x**2) results = scheduler.get_results() assert n.all(results == n.array([0,1,4,9,16,25])) def test_scheduler_manager_exception(): """Test context manager interface for scheduler in case of an exception.""" log = [] class TestSchedulerException(Exception): pass class TestScheduler(parallel.Scheduler): def _shutdown(self): log.append("shutdown") def _process_task(self, data, task_callable, task_index): raise TestSchedulerException() try: with TestScheduler() as scheduler: for i in xrange(6): scheduler.add_task(i, lambda x: x**2) scheduler.get_results() except TestSchedulerException: pass assert log == ["shutdown"] def test_cpu_count(): """Test the cpu_count helper function.""" n_cpus = parallel.cpu_count() assert isinstance(n_cpus, int) def test_thread_scheduler_flow(): """Test thread scheduler with real Nodes.""" precision = 6 node1 = mdp.nodes.PCANode(output_dim=20) node2 = mdp.nodes.PolynomialExpansionNode(degree=1) node3 = mdp.nodes.SFANode(output_dim=10) flow = mdp.parallel.ParallelFlow([node1, node2, node3]) parallel_flow = mdp.parallel.ParallelFlow(flow.copy()[:]) scheduler = parallel.ThreadScheduler(verbose=False, n_threads=3) input_dim = 30 scales = n.linspace(1, 100, num=input_dim) scale_matrix = mdp.numx.diag(scales) train_iterables = [n.dot(mdp.numx_rand.random((5, 100, input_dim)), scale_matrix) for _ in xrange(3)] parallel_flow.train(train_iterables, scheduler=scheduler) x = mdp.numx.random.random((10, input_dim)) # test that parallel execution works as well # note that we need more chungs then processes to test caching parallel_flow.execute([x for _ in xrange(8)], scheduler=scheduler) scheduler.shutdown() # compare to normal flow flow.train(train_iterables) assert parallel_flow[0].tlen == flow[0].tlen y1 = flow.execute(x) y2 = parallel_flow.execute(x) assert_array_almost_equal(abs(y1), abs(y2), precision) mdp-3.3/mdp/test/test_scikits.py000066400000000000000000000021151203131624700167560ustar00rootroot00000000000000from _tools import * requires_scikits = skip_on_condition( "not mdp.config.has_sklearn or mdp.numx_description != 'scipy'", "This test requires sklearn and SciPy") requires_pcasikitslearnnode = skip_on_condition( "'PCAScikitsLearnNode' not in dir(mdp.nodes)", "This test requires sklearn.decomposition.pca.PCA to be available") @requires_scikits @requires_pcasikitslearnnode def test_scikits_PCANode_training(): """Check functionality of scikits' PCANode.""" node = mdp.nodes.PCAScikitsLearnNode(n_components=2) # the first two principal components are the second and fourth axes T = 50000 x = numx_rand.randn(T, 4) x[:,1] *= 10. x[:,3] *= 100. 
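# A minimal sketch of the sklearn wrapper convention that the surrounding test
# relies on, assuming sklearn is installed: an estimator Foo is exposed as
# mdp.nodes.FooScikitsLearnNode, constructor keywords are forwarded to the
# wrapped estimator, and train/stop_training/execute drive it like any other
# MDP node (wrapped classifiers additionally provide label()).
def _example_sklearn_wrapper():
    import mdp
    import numpy as np

    x = np.random.randn(500, 4)
    pca = mdp.nodes.PCAScikitsLearnNode(n_components=2)  # sklearn PCA keyword
    pca.train(x)
    pca.stop_training()
    return pca.execute(x)  # projection onto the two leading components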
node.train(x) node.stop_training() y = node.execute(x) # check dimensionality assert y.shape[1] == 2 assert y.shape[0] == T # arrays should be equal up to sign if (y[:,0]*x[:,3]).mean() < 0.: y[:,0] *= -1. if (y[:,1]*x[:,1]).mean() < 0.: y[:,1] *= -1. assert_array_almost_equal(y[:,0]/100., x[:,3]/100., 1) assert_array_almost_equal(y[:,1]/10., x[:,1]/10., 1) mdp-3.3/mdp/test/test_seed.py000066400000000000000000000017351203131624700162340ustar00rootroot00000000000000################################################################### ### After changing this file, please copy it so that files ### ../../{mbp,bimdp}/test/test_seed.py are identical. ################################################################### import mdp _SEED = None def _compare_with_seed(seed): global _SEED if _SEED is None: _SEED = seed return _SEED == seed def test_seed(): seed = mdp.numx_rand.get_state()[1][0] assert _compare_with_seed(seed), (_SEED, seed) mdp.numx_rand.seed(seed+1) def test_seed_clone(): # we need two identical functions to check that the seed # is reset at every call seed = mdp.numx_rand.get_state()[1][0] assert _compare_with_seed(seed), (_SEED, seed) mdp.numx_rand.seed(seed+1) def test_seed_reset(): # this function resets the global _SEED, so # that we can call the tests several times in # a row with different seeds without getting a failure global _SEED _SEED = None mdp-3.3/mdp/test/test_svm_classifier.py000066400000000000000000000261241203131624700203240ustar00rootroot00000000000000from _tools import * def _randomly_filled_hypercube(widths, num_elem=1000): """Fills a hypercube with given widths, centred at the origin. """ p = [] for i in xrange(num_elem): rand_data = numx_rand.random(len(widths)) rand_data = [w*(d - 0.5) for d, w in zip(rand_data, widths)] p.append(tuple(rand_data)) return p def _randomly_filled_hyperball(dim, radius, num_elem=1000): """Fills a hyperball with a number of random elements. """ r = numx_rand.random(num_elem) points = numx_rand.random((num_elem, dim)) for i in xrange(len(points)): norm = numx.linalg.norm(points[i]) scale = pow(r[i], 1./dim) points[i] = points[i] * radius * scale / norm return points def _random_clusters(positions, radius=1, num_elem=1000): """Puts random clusters with num_elem elements at the given positions. positions - a list of tuples """ data = [] for p in positions: dim = len(p) ball = _randomly_filled_hyperball(dim, radius, num_elem) ball = [numx.array(b) + numx.array(p) for b in ball] data.append(ball) return data def _separable_data(positions, labels, radius=1, num_elem=1000, shuffled=False): """ For each position, we create num_elem data points in a certain radius around that position. If shuffled, we shuffle the output data and labels. positions -- List of position tuples, e.g. [(1, 1), (-1, -1)] labels -- List of labels, e.g. [1, -1] radius -- The maximum distance to the position num_elem -- The number of elements to be created shuffled -- Should the output be shuffled. 
Returns: data, labels """ assert len(positions) == len(labels) data = numx.vstack( _random_clusters(positions, radius, num_elem) ) #data = numx.vstack( (numx_rand.random( (num_elem,2) ) - dist, # numx_rand.random( (num_elem,2) ) + dist) ) a_labels = numx.hstack(map(lambda x: [x] * num_elem, labels)) if shuffled: ind = range(len(data)) numx_rand.shuffle(ind) return data[ind], a_labels[ind] return data, a_labels def _sqdist(tuple_a, tuple_b): return sum( (a-b)**2 for a, b in zip(tuple_a, tuple_b) ) def test_separable_data_is_inside_radius(): positions = [[(1, 1), (-1, -1)], [(1, 1, 10), (100, -20, 30), (-1, 10, 1000)]] labels = [[1, -1], [1, 2, 3]] radii = [0.5, 1, 10] num_elem = 100 for pos, labs in zip(positions, labels): for rad in radii: data, ls = _separable_data(pos, labs, rad, num_elem) for d,l in zip(data, ls): idx = labs.index(l) assert rad**2 > _sqdist(pos[idx], d) @skip_on_condition( "not hasattr(mdp.nodes, 'ShogunSVMClassifier')", "This test requires the 'shogun' module.") def test_ShogunSVMClassifier(): # TODO: Implement parameter ranges num_train = 100 num_test = 50 for positions in [((1,), (-1,)), ((1,1), (-1,-1)), ((1,1,1), (-1,-1,1)), ((1,1,1,1), (-1,1,1,1)), ((1,1,1,1), (-1,-1,-1,-1)), ((1,1), (-1,-1), (1, -1), (-1, 1)) ]: radius = 0.3 if len(positions) == 2: labels = (-1, 1) elif len(positions) == 3: labels = (-1, 1, 1) elif len(positions) == 4: labels = (-1, -1, 1, 1) traindata_real, trainlab = _separable_data(positions, labels, radius, num_train) testdata_real, testlab = _separable_data(positions, labels, radius, num_test) classifiers = ['GMNPSVM', 'GNPPSVM', 'GPBTSVM', #'KernelPerceptron', 'LDA', 'LibSVM', #'LibSVMOneClass', 'MPDSVM', 'Perceptron', 'SVMLin'] kernels = ['PolyKernel', 'LinearKernel', 'SigmoidKernel', 'GaussianKernel'] #kernels = list(mdp.nodes.ShogunSVMClassifier.kernel_parameters.keys()) combinations = {'classifier': classifiers, 'kernel': kernels} for comb in utils.orthogonal_permutations(combinations): # this is redundant but makes it clear, # what has been taken out deliberately if comb['kernel'] in ['PyramidChi2', 'Chi2Kernel']: # We don't have good init arguments for these continue if comb['classifier'] in ['LaRank', 'LibLinear', 'LibSVMMultiClass', 'MKLClassification', 'MKLMultiClass', 'MKLOneClass', 'MultiClassSVM', 'SVM', 'SVMOcas', 'SVMSGD', 'ScatterSVM', 'SubGradientSVM']: # We don't have good init arguments for these and/or they work differently continue # something does not work here: skipping if comb['classifier'] == 'GPBTSVM' and comb['kernel'] == 'LinearKernel': continue sg_node = mdp.nodes.ShogunSVMClassifier(classifier=comb['classifier']) if sg_node.classifier.takes_kernel: sg_node.set_kernel(comb['kernel']) # train in two chunks to check update mechanism sg_node.train( traindata_real[:num_train], trainlab[:num_train] ) sg_node.train( traindata_real[num_train:], trainlab[num_train:] ) assert sg_node.input_dim == len(traindata_real.T) out = sg_node.label(testdata_real) if sg_node.classifier.takes_kernel: # check that the kernel has stored all our training vectors assert sg_node.classifier.kernel.get_num_vec_lhs() == num_train * len(positions) # check that the kernel has also stored the latest classification vectors in rhs assert sg_node.classifier.kernel.get_num_vec_rhs() == num_test * len(positions) # Test also for inverse worked = numx.all(numx.sign(out) == testlab) or \ numx.all(numx.sign(out) == -testlab) failed = not worked should_fail = False if len(positions) == 2: if comb['classifier'] in ['LibSVMOneClass', 'GMNPSVM']: should_fail 
= True if comb['classifier'] == 'GPBTSVM' and \ comb['kernel'] in ['LinearKernel']: should_fail = True # xor problem if len(positions) == 4: if comb['classifier'] in ['LibSVMOneClass', 'SVMLin', 'Perceptron', 'LDA', 'GMNPSVM']: should_fail = True if comb['classifier'] == 'LibSVM' and \ comb['kernel'] in ['LinearKernel', 'SigmoidKernel']: should_fail = True if comb['classifier'] == 'GPBTSVM' and \ comb['kernel'] in ['LinearKernel', 'SigmoidKernel']: should_fail = True if comb['classifier'] == 'GNPPSVM' and \ comb['kernel'] in ['LinearKernel', 'SigmoidKernel']: should_fail = True if should_fail: msg = ("Classification should fail but did not in %s. Positions %s." % (sg_node.classifier, positions)) else: msg = ("Classification should not fail but failed in %s. Positions %s." % (sg_node.classifier, positions)) assert should_fail == failed, msg class TestLibSVMClassifier(object): @skip_on_condition("not hasattr(mdp.nodes, 'LibSVMClassifier')", "This test requires the 'libsvm' module.") def setup_method(self, method): self.combinations = {'kernel': mdp.nodes.LibSVMClassifier.kernels, 'classifier': mdp.nodes.LibSVMClassifier.classifiers} def test_that_parameters_are_correct(self): import svm as libsvm for comb in utils.orthogonal_permutations(self.combinations): C = 1.01 epsilon = 1.1e-5 svm_node = mdp.nodes.LibSVMClassifier(params={"C": C, "eps": epsilon}) svm_node.set_kernel(comb['kernel']) svm_node.set_classifier(comb['classifier']) # check that the parameters are correct assert svm_node.parameter.kernel_type == getattr(libsvm, comb['kernel']) assert svm_node.parameter.svm_type == getattr(libsvm, comb['classifier']) assert svm_node.parameter.C == C assert svm_node.parameter.eps == epsilon def test_linear_separable_data(self): num_train = 100 num_test = 50 C = 1.01 epsilon = 1e-5 for positions in [((1,), (-1,)), ((1,1), (-1,-1)), ((1,1,1), (-1,-1,1)), ((1,1,1,1), (-1,1,1,1)), ((1,1,1,1), (-1,-1,-1,-1))]: radius = 0.3 traindata_real, trainlab = _separable_data(positions, (-1, 1), radius, num_train, True) testdata_real, testlab = _separable_data(positions, (-1, 1), radius, num_test, True) for comb in utils.orthogonal_permutations(self.combinations): # Take out non-working cases if comb['classifier'] in ["ONE_CLASS"]: continue if comb['kernel'] in ["SIGMOID", "POLY"]: continue if len(positions[0]) == 1 and comb['kernel'] == "RBF": # RBF won't work in 1d continue svm_node = mdp.nodes.LibSVMClassifier(kernel=comb['kernel'], classifier=comb['classifier'], probability=True, params={"C": C, "eps": epsilon}) # train in two chunks to check update mechanism svm_node.train(traindata_real[:num_train], trainlab[:num_train]) svm_node.train(traindata_real[num_train:], trainlab[num_train:]) assert svm_node.input_dim == len(traindata_real.T) out = svm_node.label(testdata_real) testerr = numx.all(numx.sign(out) == testlab) assert testerr, ('classification error for ', comb) # we don't have ranks in our regression models if not comb['classifier'].endswith("SVR"): pos1_rank = numx.array(svm_node.rank(numx.array([positions[0]]))) pos2_rank = numx.array(svm_node.rank(numx.array([positions[1]]))) assert numx.all(pos1_rank == -pos2_rank) assert numx.all(abs(pos1_rank) == 1) assert numx.all(abs(pos2_rank) == 1) mdp-3.3/mdp/test/test_tempdir.py000066400000000000000000000007021203131624700167510ustar00rootroot00000000000000from __future__ import with_statement import tempfile import os import py.test def test_tmpdir_exists(): assert os.path.exists(py.test.mdp_tempdirname) def test_tmpdir_writable1(): with 
open(os.path.join(py.test.mdp_tempdirname, 'empty'), 'w'): pass def test_tmpdir_writable2(): with tempfile.NamedTemporaryFile(prefix='MDP_', suffix='.testfile', dir=py.test.mdp_tempdirname): pass mdp-3.3/mdp/test/test_utils.py000066400000000000000000000120711203131624700164470ustar00rootroot00000000000000"""These are test functions for MDP utilities. """ import py.test from _tools import * from mdp import Node, nodes class BogusClass(object): def __init__(self): self.x = numx_rand.random((2,2)) class BogusNode(Node): x = numx_rand.random((2,2)) y = BogusClass() z = BogusClass() z.z = BogusClass() def test_introspection(): bogus = BogusNode() arrays, string = utils.dig_node(bogus) assert len(arrays.keys()) == 4, 'Not all arrays where caught' assert sorted(arrays.keys()) == ['x', 'y.x', 'z.x', 'z.z.x'], 'Wrong names' sizes = [x[0] for x in arrays.values()] assert sorted(sizes) == [numx_rand.random((2,2)).itemsize*4]*4, \ 'Wrong sizes' sfa = nodes.SFANode() sfa.train(numx_rand.random((1000, 10))) a_sfa, string = utils.dig_node(sfa) keys = ['_cov_mtx._avg', '_cov_mtx._cov_mtx', '_dcov_mtx._avg', '_dcov_mtx._cov_mtx'] assert sorted(a_sfa.keys()) == keys, 'Wrong arrays in SFANode' sfa.stop_training() a_sfa, string = utils.dig_node(sfa) keys = ['_bias', 'avg', 'd', 'davg', 'sf'] assert sorted(a_sfa.keys()) == keys, 'Wrong arrays in SFANode' def test_random_rot(): dim = 20 tlen = 10 for i in xrange(tlen): x = utils.random_rot(dim, dtype='f') assert x.dtype.char=='f', 'Wrong dtype' y = utils.mult(x.T, x) assert_almost_equal(numx_linalg.det(x), 1., 4) assert_array_almost_equal(y, numx.eye(dim), 4) def test_random_rot_determinant_sign(): x = utils.random_rot(4) assert_almost_equal(numx_linalg.det(x), 1., 4) x = utils.random_rot(5) assert_almost_equal(numx_linalg.det(x), 1., 4) def test_casting(): x = numx_rand.random((5,3)).astype('d') y = 3*x assert_type_equal(y.dtype, x.dtype) x = numx_rand.random((5,3)).astype('f') y = 3.*x assert_type_equal(y.dtype, x.dtype) x = (10*numx_rand.random((5,3))).astype('i') y = 3.*x assert_type_equal(y.dtype, 'd') y = 3L*x assert_type_equal(y.dtype, 'i') x = numx_rand.random((5,3)).astype('f') y = 3L*x assert_type_equal(y.dtype, 'f') def test_mult_diag(): dim = 20 d = numx_rand.random(size=(dim,)) dd = numx.diag(d) mtx = numx_rand.random(size=(dim, dim)) res1 = utils.mult(dd, mtx) res2 = utils.mult_diag(d, mtx, left=True) assert_array_almost_equal(res1, res2, 10) res1 = utils.mult(mtx, dd) res2 = utils.mult_diag(d, mtx, left=False) assert_array_almost_equal(res1, res2, 10) def test_symeig_fake_integer(): a = numx.array([[1,2],[2,7]]) b = numx.array([[3,1],[1,5]]) w,z = utils._symeig._symeig_fake(a) w,z = utils._symeig._symeig_fake(a,b) def test_symeig_fake_LAPACK_bug(): # bug. when input matrix is almost an identity matrix # but not exactly, the lapack dgeev routine returns a # matrix of eigenvectors which is not orthogonal. # this bug was present when we used numx_linalg.eig # instead of numx_linalg.eigh . # Note: this is a LAPACK bug. 
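# A minimal sketch of the symmetric eigensolver interface exercised by the
# symeig tests in this module: for a symmetric A and a symmetric positive
# definite B, mdp.utils.symeig returns eigenvalues in ascending order and
# eigenvectors normalized so that Z^T A Z = diag(w) and Z^T B Z = I.  The
# matrix construction mirrors the tests; the dimension is an arbitrary choice.
def _example_symeig():
    import mdp
    import numpy as np

    dim = 5
    a = mdp.utils.symrand(dim)
    b = mdp.utils.symrand(dim) + np.diag([2.1] * dim)  # keep b positive definite
    w, z = mdp.utils.symeig(a, b)
    assert np.allclose(np.dot(z.T, np.dot(a, z)), np.diag(w))
    assert np.allclose(np.dot(z.T, np.dot(b, z)), np.eye(dim))
    return w, z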
y = numx_rand.random((4,4))*1E-16 y = (y+y.T)/2 for i in xrange(4): y[i,i]=1 val, vec = utils._symeig._symeig_fake(y) assert_almost_equal(abs(numx_linalg.det(vec)), 1., 12) def test_QuadraticForm_extrema(): # TODO: add some real test # check H with negligible linear term noise = 1e-8 tol = 1e-6 x = numx_rand.random((10,)) H = numx.outer(x, x) + numx.eye(10)*0.1 f = noise*numx_rand.random((10,)) q = utils.QuadraticForm(H, f) xmax, xmin = q.get_extrema(utils.norm2(x), tol=tol) assert_array_almost_equal(x, xmax, 5) # check I + linear term H = numx.eye(10, dtype='d') f = x q = utils.QuadraticForm(H, f=f) xmax, xmin = q.get_extrema(utils.norm2(x), tol=tol) assert_array_almost_equal(f, xmax, 5) def test_QuadraticForm_invariances(): #nu = numx.linspace(2.,-3,10) nu = numx.linspace(6., 1, 10) H = utils.symrand(nu) E, W = mdp.utils.symeig(H) q = utils.QuadraticForm(H) xmax, xmin = q.get_extrema(5.) e_w, e_sd = q.get_invariances(xmax) #print e_sd,nu[1:]-nu[0] assert_array_almost_equal(e_sd,nu[1:]-nu[0],6) assert_array_almost_equal(abs(e_w),abs(W[:,-2::-1]),6) e_w, e_sd = q.get_invariances(xmin) assert_array_almost_equal(e_sd,nu[-2::-1]-nu[-1],6) assert_array_almost_equal(abs(e_w),abs(W[:,1:]),6) def test_QuadraticForm_non_symmetric_raises(): """Test the detection of non symmetric H! """ H = numx_rand.random((10,10)) py.test.raises(mdp.utils.QuadraticFormException, utils.QuadraticForm, H) def test_nongeneral_svd_bug(): a = numx.array([[ 0.73083003, 0. , 0.7641788 , 0. ], [ 0. , 0. , 0. , 0. ], [ 0.7641788 , 0. , 0.79904932, 0. ], [ 0. , 0. , 0. , 0. ]]) w, z = utils.nongeneral_svd(a) diag = numx.diagonal(utils.mult(utils.hermitian(z), utils.mult(a, z))).real assert_array_almost_equal(diag, w, 12) mdp-3.3/mdp/test/test_utils_generic.py000066400000000000000000000043571203131624700201530ustar00rootroot00000000000000from __future__ import with_statement from _tools import * TESTDECIMALS = {numx.dtype('d'): 12, numx.dtype('f'): 3, numx.dtype('D'): 12, numx.dtype('F'): 3, } def test_eigenproblem(dtype, range, func): """Solve a standard eigenvalue problem.""" dtype = numx.dtype(dtype) dim = 5 if range: range = (2, dim -1) else: range = None a = utils.symrand(dim, dtype)+numx.diag([2.1]*dim).astype(dtype) w,z = func(a, range=range) # assertions assert_type_equal(z.dtype, dtype) w = w.astype(dtype) diag = numx.diagonal(utils.mult(utils.hermitian(z), utils.mult(a, z))).real assert_array_almost_equal(diag, w, TESTDECIMALS[dtype]) def test_geneigenproblem(dtype, range, func): """Solve a generalized eigenvalue problem.""" dtype = numx.dtype(dtype) dim = 5 if range: range = (2, dim -1) else: range = None a = utils.symrand(dim, dtype) b = utils.symrand(dim, dtype)+numx.diag([2.1]*dim).astype(dtype) w,z = func(a,b,range=range) # assertions assert z.dtype == dtype w = w.astype(dtype) diag1 = numx.diagonal(utils.mult(utils.hermitian(z), utils.mult(a, z))).real assert_array_almost_equal(diag1, w, TESTDECIMALS[dtype]) diag2 = numx.diagonal(utils.mult(utils.hermitian(z), utils.mult(b, z))).real assert_array_almost_equal(diag2, numx.ones(diag2.shape[0]), TESTDECIMALS[dtype]) test_geneigenproblem.funcs = [utils._symeig._symeig_fake] if mdp.utils.symeig is utils._symeig.wrap_eigh: test_geneigenproblem.funcs.append(utils._symeig.wrap_eigh) test_eigenproblem.funcs = test_geneigenproblem.funcs + [utils.nongeneral_svd] def pytest_generate_tests(metafunc): for testtype in ('d', 'f'): for therange in (False, True): for func in metafunc.function.funcs: funcargs = dict(dtype=testtype, range=therange, func=func) theid = "%s, %s, %s" 
% (func.__name__, testtype, therange) metafunc.addcall(funcargs, id=theid) mdp-3.3/mdp/utils/000077500000000000000000000000001203131624700140565ustar00rootroot00000000000000mdp-3.3/mdp/utils/__init__.py000066400000000000000000000211201203131624700161630ustar00rootroot00000000000000__docformat__ = "restructuredtext en" from routines import (timediff, refcast, scast, rotate, random_rot, permute, symrand, norm2, cov2, mult_diag, comb, sqrtm, get_dtypes, nongeneral_svd, hermitian, cov_maxima, lrep, rrep, irep, orthogonal_permutations, izip_stretched, weighted_choice, bool_to_sign, sign_to_bool, gabor, invert_exp_funcs2) try: from collections import OrderedDict except ImportError: ## Getting an Ordered Dict for Python < 2.7 from _ordered_dict import OrderedDict try: from tempfile import TemporaryDirectory except ImportError: from temporarydir import TemporaryDirectory from introspection import dig_node, get_node_size, get_node_size_str from quad_forms import QuadraticForm, QuadraticFormException from covariance import (CovarianceMatrix, DelayCovarianceMatrix, MultipleCovarianceMatrices,CrossCovarianceMatrix) from progress_bar import progressinfo from slideshow import (basic_css, slideshow_css, HTMLSlideShow, image_slideshow_css, ImageHTMLSlideShow, SectionHTMLSlideShow, SectionImageHTMLSlideShow, image_slideshow, show_image_slideshow) from _symeig import SymeigException import mdp as _mdp # matrix multiplication function # we use an alias to be able to use the wrapper for the 'gemm' Lapack # function in the future mult = _mdp.numx.dot matmult = mult if _mdp.numx_description == 'scipy': def matmult(a,b, alpha=1.0, beta=0.0, c=None, trans_a=0, trans_b=0): """Return alpha*(a*b) + beta*c. a,b,c : matrices alpha, beta: scalars trans_a : 0 (a not transposed), 1 (a transposed), 2 (a conjugate transposed) trans_b : 0 (b not transposed), 1 (b transposed), 2 (b conjugate transposed) """ if c: gemm,=_mdp.numx_linalg.get_blas_funcs(('gemm',),(a,b,c)) else: gemm,=_mdp.numx_linalg.get_blas_funcs(('gemm',),(a,b)) return gemm(alpha, a, b, beta, c, trans_a, trans_b) # workaround to numpy issues with dtype behavior: # 'f' is upcasted at least in the following functions _inv = _mdp.numx_linalg.inv inv = lambda x: refcast(_inv(x), x.dtype) _pinv = _mdp.numx_linalg.pinv pinv = lambda x: refcast(_pinv(x), x.dtype) _solve = _mdp.numx_linalg.solve solve = lambda x, y: refcast(_solve(x, y), x.dtype) def svd(x, compute_uv = True): """Wrap the numx SVD routine, so that it returns arrays of the correct dtype and a SymeigException in case of failures.""" tc = x.dtype try: if compute_uv: u, s, v = _mdp.numx_linalg.svd(x) return refcast(u, tc), refcast(s, tc), refcast(v, tc) else: s = _mdp.numx_linalg.svd(x, compute_uv=False) return refcast(s, tc) except _mdp.numx_linalg.LinAlgError, exc: raise SymeigException(str(exc)) __all__ = ['CovarianceMatrix', 'DelayCovarianceMatrix','CrossCovarianceMatrix', 'MultipleCovarianceMatrices', 'QuadraticForm', 'QuadraticFormException', 'comb', 'cov2', 'dig_node', 'get_dtypes', 'get_node_size', 'hermitian', 'inv', 'mult', 'mult_diag', 'nongeneral_svd', 'norm2', 'permute', 'pinv', 'progressinfo', 'random_rot', 'refcast', 'rotate', 'scast', 'solve', 'sqrtm', 'svd', 'symrand', 'timediff', 'matmult', 'HTMLSlideShow', 'ImageHTMLSlideShow', 'basic_css', 'slideshow_css', 'image_slideshow_css', 'SectionHTMLSlideShow', 'SectionImageHTMLSlideShow', 'image_slideshow', 'lrep', 'rrep', 'irep', 'orthogonal_permutations', 'izip_stretched', 'weighted_choice', 'bool_to_sign', 'sign_to_bool', 'OrderedDict', 
'TemporaryDirectory', 'gabor', 'fixup_namespace'] def _without_prefix(name, prefix): if name.startswith(prefix): return name[len(prefix):] else: return None import os FIXUP_DEBUG = os.getenv('MDPNSDEBUG') def fixup_namespace(mname, names, old_modules, keep_modules=()): """Update ``__module__`` attribute and remove ``old_modules`` from namespace When classes are imported from implementation modules into the package exporting them, the ``__module__`` attribute reflects the place of definition. Splitting the code into separate files (and thus modules) makes the implementation managable. Nevertheless, we do not want the implementation modules to be visible and delete their names from the package's namespace. This causes some problems: when looking at the exported classes and other objects, their ``__module__`` attribute points to something non-importable, ``repr`` output and documentation do not show the module from which they are supposed to be imported. The documentation generators like epydoc and sphinx are also confused. To alleviate those problems, the ``__module__`` attributes of all exported classes defined in a "private" module and then exported elsewhere are changed to the latter. For each name in ``names``, if ``.`` is accessible, and if its ``__module__`` attribute is equal to one of the names in ``old_modules``, it is changed to ``""``. In other words, all the ``__module__`` attributes of objects exported from module ```` are updated, iff they used to point to one of the "private" modules in ``old_modules``. This operation is performed not only for classes, but actually for all objects with the ``__module__`` attribute, following the rules stated above. The operation is also performed recursively, not only for names in ``names``, but also for methods, inner classes, and other attributes. This recursive invocation is necessary because all the problems affecting top-level exported classes also affect their attributes visible for the user, and especially documented functions. If ``names`` is ``None``, all public names in module ```` (not starting with ``'_'``) are affected. After the ``__module__`` attributes are changed, "private" modules given in ``old_modules``, except for the ones in ``keep_modules``, are deleted from the namespace of ```` module. 
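# A toy sketch of the behaviour documented above, using a hypothetical package
# name: after the call the exported class reports the public module and the
# "private" implementation module is removed from the package namespace.
def _example_fixup_namespace():
    import sys
    import types
    import mdp

    pkg = types.ModuleType('fakepkg')

    class Thing(object):
        pass
    Thing.__module__ = 'fakepkg._impl'  # pretend it came from a private module

    pkg.Thing = Thing
    pkg._impl = types.ModuleType('fakepkg._impl')
    sys.modules['fakepkg'] = pkg
    try:
        mdp.utils.fixup_namespace('fakepkg', ['Thing'], ('_impl',))
        assert Thing.__module__ == 'fakepkg'
        assert not hasattr(pkg, '_impl')
    finally:
        del sys.modules['fakepkg']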
""" import sys module = sys.modules[mname] if names is None: names = [name for name in dir(module) if not name.startswith('_')] if FIXUP_DEBUG: print 'NAMESPACE FIXUP: %s (%s)' % (module, mname) for name in names: _fixup_namespace_item(module, mname, name, old_modules, '') # take care of removing the module filenames for filename in old_modules: # skip names in keep modules if filename in keep_modules: continue try: delattr(module, filename) if FIXUP_DEBUG: print 'NAMESPACE FIXUP: deleting %s from %s' % (filename, module) except AttributeError: # if the name is not there, we are in a reload, so do not # do anything pass def _fixup_namespace_item(parent, mname, name, old_modules, path): try: item = getattr(parent, name) except AttributeError: if name.startswith('__'): # those sometimes fail unexplicably return else: raise current_name = getattr(item, '__module__', None) if (current_name is not None and _without_prefix(current_name, mname + '.') in old_modules): if FIXUP_DEBUG: print 'namespace fixup: {%s => %s}%s.%s' % ( current_name, mname, path, name) try: item.__module__ = mname except AttributeError: try: item.im_func.__module__ = mname except AttributeError, e: if FIXUP_DEBUG: print 'namespace fixup failed: ', e # don't recurse into functions anyway return subitems = [_name for _name in dir(item) if _name.startswith('__') or not _name.startswith('_')] for subitem in subitems: _fixup_namespace_item(item, mname, subitem, old_modules, path + '.' + name) fixup_namespace(__name__, __all__, ('routines', 'introspection', 'quad_forms', 'covariance', 'progress_bar', 'slideshow', '_ordered_dict', 'templet', 'temporarydir', 'os', )) mdp-3.3/mdp/utils/_ordered_dict.py000066400000000000000000000056731203131624700172310ustar00rootroot00000000000000## {{{ http://code.activestate.com/recipes/576693/ (r6) from UserDict import DictMixin as _DictMixin class OrderedDict(dict, _DictMixin): """Backported Ordered Dict for Python < 2.7""" def __init__(self, *args, **kwds): if len(args) > 1: raise TypeError('expected at most 1 arguments, got %d' % len(args)) try: self.__end except AttributeError: self.clear() self.update(*args, **kwds) def clear(self): self.__end = end = [] end += [None, end, end] # sentinel node for doubly linked list self.__map = {} # key --> [key, prev, next] dict.clear(self) def __setitem__(self, key, value): if key not in self: end = self.__end curr = end[1] curr[2] = end[1] = self.__map[key] = [key, curr, end] dict.__setitem__(self, key, value) def __delitem__(self, key): dict.__delitem__(self, key) key, prev, next = self.__map.pop(key) prev[2] = next next[1] = prev def __iter__(self): end = self.__end curr = end[2] while curr is not end: yield curr[0] curr = curr[2] def __reversed__(self): end = self.__end curr = end[1] while curr is not end: yield curr[0] curr = curr[1] def popitem(self, last=True): if not self: raise KeyError('dictionary is empty') if last: key = reversed(self).next() else: key = iter(self).next() value = self.pop(key) return key, value def __reduce__(self): items = [[k, self[k]] for k in self] tmp = self.__map, self.__end del self.__map, self.__end inst_dict = vars(self).copy() self.__map, self.__end = tmp if inst_dict: return (self.__class__, (items,), inst_dict) return self.__class__, (items,) def keys(self): return list(self) setdefault = _DictMixin.setdefault update = _DictMixin.update pop = _DictMixin.pop values = _DictMixin.values items = _DictMixin.items iterkeys = _DictMixin.iterkeys itervalues = _DictMixin.itervalues iteritems = _DictMixin.iteritems def 
__repr__(self): if not self: return '%s()' % (self.__class__.__name__,) return '%s(%r)' % (self.__class__.__name__, self.items()) def copy(self): return self.__class__(self) @classmethod def fromkeys(cls, iterable, value=None): d = cls() for key in iterable: d[key] = value return d def __eq__(self, other): if isinstance(other, OrderedDict): return len(self)==len(other) and self.items() == other.items() return dict.__eq__(self, other) def __ne__(self, other): return not self == other ## end of http://code.activestate.com/recipes/576693/ }}} mdp-3.3/mdp/utils/_symeig.py000066400000000000000000000123501203131624700160650ustar00rootroot00000000000000import mdp from mdp import numx, numx_linalg class SymeigException(mdp.MDPException): pass # the following functions and classes were part of the scipy_emulation.py file _type_keys = ['f', 'd', 'F', 'D'] _type_conv = {('f','d'): 'd', ('f','F'): 'F', ('f','D'): 'D', ('d','F'): 'D', ('d','D'): 'D', ('F','d'): 'D', ('F','D'): 'D'} def _greatest_common_dtype(alist): """ Apply conversion rules to find the common conversion type dtype 'd' is default for 'i' or unknown types (known types: 'f','d','F','D'). """ dtype = 'f' for array in alist: if array is None: continue tc = array.dtype.char if tc not in _type_keys: tc = 'd' transition = (dtype, tc) if transition in _type_conv: dtype = _type_conv[transition] return dtype def _assert_eigenvalues_real_and_positive(w, dtype): tol = numx.finfo(dtype.type).eps * 100 if abs(w.imag).max() > tol: err = "Some eigenvalues have significant imaginary part: %s " % str(w) raise mdp.SymeigException(err) #if w.real.min() < 0: # err = "Got negative eigenvalues: %s" % str(w) # raise SymeigException(err) def wrap_eigh(A, B = None, eigenvectors = True, turbo = "on", range = None, type = 1, overwrite = False): """Wrapper for scipy.linalg.eigh for scipy version > 0.7""" args = {} args['a'] = A args['b'] = B args['eigvals_only'] = not eigenvectors args['overwrite_a'] = overwrite args['overwrite_b'] = overwrite if turbo == "on": args['turbo'] = True else: args['turbo'] = False args['type'] = type if range is not None: n = A.shape[0] lo, hi = range if lo < 1: lo = 1 if lo > n: lo = n if hi > n: hi = n if lo > hi: lo, hi = hi, lo # in scipy.linalg.eigh the range starts from 0 lo -= 1 hi -= 1 range = (lo, hi) args['eigvals'] = range try: return numx_linalg.eigh(**args) except numx_linalg.LinAlgError, exception: raise SymeigException(str(exception)) def _symeig_fake(A, B = None, eigenvectors = True, turbo = "on", range = None, type = 1, overwrite = False): """Solve standard and generalized eigenvalue problem for symmetric (hermitian) definite positive matrices. This function is a wrapper of LinearAlgebra.eigenvectors or numarray.linear_algebra.eigenvectors with an interface compatible with symeig. Syntax: w,Z = symeig(A) w = symeig(A,eigenvectors=0) w,Z = symeig(A,range=(lo,hi)) w,Z = symeig(A,B,range=(lo,hi)) Inputs: A -- An N x N matrix. B -- An N x N matrix. eigenvectors -- if set return eigenvalues and eigenvectors, otherwise only eigenvalues turbo -- not implemented range -- the tuple (lo,hi) represent the indexes of the smallest and largest (in ascending order) eigenvalues to be returned. 1 <= lo < hi <= N if range = None, returns all eigenvalues and eigenvectors. type -- not implemented, always solve A*x = (lambda)*B*x overwrite -- not implemented Outputs: w -- (selected) eigenvalues in ascending order. 
Z -- if range = None, Z contains the matrix of eigenvectors, normalized as follows: Z^H * A * Z = lambda and Z^H * B * Z = I where ^H means conjugate transpose. if range, an N x M matrix containing the orthonormal eigenvectors of the matrix A corresponding to the selected eigenvalues, with the i-th column of Z holding the eigenvector associated with w[i]. The eigenvectors are normalized as above. """ dtype = numx.dtype(_greatest_common_dtype([A, B])) try: if B is None: w, Z = numx_linalg.eigh(A) else: # make B the identity matrix wB, ZB = numx_linalg.eigh(B) _assert_eigenvalues_real_and_positive(wB, dtype) ZB = ZB.real / numx.sqrt(wB.real) # transform A in the new basis: A = ZB^T * A * ZB A = mdp.utils.mult(mdp.utils.mult(ZB.T, A), ZB) # diagonalize A w, ZA = numx_linalg.eigh(A) Z = mdp.utils.mult(ZB, ZA) except numx_linalg.LinAlgError, exception: raise SymeigException(str(exception)) _assert_eigenvalues_real_and_positive(w, dtype) w = w.real Z = Z.real idx = w.argsort() w = w.take(idx) Z = Z.take(idx, axis=1) # sanitize range: n = A.shape[0] if range is not None: lo, hi = range if lo < 1: lo = 1 if lo > n: lo = n if hi > n: hi = n if lo > hi: lo, hi = hi, lo Z = Z[:, lo-1:hi] w = w[lo-1:hi] # the final call to refcast is necessary because of a bug in the casting # behavior of Numeric and numarray: eigenvector does not wrap the LAPACK # single precision routines if eigenvectors: return mdp.utils.refcast(w, dtype), mdp.utils.refcast(Z, dtype) else: return mdp.utils.refcast(w, dtype) mdp-3.3/mdp/utils/basic.css000066400000000000000000000004641203131624700156550ustar00rootroot00000000000000/* Basic default style used by MDP for a pleasant uniform appearance. */ html, body { font-family: sans-serif; text-align: center; } h1, h2, h3, h4 { color: #003399; } par.explanation { color: #003399; font-size: small; } table.flow { margin-left: auto; margin-right: auto; } mdp-3.3/mdp/utils/covariance.py000066400000000000000000000344311203131624700165470ustar00rootroot00000000000000import mdp import warnings # import numeric module (scipy, Numeric or numarray) numx = mdp.numx def _check_roundoff(t, dtype): """Check if t is so large that t+1 == t up to 2 precision digits""" # limit precision limit = 10.**(numx.finfo(dtype).precision-2) if int(t) >= limit: wr = ('You have summed %e entries in the covariance matrix.' '\nAs you are using dtype \'%s\', you are ' 'probably getting severe round off' '\nerrors. See CovarianceMatrix docstring for more' ' information.' % (t, dtype.name)) warnings.warn(wr, mdp.MDPWarning) class CovarianceMatrix(object): """This class stores an empirical covariance matrix that can be updated incrementally. A call to the 'fix' method returns the current state of the covariance matrix, the average and the number of observations, and resets the internal data. Note that the internal sum is a standard __add__ operation. We are not using any of the fancy sum algorithms to avoid round off errors when adding many numbers. If you want to contribute a CovarianceMatrix class that uses such algorithms we would be happy to include it in MDP. For a start see the Python recipe by Raymond Hettinger at http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/393090 For a review about floating point arithmetic and its pitfalls see http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html """ def __init__(self, dtype=None, bias=False): """If dtype is not defined, it will be inherited from the first data bunch received by 'update'. 
All the matrices in this class are set up with the given dtype and no upcast is possible. If bias is True, the covariance matrix is normalized by dividing by T instead of the usual T-1. """ if dtype is None: self._dtype = None else: self._dtype = numx.dtype(dtype) self._input_dim = None # will be set in _init_internals # covariance matrix, updated during the training phase self._cov_mtx = None # average, updated during the training phase self._avg = None # number of observation so far during the training phase self._tlen = 0 self.bias = bias def _init_internals(self, x): """Init the internal structures. The reason this is not done in the constructor is that we want to be able to derive the input dimension and the dtype directly from the data this class receives. """ # init dtype if self._dtype is None: self._dtype = x.dtype dim = x.shape[1] self._input_dim = dim type_ = self._dtype # init covariance matrix self._cov_mtx = numx.zeros((dim, dim), type_) # init average self._avg = numx.zeros(dim, type_) def update(self, x): """Update internal structures. Note that no consistency checks are performed on the data (this is typically done in the enclosing node). """ if self._cov_mtx is None: self._init_internals(x) # cast input x = mdp.utils.refcast(x, self._dtype) # update the covariance matrix, the average and the number of # observations (try to do everything inplace) self._cov_mtx += mdp.utils.mult(x.T, x) self._avg += x.sum(axis=0) self._tlen += x.shape[0] def fix(self, center=True): """Returns a triple containing the covariance matrix, the average and the number of observations. The covariance matrix is then reset to a zero-state. If center is false, the returned matrix is the matrix of the second moments, i.e. the covariance matrix of the data without subtracting the mean.""" # local variables type_ = self._dtype tlen = self._tlen _check_roundoff(tlen, type_) avg = self._avg cov_mtx = self._cov_mtx ##### fix the training variables # fix the covariance matrix (try to do everything inplace) if self.bias: cov_mtx /= tlen else: cov_mtx /= tlen - 1 if center: avg_mtx = numx.outer(avg, avg) if self.bias: avg_mtx /= tlen*(tlen) else: avg_mtx /= tlen*(tlen - 1) cov_mtx -= avg_mtx # fix the average avg /= tlen ##### clean up # covariance matrix, updated during the training phase self._cov_mtx = None # average, updated during the training phase self._avg = None # number of observation so far during the training phase self._tlen = 0 return cov_mtx, avg, tlen class DelayCovarianceMatrix(object): """This class stores an empirical covariance matrix between the signal and time delayed signal that can be updated incrementally. Note that the internal sum is a standard __add__ operation. We are not using any of the fancy sum algorithms to avoid round off errors when adding many numbers. If you want to contribute a CovarianceMatrix class that uses such algorithms we would be happy to include it in MDP. For a start see the Python recipe by Raymond Hettinger at http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/393090 For a review about floating point arithmetic and its pitfalls see http://docs.sun.com/source/806-3568/ncg_goldberg.html """ def __init__(self, dt, dtype=None, bias=False): """dt is the time delay. If dt==0, DelayCovarianceMatrix equals CovarianceMatrix. If dtype is not defined, it will be inherited from the first data bunch received by 'update'. All the matrices in this class are set up with the given dtype and no upcast is possible. 
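A rough usage sketch (the data shape and the mdp.utils import path are
assumptions of this example):

>>> import mdp
>>> x = mdp.numx_rand.random((1000, 5))
>>> dcov = mdp.utils.DelayCovarianceMatrix(dt=1)
>>> dcov.update(x)   # each data block must contain at least dt+1 samples
>>> cov, avg, avg_dt, tlen = dcov.fix()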
If bias is True, the covariance matrix is normalized by dividing by T instead of the usual T-1. """ # time delay self._dt = int(dt) if dtype is None: self._dtype = None else: self._dtype = numx.dtype(dtype) # clean up variables to spare on space self._cov_mtx = None self._avg = None self._avg_dt = None self._tlen = 0 self.bias = bias def _init_internals(self, x): """Inits some internals structures. The reason this is not done in the constructor is that we want to be able to derive the input dimension and the dtype directly from the data this class receives. """ # init dtype if self._dtype is None: self._dtype = x.dtype dim = x.shape[1] self._input_dim = dim # init covariance matrix self._cov_mtx = numx.zeros((dim, dim), self._dtype) # init averages self._avg = numx.zeros(dim, self._dtype) self._avg_dt = numx.zeros(dim, self._dtype) def update(self, x): """Update internal structures.""" if self._cov_mtx is None: self._init_internals(x) # cast input x = mdp.utils.refcast(x, self._dtype) dt = self._dt # the number of data points in each block should be at least dt+1 tlen = x.shape[0] if tlen < (dt+1): err = 'Block length is %d, should be at least %d.' % (tlen, dt+1) raise mdp.MDPException(err) # update the covariance matrix, the average and the number of # observations (try to do everything inplace) self._cov_mtx += mdp.utils.mult(x[:tlen-dt, :].T, x[dt:tlen, :]) totalsum = x.sum(axis=0) self._avg += totalsum - x[tlen-dt:, :].sum(axis=0) self._avg_dt += totalsum - x[:dt, :].sum(axis=0) self._tlen += tlen-dt def fix(self, A=None): """The collected data is adjusted to compute the covariance matrix of the signal x(1)...x(N-dt) and the delayed signal x(dt)...x(N), which is defined as <(x(t)-)*(x(t+dt)-)> . The function returns a tuple containing the covariance matrix, the average over the first N-dt points, the average of the delayed signal and the number of observations. The internal data is then reset to a zero-state. If A is defined, the covariance matrix is transformed by the linear transformation Ax . E.g. to whiten the data, A is the whitening matrix. """ # local variables type_ = self._dtype tlen = self._tlen _check_roundoff(tlen, type_) avg = self._avg avg_dt = self._avg_dt cov_mtx = self._cov_mtx ##### fix the training variables # fix the covariance matrix (try to do everything inplace) avg_mtx = numx.outer(avg, avg_dt) avg_mtx /= tlen cov_mtx -= avg_mtx if self.bias: cov_mtx /= tlen else: cov_mtx /= tlen - 1 if A is not None: cov_mtx = mdp.utils.mult(A, mdp.utils.mult(cov_mtx, A.T)) # fix the average avg /= tlen avg_dt /= tlen ##### clean up variables to spare on space self._cov_mtx = None self._avg = None self._avg_dt = None self._tlen = 0 return cov_mtx, avg, avg_dt, tlen class MultipleCovarianceMatrices(object): """Container class for multiple covariance matrices to easily execute operations on all matrices at the same time. Note: all operations are done in place where possible.""" def __init__(self, covs): """Insantiate with a sequence of covariance matrices.""" # swap axes to get the different covmat on to the 3rd axis self.dtype = covs[0].dtype self.covs = (numx.array(covs, dtype=self.dtype)).transpose([1, 2, 0]) self.ncovs = len(covs) def __getitem__(self, item): return self.covs[:, :, item] def symmetrize(self): """Symmetrize matrices: C -> (C+C^T)/2 .""" # symmetrize cov matrices covs = self.covs covs = 0.5*(covs+covs.transpose([1, 0, 2])) self.covs = covs def weight(self, weights): """Apply a weighting factor to matrices. Argument can be a sequence or a single value. 
In the latter case the same weight is applied to all matrices.""" # apply a weighting vector to cov matrices err = ("len(weights)=%d does not match number " "of matrices (%d)" % (len(weights), self.ncovs)) assert len(weights) == self.ncovs, err self.covs *= mdp.utils.refcast(weights, self.dtype) def rotate(self, angle, indices): """Rotate matrices by angle in the plane defined by indices [i,j].""" covs = self.covs [i, j] = indices cos_ = numx.cos(angle) sin_ = numx.sin(angle) # rotate columns # you need to copy the first column that is modified covs_i = covs[:, i, :] + 0 covs_j = covs[:, j, :] covs[:, i, :] = cos_*covs_i - sin_*covs_j covs[:, j, :] = sin_*covs_i + cos_*covs_j # rotate rows # you need to copy the first row that is modified covs_i = covs[i, :, :] + 0 covs_j = covs[j, :, :] covs[i, :, :] = cos_*covs_i - sin_*covs_j covs[j, :, :] = sin_*covs_i + cos_*covs_j self.covs = covs def permute(self, indices): """Swap two columns and two rows of all matrices, whose indices are specified as [i,j].""" covs = self.covs [i, j] = indices covs[i, :, :], covs[j, :, :] = covs[j, :, :], covs[i, :, :] + 0 covs[:, i, :], covs[:, j, :] = covs[:, j, :], covs[:, i, :] + 0 self.covs = covs def transform(self, trans_matrix): """Apply a linear transformation to all matrices, defined by the transformation matrix.""" trans_matrix = mdp.utils.refcast(trans_matrix, self.dtype) for cov in range(self.ncovs): self.covs[:, :, cov] = mdp.utils.mult( mdp.utils.mult(trans_matrix.T, self.covs[:, :, cov]), trans_matrix) def copy(self): """Return a deep copy of the instance.""" return MultipleCovarianceMatrices(self.covs.transpose([2, 0, 1])) class CrossCovarianceMatrix(CovarianceMatrix): def _init_internals(self, x, y): if self._dtype is None: self._dtype = x.dtype if y.dtype != x.dtype: err = 'dtype mismatch: x (%s) != y (%s)'%(x.dtype, y.dtype) raise mdp.MDPException(err) dim_x = x.shape[1] dim_y = y.shape[1] type_ = self._dtype self._cov_mtx = numx.zeros((dim_x, dim_y), type_) self._avgx = numx.zeros(dim_x, type_) self._avgy = numx.zeros(dim_y, type_) def update(self, x, y): # check internal dimensions consistency if x.shape[0] != y.shape[0]: err = '# samples mismatch: x (%d) != y (%d)'%(x.shape[0], y.shape[0]) raise mdp.MDPException(err) if self._cov_mtx is None: self._init_internals(x, y) # cast input x = mdp.utils.refcast(x, self._dtype) y = mdp.utils.refcast(y, self._dtype) self._cov_mtx += mdp.utils.mult(x.T, y) self._avgx += x.sum(axis=0) self._avgy += y.sum(axis=0) self._tlen += x.shape[0] def fix(self): type_ = self._dtype tlen = self._tlen _check_roundoff(tlen, type_) avgx = self._avgx avgy = self._avgy cov_mtx = self._cov_mtx ##### fix the training variables # fix the covariance matrix (try to do everything inplace) avg_mtx = numx.outer(avgx, avgy) if self.bias: avg_mtx /= tlen*(tlen) cov_mtx /= tlen else: avg_mtx /= tlen*(tlen - 1) cov_mtx /= tlen - 1 cov_mtx -= avg_mtx # fix the average avgx /= tlen avgy /= tlen ##### clean up # covariance matrix, updated during the training phase self._cov_mtx = None # average, updated during the training phase self._avgx = None self._avgy = None # number of observation so far during the training phase self._tlen = 0 return cov_mtx, avgx, avgy, tlen mdp-3.3/mdp/utils/introspection.py000066400000000000000000000112731203131624700173340ustar00rootroot00000000000000import types import cPickle import mdp class _Walk(object): """Recursively crawl an object and search for attributes that are reference to numpy arrays, return a dictionary: {attribute_name: array_reference}. 
Usage: _Walk()(object) """ def __init__(self): self.arrays = {} self.start = None self.allobjs = {} def __call__(self, x, start = None): arrays = self.arrays # loop through the object dictionary for name in dir(x): # get the corresponding member obj = getattr(x, name) if id(obj) in self.allobjs.keys(): # if we already examined the member, skip to the next continue else: # add the id of this object to the list of know members self.allobjs[id(obj)] = None if start is None: # initialize a string structure to keep track of array names struct = name else: # struct is x.y.z (where x and y are objects and z an array) struct = '.'.join((start, name)) if isinstance(obj, mdp.numx.ndarray): # the present member is an array # add it to the dictionary of all arrays if start is not None: arrays[struct] = obj else: arrays[name] = obj elif name.startswith('__') or type(obj) in (int, long, float, types.MethodType): # the present member is a private member or a known # type that does not support arrays as attributes # Note: this is to avoid infinite # recursion in python2.6. Just remove the "or type in ..." # condition to see the error. There must be a better way. continue else: # we need to examine the present member in more detail arrays.update(self(obj, start = struct)) self.start = start return arrays def _format_dig(dict_): longest_name = max(map(len, dict_.keys())) longest_size = max(map(lambda x: len('%d'%x[0]), dict_.values())) msgs = [] total_size = 0 for name in sorted(dict_.keys()): size = dict_[name][0] total_size += size pname = (name+':').ljust(longest_name+1) psize = ('%d bytes' % size).rjust(longest_size+6) msg = "%s %s" % (pname, psize) msgs.append(msg) final = "Total %d arrays (%d bytes)" % (len(dict_), total_size) msgs.append(final) return '\n'.join(msgs) def dig_node(x): """Crawl recursively an MDP Node looking for arrays. Return (dictionary, string), where the dictionary is: { attribute_name: (size_in_bytes, array_reference)} and string is a nice string representation of it. """ if not isinstance(x, mdp.Node): raise Exception('Cannot dig %s' % (str(type(x)))) arrays = _Walk()(x) for name in arrays.keys(): ar = arrays[name] if len(ar.shape) == 0: size = 1 else: size = mdp.numx.prod(ar.shape) bytes = ar.itemsize*size arrays[name] = (bytes, ar) return arrays, _format_dig(arrays) def get_node_size(x): """Return node total byte-size using cPickle with protocol=2. The byte-size is related to the memory needed by the node). """ # TODO: add check for problematic node types, like NoiseNode? # TODO: replace this with sys.getsizeof for Python >= 2.6 size = len(cPickle.dumps(x, protocol = 2)) return size def get_node_size_str(x, si_units=False): """Return node total byte-size as a well readable string. si_units -- If True si-units like KB are used instead of KiB. The get_node_size function is used to get the size. """ return _memory_size_str(get_node_size(x), si_units=si_units) _SI_MEMORY_PREFIXES = ("", "k", "M", "G", "T", "P", "E") _IEC_MEMORY_PREFIXES = ("", "Ki", "Mi", "Gi", "Ti", "Pi", "Ei") def _memory_size_str(size, si_units=False): """Convert the given memory size into a nicely formatted string. si_units -- If True si-units like kB are used instead of kiB. 
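For example (a sketch of the expected formatting, following the "%.1f"
conversion used below):

>>> _memory_size_str(1536)
'1.5 KiB'
>>> _memory_size_str(1536, si_units=True)
'1.5 kB'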
""" if si_units: base = 10**3 else: base = 2**10 scale = 0 # 1024**scale is the actual scale while size > base**(scale+1): scale += 1 unit = "B" if scale: size_str = size = "%.1f" % (1.0 * size / (base**scale)) if si_units: unit = _SI_MEMORY_PREFIXES[scale] + unit else: unit = _IEC_MEMORY_PREFIXES[scale] + unit else: size_str = "%d" % size return size_str + " " + unit mdp-3.3/mdp/utils/progress_bar.py000066400000000000000000000270301203131624700171220ustar00rootroot00000000000000from __future__ import with_statement from datetime import timedelta import sys import time def get_termsize(): """Return terminal size as a tuple (height, width).""" try: # this works on unix machines import struct, fcntl, termios height, width = struct.unpack("hhhh", fcntl.ioctl(0,termios.TIOCGWINSZ, "\000"*8))[0:2] if not (height and width): height, width = 24, 79 except ImportError: # for windows machins, use default values # Does anyone know how to get the console size under windows? # One approach is: # http://code.activestate.com/recipes/440694/ height, width = 24, 79 return height, width def fmt_time(t, delimiters): """Return time formatted as a timedelta object.""" meta_t = timedelta(seconds=round(t)) return ''.join([delimiters[0], str(meta_t), delimiters[1]]) def _progress(percent, last, style, layout): # percentage string percent_s = "%3d%%" % int(round(percent*100)) if style == 'bar': # how many symbols for such percentage symbols = int(round(percent * layout['width'])) # build percent done arrow done = ''.join([layout['char1']*(symbols), layout['char2']]) # build remaining space todo = ''.join([layout['char3']*(layout['width']-symbols)]) # build the progress bar box = ''.join([layout['delimiters'][0], done, todo, layout['delimiters'][1]]) if layout['position'] == 'left': # put percent left box = ''.join(['\r', layout['indent'], percent_s, box]) elif layout['position'] == 'right': # put percent right box = ''.join(['\r', layout['indent'], box, percent_s]) else: # put it in the center percent_s = percent_s.lstrip() percent_idx = (len(box) // 2) - len(percent_s) + 2 box = ''.join(['\r', layout['indent'], box[0:percent_idx], percent_s, box[percent_idx+len(percent_s):]]) else: now = time.time() if percent == 0: # write the time box directly tbox = ''.join(['?', layout['separator'], '?']) else: # Elapsed elapsed = now - layout['t_start'] # Estimated total time if layout['speed'] == 'mean': e_t_a = elapsed/percent - elapsed else: # instantaneous speed progress = percent-_progress.last_percent e_t_a = (1 - percent)/progress*(now-_progress.last_time) # build the time box tbox = ''.join([fmt_time(elapsed, layout['delimiters']), layout['separator'], fmt_time(e_t_a, layout['delimiters'])]) # compose progress info box if layout['position'] == 'left': box = ''.join(['\r', layout['indent'], percent_s, ' ', tbox]) else: box = ''.join(['\r', layout['indent'], tbox, ' ', percent_s]) _progress.last_percent = percent _progress.last_time = now # print it only if something changed from last time if box != last: sys.stdout.write(box) sys.stdout.flush() return box def progressinfo(sequence, length = None, style = 'bar', custom = None): """A fully configurable text-mode progress info box tailored to the command-line die-hards. To get a progress info box for your loops use it like this: >>> for i in progressinfo(sequence): ... 
do_something(i) You can also use it with generators, files or any other iterable object, but in this case you have to specify the total length of the sequence: >>> for line in progressinfo(open_file, nlines): ... do_something(line) If the number of iterations is not known in advance, you may prefer to iterate on the items directly. This can be useful for example if you are downloading a big file in a subprocess and want to monitor the progress. If the file to be downloaded is TOTAL bytes large and you are downloading it on local: >>> def done(): ... yield os.path.getsize(localfile) >>> for bytes in progressinfo(done(), -TOTAL) ... time.sleep(1) ... if download_process_has_finished(): ... break Arguments: sequence - if it is a Python container object (list, dict, string, etc...) and it supports the __len__ method call, the length argument can be omitted. If it is an iterator (generators, file objects, etc...) the length argument must be specified. Keyword arguments: length - length of the sequence. Automatically set if `sequence' has the __len__ method. If length is negative, iterate on items. style - If style == 'bar', display a progress bar. The default layout is: [===========60%===>.........] If style == 'timer', display a time elapsed / time remaining info box. The default layout is: 23% [02:01:28] - [00:12:37] where fields have the following meaning: percent_done% [time_elapsed] - [time_remaining] custom - a dictionary for customizing the layout. Default layout for the 'bar' style: custom = { 'indent': '', 'width' : terminal_width - 1, 'position' : 'middle', 'delimiters' : '[]', 'char1' : '=', 'char2' : '>', 'char3' : '.' } Default layout for the 'timer' style: custom = { 'speed': 'mean', 'indent': '', 'position' : 'left', 'delimiters' : '[]', 'separator' : ' - ' } Description: speed = completion time estimation method, must be one of ['mean', 'last']. 'mean' uses average speed, 'last' uses last step speed. indent = string used for indenting the progress info box position = position of the percent done string, must be one out of ['left', 'middle', 'right'] Note 1: by default sys.stdout is flushed each time a new box is drawn. If you need to rely on buffered stdout you'd better not use this (any?) progress info box. Note 2: progressinfo slows down your loops. Always profile your scripts and check that you are not wasting 99% of the time in drawing the progress info box. """ iterate_on_items = False # try to get the length of the sequence try: length = len(sequence) # if the object is unsized except TypeError: if length is None: err_str = "Must specify 'length' if sequence is unsized." raise Exception(err_str) elif length < 0: iterate_on_items = True length = -length length = float(length) # set layout if style == 'bar': layout = { 'indent': '', 'width' : get_termsize()[1], 'position' : 'middle', 'delimiters' : '[]', 'char1' : '=', 'char2' : '>', 'char3' : '.' } if custom is not None: layout.update(custom) fixed_lengths = len(layout['indent']) + 4 if layout['position'] in ['left', 'right']: fixed_lengths += 4 layout['width'] = layout['width'] - fixed_lengths elif style == 'timer': layout = { 'speed': 'mean', 'indent': '', 'position' : 'left', 'delimiters' : '[]', 'separator': ' - ', 't_start' : time.time() } if custom is not None: layout.update(custom) else: err_str = "Style `%s' not known." 
% style raise ValueError(err_str) # start main loop last = None for count, value in enumerate(sequence): # generate progress info if iterate_on_items: last = _progress(value/length, last, style, layout) else: last = _progress(count/length, last, style, layout) yield value else: # we need this for the 100% notice if iterate_on_items: last = _progress(1., last, style, layout) else: last = _progress((count+1)/length, last, style, layout) # clean up terminal sys.stdout.write('\n\r') # execute this file for a demo of the progressinfo style if __name__ == '__main__': #import random import mdp import tempfile print 'Testing progressinfo...' # test various customized layouts cust_list = [ {'position' : 'left', 'indent': 'Progress: ', 'delimimters': '()', 'char3': ' '}, {}, {'position': 'right', 'width': 50} ] for cust in cust_list: test = 0 for i in progressinfo(range(100, 600), style = 'bar', custom = cust): test += i time.sleep(0.001) if test != 174750: raise Exception('Something wrong with progressinfo...') # generate random character sequence inp_list = [] for j in range(500): # inp_list.append(chr(random.randrange(256))) inp_list.append(chr(mdp.numx_rand.randint(256))) string = ''.join(inp_list) # test various customized layouts cust_list = [ {'position': 'left', 'separator': ' | ', 'delimiters': '()'}, {'position':'right'}] for cust in cust_list: out_list = [] for i in progressinfo(string, style = 'timer', custom = cust): time.sleep(0.02) out_list.append(i) if inp_list != out_list: raise Exception('Something wrong with progressinfo...' ) # write random file with tempfile.TemporaryFile(mode='r+') as fl: for i in range(1000): fl.write(str(i)+'\n') fl.flush() # rewind fl.seek(0) lines = [] for line in progressinfo(fl, 1000): lines.append(int(line)) time.sleep(0.01) if lines != range(1000): raise Exception('Something wrong with progressinfo...' ) # test iterate on items with tempfile.TemporaryFile(mode='r+') as fl: for i in range(10): fl.write(str(i)+'\n') fl.flush() # rewind fl.seek(0) def gen(): for line_ in fl: yield int(line_) for line in progressinfo(gen(), -10, style='timer', custom={'speed':'last'}): time.sleep(1) print 'Done.' mdp-3.3/mdp/utils/quad_forms.py000066400000000000000000000124221203131624700165710ustar00rootroot00000000000000import mdp from routines import refcast numx = mdp.numx numx_linalg = mdp.numx_linalg # 10 times machine eps epsilon = 10*numx.finfo(numx.double).eps class QuadraticFormException(mdp.MDPException): pass class QuadraticForm(object): """ Define an inhomogeneous quadratic form as 1/2 x'Hx + f'x + c . This class implements the quadratic form analysis methods presented in: Berkes, P. and Wiskott, L. (2006). On the analysis and interpretation of inhomogeneous quadratic forms as receptive fields. Neural Computation, 18(8): 1868-1895. """ def __init__(self, H, f=None, c=None, dtype='d'): """ The quadratic form is defined as 1/2 x'Hx + f'x + c . 'dtype' specifies the numerical type of the internal structures. """ local_eps = 10*numx.finfo(numx.dtype(dtype)).eps # check that H is almost symmetric if not numx.allclose(H, H.T, rtol=100*local_eps, atol=local_eps): raise QuadraticFormException('H does not seem to be symmetric') self.H = refcast(H, dtype) if f is None: f = numx.zeros((H.shape[0],), dtype=dtype) if c is None: c = 0 self.f = refcast(f, dtype) self.c = c self.dtype = dtype def apply(self, x): """Apply the quadratic form to the input vectors. 
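For example, with H the 2x2 identity, f = [1, 0] and c = 0 (values chosen
for this sketch, so that 0.5*4 + 2 + 0 == 4):

>>> q = QuadraticForm(numx.eye(2), f=numx.array([1., 0.]))
>>> float(q.apply(numx.array([[2., 0.]])))
4.0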
Return 1/2 x'Hx + f'x + c .""" x = numx.atleast_2d(x) return (0.5*(mdp.utils.mult(x, self.H.T)*x).sum(axis=1) + mdp.utils.mult(x, self.f) + self.c) def _eig_sort(self, x): E, W = numx_linalg.eig(x) E, W = E.real, W.real idx = E.argsort() E = E.take(idx) W = W.take(idx, axis=1) return E, W def get_extrema(self, norm, tol = 1.E-4): """ Find the input vectors xmax and xmin with norm 'nrm' that maximize or minimize the quadratic form. tol: norm error tolerance """ H, f, c = self.H, self.f, self.c if max(abs(f)) < numx.finfo(self.dtype).eps: E, W = self._eig_sort(H) xmax = W[:, -1]*norm xmin = W[:, 0]*norm else: H_definite_positive, H_definite_negative = False, False E, W = self._eig_sort(H) if E[0] >= 0: # H is positive definite H_definite_positive = True elif E[-1] <= 0: # H is negative definite H_definite_negative = True x0 = mdp.numx_linalg.solve(-H, f) if H_definite_positive and mdp.utils.norm2(x0) <= norm: xmin = x0 # x0 is a minimum else: xmin = self._maximize(norm, tol, factor=-1) if H_definite_negative and mdp.utils.norm2(x0) <= norm : xmax = x0 # x0 is a maximum else: xmax = self._maximize(norm, tol, factor=None) self.xmax, self.xmin = xmax, xmin return xmax, xmin def _maximize(self, norm, tol = 1.E-4, x0 = None, factor = None): H, f = self.H, self.f if factor is not None: H = factor*H f = factor*f if x0 is not None: x0 = mdp.utils.refcast(x0, self.dtype) f = mdp.utils.mult(H, x0)+ f # c = 0.5*x0'*H*x0 + f'*x0 + c -> do we need it? mu, V = self._eig_sort(H) alpha = mdp.utils.mult(V.T, f).reshape((H.shape[0],)) # v_i = alpha_i * v_i (alpha is a raw_vector) V = V*alpha # left bound for lambda ll = mu[-1] # eigenvalue's maximum # right bound for lambda lr = mdp.utils.norm2(f)/norm + ll # search by bisection until norm(x)**2 = norm**2 norm_2 = norm**2 norm_x_2 = 0 while abs(norm_x_2-norm_2) > tol and (lr-ll)/lr > epsilon: # bisection of the lambda-interval lambd = 0.5*(lr-ll)+ll # eigenvalues of (lambda*Id - H)^-1 beta = (lambd-mu)**(-1) # solution to the second lagragian equation norm_x_2 = (alpha**2*beta**2).sum() #%[ll,lr] if norm_x_2 > norm_2: ll = lambd else: lr = lambd x = (V*beta).sum(axis=1) if x0: x = x + x0 return x def get_invariances(self, xstar): """Compute invariances of the quadratic form at extremum 'xstar'. Outputs: w -- w[:,i] is the direction of the i-th invariance nu -- nu[i] second derivative on the sphere in the direction w[:,i] """ # find a basis for the tangential plane of the sphere in x+ # e(1) ... e(N) is the canonical basis for R^N r = mdp.utils.norm2(xstar) P = numx.eye(xstar.shape[0], dtype=xstar.dtype) P[:, 0] = xstar Q, R = numx_linalg.qr(P) # the orthogonal subspace B = Q[:, 1:] # restrict the matrix H to the tangential plane Ht = mdp.utils.mult(B.T, mdp.utils.mult(self.H, B)) # compute the invariances nu, w = self._eig_sort(Ht) nu -= ((mdp.utils.mult(self.H, xstar)*xstar).sum() +(self.f*xstar).sum())/(r*r) idx = abs(nu).argsort() nu = nu[idx] w = w[:, idx] w = mdp.utils.mult(B, w) return w, nu mdp-3.3/mdp/utils/routines.py000066400000000000000000000370451203131624700163110ustar00rootroot00000000000000import mdp # import numeric module (scipy, Numeric or numarray) numx, numx_rand, numx_linalg = mdp.numx, mdp.numx_rand, mdp.numx_linalg numx_description = mdp.numx_description import random import itertools def timediff(data): """Returns the array of the time differences of data.""" # this is the fastest way we found so far return data[1:]-data[:-1] def refcast(array, dtype): """ Cast the array to dtype only if necessary, otherwise return a reference. 
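For example (a small sketch; the array values are arbitrary):

>>> x = numx.zeros((3, 2), dtype='d')
>>> refcast(x, 'd') is x         # same dtype: the same object is returned
True
>>> refcast(x, 'f').dtype.char   # different dtype: a cast copy is returned
'f'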
""" dtype = mdp.numx.dtype(dtype) if array.dtype == dtype: return array return array.astype(dtype) def scast(scalar, dtype): """Convert a scalar in a 0D array of the given dtype.""" return numx.array(scalar, dtype=dtype) def rotate(mat, angle, columns=(0, 1), units='radians'): """ Rotate in-place data matrix (NxM) in the plane defined by the columns=[i,j] when observation are stored on rows. Observations are rotated counterclockwise. This corresponds to the following matrix-multiplication for each data-point (unchanged elements omitted): [ cos(angle) -sin(angle) [ x_i ] sin(angle) cos(angle) ] * [ x_j ] If M=2, columns=[0,1]. """ if units is 'degrees': angle = angle/180.*numx.pi cos_ = numx.cos(angle) sin_ = numx.sin(angle) [i, j] = columns col_i = mat[:, i] + 0. col_j = mat[:, j] mat[:, i] = cos_*col_i - sin_*col_j mat[:, j] = sin_*col_i + cos_*col_j def permute(x, indices=(0, 0), rows=0, cols=1): """Swap two columns and (or) two rows of 'x', whose indices are specified in indices=[i,j]. Note: permutations are done in-place. You'll lose your original matrix""" ## the nicer option: ## x[i,:],x[j,:] = x[j,:],x[i,:] ## does not work because array-slices are references. ## The following would work: ## x[i,:],x[j,:] = x[j,:].tolist(),x[i,:].tolist() ## because list-slices are copies, but you get 2 ## copies instead of the one you need with our method. ## This would also work: ## tmp = x[i,:].copy() ## x[i,:],x[j,:] = x[j,:],tmp ## but it is slower (for larger matrices) than the one we use. [i, j] = indices if rows: x[i, :], x[j, :] = x[j, :], x[i, :] + 0 if cols: x[:, i], x[:, j] = x[:, j], x[:, i] + 0 def hermitian(x): """Compute the Hermitian, i.e. conjugate transpose, of x.""" return x.T.conj() def symrand(dim_or_eigv, dtype="d"): """Return a random symmetric (Hermitian) matrix. If 'dim_or_eigv' is an integer N, return a NxN matrix, with eigenvalues uniformly distributed on (-1,1). If 'dim_or_eigv' is 1-D real array 'a', return a matrix whose eigenvalues are 'a'. """ if isinstance(dim_or_eigv, int): dim = dim_or_eigv d = (numx_rand.random(dim)*2) - 1 elif isinstance(dim_or_eigv, numx.ndarray) and len(dim_or_eigv.shape) == 1: dim = dim_or_eigv.shape[0] d = dim_or_eigv else: raise mdp.MDPException("input type not supported.") v = random_rot(dim) #h = mdp.utils.mult(mdp.utils.mult(hermitian(v), mdp.numx.diag(d)), v) h = mdp.utils.mult(mult_diag(d, hermitian(v), left=False), v) # to avoid roundoff errors, symmetrize the matrix (again) h = 0.5*(h.T+h) if dtype in ('D', 'F', 'G'): h2 = symrand(dim_or_eigv) h = h + 1j*(numx.triu(h2)-numx.tril(h2)) return refcast(h, dtype) def random_rot(dim, dtype='d'): """Return a random rotation matrix, drawn from the Haar distribution (the only uniform distribution on SO(n)). The algorithm is described in the paper Stewart, G.W., "The efficient generation of random orthogonal matrices with an application to condition estimators", SIAM Journal on Numerical Analysis, 17(3), pp. 403-409, 1980. 
For more information see http://en.wikipedia.org/wiki/Orthogonal_matrix#Randomization""" H = mdp.numx.eye(dim, dtype=dtype) D = mdp.numx.ones((dim,), dtype=dtype) for n in range(1, dim): x = mdp.numx_rand.normal(size=(dim-n+1,)).astype(dtype) D[n-1] = mdp.numx.sign(x[0]) x[0] -= D[n-1]*mdp.numx.sqrt((x*x).sum()) # Householder transformation Hx = ( mdp.numx.eye(dim-n+1, dtype=dtype) - 2.*mdp.numx.outer(x, x)/(x*x).sum() ) mat = mdp.numx.eye(dim, dtype=dtype) mat[n-1:, n-1:] = Hx H = mdp.utils.mult(H, mat) # Fix the last sign such that the determinant is 1 D[-1] = (-1)**(1-dim%2)*D.prod() # Equivalent to mult(numx.diag(D), H) but faster H = (D*H.T).T return H def norm2(v): """Compute the 2-norm for 1D arrays. norm2(v) = sqrt(sum(v_i^2))""" return numx.sqrt((v*v).sum()) def cov2(x, y): """Compute the covariance between 2D matrices x and y. Complies with the old scipy.cov function: different variables are on different columns.""" mnx = x.mean(axis=0) mny = y.mean(axis=0) tlen = x.shape[0] return mdp.utils.mult(x.T, y)/(tlen-1) - numx.outer(mnx, mny) def cov_maxima(cov): """Extract the maxima of a covariance matrix.""" dim = cov.shape[0] maxs = [] if dim >= 1: cov=abs(cov) glob_max_idx = (cov.argmax()//dim, cov.argmax()%dim) maxs.append(cov[glob_max_idx[0], glob_max_idx[1]]) cov_reduce = cov.copy() cov_reduce = cov_reduce[numx.arange(dim) != glob_max_idx[0], :] cov_reduce = cov_reduce[:, numx.arange(dim) != glob_max_idx[1]] maxs.extend(cov_maxima(cov_reduce)) return maxs else: return [] def mult_diag(d, mtx, left=True): """Multiply a full matrix by a diagonal matrix. This function should always be faster than dot. Input: d -- 1D (N,) array (contains the diagonal elements) mtx -- 2D (N,N) array Output: mult_diag(d, mts, left=True) == dot(diag(d), mtx) mult_diag(d, mts, left=False) == dot(mtx, diag(d)) """ if left: return (d*mtx.T).T else: return d*mtx def comb(N, k): """Return number of combinations of k objects from a set of N objects without repetitions, a.k.a. the binomial coefficient of N and k.""" ret = 1 for mlt in xrange(N, N-k, -1): ret *= mlt for dv in xrange(1, k+1): ret //= dv return ret # WARNING numpy.linalg.eigh does not support float sizes larger than 64 bits, # and complex numbers of size larger than 128 bits. # Also float16 is not supported either. # This is not a problem for MDP, as long as scipy.linalg.eigh is available. def get_dtypes(typecodes_key, _safe=True): """Return the list of dtypes corresponding to the set of typecodes defined in numpy.typecodes[typecodes_key]. E.g., get_dtypes('Float') = [dtype('f'), dtype('d'), dtype('g')]. If _safe is True (default), we remove large floating point types if the numerical backend does not support them. 
""" types = [] for c in numx.typecodes[typecodes_key]: try: type_ = numx.dtype(c) if (_safe and not mdp.config.has_symeig == 'scipy.linalg.eigh' and type_ in _UNSAFE_DTYPES): continue types.append(type_) except TypeError: pass return types _UNSAFE_DTYPES = [numx.typeDict[d] for d in ['float16', 'float96', 'float128', 'complex192', 'complex256'] if d in numx.typeDict] def nongeneral_svd(A, range=None, **kwargs): """SVD routine for simple eigenvalue problem, API is compatible with symeig.""" Z2, w, Z = mdp.utils.svd(A) # sort eigenvalues and corresponding eigenvectors idx = w.argsort() w = w.take(idx) Z = Z.take(idx, axis=0).T if range is not None: lo, hi = range Z = Z[:, lo-1:hi] w = w[lo-1:hi] return w, Z def sqrtm(A): """This is a symmetric definite positive matrix sqrt function""" d, V = mdp.utils.symeig(A) return mdp.utils.mult(V, mult_diag(numx.sqrt(d), V.T)) # replication functions def lrep(x, n): """Replicate x n-times on a new first dimension""" shp = [1] shp.extend(x.shape) return x.reshape(shp).repeat(n, axis=0) def rrep(x, n): """Replicate x n-times on a new last dimension""" shp = x.shape + (1,) return x.reshape(shp).repeat(n, axis=-1) def irep(x, n, dim): """Replicate x n-times on a new dimension dim-th dimension""" x_shape = x.shape shp = x_shape[:dim] + (1,) + x_shape[dim:] return x.reshape(shp).repeat(n, axis=dim) # /replication functions try: # product exists only in itertools >= 2.6 from itertools import product except ImportError: def product(*args, **kwds): """Cartesian product of input iterables. """ # taken from python docs 2.6 # product('ABCD', 'xy') --> Ax Ay Bx By Cx Cy Dx Dy # product(range(2), repeat=3) --> 000 001 010 011 100 101 110 111 pools = map(tuple, args) * kwds.get('repeat', 1) result = [[]] for pool in pools: result = [x+[y] for x in result for y in pool] for prod in result: yield tuple(prod) def orthogonal_permutations(a_dict): """ Takes a dictionary with lists as keys and returns all permutations of these list elements in new dicts. This function is useful, when a method with several arguments shall be tested and all of the arguments can take several values. The order is not defined, therefore the elements should be orthogonal to each other. >>> for i in orthogonal_permutations({'a': [1,2,3], 'b': [4,5]}): print i {'a': 1, 'b': 4} {'a': 1, 'b': 5} {'a': 2, 'b': 4} {'a': 2, 'b': 5} {'a': 3, 'b': 4} {'a': 3, 'b': 5} """ pool = dict(a_dict) args = [] for func, all_args in pool.items(): # check the size of the list in the second item of the tuple args_with_fun = [(func, arg) for arg in all_args] args.append(args_with_fun) for i in product(*args): yield dict(i) def izip_stretched(*iterables): """Same as izip, except that for convenience non-iterables are repeated ad infinitum. This is useful when trying to zip input data with respective labels and allows for having a single label for all data, as well as for havning a list of labels for each data vector. Note that this will take strings as an iterable (of course), so strings acting as a single value need to be wrapped in a repeat statement of their own. 
Thus, >>> for zipped in izip_stretched([1, 2, 3], -1): print zipped (1, -1) (2, -1) (3, -1) is equivalent to >>> for zipped in izip([1, 2, 3], [-1] * 3): print zipped (1, -1) (2, -1) (3, -1) """ def iter_or_repeat(val): try: return iter(val) except TypeError: return itertools.repeat(val) iterables= map(iter_or_repeat, iterables) while iterables: # need to care about python < 2.6 yield tuple([it.next() for it in iterables]) def weighted_choice(a_dict, normalize=True): """Returns a key from a dictionary based on the weight that the value suggests. If 'normalize' is False, it is assumed the weights sum up to unity. Otherwise, the algorithm will take care of normalising. Example: >>> d = {'a': 0.1, 'b': 0.5, 'c': 0.4} >>> weighted_choice(d) # draws 'b':'c':'a' with 5:4:1 probability TODO: It might be good to either shuffle the order or explicitely specify it, before walking through the items, to minimise possible degeneration. """ if normalize: d = a_dict.copy() s = sum(d.values()) for key, val in d.items(): d[key] = d[key] / s else: d = a_dict rand_num = random.random() total_rand = 0 for key, val in d.items(): total_rand += val if total_rand > rand_num: return key return None def bool_to_sign(an_array): """Return -1 for each False; +1 for each True""" return numx.sign(an_array - 0.5) def sign_to_bool(an_array, zero=True): """Return False for each negative value, else True. The value for 0 is specified with 'zero'. """ if zero: return numx.array(an_array) >= 0 else: return numx.array(an_array) > 0 def gabor(size, alpha, phi, freq, sgm, x0=None, res=1, ampl=1.): """Return a 2D array containing a Gabor wavelet. Input arguments: size -- (height, width) (pixels) alpha -- orientation (rad) phi -- phase (rad) freq -- frequency (cycles/deg) sgm -- (sigma_x, sigma_y) standard deviation along the axis of the gaussian ellipse (pixel) x0 -- (x,y) coordinates of the center of the wavelet (pixel) Default: None, meaning the center of the array res -- spatial resolution (deg/pixel) Default: 1, so that 'freq' is measured in cycles/pixel ampl -- constant multiplying the result Default: 1. """ # init w, h = size if x0 is None: x0 = (w//2, h//2) y0, x0 = x0 # some useful quantities freq *= res sinalpha = numx.sin(alpha) cosalpha = numx.cos(alpha) v0, u0 = freq*cosalpha, freq*sinalpha # coordinates #x = numx.mgrid[-x0:w-x0, -y0:h-y0] x = numx.meshgrid(numx.arange(w)-x0, numx.arange(h)-y0) x = (x[0].T, x[1].T) xr = x[0]*cosalpha - x[1]*sinalpha yr = x[0]*sinalpha + x[1]*cosalpha # gabor im = ampl*numx.exp(-0.5*(xr*xr/(sgm[0]*sgm[0]) + yr*yr/(sgm[1]*sgm[1]))) \ *numx.cos(-2.*numx.pi*(u0*x[0]+v0*x[1]) - phi) return im def residuals(app_x, y_noisy, exp_funcs, x_orig, k=0.0): """Function used internally by invert_exp_funcs2 to approximate inverses in ConstantExpansionNode. """ app_x = app_x.reshape((1,len(app_x))) app_exp_x = numx.concatenate([func(app_x) for func in exp_funcs],axis=1) div_y = numx.sqrt(len(y_noisy)) div_x = numx.sqrt(len(x_orig)) return numx.append( (1-k)*(y_noisy-app_exp_x[0]) / div_y, k * (x_orig - app_x[0])/div_x ) def invert_exp_funcs2(exp_x_noisy, dim_x, exp_funcs, use_hint=False, k=0.0): """Approximates a preimage app_x of exp_x_noisy. Returns an array app_x, such that each row of exp_x_noisy is close to each row of exp_funcs(app_x). use_hint: determines the starting point for the approximation of the preimage. There are three possibilities. 
if it equals False: starting point is generated with a normal distribution if it equals True: starting point is the first dim_x elements of exp_x_noisy otherwise: use the parameter use_hint itself as the first approximation k: weighting factor in [0, 1] to balance between approximation error and closeness to the starting point. For instance: objective function is to minimize: (1-k) * |exp_funcs(app_x) - exp_x_noisy|/output_dim + k * |app_x - starting point|/input_dim Note: this function requires scipy. """ if numx_description != 'scipy': raise NotImplementedError('This function requires scipy.') else: import scipy.optimize num_samples = exp_x_noisy.shape[0] if isinstance(use_hint, numx.ndarray): app_x = use_hint.copy() elif use_hint == True: app_x = exp_x_noisy[:,0:dim_x].copy() else: app_x = numx.random.normal(size=(num_samples,dim_x)) for row in range(num_samples): plsq = scipy.optimize.leastsq(residuals, app_x[row], args=(exp_x_noisy[row], exp_funcs, app_x[row], k), maxfev=50*dim_x) app_x[row] = plsq[0] app_exp_x = numx.concatenate([func(app_x) for func in exp_funcs],axis=1) return app_x, app_exp_x mdp-3.3/mdp/utils/slideshow.css000066400000000000000000000011721203131624700165720ustar00rootroot00000000000000/* CSS for the slideshow control table. */ div.slideshow { text-align: center; } table.slideshow, table.slideshow td, table.slideshow th { border-collapse: collapse; padding: 1px 2px 1px 2px; font-size: small; border: 1px solid; } table.slideshow { border: 2px solid; margin: 0 auto; } table.slideshow td { text-align: center; } span.inactive_section { color: #0000FF; cursor: pointer; } span.inactive_section:hover { color: #6666FF; } span.active_section { color: #0000FF; background-color: #BBDDFF; cursor: pointer; } span.active_section:hover { color: #6666FF; } mdp-3.3/mdp/utils/slideshow.py000066400000000000000000000616771203131624700164520ustar00rootroot00000000000000""" Module for HTML slideshows. It uses the templating library 'Templet'. The slideshow base class HTMLSlideShow does not display anything, but can be used to derive custom slideshows like in BiMDP. The JavaScript slideshow code in this module was originally inspired by a slideshow script found at http://javascript.internet.com/miscellaneous/image-slideshow.html (which in turn seems to be based on something from http://www.ricocheting.com) """ from __future__ import with_statement import random import tempfile import os import webbrowser import warnings import templet _BASIC_CSS_FILENAME = "basic.css" _SLIDESHOW_CSS_FILENAME = "slideshow.css" def basic_css(): """Return the basic default CSS.""" css_filename = os.path.join(os.path.split(__file__)[0], _BASIC_CSS_FILENAME) with open(css_filename, 'r') as css_file: css = css_file.read() return css def slideshow_css(): """Return the additional CSS for a slideshow.""" css_filename = os.path.join(os.path.split(__file__)[0], _SLIDESHOW_CSS_FILENAME) with open(css_filename, 'r') as css_file: css = css_file.read() return css class HTMLSlideShow(templet.Template): """Abstract slideshow base class. It does not display anything, but can be adapted by overriding some of the templating attributes. See ImageHTMLSlideShow for an example. """ def __init__(self, title=None, delay=100, delay_delta=20, loop=True, slideshow_id=None, shortcuts=True, **kwargs): """Return the complete HTML code for the slideshow. title -- Optional slideshow title (for defualt None not title is shown). delay - Delay between slides in ms (default 100). 
delay_delta - Step size for increasing or decreasing the delay. loop -- If True continue with first slide when the last slide is reached during the automatic slideshow (default is False). slideshow_id -- String with the id used for the JS closure, and this is also the id of the div with the slideshow (so it can be used by CSS) and it is used as a prefix for the HTML elements. If the value is None (default) then a random id is used. shortcuts -- Bind keyboard shortcuts to this slideshow (default is True). Note that keyboard shortcuts only work for a single slideshow per page. """ # translate boolean variable into JS format if loop: loop = "true" else: loop = "false" if slideshow_id is None: slideshow_id = self._get_random_id() self.slideshow_id = slideshow_id kwargs.update(vars()) del kwargs["self"] super(HTMLSlideShow, self).__init__(**kwargs) def _get_random_id(self): """Factory method for random slideshow id.""" return "slideshow%d" % random.randint(10000, 99999) template = r'''
$
${{ if title: self.write('' % title) }} $ $
%s
$
$
''' js_controls_template = r''' // step size for in- or decreasing the delay var delay_delta = $delay_delta; that.slower = function () { show_delay += delay_delta; slideform.${slideshow_id}_delaytext.value = show_delay.toString(); } that.faster = function (text) { show_delay -= delay_delta; if (show_delay < 0) { show_delay = 0; } slideform.${slideshow_id}_delaytext.value = show_delay.toString(); } that.changeDelay = function () { var new_delay = parseInt(slideform.${slideshow_id}_delaytext.value, 10); if (new_delay < 0) { new_delay = 0; } show_delay = new_delay; slideform.${slideshow_id}_delaytext.value = new_delay.toString(); } ''' js_update_template = r''' that.updateSlide = function () { slideselect.selectedIndex = current_slide; that.loadSlide(); } ''' # overwrite this to implement the actual slide change js_loadslide_template = r''' that.loadSlide = function () { } ''' js_onload_template = r''' that.onLoad = function () { slideform = document.${slideshow_id}_slideform; slideselect = slideform.${slideshow_id}_slideselect; current_slide = slideselect.selectedIndex; that.updateSlide(); slideform.${slideshow_id}_delaytext.value = show_delay.toString(); } ''' # define keyboard shortcuts, # note that these are also mentionend in the button hover-text js_keyboard_shortcuts_template = r''' document.onkeydown = function(e) { if (!e.ctrlKey) { // control key must be pressed return; } else if (e.which == 37) { // left key document.getElementById("${slideshow_id}_prevButton").click(); } else if(e.which == 39) { // right key document.getElementById("${slideshow_id}_nextButton").click(); } else if(e.which == 38) { // up key document.getElementById("${slideshow_id}_firstButton").click(); } else if(e.which == 40) { // down key document.getElementById("${slideshow_id}_lastButton").click(); } else if(e.which == 45) { // insert key document.getElementById("${slideshow_id}_startButton").click(); } } ''' html_buttons_template = r''' ''' html_controls_template = r''' ${{ if delay is not None: self.write('\n') self.html_delay_template(vars()) self.write('\n') }} ''' html_delay_template = r''' delay: ms ''' html_top_template = r''' ''' html_box_template = r''' ''' html_bottom_template = r''' ''' class SectionHTMLSlideShow(HTMLSlideShow): """Astract slideshow with additional support for section markers.""" def __init__(self, section_ids, slideshow_id=None, **kwargs): """Return the complete HTML code for the slideshow. section_ids -- List with the section id for each slide index. The id can be a string or a number. For additional keyword arguments see the super class. 
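For example, section_ids=['A', 'A', 'B'] (an illustrative value, not taken
from the original documentation) marks the first two slides as belonging to
section 'A' and the third slide to section 'B'.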
""" # we need the slideshow_id for the section names if slideshow_id is None: slideshow_id = self._get_random_id() kwargs.update(vars()) # check if there is more than one section slideshow_id, # otherwise some controls must be disabled to prevent infinite loop only_one_section = "false" first_section_id = section_ids[0] for section_id in section_ids: if section_id != first_section_id: break else: only_one_section = "true" kwargs["only_one_section"] = only_one_section # translate section_id_list into JavaScript list section_ids = [str(section_id) for section_id in section_ids] js_section_ids = "".join([' "%s_section_id_%s",\n' % (slideshow_id, section_id) for section_id in section_ids]) js_section_ids = "\n" + js_section_ids[:-2] kwargs["js_section_ids"] = js_section_ids del kwargs["self"] super(SectionHTMLSlideShow, self).__init__(**kwargs) js_update_template = r''' // maps slide index to section slideshow_id var section_ids = new Array($js_section_ids); // currently highlighted section slideshow_id var current_section_id = section_ids[0]; that.updateSlide = function () { document.getElementById(current_section_id).className = "inactive_section"; current_section_id = section_ids[current_slide] document.getElementById(current_section_id).className = "active_section"; slideselect.selectedIndex = current_slide; that.loadSlide(); } // use this function when a section is selected, // e.g. onClick="setSlide(42)" that.setSlide = function (index) { current_slide = index; that.updateSlide(); } that.previousSection = function () { if ($only_one_section) { return; } while (current_section_id === section_ids[current_slide]) { if (current_slide > 0) { current_slide -= 1; } else { current_slide = slideselect.length-1; } } var new_section_id = section_ids[current_slide]; // now go to start of this section while (new_section_id === section_ids[current_slide]) { current_slide -= 1; if (current_slide < 0) { break; } } current_slide += 1; that.updateSlide(); } that.nextSection = function () { if ($only_one_section) { return; } while (current_section_id === section_ids[current_slide]) { if (current_slide+1 < slideselect.length) { current_slide += 1; } else { current_slide = 0; } } that.updateSlide(); } $ ''' # define keyboard shortcuts, # note that these are also mentionend in the button hover-text js_keyboard_shortcuts_template = r''' document.onkeydown = function(e) { if (!e.ctrlKey) { // control key must be pressed return; } else if (e.which === 37) { // left key document.getElementById("${slideshow_id}_prevButton").click(); } else if(e.which === 39) { // right key document.getElementById("${slideshow_id}_nextButton").click(); } else if(e.which === 38) { // up key document.getElementById("${slideshow_id}_prevSectionButton").click(); } else if(e.which === 40) { // down key document.getElementById("${slideshow_id}_nextSectionButton").click(); } else if(e.which === 45) { // insert key document.getElementById("${slideshow_id}_startButton").click(); } } ''' html_buttons_template = r''' ''' html_controls_template = r''' ${{super(SectionHTMLSlideShow, self).html_controls_template(vars())}}
${{ last_section_id = None link = '' for index, section_id in enumerate(section_ids): if section_id != last_section_id: if index > 0: self.write(link + ' | ') last_section_id = section_id link = ('%s' % (slideshow_id, index, section_id)) self.write(link + '\n') }}
''' def image_slideshow_css(): """Use nearest neighbour resampling in Firefox 3.6+ and IE. Webkit (Chrome, Safari) does not support this yet. (see http://code.google.com/p/chromium/issues/detail?id=1502) """ return slideshow_css() + ''' img.slideshow { image-rendering: -moz-crisp-edges; -ms-interpolation-mode: nearest-neighbor; } ''' class ImageHTMLSlideShow(HTMLSlideShow): """Slideshow for images. This also serves as an example for implementing a slideshow based on HTMLSlideShow. """ def __init__(self, filenames, image_size, magnification=1, mag_control=True, **kwargs): """Return the complete HTML code for a slideshow of the given images. filenames -- sequence of strings, containing the path for each image image_size -- Tuple (x,y) with the original image size, or enter a different size to force scaling. magnification -- Magnification factor for images (default 1). This factor is applied on top of the provided image size. mag_control -- Set to True (default) to display a magnification control element. For additional keyword arguments see the super class. """ if len(filenames) == 0: raise Exception("Empty list was given.") kwargs.update(vars()) # translate image size to width and heigh to be used in the templates del kwargs["image_size"] kwargs["width"] = image_size[0] kwargs["height"] = image_size[1] del kwargs["self"] super(ImageHTMLSlideShow, self).__init__(**kwargs) js_controls_template = r''' ${{super(ImageHTMLSlideShow, self).js_controls_template(vars())}} var magnification = $magnification; // image magnification var original_width = $width; // original image width var original_height = $height; // original image height that.smaller = function () { magnification = magnification / 2; slideform.${slideshow_id}_magtext.value = magnification.toString(); that.resizeImage(); } that.larger = function (text) { magnification = magnification * 2; slideform.${slideshow_id}_magtext.value = magnification.toString(); that.resizeImage(); } that.changeMag = function () { magnification = parseFloat(slideform.${slideshow_id}_magtext.value); that.resizeImage(); } $ ''' js_controls_resize_template = r''' that.resizeImage = function () { document.images.${slideshow_id}_image_display.width = parseInt(magnification * original_width, 10); document.images.${slideshow_id}_image_display.height = parseInt(magnification * original_height, 10); } ''' js_loadslide_template = r''' that.loadSlide = function () { document.images.${slideshow_id}_image_display.src = slideselect[current_slide].value; } ''' js_onload_template = r''' that.onLoad = function () { slideform = document.${slideshow_id}_slideform; slideselect = slideform.${slideshow_id}_slideselect; current_slide = slideselect.selectedIndex; that.updateSlide(); ${{ if delay is not None: self.write('slideform.%s_delaytext.value = ' % slideshow_id + 'show_delay.toString();\n') }} ${{ if mag_control: self.write('slideform.%s_magtext.value = ' % slideshow_id + 'magnification.toString();\n') }} that.resizeImage(); } ''' html_box_template = r''' ''' html_controls_template = r''' ${{ if mag_control or (delay is not None): self.write('\n') if mag_control: self.html_mag_template(vars()) if delay is not None: self.write('
\n') if delay is not None: self.html_delay_template(vars()) self.write('\n') }} ''' html_mag_template = r''' magnification: ''' class SectionImageHTMLSlideShow(SectionHTMLSlideShow, ImageHTMLSlideShow): """Image slideshow with section markers.""" def __init__(self, filenames, section_ids, image_size, **kwargs): """Return the HTML code for a sectioned slideshow of the given images. For keyword arguments see the super classes. """ if len(section_ids) != len(filenames): err = ("The number of section slideshow_id entries does not match " "the number of slides / filenames.") raise Exception(err) kwargs.update(vars()) del kwargs["self"] super(SectionImageHTMLSlideShow, self).__init__(**kwargs) js_controls_resize_template = r''' that.resizeImage = function () { document.images.${slideshow_id}_image_display.width = parseInt(magnification * original_width, 10); document.images.${slideshow_id}_image_display.height = parseInt(magnification * original_height, 10); // make sure that section ids are nicely line wrapped var section_panel_width = 250; if (magnification * original_height > section_panel_width) { section_panel_width = magnification * original_width; } document.getElementById("${slideshow_id}_sections_panel").style.width = parseInt(section_panel_width, 10) + "px"; } ''' ### helper functions ### # TODO: extract image size automatically, # but this introduces an optional dependency on PIL def image_slideshow(filenames, image_size, title=None, section_ids=None, delay=100, delay_delta=20, loop=True, slideshow_id=None, magnification=1, mag_control=True, shortcuts=True): """Return a string with the JS and HTML code for an image slideshow. Note that the CSS code for the slideshow is not included, so you should add SLIDESHOW_STYLE or a custom style to your CSS code. filenames -- Sequence of the image filenames. image_size -- Tuple (x,y) with the original image size, or enter a different size to force scaling. title -- Optional slideshow title (for default None not title is shown). section_ids -- List with the section id for each slide index. The id can be a string or a number. Default value None disables the section feature. For additional keyword arguments see the ImageHTMLSlideShow class. """ if section_ids: slideshow = SectionImageHTMLSlideShow(**vars()) else: slideshow = ImageHTMLSlideShow(**vars()) return str(slideshow) def show_image_slideshow(filenames, image_size, filename=None, title=None, section_ids=None, delay=100, delay_delta=20, loop=True, slideshow_id=None, magnification=1, mag_control=True, open_browser=True): """Write the slideshow into a HTML file, open it in the browser and return a file object pointing to the file. If the filename is not given, a temporary file is used, and will be deleted when the returned file object is closed or destroyed. filenames -- Sequence of the image filenames. image_size -- Tuple (x,y) with the original image size, or enter a different size to force scaling. filename -- Filename for the HTML file to be created. If None a temporary file is created. title -- Optional slideshow title (for default None not title is shown). section_ids -- List with the section id for each slide index. The id can be a string or a number. Default value None disables the section feature. open_browser -- If True (default value) then the slideshow file is automatically opened in a webbrowser. One can also use string value with the browser name (for webbrowser.get) to request a specific browser. For additional keyword arguments see the ImageHTMLSlideShow class. 
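A minimal call sketch (the filenames, image size and title are placeholder
values invented for this example):

>>> html_file = show_image_slideshow(['frame_0.png', 'frame_1.png'],
...                                  (320, 240), filename='slides.html',
...                                  title='Demo', open_browser=False)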
""" if filename is None: html_file = tempfile.NamedTemporaryFile(suffix=".html", prefix="MDP_") else: html_file = open(filename, 'w') html_file.write('\n\n%s\n' % title) html_file.write('\n\n\n') kwargs = vars() del kwargs['filename'] del kwargs['open_browser'] del kwargs['html_file'] html_file.write(image_slideshow(**kwargs)) html_file.write('\n') html_file.flush() if open_browser: if isinstance(open_browser, str): try: custom_browser = webbrowser.get(open_browser) custom_browser.open(os.path.abspath(filename)) except webbrowser.Error: err = ("Could not open browser '%s', using default." % open_browser) warnings.warn(err) webbrowser.open(os.path.abspath(filename)) else: webbrowser.open(os.path.abspath(filename)) return html_file mdp-3.3/mdp/utils/templet.py000066400000000000000000000311251203131624700161040ustar00rootroot00000000000000"""A lightweight python templating engine. Templet version 2 beta. Slighlty modifed version for MDP (different indentation handling). Supports two templating idioms: 1. template functions using @stringfunction and @unicodefunction 2. template classes inheriting from StringTemplate and UnicodeTemplate Each template function is marked with the attribute @stringfunction or @unicodefunction. Template functions will be rewritten to expand their document string as a template and return the string result. For example: @stringtemplate def myTemplate(animal, thing): "the $animal jumped over the $thing." print myTemplate('cow', 'moon') The template language understands the following forms: $myvar - inserts the value of the variable 'myvar' ${...} - evaluates the expression and inserts the result ${{...}} - executes enclosed code; use 'out.append(text)' to insert text $$ - an escape for a single $ $ (at the end of the line) - a line continuation Template functions are compiled into code that accumulates a list of strings in a local variable 'out', and then returns the concatenation of them. If you want do do complicated computation, you can append to 'out' directly inside a ${{...}} block. Another alternative is to use template classes. Each template class is a subclass of StringTemplate or UnicodeTemplate. Template classes should define a class attribute 'template' that contains the template code. Also, any class attribute ending with '_template' will be compiled into a template method. Use a template class by instantiating it with a dictionary or keyword arguments. Get the expansion by converting the instance to a string. For example: class MyTemplate(templet.Template): template = "the $animal jumped over the $thing." print MyTemplate(animal='cow', thing='moon') Within a template class, the template language is similar to a template function, but 'self.write' should be used to build the string inside ${{..}} blocks. Also, there is a shorthand for calling template methods: $ - shorthand for '${{self.sub_template(vars())}}' This idiom is helpful for decomposing a template and when subclassing. A longer example: import cgi class RecipeTemplate(templet.Template): template = r''' $dish $ $ ''' header_template = r'''

<h1>${cgi.escape(dish)}</h1>
''' body_template = r'''
<ol>
${{ for item in ingredients: self.write('<li>', item, '\n') }}
</ol>
''' This template can be expanded as follows: print RecipeTemplate(dish='burger', ingredients=['bun', 'beef', 'lettuce']) And it can be subclassed like this: class RecipeWithPriceTemplate(RecipeTemplate): header_template = "

<h1>${cgi.escape(dish)} - $$$price</h1>
\n" Templet is by David Bau and was inspired by Tomer Filiba's Templite class. For details, see http://davidbau.com/templet Templet is posted by David Bau under BSD-license terms. Copyright (c) 2007, David Bau All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Templet nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ import sys, re, inspect class _TemplateBuilder(object): __pattern = re.compile(r"""\$( # Directives begin with a $ \$ | # $$ is an escape for $ [^\S\n]*\n | # $\n is a line continuation [_a-z][_a-z0-9]* | # $simple Python identifier \{(?!\{)[^\}]*\} | # ${...} expression to eval \{\{.*?\}\} | # ${{...}} multiline code to exec <[_a-z][_a-z0-9]*> | # $ method call )(?:(?:(?<=\}\})|(?<=>))[^\S\n]*\n)? # eat some trailing newlines """, re.IGNORECASE | re.VERBOSE | re.DOTALL) def __init__(self, constpat, emitpat, callpat=None): self.constpat, self.emitpat, self.callpat = constpat, emitpat, callpat def __realign(self, str, spaces=''): """Removes any leading empty columns of spaces and an initial empty line. This is important for embedded Python code. 
""" lines = str.splitlines() if lines and not lines[0].strip(): del lines[0] lspace = [len(l) - len(l.lstrip()) for l in lines if l.lstrip()] margin = len(lspace) and min(lspace) return '\n'.join((spaces + l[margin:]) for l in lines) def build(self, template, filename, s=''): code = [] for i, part in enumerate(self.__pattern.split(template)): if i % 2 == 0: if part: code.append(s + self.constpat % repr(part)) else: if not part or (part.startswith('<') and self.callpat is None): raise SyntaxError('Unescaped $ in ' + filename) elif part.endswith('\n'): continue elif part == '$': code.append(s + self.emitpat % '"$"') elif part.startswith('{{'): code.append(self.__realign(part[2:-2], s)) elif part.startswith('{'): code.append(s + self.emitpat % part[1:-1]) elif part.startswith('<'): code.append(s + self.callpat % part[1:-1]) else: code.append(s + self.emitpat % part) return '\n'.join(code) class _TemplateMetaClass(type): __builder = _TemplateBuilder( 'self.out.append(%s)', 'self.write(%s)', 'self.%s(vars())') def __compile(cls, template, n): globals = sys.modules[cls.__module__].__dict__ if '__file__' not in globals: filename = '<%s %s>' % (cls.__name__, n) else: filename = '%s: <%s %s>' % (globals['__file__'], cls.__name__, n) code = compile(cls.__builder.build(template, filename), filename, 'exec') def expand(self, __dict = None, **kw): if __dict: kw.update([i for i in __dict.iteritems() if i[0] not in kw]) kw['self'] = self exec code in globals, kw return expand def __init__(cls, *args): for attr, val in cls.__dict__.items(): if attr == 'template' or attr.endswith('_template'): if isinstance(val, basestring): setattr(cls, attr, cls.__compile(val, attr)) type.__init__(cls, *args) class StringTemplate(object): """A base class for string template classes.""" __metaclass__ = _TemplateMetaClass def __init__(self, *args, **kw): self.out = [] self.template(*args, **kw) def write(self, *args): self.out.extend([str(a) for a in args]) def __str__(self): return ''.join(self.out) # The original version of templet called StringTemplate "Template" Template = StringTemplate class UnicodeTemplate(object): """A base class for unicode template classes.""" __metaclass__ = _TemplateMetaClass def __init__(self, *args, **kw): self.out = [] self.template(*args, **kw) def write(self, *args): self.out.extend([unicode(a) for a in args]) def __unicode__(self): return u''.join(self.out) def __str__(self): return unicode(self).encode('utf-8') def _templatefunction(func, listname, stringtype): globals, locals = sys.modules[func.__module__].__dict__, {} if '__file__' not in globals: filename = '<%s>' % func.__name__ else: filename = '%s: <%s>' % (globals['__file__'], func.__name__) builder = _TemplateBuilder('%s.append(%%s)' % listname, '%s.append(%s(%%s))' % (listname, stringtype)) args = inspect.getargspec(func) code = [ 'def %s%s:' % (func.__name__, inspect.formatargspec(*args)), ' %s = []' % listname, builder.build(func.__doc__, filename, ' '), ' return "".join(%s)' % listname] code = compile('\n'.join(code), filename, 'exec') exec code in globals, locals return locals[func.__name__] def stringfunction(func): """Function attribute for string template functions""" return _templatefunction(func, listname='out', stringtype='str') def unicodefunction(func): """Function attribute for unicode template functions""" return _templatefunction(func, listname='out', stringtype='unicode') # When executed as a script, run some testing code. 
if __name__ == '__main__': ok = True def expect(actual, expected): global ok if expected != actual: print "error - got:\n%s" % repr(actual) ok = False class TestAll(Template): """A test of all the $ forms""" template = r""" Bought: $count ${name}s$ at $$$price. ${{ for i in xrange(count): self.write(TestCalls(vars()), "\n") # inherit all the local $vars }} Total: $$${"%.2f" % (count * price)} """ class TestCalls(Template): """A recursive test""" template = "$name$i ${*[TestCalls(name=name[0], i=n) for n in xrange(i)]}" expect( str(TestAll(count=5, name="template call", price=1.23)), "Bought: 5 template calls at $1.23.\n" "template call0 \n" "template call1 t0 \n" "template call2 t0 t1 t0 \n" "template call3 t0 t1 t0 t2 t0 t1 t0 \n" "template call4 t0 t1 t0 t2 t0 t1 t0 t3 t0 t1 t0 t2 t0 t1 t0 \n" "Total: $6.15\n") class TestBase(Template): template = r""" $ $ """ class TestDerived(TestBase): head_template = "$name" body_template = "${TestAll(vars())}" expect( str(TestDerived(count=4, name="template call", price=2.88)), "template call\n" "" "Bought: 4 template calls at $2.88.\n" "template call0 \n" "template call1 t0 \n" "template call2 t0 t1 t0 \n" "template call3 t0 t1 t0 t2 t0 t1 t0 \n" "Total: $11.52\n" "\n") class TestUnicode(UnicodeTemplate): template = u""" \N{Greek Small Letter Pi} = $pi """ expect( unicode(TestUnicode(pi = 3.14)), u"\N{Greek Small Letter Pi} = 3.14\n") goterror = False try: class TestError(Template): template = 'Cost of an error: $0' except SyntaxError: goterror = True if not goterror: print 'TestError failed' ok = False @stringfunction def testBasic(name): "Hello $name." expect(testBasic('Henry'), "Hello Henry.") @stringfunction def testReps(a, count=5): r""" ${{ if count == 0: return '' }} $a${testReps(a, count - 1)}""" expect( testReps('foo'), "foofoofoofoofoo") @unicodefunction def testUnicode(count=4): u""" ${{ if not count: return '' }} \N{BLACK STAR}${testUnicode(count - 1)}""" expect( testUnicode(count=10), u"\N{BLACK STAR}" * 10) if ok: print "OK" mdp-3.3/mdp/utils/temporarydir.py000066400000000000000000000053261203131624700171570ustar00rootroot00000000000000# This is a backport of tempfile.TemporaryDirectory from Python 3.2 import os as _os from tempfile import mkdtemp import errno template = "tmp" class TemporaryDirectory(object): """Create and return a temporary directory. This has the same behavior as mkdtemp but can be used as a context manager. For example: with TemporaryDirectory() as tmpdir: ... Upon exiting the context, the directory and everthing contained in it are removed. 
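    A slightly fuller sketch (the prefix and file name are only
    illustrative):

        import os
        with TemporaryDirectory(prefix="mdp_") as tmpdir:
            path = os.path.join(tmpdir, "scratch.txt")
            f = open(path, "w")
            f.write("temporary data")
            f.close()
        # here both scratch.txt and the temporary directory have been removed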
""" def __init__(self, suffix="", prefix=template, dir=None): self._closed = False self._ENOENT = errno.ENOENT self.name = None # Handle mkdtemp throwing an exception self.name = mkdtemp(suffix, prefix, dir) def __repr__(self): return "<%s %r>" % (self.__class__.__name__, self.name) def __enter__(self): return self.name def cleanup(self, _warn=False): if self.name and not self._closed: try: self._rmtree(self.name) except (TypeError, AttributeError), ex: # Issue #10188: Emit a warning on stderr # if the directory could not be cleaned # up due to missing globals if "None" not in str(ex): raise return except OSError, ex: # ignore if the directory has been deleted already if ex.errno != self._ENOENT: raise self._closed = True def __exit__(self, exc, value, tb): self.cleanup() def __del__(self): self.cleanup() # XXX (ncoghlan): The following code attempts to make # this class tolerant of the module nulling out process # that happens during CPython interpreter shutdown # Alas, it doesn't actually manage it. See issue #10188 _listdir = staticmethod(_os.listdir) _path_join = staticmethod(_os.path.join) _isdir = staticmethod(_os.path.isdir) _remove = staticmethod(_os.remove) _rmdir = staticmethod(_os.rmdir) _os_error = _os.error def _rmtree(self, path): # Essentially a stripped down version of shutil.rmtree. We can't # use globals because they may be None'ed out at shutdown. for name in self._listdir(path): fullname = self._path_join(path, name) try: isdir = self._isdir(fullname) except self._os_error: isdir = False if isdir: self._rmtree(fullname) else: try: self._remove(fullname) except self._os_error: pass try: self._rmdir(path) except self._os_error: pass mdp-3.3/mdp_pylint.rc000066400000000000000000000170401203131624700146450ustar00rootroot00000000000000[MASTER] # Specify a configuration file. #rcfile= # Python code to execute, usually for sys.path manipulation such as # pygtk.require(). #init-hook= # Profiled execution. profile=no # Add files or directories to the blacklist. They should be base names, not # paths. ignore=test,run_tests.py # Pickle collected data for later comparisons. persistent=yes # List of plugins (as comma separated values of python modules names) to load, # usually to register additional checkers. load-plugins= [MESSAGES CONTROL] # Enable the message, report, category or checker with the given id(s). You can # either give multiple identifier separated by comma (,) or put this option # multiple time. #enable= # Disable the message, report, category or checker with the given id(s). You # can either give multiple identifier separated by comma (,) or put this option # multiple time (only on the command line, not in the configuration file where # it should appear only once). # MDP: # W0221 (Arguments number differs from %s method): This is by design, sinve methods # like _execute are explicitly allowed to take additional arguments. # W0142 (Used * or ** magic): This isn't really a problem. # W0622 (Redefining built-in 'str') disable=W0221,W0142,W0622 [REPORTS] # Set the output format. Available formats are text, parseable, colorized, msvs # (visual studio) and html output-format=text # Include message's id in output include-ids=no # Put messages in a separate file for each module / package specified on the # command line instead of printing them on stdout. Reports (if any) will be # written in a file name "pylint_global.[txt|html]". 
files-output=no # Tells whether to display a full report or only the messages reports=yes # Python expression which should return a note less than 10 (10 is the highest # note). You have access to the variables errors warning, statement which # respectively contain the number of errors / warnings messages and the total # number of statements analyzed. This is used by the global evaluation report # (RP0004). evaluation=10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10) # Add a comment according to your evaluation note. This is used by the global # evaluation report (RP0004). comment=no [BASIC] # Required attributes for module, separated by a comma required-attributes= # List of builtins function names that should not be used, separated by a comma bad-functions=apply,input # Regular expression which should only match correct module names module-rgx=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$ # MDP: The following name restrictions has been loosened. # Regular expression which should only match correct module level names const-rgx=(([a-zA-Z0-9_]*)|(__.*__))$ # Regular expression which should only match correct class names class-rgx=[A-Z_][a-zA-Z0-9]+$ # Regular expression which should only match correct function names function-rgx=[a-z_][a-z0-9_]{2,30}$ # Regular expression which should only match correct method names method-rgx=[a-z_][a-z0-9_]{2,30}$ # MDP: The following name restrictions have been loosened, to allow mathematical single letter variables. # Regular expression which should only match correct instance attribute names attr-rgx=[a-zA-Z0-9_]{1,30}$ # Regular expression which should only match correct argument names argument-rgx=[a-zA-Z0-9_]{1,30}$ # Regular expression which should only match correct variable names variable-rgx=[a-zA-Z0-9_]{1,30}$ # Regular expression which should only match correct list comprehension / # generator expression variable names inlinevar-rgx=[A-Za-z_][A-Za-z0-9_]*$ # Good variable names which should always be accepted, separated by a comma good-names=i,j,k,ex,Run,_ # Bad variable names which should always be refused, separated by a comma bad-names=foo,bar,baz,toto,tutu,tata # Regular expression which should only match functions or classes name which do # not require a docstring no-docstring-rgx=__.*__ [FORMAT] # Maximum number of characters on a single line. max-line-length=80 # Maximum number of lines in a module max-module-lines=1000 # String used as indentation unit. This is usually " " (4 spaces) or "\t" (1 # tab). indent-string=' ' [MISCELLANEOUS] # List of note tags to take in consideration, separated by a comma. notes=FIXME,XXX,TODO [SIMILARITIES] # Minimum lines number of a similarity. min-similarity-lines=12 # Ignore comments when computing similarities. ignore-comments=yes # Ignore docstrings when computing similarities. ignore-docstrings=yes [TYPECHECK] # Tells whether missing members accessed in mixin class should be ignored. A # mixin class is detected if its name ends with "mixin" (case insensitive). ignore-mixin-members=yes # List of classes names for which member attributes should not be checked # (useful for classes with attributes dynamically set). ignored-classes=SQLObject # When zope mode is activated, add a predefined set of Zope acquired attributes # to generated-members. zope=no # List of members which are set dynamically and missed by pylint inference # system, and so shouldn't trigger E0201 when accessed. Python regular # expressions are accepted. 
generated-members=REQUEST,acl_users,aq_parent [VARIABLES] # Tells whether we should check for unused import in __init__ files. init-import=no # A regular expression matching the beginning of the name of dummy variables # (i.e. not used). dummy-variables-rgx=_|dummy # List of additional names supposed to be defined in builtins. Remember that # you should avoid to define new builtins when possible. additional-builtins= [CLASSES] # List of interface methods to ignore, separated by a comma. This is used for # instance to not check methods defines in Zope's Interface base class. ignore-iface-methods=isImplementedBy,deferred,extends,names,namesAndDescriptions,queryDescriptionFor,getBases,getDescriptionFor,getDoc,getName,getTaggedValue,getTaggedValueTags,isEqualOrExtendedBy,setTaggedValue,isImplementedByInstancesOf,adaptWith,is_implemented_by # List of method names used to declare (i.e. assign) instance attributes. defining-attr-methods=__init__,__new__,setUp # List of valid names for the first argument in a class method. valid-classmethod-first-arg=cls [DESIGN] # Maximum number of arguments for function / method max-args=20 # Argument names that match this expression will be ignored. Default to name # with leading underscore ignored-argument-names=_.* # Maximum number of locals for function / method body max-locals=50 # Maximum number of return / yield for function / method body max-returns=6 # Maximum number of branch for function / method body max-branchs=20 # Maximum number of statements in function / method body max-statements=100 # Maximum number of parents for a class (see R0901). max-parents=7 # Maximum number of attributes for a class (see R0902). max-attributes=20 # Minimum number of public methods for a class (see R0903). min-public-methods=0 # Maximum number of public methods for a class (see R0904). max-public-methods=30 [IMPORTS] # Deprecated modules which should not be used, separated by a comma deprecated-modules=regsub,string,TERMIOS,Bastion,rexec # Create a graph of every (i.e. internal and external) dependencies in the # given file (report RP0402 must not be disabled) import-graph= # Create a graph of external dependencies in the given file (report RP0402 must # not be disabled) ext-import-graph= # Create a graph of internal dependencies in the given file (report RP0402 must # not be disabled) int-import-graph= [EXCEPTIONS] # Exceptions that will emit a warning when being caught. Defaults to # "Exception" overgeneral-exceptions=Exception mdp-3.3/py3tool.py000066400000000000000000000067151203131624700141320ustar00rootroot00000000000000#!/usr/bin/env python3 """ Convert *py files with lib2to3. Taken from numpy. Adapted to our needs. 
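Typical use mirrors what setup.py does for Python 3 builds (the destination
paths below are copied from that script and may differ in other setups):

    import os
    import py3tool
    py3tool.sync_2to3('mdp', os.path.join('build', 'py3k', 'mdp'))
    py3tool.sync_2to3('bimdp', os.path.join('build', 'py3k', 'bimdp'))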
""" import shutil import os import sys import fnmatch import lib2to3.main from io import StringIO EXTRA_2TO3_FLAGS = {} BASE = os.path.normpath(os.path.join(os.path.dirname(__file__), '..')) TEMP = os.path.normpath(os.path.join(BASE, '_py3k')) def custom_mangling(filename): pass def walk_sync(dir1, dir2, _seen=None): if _seen is None: seen = {} else: seen = _seen if not dir1.endswith(os.path.sep): dir1 = dir1 + os.path.sep # Walk through stuff (which we haven't yet gone through) in dir1 for root, dirs, files in os.walk(dir1): sub = root[len(dir1):] if sub in seen: dirs = [x for x in dirs if x not in seen[sub][0]] files = [x for x in files if x not in seen[sub][1]] seen[sub][0].extend(dirs) seen[sub][1].extend(files) else: seen[sub] = (dirs, files) if not dirs and not files: continue yield os.path.join(dir1, sub), os.path.join(dir2, sub), dirs, files if _seen is None: # Walk through stuff (which we haven't yet gone through) in dir2 for root2, root1, dirs, files in walk_sync(dir2, dir1, _seen=seen): yield root1, root2, dirs, files def sync_2to3(src, dst, clean=False): to_convert = [] for src_dir, dst_dir, dirs, files in walk_sync(src, dst): for fn in dirs + files: src_fn = os.path.join(src_dir, fn) dst_fn = os.path.join(dst_dir, fn) # skip temporary etc. files if fn.startswith('.#') or fn.endswith('~'): continue # remove non-existing if os.path.exists(dst_fn) and not os.path.exists(src_fn): if clean: if os.path.isdir(dst_fn): shutil.rmtree(dst_fn) else: os.unlink(dst_fn) continue # make directories if os.path.isdir(src_fn): if not os.path.isdir(dst_fn): os.makedirs(dst_fn) continue dst_dir = os.path.dirname(dst_fn) if os.path.isfile(dst_fn) and not os.path.isdir(dst_dir): os.makedirs(dst_dir) # don't replace up-to-date files try: if os.path.isfile(dst_fn) and \ os.stat(dst_fn).st_mtime >= os.stat(src_fn).st_mtime: continue except OSError: pass # copy file shutil.copyfile(src_fn, dst_fn) # add .py files to 2to3 list if dst_fn.endswith('.py'): to_convert.append((src_fn, dst_fn)) # run 2to3 flag_sets = {} for fn, dst_fn in to_convert: flag = '' for pat, opt in EXTRA_2TO3_FLAGS.items(): if fnmatch.fnmatch(fn, pat): flag = opt break flag_sets.setdefault(flag, []).append(dst_fn) for flags, filenames in flag_sets.items(): if flags == 'skip': continue _old_stdout = sys.stdout try: sys.stdout = StringIO() lib2to3.main.main("lib2to3.fixes", ['-w'] + flags.split()+filenames) finally: sys.stdout = _old_stdout for fn, dst_fn in to_convert: # perform custom mangling custom_mangling(dst_fn) mdp-3.3/pytest.ini000066400000000000000000000001071203131624700141650ustar00rootroot00000000000000[pytest] norecursedirs = build .git cover html dist minversion = 2.1.2 mdp-3.3/setup.py000066400000000000000000000110331203131624700136460ustar00rootroot00000000000000# This file must be runnable with all supported python versions: # 2.5, 2.6, 2.7, 3.1, and 3.2. # Things which might not be available: # context managers, the print statement, some modules (e.g. ast). 
from distutils.core import setup import os import sys email = 'mdp-toolkit-devel@lists.sourceforge.net' classifiers = ["Development Status :: 5 - Production/Stable", "Intended Audience :: Developers", "Intended Audience :: Education", "Intended Audience :: Science/Research", "License :: OSI Approved :: BSD License", "Operating System :: OS Independent", "Programming Language :: Python", "Programming Language :: Python :: 2", "Programming Language :: Python :: 3", "Topic :: Scientific/Engineering :: Information Analysis", "Topic :: Scientific/Engineering :: Mathematics"] def get_module_code(): # keep old python compatibility, so no context managers mdp_init = open(os.path.join(os.getcwd(), 'mdp', '__init__.py')) module_code = mdp_init.read() mdp_init.close() return module_code def throw_bug(): raise ValueError('Can not get MDP version!\n' 'Please report a bug to ' + email) try: import ast def get_extract_variable(tree, variable): for node in ast.walk(tree): if type(node) is ast.Assign: try: if node.targets[0].id == variable: return node.value.s except: pass throw_bug() def get_mdp_ast_tree(): return ast.parse(get_module_code()) def get_version(): tree = get_mdp_ast_tree() return get_extract_variable(tree, '__version__') def get_short_description(): tree = get_mdp_ast_tree() return get_extract_variable(tree, '__short_description__') def get_long_description(): tree = get_mdp_ast_tree() return ast.get_docstring(tree) except ImportError: import re def get_variable(pattern): m = re.search(pattern, get_module_code(), re.M + re.S + re.X) if not m: throw_bug() return m.group(1) def get_version(): return get_variable(r'^__version__\s*=\s*[\'"](.+?)[\'"]') def get_short_description(): text = get_variable(r'''^__short_description__\s*=\s* # variable name and = \\?\s*(?:"""|\'\'\')\\?\s* # opening quote with backslash (.+?) \s*(?:"""|\'\'\')''') # closing quote return text.replace(' \\\n', ' ') def get_long_description(): return get_variable(r'''^(?:"""|\'\'\')\\?\s* # opening quote with backslash (.+?) 
\s*(?:"""|\'\'\')''') # closing quote def setup_package(): # Perform 2to3 if needed local_path = os.path.dirname(os.path.abspath(sys.argv[0])) src_path = local_path if sys.version_info[0] == 3: src_path = os.path.join(local_path, 'build', 'py3k') import py3tool print("Converting to Python3 via 2to3...") py3tool.sync_2to3('mdp', os.path.join(src_path, 'mdp')) py3tool.sync_2to3('bimdp', os.path.join(src_path, 'bimdp')) # check that we have a version version = get_version() short_description = get_short_description() long_description = get_long_description() # Run build os.chdir(src_path) sys.path.insert(0, src_path) setup(name = 'MDP', version=version, author = 'MDP Developers', author_email = email, maintainer = 'MDP Developers', maintainer_email = email, license = "http://mdp-toolkit.sourceforge.net/license.html", platforms = ["Any"], url = 'http://mdp-toolkit.sourceforge.net', download_url = 'http://sourceforge.net/projects/mdp-toolkit/files', description = short_description, long_description = long_description, classifiers = classifiers, packages = ['mdp', 'mdp.nodes', 'mdp.utils', 'mdp.hinet', 'mdp.test', 'mdp.graph', 'mdp.caching', 'mdp.parallel', 'bimdp', 'bimdp.hinet', 'bimdp.inspection', 'bimdp.nodes', 'bimdp.parallel', 'bimdp.test'], package_data = {'mdp.hinet': ['hinet.css'], 'mdp.utils': ['slideshow.css']} ) if __name__ == '__main__': setup_package() mdp-3.3/testall.py000066400000000000000000000061461203131624700141670ustar00rootroot00000000000000# calls to os.system should be changed to subprocess.Popen! # we don't need any temporary files and such. it is important # to set the environment properly # I call it like this: # $ cd /home/tiziano/git/MDP/mdp-toolkit # $ python testall.py /home/tiziano/python/x86_64/lib/pythonVERSION/site-packages PARMS = {'2.5': ('numpy', None), '2.7': ('numpy', None), '3.1': ('numpy', None), '2.6': ('scipy', None, 'parallel_python', 'shogun', 'libsvm', 'joblib', 'scikits'), } import os import sys import subprocess # get from sys.argv a directory to add to pythonpath # /path/to/pythonVERSION/dir if len(sys.argv) > 1: dirbase = sys.argv[1] else: dirbase = '/dev/null' # check that we are in our git repo conds = (os.path.exists('.git'), os.path.basename(os.getcwd()) == 'mdp-toolkit', os.path.exists('mdp'), os.path.exists('bimdp'), ) if not all(conds): sys.stderr.write('Not in mdp git clone!') sys.exit(-1) startwd = os.getcwd() config = '-c "import mdp; import sys; sys.stdout.write(mdp.config.info())"' # create command line for vers in PARMS: print 'Running: '+vers path = dirbase.replace('VERSION', vers) # if version is 3.X we need to build mdp and change to the build directory if vers[0] == '3': cmdline = ('python'+vers, 'setup.py', 'build', '> /tmp/mdp_build', '2>&1', ) print 'Building for Python3...', #out.write('echo "Building for Python3..."\n') os.system(' '.join(cmdline)) print 'done.' 
# we need to change directory build_dir = os.listdir(os.path.join('build','py3k','build'))[0] os.chdir(os.path.join('build','py3k','build', build_dir)) else: os.chdir(startwd) wd = os.getcwd() env = {'MDPNUMX': PARMS[vers][0]} for dep in PARMS[vers][1:]: print 'NoDep: '+str(dep) if dep is not None: key = 'MDP_DISABLE_'+dep.upper() else: key = 'MDP_DISABLE_NONE' env[key] = '1' cmdline_base = ('MDPNUMX='+env['MDPNUMX'], key+'=1', 'PYTHONPATH='+path+':'+wd, ' /usr/bin/python'+vers, ) cmdline_config = (config,) cmdline_tests = (os.path.join('mdp','test','run_tests.py'), '--capture', 'fd', '-x', 'mdp', 'bimdp', ' >', '/tmp/mdp_current_test', '2>&1', ) # show config #os.system(' '.join(cmdline_base+cmdline_config)) sys.stdout.write('\n') # write out command line #print ' '+' '.join(cmdline_base+cmdline_tests) exit_status = os.system(' '.join(cmdline_base+cmdline_tests)) if exit_status != 0: sys.stderr.write('='*30+' FAILURE '+'='*30) sys.stderr.write('\nLog is in /tmp/mdp_current_test.\n') sys.exit(-1)
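# A possible shape for the subprocess.Popen replacement suggested in the
# header comment (untested sketch; 'args' and the environment handling are
# assumptions, since the current command lines rely on shell redirection;
# subprocess is already imported above):
#
#     log = open('/tmp/mdp_current_test', 'w')
#     proc = subprocess.Popen(args, env=dict(os.environ, **env),
#                             stdout=log, stderr=subprocess.STDOUT)
#     exit_status = proc.wait()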