===== multipletau-0.1.9/CHANGELOG =====
0.1.9
 - include docs in sdist
0.1.8
 - update documentation and example files
 - move documentation to readthedocs.io
0.1.7
 - code cleanup with pep8 and autopep8
 - always use numpy dtypes
 - fix tests:
   - take into account floating inaccuracies
   - support i386 numpy dtypes
0.1.6
 - also compute correlation for zero lag time (`G(tau==0)`)
 - support NumPy 1.11
 - add tests to complete code coverage
 - bugfixes:
   - wrong normalization for cplx array `v` in `correlate` if
     `normalize==True`
   - wrong normalization in `correlate_numpy` if `normalize==False`
0.1.5
 - update documentation
 - support Python 3
0.1.4
 - integer and boolean input types are now automatically converted
   to floats
 - `multipletau.correlate` now works with complex data types
 - `multipletau.correlate` now checks if input data are same objects
 - documentation now contains examples
0.1.3
 - first non-cython implementation


===== multipletau-0.1.9/LICENSE =====
Copyright (c) 2014 Paul Müller

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:

1. Redistributions of source code must retain the above copyright
   notice, this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright
   notice, this list of conditions and the following disclaimer in
   the documentation and/or other materials provided with the
   distribution.

3. Neither the name of multipletau nor the names of its contributors
   may be used to endorse or promote products derived from this
   software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL INFRAE OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


===== multipletau-0.1.9/MANIFEST.in =====
include CHANGELOG
include LICENSE
include README.rst
recursive-include examples *.py *.jpg
recursive-include docs *.py *.md *.rst *.txt
recursive-include tests *.py *.md test_*.npy
prune docs/_build


===== multipletau-0.1.9/README.rst =====
multipletau
===========

|PyPI Version| |Tests Status| |Coverage Status| |Docs Status|

Multiple-tau correlation is computed on a logarithmic scale (fewer
data points are computed) and is thus much faster than conventional
correlation on a linear scale such as
`numpy.correlate <https://docs.scipy.org/doc/numpy/reference/generated/numpy.correlate.html>`__.


Installation
------------

Multipletau supports Python 2.6+ and Python 3.3+ with a common
codebase. The only requirement for ``multipletau`` is
`NumPy <http://www.numpy.org/>`__ (for fast operations on arrays).

Install multipletau from the Python package index:

::

    pip install multipletau


Documentation
-------------

The documentation, including the reference and examples, is available
on `readthedocs.io <https://multipletau.readthedocs.io>`__.


Usage
-----

.. code:: python

    import numpy as np
    import multipletau

    a = np.linspace(2,5,42)
    v = np.linspace(1,6,42)

    multipletau.correlate(a, v, m=2)
    array([[   0.        ,  569.56097561],
           [   1.        ,  549.87804878],
           [   2.        ,  530.37477692],
           [   4.        ,  491.85812017],
           [   8.        ,  386.39500297]])


Citing
------

The multipletau package should be cited like this (replace "x.x.x"
with the actual version of multipletau that you used and
"DD Month YYYY" with a matching date).

Paul Müller (2012) *Python multiple-tau algorithm* (Version x.x.x)
[Computer program]. Available at
https://pypi.python.org/pypi/multipletau (Accessed DD Month YYYY)

You can find out what version you are using by typing (in a Python
console):

.. code:: python

    >>> import multipletau
    >>> multipletau.__version__
    '0.1.4'


.. |PyPI Version| image:: http://img.shields.io/pypi/v/multipletau.svg
   :target: https://pypi.python.org/pypi/multipletau
.. |Tests Status| image:: http://img.shields.io/travis/FCS-analysis/multipletau.svg?label=tests
   :target: https://travis-ci.org/FCS-analysis/multipletau
.. |Coverage Status| image:: https://img.shields.io/coveralls/FCS-analysis/multipletau.svg
   :target: https://coveralls.io/r/FCS-analysis/multipletau
.. |Docs Status| image:: https://readthedocs.org/projects/multipletau/badge/?version=latest
   :target: https://readthedocs.org/projects/multipletau/builds/


===== multipletau-0.1.9/docs/README.md =====
multipletau documentation
=========================

To install the requirements for building the documentation, run

    pip install -r requirements.txt

To compile the documentation, run

    sphinx-build . _build


===== multipletau-0.1.9/docs/conf.py =====
# -*- coding: utf-8 -*-
#
# project documentation build configuration file, created by
# sphinx-quickstart on Sat Feb 22 09:35:49 2014.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))

# Get version number and other metadata from the setup.py file
import mock
import os.path as op
import sys

# include parent directory
pdir = op.dirname(op.dirname(op.abspath(__file__)))
sys.path.insert(0, pdir)
# include extensions
sys.path.append(op.abspath('extensions'))

# Mock all dependencies
install_requires = ["numpy"]
for mod_name in install_requires:
    sys.modules[mod_name] = mock.Mock()

# There should be a file "setup.py" that has the properties "version",
# "author", etc.
from setup import author, authors, description, name, version, year

projectname = name
projectdescription = description

# http://www.sphinx-doc.org/en/stable/ext/autodoc.html#confval-autodoc_member_order
# Order class attributes and functions in separate blocks
autodoc_member_order = 'bysource'
autodoc_mock_imports = install_requires

# Display link to GitHub repo instead of doc on rtfd
rst_prolog = """
:github_url: https://github.com/FCS-analysis/multipletau
"""

# -- General configuration ------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ['sphinx.ext.intersphinx',
              'sphinx.ext.autosummary',
              'sphinx.ext.autodoc',
              'sphinx.ext.mathjax',
              'sphinx.ext.viewcode',
              'sphinx.ext.napoleon',
              'fancy_include',
              ]

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix of source filenames.
source_suffix = '.rst'

# The encoding of source files.
#source_encoding = 'utf-8-sig'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = projectname
copyright = year+", "+author

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
# The full version, including alpha/beta/rc tags.
release = version

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.

# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None

# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True

# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False

# The name of the Pygments (syntax highlighting) style to use.
#pygments_style = 'default'

# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []

# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False


# -- Options for HTML output ----------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'

# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []

# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None

# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None

# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None

# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
#html_static_path = ['_static']

# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []

# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'

# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True

# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}

# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}

# If false, no module index is generated.
#html_domain_indices = True

# If false, no index is generated.
#html_use_index = True

# If true, the index is split into individual pages for each letter.
#html_split_index = False

# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True

# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True

# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True

# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''

# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None

# Output file base name for HTML help builder.
htmlhelp_basename = projectname+'doc'


# -- Options for LaTeX output ---------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #'papersize': 'letterpaper',

    # The font size ('10pt', '11pt' or '12pt').
    #'pointsize': '10pt',

    # Additional stuff for the LaTeX preamble.
    #'preamble': '',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
#  author, documentclass [howto, manual, or own class]).
latex_documents = [
    ('index', projectname+'.tex', projectname+' Documentation',
     author, 'manual'),
]

# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None

# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False

# If true, show page references after internal links.
#latex_show_pagerefs = False

# If true, show URL addresses after external links.
#latex_show_urls = False

# Documents to append as an appendix to all manuals.
#latex_appendices = []

# If false, no module index is generated.
#latex_domain_indices = True


# -- Options for manual page output ---------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    ('index', projectname, projectname+' Documentation',
     authors, 1)
]

# If true, show URL addresses after external links.
#man_show_urls = False


# -- Options for Texinfo output -------------------------------------------

# Grouping the document tree into Texinfo files.
# List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [
    ('index', projectname, projectname+u' Documentation',
     author, projectname, projectdescription,
     'Numeric'),
]

# Documents to append as an appendix to all manuals.
#texinfo_appendices = []

# If false, no module index is generated.
#texinfo_domain_indices = True

# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'

# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False


# -----------------------------------------------------------------------------
# intersphinx
# -----------------------------------------------------------------------------
intersphinx_mapping = {"python": ('https://docs.python.org/', None),
                       "numpy": ('http://docs.scipy.org/doc/numpy', None),
                       }


===== multipletau-0.1.9/docs/extensions/fancy_include.py =====
"""Include single scripts with doc string, code, and image

Use case
--------
There is an "examples" directory in the root of a repository.
An example is a file ("an_example.py") that consists of a doc string
at the beginning of the file, the example code, and, optionally, an
image file (png, jpg) ("an_example.png").

Configuration
-------------
In conf.py, set the parameter

    fancy_include_path = "../examples"

to wherever the included files reside (the default is "../examples").

Usage
-----
The directive

    .. fancy_include:: an_example.py

will display the doc string formatted with the first line as a
heading, a code block with line numbers, and the image file.
""" import io import os.path as op from docutils.statemachine import ViewList from docutils.parsers.rst import Directive from sphinx.util.nodes import nested_parse_with_titles from docutils import nodes class IncludeDirective(Directive): required_arguments = 1 optional_arguments = 0 def run(self): path = self.state.document.settings.env.config.fancy_include_path full_path = op.join(path, self.arguments[0]) with io.open(full_path, "r") as myfile: text = myfile.read() source = text.split('"""') doc = source[1].split("\n") doc.insert(1, "~" * len(doc[0])) # make title heading code = source[2].split("\n") # documentation rst = [] for line in doc: rst.append(line) # image for ext in [".png", ".jpg"]: image_path = full_path[:-3] + ext if op.exists(image_path): break else: image_path = "" if image_path: rst.append(".. figure:: {}".format(image_path)) rst.append("") # download file rst.append(":download:`{}<{}>`".format( op.basename(full_path), full_path)) # code rst.append("") rst.append(".. code-block:: python") rst.append(" :linenos:") rst.append("") for line in code: rst.append(" {}".format(line)) rst.append("") vl = ViewList(rst, "fakefile.rst") # Create a node. node = nodes.section() node.document = self.state.document # Parse the rst. nested_parse_with_titles(self.state, vl, node) return node.children def setup(app): app.add_config_value('fancy_include_path', "../examples", 'html') app.add_directive('fancy_include', IncludeDirective) return {'version': '0.1'} # identifies the version of our extension multipletau-0.1.9/docs/index.rst000066400000000000000000000007621316472354200166730ustar00rootroot00000000000000multipletau documentation ========================= General ::::::: .. automodule:: multipletau :members: Methods ::::::: Summary: .. autosummary:: autocorrelate correlate correlate_numpy Autocorrelation --------------- .. autofunction:: autocorrelate Cross-correlation ----------------- .. 
autofunction:: correlate Cross-correlation (NumPy) ------------------------- .. autofunction:: correlate_numpy Examples ======== .. fancy_include:: compare_correlation_methods.py multipletau-0.1.9/docs/requirements.txt000066400000000000000000000000231316472354200203040ustar00rootroot00000000000000mock sphinx>=1.6.4 multipletau-0.1.9/examples/000077500000000000000000000000001316472354200157135ustar00rootroot00000000000000multipletau-0.1.9/examples/compare_correlation_methods.jpg000066400000000000000000001630261316472354200241770ustar00rootroot00000000000000JFIFC     C  " }!1AQa"q2#BR$3br %&'()*456789:CDEFGHIJSTUVWXYZcdefghijstuvwxyz w!1AQaq"2B #3Rbr $4%&'()*56789:CDEFGHIJSTUVWXYZcdefghijstuvwxyz ?(((F? &~_G'մK~͕24sA4Op9Edd`YH E{-&$3oh5!h@Exo< I kF?M_Hg;Z7$Q^<jC>ѿ&yW7֍4TW?ρvoIy| õM{&$3oh5!h@Exo< I kF?M_Hg;Z7$Q^<jC>ѿ&yW7֍4TW?ρvoIy| õM{&$3oh5!h@Exo< I kF?M_Hg;Z7$Q^<jC>ѿ&yW7֍4TW?ρvoIy| õM{&$3oh5!h@Exo< I kF?M_Hg;Z7$Q^<jC>ѿ&yW7֍4TW?ρvoIy| õM{&$3oh5!h@Exo< I kF?M_Hg;Z7$Q^<jC>ѿ&yW7֍4TW?ρvoIy| õM{&$3oh5!h@Exo< I kF?M_Hg;Z7$Q^<jC>ѿ&yW7֍4TW?ρvoIy| õM{&$3oh5!h@Exo< I kF?M_Hg;Z7$Q^<jC>ѿ&yW7֍4TW?ρvoIy| õM{&$3oh5!h@Exo< I kF?M_Hg;Z7$Q^<jC>ѿ&yW7֍4TW?ρvoIy| õM{&$3oh5!h@Exo< I kF?M_Hg;Z7$Q^<jC>ѿ&yW7֍4TV? ~5x&_|9F xe4oX䍶є+z ( ( ( '=˗O5* _ /^&?lz~:H:I՞k04֖PϺQ#Ip;Vkقa5\kWkjM*;%WQ*4Yv2'On?ekowWS&z__ʺҨz+ ?q];g٪jI|*wmDwR[ҩ$y-7zK>>K܁??MoHS+w sW[-fO?S%:b(|ҏ8WE'DhzXoAc(?Qo8]Fo]@do_ҥOTy=E' 1xVYbǕe:$kirt>#E&>7^ 2mIU &*浝5' gTW wsn&hKIsSe~,x[ 5fNcuSqbu1hKΩӇI?G~؟|.~|-3mgQ >.4oAP>tLYv ]@ؼwC?b ZW $~߃.OiIIu=!.QlZhIP%q~1K\|Y?t&8'ń0˯CuY5ʼn&/*!>bҿQ'>!ֵG4Yoll12KYIq;Ao!ED?t{CC:!Nd@ef r@S1i_(Njf/(G|G{o z~&\nR=iau7nZ/6n62&E#+ wxZ׼saN]*VҷZtbW4۹`G`VS\ƨftd~mÄ3yhݰ1 9b ZW $O犾1Mk-n|[|8]ĞNxt>fj!;Y瘦%*,YGυ?P/֞">j L0 vmq33EH(C@Oh'htnq!Dܹ 2=i>1' \x>BQ{Oh'Nlqm3hl|ph7?~j>97#KQa/k=_^&HkYƙg3\ILBU97+@UO? 
z/k4`Q cמ%Ʒc1.<+ dg?L~˷6,5 f/VҴ׳9DCLϒF Q@Q@Q@Q@5mB6i{1,\/d3{foڇf?\Dʥ5~81V/T|%IJd ۾rJĩ6Θ+G=Zi񎄌1I*1_ZQ\rB c96S „#.)?$/Ⱦ H?x{X+yr}͵7>G:mSR-{& W5>'.+=>78+aq_׊N_ Iw|!vw>!?ikDlmn} 4?)KU\q>%ăդHǟa_V1ӏLuoj0O*W_?kJRaDm w',~>||pV_$^8?dUu?,c+.~}G&i6ֲKmyB5g+}_0ݖiW;Y4}G8:_ze!yphM/)y\V_x*ُ|$`?隆ִusk)et>3> ꉠ՚66~1ZQFY'Go|d;GeS_Jq$|A)M]?>~:IDf uh Uʏ?~U52ad}kSBgnN4>{g_Ž-?Fe+KMW\ƧfNGkp$M%~Q_.3JC.[xr8_744_*&e )|nodV?|Med"ٚmޛ|߳y{voݵ ?g _mkvAwwWķ1XNL,[[ }M4kG㗃]>Mm/Z v2G:HUhRiJʖ|%g?amWu*TId[Dro@;׈x K\|~%|ckG^\ѭ%[+#$tY$^Cry'͟?g_xExm2}9<6&\ *'L;s€ _4/ko0g[9^x͔)!-2v /'O v&S Us4@é& _ &x<*_Zj u?f6̆<[|X* /tǂ5OĚv+YRִ QkI#ed+zo'Ś«5u?MJفw[~"|R~6 ~(]fHtZ}NV$c!Ε;1dg~s /čsQgOOxG״G~y4kx5^F^UZ,}[Z4ٗw&:t}3? |Ys-ڵ}^{y/cgBۑ '=˗O5*'=˗O5*( ( ( ( ( ( ( ( ( ( WF:V]9a_n:~: +*(!V*KIƾւ_F_s>a?'g  x0A$K.-MrIߟFC8֓| ԡ{ޔ'3 Z*~I? O7p%K;4R >#oE/(1\f𿉢'ٮe>:~G؛ O Xb/^c?, IP_Dsgo_G'Dzpmbڃ~NkMy3[J74 9RI|u OqeU X+hF|j=Vmk=ZїĖs< S pPDEVtoOٻ_ q&Rop[)qƻGV~PZ|O'L7oK o:?j4_]wī!o;mLK? mڼ(^ ?o?:?ߦtXƎ)I9U =:CRO&ֶr>|)֚e>kc7s<ʋ cOBRiYw:*N,t˨(h~5"g `%UHf£ĒHeFF|Cwx⾝2G%Ə-omexxfO G: >&}? `eIнTOnx/7<]W~1{}[Pա$Fejsp %X^|[{C©ea:k,:<<-q|rNhj dm[=Wë:[qXd豽 ""ܕ*(|3u|_?kuW⇋un/zǥj:wC.V /%3B.$D01|CE u@6!a/3@k ?cͭipiЬ%7P0M f8FUm~0kOWvwqyowkvWVBRӘ% i`)cf6%u\Wo x^ B3e٘ '5 ?o?:? 
j/t K ?o?:?EǏ|#?ɉ"Vbc~]覀< i>"5_h:7~&\Y^Z)k#YH 5usOz{Jye玢6vk#8q_TPEP_bF'JQu'~%x zO6 ͗R/5POw fů0GI4;+;?0gaWr˫--$ x#.` I.#ZWso^  gOk$%̱L 42Ca./H?OEY#tbl繒P}rrYEVweU,9 X!|)վ0x<}k>:Y/.jfZo?0HBcय़u@j|S>xͤ,-"S(Ig=>8ctg@>ү y|O^H?x1x_Iۮ3W/soiwYGαŪ[_B 0LP*n2JZcK:mPAfG`ڊ0( ( ( ( ( ( ( ( ( ( ( ( (qooynp$J%E `ھtS^&-\)&~ʱi:}LpmnS$6#ի+bXmU?491x&:)WvM3N~߲'x7V;CC485쵽' ]Ok6d6 yLJ>S'[H%F`1_D׃lz|Vg<}u9@y7Hdbcp\e8ŪY_)ZJe82N:˖ EY/' ?uvF;`q€|D$|?>YY LG!t|b9+l\~"2ߏ[_ t76яki,@;Skeߴf?-jr 'ϊM(RNeTY.QanwmI%k[xs{bXLgr^u)>_AJs>ۢX?meO{ "uȓEP|&nl~Ꮨ#+j5b#>n8,WTPN2IBJ_xn(8((((((((() hNJʖO.?濕\*Z(<Tyq5hkQ<򥢀ˏyG_ʖO.?濕\*Z(<Tyq5h ?":ݏ`ҟh>w^~13֡ HHZ _c5{ݜbMUOn;D->[ "O4(g7Qe}/_7K4FO׉'i5;Opjzz&\ab/噘+2Dc wA|{u/l&\γ:DjKyw5}5jzwּZ.,?԰™mMy54&nsnݖgyGA riT*SOFiUFx> ;|Och\隮rAs Uj.S|O;v ㏁~xT/K)Ҵ?><Ӥ$aPØWH'?[dzVa.!a Wf_QǧGfNީ>v3 t#\4JMr2Zu`V)RZfQ^Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@'CMk2Y?|8o'!?ّZv ?{B}ńr+Y5P[E}c; zk55|];o_xS f(}`m7Fq$Ԭ[kXKaʟ࠺Ľ?W]-G⟆?h?X[vw|?KbF+ŗ,QX# } ch> y|O^Ns_ ?~7|+ߌ)'I|U6*nвGj/&W|1ov^ 冏i~%Yi:NfVx]H(HDPP@@EQEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQE῀>-BxD#yI<Gdu< ~xo'Lf]x=^xq-xl4icoϾSeQҊGD r(eaR2f)s'Qm55=噂NgzURHjfqmJϕRt?ciߊu1qH d%cܮ7DčrWѕ]3}krďΠڏf+"~ᙐAkhDB(#XweO/|C07Nu[w{yFf@k b)|P1Yd9QZߗvݮKW͚e7,#m.ӅSxYEyr?<-;6 [/m7[`2ep2AW׳CCOu~RU(MJ=ӿu[+cp(((((((((1>+x_x /VB..#0*L G⟶Lo>ˮ|]ЭG+ܿPe3L??n,o{hj Cs,@Wһ?8b)5 %i:QEQEV`n5!ѧYke1xiO_ 5F 9ap~ͪZU|j娮/4Oniq> >YZf](iG7?/-?b+]t?ZxC|e&ԾXD !&  [ǗakXETUoR~Ҋǀ~ n~tl 6[L<fR22*J[ cχo ? 
'@WQHP_0ڇ PwK'kI "+'+ k˅&$K&X=һ})%Xd -stbg|3XW5Lng40\5Wag(=nk_x oh3]ʟN>2dFcj~OE$.fXx5믄71#KΡ R9-XEp(*>f2~` RUf+Aʽl4#5/w QQE!ezWQEPEPEP_!x:~?f?|]ӯ?^[FvJouH.$8 OjsIk3~]Hs&=\VD\3H51\Ub=A&һgynP9MFSTrtMVPN +Y;5ċU"v"NXk?'8x߈Ѵm>šoF=I;Ӵ"hw8>,x/ |I"ket QL03%qzʽ%;[tfl|JjєdR Yx;tz?M>)~/w ӼSyi}5땊uO`'+߫c * z^!զhKI/$T.B!s3^!R5ot:[+_:IksCp$8UAV w %j>0?h>/xUb@#mrwN rNkТ?zy^?̟O>ٷ@v#OyGn K.rlcT Vx@D v.+;{™kQEQEVt}#AM/Bc,ccK1f!T2I'Ԓk[t|pEOXJM\6+[ud#V m^ w\gW5c?M~.QN]:!SYxmD[,$'pA{⧋?wݚ^ e @{WL1o[kewm\X$rx]uYR9J"*@7 $_?Sz=ʊ((((((((((((((((((((j&Tv:~"~/7LmьL1$ v*YH~5,<hp2Qx~ΜEx` RpYܚf䩃)ۄUj_G8G &*QЩw O%(__'h~$k9JoêI=z߷3;gb[ |8G㎪7p{6܉KK< wr9("X`Q@DAtvU%J*K%4c .R?1/ t˝3 |^Oxcgg/W=ԑ¨>FnM<bx; W⻋cS_ -/ 5˲zǭ}EsQȲ qiҔqoX<5QP\)~'z_gO-um"m&pdkxT$Fr?⯌LZxskRᯈ`XhF/odLtrk􂊪S\V"^R_lzT():έƒgS?D? ęO %MUrʯ;TZ.6Skv|MҾ Ŀ |T[ aWdu,2#g wAж7ĭey" 52$(8Ȯ~||'sc-`ֿr|i<}%E|ԒQ_qm7/?36]R&KkCP"?-"h9۲2 Ԇik܉%m[QTi+R˫>&i{:);ſ6 M}{)_0o_ Q? FK?^ÿG8(M9..}uFpU]uuƆ>qR"u5VTeNM5M=SOt)_cR|).jαkԂ?[N:maeLPFbB؇ ;4*ͶAD%ӋlEHV$"ms½Ov:}ik=CUcdQGݒ{0*ú΄hI67 mH[! 2TQHTsjӕJUʓJ[sw'>x<Us)ޭxYU8y]̔ȜEѳS|T~~??¿!|t>s/>!xq\Kh`Y?LJ;_P+nj=ǝkKM z;&9O64d(bHev_독[HcOY촫h,fT˙v,2M~|v2|!|9E_X|9bIK[0w'i5 [Px;oC\UƁ 5m7vw[ſ$N]̮5B5Y᛿mnwq$>q js-h5k|'$~i͍qoGƸSK =$kml2MG&uM0H4]He J* ƽ.omZVɴ+g9gc9?YΜ'%ٔ|sn8 ᭢~t־IJ.,w;O`o7ݸ]:ɸoϿNGLԿRY~O9U?iW T%RBö3%$GH򾧚WI4wog6Cxƒy7 H`aنpGc_[D?F?s}c^SI&Z)QEQEQEO? 
z/kUCxSľ Pe[H/mOKd `.'R4ρ2TTW*L %Y1.s=F[j_:s$+8 F\1,g91ȅT}Exo'R4ρ2UᏋS?_-nEAj#2Ba#sX)%@TWϾ'1mMFi"s-9E\BC,9C*L %@E|ᏋS?_-nEAj#2Ba#sX)%A\]/Q[Xo?h-YcSm ||(KF "X) cY>}1]+9Vzuj~ZiCl<)YPEİ\O_y=5 CbK Vs&j-S$nЬN^ 3o$6̕J>Ck?PQ^ 3o$6̕J>Ck?PQ^ 3o$6̕g|\x +=:5i?h-XZܴ\!6d@n"bX(TW*L %G'R4ρ2TTW*L %Y1O4~4u:4WՒɤ\2DN )# O_i!d>v^_د^iWOGAj١1$vdH+o.J>7)WgHmg*Ϲ1]+9,WuZhdA|)I R ˒(` UF7YJ1s c=6]B+9zݨԿh-YduHVpŒدꚋZsgAji-@V2ac6,J>Ck?Q 3o$6̕{J>Ck?Q 3o$6̕{J>Ck?V.L|9Ũ[~5FQ6o4 NCQG!DqP AQ^ 3o$6̕J>Ck?PQ^ 3o$6̕g?:~,KV+ VXC7_ у1 |%V` UF7YJ]s co j~'jtQ:W=G8 *a4J>Ck?V.L|9Ũ[~5FQ6o4 NCQG!DqP AQ^ 3o$6̕g|\x +=:5i?h-XZܴ\!6d@n"bX(TWϾ'1mMFi"s-9E\BC,9C*L %@E|SSL=WsHQӡIh-Y.H*xQH`29O_yG1x7MǀU}?pAmdsZ[B2m6oߏ>F UF7YJO_i!d:/_^6Wљz2Y'Z]{v#xCG㯃f?-qxOCP8VwrHR5g;4/Ck?Wm 1F4Va[5j**ne}]Z%$^|k:==6PeY5{;+B8,N2kR})|)x+9KI꿴VM$j <($BvV`N և'R4ρ2U/ž i^#-"B}s?JA s kC'_]KѬQsc9Yt?( I{1|]nɎ|e6W].3YO>x[>,-_A/z1Q錨&2mPOs۹˚ښER|ݛWkfݟKh[e68M.w'K-3 g֏g/ki2|k|!g~"|x.[O+g* o\~M |{^7|0 E_`>]ǧZNtUYOQ=v3 Ȱt{IԃVzSn3WKZt|(O'/اlۿ#ۮju!DDz(ºo[NCZa֏ vlنސ~???cTvږ V;ڋnVIVk` <F۳U$O(*~WGZ9<=xiotߏIj.+s"yp7xfQOmv_Xׄ5f߲_q߁.|G~ :v*/B$YGjŢO[QEQEQEQEQEQEQEQEQEQEdxW~~%=r]"5fXEgbcn$$@iWvtK}8XaUkDiw e$qNNQ_>!'ÏǾ< ږDm<_.~#HH:|R\18k9\&#KK - | ᇍ K).4鴋?-y#># .?k]g $x/>ڎ}b>{JCC条LY(} ( ( ( ( ( ( ( ( K5 [: 屜e,Ы572RI#kHFg+7>ttr_j$T,׋fɑ6j#dI8,1!;FOAƛ_Es~ռ/'\|O&,nnmF%l*YHV붿x?Z4w NY[LB0Χm~|AKhTWŧ/|sKߴ^]\?Oh~ DNRL:oS GlO WVn_C GLmle x id,ɤFD)0c*}E|[/t˿^ u[WP[ﳴR< \4goIظ ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( (3xOچxvwj-hWWQ&vm73aqq tysș`~5[߳<eַ_i֗~U፮>a$x`wY˫l"%$׾Mswx mR}HQml[kZFRӱwl ^Tgszn־2Ɖj I, BbM!2QqEvtP+/6 *666Oc(KF0n- nNWeo֖4?߳_4+!}ez,oX9{IE2 Y݋)]<^"_ë&j@.S{/=frl0~蝟-c^"xA>@ kZz+4[ 4$4ėb{( _ ޅ=i:D6t?7vA m{gv((((((((((((((((((~=~ G|7_ WÐ<3OAavoq;,ƑA#3Q_;Jx7\g6EMD{#Z-ED^JdrV9Aoυ^y_xGUcĿ[7MkU[Vv[ydHc7y&[hȟo)K[b/+6m7m 5;X!bⷷ%HU ⾻_OHï?KDtώE4o ƚԎR?;.vrc'+ F&&$μ :'^*S ;L_<??rĞ.}O }?[ծdzcVtO~٭Ǐe05n;'/ s ϭ?=]Q C/I/J{:C4AYo xҿ3ݦ%d#_} _;+rü{ǟ_<|EazhZqͲGek o}EDZرJQ\x(R&i SHcb_??rĞ.y$?u_> kk(K>u?@!?κ#kQg/K_gV|<iO_ 1x_\z;''y˪??=]WO4[ğG/ƫΕF8R[V|o aQ>QW@kO 
7[X~hP5OIKy$?uG;''y˪-OIh?﷉~Bﭏk>/.ټ]@O0=ޚp ɗq\b5Z]c I}M;''y˪??=]W7k%?~-cV?{QZ`+2WNћHj<7_<??rĞ.ܨfxo;''y˪??=]WQ@ C/IwO9OxUTPü{ǟ_<{??rĞ.y$?u^Exo;''y˪??=]WQ@ C/IwO9OxUTPü{ǟ_<{??rĞ.y$?u^Exo;''y˪??=]WQ@ C/IwO9OxUTPü{ǟ_<{??rĞ.y$?u^Exo;''y˪??=]WQ@ C/IwO9OxUTPü{ǟ_<{??rĞ.y$?u^Exo;''y˪??=]WQ@ C/IwO9OxUTPü{ǟ_<{??rĞ.y$?u^Exo;''y˪??=]WQ@ C/IwO9OxUTPü{ǟ_<{??rĞ.y$?u^Exo;''y˪??=]WQ@ C/IwO9OxUTPü{ǟਿGg߀Z?~|kjW^0{m1h6ͫhr~^g1o" _xTlW1׵^8`Ojki^9.y!%2;k/%?U"mѴQEr'Gg>4Eውm!=GZ]Pߛ[Y':1+?+o?oڟM{ljx<{m&}Kgw$V-egDt(Ek?Wĭ?-cm΅qU<.I{}\<.@d^mׁ[:q,j,=s\kv]m[YZx)|8׼;myK73,GޥK%4X鉧*M*R_)|A@x#S9mMa)8N&X† gg r{-cjҚs֥ua0Y8W k#  7,N_$PkԐM` 0q ^wM5O<oCRpG*rVqgd~,o+AaaZoUkum|@#_M2 iCZfz>ZxQ6-o2RHp|ԝp׉B-|<]aeX|;n/o.[M(]5J,ix$)d;|tNKo4χW^'V~7yb"eFR^Qg97323o]zȿOSU[M'ix|/PZ>9KWд>ni :d37-:d#MGQ>}[VP̀$&#ž i<| }?R(.ew"ygO|g=n;Fxm]IJߕR>F4W<\'yWW_|lfB-e0I^}+ǎsiM·͡Nn$"W}u/!Fʢ {xZK9fKkVFC;g>gJ٢((ּg7 r-ꚕ܅lL;'5TN:ӊ~NMRPow?OOi2跟¨aaN%`& i}_SEPf55t4>:x1W-U|ΚOd%59%ϟӔR_&|: 8|jѢABIiz^Vb)W= i7FҾ4?'+uw+iWvc5@VEw7*qj_LoĿ/^C)nI?>v^)aۗgoZʾKf5vo#]hwn}9+TIԤ}{~RI)[*$=쿌*ToTQoS9o*V>5}'U7$wsOHe 8 ՊOk[ƷouN$xІp#yn>dkB9c*+1QW'H_VzUPItRs~W`s suIzyOCц8eM5cMT]b޻6먢<(((((((((((((((((((?~%?jiG1Zxq2i"UVkIE/'μg7CWxOA|/ ^jֶWR 2.:A/3{~>!?5_ h&׾!d^E׭t'l&hgL ,q|.2x#A ;[Ю<@|C[w&ƌ`}B'F۬:졪[k־͗ď|iB~,DGg65K#9A ?T_G_w WxwúMZ ywl{=.FGSϐ+~$|#ͥk> Y|B^ 凅.tȬ4O Ess6{ҵk&i#O)$k}T.{?>8߇:/xVYX[Y9b:7V w[vak1|yCLϒF| ?p2-t3~~t75oƅO` M/iO3?F_ڷ᪺ <GPGM>w~GG'?3cTP W~'{Fwx\iٻw;WҼk+ZRgK, xolKjzvK2"oHRVt9P^0l'⯆N ;[ދr .7mlgq3K=x,~,J_ W&yвM*G.Ju`}VBǾ&|$Y;Ou9>nio-6-SK%.So jp|d|K_~_5q-3w1hte钭SC-W w@ O (:}GN-s-K`FWȗtRllSt_ G[-fQҭkha1Dxe=|Л>?Km 4PkHmv#dV C~>:Nw u5{"h?w鬊OZcs"F?ߴ>x;~ž;W]Q4-N%͍ͤ%$}iRQ}AeMg6卵|.|ױO5f.+[hbx-.vd+ K>x巐$F%%POy.#Zw|Bڅ\Olx h˥KASV+kj@eP^ v}׃|0~)WWBI5Hl.f{a !تK棈_o^#_alj;_y5'ݱ1كٹn> gǿ ?hO hHMKɦ躴Qhl-a@Ð4"C*fhp4!6ߒ{ڬ]zע((((((hk-c__ VkԬCw Y¾2A>$l?z%y=V?)Hg=\ɒpfD'-FsEy\ygewF r\8G9ODڤ$*%S]I&,~ ߴ!]$P97>cKj]ǐI;I8xAݝs*WFK9Eߊ<hXp]CK_ŏ_ >;x>dyxGYaXQgRuWڗkK~zug(|I1vy/aUٕXb%n+Ζm2mIwi,S;qSatђ_1JN9E/菄|t]7+QK{rF(ުk/T/0V]SCr`|r&گQG$? 
v&7V|^!U1g+ 1~~ vFnCPK12.`#MSSmo=7Ml]n6vùRZѠK٫.qM}Ǘ>_w% ]t=S:/߰.%W5O:5ݟWnG\[ #F/GM>n Ǿ2tz.e{+3ַ4tв y9ytqYѵ?* g+TMtdk<(6eQms ?౟>,kះ$^<[Y1 Y'&k#@ɧ4@HBb3nX'蟱4 Oآu/S@n/%[;K)"De8UI$ |AxZ?q-Me`D[5*a LF<7+7Sw_<k#4+6O 7 Ʒ1Ŕ{d}wϏo* z;Hz f7D ,~C<8@Mq'ODċI.tԴ}6R?i,B gUF3+Fi ,M=YR%ޥ}_ILmK>>kwLq}u_A'TP+sFV&*|gkQ"AѾR HYO _ iڳ +ZQ^hh~?KXdⳏTl +ZBq?U[x t-zĖ[i}{w@`~\VJ\Z?Gz'꟧>SRt*Y8N7\ѻN2N3qx'IJ|+ԓS?;DΡaX컈$, ۚ`to¿/Ng> Cq|W&]"Gi[}dW !!2V_R)'wƶo_ٯŞ1lnb5-+Q:*np+31ݿ*Jz߬bT탫 e7+.xy7.GE+Goj ٥Jvm*ַdr LaԞg>{9@hw^$3 N Z\iUX|9RE{tЯjRR^NX̻1ɋ:oEQEQEQEQEQEQEQEQEQEQEQEQEQEQEQE~p~wσ>P|$t'1?Wj,hď 9f{n8|K(w!ea2=FA/%(S}r~s\_DP\|O/_s=CuI\*Yk"īczW_\w?R#?o ɮxCڞk##j2ZI:ۆUrC@BSҀ<Gg0<+xGm~"L٣]>4_PH2S]Wu_'X^!fgK(Heiu vH!cڏIUxnJ}gΏgmrm?"1 X!Z.||7On^h"tѼ[['t$crU8@>,|Mƺ.^>c𞚲[GcOMb '4fHΡGS_.~"~^w{w^?ͣKmZ^ym.J&5{8(wZ O[.?𗈵ƟUOԦ{&hd,EO P RV|a=o0^^!ŷX]iN۰ViHqe1O |_/,t𯃵=v쳜-ty+=܊'w-m] 4}2M;HMDX"Ј|DG1# AsxvCtЬa{%Oyc[՝Ē Y:!ec )mFZ?cEuO>I6KuX#2j1&0i6wGf'Gtz_f,RS[$Χ~lbB<G팫rZq~io?<9ڍz,ŽZFE10"S;N*_ %ĺEn2G-ǛZGNogḮ>#AEeK3?i.>b&;vƏT/Vé,M+Zs" BKE_sJ "-g@}/kh|Y.[ F <nr%8swUWßFj|)C [֧/Ŷ6_ؼP}$ n(E(/{ӿw-^/{ӿw-^(((((((((=mͺHYd@AR0A@_C🅴iW*9db W2ԏ#k'f?Ǐ]ihщ|iN?Z:hXsc#̴̦ z~Q^>3%bj:ԟW_]KX|8OV-J2|^vS[qz9:_oZυGkck 4Ox.T 9ٛ20kAm~vUay P/J/P[jpMoʭ!'.ez {:45bmdf o4(/Qm5)(h7 ymQ+|kiuG:3|3JKQk(Ѩs&v~&)EaxOC\x[&k%ֈ3 B۾X61Qw.Yrអkٌ%xj*KfuyQL(Kj B⽔^)ԋMSN O ʤ`EPEPExo۫Gß.ZWŔ?nHjR 1ݰ |V ( ( (7/&m힟om,O$ #2dQEQ^ck SS4e<*5}WTiJ2S&+.!pNvuQEQEe/ԭQVf}8>qzc 8|;3ql-ϏtPLN;1ޠ?kݦГ{憟͵| fcefA$Ul?p7Zh+76i&X;ɯ -ͬ~Ŷ*uGΩ4؁u' LgNLn h'↿?xB!ǦxԦFJ2Pؒ gv5柶OƙCWҮ~kjw~n|GD(3x%#WoGToNtOHmSM6&xtt+-J!V{Fdi& \[~ȯd_4ͦ||׍~ L?é7Cj67767gq%=`4QEW_|t >dኤ1,+Ò"e*ggS'5w7OKZ4Oks&\DP9\eʣ*JB.M-dk_3ZT+W-(?$}лO1G|wBy?xfhX3~'FtO ׈/螋}1jdYaj/XI~!S3>t(o]ҿo'If_/w}߲xL2xS_%rg&ja4~851:Dm W }._7VG|wBy?m W Ed}._7QлO1ZPG|wBy?m W Ed}._7Ri~wǮkHa,]ȸǖ=MlQ@Q@Q@?g< S -_imSP[YG,첃޸sB/O rIͦxZ? 
i3]Dq#5%UP`?~|N`1#Iz/XQ_'ۏ}OJ.?ھxL-*٤g{=o񯈣/>.'>*h>C&lv玵s0sKEnJ5Rt;|Si^1_j_+v>ӵ-;XU/ẵIosm(xCee$0> ŰQE(((((|1šgGjog)!eʐGAk_ğl$ŝCZ ~|W?g`8ӵ2EmM;)v5?u?oدg`oon%XJ&z4?pBY,4%*Ui.YmfUL⨪Et5+RN2JIk 4쟵Mx+t4M+YQA$"D28KW?.OZPh]kiiv7A7*#u *eE~|05s5MĖ6鞛ʯБ_Shl?u OKa߇ٶCEu0ͷ|xcpЄnE)]Qvmb)n쟥%?RWWEeEД4Pt\'̮YI~ ~-WT#Myx-<+<g ȇA23 > ~ߎSLX"Q]*CusITI,jp{.pؓ+ ϣ2x3Z%ÝjM*v( :avib-)F&BrkgOڏ  Lg>#<-I|\R+0R_fnGq{:8H:s5o7IEN24oH?h'm.s[abMƖV圐 W/cS~6Ceub/`mnVKymo,JA??dWeq.o kK$B^# 2]% ~)D]/WG0^#|VV"}vv9el杌0˻xf97>\< cFʝGhͽ59==ƽ(˨8xҤ6$ct8ZVtOWZK_ 265濷'Pm?A+8k-GLfImA'6[]og\ͯ,2aS(R֤%̢rJ uo x@ѺH49w'N~6~wSYOY|5 c ̓I$|q?Gy5=*95(%$ z+OS U G[L"2f*&d%k_Qn/t{4C/6|= [Ś}~+e\;?j?ؐXxG ܈ $g5kU_3O}hT0[wg_Q)O?٪y_j6 /}X۲YfS֥fئhcz1,P-@Ce3(j:hH6,+HR6 }g~ٶρ#O_ Йq.mH+bos\- kkq/OŻ4dl1N ܥG-Gߴ9uk˝6`f*|kc F1ߌ_:q>!ԍOiNK픩ꦬԕSR=TkƇ"J3R윭ʚsMK_*^=e樚oHn.&V!AOA?(?|I=Ɲyvh%Ǘ1X! m-kG gFu)^N[em[}5>kX\(_Q)/kE~`i^h|kS>ּ2!>쭤u ExeWSqoLo1}% i15K^/D}VIy{˄|` JqҭJ9g9 b=*2}OiEEZ)|%]kgrStaT`t_0'1P?2-s>%Y4!mJdiðss_wj[~g?ҽ-6R%y1 -!z2W?_JD_LWR.mjpsm~p8.O$~۳GmL1C6^RZm._䓌⥷4$kEWI|7մ7 A;J8< E3*eZl8rˉ+aIk<foeoue}"Zb.ihF;Guq8Wc7߃ͥ[^7SPm-Q)HܰP)XK⽊ +o~8Vkij4C] Oᆣ);sKIO7~-֠R0 CWſه?Ÿ牼i:26fnl:H]W @ <+~Ӷ5?gJy&7>淩]^A/;Y]o#a*$(L_QECvxGi\>/~;Rt4 awiҘ^Kd8PѕE 㿌ix>>*|;G ,u $)%awjw;!>V C麍[mmu Mo2tt`X{AYY_ h/I Wc^___τ~ga0? B"o& /bFGj&W|1ov^ 冏i~%Yi:NfVx]H(HDPPBm=_?1{}@?5؛2#1:LP+?Gud$iգɯZ ǭguSk+?TK\ƭd\kY7WS=~*E}9EwS. eմV_V^Eifj3!M?j-=%{mJ1vr1t/Wă>TՌ5m|n*SҠCi/LK ΊD;&:'Li(6?[쵮k7Zt),?haK(ӏצ(A۱_G_^}Oěz=ZwVU$%vaxh}7amm`Ǘoֲk%Ak>R? 
[binary payload omitted: unrecoverable non-text data between README.rst and multipletau/__init__.py; the beginning of the __init__.py docstring was lost with it]
>>> import multipletau >>> multipletau.__version__ '0.1.4' Usage ----- The package is straightforward to use. Here is a quick example: >>> import numpy as np >>> import multipletau >>> a = np.linspace(2,5,42) >>> v = np.linspace(1,6,42) >>> multipletau.correlate(a, v, m=2) array([[ 0. , 569.56097561], [ 1. , 549.87804878], [ 2. , 530.37477692], [ 4. , 491.85812017], [ 8. , 386.39500297]]) """ from .core import autocorrelate, correlate, correlate_numpy # noqa: F401 from ._version import version as __version__ # noqa: F401 __author__ = u"Paul Müller" __license__ = "BSD (3 clause)" multipletau-0.1.9/multipletau/_version.py000066400000000000000000000105201316472354200206360ustar00rootroot00000000000000#!/usr/bin/env python """ Determine package version for git repositories. Each time this file is imported it checks if the ".git" folder is present and if so, obtains the version from the git history using `git describe`. This information is then stored in the file `_version_save.py` which is not versioned by git, but distributed along with e.g. pypi. """ from __future__ import print_function # Put the entire script into a `True` statement and add the hint # `pragma: no cover` to ignore code coverage here.
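The module docstring above describes a three-step fallback order for determining the package version: `git describe` on the repository, then the cached `_version_save.py`, then a date derived from a file's modification time. That order can be sketched compactly as follows; `guess_version` and its parameters are illustrative stand-ins, not part of multipletau's API:

```python
import os
import subprocess
import time


def guess_version(repo_dir, cached=""):
    """Best-effort version lookup mirroring the fallback order above.

    (Hypothetical helper for illustration only.)
    """
    # 1. ask git for a tag-based description of HEAD
    try:
        out = subprocess.check_output(
            ["git", "describe", "--tags", "HEAD"],
            cwd=repo_dir, stderr=subprocess.DEVNULL)
        return out.strip().decode("ascii")
    except (OSError, subprocess.CalledProcessError):
        pass
    # 2. fall back to a version string cached at sdist time
    if cached:
        return cached
    # 3. last resort: a date-based version from the directory mtime
    mtime = os.stat(repo_dir).st_mtime
    return time.strftime("%Y.%m.%d-%H-%M-%S", time.gmtime(mtime))
```

The actual implementation below additionally sandboxes the environment passed to git and rewrites `_version_save.py` whenever the detected version changes.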
if True: # pragma: no cover import imp import os from os.path import join, abspath, dirname import subprocess import sys import time import traceback import warnings def git_describe(): """ Returns a string describing the version returned by the command `git describe --tags HEAD`. If it is not possible to determine the correct version, then an empty string is returned. """ # make sure we are in a directory that belongs to the correct # repository. ourdir = dirname(abspath(__file__)) def _minimal_ext_cmd(cmd): # construct minimal environment env = {} for k in ['SYSTEMROOT', 'PATH']: v = os.environ.get(k) if v is not None: env[k] = v # LANGUAGE is used on win32 env['LANGUAGE'] = 'C' env['LANG'] = 'C' env['LC_ALL'] = 'C' cmd = subprocess.Popen(cmd, stdout=subprocess.PIPE, env=env) out = cmd.communicate()[0] return out # change directory olddir = abspath(os.curdir) os.chdir(ourdir) try: out = _minimal_ext_cmd(['git', 'describe', '--tags', 'HEAD']) git_revision = out.strip().decode('ascii') except OSError: git_revision = "" # go back to original directory os.chdir(olddir) return git_revision def load_version(versionfile): """ load version from version_save.py """ longversion = "" try: _version_save = imp.load_source("_version_save", versionfile) longversion = _version_save.longversion except BaseException: try: from ._version_save import longversion except BaseException: try: from _version_save import longversion except BaseException: pass return longversion def save_version(version, versionfile): """ save version to version_save.py """ data = "#!/usr/bin/env python\n" \ + "# This file was created automatically\n" \ + "longversion = '{VERSION}'\n" try: with open(versionfile, "w") as fd: fd.write(data.format(VERSION=version)) except BaseException: msg = "Could not write package version to {}.".format(versionfile) warnings.warn(msg) versionfile = join(dirname(abspath(__file__)), "_version_save.py") # Determine the accurate version longversion = "" # 1. 
git describe try: # Get the version using `git describe` longversion = git_describe() except BaseException: pass # 2. previously created version file if longversion == "": # Either this is this is not a git repository or we are in the # wrong git repository. # Get the version from the previously generated `_version_save.py` longversion = load_version(versionfile) # 3. last resort: date if longversion == "": print("Could not determine version. Reason:") print(traceback.format_exc()) ctime = os.stat(__file__)[8] longversion = time.strftime("%Y.%m.%d-%H-%M-%S", time.gmtime(ctime)) print("Using creation time as version: {}".format(longversion)) if not hasattr(sys, 'frozen'): # Save the version to `_version_save.py` to allow distribution using # `python setup.py sdist`. # This is only done if the program is not frozen (with e.g. # pyinstaller), if longversion != load_version(versionfile): save_version(longversion, versionfile) # PEP 440-conform development version: version = ".dev".join(longversion.split("-")[:2]) multipletau-0.1.9/multipletau/core.py000077500000000000000000000402411316472354200177500ustar00rootroot00000000000000#!/usr/bin/python # -*- coding: utf-8 -*- """ A multiple-τ algorithm for Python 2.7 and 3.x. Copyright (c) 2014 Paul Müller Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of multipletau nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. 
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL INFRAE OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ from __future__ import division import numpy as np import warnings __all__ = ["autocorrelate", "correlate", "correlate_numpy"] def autocorrelate(a, m=16, deltat=1, normalize=False, copy=True, dtype=None): """ Autocorrelation of a 1-dimensional sequence on a log2-scale. This computes the correlation similar to :py:func:`numpy.correlate` for positive :math:`k` on a base 2 logarithmic scale. :func:`numpy.correlate(a, a, mode="full")[len(a)-1:]` :math:`z_k = \Sigma_n a_n a_{n+k}` Parameters ---------- a : array-like input sequence m : even integer defines the number of points on one level, must be an even integer deltat : float distance between bins normalize : bool normalize the result to the square of the average input signal and the factor :math:`M-k`. copy : bool copy input array, set to ``False`` to save memory dtype : object to be converted to a data type object The data type of the returned array and of the accumulator for the multiple-tau computation. Returns ------- autocorrelation : ndarray of shape (N,2) the lag time (1st column) and the autocorrelation (2nd column). Notes ----- .. versionchanged :: 0.1.6 Compute the correlation for zero lag time. 
The algorithm computes the correlation with the convention of the curve decaying to zero. For experiments like e.g. fluorescence correlation spectroscopy, the signal can be normalized to :math:`M-k` by invoking ``normalize = True``. For normalizing according to the behavior of :py:func:`numpy.correlate`, use ``normalize = False``. For complex arrays, this method falls back to the method :func:`correlate`. Examples -------- >>> from multipletau import autocorrelate >>> autocorrelate(range(42), m=2, dtype=np.float_) array([[ 0.00000000e+00, 2.38210000e+04], [ 1.00000000e+00, 2.29600000e+04], [ 2.00000000e+00, 2.21000000e+04], [ 4.00000000e+00, 2.03775000e+04], [ 8.00000000e+00, 1.50612000e+04]]) """ assert isinstance(copy, bool) assert isinstance(normalize, bool) if dtype is None: dtype = np.dtype(a[0].__class__) else: dtype = np.dtype(dtype) # Complex data if dtype.kind == "c": # run cross-correlation return correlate(a=a, v=a, m=m, deltat=deltat, normalize=normalize, copy=copy, dtype=dtype) elif dtype.kind != "f": warnings.warn("Input dtype is not float; casting to np.float_!") dtype = np.dtype(np.float_) # If copy is false and dtype is the same as the input array, # then this line does not have an effect: trace = np.array(a, dtype=dtype, copy=copy) # Check parameters if m // 2 != m / 2: mold = m m = np.int_((m // 2 + 1) * 2) warnings.warn("Invalid value of m={}. Using m={} instead" .format(mold, m)) else: m = np.int_(m) N = N0 = trace.shape[0] # Find out the length of the correlation function. # The integer k defines how many times we can average over # two neighboring array elements in order to obtain an array of # length just larger than m. 
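The comments above fix the two quantities that size the output: the number of halving levels `k` and the number of correlation channels `lenG`. The arithmetic can be checked in isolation with a small stand-alone sketch (the helper name is made up for illustration, not part of multipletau):

```python
import numpy as np


def multipletau_lengths(N, m=16):
    """Return (k, lenG) for a trace of length N and m points per level,
    following the formulas used in `autocorrelate` below.
    (Illustrative helper, not part of the package.)
    """
    # k: how often the trace can be halved before it drops below m
    k = int(np.floor(np.log2(N / m)))
    # m + 1 channels on the first level, m // 2 on each further level
    lenG = m + k * (m // 2) + 1
    return k, lenG


# e.g. a trace of 2**10 samples with the default m = 16:
k, lenG = multipletau_lengths(2**10, m=16)  # k = 6, lenG = 65
```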
k = np.int_(np.floor(np.log2(N / m))) # In the base2 multiple-tau scheme, the length of the correlation # array is (only taking into account values that are computed from # traces that are just larger than m): lenG = m + k * (m // 2) + 1 G = np.zeros((lenG, 2), dtype=dtype) normstat = np.zeros(lenG, dtype=dtype) normnump = np.zeros(lenG, dtype=dtype) traceavg = np.average(trace) # We use the fluctuation of the signal around the mean if normalize: trace -= traceavg assert traceavg != 0, "Cannot normalize: Average of `a` is zero!" # Otherwise the following for-loop will fail: assert N >= 2 * m, "len(a) must be larger than 2m!" # Calculate autocorrelation function for first m+1 bins # Discrete convolution of m elements for n in range(0, m + 1): G[n, 0] = deltat * n # This is the computationally intensive step G[n, 1] = np.sum(trace[:N - n] * trace[n:]) normstat[n] = N - n normnump[n] = N # Now that we calculated the first m elements of G, let us # go on with the next m/2 elements. # Check if len(trace) is even: if N % 2 == 1: N -= 1 # Add up every second element trace = (trace[:N:2] + trace[1:N:2]) / 2 N //= 2 # Start iteration for each m/2 values for step in range(1, k + 1): # Get the next m/2 values via correlation of the trace for n in range(1, m // 2 + 1): npmd2 = n + m // 2 idx = m + n + (step - 1) * m // 2 if len(trace[:N - npmd2]) == 0: # This is a shortcut that stops the iteration once the # length of the trace is too small to compute a # correlation. The actual length of the correlation function # does not only depend on k - We also must be able to # perform the sum with respect to k for all elements. # For small N, the sum over zero elements would be # computed here. # # One could make this for-loop go up to maxval, where # maxval1 = int(m/2) # maxval2 = int(N-m/2-1) # maxval = min(maxval1, maxval2) # However, we then would also need to find out which # element in G is the last element...
G = G[:idx - 1] normstat = normstat[:idx - 1] normnump = normnump[:idx - 1] # Note that this break only breaks out of the current # for loop. However, we are already in the last loop # of the step-for-loop. That is because we calculated # k in advance. break else: G[idx, 0] = deltat * npmd2 * 2**step # This is the computationally intensive step G[idx, 1] = np.sum(trace[:N - npmd2] * trace[npmd2:]) normstat[idx] = N - npmd2 normnump[idx] = N # Check if len(trace) is even: if N % 2 == 1: N -= 1 # Add up every second element trace = (trace[:N:2] + trace[1:N:2]) / 2 N //= 2 if normalize: G[:, 1] /= traceavg**2 * normstat else: G[:, 1] *= N0 / normnump return G def correlate(a, v, m=16, deltat=1, normalize=False, copy=True, dtype=None): """ Cross-correlation of two 1-dimensional sequences on a log2-scale. This computes the cross-correlation similar to :py:func:`numpy.correlate` for positive :math:`k` on a base 2 logarithmic scale. :func:`numpy.correlate(a, v, mode="full")[len(a)-1:]` :math:`z_k = \Sigma_n a_n v_{n+k}` Note that only the correlation in the positive direction is computed. To obtain the correlation for negative lag times swap the input variables ``a`` and ``v``. Parameters ---------- a, v : array-like input sequences with equal length m : even integer defines the number of points on one level, must be an even integer deltat : float distance between bins normalize : bool normalize the result to the square of the average input signal and the factor :math:`M-k`. copy : bool copy input array, set to ``False`` to save memory dtype : object to be converted to a data type object The data type of the returned array and of the accumulator for the multiple-tau computation. Returns ------- cross_correlation : ndarray of shape (N,2) the lag time (column 1) and the cross-correlation (column 2). Notes ----- .. versionchanged :: 0.1.6 Compute the correlation for zero lag time and correctly normalize the correlation for a complex input sequence `v`.
The algorithm computes the correlation with the convention of the curve decaying to zero. For experiments like e.g. fluorescence correlation spectroscopy, the signal can be normalized to :math:`M-k` by invoking ``normalize = True``. For normalizing according to the behavior of :py:func:`numpy.correlate`, use ``normalize = False``. Examples -------- >>> from multipletau import correlate >>> correlate(range(42), range(1,43), m=2, dtype=np.float_) array([[ 0.00000000e+00, 2.46820000e+04], [ 1.00000000e+00, 2.38210000e+04], [ 2.00000000e+00, 2.29600000e+04], [ 4.00000000e+00, 2.12325000e+04], [ 8.00000000e+00, 1.58508000e+04]]) """ assert isinstance(copy, bool) assert isinstance(normalize, bool) # See `autocorrelation` for better documented code. traceavg1 = np.average(v) traceavg2 = np.average(a) if normalize: assert traceavg1 != 0, "Cannot normalize: Average of `v` is zero!" assert traceavg2 != 0, "Cannot normalize: Average of `a` is zero!" if dtype is None: dtype = np.dtype(v[0].__class__) dtype2 = np.dtype(a[0].__class__) if dtype != dtype2: if dtype.kind == "c" or dtype2.kind == "c": # The user might try to combine complex64 and float128. warnings.warn( "Input dtypes not equal; casting to np.complex_!") dtype = np.dtype(np.complex_) else: warnings.warn("Input dtypes not equal; casting to np.float_!") dtype = np.dtype(np.float_) else: dtype = np.dtype(dtype) if dtype.kind not in ["c", "f"]: warnings.warn("Input dtype is not float; casting to np.float_!") dtype = np.dtype(np.float_) trace1 = np.array(v, dtype=dtype, copy=copy) # Prevent traces from overwriting each other if a is v: # Force copying trace 2 copy = True trace2 = np.array(a, dtype=dtype, copy=copy) assert trace1.shape[0] == trace2.shape[0], "`a`,`v` must have same length!" # Complex data if dtype.kind == "c": np.conjugate(trace1, out=trace1) # Check parameters if m // 2 != m / 2: mold = m m = np.int_(m // 2 + 1) * 2 warnings.warn("Invalid value of m={}. 
Using m={} instead" .format(mold, m)) else: m = np.int_(m) N = N0 = trace1.shape[0] # Find out the length of the correlation function. # The integer k defines how many times we can average over # two neighboring array elements in order to obtain an array of # length just larger than m. k = np.int_(np.floor(np.log2(N / m))) # In the base2 multiple-tau scheme, the length of the correlation # array is (only taking into account values that are computed from # traces that are just larger than m): lenG = m + k * m // 2 + 1 G = np.zeros((lenG, 2), dtype=dtype) normstat = np.zeros(lenG, dtype=dtype) normnump = np.zeros(lenG, dtype=dtype) # We use the fluctuation of the signal around the mean if normalize: trace1 -= np.conj(traceavg1) trace2 -= traceavg2 # Otherwise the following for-loop will fail: assert N >= 2 * m, "len(a) must be larger than 2m!" # Calculate autocorrelation function for first m+1 bins for n in range(0, m + 1): G[n, 0] = deltat * n G[n, 1] = np.sum(trace1[:N - n] * trace2[n:]) normstat[n] = N - n normnump[n] = N # Check if len(trace) is even: if N % 2 == 1: N -= 1 # Add up every second element trace1 = (trace1[:N:2] + trace1[1:N:2]) / 2 trace2 = (trace2[:N:2] + trace2[1:N:2]) / 2 N //= 2 for step in range(1, k + 1): # Get the next m/2 values of the trace for n in range(1, m // 2 + 1): npmd2 = (n + m // 2) idx = m + n + (step - 1) * m // 2 if len(trace1[:N - npmd2]) == 0: # Abort G = G[:idx - 1] normstat = normstat[:idx - 1] normnump = normnump[:idx - 1] break else: G[idx, 0] = deltat * npmd2 * 2**step G[idx, 1] = np.sum( trace1[:N - npmd2] * trace2[npmd2:]) normstat[idx] = N - npmd2 normnump[idx] = N # Check if len(trace) is even: if N % 2 == 1: N -= 1 # Add up every second element trace1 = (trace1[:N:2] + trace1[1:N:2]) / 2 trace2 = (trace2[:N:2] + trace2[1:N:2]) / 2 N //= 2 if normalize: G[:, 1] /= traceavg1 * traceavg2 * normstat else: G[:, 1] *= N0 / normnump return G def correlate_numpy(a, v, deltat=1, normalize=False, dtype=None, copy=True): """ 
Convenience function that wraps around :py:func:`numpy.correlate` and returns the correlation in the same format as :func:`correlate` does. Parameters ---------- a, v : array-like input sequences deltat : float distance between bins normalize : bool normalize the result to the square of the average input signal and the factor :math:`M-k`. The resulting curve follows the convention of decaying to zero for large lag times. copy : bool copy input array, set to ``False`` to save memory dtype : object to be converted to a data type object The data type of the returned array. Returns ------- cross_correlation : ndarray of shape (N,2) the lag time (column 1) and the cross-correlation (column 2). Notes ----- .. versionchanged :: 0.1.6 Removed false normalization when `normalize==False`. """ ab = np.array(a, dtype=dtype, copy=copy) vb = np.array(v, dtype=dtype, copy=copy) assert ab.shape[0] == vb.shape[0], "`a`,`v` must have same length!" avg = np.average(ab) vvg = np.average(vb) if normalize: ab -= avg vb -= vvg assert avg != 0, "Cannot normalize: Average of `a` is zero!" assert vvg != 0, "Cannot normalize: Average of `v` is zero!" Gd = np.correlate(ab, vb, mode="full")[len(ab) - 1:] if normalize: N = len(Gd) m = N - np.arange(N) Gd /= m * avg * vvg G = np.zeros((len(Gd), 2), dtype=dtype) G[:, 1] = Gd G[:, 0] = np.arange(len(Gd)) * deltat return G multipletau-0.1.9/setup.cfg000066400000000000000000000000651316472354200157170ustar00rootroot00000000000000[aliases] test = pytest [bdist_wheel] universal = 1 multipletau-0.1.9/setup.py000066400000000000000000000024401316472354200156070ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- from os.path import exists, dirname, realpath from setuptools import setup import sys author = u"Paul Müller" authors = [author] description = 'A multiple-tau algorithm for Python/NumPy.' 
name = 'multipletau' year = "2013" sys.path.insert(0, realpath(dirname(__file__))+"/"+name) from _version import version if __name__ == "__main__": setup( name=name, author=author, author_email='dev@craban.de', url='https://github.com/FCS-analysis/multipletau', version=version, packages=[name], package_dir={name: name}, license="BSD (3 clause)", description=description, long_description=open('README.rst').read() if exists('README.rst') else '', install_requires=["numpy >= 1.5.1"], keywords=["multiple tau", "fluorescence correlation spectroscopy"], setup_requires=['pytest-runner'], tests_require=["pytest"], classifiers= [ 'Operating System :: OS Independent', 'Programming Language :: Python :: 2', 'Programming Language :: Python :: 3', 'Topic :: Scientific/Engineering :: Visualization', 'Intended Audience :: Science/Research' ], platforms=['ALL'] ) multipletau-0.1.9/tests/000077500000000000000000000000001316472354200152375ustar00rootroot00000000000000multipletau-0.1.9/tests/README.md000066400000000000000000000003061316472354200165150ustar00rootroot00000000000000### Test Scripts Execute all tests using `setup.py` in the parent directory: python setup.py test ### Running single tests Directly execute the scripts, e.g. 
python test_basic.py multipletau-0.1.9/tests/data/000077500000000000000000000000001316472354200161505ustar00rootroot00000000000000
[binary NumPy reference data omitted (unrecoverable as text): tests/data/test_autocorrelate.py_test_ac_m.npy, tests/data/test_autocorrelate.py_test_ac_simple.npy, tests/data/test_correlate.py_test_cc_m.npy, tests/data/test_correlate.py_test_cc_simple.npy]
multipletau-0.1.9/tests/test_ac_cc.py #!/usr/bin/env python # -*- coding: utf-8 -*- """ Tests correlation-autocorrelation identity """ from __future__ import division, print_function import numpy as np import os from os.path import abspath, basename, dirname, join, split, exists import platform import sys import warnings import zipfile # Add parent directory to beginning of path variable DIR = dirname(abspath(__file__)) sys.path = [split(DIR)[0]] + sys.path import multipletau from test_autocorrelate import get_sample_arrays def test_ac_cc_m(): myframe = sys._getframe() myname = myframe.f_code.co_name print("running ", myname) arrs = get_sample_arrays() ms =
[8, 16, 32, 64, 128] a = np.concatenate(arrs) res = [] for m in ms: r = multipletau.autocorrelate(a=a, m=m, deltat=1, normalize=False, copy=True, dtype=np.float_) res.append(r) res = np.concatenate(res) rescc = [] for m in ms: r = multipletau.correlate(a=a, v=a, m=m, deltat=1, normalize=False, copy=True, dtype=np.float_) rescc.append(r) # test minimal length of array _r2 = multipletau.correlate(a=a[:2*m], v=a[:2*m], m=m, deltat=1, normalize=False, copy=True, dtype=np.float_) rescc = np.concatenate(rescc) assert np.all(res==rescc) def test_ac_cc_normalize(): myframe = sys._getframe() myname = myframe.f_code.co_name print("running ", myname) arrs = get_sample_arrays() res = [] for a in arrs: r = multipletau.autocorrelate(a=a, m=16, deltat=1, normalize=True, copy=True, dtype=np.float_) res.append(r) res = np.concatenate(res) rescc = [] for a in arrs: r = multipletau.correlate(a=a, v=a, m=16, deltat=1, normalize=True, copy=True, dtype=np.float_) rescc.append(r) rescc = np.concatenate(rescc) assert np.all(res==rescc) def test_ac_cc_simple(): myframe = sys._getframe() myname = myframe.f_code.co_name print("running ", myname) arrs = get_sample_arrays() rescc = [] for a in arrs: r = multipletau.correlate(a=a, v=a, m=16, deltat=1, normalize=False, copy=True, dtype=np.float_) rescc.append(r) rescc = np.concatenate(rescc) resac = [] for a in arrs: r = multipletau.autocorrelate(a=a, m=16, deltat=1, normalize=False, copy=True, dtype=np.float_) resac.append(r) resac = np.concatenate(resac) assert np.all(resac==rescc) if __name__ == "__main__": # Run all tests loc = locals() for key in list(loc.keys()): if key.startswith("test_") and hasattr(loc[key], "__call__"): loc[key]() multipletau-0.1.9/tests/test_autocorrelate.py000066400000000000000000000175541316472354200215350ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- """ Tests autocorrelation algorithm """ from __future__ import division, print_function import numpy as np import os from os.path import 
abspath, basename, dirname, join, split, exists import platform import sys import warnings import zipfile # Add parent directory to beginning of path variable DIR = dirname(abspath(__file__)) sys.path = [split(DIR)[0]] + sys.path import multipletau def get_reference_data(funcname, pyfile): adir = os.path.dirname(pyfile)+"/data/" aname = os.path.basename(pyfile)+"_"+funcname+".npy" return np.load(adir + aname) def get_sample_arrays(): a = [-4.3, 1, 9, -99.2, 13] b = [9921, 281, 23.5, 5.3, 77] l = [ 33, 92, 47, 54, 99] r = [ 0, 1, 12, 4, 0] p = [ 1, 4, .5, 2, 3] arrs = [] for ai, bi, li, ri, pi in zip(a,b,l,r,p): x = np.linspace(ai,bi,li) arr = (x*np.roll(x,ri))**pi arrs.append(arr) return arrs def test_ac_copy(): myframe = sys._getframe() myname = myframe.f_code.co_name print("running ", myname) arrs = get_sample_arrays() res1 = [] for a in arrs: r = multipletau.autocorrelate(a=a, m=16, deltat=1, normalize=True, copy=True, dtype=np.float_) res1.append(r) res2 = [] for a in arrs: r = multipletau.autocorrelate(a=a, m=16, deltat=1, normalize=True, copy=False, dtype=np.float_) res2.append(r) # simple test if result is the same assert np.all(np.concatenate(res1) == np.concatenate(res2)) arrs = np.concatenate(arrs) refarrs = np.concatenate(get_sample_arrays()) # make sure the copy function really changes something assert not np.all(arrs == refarrs) def test_ac_dtype(): myframe = sys._getframe() myname = myframe.f_code.co_name print("running ", myname) a = np.round(get_sample_arrays()[0]) # integer rf = multipletau.autocorrelate(a=a, m=16, deltat=1, normalize=True, copy=True, dtype=np.float_) ri = multipletau.autocorrelate(a=a, m=16, deltat=1, normalize=True, copy=True, dtype=np.uint) ri2 = multipletau.autocorrelate(a=np.array(a, dtype=np.uint), m=16, deltat=1, normalize=True, copy=True, dtype=None) assert ri.dtype == np.dtype(np.float_), "if wrong dtype, dtype should default to np.float_" assert ri2.dtype == np.dtype(np.float_), "if wrong dtype, dtype should default to 
np.float_" assert np.all(rf == ri), "result should be the same, because input us the same" assert np.all(rf == ri2), "result should be the same, because input us the same" def test_ac_m(): myframe = sys._getframe() myname = myframe.f_code.co_name print("running ", myname) arrs = get_sample_arrays() ms = [8, 16, 32, 64, 128] a = np.concatenate(arrs) res = [] for m in ms: r = multipletau.autocorrelate(a=a, m=m, deltat=1, normalize=False, copy=True, dtype=np.float_) res.append(r) # test minimal length of array _r2 = multipletau.autocorrelate(a=a[:2*m], m=m, deltat=1, normalize=False, copy=True, dtype=np.float_) res = np.concatenate(res) #np.save(os.path.dirname(__file__)+"/data/"+os.path.basename(__file__)+"_"+myname+".npy", res) ref = get_reference_data(myname, __file__) assert np.allclose(res, ref, atol=0, rtol=1e-15) def test_ac_m_wrong(): myframe = sys._getframe() myname = myframe.f_code.co_name print("running ", myname) a = get_sample_arrays()[0] # integer r1 = multipletau.autocorrelate(a=a, m=16, deltat=1, normalize=True, copy=True, dtype=np.float_) r2 = multipletau.autocorrelate(a=a, m=15, deltat=1, normalize=True, copy=True, dtype=np.float_) r3 = multipletau.autocorrelate(a=a, m=15.5, deltat=1, normalize=True, copy=True, dtype=np.float_) r4 = multipletau.autocorrelate(a=a, m=14.5, deltat=1, normalize=True, copy=True, dtype=np.float_) r5 = multipletau.autocorrelate(a=a, m=16., deltat=1, normalize=True, copy=True, dtype=np.float_) assert np.all(r1==r2) assert np.all(r1==r3) assert np.all(r1==r4) assert np.all(r1==r5) def test_ac_normalize(): myframe = sys._getframe() myname = myframe.f_code.co_name print("running ", myname) arrs = get_sample_arrays() res = [] for a in arrs: r = multipletau.autocorrelate(a=a, m=16, deltat=1, normalize=True, copy=True, dtype=np.float_) res.append(r) res = np.concatenate(res) #np.save(os.path.dirname(__file__)+"/data/"+os.path.basename(__file__)+"_"+myname+".npy", res) ref = get_reference_data(myname, __file__) assert 
np.allclose(res, ref, atol=0, rtol=1e-14) def test_ac_simple(): myframe = sys._getframe() myname = myframe.f_code.co_name print("running ", myname) arrs = get_sample_arrays() res = [] for a in arrs: r = multipletau.autocorrelate(a=a, m=16, deltat=1, normalize=False, copy=True, dtype=np.float_) res.append(r) res = np.concatenate(res) #np.save(os.path.dirname(__file__)+"/data/"+os.path.basename(__file__)+"_"+myname+".npy", res) ref = get_reference_data(myname, __file__) assert np.allclose(res, ref, atol=0, rtol=1e-15) if __name__ == "__main__": # Run all tests loc = locals() for key in list(loc.keys()): if key.startswith("test_") and hasattr(loc[key], "__call__"): loc[key]() multipletau-0.1.9/tests/test_basic.py000066400000000000000000000024221316472354200177310ustar00rootroot00000000000000#!/usr/bin/python # -*- coding: utf-8 -*- """ basic tests also available in the function docs """ import numpy as np from os.path import abspath, dirname, join import sys sys.path.insert(0, dirname(dirname(abspath(__file__)))) from multipletau import autocorrelate, correlate def test_ac(): ist = autocorrelate(range(42), m=2, dtype=np.float_) soll = np.array([[ 0.00000000e+00, 2.38210000e+04], [ 1.00000000e+00, 2.29600000e+04], [ 2.00000000e+00, 2.21000000e+04], [ 4.00000000e+00, 2.03775000e+04], [ 8.00000000e+00, 1.50612000e+04]]) assert np.allclose(soll, ist) def test_cc(): ist = correlate(range(42), range(1,43), m=2, dtype=np.float_) soll = np.array([[ 0.00000000e+00, 2.46820000e+04], [ 1.00000000e+00, 2.38210000e+04], [ 2.00000000e+00, 2.29600000e+04], [ 4.00000000e+00, 2.12325000e+04], [ 8.00000000e+00, 1.58508000e+04]]) assert np.allclose(soll, ist) if __name__ == "__main__": # Run all tests loc = locals() for key in list(loc.keys()): if key.startswith("test_") and hasattr(loc[key], "__call__"): loc[key]() multipletau-0.1.9/tests/test_correlate.py000066400000000000000000000223321316472354200206320ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- """ 
Tests correlation algorithm """ from __future__ import division, print_function import numpy as np import os from os.path import abspath, basename, dirname, join, split, exists import platform import sys import warnings import zipfile # Add parent directory to beginning of path variable DIR = dirname(abspath(__file__)) sys.path = [split(DIR)[0]] + sys.path import multipletau from test_autocorrelate import get_reference_data def get_sample_arrays_cplx(): a = [-4.3, 1, 9, -99.2, 13] b = [9921, 281, 23.5, 5.3, 77] c = [ 12, 0, 2.1, 1.3, 33] d = [ 32, .1, -2, 6.3, 88] l = [ 33, 92, 47, 54, 99] r = [ 0, 1, 12, 4, 0] p = [ 1, 4, .5, 2, 3] arrs = [] for ai, bi, ci, di, li, ri, pi in zip(a,b,c,d,l,r,p): x = np.linspace(ai,bi,li) y = np.linspace(ci,di,li) arr = (x*np.roll(x,ri))**pi + 1j*y arrs.append(arr) return arrs def test_cc_copy(): myframe = sys._getframe() myname = myframe.f_code.co_name print("running ", myname) arrs = get_sample_arrays_cplx() res1 = [] for a in arrs: r = multipletau.correlate(a=a, v=a, m=16, deltat=1, normalize=True, copy=True) res1.append(r) res2 = [] for a in arrs: r = multipletau.correlate(a=a, v=a, m=16, deltat=1, normalize=True, copy=False) res2.append(r) # simple test if result is the same assert np.all(np.concatenate(res1) == np.concatenate(res2)) arrs = np.concatenate(arrs) refarrs = np.concatenate(get_sample_arrays_cplx()) # make sure the copy function really changes something assert not np.all(arrs == refarrs) def test_cc_dtype(): myframe = sys._getframe() myname = myframe.f_code.co_name print("running ", myname) a = np.round(get_sample_arrays_cplx()[0].real) # integer rf = multipletau.correlate(a=a, v=a, m=16, deltat=1, normalize=True, copy=True, dtype=np.float_) ri = multipletau.correlate(a=a, v=a, m=16, deltat=1, normalize=True, copy=True, dtype=np.int_) ri2 = multipletau.correlate(a=np.array(a, dtype=np.int_), v=np.array(a, dtype=np.int_), m=16, deltat=1, normalize=True, copy=True, dtype=None) assert ri.dtype == np.dtype(np.float_), 
"if wrong dtype, dtype should default to np.float_" assert ri2.dtype == np.dtype(np.float_), "if wrong dtype, dtype should default to np.float_" assert np.all(rf == ri), "result should be the same, because input us the same" assert np.all(rf == ri2), "result should be the same, because input us the same" def test_cc_dtype2(): myframe = sys._getframe() myname = myframe.f_code.co_name print("running ", myname) a = np.round(get_sample_arrays_cplx()[0]) print("this should issue a warning of unequal input dtypes, casting to complex") rf = multipletau.correlate(a=a.real, v=a, m=16, deltat=1, normalize=True, copy=True) assert np.dtype(rf.dtype) == np.dtype(np.complex_) print("this should issue a warning of unequal input dtypes, casting to float") rf2 = multipletau.correlate(a=a.real, v=np.array(a.imag, dtype=np.int_), m=16, deltat=1, normalize=True, copy=True) assert np.dtype(rf2.dtype) == np.dtype(np.float_) def test_cc_m(): myframe = sys._getframe() myname = myframe.f_code.co_name print("running ", myname) arrs = get_sample_arrays_cplx() ms = [4, 8, 10, 16, 20, 64, 128] a = np.concatenate(arrs) res = [] for m in ms: r = multipletau.correlate(a=a, v=a, m=m, deltat=1, normalize=False, copy=True, dtype=np.complex_) res.append(r) # test minimal length of array _r2 = multipletau.correlate(a=a[:2*m], v=a[:2*m], m=m, deltat=1, normalize=False, copy=True, dtype=np.complex_) res = np.concatenate(res) #np.save(os.path.dirname(__file__)+"/data/"+os.path.basename(__file__)+"_"+myname+".npy", res) ref = get_reference_data(myname, __file__) assert np.allclose(res, ref, atol=0, rtol=1e-15) def test_cc_m_wrong(): myframe = sys._getframe() myname = myframe.f_code.co_name print("running ", myname) a = get_sample_arrays_cplx()[0] # integer r1 = multipletau.correlate(a=a, v=a, m=16, deltat=1, normalize=True, copy=True) r2 = multipletau.correlate(a=a, v=a, m=15, deltat=1, normalize=True, copy=True) r3 = multipletau.correlate(a=a, v=a, m=15.5, deltat=1, normalize=True, copy=True) r4 = 
multipletau.correlate(a=a, v=a, m=14.5, deltat=1, normalize=True, copy=True) r5 = multipletau.correlate(a=a, v=a, m=16., deltat=1, normalize=True, copy=True) assert np.all(r1==r2) assert np.all(r1==r3) assert np.all(r1==r4) assert np.all(r1==r5) def test_cc_normalize(): myframe = sys._getframe() myname = myframe.f_code.co_name print("running ", myname) arrs = get_sample_arrays_cplx() res = [] for a in arrs: r = multipletau.correlate(a=a.real, v=a.imag, m=16, deltat=1, normalize=True, copy=True, dtype=np.float_) res.append(r) res = np.concatenate(res) #np.save(os.path.dirname(__file__)+"/data/"+os.path.basename(__file__)+"_"+myname+".npy", res) ref = get_reference_data(myname, __file__) assert np.allclose(res, ref, atol=0, rtol=1e-14) def test_cc_simple(): myframe = sys._getframe() myname = myframe.f_code.co_name print("running ", myname) arrs = get_sample_arrays_cplx() res = [] for a in arrs: r = multipletau.correlate(a=a, v=a, m=16, deltat=1, normalize=False, copy=True, dtype=np.complex_) res.append(r) res = np.concatenate(res) #np.save(os.path.dirname(__file__)+"/data/"+os.path.basename(__file__)+"_"+myname+".npy", res) ref = get_reference_data(myname, __file__) assert np.allclose(res, ref, atol=0, rtol=1e-15) # also check result of autocorrelate res2 = [] for a in arrs: r = multipletau.autocorrelate(a=a, m=16, deltat=1, normalize=False, copy=True, dtype=np.complex_) res2.append(r) res2 = np.concatenate(res2) assert np.allclose(res, res2, atol=0, rtol=1e-15) if __name__ == "__main__": # Run all tests loc = locals() for key in list(loc.keys()): if key.startswith("test_") and hasattr(loc[key], "__call__"): loc[key]() multipletau-0.1.9/tests/test_ref_numpy.py000066400000000000000000000135221316472354200206570ustar00rootroot00000000000000#!/usr/bin/python # -*- coding: utf-8 -*- """ Compare to numpy data. 
""" import numpy as np from os.path import abspath, dirname, join import sys sys.path.insert(0, dirname(dirname(abspath(__file__)))) import multipletau from test_correlate import get_sample_arrays_cplx def test_corresponds_ac(): myframe = sys._getframe() myname = myframe.f_code.co_name print("running ", myname) a = np.concatenate(get_sample_arrays_cplx()).real m=16 restau = multipletau.autocorrelate(a=1*a, m=m, copy=True, normalize=True, dtype=np.float_) reslin = multipletau.correlate_numpy(a=1*a, v=1*a, copy=True, normalize=True, dtype=np.float_) idx = np.array(restau[:,0].real, dtype=int)[:m] assert np.allclose(reslin[idx, 1], restau[:m,1]) def test_corresponds_ac_first_loop(): """ numpy correlation: G_m = sum_i(a_i*a_{i+m}) multipletau correlation 2nd order: b_j = (a_{2i} + a_{2i+1} / 2) G_m = sum_j(b_j*b_{j+1}) = 1/4*sum_i(a_{2i} * a_{2i+m} + a_{2i} * a_{2i+m+1} + a_{2i+1} * a_{2i+m} + a_{2i+1} * a_{2i+m+1} ) The values after the first m+1 lag times in the multipletau correlation differ from the normal correlation, because the traces are averaged over two consecutive items, effectively halving the size of the trace. The multiple-tau correlation can be compared to the regular correlation by using an even sized sequence (here 222) in which the elements 2i and 2i+1 are equal, as is done in this test. """ myframe = sys._getframe() myname = myframe.f_code.co_name print("running ", myname) a = [ arr / np.average(arr) for arr in get_sample_arrays_cplx() ] a = np.concatenate(a)[:222] # two consecutive elements are the same, so the multiple-tau method # corresponds to the numpy correlation for the first loop. 
    a[::2] = a[1::2]

    for m in [2, 4, 6, 8, 10, 12, 14, 16]:
        restau = multipletau.correlate(a=a, v=a.imag + 1j * a.real, m=m,
                                       copy=True, normalize=False,
                                       dtype=np.complex_)
        reslin = multipletau.correlate_numpy(a=a, v=a.imag + 1j * a.real,
                                             copy=True, normalize=False,
                                             dtype=np.complex_)

        idtau = np.where(restau[:, 0] == m + 2)[0][0]
        tau3 = restau[idtau, 1]  # m+1 initial bins

        idref = np.where(reslin[:, 0] == m + 2)[0][0]
        tau3ref = reslin[idref, 1]

        assert np.allclose(tau3, tau3ref)


def test_corresponds_ac_nonormalize():
    myframe = sys._getframe()
    myname = myframe.f_code.co_name
    print("running ", myname)

    a = np.concatenate(get_sample_arrays_cplx()).real
    m = 16

    restau = multipletau.autocorrelate(a=1*a, m=m, copy=True,
                                       normalize=False, dtype=np.float_)
    reslin = multipletau.correlate_numpy(a=1*a, v=1*a, copy=True,
                                         normalize=False, dtype=np.float_)

    idx = np.array(restau[:, 0].real, dtype=int)[:m+1]

    assert np.allclose(reslin[idx, 1], restau[:m+1, 1])


def test_corresponds_cc():
    myframe = sys._getframe()
    myname = myframe.f_code.co_name
    print("running ", myname)

    a = np.concatenate(get_sample_arrays_cplx())
    m = 16

    restau = multipletau.correlate(a=a, v=a.imag + 1j * a.real, m=m,
                                   copy=True, normalize=True,
                                   dtype=np.complex_)
    reslin = multipletau.correlate_numpy(a=a, v=a.imag + 1j * a.real,
                                         copy=True, normalize=True,
                                         dtype=np.complex_)

    idx = np.array(restau[:, 0].real, dtype=int)[:m+1]

    assert np.allclose(reslin[idx, 1], restau[:m+1, 1])


def test_corresponds_cc_nonormalize():
    myframe = sys._getframe()
    myname = myframe.f_code.co_name
    print("running ", myname)

    a = np.concatenate(get_sample_arrays_cplx())
    m = 16

    restau = multipletau.correlate(a=a, v=a.imag + 1j * a.real, m=m,
                                   copy=True, normalize=False,
                                   dtype=np.complex_)
    reslin = multipletau.correlate_numpy(a=a, v=a.imag + 1j * a.real,
                                         copy=True, normalize=False,
                                         dtype=np.complex_)

    idx = np.array(restau[:, 0].real, dtype=int)[:m+1]

    assert np.allclose(reslin[idx, 1], restau[:m+1, 1])


if __name__ == "__main__":
    # Run all tests
    loc = locals()
    for key in list(loc.keys()):
        if key.startswith("test_") and hasattr(loc[key], "__call__"):
            loc[key]()
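The binning identity described in the `test_corresponds_ac_first_loop` docstring can also be checked without the package itself. The sketch below is illustrative only and not part of multipletau: `linear_corr`, the random seed, and the array sizes are ad-hoc choices. It builds a trace whose consecutive elements are pairwise equal and verifies in plain NumPy that one binning step `b_j = (a_{2j} + a_{2j+1}) / 2` reproduces the linear correlation at twice the lag, up to the factor of two that comes from the halved trace length.

```python
import numpy as np


def linear_corr(x, lag):
    """Un-normalized linear correlation G(lag) = sum_i x_i * x_{i+lag}."""
    n = len(x)
    return np.sum(x[:n - lag] * x[lag:])


rng = np.random.RandomState(42)
a = np.repeat(rng.rand(111), 2)   # length 222 with a[2j] == a[2j+1]
b = 0.5 * (a[::2] + a[1::2])      # one multiple-tau binning step

for k in [1, 2, 3, 5, 8]:
    # The even-lag sum splits into an even-index and an odd-index part;
    # because consecutive samples coincide, both equal the binned sum,
    # hence the factor of two.
    assert np.allclose(linear_corr(a, 2 * k), 2 * linear_corr(b, k))
```

With pairwise-equal samples the binned trace `b` equals `a[::2]` exactly, so the identity holds to machine precision, which is why the test above can use an even-sized, element-duplicated sequence to compare the multiple-tau result against the linear correlation.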