seaborn-0.10.0/.coveragerc

[run]
omit =
    seaborn/widgets.py
    seaborn/external/*
    seaborn/colors/*
    seaborn/cm.py
    seaborn/conftest.py
    seaborn/tests/*


seaborn-0.10.0/.github/CONTRIBUTING.md

Contributing to seaborn
=======================

General support
---------------

General support questions ("how do I do <x>?") are most at home on
[StackOverflow](https://stackoverflow.com/), where they will be seen by more people and
are more easily searchable. StackOverflow has a `[seaborn]` tag, which will bring the
question to the attention of people who might be able to answer.

Reporting bugs
--------------

If you have encountered a bug in seaborn, please report it on the
[Github issue tracker](https://github.com/mwaskom/seaborn/issues/new). It is only really
possible to address bug reports if they include a reproducible script using
randomly-generated data or one of the example datasets (accessed through
`seaborn.load_dataset()`). Please also specify your versions of seaborn and matplotlib,
as well as which matplotlib backend you are using.

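For example, a minimal sketch of such a report script (the random data and the
`distplot` call are purely illustrative, not a required template):

    # minimal, self-contained script demonstrating the problem
    import numpy as np
    import matplotlib
    import matplotlib.pyplot as plt
    import seaborn as sns

    # report the library versions and backend alongside the script
    print(sns.__version__, matplotlib.__version__, matplotlib.get_backend())

    # randomly-generated data, so the script needs no local files
    rs = np.random.RandomState(0)
    x = rs.normal(size=100)

    sns.distplot(x)  # replace with the call that shows the unexpected behaviour
    plt.show()
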
New features
------------

If you think there is a new feature that should be added to seaborn, you can open an
issue to discuss it. However, seaborn's development has become increasingly
conservative, and the answer to most feature requests or proposed additions is "no".
Polite requests with an explanation of the proposed feature's virtues will usually get
an explanation; feature requests that say "I would like feature X, you need to add it"
typically won't.


seaborn-0.10.0/.gitignore

*.pyc
*.sw*
build/
.ipynb_checkpoints/
dist/
seaborn.egg-info/
.cache/
.coverage
cover/
.idea
.pytest_cache/


seaborn-0.10.0/.mailmap

Michael Waskom  mwaskom
Tal Yarkoni
Daniel B. Allan


seaborn-0.10.0/.travis.yml

language: python
dist: xenial

services:
  - xvfb

env:
  - PYTHON=3.6 DEPS=pinned  BACKEND=agg   DOCTESTS=true
  - PYTHON=3.6 DEPS=latest  BACKEND=agg   DOCTESTS=true
  - PYTHON=3.7 DEPS=latest  BACKEND=agg   DOCTESTS=true
  - PYTHON=3.8 DEPS=latest  BACKEND=agg   DOCTESTS=true
  - PYTHON=3.8 DEPS=latest  BACKEND=qtagg DOCTESTS=true
  - PYTHON=3.8 DEPS=minimal BACKEND=agg   DOCTESTS=false

before_install:
  - sudo apt-get update -yq
  - sudo sh testing/getmsfonts.sh
  - wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh
  - bash miniconda.sh -b -p $HOME/miniconda
  - export PATH="$HOME/miniconda/bin:$PATH"
  - hash -r
  - conda config --set always_yes yes --set changeps1 no
  - conda config --add channels conda-forge
  - conda update -q conda
  - conda config --set channel_priority false
  - conda info -a

install:
  - conda create -n testenv pip python=$PYTHON
  - source activate testenv
  - cat testing/deps_${DEPS}.txt testing/utils.txt > deps.txt
  - conda install --file deps.txt
  - pip install .

before_script:
  - cp testing/matplotlibrc_${BACKEND} matplotlibrc
  - if [ $BACKEND == "qtagg" ]; then export DISPLAY=:99.0; sh -e /etc/init.d/xvfb start; sleep 3; fi

script:
  - make lint
  - if [ $DOCTESTS == 'true' ]; then make coverage; else make unittests; fi

after_success:
  - pip install codecov
  - codecov


seaborn-0.10.0/LICENSE

Copyright (c) 2012-2020, Michael L. Waskom
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this
  list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright notice,
  this list of conditions and the following disclaimer in the documentation
  and/or other materials provided with the distribution.

* Neither the name of the project nor the names of its contributors may be
  used to endorse or promote products derived from this software without
  specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


seaborn-0.10.0/MANIFEST.in

include README.md
include CONTRIBUTING.md
include LICENSE
recursive-include licences *


seaborn-0.10.0/Makefile

export SHELL := /bin/bash

test:
	pytest --doctest-modules seaborn

unittests:
	pytest seaborn

coverage:
	pytest --doctest-modules --cov=seaborn --cov-config=.coveragerc seaborn

lint:
	flake8 --ignore E121,E123,E126,E226,E24,E704,E741,W503,W504 --exclude seaborn/__init__.py,seaborn/colors/__init__.py,seaborn/cm.py,seaborn/external seaborn


seaborn-0.10.0/README.md

seaborn: statistical data visualization
=======================================

--------------------------------------

[![PyPI Version](https://img.shields.io/pypi/v/seaborn.svg)](https://pypi.org/project/seaborn/)
[![License](https://img.shields.io/pypi/l/seaborn.svg)](https://github.com/mwaskom/seaborn/blob/master/LICENSE)
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.1313201.svg)](https://doi.org/10.5281/zenodo.1313201)
[![Build Status](https://travis-ci.org/mwaskom/seaborn.svg?branch=master)](https://travis-ci.org/mwaskom/seaborn)
[![Code Coverage](https://codecov.io/gh/mwaskom/seaborn/branch/master/graph/badge.svg)](https://codecov.io/gh/mwaskom/seaborn)

Seaborn is a Python visualization library based on matplotlib. It provides a high-level
interface for drawing attractive statistical graphics.

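For a sense of what that high-level interface looks like, a minimal sketch (the `tips`
example dataset and the `relplot` call are just one illustration, not the only entry
point):

    import seaborn as sns
    import matplotlib.pyplot as plt

    sns.set()                        # apply the default seaborn theme
    tips = sns.load_dataset("tips")  # one of the bundled example datasets
    sns.relplot(x="total_bill", y="tip", hue="smoker", data=tips)
    plt.show()
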
Documentation
-------------

Online documentation is available at [seaborn.pydata.org](https://seaborn.pydata.org).

The docs include a [tutorial](https://seaborn.pydata.org/tutorial.html),
[example gallery](https://seaborn.pydata.org/examples/index.html),
[API reference](https://seaborn.pydata.org/api.html), and other useful information.

Dependencies
------------

Seaborn supports Python 3.6+ and no longer supports Python 2.

Installation requires [numpy](http://www.numpy.org/), [scipy](https://www.scipy.org/),
[pandas](https://pandas.pydata.org/), and [matplotlib](https://matplotlib.org/). Some
functions will optionally use [statsmodels](https://www.statsmodels.org/) if it is
installed.

Installation
------------

The latest stable release (and older versions) can be installed from PyPI:

    pip install seaborn

You may instead want to use the development version from Github:

    pip install git+https://github.com/mwaskom/seaborn.git#egg=seaborn

Testing
-------

To test the code, run `make test` in the source directory. This will exercise both the
unit tests and docstring examples (using `pytest`). The doctests require a network
connection (unless all example datasets are cached), but the unit tests can be run
offline with `make unittests`.

Run `make coverage` to generate a test coverage report and `make lint` to check code
style consistency.

Development
-----------

Seaborn development takes place on Github: https://github.com/mwaskom/seaborn

Please submit bugs that you encounter to the
[issue tracker](https://github.com/mwaskom/seaborn/issues) with a reproducible example
demonstrating the problem. Questions about usage are more at home on StackOverflow,
where there is a [seaborn tag](https://stackoverflow.com/questions/tagged/seaborn).


seaborn-0.10.0/doc/.gitignore

*_files/
_build/
generated/
examples/
example_thumbs/
introduction.rst
aesthetics.rst
relational.rst
color_palettes.rst
distributions.rst
regression.rst
categorical.rst
plotting_distributions.rst
dataset_exploration.rst
timeseries_plots.rst
axis_grids.rst


seaborn-0.10.0/doc/Makefile

# Makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
PAPER         =
BUILDDIR      = _build

# Internal variables.
PAPEROPT_a4     = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS  = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext

help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  clean      to remove generated output"
	@echo "  html       to make standalone HTML files"
	@echo "  notebooks  to make the Jupyter notebook-based tutorials"
	@echo "  dirhtml    to make HTML files named index.html in directories"
	@echo "  singlehtml to make a single large HTML file"
	@echo "  pickle     to make pickle files"
	@echo "  json       to make JSON files"
	@echo "  htmlhelp   to make HTML files and a HTML help project"
	@echo "  qthelp     to make HTML files and a qthelp project"
	@echo "  devhelp    to make HTML files and a Devhelp project"
	@echo "  epub       to make an epub"
	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  latexpdf   to make LaTeX files and run them through pdflatex"
	@echo "  text       to make text files"
	@echo "  man        to make manual pages"
	@echo "  texinfo    to make Texinfo files"
	@echo "  info       to make Texinfo files and run them through makeinfo"
	@echo "  gettext    to make PO message catalogs"
	@echo "  changes    to make an overview of all changed/added/deprecated items"
	@echo "  linkcheck  to check all external links for integrity"
	@echo "  doctest    to run all doctests embedded in the documentation (if enabled)"

clean:
	-rm -rf $(BUILDDIR)/*
	-rm -rf examples/*
	-rm -rf example_thumbs/*
	-rm -rf tutorial/*_files/
	-rm -rf tutorial/*.rst
	-rm -rf generated/*
	-rm -rf introduction_files/*
	-rm introduction.rst

tutorials:
	make -C tutorial

introduction: introduction.ipynb
	tools/nb_to_doc.py introduction

notebooks: tutorials introduction

html:
	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

dirhtml:
	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."

singlehtml:
	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
	@echo
	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."

pickle:
	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
	@echo
	@echo "Build finished; now you can process the pickle files."

json:
	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
	@echo
	@echo "Build finished; now you can process the JSON files."

htmlhelp:
	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
	@echo
	@echo "Build finished; now you can run HTML Help Workshop with the" \
	      ".hhp project file in $(BUILDDIR)/htmlhelp."

qthelp:
	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
	@echo
	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/lyman.qhcp"
	@echo "To view the help file:"
	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/lyman.qhc"

devhelp:
	$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
	@echo
	@echo "Build finished."
	@echo "To view the help file:"
	@echo "# mkdir -p $$HOME/.local/share/devhelp/lyman"
	@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/lyman"
	@echo "# devhelp"

epub:
	$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
	@echo
	@echo "Build finished. The epub file is in $(BUILDDIR)/epub."

latex:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo
	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
	@echo "Run \`make' in that directory to run these through (pdf)latex" \
	      "(use \`make latexpdf' here to do that automatically)."

latexpdf:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through pdflatex..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

text:
	$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
	@echo
	@echo "Build finished. The text files are in $(BUILDDIR)/text."

man:
	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
	@echo
	@echo "Build finished. The manual pages are in $(BUILDDIR)/man."

texinfo:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo
	@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
	@echo "Run \`make' in that directory to run these through makeinfo" \
	      "(use \`make info' here to do that automatically)."

info:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo "Running Texinfo files through makeinfo..."
	make -C $(BUILDDIR)/texinfo info
	@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."

gettext:
	$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
	@echo
	@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."

changes:
	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
	@echo
	@echo "The overview file is in $(BUILDDIR)/changes."

linkcheck:
	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
	@echo
	@echo "Link check complete; look for any errors in the above output " \
	      "or in $(BUILDDIR)/linkcheck/output.txt."

doctest:
	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
	@echo "Testing of doctests in the sources finished, look at the " \
	      "results in $(BUILDDIR)/doctest/output.txt."


seaborn-0.10.0/doc/_static/copybutton.js

// originally taken from scikit-learn's Sphinx theme
$(document).ready(function() {
    /* Add a [>>>] button on the top-right corner of code samples to hide
     * the >>> and ... prompts and the output and thus make the code
     * copyable.
     * Note: This JS snippet was taken from the official python.org
     * documentation site.*/
    var div = $('.highlight-python .highlight,' +
                '.highlight-python3 .highlight,' +
                '.highlight-pycon .highlight')
    var pre = div.find('pre');

    // get the styles from the current theme
    pre.parent().parent().css('position', 'relative');
    var hide_text = 'Hide the prompts and output';
    var show_text = 'Show the prompts and output';
    var border_width = pre.css('border-top-width');
    var border_style = pre.css('border-top-style');
    var border_color = pre.css('border-top-color');
    var button_styles = {
        'cursor': 'pointer',
        'position': 'absolute',
        'top': '0',
        'right': '0',
        'border-color': border_color,
        'border-style': border_style,
        'border-width': border_width,
        'color': border_color,
        'text-size': '75%',
        'font-family': 'monospace',
        'padding-left': '0.2em',
        'padding-right': '0.2em'
    }

    // create and add the button to all the code blocks that contain >>>
    div.each(function(index) {
        var jthis = $(this);
        if (jthis.find('.gp').length > 0) {
            var button = $('<span class="copybutton">&gt;&gt;&gt;</span>');
            button.css(button_styles)
            button.attr('title', hide_text);
            jthis.prepend(button);
        }
        // tracebacks (.gt) contain bare text elements that need to be
        // wrapped in a span to work with .nextUntil() (see later)
        jthis.find('pre:has(.gt)').contents().filter(function() {
            return ((this.nodeType == 3) && (this.data.trim().length > 0));
        }).wrap('<span>');
    });

    // define the behavior of the button when it's clicked
    $('.copybutton').toggle(
        function() {
            var button = $(this);
            button.parent().find('.go, .gp, .gt').hide();
            button.next('pre').find('.gt').nextUntil('.gp, .go').css('visibility', 'hidden');
            button.css('text-decoration', 'line-through');
            button.attr('title', show_text);
        },
        function() {
            var button = $(this);
            button.parent().find('.go, .gp, .gt').show();
            button.next('pre').find('.gt').nextUntil('.gp, .go').css('visibility', 'visible');
            button.css('text-decoration', 'none');
            button.attr('title', hide_text);
        });
});


seaborn-0.10.0/doc/_static/favicon.ico  [binary icon data, omitted]
rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLš°rLý°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLΰrL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLŤ°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLü°rLk°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rL™°rLý°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLů°rLٰrLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLi°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLаrLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rL˜°rLý°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLă°rL~°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL'°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLʰrLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rL‘°rLü°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLź°rLj°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLć°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLϰrLf°rLf°rLf°r
Lf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rL†°rLů°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLó°rL“°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLŤ°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLň°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rL|°rLó°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL×°rLu°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLq°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL{°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLr°rLę°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLţ°rL°°rLg°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL7°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLž°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLi°rLذrL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLö°rL“°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙
˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLř°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLÁ°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLž°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLě°rL~°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLðrL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLä°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLĄ°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLă°rLu°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL‰°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLţ°rLo°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL3°rLů°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLÁ°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLO°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLްrL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL°rL 
˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLł°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLްrL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLİrL ˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL۰rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLŐ°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLp°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLѰrL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLĄ°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLő°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL/°rLů°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLá°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLg°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL{°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL 
°rLÔ°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLď°rL)˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL/°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLš°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL‘°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLý°rLF˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLö°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLş°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLF°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLq˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLŰrL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLÚ°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLă°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL 
°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLř°rLg°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL›°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLҰrL ˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL\°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLE°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLó°rL'˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL'°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLŸ°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL 
°rLá°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLY˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLń°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLż°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL’°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL˝°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLްrLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL4°rLý°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLܰrL 
˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL‰°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLű°rLi°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLΰrL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLü°rL8˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLT°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL„°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLq°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL€˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLٰrLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLô°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLΰrL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLę°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLÁ°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLĽ°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°r
Lű°rL/˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLś°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLްrLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL;°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL‚°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLú°rLh°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLͰrL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLѰrL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLQ°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙°rLe°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLţ°rL7˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL °rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙°rL 
°rLé°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL•˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLî°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLť°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙°rL‚°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLç°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL˝°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLذrLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙°rL°rLő°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLV˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLŒ°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLő°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙°rL™°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLž°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL[°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLy°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙°
rL)°rLý°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLü°rL-˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL*°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL—°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙°rLްrL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL“˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLö°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLľ°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rL1°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLě°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLǰrL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLҰrLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rL˛°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLl˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL–°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLđ°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rL…
°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL۰rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLe°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLs°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLÓ°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLU˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL3°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rL‡°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLɰrL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLű°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLʰrLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLҰrL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL>˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLҰrL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLɰrLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rL°rL˙°rL˙°rL˙°rL˙°
rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL¸˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL˘°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLć°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLʰrL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL7˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLr°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLţ°rLk°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rL{°rLţ°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL´˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLC°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL…°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rL°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL3˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˘°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLu°rLű°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°
rL˙°rL˙°rL˙°rL˙°rL˙°rL°˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLä°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLž°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLś°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL3˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL´°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL۰rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLl°rLő°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLš˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL…°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLö°rLg°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLĽ°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL?˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLU°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLz°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLg°rLé°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLưrL˙˙˙˙˙˙˙˙˙
˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL&°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL–°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rL•°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLô°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLł°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLÚ°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLÓ°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLǰrL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLϰrLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rL°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL`˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL—°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLě°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLÁ°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLç°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙
˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLh°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLo°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLo°rLř°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL{˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL8°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL‹°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rL§°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLö°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL 
˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLţ°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rL§°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL|°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLý°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLĹ˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rL…°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLŸ°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL\°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLy˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLi°rLů°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLÁ°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL¨°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL-˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLÚ°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLä°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLđ°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLá°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙
˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rL¸°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLţ°rLp°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL@°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL–˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rL–°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL“°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLŒ°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLJ˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLs°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLš°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL×°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLö°rL 
˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLé°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLݰrLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL#°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˛˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLðrL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLü°rLl°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLo°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLž°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLްrLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLť°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLy°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLł°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLú°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLÎ˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙
˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLď°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLذrLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLS°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLƒ˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLɰrL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLú°rLi°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLŸ°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL7˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rL¤°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLˆ°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLé°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLę°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rL°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLްrLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL6°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLŸ˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙
˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLó°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL×°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf°rLf˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL‚°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLT˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLްrL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLř°rL ˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLͰrL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLú°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLq°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLC˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLź˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL/°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL‡˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLe°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLp˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLę°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLË˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°r
L˙°rL˙°rL˙°rL%˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL¨°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLţ°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLő°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLŮ˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLd°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLS˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLH°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL!°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL—˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL”°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLA˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLܰrL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLÚ˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLß°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLń°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙
˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL™°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL ˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL+°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLŞ˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLU°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLk˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLw°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL^˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLţ°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLś˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLţ°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLͰrL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL÷°rL 
˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLý°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLÇ˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL‡°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLZ°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL{˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL<°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˜˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLڰrL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL/˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLî°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLă°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLń°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLă°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLĽ°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL.˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLH°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˜˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙
˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLZ°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLz˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLš°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLý°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLČ˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLę°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL÷°rL ˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLİrL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL>°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLą˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLx°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLr˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL_˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙
˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL-°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLČ˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLâ°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLü°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLŕ°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL4°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLť˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLŒ°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLs˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLi˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL7°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLČ˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLć°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLá°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLD°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLĹ˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙
˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLŒ°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL{˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL °rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLs˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL6°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLްrL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL °rLó°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL!˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLŕ°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLC˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLX°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLÄ˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL‹°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL¨˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL´°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLi˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙
˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL6°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLů°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLü°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLű°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rL×°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLp˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL‡°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLą˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLr°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLذrL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL °rLí°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLU˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLú°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLT˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLd°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLń°rL 
˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLްrL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLаrL˙˙˙˙˙˙˙˙˙°rL°rLҰrL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLE°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLM˙˙˙˙˙˙˙˙˙°rLB°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL<˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLß°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLɰrL˙˙˙°rL°rL˰rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLΰrL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL|°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLT˙˙˙°rL]°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL`˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙
˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rL÷°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLé°rL°rLä°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLë°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL’°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL„˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLú°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLü°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL™°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL§˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL"°rLü°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°r
L˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLý°rL+˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL °rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL–˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLö°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLń°rL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rLq°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLt˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙°rL°rLİrL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rL˙°rLݰrL˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙˙
seaborn-0.10.0/doc/_static/style.css000066400000000000000000000036331361256634400173050ustar00rootroot00000000000000body { color: #444444 !important; } h1 { font-size: 40px !important; } h2 { font-size: 32px !important; } h3 { font-size: 24px !important; } h4 { font-size: 18px !important; } h5 { font-size: 14px !important; } h6 { font-size: 10px !important; } footer a{ color: #4c72b0 !important; } a.reference { color: #4c72b0 !important; } blockquote p { font-size: 14px !important; } blockquote { padding-top: 4px !important; padding-bottom: 4px !important; margin: 0 0 0px !important; } pre { background-color: #f6f6f9 !important; } code { color: #49759c !important; background-color: transparent !important; } code.descclassname { padding-right: 0px !important; } code.descname { padding-left: 0px !important; } dt:target, span.highlighted { background-color: #ffffff !important; } ul { padding-left: 20px !important; } ul.dropdown-menu { padding-left: 0px !important; } .alert-info { background-color: #adb8cb !important; border-color: #adb8cb !important; color: #2c3e50 !important; } /* From https://github.com/twbs/bootstrap/issues/1768 */ *[id]:before { display: block; content: " "; margin-top: -60px; height: 60px; visibility: hidden; } .dataframe table { /*Uncomment to center tables horizontally*/ /* margin-left: auto; */ /* margin-right: auto; */ border: none; border-collapse: collapse; border-spacing: 0; font-size: 12px; table-layout: fixed; } .dataframe thead { border-bottom: 1px solid; vertical-align: bottom; } .dataframe tr, th, td { text-align: left; vertical-align: middle; padding: 0.5em 0.5em; line-height: normal; white-space: normal; max-width: none; border: none; } .dataframe th { font-weight: bold; } table { margin-bottom: 20px; } tbody tr:nth-child(odd) { background: #f5f5f5; } tbody tr:hover { background: rgba(66, 165, 245, 0.2); } seaborn-0.10.0/doc/_templates/000077500000000000000000000000001361256634400161355ustar00rootroot00000000000000seaborn-0.10.0/doc/_templates/autosummary/000077500000000000000000000000001361256634400205235ustar00rootroot00000000000000seaborn-0.10.0/doc/_templates/autosummary/base.rst000066400000000000000000000002471361256634400221720ustar00rootroot00000000000000.. raw:: html
{{ fullname | escape | underline}} .. currentmodule:: {{ module }} .. auto{{ objtype }}:: {{ objname }} seaborn-0.10.0/doc/_templates/autosummary/class.rst000066400000000000000000000011171361256634400223620ustar00rootroot00000000000000.. raw:: html
{{ fullname | escape | underline}} .. currentmodule:: {{ module }} .. autoclass:: {{ objname }} {% block methods %} .. automethod:: __init__ {% if methods %} .. rubric:: Methods .. autosummary:: {% for item in methods %} ~{{ name }}.{{ item }} {%- endfor %} {% endif %} {% endblock %} {% block attributes %} {% if attributes %} .. rubric:: Attributes .. autosummary:: {% for item in attributes %} ~{{ name }}.{{ item }} {%- endfor %} {% endif %} {% endblock %} seaborn-0.10.0/doc/_templates/layout.html000066400000000000000000000014751361256634400203470ustar00rootroot00000000000000{% extends "!layout.html" %} {%- block footer %}

Back to top {% if theme_source_link_position == "footer" %}
{% include "sourcelink.html" %} {% endif %}

{% trans copyright=copyright|e %}© Copyright {{ copyright }}, Michael Waskom.{% endtrans %} {%- if last_updated %} {% trans last_updated=last_updated|e %}Last updated on {{ last_updated }}.{% endtrans %}
{%- endif %} {%- if show_sphinx %} {% trans sphinx_version=sphinx_version|e %}Created using Sphinx {{ sphinx_version }}.{% endtrans %}
{%- endif %}

{%- endblock %} seaborn-0.10.0/doc/api.rst000066400000000000000000000043451361256634400153110ustar00rootroot00000000000000.. _api_ref: .. currentmodule:: seaborn API reference ============= .. _relational_api: Relational plots ---------------- .. autosummary:: :toctree: generated relplot scatterplot lineplot .. _categorical_api: Categorical plots ----------------- .. autosummary:: :toctree: generated/ catplot stripplot swarmplot boxplot violinplot boxenplot pointplot barplot countplot .. _distribution_api: Distribution plots ------------------ .. autosummary:: :toctree: generated/ distplot kdeplot rugplot .. _regression_api: Regression plots ---------------- .. autosummary:: :toctree: generated/ lmplot regplot residplot .. _matrix_api: Matrix plots ------------ .. autosummary:: :toctree: generated/ heatmap clustermap .. _grid_api: Multi-plot grids ---------------- Facet grids ~~~~~~~~~~~ .. autosummary:: :toctree: generated/ FacetGrid FacetGrid.map FacetGrid.map_dataframe Pair grids ~~~~~~~~~~ .. autosummary:: :toctree: generated/ pairplot PairGrid PairGrid.map PairGrid.map_diag PairGrid.map_offdiag PairGrid.map_lower PairGrid.map_upper Joint grids ~~~~~~~~~~~ .. autosummary:: :toctree: generated/ jointplot JointGrid JointGrid.plot JointGrid.plot_joint JointGrid.plot_marginals .. _style_api: Style control ------------- .. autosummary:: :toctree: generated/ set axes_style set_style plotting_context set_context set_color_codes reset_defaults reset_orig .. _palette_api: Color palettes -------------- .. autosummary:: :toctree: generated/ set_palette color_palette husl_palette hls_palette cubehelix_palette dark_palette light_palette diverging_palette blend_palette xkcd_palette crayon_palette mpl_palette Palette widgets --------------- .. autosummary:: :toctree: generated/ choose_colorbrewer_palette choose_cubehelix_palette choose_light_palette choose_dark_palette choose_diverging_palette Utility functions ----------------- .. autosummary:: :toctree: generated/ load_dataset despine desaturate saturate set_hls_values seaborn-0.10.0/doc/conf.py000066400000000000000000000224711361256634400153050ustar00rootroot00000000000000# -*- coding: utf-8 -*- # # seaborn documentation build configuration file, created by # sphinx-quickstart on Mon Jul 29 23:25:46 2013. # # This file is execfile()d with the current directory set to its containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import sys, os import sphinx_bootstrap_theme import matplotlib as mpl mpl.use("Agg") # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. #sys.path.insert(0, os.path.abspath('.')) # -- General configuration --------------------------------------------------- # If your documentation needs a minimal Sphinx version, state it here. #needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. 
sys.path.insert(0, os.path.abspath('sphinxext')) extensions = [ 'sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.coverage', 'sphinx.ext.mathjax', 'sphinx.ext.autosummary', 'sphinx.ext.intersphinx', 'matplotlib.sphinxext.plot_directive', 'gallery_generator', 'numpydoc', ] # Generate the API documentation when building autosummary_generate = True numpydoc_show_class_members = False # Include the example source for plots in API docs plot_include_source = True plot_formats = [("png", 90)] plot_html_show_formats = False plot_html_show_source_link = False # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = u'seaborn' import time copyright = u'2012-{}'.format(time.strftime("%Y")) # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. sys.path.insert(0, os.path.abspath(os.path.pardir)) import seaborn version = seaborn.__version__ # The full version, including alpha/beta/rc tags. release = seaborn.__version__ # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = ['_build'] # The reST default role (used for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'bootstrap' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. html_theme_options = { 'source_link_position': "footer", 'bootswatch_theme': "paper", 'navbar_sidebarrel': False, 'bootstrap_version': "3", 'navbar_links': [ ("Gallery", "examples/index"), ("Tutorial", "tutorial"), ("API", "api"), ], } # Add any paths that contain custom themes here, relative to this directory. html_theme_path = sphinx_bootstrap_theme.get_html_theme_path() # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. 
#html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. html_favicon = "_static/favicon.ico" # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static', 'example_thumbs'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_domain_indices = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. html_show_sourcelink = False # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. #html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. #html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'seaborndoc' # -- Options for LaTeX output -------------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). #'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). #'pointsize': '10pt', # Additional stuff for the LaTeX preamble. #'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ('index', 'seaborn.tex', u'seaborn Documentation', u'Michael Waskom', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # If true, show page references after internal links. #latex_show_pagerefs = False # If true, show URL addresses after external links. #latex_show_urls = False # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_domain_indices = True # -- Options for manual page output -------------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'seaborn', u'seaborn Documentation', [u'Michael Waskom'], 1) ] # If true, show URL addresses after external links. #man_show_urls = False # -- Options for Texinfo output ------------------------------------------------ # Grouping the document tree into Texinfo files. 
List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'seaborn', u'seaborn Documentation', u'Michael Waskom', 'seaborn', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. #texinfo_appendices = [] # If false, no module index is generated. #texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. #texinfo_show_urls = 'footnote' # Add the 'copybutton' javascript, to hide/show the prompt in code # examples, originally taken from scikit-learn's doc/conf.py def setup(app): app.add_javascript('copybutton.js') app.add_stylesheet('style.css') # -- Intersphinx ------------------------------------------------ intersphinx_mapping = { 'numpy': ('http://docs.scipy.org/doc/numpy/', None), 'scipy': ('http://docs.scipy.org/doc/scipy/reference/', None), 'matplotlib': ('http://matplotlib.org/', None), 'pandas': ('https://pandas.pydata.org/pandas-docs/stable/', None), 'statsmodels': ('http://www.statsmodels.org/stable/', None) } seaborn-0.10.0/doc/index.rst000066400000000000000000000075661361256634400156570ustar00rootroot00000000000000.. raw:: html seaborn: statistical data visualization ======================================= .. raw:: html

Seaborn is a Python data visualization library based on `matplotlib `_. It provides a high-level interface for drawing attractive and informative statistical graphics. For a brief introduction to the ideas behind the library, you can read the :doc:`introductory notes `. Visit the :doc:`installation page ` to see how you can download the package. You can browse the :doc:`example gallery ` to see what you can do with seaborn, and then check out the :doc:`tutorial ` and :doc:`API reference ` to find out how. To see the code or report a bug, please visit the `github repository `_. General support issues are most at home on `stackoverflow `_, where there is a seaborn tag. .. raw:: html
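As a very small illustration of that high-level interface, a complete figure can take just a few lines. The sketch below is only illustrative; the dataset and the particular plot call are examples drawn from the bundled sample data::

    import seaborn as sns

    # Apply the default seaborn theme and load a bundled example dataset
    sns.set()
    tips = sns.load_dataset("tips")

    # A single call maps dataframe columns onto the x, y, and hue roles of the plot
    sns.relplot(x="total_bill", y="tip", hue="smoker", data=tips)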

Contents

.. toctree:: :maxdepth: 1 Introduction Release notes Installing Example gallery Tutorial API reference .. raw:: html

Features

* Relational: :ref:`API ` | :doc:`Tutorial ` * Categorical: :ref:`API ` | :doc:`Tutorial ` * Distribution: :ref:`API ` | :doc:`Tutorial ` * Regression: :ref:`API ` | :doc:`Tutorial ` * Multiples: :ref:`API ` | :doc:`Tutorial ` * Style: :ref:`API ` | :doc:`Tutorial ` * Color: :ref:`API ` | :doc:`Tutorial ` .. raw:: html
seaborn-0.10.0/doc/installing.rst000066400000000000000000000031411361256634400166750ustar00rootroot00000000000000.. _installing: .. currentmodule:: seaborn Installing and getting started ------------------------------ .. raw:: html
To install the latest release of seaborn, you can use ``pip``:: pip install seaborn It's also possible to install the released version using ``conda``:: conda install seaborn Alternatively, you can use ``pip`` to install the development version directly from github:: pip install git+https://github.com/mwaskom/seaborn.git Another option would be to clone the `github repository `_ and install from your local copy:: pip install . Dependencies ~~~~~~~~~~~~ - Python 3.6+ Mandatory dependencies ^^^^^^^^^^^^^^^^^^^^^^ - `numpy `__ (>= 1.13.3) - `scipy `__ (>= 1.0.1) - `pandas `__ (>= 0.22.0) - `matplotlib `__ (>= 2.1.2) Recommended dependencies ^^^^^^^^^^^^^^^^^^^^^^^^ - `statsmodels `__ (>= 0.8.0) Bugs ~~~~ Please report any bugs you encounter through the github `issue tracker `_. It will be most helpful to include a reproducible example on synthetic data or one of the example datasets (accessed through :func:`load_dataset`). It is difficult to debug any issues without knowing the versions of seaborn and matplotlib you are using, as well as what `matplotlib backend `__ you have active, so please include those in your bug report. .. raw:: html
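A minimal reproducible report can follow the sketch below; it is only illustrative, with the ``scatterplot`` call standing in for whatever call misbehaves in your environment::

    import matplotlib
    import seaborn as sns

    # Versions and the active backend help narrow down environment-specific issues
    print(sns.__version__, matplotlib.__version__, matplotlib.get_backend())

    # A bundled example dataset keeps the script runnable for anyone
    tips = sns.load_dataset("tips")
    ax = sns.scatterplot(x="total_bill", y="tip", data=tips)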
seaborn-0.10.0/doc/introduction.ipynb000066400000000000000000000604271361256634400175750ustar00rootroot00000000000000{ "cells": [ { "cell_type": "raw", "metadata": {}, "source": [ ".. _introduction:\n", "\n", ".. currentmodule:: seaborn\n", "\n", "An introduction to seaborn\n", "==========================\n", "\n", ".. raw:: html\n", "\n", "
\n", "\n", "Seaborn is a library for making statistical graphics in Python. It is built on top of `matplotlib `_ and closely integrated with `pandas `_ data structures.\n", "\n", "Here is some of the functionality that seaborn offers:\n", "\n", "- A dataset-oriented API for examining :ref:`relationships ` between :ref:`multiple variables `\n", "- Specialized support for using categorical variables to show :ref:`observations ` or :ref:`aggregate statistics ` \n", "- Options for visualizing :ref:`univariate ` or :ref:`bivariate ` distributions and for :ref:`comparing ` them between subsets of data\n", "- Automatic estimation and plotting of :ref:`linear regression ` models for different kinds :ref:`dependent ` variables\n", "- Convenient views onto the overall :ref:`structure ` of complex datasets\n", "- High-level abstractions for structuring :ref:`multi-plot grids ` that let you easily build :ref:`complex ` visualizations\n", "- Concise control over matplotlib figure styling with several :ref:`built-in themes `\n", "- Tools for choosing :ref:`color palettes ` that faithfully reveal patterns in your data\n", "\n", "Seaborn aims to make visualization a central part of exploring and understanding data. Its dataset-oriented plotting functions operate on dataframes and arrays containing whole datasets and internally perform the necessary semantic mapping and statistical aggregation to produce informative plots.\n", "\n", "Here's an example of what this means:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "%matplotlib inline" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import seaborn as sns\n", "sns.set()\n", "tips = sns.load_dataset(\"tips\")\n", "sns.relplot(x=\"total_bill\", y=\"tip\", col=\"time\",\n", " hue=\"smoker\", style=\"smoker\", size=\"size\",\n", " data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "A few things have happened here. Let's go through them one by one:\n", "\n", "1. We import seaborn, which is the only library necessary for this simple example." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-output" ] }, "outputs": [], "source": [ "import seaborn as sns" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Behind the scenes, seaborn uses matplotlib to draw plots. Many tasks can be accomplished with only seaborn functions, but further customization might require using matplotlib directly. This is explained in more detail :ref:`below `. For interactive work, it's recommended to use a Jupyter/IPython interface in `matplotlib mode `_, or else you'll have to call :func:`matplotlib.pyplot.show` when you want to see the plot.\n", "\n", "2. We apply the default default seaborn theme, scaling, and color palette." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-output" ] }, "outputs": [], "source": [ "sns.set()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "This uses the `matplotlib rcParam system `_ and will affect how all matplotlib plots look, even if you don't make them with seaborn. Beyond the default theme, there are :ref:`several other options `, and you can independently control the style and scaling of the plot to quickly translate your work between presentation contexts (e.g., making a plot that will have readable fonts when projected during a talk). 
If you like the matplotlib defaults or prefer a different theme, you can skip this step and still use the seaborn plotting functions.\n", "\n", "3. We load one of the example datasets." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-output" ] }, "outputs": [], "source": [ "tips = sns.load_dataset(\"tips\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Most code in the docs will use the :func:`load_dataset` function to get quick access to an example dataset. There's nothing particularly special about these datasets; they are just pandas dataframes, and we could have loaded them with :func:`pandas.read_csv` or build them by hand. Many examples use the \"tips\" dataset, which is very boring but quite useful for demonstration. The tips dataset illustrates the \"tidy\" approach to organizing a dataset. You'll get the most out of seaborn if your datasets are organized this way, and it is explained in more detail :ref:`below `.\n", "\n", "4. We draw a faceted scatter plot with multiple semantic variables." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-output" ] }, "outputs": [], "source": [ "sns.relplot(x=\"total_bill\", y=\"tip\", col=\"time\",\n", " hue=\"smoker\", style=\"smoker\", size=\"size\",\n", " data=tips)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "This particular plot shows the relationship between five variables in the tips dataset. Three are numeric, and two are categorical. Two numeric variables (``total_bill`` and ``tip``) determined the position of each point on the axes, and the third (``size``) determined the size of each point. One categorical variable split the dataset onto two different axes (facets), and the other determined the color and shape of each point.\n", "\n", "All of this was accomplished using a single call to the seaborn function :func:`relplot`. Notice how we only provided the names of the variables in the dataset and the roles that we wanted them to play in the plot. Unlike when using matplotlib directly, it wasn't necessary to translate the variables into parameters of the visualization (e.g., the specific color or marker to use for each category). That translation was done automatically by seaborn. This lets the user stay focused on the question they want the plot to answer.\n", "\n", ".. _intro_api_abstraction:\n", "\n", "API abstraction across visualizations\n", "-------------------------------------\n", "\n", "There is no universal best way to visualize data. Different questions are best answered by different kinds of visualizations. Seaborn tries to make it easy to switch between different visual representations that can be parameterized with the same dataset-oriented API.\n", "\n", "The function :func:`relplot` is named that way because it is designed to visualize many different statistical *relationships*. While scatter plots are a highly effective way of doing this, relationships where one variable represents a measure of time are better represented by a line. 
The :func:`relplot` function has a convenient ``kind`` parameter to let you easily switch to this alternate representation:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dots = sns.load_dataset(\"dots\")\n", "sns.relplot(x=\"time\", y=\"firing_rate\", col=\"align\",\n", " hue=\"choice\", size=\"coherence\", style=\"choice\",\n", " facet_kws=dict(sharex=False),\n", " kind=\"line\", legend=\"full\", data=dots);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Notice how the ``size`` and ``style`` parameters are shared across the scatter and line plots, but they affect the two visualizations differently (changing marker area and symbol vs line width and dashing). We did not need to keep those details in mind, letting us focus on the overall structure of the plot and the information we want it to convey.\n", "\n", ".. _intro_stat_estimation:\n", "\n", "Statistical estimation and error bars\n", "-------------------------------------\n", "\n", "Often we are interested in the average value of one variable as a function of other variables. Many seaborn functions can automatically perform the statistical estimation that is necessary to answer these questions:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fmri = sns.load_dataset(\"fmri\")\n", "sns.relplot(x=\"timepoint\", y=\"signal\", col=\"region\",\n", " hue=\"event\", style=\"event\",\n", " kind=\"line\", data=fmri);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "When statistical values are estimated, seaborn will use bootstrapping to compute confidence intervals and draw error bars representing the uncertainty of the estimate.\n", "\n", "Statistical estimation in seaborn goes beyond descriptive statistics. For example, it is also possible to enhance a scatterplot to include a linear regression model (and its uncertainty) using :func:`lmplot`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"total_bill\", y=\"tip\", col=\"time\", hue=\"smoker\",\n", " data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _intro_categorical:\n", "\n", "Specialized categorical plots\n", "-----------------------------\n", "\n", "Standard scatter and line plots visualize relationships between numerical variables, but many data analyses involve categorical variables. There are several specialized plot types in seaborn that are optimized for visualizing this kind of data. They can be accessed through :func:`catplot`. Similar to :func:`relplot`, the idea of :func:`catplot` is that it exposes a common dataset-oriented API that generalizes over different representations of the relationship between one numeric variable and one (or more) categorical variables.\n", "\n", "These representations offer different levels of granularity in their presentation of the underlying data. 
At the finest level, you may wish to see every observation by drawing a scatter plot that adjusts the positions of the points along the categorical axis so that they don't overlap:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"day\", y=\"total_bill\", hue=\"smoker\",\n", " kind=\"swarm\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Alternately, you could use kernel density estimation to represent the underlying distribution that the points are sampled from:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"day\", y=\"total_bill\", hue=\"smoker\",\n", " kind=\"violin\", split=True, data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Or you could show the only mean value and its confidence interval within each nested category:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"day\", y=\"total_bill\", hue=\"smoker\",\n", " kind=\"bar\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _intro_func_types:\n", "\n", "Figure-level and axes-level functions\n", "-------------------------------------\n", "\n", "How do these tools work? It's important to know about a major distinction between seaborn plotting functions. All of the plots shown so far have been made with \"figure-level\" functions. These are optimized for exploratory analysis because they set up the matplotlib figure containing the plot(s) and make it easy to spread out the visualization across multiple axes. They also handle some tricky business like putting the legend outside the axes. To do these things, they use a seaborn :class:`FacetGrid`.\n", "\n", "Each different figure-level plot ``kind`` combines a particular \"axes-level\" function with the :class:`FacetGrid` object. For example, the scatter plots are drawn using the :func:`scatterplot` function, and the bar plots are drawn using the :func:`barplot` function. These functions are called \"axes-level\" because they draw onto a single matplotlib axes and don't otherwise affect the rest of the figure.\n", "\n", "The upshot is that the figure-level function needs to control the figure it lives in, while axes-level functions can be combined into a more complex matplotlib figure with other axes that may or may not have seaborn plots on them:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "f, axes = plt.subplots(1, 2, sharey=True, figsize=(6, 4))\n", "sns.boxplot(x=\"day\", y=\"tip\", data=tips, ax=axes[0])\n", "sns.scatterplot(x=\"total_bill\", y=\"tip\", hue=\"day\", data=tips, ax=axes[1]);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Controlling the size of the figure-level functions works a little bit differently than it does for other matplotlib figures. Instead of setting the overall figure size, the figure-level functions are parameterized by the size of each facet. And instead of setting the height and width of each facet, you control the height and *aspect* ratio (ratio of width to height). 
This parameterization makes it easy to control the size of the graphic without thinking about exactly how many rows and columns it will have, although it can be a source of confusion:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"time\", y=\"firing_rate\", col=\"align\",\n", " hue=\"choice\", size=\"coherence\", style=\"choice\",\n", " height=4.5, aspect=2 / 3,\n", " facet_kws=dict(sharex=False),\n", " kind=\"line\", legend=\"full\", data=dots);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The way you can tell whether a function is \"figure-level\" or \"axes-level\" is whether it takes an ``ax=`` parameter. You can also distinguish the two classes by their output type: axes-level functions return the matplotlib ``axes``, while figure-level functions return the :class:`FacetGrid`.\n", "\n", "\n", ".. _intro_dataset_funcs:\n", "\n", "Visualizing dataset structure\n", "-----------------------------\n", "\n", "There are two other kinds of figure-level functions in seaborn that can be used to make visualizations with multiple plots. They are each oriented towards illuminating the structure of a dataset. One, :func:`jointplot`, focuses on a single relationship:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "iris = sns.load_dataset(\"iris\")\n", "sns.jointplot(x=\"sepal_length\", y=\"petal_length\", data=iris);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The other, :func:`pairplot`, takes a broader view, showing all pairwise relationships and the marginal distributions, optionally conditioned on a categorical variable :" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.pairplot(data=iris, hue=\"species\");" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Both :func:`jointplot` and :func:`pairplot` have a few different options for visual representation, and they are built on top of classes that allow more thoroughly customized multi-plot figures (:class:`JointGrid` and :class:`PairGrid`, respectively).\n", "\n", ".. _intro_plot_customization:\n", "\n", "Customizing plot appearance\n", "---------------------------\n", "\n", "The plotting functions try to use good default aesthetics and add informative labels so that their output is immediately useful. But defaults can only go so far, and creating a fully-polished custom plot will require additional steps. Several levels of additional customization are possible. \n", "\n", "The first way is to use one of the alternate seaborn themes to give your plots a different look. Setting a different theme or color palette will make it take effect for all plots:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set(style=\"ticks\", palette=\"muted\")\n", "sns.relplot(x=\"total_bill\", y=\"tip\", col=\"time\",\n", " hue=\"smoker\", style=\"smoker\", size=\"size\",\n", " data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "For figure-specific customization, all seaborn functions accept a number of optional parameters for switching to non-default semantic mappings, such as different colors. 
(Appropriate use of color is critical for effective data visualization, and seaborn has :ref:`extensive support ` for customizing color palettes).\n", "\n", "Finally, where there is a direct correspondence with an underlying matplotlib function (like :func:`scatterplot` and :meth:`matplotlib.axes.Axes.scatter`), additional keyword arguments will be passed through to the matplotlib layer:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"total_bill\", y=\"tip\", col=\"time\",\n", " hue=\"size\", style=\"smoker\", size=\"size\",\n", " palette=\"YlGnBu\", markers=[\"D\", \"o\"], sizes=(10, 125),\n", " edgecolor=\".2\", linewidth=.5, alpha=.75,\n", " data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "In the case of :func:`relplot` and other figure-level functions, that means there are a few levels of indirection because :func:`relplot` passes its exta keyword arguments to the underlying seaborn axes-level function, which passes *its* extra keyword arguments to the underlying matplotlib function. So it might take some effort to find the right documentation for the parameters you'll need to use, but in principle an extremely detailed level of customization is possible.\n", "\n", "Some customization of figure-level functions can be accomplished through additional parameters that get passed to :class:`FacetGrid`, and you can use the methods on that object to control many other properties of the figure. For even more tweaking, you can access the matplotlib objects that the plot is drawn onto, which are stored as attributes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.catplot(x=\"total_bill\", y=\"day\", hue=\"time\",\n", " height=3.5, aspect=1.5,\n", " kind=\"box\", legend=False, data=tips);\n", "g.add_legend(title=\"Meal\")\n", "g.set_axis_labels(\"Total bill ($)\", \"\")\n", "g.set(xlim=(0, 60), yticklabels=[\"Thursday\", \"Friday\", \"Saturday\", \"Sunday\"])\n", "g.despine(trim=True)\n", "g.fig.set_size_inches(6.5, 3.5)\n", "g.ax.set_xticks([5, 15, 25, 35, 45, 55], minor=True);\n", "plt.setp(g.ax.get_yticklabels(), rotation=30);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Because the figure-level functions are oriented towards efficient exploration, using them to manage a figure that you need to be precisely sized and organized may take more effort than setting up the figure directly in matplotlib and using the corresponding axes-level seaborn function. Matplotlib has a comprehensive and powerful API; just about any attribute of the figure can be changed to your liking. The hope is that a combination of seaborn's high-level interface and matplotlib's deep customizability will allow you to quickly explore your data and create graphics that can be tailored into a `publication quality `_ final product.\n", "\n", ".. _intro_tidy_data:\n", "\n", "Organizing datasets\n", "-------------------\n", "\n", "As mentioned above, seaborn will be most powerful when your datasets have a particular organization. This format is alternately called \"long-form\" or \"tidy\" data and is described in detail by Hadley Wickham in this `academic paper `_. The rules can be simply stated:\n", "\n", "1. Each variable is a column\n", "2. Each observation is a row\n", "\n", "A helpful mindset for determining whether your data are tidy is to think backwards from the plot you want to draw. 
From this perspective, a \"variable\" is something that will be assigned a role in the plot. It may be useful to look at the example datasets and see how they are structured. For example, the first five rows of the \"tips\" dataset look like this:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tips.head()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "In some domains, the tidy format might feel awkward at first. Timeseries data, for example, are sometimes stored with every timepoint as part of the same observational unit and appearing in the columns. The \"fmri\" dataset that we used :ref:`above ` illustrates how a tidy timeseries dataset has each timepoint in a different row:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fmri.head()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Many seaborn functions can plot wide-form data, but only with limited functionality. To take advantage of the features that depend on tidy-formatted data, you'll likely find the :func:`pandas.melt` function useful for \"un-pivoting\" a wide-form dataframe. More information and useful examples can be found `in this blog post `_ by one of the pandas developers.\n", "\n", ".. _intro_next_steps:\n", "\n", "Next steps\n", "----------\n", "\n", "You have a few options for where to go next. You might first want to learn how to :ref:`install seaborn `. Once that's done, you can browse the :ref:`example gallery ` to get a broader sense for what kind of graphics seaborn can produce. Or you can read through the :ref:`official tutorial ` for a deeper discussion of the different tools and what they are designed to accomplish. If you have a specific plot in mind and want to know how to make it, you could check out the :ref:`API reference `, which documents each function's parameters and shows many examples to illustrate usage." ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. raw:: html\n", " \n", "
" ] } ], "metadata": { "celltoolbar": "Tags", "kernelspec": { "display_name": "Python 3.6 (seaborn-py37-latest)", "language": "python", "name": "seaborn-py37-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.4" } }, "nbformat": 4, "nbformat_minor": 2 } seaborn-0.10.0/doc/releases/000077500000000000000000000000001361256634400156035ustar00rootroot00000000000000seaborn-0.10.0/doc/releases/v0.10.0.txt000066400000000000000000000042411361256634400172470ustar00rootroot00000000000000 v0.10.0 (January 2020) ---------------------- This is a major update that is being released simultaneously with version 0.9.1. It has all of the same features (and bugs!) as 0.9.1, but there are important changes to the dependencies. Most notably, all support for Python 2 has now been dropped. Support for Python 3.5 has also been dropped. Seaborn is now strictly compatible with Python 3.6+. Minimally supported versions of the dependent PyData libraries have also been increased, in some cases substantially. While seaborn has tended to be very conservative about maintaining compatibility with older dependencies, this was causing increasing pain during development. At the same time, these libraries are much easier to install than used to be the case. Going forward, seaborn will likely stay close to the `Numpy community guidelines `_ for version support. This release also removes a few previously-deprecated features: - The ``tsplot`` function and ``seaborn.timeseries`` module have been removed. Recall that ``tsplot`` was replaced with :func:`lineplot`. - The ``seaborn.apionly`` entry-point has been removed. - The ``seaborn.linearmodels`` module (previously renamed to ``seaborn.regression``) has been removed. Looking forward ~~~~~~~~~~~~~~~ Now that seaborn is a Python 3 library, it can take advantage of `keyword-only arguments `_. It is likely that future versions will introduce this syntax, potentially in a breaking way. For guidance, most seaborn functions have a signature that looks like :: func(x, y, ..., data=None, **kwargs) where the ``**kwargs`` are specified in the function. Going forward it will likely be necessary to specify ``data`` and all subsequent arguments with an explicit ``key=value`` mapping. This style has long been used throughout the documentation, and the formal requirement will not be introduced until at least the next major release. Adding this feature will make it possible to enhance some older functions with more modern capabilities (e.g., adding a native ``hue`` semantic within functions like :func:`jointplot` and :func:`regplot`). seaborn-0.10.0/doc/releases/v0.2.0.txt000066400000000000000000000132411361256634400171700ustar00rootroot00000000000000 v0.2.0 (December 2013) ---------------------- This is a major release from 0.1 with a number of API changes, enhancements, and bug fixes. Highlights include an overhaul of timeseries plotting to work intelligently with dataframes, the new function ``interactplot()`` for visualizing continuous interactions, bivariate kernel density estimates in ``kdeplot()``, and significant improvements to color palette handling. Version 0.2 also introduces experimental support for Python 3. In addition to the library enhancements, the documentation has been substantially rewritten to reflect the new features and improve the presentation of the ideas behind the package. 
API changes ~~~~~~~~~~~ - The ``tsplot()`` function was rewritten to accept data in a long-form ``DataFrame`` and to plot different traces by condition. This introduced a relatively minor but unavoidable API change, where instead of doing ``sns.tsplot(time, heights)``, you now must do ``sns.tsplot(heights, time=time)`` (the ``time`` parameter is now optional, for quicker specification of simple plots). Additionally, the ``"obs_traces"`` and ``"obs_points"`` error styles in ``tsplot()`` have been renamed to ``"unit_traces"`` and ``"unit_points"``, respectively. - Functions that fit kernel density estimates (``kdeplot()`` and ``violinplot()``) now use ``statsmodels`` instead of ``scipy``, and the parameters that influence the density estimate have changed accordingly. This allows for increased flexibility in specifying the bandwidth and kernel, and smarter choices for defining the range of the support. Default options should produce plots that are very close to the old defaults. - The ``kdeplot()`` function now takes a second positional argument of data for drawing bivariate densities. - The ``violin()`` function has been changed to ``violinplot()``, for consistency. In 0.2, ``violin`` will still work, but it will fire a ``UserWarning``. New plotting functions ~~~~~~~~~~~~~~~~~~~~~~ - The ``interactplot()`` function draws a contour plot for an interactive linear model (i.e., the contour shows ``y-hat`` from the model ``y ~ x1 * x2``) over a scatterplot between the two predictor variables. This plot should aid the understanding of an interaction between two continuous variables. - The ``kdeplot()`` function can now draw a bivariate density estimate as a contour plot if provided with two-dimensional input data. - The ``palplot()`` function provides a simple grid-based visualization of a color palette. Other changes ~~~~~~~~~~~~~ Plotting functions ^^^^^^^^^^^^^^^^^^ - The ``corrplot()`` function can be drawn without the correlation coefficient annotation and with variable names on the side of the plot to work with large datasets. - Additionally, ``corrplot()`` sets the color palette intelligently based on the direction of the specified test. - The ``distplot()`` histogram uses a reference rule to choose the bin size if it is not provided. - Added the ``x_bins`` option in ``lmplot()`` for binning a continuous predictor variable, allowing for clearer trends with many datapoints. - Enhanced support for labeling plot elements and axes based on ``name`` attributes in several distribution plot functions and ``tsplot()`` for smarter Pandas integration. - Scatter points in ``lmplot()`` are slightly transparent so it is easy to see where observations overlap. - Added the ``order`` parameter to ``boxplot()`` and ``violinplot()`` to control the order of the bins when using a Pandas object. - When an ``ax`` argument is not provided to a plotting function, it grabs the currently active axis instead of drawing a new one. Color palettes ^^^^^^^^^^^^^^ - Added the ``dark_palette()`` and ``blend_palette()`` for on-the-fly creation of blended color palettes. - The color palette machinery is now intelligent about qualitative ColorBrewer palettes (``Set1``, ``Paired``, etc.), which are properly treated as discrete. - Seaborn color palettes (``deep``, ``muted``, etc.) have been standardized in terms of basic hue sequence, and all palettes now have 6 colors. 
- Introduced ``{mpl_palette}_d`` palettes, which make a palette with the basic color scheme of the source palette, but with a sequential blend from dark instead of light colors for use with line/scatter/contour plots. - Added the ``palette_context()`` function for blockwise color palettes controlled by a ``with`` statement. Plot styling ^^^^^^^^^^^^ - Added the ``despine()`` function for easily removing plot spines. - A new plot style, ``"ticks"`` has been added. - Tick labels are padded a bit farther from the axis in all styles, avoiding collisions at (0, 0). General package issues ^^^^^^^^^^^^^^^^^^^^^^ - Reorganized the package by breaking up the monolithic ``plotobjs`` module into smaller modules grouped by general objective of the constituent plots. - Removed the ``scikits-learn`` dependency in ``moss``. - Installing with ``pip`` should automatically install most missing dependencies. - The example notebooks are now used as an automated test suite. Bug fixes ~~~~~~~~~ - Fixed a bug where labels did not match data for ``boxplot()`` and ``violinplot()`` when using a groupby. - Fixed a bug in the ``desaturate()`` function. - Fixed a bug in the ``coefplot()`` figure size calculation. - Fixed a bug where ``regplot()`` choked on list input. - Fixed buggy behavior when drawing horizontal boxplots. - Specifying bins for the ``distplot()`` histogram now works. - Fixed a bug where ``kdeplot()`` would reset the axis height and cut off existing data. - All axis styling has been moved out of the top-level ``seaborn.set()`` function, so context or color palette can be cleanly changed. seaborn-0.10.0/doc/releases/v0.2.1.txt000066400000000000000000000014751361256634400171770ustar00rootroot00000000000000 v0.2.1 (December 2013) ---------------------- This is a bugfix release, with no new features. Bug fixes ~~~~~~~~~ - Changed the mechanics of ``violinplot()`` and ``boxplot()`` when using a ``Series`` object as data and performing a ``groupby`` to assign data to bins to address a problem that arises in Pandas 0.13. - Additionally fixed the ``groupby`` code to work with all styles of group specification (specifically, using a dictionary or a function now works). - Fixed a bug where artifacts from the kde fitting could undershoot and create a plot where the density axis starts below 0. - Ensured that data used for kde fitting is double-typed to avoid a low-level statsmodels error. - Changed the implementation of the histogram bin-width reference rule to take a ceiling of the estimated number of bins. seaborn-0.10.0/doc/releases/v0.3.0.txt000066400000000000000000000146041361256634400171750ustar00rootroot00000000000000 v0.3.0 (March 2014) ------------------- This is a major release from 0.2 with a number of enhancements to the plotting capabilities and styles. Highlights include :class:`FacetGrid`, ``factorplot``, :func:`jointplot`, and an overhaul to :ref:`style management `. There is also lots of new documentation, including an :ref:`example gallery ` and reorganized :ref:`tutorial `. New plotting functions ~~~~~~~~~~~~~~~~~~~~~~ - The :class:`FacetGrid` class adds a new form of functionality to seaborn, providing a way to abstractly structure a grid of plots corresponding to subsets of a dataset. It can be used with a wide variety of plotting functions (including most of the matplotlib and seaborn APIs. See the :ref:`tutorial ` for more information. 
- Version 0.3 introduces the ``factorplot`` function, which is similar in spirit to :func:`lmplot` but intended for use when the main independent variable is categorical instead of quantitative. ``factorplot`` can draw a plot in either a point or bar representation using the corresponding Axes-level functions :func:`pointplot` and :func:`barplot` (which are also new). Additionally, the ``factorplot`` function can be used to draw box plots on a faceted grid. For examples of how to use these functions, you can refer to the tutorial. - Another new function is :func:`jointplot`, which is built using the new :class:`JointGrid` object. :func:`jointplot` generalizes the behavior of :func:`regplot` in previous versions of seaborn (:func:`regplot` has changed somewhat in 0.3; see below for details) by drawing a bivariate plot of the relationship between two variables with their marginal distributions drawn on the side of the plot. With :func:`jointplot`, you can draw a scatterplot or regression plot as before, but you can now also draw bivariate kernel densities or hexbin plots with appropriate univariate graphs for the marginal distributions. Additionally, it's easy to use :class:`JointGrid` directly to build up more complex plots when the default methods offered by :func:`jointplot` are not suitable for your visualization problem. The tutorial for :class:`JointGrid` has more examples of how this object can be useful. - The :func:`residplot` function complements :func:`regplot` and can be quickly used to diagnose problems with a linear model by calculating and plotting the residuals of a simple regression. There is also a ``"resid"`` kind for :func:`jointplot`. API changes ~~~~~~~~~~~ - The most noticeable change will be that :func:`regplot` no longer produces a multi-component plot with distributions in marginal axes. Instead. :func:`regplot` is now an "Axes-level" function that can be plotted into any existing figure on a specific set of axes. :func:`regplot` and :func:`lmplot` have also been unified (the latter uses the former behind the scenes), so all options for how to fit and represent the regression model can be used for both functions. To get the old behavior of :func:`regplot`, use :func:`jointplot` with ``kind="reg"``. - As noted above, :func:`lmplot` has been rewritten to exploit the :class:`FacetGrid` machinery. This involves a few changes. The ``color`` keyword argument has been replaced with ``hue``, for better consistency across the package. The ``hue`` parameter will always take a variable *name*, while ``color`` will take a color name or (in some cases) a palette. The :func:`lmplot` function now returns the :class:`FacetGrid` used to draw the plot instance. - The functions that interact with matplotlib rc parameters have been updated and standardized. There are now three pairs of functions, :func:`axes_style` and :func:`set_style`, :func:`plotting_context` and :func:`set_context`, and :func:`color_palette` and :func:`set_palette`. In each case, the pairs take the exact same arguments. The first function defines and returns the parameters, and the second sets the matplotlib defaults. Additionally, the first function in each pair can be used in a ``with`` statement to temporarily change the defaults. Both the style and context functions also now accept a dictionary of matplotlib rc parameters to override the seaborn defaults, and :func:`set` now also takes a dictionary to update any of the matplotlib defaults. See the :ref:`tutorial ` for more information. 
- The ``nogrid`` style has been deprecated and changed to ``white`` for more uniformity (i.e. there are now ``darkgrid``, ``dark``, ``whitegrid``, and ``white`` styles). Other changes ~~~~~~~~~~~~~ Using the package ^^^^^^^^^^^^^^^^^ - If you want to use plotting functions provided by the package without setting the matplotlib style to a seaborn theme, you can now do ``import seaborn.apionly as sns`` or ``from seaborn.apionly import lmplot``, etc. This is using the (also new) :func:`reset_orig` function, which returns the rc parameters to what they are at matplotlib import time — i.e. they will respect any custom `matplotlibrc` settings on top of the matplotlib defaults. - The dependency load of the package has been reduced. It can now be installed and used with only ``numpy``, ``scipy``, ``matplotlib``, and ``pandas``. Although ``statsmodels`` is still recommended for full functionality, it is not required. Plotting functions ^^^^^^^^^^^^^^^^^^ - :func:`lmplot` (and :func:`regplot`) have two new options for fitting regression models: ``lowess`` and ``robust``. The former fits a nonparametric smoother, while the latter fits a regression using methods that are less sensitive to outliers. - The regression uncertainty in :func:`lmplot` and :func:`regplot` is now estimated with fewer bootstrap iterations, so plotting should be faster. - The univariate :func:`kdeplot` can now be drawn as a *cumulative* density plot. - Changed :func:`interactplot` to use a robust calculation of the data range when finding default limits for the contour colormap to work better when there are outliers in the data. Style ^^^^^ - There is a new style, ``dark``, which shares most features with ``darkgrid`` but does not draw a grid by default. - There is a new function, :func:`offset_spines`, and a corresponding option in :func:`despine` called ``trim``. Together, these can be used to make plots where the axis spines are offset from the main part of the figure and limited within the range of the ticks. This is recommended for use with the ``ticks`` style. - Other aspects of the seaborn styles have been tweaked for more attractive plots. seaborn-0.10.0/doc/releases/v0.3.1.txt000066400000000000000000000017411361256634400171740ustar00rootroot00000000000000 v0.3.1 (April 2014) ------------------- This is a minor release from 0.3 with fixes for several bugs. Plotting functions ~~~~~~~~~~~~~~~~~~ - The size of the points in :func:`pointplot` and ``factorplot`` are now scaled with the linewidth for better aesthetics across different plotting contexts. - The :func:`pointplot` glyphs for different levels of the hue variable are drawn at different z-orders so that they appear uniform. Bug Fixes ~~~~~~~~~ - Fixed a bug in :class:`FacetGrid` (and thus affecting lmplot and factorplot) that appeared when ``col_wrap`` was used with a number of facets that did not evenly divide into the column width. - Fixed an issue where the support for kernel density estimates was sometimes computed incorrectly. - Fixed a problem where ``hue`` variable levels that were not strings were missing in :class:`FacetGrid` legends. - When passing a color palette list in a ``with`` statement, the entire palette is now used instead of the first six colors. seaborn-0.10.0/doc/releases/v0.4.0.txt000066400000000000000000000102501361256634400171670ustar00rootroot00000000000000 v0.4.0 (September 2014) ----------------------- This is a major release from 0.3. 
Highlights include new approaches for :ref:`quick, high-level dataset exploration ` (along with a more :ref:`flexible interface `) and easy creation of :ref:`perceptually-appropriate color palettes ` using the cubehelix system. Along with these additions, there are a number of smaller changes that make visualizing data with seaborn easier and more powerful. Plotting functions ~~~~~~~~~~~~~~~~~~ - A new object, :class:`PairGrid`, and a corresponding function :func:`pairplot`, for drawing grids of pairwise relationships in a dataset. This style of plot is sometimes called a "scatterplot matrix", but the representation of the data in :class:`PairGrid` is flexible and many styles other than scatterplots can be used. See the :ref:`docs ` for more information. **Note:** due to a bug in older versions of matplotlib, you will have best results if you use these functions with matplotlib 1.4 or later. - The rules for choosing default color palettes when variables are mapped to different colors have been unified (and thus changed in some cases). Now when no specific palette is requested, the current global color palette will be used, unless the number of variables to be mapped exceeds the number of unique colors in the palette, in which case the ``"husl"`` palette will be used to avoid cycling. - Added a keyword argument ``hist_norm`` to :func:`distplot`. When a :func:`distplot` is now drawn without a KDE or parametric density, the histogram is drawn as counts instead of a density. This can be overridden by by setting ``hist_norm`` to ``True``. - When using :class:`FacetGrid` with a ``hue`` variable, the legend is no longer drawn by default when you call :meth:`FacetGrid.map`. Instead, you have to call :meth:`FacetGrid.add_legend` manually. This should make it easier to layer multiple plots onto the grid without having duplicated legends. - Made some changes to ``factorplot`` so that it behaves better when not all levels of the ``x`` variable are represented in each facet. - Added the ``logx`` option to :func:`regplot` for fitting the regression in log space. - When :func:`violinplot` encounters a bin with only a single observation, it will now plot a horizontal line at that value instead of erroring out. Style and color palettes ~~~~~~~~~~~~~~~~~~~~~~~~ - Added the :func:`cubehelix_palette` function for generating sequential palettes from the cubehelix system. See the :ref:`palette docs ` for more information on how these palettes can be used. There is also the :func:`choose_cubehelix` which will launch an interactive app to select cubehelix parameters in the notebook. - Added the :func:`xkcd_palette` and the ``xkcd_rgb`` dictionary so that colors :ref:`can be specified ` with names from the `xkcd color survey `_. - Added the ``font_scale`` option to :func:`plotting_context`, :func:`set_context`, and :func:`set`. ``font_scale`` can independently increase or decrease the size of the font elements in the plot. - Font-handling should work better on systems without Arial installed. This is accomplished by adding the ``font.sans-serif`` field to the ``axes_style`` definition with Arial and Liberation Sans prepended to matplotlib defaults. The font family can also be set through the ``font`` keyword argument in :func:`set`. Due to matplotlib bugs, this might not work as expected on matplotlib 1.3. - The :func:`despine` function gets a new keyword argument ``offset``, which replaces the deprecated :func:`offset_spines` function. You no longer need to offset the spines before plotting data. 
- Added a default value for ``pdf.fonttype`` so that text in PDFs is editable in Adobe Illustrator. Other API Changes ~~~~~~~~~~~~~~~~~ - Removed the deprecated ``set_color_palette`` and ``palette_context`` functions. These were replaced in version 0.3 by the :func:`set_palette` function and ability to use :func:`color_palette` directly in a ``with`` statement. - Removed the ability to specify a ``nogrid`` style, which was renamed to ``white`` in 0.3. seaborn-0.10.0/doc/releases/v0.5.0.txt000066400000000000000000000110701361256634400171710ustar00rootroot00000000000000 v0.5.0 (November 2014) -------------------------- This is a major release from 0.4. Highlights include new functions for plotting heatmaps, possibly while applying clustering algorithms to discover structured relationships. These functions are complemented by new custom colormap functions and a full set of IPython widgets that allow interactive selection of colormap parameters. The palette tutorial has been rewritten to cover these new tools and more generally provide guidance on how to use color in visualizations. There are also a number of smaller changes and bugfixes. Plotting functions ~~~~~~~~~~~~~~~~~~ - Added the :func:`heatmap` function for visualizing a matrix of data by color-encoding the values. See the docs for more information. - Added the :func:`clustermap` function for clustering and visualizing a matrix of data, with options to label individual rows and columns by colors. See the docs for more information. This work was lead by Olga Botvinnik. - :func:`lmplot` and :func:`pairplot` get a new keyword argument, ``markers``. This can be a single kind of marker or a list of different markers for each level of the ``hue`` variable. Using different markers for different hues should let plots be more comprehensible when reproduced to black-and-white (i.e. when printed). See the `github pull request (#323) `_ for examples. - More generally, there is a new keyword argument in :class:`FacetGrid` and :class:`PairGrid`, ``hue_kws``. This similarly lets plot aesthetics vary across the levels of the hue variable, but more flexibily. ``hue_kws`` should be a dictionary that maps the name of keyword arguments to lists of values that are as long as the number of levels of the hue variable. - The argument ``subplot_kws`` has been added to ``FacetGrid``. This allows for faceted plots with custom projections, including `maps with Cartopy `_. Color palettes ~~~~~~~~~~~~~~ - Added two new functions to create custom color palettes. For sequential palettes, you can use the :func:`light_palette` function, which takes a seed color and creates a ramp from a very light, desaturated variant of it. For diverging palettes, you can use the :func:`diverging_palette` function to create a balanced ramp between two endpoints to a light or dark midpoint. See the :ref:`palette tutorial ` for more information. - Added the ability to specify the seed color for :func:`light_palette` and :func:`dark_palette` as a tuple of ``husl`` or ``hls`` space values or as a named ``xkcd`` color. The interpretation of the seed color is now provided by the new ``input`` parameter to these functions. - Added several new interactive palette widgets: :func:`choose_colorbrewer_palette`, :func:`choose_light_palette`, :func:`choose_dark_palette`, and :func:`choose_diverging_palette`. For consistency, renamed the cubehelix widget to :func:`choose_cubehelix_palette` (and fixed a bug where the cubehelix palette was reversed). 
These functions also now return either a color palette list or a matplotlib colormap when called, and that object will be live-updated as you play with the widget. This should make it easy to iterate over a plot until you find a good representation for the data. See the `Github pull request `_ or `this notebook (download it to use the widgets) `_ for more information. - Overhauled the color :ref:`palette tutorial ` to organize the discussion by class of color palette and provide more motivation behind the various choices one might make when choosing colors for their data. Bug fixes ~~~~~~~~~ - Fixed a bug in :class:`PairGrid` that gave incorrect results (or a crash) when the input DataFrame has a non-default index. - Fixed a bug in :class:`PairGrid` where passing columns with a date-like datatype raised an exception. - Fixed a bug where :func:`lmplot` would show a legend when the hue variable was also used on either the rows or columns (making the legend redundant). - Worked around a matplotlib bug that was forcing outliers in :func:`boxplot` to appear as blue. - :func:`kdeplot` now accepts pandas Series for the ``data`` and ``data2`` arguments. - Using a non-default correlation method in :func:`corrplot` now implies ``sig_stars=False`` as the permutation test used to significance values for the correlations uses a pearson metric. - Removed ``pdf.fonttype`` from the style definitions, as the value used in version 0.4 resulted in very large PDF files. seaborn-0.10.0/doc/releases/v0.5.1.txt000066400000000000000000000011321361256634400171700ustar00rootroot00000000000000 v0.5.1 (November 2014) ---------------------- This is a bugfix release that includes a workaround for an issue in matplotlib 1.4.2 and fixes for two bugs in functions that were new in 0.5.0. - Implemented a workaround for a bug in matplotlib 1.4.2 that prevented point markers from being drawn when the seaborn styles had been set. See this `github issue `_ for more information. - Fixed a bug in :func:`heatmap` where the mask was vertically reversed relative to the data. - Fixed a bug in :func:`clustermap` when using nested lists of side colors. seaborn-0.10.0/doc/releases/v0.6.0.txt000066400000000000000000000260051361256634400171760ustar00rootroot00000000000000 v0.6.0 (June 2015) ------------------ .. image:: https://zenodo.org/badge/DOI/10.5281/zenodo.19108.svg :target: https://doi.org/10.5281/zenodo.19108 This is a major release from 0.5. The main objective of this release was to unify the API for categorical plots, which means that there are some relatively large API changes in some of the older functions. See below for details of those changes, which may break code written for older versions of seaborn. There are also some new functions (:func:`stripplot`, and :func:`countplot`), numerous enhancements to existing functions, and bug fixes. Additionally, the documentation has been completely revamped and expanded for the 0.6 release. Now, the API docs page for each function has multiple examples with embedded plots showing how to use the various options. These pages should be considered the most comprehensive resource for examples, and the tutorial pages are now streamlined and oriented towards a higher-level overview of the various features. Changes and updates to categorical plots ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In version 0.6, the "categorical" plots have been unified with a common API. 
This new category of functions groups together plots that show the relationship between one numeric variable and one or two categorical variables. This includes plots that show distribution of the numeric variable in each bin (:func:`boxplot`, :func:`violinplot`, and :func:`stripplot`) and plots that apply a statistical estimation within each bin (:func:`pointplot`, :func:`barplot`, and :func:`countplot`). There is a new :ref:`tutorial chapter ` that introduces these functions. The categorical functions now each accept the same formats of input data and can be invoked in the same way. They can plot using long- or wide-form data, and can be drawn vertically or horizontally. When long-form data is used, the orientation of the plots is inferred from the types of the input data. Additionally, all functions natively take a ``hue`` variable to add a second layer of categorization. With the (in some cases new) API, these functions can all be drawn correctly by :class:`FacetGrid`. However, ``factorplot`` can also now create faceted verisons of any of these kinds of plots, so in most cases it will be unnecessary to use :class:`FacetGrid` directly. By default, ``factorplot`` draws a point plot, but this is controlled by the ``kind`` parameter. Here are details on what has changed in the process of unifying these APIs: - Changes to :func:`boxplot` and :func:`violinplot` will probably be the most disruptive. Both functions maintain backwards-compatibility in terms of the kind of data they can accept, but the syntax has changed to be more similar to other seaborn functions. These functions are now invoked with ``x`` and/or ``y`` parameters that are either vectors of data or names of variables in a long-form DataFrame passed to the new ``data`` parameter. You can still pass wide-form DataFrames or arrays to ``data``, but it is no longer the first positional argument. See the `github pull request (#410) `_ for more information on these changes and the logic behind them. - As :func:`pointplot` and :func:`barplot` can now plot with the major categorical variable on the y axis, the ``x_order`` parameter has been renamed to ``order``. - Added a ``hue`` argument to :func:`boxplot` and :func:`violinplot`, which allows for nested grouping the plot elements by a third categorical variable. For :func:`violinplot`, this nesting can also be accomplished by splitting the violins when there are two levels of the ``hue`` variable (using ``split=True``). To make this functionality feasible, the ability to specify where the plots will be draw in data coordinates has been removed. These plots now are drawn at set positions, like (and identical to) :func:`barplot` and :func:`pointplot`. - Added a ``palette`` parameter to :func:`boxplot`/:func:`violinplot`. The ``color`` parameter still exists, but no longer does double-duty in accepting the name of a seaborn palette. ``palette`` supersedes ``color`` so that it can be used with a :class:`FacetGrid`. Along with these API changes, the following changes/enhancements were made to the plotting functions: - The default rules for ordering the categories has changed. Instead of automatically sorting the category levels, the plots now show the levels in the order they appear in the input data (i.e., the order given by ``Series.unique()``). Order can be specified when plotting with the ``order`` and ``hue_order`` parameters. Additionally, when variables are pandas objects with a "categorical" dtype, the category order is inferred from the data object. 
This change also affects :class:`FacetGrid` and :class:`PairGrid`. - Added the ``scale`` and ``scale_hue`` parameters to :func:`violinplot`. These control how the width of the violins are scaled. The default is ``area``, which is different from how the violins used to be drawn. Use ``scale='width'`` to get the old behavior. - Used a different style for the ``box`` kind of interior plot in :func:`violinplot`, which shows the whisker range in addition to the quartiles. Use ``inner='quartile'`` to get the old style. New plotting functions ~~~~~~~~~~~~~~~~~~~~~~ - Added the :func:`stripplot` function, which draws a scatterplot where one of the variables is categorical. This plot has the same API as :func:`boxplot` and :func:`violinplot`. It is useful both on its own and when composed with one of these other plot kinds to show both the observations and underlying distribution. - Added the :func:`countplot` function, which uses a bar plot representation to show counts of variables in one or more categorical bins. This replaces the old approach of calling :func:`barplot` without a numeric variable. Other additions and changes ~~~~~~~~~~~~~~~~~~~~~~~~~~~ - The :func:`corrplot` and underlying :func:`symmatplot` functions have been deprecated in favor of :func:`heatmap`, which is much more flexible and robust. These two functions are still available in version 0.6, but they will be removed in a future version. - Added the :func:`set_color_codes` function and the ``color_codes`` argument to :func:`set` and :func:`set_palette`. This changes the interpretation of shorthand color codes (i.e. "b", "g", k", etc.) within matplotlib to use the values from one of the named seaborn palettes (i.e. "deep", "muted", etc.). That makes it easier to have a more uniform look when using matplotlib functions directly with seaborn imported. This could be disruptive to existing plots, so it does not happen by default. It is possible this could change in the future. - The :func:`color_palette` function no longer trims palettes that are longer than 6 colors when passed into it. - Added the ``as_hex`` method to color palette objects, to return a list of hex codes rather than rgb tuples. - :func:`jointplot` now passes additional keyword arguments to the function used to draw the plot on the joint axes. - Changed the default ``linewidths`` in :func:`heatmap` and :func:`clustermap` to 0 so that larger matrices plot correctly. This parameter still exists and can be used to get the old effect of lines demarcating each cell in the heatmap (the old default ``linewidths`` was 0.5). - :func:`heatmap` and :func:`clustermap` now automatically use a mask for missing values, which previously were shown with the "under" value of the colormap per default `plt.pcolormesh` behavior. - Added the ``seaborn.crayons`` dictionary and the :func:`crayon_palette` function to define colors from the 120 box (!) of `Crayola crayons `_. - Added the ``line_kws`` parameter to :func:`residplot` to change the style of the lowess line, when used. - Added open-ended ``**kwargs`` to the ``add_legend`` method on :class:`FacetGrid` and :class:`PairGrid`, which will pass additional keyword arguments through when calling the legend function on the ``Figure`` or ``Axes``. - Added the ``gridspec_kws`` parameter to :class:`FacetGrid`, which allows for control over the size of individual facets in the grid to emphasize certain plots or account for differences in variable ranges. 
- The interactive palette widgets now show a continuous colorbar, rather than a discrete palette, when `as_cmap` is True. - The default Axes size for :func:`pairplot` and :class:`PairGrid` is now slightly smaller. - Added the ``shade_lowest`` parameter to :func:`kdeplot` which will set the alpha for the lowest contour level to 0, making it easier to plot multiple bivariate distributions on the same axes. - The ``height`` parameter of :func:`rugplot` is now interpreted as a function of the axis size and is invariant to changes in the data scale on that axis. The rug lines are also slightly narrower by default. - Added a catch in :func:`distplot` when calculating a default number of bins. For highly skewed data it will now use sqrt(n) bins, where previously the reference rule would return "infinite" bins and cause an exception in matplotlib. - Added a ceiling (50) to the default number of bins used for :func:`distplot` histograms. This will help avoid confusing errors with certain kinds of datasets that heavily violate the assumptions of the reference rule used to get a default number of bins. The ceiling is not applied when passing a specific number of bins. - The various property dictionaries that can be passed to ``plt.boxplot`` are now applied after the seaborn restyling to allow for full customizability. - Added a ``savefig`` method to :class:`JointGrid` that defaults to a tight bounding box to make it easier to save figures using this class, and set a tight bbox as the default for the ``savefig`` method on other Grid objects. - You can now pass an integer to the ``xticklabels`` and ``yticklabels`` parameter of :func:`heatmap` (and, by extension, :func:`clustermap`). This will make the plot use the ticklabels inferred from the data, but only plot every ``n`` label, where ``n`` is the number you pass. This can help when visualizing larger matrices with some sensible ordering to the rows or columns of the dataframe. - Added `"figure.facecolor"` to the style parameters and set the default to white. - The :func:`load_dataset` function now caches datasets locally after downloading them, and uses the local copy on subsequent calls. Bug fixes ~~~~~~~~~ - Fixed bugs in :func:`clustermap` where the mask and specified ticklabels were not being reorganized using the dendrograms. - Fixed a bug in :class:`FacetGrid` and :class:`PairGrid` that lead to incorrect legend labels when levels of the ``hue`` variable appeared in ``hue_order`` but not in the data. - Fixed a bug in :meth:`FacetGrid.set_xticklabels` or :meth:`FacetGrid.set_yticklabels` when ``col_wrap`` is being used. - Fixed a bug in :class:`PairGrid` where the ``hue_order`` parameter was ignored. - Fixed two bugs in :func:`despine` that caused errors when trying to trim the spines on plots that had inverted axes or no ticks. - Improved support for the ``margin_titles`` option in :class:`FacetGrid`, which can now be used with a legend. seaborn-0.10.0/doc/releases/v0.7.0.txt000066400000000000000000000055571361256634400172100ustar00rootroot00000000000000 v0.7.0 (January 2016) --------------------- .. image:: https://zenodo.org/badge/DOI/10.5281/zenodo.45133.svg :target: https://doi.org/10.5281/zenodo.45133 This is a major release from 0.6. The main new feature is :func:`swarmplot` which implements the beeswarm approach for drawing categorical scatterplots. There are also some performance improvements, bug fixes, and updates for compatibility with new versions of dependencies. - Added the :func:`swarmplot` function, which draws beeswarm plots. 
These are categorical scatterplots, similar to those produced by :func:`stripplot`, but position of the points on the categorical axis is chosen to avoid overlapping points. See the :ref:`categorical plot tutorial ` for more information. - Changed some of the :func:`stripplot` defaults to be closer to :func:`swarmplot`. Points are now somewhat smaller, have no outlines, and are not split by default when using ``hue``. These settings remain customizable through function parameters. - Added an additional rule when determining category order in categorical plots. Now, when numeric variables are used in a categorical role, the default behavior is to sort the unique levels of the variable (i.e they will be in proper numerical order). This can still be overridden by the appropriate ``{*_}order`` parameter, and variables with a ``category`` datatype will still follow the category order even if the levels are strictly numerical. - Changed how :func:`stripplot` draws points when using ``hue`` nesting with ``split=False`` so that the different ``hue`` levels are not drawn strictly on top of each other. - Improve performance for large dendrograms in :func:`clustermap`. - Added ``font.size`` to the plotting context definition so that the default output from ``plt.text`` will be scaled appropriately. - Fixed a bug in :func:`clustermap` when ``fastcluster`` is not installed. - Fixed a bug in the zscore calculation in :func:`clustermap`. - Fixed a bug in :func:`distplot` where sometimes the default number of bins would not be an integer. - Fixed a bug in :func:`stripplot` where a legend item would not appear for a ``hue`` level if there were no observations in the first group of points. - Heatmap colorbars are now rasterized for better performance in vector plots. - Added workarounds for some matplotlib boxplot issues, such as strange colors of outlier points. - Added workarounds for an issue where violinplot edges would be missing or have random colors. - Added a workaround for an issue where only one :func:`heatmap` cell would be annotated on some matplotlib backends. - Fixed a bug on newer versions of matplotlib where a colormap would be erroneously applied to scatterplots with only three observations. - Updated seaborn for compatibility with matplotlib 1.5. - Added compatibility for various IPython (and Jupyter) versions in functions that use widgets. seaborn-0.10.0/doc/releases/v0.7.1.txt000066400000000000000000000041021361256634400171720ustar00rootroot00000000000000 v0.7.1 (June 2016) ------------------- .. image:: https://zenodo.org/badge/DOI/10.5281/zenodo.54844.svg :target: https://doi.org/10.5281/zenodo.54844 - Added the ability to put "caps" on the error bars that are drawn by :func:`barplot` or :func:`pointplot` (and, by extension, ``factorplot``). Additionally, the line width of the error bars can now be controlled. These changes involve the new parameters ``capsize`` and ``errwidth``. See the `github pull request (#898) `_ for examples of usage. - Improved the row and column colors display in :func:`clustermap`. It is now possible to pass Pandas objects for these elements and, when possible, the semantic information in the Pandas objects will be used to add labels to the plot. When Pandas objects are used, the color data is matched against the main heatmap based on the index, not on position. This is more accurate, but it may lead to different results if current code assumed positional matching. - Improved the luminance calculation that determines the annotation color in :func:`heatmap`. 
- The ``annot`` parameter of :func:`heatmap` now accepts a rectangular dataset in addition to a boolean value. If a dataset is passed, its values will be used for the annotations, while the main dataset will be used for the heatmap cell colors. - Fixed a bug in :class:`FacetGrid` that appeared when using ``col_wrap`` with missing ``col`` levels. - Made it possible to pass a tick locator object to the :func:`heatmap` colorbar. - Made it possible to use different styles (e.g., step) for :class:`PairGrid` histograms when there are multiple hue levels. - Fixed a bug in scipy-based univariate kernel density bandwidth calculation. - The :func:`reset_orig` function (and, by extension, importing ``seaborn.apionly``) resets matplotlib rcParams to their values at the time seaborn itself was imported, which should work better with rcParams changed by the jupyter notebook backend. - Removed some objects from the top-level ``seaborn`` namespace. - Improved unicode compatibility in :class:`FacetGrid`. seaborn-0.10.0/doc/releases/v0.8.0.txt000066400000000000000000000106051361256634400171770ustar00rootroot00000000000000 v0.8.0 (July 2017) ------------------ .. image:: https://zenodo.org/badge/DOI/10.5281/zenodo.824567.svg :target: https://doi.org/10.5281/zenodo.824567 - The default style is no longer applied when seaborn is imported. It is now necessary to explicitly call :func:`set` or one or more of :func:`set_style`, :func:`set_context`, and :func:`set_palette`. Correspondingly, the ``seaborn.apionly`` module has been deprecated. - Changed the behavior of :func:`heatmap` (and by extension :func:`clustermap`) when plotting divergent dataesets (i.e. when the ``center`` parameter is used). Instead of extending the lower and upper limits of the colormap to be symmetrical around the ``center`` value, the colormap is modified so that its middle color corresponds to ``center``. This means that the full range of the colormap will not be used (unless the data or specified ``vmin`` and ``vmax`` are symmetric), but the upper and lower limits of the colorbar will correspond to the range of the data. See the Github pull request `(#1184) `_ for examples of the behavior. - Removed automatic detection of diverging data in :func:`heatmap` (and by extension :func:`clustermap`). If you want the colormap to be treated as diverging (see above), it is now necessary to specify the ``center`` value. When no colormap is specified, specifying ``center`` will still change the default to be one that is more appropriate for displaying diverging data. - Added four new colormaps, created using `viscm `_ for perceptual uniformity. The new colormaps include two sequential colormaps ("rocket" and "mako") and two diverging colormaps ("icefire" and "vlag"). These colormaps are registered with matplotlib on seaborn import and the colormap objects can be accessed in the ``seaborn.cm`` namespace. - Changed the default :func:`heatmap` colormaps to be "rocket" (in the case of sequential data) or "icefire" (in the case of diverging data). Note that this change reverses the direction of the luminance ramp from the previous defaults. While potentially confusing and disruptive, this change better aligns the seaborn defaults with the new matplotlib default colormap ("viridis") and arguably better aligns the semantics of a "heat" map with the appearance of the colormap. - Added ``"auto"`` as a (default) option for tick labels in :func:`heatmap` and :func:`clustermap`. 
This will try to estimate how many ticks can be labeled without the text objects overlapping, which should improve performance for larger matrices. - Added the ``dodge`` parameter to :func:`boxplot`, :func:`violinplot`, and :func:`barplot` to allow use of ``hue`` without changing the position or width of the plot elements, as when the ``hue`` varible is not nested within the main categorical variable. - Correspondingly, the ``split`` parameter for :func:`stripplot` and :func:`swarmplot` has been renamed to ``dodge`` for consistency with the other categorical functions (and for differentiation from the meaning of ``split`` in :func:`violinplot`). - Added the ability to draw a colorbar for a bivariate :func:`kdeplot` with the ``cbar`` parameter (and related ``cbar_ax`` and ``cbar_kws`` parameters). - Added the ability to use error bars to show standard deviations rather than bootstrap confidence intervals in most statistical functions by putting ``ci="sd"``. - Allow side-specific offsets in :func:`despine`. - Figure size is no longer part of the seaborn plotting context parameters. - Put a cap on the number of bins used in :func:`jointplot` for ``type=="hex"`` to avoid hanging when the reference rule prescribes too many. - Changed the y axis in :func:`heatmap`. Instead of reversing the rows of the data internally, the y axis is now inverted. This may affect code that draws on top of the heatmap in data coordinates. - Turn off dendrogram axes in :func:`clustermap` rather than setting the background color to white. - New matplotlib qualitative palettes (e.g. "tab10") are now handled correctly. - Some modules and functions have been internally reorganized; there should be no effect on code that uses the ``seaborn`` namespace. - Added a deprecation warning to ``tsplot`` function to indicate that it will be removed or replaced with a substantially altered version in a future release. - The ``interactplot`` and ``coefplot`` functions are officially deprecated and will be removed in a future release. seaborn-0.10.0/doc/releases/v0.8.1.txt000066400000000000000000000030311361256634400171730ustar00rootroot00000000000000 v0.8.1 (September 2017) ----------------------- .. image:: https://zenodo.org/badge/DOI/10.5281/zenodo.883859.svg :target: https://doi.org/10.5281/zenodo.883859 - Added a warning in :class:`FacetGrid` when passing a categorical plot function without specifying ``order`` (or ``hue_order`` when ``hue`` is used), which is likely to produce a plot that is incorrect. - Improved compatibility between :class:`FacetGrid` or :class:`PairGrid` and interactive matplotlib backends so that the legend no longer remains inside the figure when using ``legend_out=True``. - Changed categorical plot functions with small plot elements to use :func:`dark_palette` instead of :func:`light_palette` when generating a sequential palette from a specified color. - Improved robustness of :func:`kdeplot` and :func:`distplot` to data with fewer than two observations. - Fixed a bug in :func:`clustermap` when using ``yticklabels=False``. - Fixed a bug in :func:`pointplot` where colors were wrong if exactly three points were being drawn. - Fixed a bug in :func:`pointplot` where legend entries for missing data appeared with empty markers. - Fixed a bug in :func:`clustermap` where an error was raised when annotating the main heatmap and showing category colors. - Fixed a bug in :func:`clustermap` where row labels were not being properly rotated when they overlapped. 
- Fixed a bug in :func:`kdeplot` where the maximum limit on the density axes was not being updated when multiple densities were drawn. - Improved compatibility with future versions of pandas. seaborn-0.10.0/doc/releases/v0.9.0.txt000066400000000000000000000252151361256634400172030ustar00rootroot00000000000000 v0.9.0 (July 2018) ------------------ .. image:: https://zenodo.org/badge/DOI/10.5281/zenodo.1313201.svg :target: https://doi.org/10.5281/zenodo.1313201 This is a major release with several substantial and long-desired new features. There are also updates/modifications to the themes and color palettes that give better consistency with matplotlib 2.0 and some notable API changes. New relational plots ~~~~~~~~~~~~~~~~~~~~ Three completely new plotting functions have been added: :func:`catplot`, :func:`scatterplot`, and :func:`lineplot`. The first is a figure-level interface to the latter two that combines them with a :class:`FacetGrid`. The functions bring the high-level, dataset-oriented API of the seaborn categorical plotting functions to more general plots (scatter plots and line plots). These functions can visualize a relationship between two numeric variables while mapping up to three additional variables by modifying ``hue``, ``size``, and/or ``style`` semantics. The common high-level API is implemented differently in the two functions. For example, the size semantic in :func:`scatterplot` scales the area of scatter plot points, but in :func:`lineplot` it scales width of the line plot lines. The API is dataset-oriented, meaning that in both cases you pass the variable in your dataset rather than directly specifying the matplotlib parameters to use for point area or line width. Another way the relational functions differ from existing seaborn functionality is that they have better support for using numeric variables for ``hue`` and ``size`` semantics. This functionality may be propagated to other functions that can add a ``hue`` semantic in future versions; it has not been in this release. The :func:`lineplot` function also has support for statistical estimation and is replacing the older ``tsplot`` function, which still exists but is marked for removal in a future release. :func:`lineplot` is better aligned with the API of the rest of the library and more flexible in showing relationships across additional variables by modifying the size and style semantics independently. It also has substantially improved support for date and time data, a major pain factor in ``tsplot``. The cost is that some of the more esoteric options in ``tsplot`` for representing uncertainty (e.g. a colormapped KDE of the bootstrap distribution) have not been implemented in the new function. There is quite a bit of new documentation that explains these new functions in more detail, including detailed examples of the various options in the :ref:`API reference ` and a more verbose :ref:`tutorial `. These functions should be considered in a "stable beta" state. They have been thoroughly tested, but some unknown corner cases may remain to be found. The main features are in place, but not all planned functionality has been implemented. There are planned improvements to some elements, particularly the default legend, that are a little rough around the edges in this release. Finally, some of the default behavior (e.g. the default range of point/line sizes) may change somewhat in future releases. 
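As a brief illustration of the dataset-oriented API described above, additional variables are mapped through the ``hue``, ``size``, and ``style`` semantics by naming columns of a tidy DataFrame. The following is a minimal sketch using the bundled example datasets, shown as two independent calls; any columns with appropriate types could be substituted::

    import seaborn as sns

    tips = sns.load_dataset("tips")
    fmri = sns.load_dataset("fmri")

    # scatter plot where point color and area encode two further variables
    sns.scatterplot(x="total_bill", y="tip", hue="day", size="size", data=tips)

    # line plot with statistical aggregation across repeated measurements,
    # using color and dash style to distinguish the event condition
    sns.lineplot(x="timepoint", y="signal", hue="event", style="event", data=fmri)
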
Updates to themes and palettes ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Several changes have been made to the seaborn style themes, context scaling, and color palettes. In general the aim of these changes was to make the seaborn styles more consistent with the `style updates in matplotlib 2.0 `_ and to leverage some of the new style parameters for better implementation of some aspects of the seaborn styles. Here is a list of the changes: - Reorganized and updated some :func:`axes_style`/:func:`plotting_context` parameters to take advantage of improvements in the matplotlib 2.0 update. The biggest change involves using several new parameters in the "style" spec while moving parameters that used to implement the corresponding aesthetics to the "context" spec. For example, axes spines and ticks are now off instead of having their width/length zeroed out for the darkgrid style. That means the width/length of these elements can now be scaled in different contexts. The effect is a more cohesive appearance of the plots, especially in larger contexts. These changes include only minimal support for the 1.x matplotlib series. Users who are stuck on matplotlib 1.5 but wish to use seaborn styling may want to use the seaborn parameters that can be accessed through the `matplotlib stylesheet interface `_. - Updated the seaborn palettes ("deep", "muted", "colorblind", etc.) to correspond with the new 10-color matplotlib default. The legacy palettes are now available at "deep6", "muted6", "colorblind6", etc. Additionally, a few individual colors were tweaked for better consistency, aesthetics, and accessibility. - Calling :func:`color_palette` (or :func:`set_palette`) with a named qualitative palettes (i.e. one of the seaborn palettes, the colorbrewer qualitative palettes, or the matplotlib matplotlib tableau-derived palettes) and no specified number of colors will return all of the colors in the palette. This means that for some palettes, the returned list will have a different length than it did in previous versions. - Enhanced :func:`color_palette` to accept a parameterized specification of a cubehelix palette in in a string, prefixed with ``"ch:"`` (e.g. ``"ch:-.1,.2,l=.7"``). Note that keyword arguments can be spelled out or referenced using only their first letter. Reversing the palette is accomplished by appending ``"_r"``, as with other matplotlib colormaps. This specification will be accepted by any seaborn function with a ``palette=`` parameter. - Slightly increased the base font sizes in :func:`plotting_context` and increased the scaling factors for ``"talk"`` and ``"poster"`` contexts. - Calling :func:`set` will now call :func:`set_color_codes` to re-assign the single letter color codes by default API changes ~~~~~~~~~~~ A few functions have been renamed or have had changes to their default parameters. - The ``factorplot`` function has been renamed to :func:`catplot`. The new name ditches the original R-inflected terminology to use a name that is more consistent with terminology in pandas and in seaborn itself. This change should hopefully make :func:`catplot` easier to discover, and it should make more clear what its role is. ``factorplot`` still exists and will pass its arguments through to :func:`catplot` with a warning. It may be removed eventually, but the transition will be as gradual as possible. - The other reason that the ``factorplot`` name was changed was to ease another alteration which is that the default ``kind`` in :func:`catplot` is now ``"strip"`` (corresponding to :func:`stripplot`). 
This plots a categorical scatter plot which is usually a much better place to start and is more consistent with the default in :func:`relplot`. The old default style in ``factorplot`` (``"point"``, corresponding to :func:`pointplot`) remains available if you want to show a statistical estimation. - The ``lvplot`` function has been renamed to :func:`boxenplot`. The "letter-value" terminology that was used to name the original kind of plot is obscure, and the abbreviation to ``lv`` did not help anything. The new name should make the plot more discoverable by describing its format (it plots multiple boxes, also known as "boxen"). As with ``factorplot``, the ``lvplot`` function still exists to provide a relatively smooth transition. - Renamed the ``size`` parameter to ``height`` in multi-plot grid objects (:class:`FacetGrid`, :class:`PairGrid`, and :class:`JointGrid`) along with functions that use them (``factorplot``, :func:`lmplot`, :func:`pairplot`, and :func:`jointplot`) to avoid conflicts with the ``size`` parameter that is used in ``scatterplot`` and ``lineplot`` (necessary to make :func:`relplot` work) and also makes the meaning of the parameter a bit more clear. - Changed the default diagonal plots in :func:`pairplot` to use `func`:kdeplot` when a ``"hue"`` dimension is used. - Deprecated the statistical annotation component of :class:`JointGrid`. The method is still available but will be removed in a future version. - Two older functions that were deprecated in earlier versions, ``coefplot`` and ``interactplot``, have undergone final removal from the code base. Documentation improvements ~~~~~~~~~~~~~~~~~~~~~~~~~~ There has been some effort put into improving the documentation. The biggest change is that the :ref:`introduction to the library ` has been completely rewritten to provide much more information and, critically, examples. In addition to the high-level motivation, the introduction also covers some important topics that are often sources of confusion, like the distinction between figure-level and axes-level functions, how datasets should be formatted for use in seaborn, and how to customize the appearance of the plots. Other improvements have been made throughout, most notably a thorough re-write of the :ref:`categorical tutorial `. Other small enhancements and bug fixes ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - Changed :func:`rugplot` to plot a matplotlib ``LineCollection`` instead of many ``Line2D`` objects, providing a big speedup for large arrays. - Changed the default off-diagonal plots to use :func:`scatterplot`. (Note that the ``"hue"`` currently draws three separate scatterplots instead of using the hue semantic of the scatterplot function). - Changed color handling when using :func:`kdeplot` with two variables. The default colormap for the 2D density now follows the color cycle, and the function can use ``color`` and ``label`` kwargs, adding more flexibility and avoiding a warning when using with multi-plot grids. - Added the ``subplot_kws`` parameter to :class:`PairGrid` for more flexibility. - Removed a special case in :class:`PairGrid` that defaulted to drawing stacked histograms on the diagonal axes. - Fixed :func:`jointplot`/:class:`JointGrid` and :func:`regplot` so that they now accept list inputs. - Fixed a bug in :class:`FacetGrid` when using a single row/column level or using ``col_wrap=1``. - Fixed functions that set axis limits so that they preserve auto-scaling state on matplotlib 2.0. 
- Avoided an error when using matplotlib backends that cannot render a canvas (e.g. PDF). - Changed the install infrastructure to explicitly declare dependencies in a way that ``pip`` is aware of. This means that ``pip install seaborn`` will now work in an empty environment. Additionally, the dependencies are specified with strict minimal versions. - Updated the testing infrastructure to execute tests with `pytest `_ (although many individual tests still use nose assertion). seaborn-0.10.0/doc/releases/v0.9.1.txt000066400000000000000000000115051361256634400172010ustar00rootroot00000000000000 v0.9.1 (January 2020) --------------------- This is a minor release with a number of bug fixes and adaptations to changes in seaborn's dependencies. There are also several new features. This is the final version of seaborn that will support Python 2.7 or 3.5. New features ~~~~~~~~~~~~ - Added more control over the arrangement of the elements drawn by :func:`clustermap` with the ``{dendrogram,colors}_ratio`` and ``cbar_pos`` parameters. Additionally, the default organization and scaling with different figure sizes has been improved. - Added the ``corner`` option to :class:`PairGrid` and :func:`pairplot` to make a grid without the upper triangle of bivariate axes. - Added the ability to seed the random number generator for the bootstrap used to define error bars in several plots. Relevant functions now have a ``seed`` parameter, which can take either fixed seed (typically an ``int``) or a numpy random number generator object (either the newer :class:`numpy.random.Generator` or the older :class:`numpy.random.mtrand.RandomState`). - Generalized the idea of "diagonal" axes in :class:`PairGrid` to any axes that share an x and y variable. - In :class:`PairGrid`, the ``hue`` variable is now excluded from the default list of variables that make up the rows and columns of the grid. - Exposed the ``layout_pad`` parameter in :class:`PairGrid` and set a smaller default than what matptlotlib sets for more efficient use of space in dense grids. - It is now possible to force a categorical interpretation of the ``hue`` varaible in a relational plot by passing the name of a categorical palette (e.g. ``"deep"``, or ``"Set2"``). This complements the (previously supported) option of passig a list/dict of colors. - Added the ``tree_kws`` parameter to :func:`clustermap` to control the properties of the lines in the dendrogram. - Added the ability to pass hierarchical label names to the :class:`FacetGrid` legend, which also fixes a bug in :func:`relplot` when the same label appeared in diffent semantics. - Improved support for grouping observations based on pandas index information in categorical plots. Bug fixes and adaptations ~~~~~~~~~~~~~~~~~~~~~~~~~ - Avoided an error when singular data is passed to :func:`kdeplot`, issuing a warning instead. This makes :func:`pairplot` more robust. - Fixed the behavior of ``dropna`` in :class:`PairGrid` to properly exclude null datapoints from each plot when set to ``True``. - Fixed an issue where :func:`regplot` could interfere with other axes in a multi-plot matplotlib figure. - Semantic variables with a ``category`` data type will always be treated as categorical in relational plots. - Avoided a warning about color specifications that arose from :func:`boxenplot` on newer matplotlibs. - Adapted to a change in how matplotlib scales axis margins, which caused multiple calls to :func:`regplot` with ``truncate=False`` to progressively expand the x axis limits. 
Because there are currently limitations on how autoscaling works in matplotlib, the default value for ``truncate`` in seaborn has also been changed to ``True``. - Relational plots no longer error when hue/size data are inferred to be numeric but stored with a string datatype. - Relational plots now consider semantics with only a single value that can be interpreted as boolean (0 or 1) to be categorical, not numeric. - Relational plots now handle list or dict specifications for ``sizes`` correctly. - Fixed an issue in :func:`pointplot` where missing levels of a hue variable would cause an exception after a recent update in matplotlib. - Fixed a bug when setting the rotation of x tick labels on a :class:`FacetGrid`. - Fixed a bug where values would be excluded from categorical plots when only one variable was a pandas ``Series`` with a non-default index. - Fixed a bug when using ``Series`` objects as arguments for ``x_partial`` or ``y_partial`` in :func:`regplot`. - Fixed a bug when passing a ``norm`` object and using color annotations in :func:`clustermap`. - Fixed a bug where annotations were not rearranged to match the clustering in :func:`clustermap`. - Fixed a bug when trying to call :func:`set` while specifying a list of colors for the palette. - Fixed a bug when resetting the color code short-hands to the matplotlib default. - Avoided errors from stricter type checking in upcoming ``numpy`` changes. - Avoided error/warning in :func:`lineplot` when plotting categoricals with empty levels. - Allowed ``colors`` to be passed through to a bivariabe :func:`kdeplot`. - Standardized the output format of custom color palette functions. - Fixed a bug where legends for numerical variables in a relational plot could show a surprisingly large number of decimal places. - Improved robustness to missing values in distribution plots. - Made it possible to specify the location of the :class:`FacetGrid` legend using matplotlib keyword arguments. seaborn-0.10.0/doc/requirements.txt000066400000000000000000000001521361256634400172620ustar00rootroot00000000000000sphinx==2.3.1 sphinx_bootstrap_theme==0.6.5 # Later versions mess up the css somehow nbconvert ipykernel seaborn-0.10.0/doc/sphinxext/000077500000000000000000000000001361256634400160325ustar00rootroot00000000000000seaborn-0.10.0/doc/sphinxext/gallery_generator.py000066400000000000000000000240621361256634400221150ustar00rootroot00000000000000""" Sphinx plugin to run example scripts and create a gallery page. Lightly modified from the mpld3 project. """ from __future__ import division import os import os.path as op import re import glob import token import tokenize import shutil import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt # noqa: E402 # Python 3 has no execfile def execfile(filename, globals=None, locals=None): with open(filename, "rb") as fp: exec(compile(fp.read(), filename, 'exec'), globals, locals) RST_TEMPLATE = """ .. _{sphinx_tag}: {docstring} .. image:: {img_file} **Python source code:** :download:`[download source: {fname}]<{fname}>` .. raw:: html
.. literalinclude:: {fname} :lines: {end_line}- .. raw:: html
""" INDEX_TEMPLATE = """ .. raw:: html .. _{sphinx_tag}: Example gallery =============== {toctree} {contents} .. raw:: html
""" def create_thumbnail(infile, thumbfile, width=275, height=275, cx=0.5, cy=0.5, border=4): baseout, extout = op.splitext(thumbfile) im = matplotlib.image.imread(infile) rows, cols = im.shape[:2] x0 = int(cx * cols - .5 * width) y0 = int(cy * rows - .5 * height) xslice = slice(x0, x0 + width) yslice = slice(y0, y0 + height) thumb = im[yslice, xslice] thumb[:border, :, :3] = thumb[-border:, :, :3] = 0 thumb[:, :border, :3] = thumb[:, -border:, :3] = 0 dpi = 100 fig = plt.figure(figsize=(width / dpi, height / dpi), dpi=dpi) ax = fig.add_axes([0, 0, 1, 1], aspect='auto', frameon=False, xticks=[], yticks=[]) ax.imshow(thumb, aspect='auto', resample=True, interpolation='bilinear') fig.savefig(thumbfile, dpi=dpi) return fig def indent(s, N=4): """indent a string""" return s.replace('\n', '\n' + N * ' ') class ExampleGenerator(object): """Tools for generating an example page from a file""" def __init__(self, filename, target_dir): self.filename = filename self.target_dir = target_dir self.thumbloc = .5, .5 self.extract_docstring() with open(filename, "r") as fid: self.filetext = fid.read() outfilename = op.join(target_dir, self.rstfilename) # Only actually run it if the output RST file doesn't # exist or it was modified less recently than the example file_mtime = op.getmtime(filename) if not op.exists(outfilename) or op.getmtime(outfilename) < file_mtime: self.exec_file() else: print("skipping {0}".format(self.filename)) @property def dirname(self): return op.split(self.filename)[0] @property def fname(self): return op.split(self.filename)[1] @property def modulename(self): return op.splitext(self.fname)[0] @property def pyfilename(self): return self.modulename + '.py' @property def rstfilename(self): return self.modulename + ".rst" @property def htmlfilename(self): return self.modulename + '.html' @property def pngfilename(self): pngfile = self.modulename + '.png' return "_images/" + pngfile @property def thumbfilename(self): pngfile = self.modulename + '_thumb.png' return pngfile @property def sphinxtag(self): return self.modulename @property def pagetitle(self): return self.docstring.strip().split('\n')[0].strip() @property def plotfunc(self): match = re.search(r"sns\.(.+plot)\(", self.filetext) if match: return match.group(1) match = re.search(r"sns\.(.+map)\(", self.filetext) if match: return match.group(1) match = re.search(r"sns\.(.+Grid)\(", self.filetext) if match: return match.group(1) return "" def extract_docstring(self): """ Extract a module-level docstring """ lines = open(self.filename).readlines() start_row = 0 if lines[0].startswith('#!'): lines.pop(0) start_row = 1 docstring = '' first_par = '' line_iter = lines.__iter__() tokens = tokenize.generate_tokens(lambda: next(line_iter)) for tok_type, tok_content, _, (erow, _), _ in tokens: tok_type = token.tok_name[tok_type] if tok_type in ('NEWLINE', 'COMMENT', 'NL', 'INDENT', 'DEDENT'): continue elif tok_type == 'STRING': docstring = eval(tok_content) # If the docstring is formatted with several paragraphs, # extract the first one: paragraphs = '\n'.join(line.rstrip() for line in docstring.split('\n') ).split('\n\n') if len(paragraphs) > 0: first_par = paragraphs[0] break thumbloc = None for i, line in enumerate(docstring.split("\n")): m = re.match(r"^_thumb: (\.\d+),\s*(\.\d+)", line) if m: thumbloc = float(m.group(1)), float(m.group(2)) break if thumbloc is not None: self.thumbloc = thumbloc docstring = "\n".join([l for l in docstring.split("\n") if not l.startswith("_thumb")]) self.docstring = docstring self.short_desc = 
first_par self.end_line = erow + 1 + start_row def exec_file(self): print("running {0}".format(self.filename)) plt.close('all') my_globals = {'pl': plt, 'plt': plt} execfile(self.filename, my_globals) fig = plt.gcf() fig.canvas.draw() pngfile = op.join(self.target_dir, self.pngfilename) thumbfile = op.join("example_thumbs", self.thumbfilename) self.html = "" % self.pngfilename fig.savefig(pngfile, dpi=75, bbox_inches="tight") cx, cy = self.thumbloc create_thumbnail(pngfile, thumbfile, cx=cx, cy=cy) def toctree_entry(self): return " ./%s\n\n" % op.splitext(self.htmlfilename)[0] def contents_entry(self): return (".. raw:: html\n\n" " \n\n" "\n\n" "".format(self.htmlfilename, self.thumbfilename, self.plotfunc)) def main(app): static_dir = op.join(app.builder.srcdir, '_static') target_dir = op.join(app.builder.srcdir, 'examples') image_dir = op.join(app.builder.srcdir, 'examples/_images') thumb_dir = op.join(app.builder.srcdir, "example_thumbs") source_dir = op.abspath(op.join(app.builder.srcdir, '..', 'examples')) if not op.exists(static_dir): os.makedirs(static_dir) if not op.exists(target_dir): os.makedirs(target_dir) if not op.exists(image_dir): os.makedirs(image_dir) if not op.exists(thumb_dir): os.makedirs(thumb_dir) if not op.exists(source_dir): os.makedirs(source_dir) banner_data = [] toctree = ("\n\n" ".. toctree::\n" " :hidden:\n\n") contents = "\n\n" # Write individual example files for filename in sorted(glob.glob(op.join(source_dir, "*.py"))): ex = ExampleGenerator(filename, target_dir) banner_data.append({"title": ex.pagetitle, "url": op.join('examples', ex.htmlfilename), "thumb": op.join(ex.thumbfilename)}) shutil.copyfile(filename, op.join(target_dir, ex.pyfilename)) output = RST_TEMPLATE.format(sphinx_tag=ex.sphinxtag, docstring=ex.docstring, end_line=ex.end_line, fname=ex.pyfilename, img_file=ex.pngfilename) with open(op.join(target_dir, ex.rstfilename), 'w') as f: f.write(output) toctree += ex.toctree_entry() contents += ex.contents_entry() if len(banner_data) < 10: banner_data = (4 * banner_data)[:10] # write index file index_file = op.join(target_dir, 'index.rst') with open(index_file, 'w') as index: index.write(INDEX_TEMPLATE.format(sphinx_tag="example_gallery", toctree=toctree, contents=contents)) def setup(app): app.connect('builder-inited', main) seaborn-0.10.0/doc/tools/000077500000000000000000000000001361256634400151405ustar00rootroot00000000000000seaborn-0.10.0/doc/tools/nb_to_doc.py000077500000000000000000000016411361256634400174450ustar00rootroot00000000000000#! /usr/bin/env python """ Convert empty IPython notebook to a sphinx doc page. """ import sys from subprocess import check_call as sh def convert_nb(nbname): # Execute the notebook sh(["jupyter", "nbconvert", "--to", "notebook", "--execute", "--inplace", nbname]) # Convert to .rst for Sphinx sh(["jupyter", "nbconvert", "--to", "rst", nbname, "--TagRemovePreprocessor.remove_cell_tags={'hide'}", "--TagRemovePreprocessor.remove_input_tags={'hide-input'}", "--TagRemovePreprocessor.remove_all_outputs_tags={'hide-output'}"]) # Clear notebook output sh(["jupyter", "nbconvert", "--to", "notebook", "--inplace", "--ClearOutputPreprocessor.enabled=True", nbname]) # Touch the .rst file so it has a later modify time than the source sh(["touch", nbname + ".rst"]) if __name__ == "__main__": for nbname in sys.argv[1:]: convert_nb(nbname) seaborn-0.10.0/doc/tutorial.rst000066400000000000000000000075401361256634400164030ustar00rootroot00000000000000.. _tutorial: Official seaborn tutorial ========================= .. raw:: html

Plotting functions

.. toctree:: :maxdepth: 2 tutorial/relational .. raw:: html

.. toctree:: :maxdepth: 2 tutorial/categorical .. raw:: html

.. toctree:: :maxdepth: 2 tutorial/distributions .. raw:: html

.. toctree:: :maxdepth: 2 tutorial/regression .. raw:: html

Multi-plot grids

.. toctree:: :maxdepth: 2 tutorial/axis_grids .. raw:: html

Plot aesthetics

.. toctree:: :maxdepth: 2 tutorial/aesthetics .. raw:: html

.. toctree:: :maxdepth: 2 tutorial/color_palettes .. raw:: html
seaborn-0.10.0/doc/tutorial/000077500000000000000000000000001361256634400156435ustar00rootroot00000000000000seaborn-0.10.0/doc/tutorial/Makefile000066400000000000000000000001751361256634400173060ustar00rootroot00000000000000rst_files := $(patsubst %.ipynb,%.rst,$(wildcard *.ipynb)) tutorial: ${rst_files} %.rst: %.ipynb ../tools/nb_to_doc.py $* seaborn-0.10.0/doc/tutorial/aesthetics.ipynb000066400000000000000000000276011361256634400210500ustar00rootroot00000000000000{ "cells": [ { "cell_type": "raw", "metadata": {}, "source": [ ".. _aesthetics_tutorial:\n", "\n", ".. currentmodule:: seaborn" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Controlling figure aesthetics\n", "=============================\n", "\n", ".. raw:: html\n", "\n", "
\n" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Drawing attractive figures is important. When making figures for yourself, as you explore a dataset, it's nice to have plots that are pleasant to look at. Visualizations are also central to communicating quantitative insights to an audience, and in that setting it's even more necessary to have figures that catch the attention and draw a viewer in.\n", "\n", "Matplotlib is highly customizable, but it can be hard to know what settings to tweak to achieve an attractive plot. Seaborn comes with a number of customized themes and a high-level interface for controlling the look of matplotlib figures." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import seaborn as sns\n", "import matplotlib.pyplot as plt" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "%matplotlib inline\n", "np.random.seed(sum(map(ord, \"aesthetics\")))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Let's define a simple function to plot some offset sine waves, which will help us see the different stylistic parameters we can tweak." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def sinplot(flip=1):\n", " x = np.linspace(0, 14, 100)\n", " for i in range(1, 7):\n", " plt.plot(x, np.sin(x + i * .5) * (7 - i) * flip)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "This is what the plot looks like with matplotlib defaults:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sinplot()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "To switch to seaborn defaults, simply call the :func:`set` function." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set()\n", "sinplot()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "(Note that in versions of seaborn prior to 0.8, :func:`set` was called on import. On later versions, it must be explicitly invoked).\n", "\n", "Seaborn splits matplotlib parameters into two independent groups. The first group sets the aesthetic style of the plot, and the second scales various elements of the figure so that it can be easily incorporated into different contexts.\n", "\n", "The interface for manipulating these parameters are two pairs of functions. To control the style, use the :func:`axes_style` and :func:`set_style` functions. To scale the plot, use the :func:`plotting_context` and :func:`set_context` functions. In both cases, the first function returns a dictionary of parameters and the second sets the matplotlib defaults.\n", "\n", ".. _axes_style:\n", "\n", "Seaborn figure styles\n", "---------------------\n", "\n", "There are five preset seaborn themes: ``darkgrid``, ``whitegrid``, ``dark``, ``white``, and ``ticks``. They are each suited to different applications and personal preferences. The default theme is ``darkgrid``. As mentioned above, the grid helps the plot serve as a lookup table for quantitative information, and the white-on grey helps to keep the grid from competing with lines that represent data. 
The ``whitegrid`` theme is similar, but it is better suited to plots with heavy data elements:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set_style(\"whitegrid\")\n", "data = np.random.normal(size=(20, 6)) + np.arange(6) / 2\n", "sns.boxplot(data=data);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "For many plots, (especially for settings like talks, where you primarily want to use figures to provide impressions of patterns in the data), the grid is less necessary." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set_style(\"dark\")\n", "sinplot()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set_style(\"white\")\n", "sinplot()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Sometimes you might want to give a little extra structure to the plots, which is where ticks come in handy:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set_style(\"ticks\")\n", "sinplot()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _remove_spines:\n", "\n", "Removing axes spines\n", "--------------------\n", "\n", "Both the ``white`` and ``ticks`` styles can benefit from removing the top and right axes spines, which are not needed. The seaborn function :func:`despine` can be called to remove them:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sinplot()\n", "sns.despine()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Some plots benefit from offsetting the spines away from the data, which can also be done when calling :func:`despine`. When the ticks don't cover the whole range of the axis, the ``trim`` parameter will limit the range of the surviving spines." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "f, ax = plt.subplots()\n", "sns.violinplot(data=data)\n", "sns.despine(offset=10, trim=True);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "You can also control which spines are removed with additional arguments to :func:`despine`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set_style(\"whitegrid\")\n", "sns.boxplot(data=data, palette=\"deep\")\n", "sns.despine(left=True)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Temporarily setting figure style\n", "--------------------------------\n", "\n", "Although it's easy to switch back and forth, you can also use the :func:`axes_style` function in a ``with`` statement to temporarily set plot parameters. 
This also allows you to make figures with differently-styled axes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "f = plt.figure(figsize=(6, 6))\n", "gs = f.add_gridspec(2, 2)\n", "\n", "with sns.axes_style(\"darkgrid\"):\n", " ax = f.add_subplot(gs[0, 0])\n", " sinplot()\n", " \n", "with sns.axes_style(\"white\"):\n", " ax = f.add_subplot(gs[0, 1])\n", " sinplot()\n", "\n", "with sns.axes_style(\"ticks\"):\n", " ax = f.add_subplot(gs[1, 0])\n", " sinplot()\n", "\n", "with sns.axes_style(\"whitegrid\"):\n", " ax = f.add_subplot(gs[1, 1])\n", " sinplot()\n", " \n", "f.tight_layout()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Overriding elements of the seaborn styles\n", "-----------------------------------------\n", "\n", "If you want to customize the seaborn styles, you can pass a dictionary of parameters to the ``rc`` argument of :func:`axes_style` and :func:`set_style`. Note that you can only override the parameters that are part of the style definition through this method. (However, the higher-level :func:`set` function takes a dictionary of any matplotlib parameters).\n", "\n", "If you want to see what parameters are included, you can just call the function with no arguments, which will return the current settings:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.axes_style()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "You can then set different versions of these parameters:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set_style(\"darkgrid\", {\"axes.facecolor\": \".9\"})\n", "sinplot()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _plotting_context:\n", "\n", "Scaling plot elements\n", "---------------------\n", "\n", "A separate set of parameters control the scale of plot elements, which should let you use the same code to make plots that are suited for use in settings where larger or smaller plots are appropriate.\n", "\n", "First let's reset the default parameters by calling :func:`set`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The four preset contexts, in order of relative size, are ``paper``, ``notebook``, ``talk``, and ``poster``. The ``notebook`` style is the default, and was used in the plots above." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set_context(\"paper\")\n", "sinplot()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set_context(\"talk\")\n", "sinplot()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set_context(\"poster\")\n", "sinplot()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Most of what you now know about the style functions should transfer to the context functions.\n", "\n", "You can call :func:`set_context` with one of these names to set the parameters, and you can override the parameters by providing a dictionary of parameter values.\n", "\n", "You can also independently scale the size of the font elements when changing the context. (This option is also available through the top-level :func:`set` function)." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set_context(\"notebook\", font_scale=1.5, rc={\"lines.linewidth\": 2.5})\n", "sinplot()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Similarly, you can temporarily control the scale of figures nested under a ``with`` statement.\n", "\n", "Both the style and the context can be quickly configured with the :func:`set` function. This function also sets the default color palette, but that will be covered in more detail in the :ref:`next section ` of the tutorial." ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. raw:: html\n", "\n", "
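"\n", "As a minimal sketch of the ``with`` statement idea mentioned above (this assumes seaborn imported as ``sns`` and the ``sinplot`` helper defined earlier on this page; the ``paper`` context is only an illustrative choice)::\n", "\n", "    with sns.plotting_context(\"paper\"):\n", "        sinplot()  # elements inside the block are scaled for the smaller context\n", "    sinplot()  # outside the block, the default context applies again\n",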
\n" ] } ], "metadata": { "celltoolbar": "Tags", "kernelspec": { "display_name": "seaborn-py37-latest", "language": "python", "name": "seaborn-py37-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.4" } }, "nbformat": 4, "nbformat_minor": 4 } seaborn-0.10.0/doc/tutorial/axis_grids.ipynb000066400000000000000000000542071361256634400210520ustar00rootroot00000000000000{ "cells": [ { "cell_type": "raw", "metadata": {}, "source": [ ".. _grid_tutorial:\n", "\n", ".. currentmodule:: seaborn" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Building structured multi-plot grids\n", "====================================\n", "\n", ".. raw:: html\n", "\n", "
\n" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "When exploring medium-dimensional data, a useful approach is to draw multiple instances of the same plot on different subsets of your dataset. This technique is sometimes called either \"lattice\" or \"trellis\" plotting, and it is related to the idea of `\"small multiples\" `_. It allows a viewer to quickly extract a large amount of information about complex data. Matplotlib offers good support for making figures with multiple axes; seaborn builds on top of this to directly link the structure of the plot to the structure of your dataset.\n", "\n", "To use these features, your data has to be in a Pandas DataFrame and it must take the form of what Hadley Wickham calls `\"tidy\" data `_. In brief, that means your dataframe should be structured such that each column is a variable and each row is an observation.\n", "\n", "For advanced use, you can use the objects discussed in this part of the tutorial directly, which will provide maximum flexibility. Some seaborn functions (such as :func:`lmplot`, :func:`catplot`, and :func:`pairplot`) also use them behind the scenes. Unlike other seaborn functions that are \"Axes-level\" and draw onto specific (possibly already-existing) matplotlib ``Axes`` without otherwise manipulating the figure, these higher-level functions create a figure when called and are generally more strict about how it gets set up. In some cases, arguments either to those functions or to the constructor of the class they rely on will provide a different interface to attributes like the figure size, as in the case of :func:`lmplot` where you can set the height and aspect ratio for each facet rather than the overall size of the figure. Any function that uses one of these objects will always return it after plotting, though, and most of these objects have convenience methods for changing how the plot is drawn, often in a more abstract and easy way." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import seaborn as sns\n", "import matplotlib.pyplot as plt" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set(style=\"ticks\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "%matplotlib inline\n", "import numpy as np\n", "np.random.seed(sum(map(ord, \"axis_grids\")))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _facet_grid:\n", "\n", "Conditional small multiples\n", "---------------------------\n", "\n", "The :class:`FacetGrid` class is useful when you want to visualize the distribution of a variable or the relationship between multiple variables separately within subsets of your dataset. A :class:`FacetGrid` can be drawn with up to three dimensions: ``row``, ``col``, and ``hue``. The first two have obvious correspondence with the resulting array of axes; think of the hue variable as a third dimension along a depth axis, where different levels are plotted with different colors.\n", "\n", "The class is used by initializing a :class:`FacetGrid` object with a dataframe and the names of the variables that will form the row, column, or hue dimensions of the grid. These variables should be categorical or discrete, and then the data at each level of the variable will be used for a facet along that axis. 
For example, say we wanted to examine differences between lunch and dinner in the ``tips`` dataset.\n", "\n", "Additionally, each of :func:`relplot`, :func:`catplot`, and :func:`lmplot` use this object internally, and they return the object when they are finished so that it can be used for further tweaking." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tips = sns.load_dataset(\"tips\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.FacetGrid(tips, col=\"time\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Initializing the grid like this sets up the matplotlib figure and axes, but doesn't draw anything on them.\n", "\n", "The main approach for visualizing data on this grid is with the :meth:`FacetGrid.map` method. Provide it with a plotting function and the name(s) of variable(s) in the dataframe to plot. Let's look at the distribution of tips in each of these subsets, using a histogram." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.FacetGrid(tips, col=\"time\")\n", "g.map(plt.hist, \"tip\");" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "This function will draw the figure and annotate the axes, hopefully producing a finished plot in one step. To make a relational plot, just pass multiple variable names. You can also provide keyword arguments, which will be passed to the plotting function:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.FacetGrid(tips, col=\"sex\", hue=\"smoker\")\n", "g.map(plt.scatter, \"total_bill\", \"tip\", alpha=.7)\n", "g.add_legend();" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "There are several options for controlling the look of the grid that can be passed to the class constructor." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.FacetGrid(tips, row=\"smoker\", col=\"time\", margin_titles=True)\n", "g.map(sns.regplot, \"size\", \"total_bill\", color=\".3\", fit_reg=False, x_jitter=.1);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Note that ``margin_titles`` isn't formally supported by the matplotlib API, and may not work well in all cases. In particular, it currently can't be used with a legend that lies outside of the plot.\n", "\n", "The size of the figure is set by providing the height of *each* facet, along with the aspect ratio:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.FacetGrid(tips, col=\"day\", height=4, aspect=.5)\n", "g.map(sns.barplot, \"sex\", \"total_bill\");" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The default ordering of the facets is derived from the information in the DataFrame. If the variable used to define facets has a categorical type, then the order of the categories is used. Otherwise, the facets will be in the order of appearance of the category levels. 
It is possible, however, to specify an ordering of any facet dimension with the appropriate ``*_order`` parameter:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ordered_days = tips.day.value_counts().index\n", "g = sns.FacetGrid(tips, row=\"day\", row_order=ordered_days,\n", " height=1.7, aspect=4,)\n", "g.map(sns.distplot, \"total_bill\", hist=False, rug=True);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Any seaborn color palette (i.e., something that can be passed to :func:`color_palette()` can be provided. You can also use a dictionary that maps the names of values in the ``hue`` variable to valid matplotlib colors:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pal = dict(Lunch=\"seagreen\", Dinner=\"gray\")\n", "g = sns.FacetGrid(tips, hue=\"time\", palette=pal, height=5)\n", "g.map(plt.scatter, \"total_bill\", \"tip\", s=50, alpha=.7, linewidth=.5, edgecolor=\"white\")\n", "g.add_legend();" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "You can also let other aspects of the plot vary across levels of the hue variable, which can be helpful for making plots that will be more comprehensible when printed in black-and-white. To do this, pass a dictionary to ``hue_kws`` where keys are the names of plotting function keyword arguments and values are lists of keyword values, one for each level of the hue variable." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.FacetGrid(tips, hue=\"sex\", palette=\"Set1\", height=5, hue_kws={\"marker\": [\"^\", \"v\"]})\n", "g.map(plt.scatter, \"total_bill\", \"tip\", s=100, linewidth=.5, edgecolor=\"white\")\n", "g.add_legend();" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "If you have many levels of one variable, you can plot it along the columns but \"wrap\" them so that they span multiple rows. When doing this, you cannot use a ``row`` variable." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "attend = sns.load_dataset(\"attention\").query(\"subject <= 12\")\n", "g = sns.FacetGrid(attend, col=\"subject\", col_wrap=4, height=2, ylim=(0, 10))\n", "g.map(sns.pointplot, \"solutions\", \"score\", order=[1, 2, 3], color=\".3\", ci=None);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Once you've drawn a plot using :meth:`FacetGrid.map` (which can be called multiple times), you may want to adjust some aspects of the plot. There are also a number of methods on the :class:`FacetGrid` object for manipulating the figure at a higher level of abstraction. The most general is :meth:`FacetGrid.set`, and there are other more specialized methods like :meth:`FacetGrid.set_axis_labels`, which respects the fact that interior facets do not have axis labels. 
For example:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with sns.axes_style(\"white\"):\n", " g = sns.FacetGrid(tips, row=\"sex\", col=\"smoker\", margin_titles=True, height=2.5)\n", "g.map(plt.scatter, \"total_bill\", \"tip\", color=\"#334488\", edgecolor=\"white\", lw=.5);\n", "g.set_axis_labels(\"Total bill (US Dollars)\", \"Tip\");\n", "g.set(xticks=[10, 30, 50], yticks=[2, 6, 10]);\n", "g.fig.subplots_adjust(wspace=.02, hspace=.02);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "For even more customization, you can work directly with the underlying matplotlib ``Figure`` and ``Axes`` objects, which are stored as member attributes at ``fig`` and ``axes`` (a two-dimensional array), respectively. When making a figure without row or column faceting, you can also use the ``ax`` attribute to directly access the single axes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.FacetGrid(tips, col=\"smoker\", margin_titles=True, height=4)\n", "g.map(plt.scatter, \"total_bill\", \"tip\", color=\"#338844\", edgecolor=\"white\", s=50, lw=1)\n", "for ax in g.axes.flat:\n", " ax.plot((0, 50), (0, .2 * 50), c=\".2\", ls=\"--\")\n", "g.set(xlim=(0, 60), ylim=(0, 14));" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _custom_map_func:\n", "\n", "Using custom functions\n", "----------------------\n", "\n", "You're not limited to existing matplotlib and seaborn functions when using :class:`FacetGrid`. However, to work properly, any function you use must follow a few rules:\n", "\n", "1. It must plot onto the \"currently active\" matplotlib ``Axes``. This will be true of functions in the ``matplotlib.pyplot`` namespace, and you can call :func:`matplotlib.pyplot.gca` to get a reference to the current ``Axes`` if you want to work directly with its methods.\n", "2. It must accept the data that it plots in positional arguments. Internally, :class:`FacetGrid` will pass a ``Series`` of data for each of the named positional arguments passed to :meth:`FacetGrid.map`.\n", "3. It must be able to accept ``color`` and ``label`` keyword arguments, and, ideally, it will do something useful with them. In most cases, it's easiest to catch a generic dictionary of ``**kwargs`` and pass it along to the underlying plotting function.\n", "\n", "Let's look at a minimal example of a function you can plot with. 
This function will just take a single vector of data for each facet:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from scipy import stats\n", "def quantile_plot(x, **kwargs):\n", " qntls, xr = stats.probplot(x, fit=False)\n", " plt.scatter(xr, qntls, **kwargs)\n", " \n", "g = sns.FacetGrid(tips, col=\"sex\", height=4)\n", "g.map(quantile_plot, \"total_bill\");" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "If we want to make a bivariate plot, you should write the function so that it accepts the x-axis variable first and the y-axis variable second:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def qqplot(x, y, **kwargs):\n", " _, xr = stats.probplot(x, fit=False)\n", " _, yr = stats.probplot(y, fit=False)\n", " plt.scatter(xr, yr, **kwargs)\n", " \n", "g = sns.FacetGrid(tips, col=\"smoker\", height=4)\n", "g.map(qqplot, \"total_bill\", \"tip\");" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Because :func:`matplotlib.pyplot.scatter` accepts ``color`` and ``label`` keyword arguments and does the right thing with them, we can add a hue facet without any difficulty:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.FacetGrid(tips, hue=\"time\", col=\"sex\", height=4)\n", "g.map(qqplot, \"total_bill\", \"tip\")\n", "g.add_legend();" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "This approach also lets us use additional aesthetics to distinguish the levels of the hue variable, along with keyword arguments that won't be dependent on the faceting variables:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.FacetGrid(tips, hue=\"time\", col=\"sex\", height=4,\n", " hue_kws={\"marker\": [\"s\", \"D\"]})\n", "g.map(qqplot, \"total_bill\", \"tip\", s=40, edgecolor=\"w\")\n", "g.add_legend();" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Sometimes, though, you'll want to map a function that doesn't work the way you expect with the ``color`` and ``label`` keyword arguments. In this case, you'll want to explicitly catch them and handle them in the logic of your custom function. For example, this approach will allow us to map :func:`matplotlib.pyplot.hexbin`, which otherwise does not play well with the :class:`FacetGrid` API:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def hexbin(x, y, color, **kwargs):\n", " cmap = sns.light_palette(color, as_cmap=True)\n", " plt.hexbin(x, y, gridsize=15, cmap=cmap, **kwargs)\n", "\n", "with sns.axes_style(\"dark\"):\n", " g = sns.FacetGrid(tips, hue=\"time\", col=\"time\", height=4)\n", "g.map(hexbin, \"total_bill\", \"tip\", extent=[0, 50, 0, 10]);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _pair_grid:\n", "\n", "Plotting pairwise data relationships\n", "------------------------------------\n", "\n", ":class:`PairGrid` also allows you to quickly draw a grid of small subplots using the same plot type to visualize data in each. In a :class:`PairGrid`, each row and column is assigned to a different variable, so the resulting plot shows each pairwise relationship in the dataset. 
This style of plot is sometimes called a \"scatterplot matrix\", as this is the most common way to show each relationship, but :class:`PairGrid` is not limited to scatterplots.\n", "\n", "It's important to understand the differences between a :class:`FacetGrid` and a :class:`PairGrid`. In the former, each facet shows the same relationship conditioned on different levels of other variables. In the latter, each plot shows a different relationship (although the upper and lower triangles will have mirrored plots). Using :class:`PairGrid` can give you a very quick, very high-level summary of interesting relationships in your dataset.\n", "\n", "The basic usage of the class is very similar to :class:`FacetGrid`. First you initialize the grid, then you pass plotting function to a ``map`` method and it will be called on each subplot. There is also a companion function, :func:`pairplot` that trades off some flexibility for faster plotting.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "iris = sns.load_dataset(\"iris\")\n", "g = sns.PairGrid(iris)\n", "g.map(plt.scatter);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "It's possible to plot a different function on the diagonal to show the univariate distribution of the variable in each column. Note that the axis ticks won't correspond to the count or density axis of this plot, though." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.PairGrid(iris)\n", "g.map_diag(plt.hist)\n", "g.map_offdiag(plt.scatter);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "A very common way to use this plot colors the observations by a separate categorical variable. For example, the iris dataset has four measurements for each of three different species of iris flowers so you can see how they differ." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.PairGrid(iris, hue=\"species\")\n", "g.map_diag(plt.hist)\n", "g.map_offdiag(plt.scatter)\n", "g.add_legend();" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "By default every numeric column in the dataset is used, but you can focus on particular relationships if you want." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.PairGrid(iris, vars=[\"sepal_length\", \"sepal_width\"], hue=\"species\")\n", "g.map(plt.scatter);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "It's also possible to use a different function in the upper and lower triangles to emphasize different aspects of the relationship." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.PairGrid(iris)\n", "g.map_upper(plt.scatter)\n", "g.map_lower(sns.kdeplot)\n", "g.map_diag(sns.kdeplot, lw=3, legend=False);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The square grid with identity relationships on the diagonal is actually just a special case, and you can plot with different variables in the rows and columns." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.PairGrid(tips, y_vars=[\"tip\"], x_vars=[\"total_bill\", \"size\"], height=4)\n", "g.map(sns.regplot, color=\".3\")\n", "g.set(ylim=(-1, 11), yticks=[0, 5, 10]);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Of course, the aesthetic attributes are configurable. 
For instance, you can use a different palette (say, to show an ordering of the ``hue`` variable) and pass keyword arguments into the plotting functions." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.PairGrid(tips, hue=\"size\", palette=\"GnBu_d\")\n", "g.map(plt.scatter, s=50, edgecolor=\"white\")\n", "g.add_legend();" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ":class:`PairGrid` is flexible, but to take a quick look at a dataset, it can be easier to use :func:`pairplot`. This function uses scatterplots and histograms by default, although a few other kinds will be added (currently, you can also plot regression plots on the off-diagonals and KDEs on the diagonal)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.pairplot(iris, hue=\"species\", height=2.5);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "You can also control the aesthetics of the plot with keyword arguments, and it returns the :class:`PairGrid` instance for further tweaking." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.pairplot(iris, hue=\"species\", palette=\"Set2\", diag_kind=\"kde\", height=2.5)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. raw:: html\n", "\n", "
" ] } ], "metadata": { "celltoolbar": "Tags", "kernelspec": { "display_name": "seaborn-py37-latest", "language": "python", "name": "seaborn-py37-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.4" } }, "nbformat": 4, "nbformat_minor": 4 } seaborn-0.10.0/doc/tutorial/categorical.ipynb000066400000000000000000000507171361256634400211750ustar00rootroot00000000000000{ "cells": [ { "cell_type": "raw", "metadata": {}, "source": [ ".. _categorical_tutorial:\n", "\n", ".. currentmodule:: seaborn" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Plotting with categorical data\n", "==============================\n", "\n", ".. raw:: html\n", "\n", "
\n", " \n", "In the :ref:`relational plot tutorial ` we saw how to use different visual representations to show the relationship between multiple variables in a dataset. In the examples, we focused on cases where the main relationship was between two numerical variables. If one of the main variables is \"categorical\" (divided into discrete groups) it may be helpful to use a more specialized approach to visualization.\n", "\n", "In seaborn, there are several different ways to visualize a relationship involving categorical data. Similar to the relationship between :func:`relplot` and either :func:`scatterplot` or :func:`lineplot`, there are two ways to make these plots. There are a number of axes-level functions for plotting categorical data in different ways and a figure-level interface, :func:`catplot`, that gives unified higher-level access to them.\n", "\n", "It's helpful to think of the different categorical plot kinds as belonging to three different families, which we'll discuss in detail below. They are:\n", "\n", "Categorical scatterplots:\n", "\n", "- :func:`stripplot` (with ``kind=\"strip\"``; the default)\n", "- :func:`swarmplot` (with ``kind=\"swarm\"``)\n", "\n", "Categorical distribution plots:\n", "\n", "- :func:`boxplot` (with ``kind=\"box\"``)\n", "- :func:`violinplot` (with ``kind=\"violin\"``)\n", "- :func:`boxenplot` (with ``kind=\"boxen\"``)\n", "\n", "Categorical estimate plots:\n", "\n", "- :func:`pointplot` (with ``kind=\"point\"``)\n", "- :func:`barplot` (with ``kind=\"bar\"``)\n", "- :func:`countplot` (with ``kind=\"count\"``)\n", "\n", "These families represent the data using different levels of granularity. When deciding which to use, you'll have to think about the question that you want to answer. The unified API makes it easy to switch between different kinds and see your data from several perspectives.\n", "\n", "In this tutorial, we'll mostly focus on the figure-level interface, :func:`catplot`. Remember that this function is a higher-level interface each of the functions above, so we'll reference them when we show each kind of plot, keeping the more verbose kind-specific API documentation at hand." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import seaborn as sns\n", "import matplotlib.pyplot as plt\n", "sns.set(style=\"ticks\", color_codes=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "%matplotlib inline\n", "import numpy as np\n", "np.random.seed(sum(map(ord, \"categorical\")))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Categorical scatterplots\n", "------------------------\n", "\n", "The default representation of the data in :func:`catplot` uses a scatterplot. There are actually two different categorical scatter plots in seaborn. They take different approaches to resolving the main challenge in representing categorical data with a scatter plot, which is that all of the points belonging to one category would fall on the same position along the axis corresponding to the categorical variable. 
The approach used by :func:`stripplot`, which is the default \"kind\" in :func:`catplot` is to adjust the positions of points on the categorical axis with a small amount of random \"jitter\":" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tips = sns.load_dataset(\"tips\")\n", "sns.catplot(x=\"day\", y=\"total_bill\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The ``jitter`` parameter controls the magnitude of jitter or disables it altogether:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"day\", y=\"total_bill\", jitter=False, data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The second approach adjusts the points along the categorical axis using an algorithm that prevents them from overlapping. It can give a better representation of the distribution of observations, although it only works well for relatively small datasets. This kind of plot is sometimes called a \"beeswarm\" and is drawn in seaborn by :func:`swarmplot`, which is activated by setting ``kind=\"swarm\"`` in :func:`catplot`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"day\", y=\"total_bill\", kind=\"swarm\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Similar to the relational plots, it's possible to add another dimension to a categorical plot by using a ``hue`` semantic. (The categorical plots do not currently support ``size`` or ``style`` semantics). Each different categorical plotting function handles the ``hue`` semantic differently. For the scatter plots, it is only necessary to change the color of the points:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"day\", y=\"total_bill\", hue=\"sex\", kind=\"swarm\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Unlike with numerical data, it is not always obvious how to order the levels of the categorical variable along its axis. In general, the seaborn categorical plotting functions try to infer the order of categories from the data. If your data have a pandas ``Categorical`` datatype, then the default order of the categories can be set there. If the variable passed to the categorical axis looks numerical, the levels will be sorted. But the data are still treated as categorical and drawn at ordinal positions on the categorical axes (specifically, at 0, 1, ...) even when numbers are used to label them:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"size\", y=\"total_bill\", kind=\"swarm\",\n", " data=tips.query(\"size != 3\"));" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The other option for choosing a default ordering is to take the levels of the category as they appear in the dataset. The ordering can also be controlled on a plot-specific basis using the ``order`` parameter. This can be important when drawing multiple categorical plots in the same figure, which we'll see more of below:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"smoker\", y=\"tip\", order=[\"No\", \"Yes\"], data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "We've referred to the idea of \"categorical axis\". In these examples, that's always corresponded to the horizontal axis. 
But it's often helpful to put the categorical variable on the vertical axis (particularly when the category names are relatively long or there are many categories). To do this, swap the assignment of variables to axes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"total_bill\", y=\"day\", hue=\"time\", kind=\"swarm\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Distributions of observations within categories\n", "-----------------------------------------------\n", "\n", "As the size of the dataset grows, categorical scatter plots become limited in the information they can provide about the distribution of values within each category. When this happens, there are several approaches for summarizing the distributional information in ways that facilitate easy comparisons across the category levels.\n", "\n", "Boxplots\n", "^^^^^^^^\n", "\n", "The first is the familiar :func:`boxplot`. This kind of plot shows the three quartile values of the distribution along with extreme values. The \"whiskers\" extend to points that lie within 1.5 IQRs of the lower and upper quartile, and then observations that fall outside this range are displayed independently. This means that each value in the boxplot corresponds to an actual observation in the data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"day\", y=\"total_bill\", kind=\"box\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "When adding a ``hue`` semantic, the box for each level of the semantic variable is moved along the categorical axis so they don't overlap:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"day\", y=\"total_bill\", hue=\"smoker\", kind=\"box\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "This behavior is called \"dodging\" and is turned on by default because it is assumed that the semantic variable is nested within the main categorical variable. If that's not the case, you can disable the dodging:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tips[\"weekend\"] = tips[\"day\"].isin([\"Sat\", \"Sun\"])\n", "sns.catplot(x=\"day\", y=\"total_bill\", hue=\"weekend\",\n", " kind=\"box\", dodge=False, data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "A related function, :func:`boxenplot`, draws a plot that is similar to a box plot but optimized for showing more information about the shape of the distribution. It is best suited for larger datasets:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "diamonds = sns.load_dataset(\"diamonds\")\n", "sns.catplot(x=\"color\", y=\"price\", kind=\"boxen\",\n", " data=diamonds.sort_values(\"color\"));" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Violinplots\n", "^^^^^^^^^^^\n", "\n", "A different approach is a :func:`violinplot`, which combines a boxplot with the kernel density estimation procedure described in the :ref:`distributions ` tutorial:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"total_bill\", y=\"day\", hue=\"sex\",\n", " kind=\"violin\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "This approach uses the kernel density estimate to provide a richer description of the distribution of values. 
Additionally, the quartile and whisker values from the boxplot are shown inside the violin. The downside is that, because the violinplot uses a KDE, there are some other parameters that may need tweaking, adding some complexity relative to the straightforward boxplot:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"total_bill\", y=\"day\", hue=\"sex\",\n", " kind=\"violin\", bw=.15, cut=0,\n", " data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "It's also possible to \"split\" the violins when the hue parameter has only two levels, which can allow for a more efficient use of space:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"day\", y=\"total_bill\", hue=\"sex\",\n", " kind=\"violin\", split=True, data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Finally, there are several options for the plot that is drawn on the interior of the violins, including ways to show each individual observation instead of the summary boxplot values:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"day\", y=\"total_bill\", hue=\"sex\",\n", " kind=\"violin\", inner=\"stick\", split=True,\n", " palette=\"pastel\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "It can also be useful to combine :func:`swarmplot` or :func:`stripplot` with a box plot or violin plot to show each observation along with a summary of the distribution:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.catplot(x=\"day\", y=\"total_bill\", kind=\"violin\", inner=None, data=tips)\n", "sns.swarmplot(x=\"day\", y=\"total_bill\", color=\"k\", size=3, data=tips, ax=g.ax);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Statistical estimation within categories\n", "----------------------------------------\n", "\n", "For other applications, rather than showing the distribution within each category, you might want to show an estimate of the central tendency of the values. Seaborn has two main ways to show this information. Importantly, the basic API for these functions is identical to that for the ones discussed above.\n", "\n", "Bar plots\n", "^^^^^^^^^\n", "\n", "A familiar style of plot that accomplishes this goal is a bar plot. In seaborn, the :func:`barplot` function operates on a full dataset and applies a function to obtain the estimate (taking the mean by default). When there are multiple observations in each category, it also uses bootstrapping to compute a confidence interval around the estimate, which is plotted using error bars:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "titanic = sns.load_dataset(\"titanic\")\n", "sns.catplot(x=\"sex\", y=\"survived\", hue=\"class\", kind=\"bar\", data=titanic);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "A special case for the bar plot is when you want to show the number of observations in each category rather than computing a statistic for a second variable. This is similar to a histogram over a categorical, rather than quantitative, variable. 
In seaborn, it's easy to do so with the :func:`countplot` function:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"deck\", kind=\"count\", palette=\"ch:.25\", data=titanic);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Both :func:`barplot` and :func:`countplot` can be invoked with all of the options discussed above, along with others that are demonstrated in the detailed documentation for each function:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(y=\"deck\", hue=\"class\", kind=\"count\",\n", " palette=\"pastel\", edgecolor=\".6\",\n", " data=titanic);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Point plots\n", "^^^^^^^^^^^\n", "\n", "An alternative style for visualizing the same information is offered by the :func:`pointplot` function. This function also encodes the value of the estimate with height on the other axis, but rather than showing a full bar, it plots the point estimate and confidence interval. Additionally, :func:`pointplot` connects points from the same ``hue`` category. This makes it easy to see how the main relationship is changing as a function of the hue semantic, because your eyes are quite good at picking up on differences of slopes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"sex\", y=\"survived\", hue=\"class\", kind=\"point\", data=titanic);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "While the categorical functions lack the ``style`` semantic of the relational functions, it can still be a good idea to vary the marker and/or linestyle along with the hue to make figures that are maximally accessible and reproduce well in black and white:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"class\", y=\"survived\", hue=\"sex\",\n", " palette={\"male\": \"g\", \"female\": \"m\"},\n", " markers=[\"^\", \"o\"], linestyles=[\"-\", \"--\"],\n", " kind=\"point\", data=titanic);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Plotting \"wide-form\" data\n", "-------------------------\n", "\n", "While using \"long-form\" or \"tidy\" data is preferred, these functions can also be applied to \"wide-form\" data in a variety of formats, including pandas DataFrames or two-dimensional numpy arrays. 
These objects should be passed directly to the ``data`` parameter:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "iris = sns.load_dataset(\"iris\")\n", "sns.catplot(data=iris, orient=\"h\", kind=\"box\");" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Additionally, the axes-level functions accept vectors of Pandas or numpy objects rather than variables in a ``DataFrame``:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.violinplot(x=iris.species, y=iris.sepal_length);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "To control the size and shape of plots made by the functions discussed above, you must set up the figure yourself using matplotlib commands:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "f, ax = plt.subplots(figsize=(7, 3))\n", "sns.countplot(y=\"deck\", data=titanic, color=\"c\");" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "This is the approach you should take when you need a categorical figure to happily coexist in a more complex figure with other kinds of plots.\n", "\n", "Showing multiple relationships with facets\n", "------------------------------------------\n", "\n", "Just like :func:`relplot`, the fact that :func:`catplot` is built on a :class:`FacetGrid` means that it is easy to add faceting variables to visualize higher-dimensional relationships:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.catplot(x=\"day\", y=\"total_bill\", hue=\"smoker\",\n", " col=\"time\", aspect=.6,\n", " kind=\"swarm\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "For further customization of the plot, you can use the methods on the :class:`FacetGrid` object that it returns:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.catplot(x=\"fare\", y=\"survived\", row=\"class\",\n", " kind=\"box\", orient=\"h\", height=1.5, aspect=4,\n", " data=titanic.query(\"fare > 0\"))\n", "g.set(xscale=\"log\");" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. raw:: html\n", "\n", "
" ] } ], "metadata": { "celltoolbar": "Tags", "kernelspec": { "display_name": "Python 3.6 (seaborn-py37-latest)", "language": "python", "name": "seaborn-py37-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.4" } }, "nbformat": 4, "nbformat_minor": 1 } seaborn-0.10.0/doc/tutorial/color_palettes.ipynb000066400000000000000000000647621361256634400217440ustar00rootroot00000000000000{ "cells": [ { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ ".. _palette_tutorial:\n", "\n", ".. currentmodule:: seaborn" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Choosing color palettes\n", "=======================\n", "\n", ".. raw:: html\n", "\n", "
\n" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "Color is more important than other aspects of figure style because color can reveal patterns in the data if used effectively or hide those patterns if used poorly. There are a number of great resources to learn about good techniques for using color in visualizations, I am partial to this `series of blog posts `_ from Rob Simmon and this `more technical paper `_. The matplotlib docs also now have a `nice tutorial `_ that illustrates some of the perceptual properties of the built in colormaps.\n", "\n", "Seaborn makes it easy to select and use color palettes that are suited to the kind of data you are working with and the goals you have in visualizing it." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import seaborn as sns\n", "import matplotlib.pyplot as plt\n", "sns.set()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "%matplotlib inline\n", "np.random.seed(sum(map(ord, \"palettes\")))" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "Building color palettes\n", "-----------------------\n", "\n", "The most important function for working with discrete color palettes is :func:`color_palette`. This function provides an interface to many (though not all) of the possible ways you can generate colors in seaborn, and it's used internally by any function that has a ``palette`` argument (and in some cases for a ``color`` argument when multiple colors are needed).\n", "\n", ":func:`color_palette` will accept the name of any seaborn palette or matplotlib colormap (except ``jet``, which you should never use). It can also take a list of colors specified in any valid matplotlib format (RGB tuples, hex color codes, or HTML color names). The return value is always a list of RGB tuples.\n", "\n", "Finally, calling :func:`color_palette` with no arguments will return the current default color cycle.\n", "\n", "A corresponding function, :func:`set_palette`, takes the same arguments and will set the default color cycle for all plots. You can also use :func:`color_palette` in a ``with`` statement to temporarily change the default palette (see :ref:`below `).\n", "\n", "It is generally not possible to know what kind of color palette or colormap is best for a set of data without knowing about the characteristics of the data. Following that, we'll break up the different ways to use :func:`color_palette` and other seaborn palette functions by the three general kinds of color palettes: *qualitative*, *sequential*, and *diverging*." ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _qualitative_palettes:\n", "\n", "Qualitative color palettes\n", "--------------------------\n", "\n", "Qualitative (or categorical) palettes are best when you want to distinguish discrete chunks of data that do not have an inherent ordering.\n", "\n", "When importing seaborn, the default color cycle is changed to a set of ten colors that evoke the standard matplotlib color cycle while aiming to be a bit more pleasing to look at." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "current_palette = sns.color_palette()\n", "sns.palplot(current_palette)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "There are six variations of the default theme, called ``deep``, ``muted``, ``pastel``, ``bright``, ``dark``, and ``colorblind``." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-input" ] }, "outputs": [], "source": [ "# TODO hide input here when merged with doc updating branch\n", "f = plt.figure(figsize=(6, 6))\n", "\n", "ax_locs = dict(\n", " deep=(.4, .4),\n", " bright=(.8, .8),\n", " muted=(.49, .71),\n", " dark=(.8, .2),\n", " pastel=(.2, .8),\n", " colorblind=(.71, .49),\n", ")\n", "\n", "s = .35\n", "\n", "for pal, (x, y) in ax_locs.items():\n", " ax = f.add_axes([x - s / 2, y - s / 2, s, s])\n", " ax.pie(np.ones(10),\n", " colors=sns.color_palette(pal, 10),\n", " counterclock=False, startangle=180,\n", " wedgeprops=dict(linewidth=1, edgecolor=\"w\"))\n", " f.text(x, y, pal, ha=\"center\", va=\"center\", size=14,\n", " bbox=dict(facecolor=\"white\", alpha=0.85, boxstyle=\"round,pad=0.2\"))\n", "\n", "f.text(.1, .05, \"Saturation\", size=18, ha=\"left\", va=\"center\",\n", " bbox=dict(facecolor=\"white\", edgecolor=\"w\"))\n", "f.text(.05, .1, \"Luminance\", size=18, ha=\"center\", va=\"bottom\", rotation=90,\n", " bbox=dict(facecolor=\"white\", edgecolor=\"w\"))\n", "\n", "ax = f.add_axes([0, 0, 1, 1])\n", "ax.set_axis_off()\n", "ax.arrow(.15, .05, .4, 0, width=.002, head_width=.015, color=\"k\")\n", "ax.arrow(.05, .15, 0, .4, width=.002, head_width=.015, color=\"k\");" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Using circular color systems\n", "~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n", "\n", "When you have an arbitrary number of categories to distinguish without emphasizing any one, the easiest approach is to draw evenly-spaced colors in a circular color space (one where the hue changes while keeping the brightness and saturation constant). This is what most seaborn functions default to when they need to use more colors than are currently set in the default color cycle.\n", "\n", "The most common way to do this uses the ``hls`` color space, which is a simple transformation of RGB values." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.palplot(sns.color_palette(\"hls\", 8))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "There is also the :func:`hls_palette` function that lets you control the lightness and saturation of the colors." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.palplot(sns.hls_palette(8, l=.3, s=.8))" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "However, because of the way the human visual system works, colors that are even \"intensity\" in terms of their RGB levels won't necessarily look equally intense. `We perceive `_ yellows and greens as relatively brighter and blues as relatively darker, which can be a problem when aiming for uniformity with the ``hls`` system.\n", "\n", "To remedy this, seaborn provides an interface to the `husl `_ system (since renamed to HSLuv), which also makes it easy to select evenly spaced hues while keeping the apparent brightness and saturation much more uniform." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.palplot(sns.color_palette(\"husl\", 8))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "There is similarly a function called :func:`husl_palette` that provides a more flexible interface to this system.\n", "\n", "Using categorical Color Brewer palettes\n", "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n", "\n", "Another source of visually pleasing categorical palettes comes from the `Color Brewer `_ tool (which also has sequential and diverging palettes, as we'll see below). These also exist as matplotlib colormaps, but they are not handled properly. In seaborn, when you ask for a qualitative Color Brewer palette, you'll always get the discrete colors, but this means that at a certain point they will begin to cycle.\n", "\n", "A nice feature of the Color Brewer website is that it provides some guidance on which palettes are color blind safe. There is a variety of `kinds `_ of color blindness, but the most common variant leads to difficulty distinguishing reds and greens. It's generally a good idea to avoid using red and green for plot elements that need to be discriminated based on color. [This comparison](https://gist.github.com/mwaskom/b35f6ebc2d4b340b4f64a4e28e778486) can be helpful to understand how the the seaborn color palettes perform for different type of colorblindess." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.palplot(sns.color_palette(\"Paired\"))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.palplot(sns.color_palette(\"Set2\"))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "To help you choose palettes from the Color Brewer library, there is the :func:`choose_colorbrewer_palette` function. This function, which must be used in a Jupyter notebook, will launch an interactive widget that lets you browse the various options and tweak their parameters.\n", "\n", "Of course, you might just want to use a set of colors you particularly like together. Because :func:`color_palette` accepts a list of colors, this is easy to do." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "flatui = [\"#9b59b6\", \"#3498db\", \"#95a5a6\", \"#e74c3c\", \"#34495e\", \"#2ecc71\"]\n", "sns.palplot(sns.color_palette(flatui))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _using_xkcd_palettes:\n", " \n", "Using named colors from the xkcd color survey\n", "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n", "\n", "A while back, `xkcd `_ ran a `crowdsourced effort `_ to name random RGB colors. This produced a set of `954 named colors `_, which you can now reference in seaborn using the ``xkcd_rgb`` dictionary:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.plot([0, 1], [0, 1], sns.xkcd_rgb[\"pale red\"], lw=3)\n", "plt.plot([0, 1], [0, 2], sns.xkcd_rgb[\"medium green\"], lw=3)\n", "plt.plot([0, 1], [0, 3], sns.xkcd_rgb[\"denim blue\"], lw=3);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "In addition to pulling out single colors from the ``xkcd_rgb`` dictionary, you can also pass a list of names to the :func:`xkcd_palette` function." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "colors = [\"windows blue\", \"amber\", \"greyish\", \"faded green\", \"dusty purple\"]\n", "sns.palplot(sns.xkcd_palette(colors))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _sequential_palettes:\n", "\n", "Sequential color palettes\n", "-------------------------\n", "\n", "The second major class of color palettes is called \"sequential\". This kind of color mapping is appropriate when data range from relatively low or uninteresting values to relatively high or interesting values. Although there are cases where you will want discrete colors in a sequential palette, it's more common to use them as a colormap in functions like :func:`kdeplot` and :func:`heatmap` (along with similar matplotlib functions).\n", "\n", "It's common to see colormaps like ``jet`` (or other rainbow palettes) used in this case, because the range of hues gives the impression of providing additional information about the data. However, colormaps with large hue shifts tend to introduce discontinuities that don't exist in the data, and our visual system isn't able to naturally map the rainbow to quantitative distinctions like \"high\" or \"low\". The result is that these visualizations end up being more like a puzzle, and they obscure patterns in the data rather than revealing them. The jet colormap is misleading because the brightest colors, yellow and cyan, are used for intermediate data values. This has the effect of emphasizing uninteresting (and arbitrary) values while deemphasizing the extremes.\n", "\n", "For sequential data, it's better to use palettes that have at most a relatively subtle shift in hue accompanied by a large shift in brightness and saturation. This approach will naturally draw the eye to the relatively important parts of the data.\n", "\n", "The Color Brewer library has a great set of these palettes. They're named after the dominant color (or colors) in the palette." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.palplot(sns.color_palette(\"Blues\"))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Like in matplotlib, if you want the lightness ramp to be reversed, you can add a ``_r`` suffix to the palette name." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.palplot(sns.color_palette(\"BuGn_r\"))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Seaborn also adds a trick that allows you to create \"dark\" palettes, which do not have as wide a dynamic range. This can be useful if you want to map lines or points sequentially, as brighter-colored lines might otherwise be hard to distinguish." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.palplot(sns.color_palette(\"GnBu_d\"))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Remember that you may want to use the :func:`choose_colorbrewer_palette` function to play with the various options, and you can set the ``as_cmap`` argument to ``True`` if you want the return value to be a colormap object that you can pass to seaborn or matplotlib functions." ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ ".. 
_cubehelix_palettes:\n", "\n", "Sequential \"cubehelix\" palettes\n", "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n", "\n", "The `cubehelix `_ color palette system makes sequential palettes with a linear increase or decrease in brightness and some variation in hue. This means that the information in your colormap will be preserved when converted to black and white (for printing) or when viewed by a colorblind individual.\n", "\n", "Matplotlib has the default cubehelix version built into it:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.palplot(sns.color_palette(\"cubehelix\", 8))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Seaborn adds an interface to the cubehelix *system* so that you can make a variety of palettes that all have a well-behaved linear brightness ramp.\n", "\n", "The default palette returned by the seaborn :func:`cubehelix_palette` function is a bit different from the matplotlib default in that it does not rotate as far around the hue wheel or cover as wide a range of intensities. It also reverses the order so that more important values are darker:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.palplot(sns.cubehelix_palette(8))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Other arguments to :func:`cubehelix_palette` control how the palette looks. The two main things you'll change are the ``start`` (a value between 0 and 3) and ``rot``, or number of rotations (an arbitrary value, but probably within -1 and 1)," ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.palplot(sns.cubehelix_palette(8, start=.5, rot=-.75))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "You can also control how dark and light the endpoints are and even reverse the ramp:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.palplot(sns.cubehelix_palette(8, start=2, rot=0, dark=0, light=.95, reverse=True))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "By default you just get a list of colors, like any other seaborn palette, but you can also return the palette as a colormap object that can be passed to seaborn or matplotlib functions using ``as_cmap=True``." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x, y = np.random.multivariate_normal([0, 0], [[1, -.5], [-.5, 1]], size=300).T\n", "cmap = sns.cubehelix_palette(light=1, as_cmap=True)\n", "sns.kdeplot(x, y, cmap=cmap, shade=True);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "To help select good palettes or colormaps using this system, you can use the :func:`choose_cubehelix_palette` function in a notebook to launch an interactive app that will let you play with the different parameters. Pass ``as_cmap=True`` if you want the function to return a colormap (rather than a list) for use in function like ``hexbin``." ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Custom sequential palettes\n", "~~~~~~~~~~~~~~~~~~~~~~~~~~\n", "\n", "For a simpler interface to custom sequential palettes, you can use :func:`light_palette` or :func:`dark_palette`, which are both seeded with a single color and produce a palette that ramps either from light or dark desaturated values to that color. These functions are also accompanied by the :func:`choose_light_palette` and :func:`choose_dark_palette` functions that launch interactive widgets to create these palettes." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.palplot(sns.light_palette(\"green\"))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.palplot(sns.dark_palette(\"purple\"))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "These palettes can also be reversed." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.palplot(sns.light_palette(\"navy\", reverse=True))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "They can also be used to create colormap objects rather than lists of colors." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pal = sns.dark_palette(\"palegreen\", as_cmap=True)\n", "sns.kdeplot(x, y, cmap=pal);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "By default, the input can be any valid matplotlib color. Alternate interpretations are controlled by the ``input`` argument. Currently you can provide tuples in ``hls`` or ``husl`` space along with the default ``rgb``, and you can also seed the palette with any valid ``xkcd`` color." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.palplot(sns.light_palette((210, 90, 60), input=\"husl\"))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.palplot(sns.dark_palette(\"muted purple\", input=\"xkcd\"))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Note that the default input space for the interactive palette widgets is ``husl``, which is different from the default for the function itself, but much more useful in this context." ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _diverging_palettes:\n", "\n", "Diverging color palettes\n", "------------------------\n", "\n", "The third class of color palettes is called \"diverging\". These are used for data where both large low and high values are interesting. There is also usually a well-defined midpoint in the data. For instance, if you are plotting changes in temperature from some baseline timepoint, it is best to use a diverging colormap to show areas with relative decreases and areas with relative increases.\n", "\n", "The rules for choosing good diverging palettes are similar to good sequential palettes, except now you want to have two relatively subtle hue shifts from distinct starting hues that meet in an under-emphasized color at the midpoint. It's also important that the starting values are of similar brightness and saturation.\n", "\n", "It's also important to emphasize here that using red and green should be avoided, as a substantial population of potential viewers will be `unable to distinguish them `_.\n", "\n", "It should not surprise you that the Color Brewer library comes with a set of well-chosen diverging colormaps." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.palplot(sns.color_palette(\"BrBG\", 7))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.palplot(sns.color_palette(\"RdBu_r\", 7))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Another good choice that is built into matplotlib is the ``coolwarm`` palette. Note that this colormap has less contrast between the middle values and the extremes." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.palplot(sns.color_palette(\"coolwarm\", 7))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Custom diverging palettes\n", "~~~~~~~~~~~~~~~~~~~~~~~~~\n", "\n", "You can also use the seaborn function :func:`diverging_palette` to create a custom colormap for diverging data. (Naturally there is also a companion interactive widget, :func:`choose_diverging_palette`). This function makes diverging palettes using the ``husl`` color system. You pass it two hues (in degrees) and, optionally, the lightness and saturation values for the extremes. Using ``husl`` means that the extreme values, and the resulting ramps to the midpoint, will be well-balanced." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.palplot(sns.diverging_palette(220, 20, n=7))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.palplot(sns.diverging_palette(145, 280, s=85, l=25, n=7))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The ``sep`` argument controls the width of the separation between the two ramps in the middle region of the palette." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.palplot(sns.diverging_palette(10, 220, sep=80, n=7))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "It's also possible to make a palette with the midpoint is dark rather than light." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.palplot(sns.diverging_palette(255, 133, l=60, n=7, center=\"dark\"))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _palette_contexts:\n", "\n", "Setting the default color palette\n", "---------------------------------\n", "\n", "The :func:`color_palette` function has a companion called :func:`set_palette`. The relationship between them is similar to the pairs covered in the :ref:`aesthetics tutorial `. :func:`set_palette` accepts the same arguments as :func:`color_palette`, but it changes the default matplotlib parameters so that the palette is used for all plots." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def sinplot(flip=1):\n", " x = np.linspace(0, 14, 100)\n", " for i in range(1, 7):\n", " plt.plot(x, np.sin(x + i * .5) * (7 - i) * flip)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set_palette(\"husl\")\n", "sinplot()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The :func:`color_palette` function can also be used in a ``with`` statement to temporarily change the color palette." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with sns.color_palette(\"PuBuGn_d\"):\n", " sinplot()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. raw:: html\n", "\n", "
" ] } ], "metadata": { "celltoolbar": "Tags", "kernelspec": { "display_name": "Python 3.6 (seaborn-py37-latest)", "language": "python", "name": "seaborn-py37-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.4" } }, "nbformat": 4, "nbformat_minor": 1 } seaborn-0.10.0/doc/tutorial/distributions.ipynb000066400000000000000000000344731361256634400216230ustar00rootroot00000000000000{ "cells": [ { "cell_type": "raw", "metadata": {}, "source": [ ".. _distribution_tutorial:\n", "\n", ".. currentmodule:: seaborn" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Visualizing the distribution of a dataset\n", "=========================================\n", "\n", ".. raw:: html\n", "\n", "
\n" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "When dealing with a set of data, often the first thing you'll want to do is get a sense for how the variables are distributed. This chapter of the tutorial will give a brief introduction to some of the tools in seaborn for examining univariate and bivariate distributions. You may also want to look at the :ref:`categorical plots ` chapter for examples of functions that make it easy to compare the distribution of a variable across levels of other variables." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "import seaborn as sns\n", "import matplotlib.pyplot as plt\n", "from scipy import stats" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set(color_codes=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "%matplotlib inline\n", "np.random.seed(sum(map(ord, \"distributions\")))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Plotting univariate distributions\n", "---------------------------------\n", "\n", "The most convenient way to take a quick look at a univariate distribution in seaborn is the :func:`distplot` function. By default, this will draw a `histogram `_ and fit a `kernel density estimate `_ (KDE). " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x = np.random.normal(size=100)\n", "sns.distplot(x);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Histograms\n", "^^^^^^^^^^\n", "\n", "Histograms are likely familiar, and a ``hist`` function already exists in matplotlib. A histogram represents the distribution of data by forming bins along the range of the data and then drawing bars to show the number of observations that fall in each bin.\n", "\n", "To illustrate this, let's remove the density curve and add a rug plot, which draws a small vertical tick at each observation. You can make the rug plot itself with the :func:`rugplot` function, but it is also available in :func:`distplot`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.distplot(x, kde=False, rug=True);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "When drawing histograms, the main choice you have is the number of bins to use and where to place them. :func:`distplot` uses a simple rule to make a good guess for what the right number is by default, but trying more or fewer bins might reveal other features in the data:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.distplot(x, bins=20, kde=False, rug=True);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Kernel density estimation\n", "^^^^^^^^^^^^^^^^^^^^^^^^^\n", "\n", "The kernel density estimate may be less familiar, but it can be a useful tool for plotting the shape of a distribution. Like the histogram, the KDE plots encode the density of observations on one axis with height along the other axis:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.distplot(x, hist=False, rug=True);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Drawing a KDE is more computationally involved than drawing a histogram. 
What happens is that each observation is first replaced with a normal (Gaussian) curve centered at that value:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x = np.random.normal(0, 1, size=30)\n", "bandwidth = 1.06 * x.std() * x.size ** (-1 / 5.)\n", "support = np.linspace(-4, 4, 200)\n", "\n", "kernels = []\n", "for x_i in x:\n", "\n", " kernel = stats.norm(x_i, bandwidth).pdf(support)\n", " kernels.append(kernel)\n", " plt.plot(support, kernel, color=\"r\")\n", "\n", "sns.rugplot(x, color=\".2\", linewidth=3);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Next, these curves are summed to compute the value of the density at each point in the support grid. The resulting curve is then normalized so that the area under it is equal to 1:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from scipy.integrate import trapz\n", "density = np.sum(kernels, axis=0)\n", "density /= trapz(density, support)\n", "plt.plot(support, density);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "We can see that if we use the :func:`kdeplot` function in seaborn, we get the same curve. This function is used by :func:`distplot`, but it provides a more direct interface with easier access to other options when you just want the density estimate:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.kdeplot(x, shade=True);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The bandwidth (``bw``) parameter of the KDE controls how tightly the estimation is fit to the data, much like the bin size in a histogram. It corresponds to the width of the kernels we plotted above. The default behavior tries to guess a good value using a common reference rule, but it may be helpful to try larger or smaller values:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.kdeplot(x)\n", "sns.kdeplot(x, bw=.2, label=\"bw: 0.2\")\n", "sns.kdeplot(x, bw=2, label=\"bw: 2\")\n", "plt.legend();" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "As you can see above, the nature of the Gaussian KDE process means that estimation extends past the largest and smallest values in the dataset. It's possible to control how far past the extreme values the curve is drawn with the ``cut`` parameter; however, this only influences how the curve is drawn and not how it is fit:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.kdeplot(x, shade=True, cut=0)\n", "sns.rugplot(x);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Fitting parametric distributions\n", "^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n", "\n", "You can also use :func:`distplot` to fit a parametric distribution to a dataset and visually evaluate how closely it corresponds to the observed data:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x = np.random.gamma(6, size=200)\n", "sns.distplot(x, kde=False, fit=stats.gamma);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Plotting bivariate distributions\n", "--------------------------------\n", "\n", "It can also be useful to visualize a bivariate distribution of two variables. 
The easiest way to do this in seaborn is to just use the :func:`jointplot` function, which creates a multi-panel figure that shows both the bivariate (or joint) relationship between two variables along with the univariate (or marginal) distribution of each on separate axes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mean, cov = [0, 1], [(1, .5), (.5, 1)]\n", "data = np.random.multivariate_normal(mean, cov, 200)\n", "df = pd.DataFrame(data, columns=[\"x\", \"y\"])" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Scatterplots\n", "^^^^^^^^^^^^\n", "\n", "The most familiar way to visualize a bivariate distribution is a scatterplot, where each observation is shown with a point at the *x* and *y* values. This is analogous to a rug plot on two dimensions. You can draw a scatterplot with :func:`scatterplot`, and it is also the default kind of plot shown by the :func:`jointplot` function:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.jointplot(x=\"x\", y=\"y\", data=df);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Hexbin plots\n", "^^^^^^^^^^^^\n", "\n", "A bivariate analogue of a histogram is known as a \"hexbin\" plot, because it shows the counts of observations that fall within hexagonal bins. This plot works best with relatively large datasets. It's available in matplotlib as :meth:`matplotlib.axes.Axes.hexbin` and as a style in :func:`jointplot`. It looks best with a white background:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x, y = np.random.multivariate_normal(mean, cov, 1000).T\n", "with sns.axes_style(\"white\"):\n", " sns.jointplot(x=x, y=y, kind=\"hex\", color=\"k\");" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Kernel density estimation\n", "^^^^^^^^^^^^^^^^^^^^^^^^^\n", "\n", "It is also possible to use the kernel density estimation procedure described above to visualize a bivariate distribution. In seaborn, this kind of plot is shown with a contour plot and is available as a style in :func:`jointplot`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.jointplot(x=\"x\", y=\"y\", data=df, kind=\"kde\");" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "You can also draw a two-dimensional kernel density plot with the :func:`kdeplot` function. This allows you to draw this kind of plot onto a specific (and possibly already existing) matplotlib axes, whereas the :func:`jointplot` function manages its own figure:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "f, ax = plt.subplots(figsize=(6, 6))\n", "sns.kdeplot(df.x, df.y, ax=ax)\n", "sns.rugplot(df.x, color=\"g\", ax=ax)\n", "sns.rugplot(df.y, vertical=True, ax=ax);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "If you wish to show the bivariate density more continuously, you can simply increase the number of contour levels:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "f, ax = plt.subplots(figsize=(6, 6))\n", "cmap = sns.cubehelix_palette(as_cmap=True, dark=0, light=1, reverse=True)\n", "sns.kdeplot(df.x, df.y, cmap=cmap, n_levels=60, shade=True);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The :func:`jointplot` function uses a :class:`JointGrid` to manage the figure. 
For more flexibility, you may want to draw your figure by using :class:`JointGrid` directly. :func:`jointplot` returns the :class:`JointGrid` object after plotting, which you can use to add more layers or to tweak other aspects of the visualization:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.jointplot(x=\"x\", y=\"y\", data=df, kind=\"kde\", color=\"m\")\n", "g.plot_joint(plt.scatter, c=\"w\", s=30, linewidth=1, marker=\"+\")\n", "g.ax_joint.collections[0].set_alpha(0)\n", "g.set_axis_labels(\"$X$\", \"$Y$\");" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Visualizing pairwise relationships in a dataset\n", "-----------------------------------------------\n", "\n", "To plot multiple pairwise bivariate distributions in a dataset, you can use the :func:`pairplot` function. This creates a matrix of axes and shows the relationship for each pair of columns in a DataFrame. By default, it also draws the univariate distribution of each variable on the diagonal Axes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "iris = sns.load_dataset(\"iris\")\n", "sns.pairplot(iris);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Specifying the ``hue`` parameter automatically changes the histograms to KDE plots to facilitate comparisons between multiple distributions." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.pairplot(iris, hue=\"species\");" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Much like the relationship between :func:`jointplot` and :class:`JointGrid`, the :func:`pairplot` function is built on top of a :class:`PairGrid` object, which can be used directly for more flexibility:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = sns.PairGrid(iris)\n", "g.map_diag(sns.kdeplot)\n", "g.map_offdiag(sns.kdeplot, n_levels=6);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. raw:: html\n", "\n", "
" ] } ], "metadata": { "celltoolbar": "Tags", "kernelspec": { "display_name": "Python 3.6 (seaborn-py37-latest)", "language": "python", "name": "seaborn-py37-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.4" } }, "nbformat": 4, "nbformat_minor": 1 } seaborn-0.10.0/doc/tutorial/regression.ipynb000066400000000000000000000436001361256634400210710ustar00rootroot00000000000000{ "cells": [ { "cell_type": "raw", "metadata": {}, "source": [ ".. _regression_tutorial:\n", "\n", ".. currentmodule:: seaborn" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Visualizing linear relationships\n", "================================\n", "\n", ".. raw:: html\n", "\n", "
" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Many datasets contain multiple quantitative variables, and the goal of an analysis is often to relate those variables to each other. We :ref:`previously discussed ` functions that can accomplish this by showing the joint distribution of two variables. It can be very helpful, though, to use statistical models to estimate a simple relationship between two noisy sets of observations. The functions discussed in this chapter will do so through the common framework of linear regression.\n", "\n", "In the spirit of Tukey, the regression plots in seaborn are primarily intended to add a visual guide that helps to emphasize patterns in a dataset during exploratory data analyses. That is to say that seaborn is not itself a package for statistical analysis. To obtain quantitative measures related to the fit of regression models, you should use `statsmodels `_. The goal of seaborn, however, is to make exploring a dataset through visualization quick and easy, as doing so is just as (if not more) important than exploring a dataset through tables of statistics." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import seaborn as sns\n", "import matplotlib.pyplot as plt" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set(color_codes=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "%matplotlib inline\n", "np.random.seed(sum(map(ord, \"regression\")))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tips = sns.load_dataset(\"tips\")" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Functions to draw linear regression models\n", "------------------------------------------\n", "\n", "Two main functions in seaborn are used to visualize a linear relationship as determined through regression. These functions, :func:`regplot` and :func:`lmplot` are closely related, and share much of their core functionality. It is important to understand the ways they differ, however, so that you can quickly choose the correct tool for particular job.\n", "\n", "In the simplest invocation, both functions draw a scatterplot of two variables, ``x`` and ``y``, and then fit the regression model ``y ~ x`` and plot the resulting regression line and a 95% confidence interval for that regression:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.regplot(x=\"total_bill\", y=\"tip\", data=tips);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"total_bill\", y=\"tip\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "You should note that the resulting plots are identical, except that the figure shapes are different. We will explain why this is shortly. For now, the other main difference to know about is that :func:`regplot` accepts the ``x`` and ``y`` variables in a variety of formats including simple numpy arrays, pandas ``Series`` objects, or as references to variables in a pandas ``DataFrame`` object passed to ``data``. In contrast, :func:`lmplot` has ``data`` as a required parameter and the ``x`` and ``y`` variables must be specified as strings. This data format is called \"long-form\" or `\"tidy\" `_ data. 
Other than this input flexibility, :func:`regplot` possesses a subset of :func:`lmplot`'s features, so we will demonstrate them using the latter.\n", "\n", "It's possible to fit a linear regression when one of the variables takes discrete values, however, the simple scatterplot produced by this kind of dataset is often not optimal:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"size\", y=\"tip\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "One option is to add some random noise (\"jitter\") to the discrete values to make the distribution of those values more clear. Note that jitter is applied only to the scatterplot data and does not influence the regression line fit itself:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"size\", y=\"tip\", data=tips, x_jitter=.05);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "A second option is to collapse over the observations in each discrete bin to plot an estimate of central tendency along with a confidence interval:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"size\", y=\"tip\", data=tips, x_estimator=np.mean);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Fitting different kinds of models\n", "---------------------------------\n", "\n", "The simple linear regression model used above is very simple to fit, however, it is not appropriate for some kinds of datasets. The `Anscombe's quartet `_ dataset shows a few examples where simple linear regression provides an identical estimate of a relationship where simple visual inspection clearly shows differences. For example, in the first case, the linear regression is a good model:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "anscombe = sns.load_dataset(\"anscombe\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"x\", y=\"y\", data=anscombe.query(\"dataset == 'I'\"),\n", " ci=None, scatter_kws={\"s\": 80});" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The linear relationship in the second dataset is the same, but the plot clearly shows that this is not a good model:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"x\", y=\"y\", data=anscombe.query(\"dataset == 'II'\"),\n", " ci=None, scatter_kws={\"s\": 80});" ] }, { "cell_type": "raw", "metadata": { "collapsed": true }, "source": [ "In the presence of these kind of higher-order relationships, :func:`lmplot` and :func:`regplot` can fit a polynomial regression model to explore simple kinds of nonlinear trends in the dataset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"x\", y=\"y\", data=anscombe.query(\"dataset == 'II'\"),\n", " order=2, ci=None, scatter_kws={\"s\": 80});" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "A different problem is posed by \"outlier\" observations that deviate for some reason other than the main relationship under study:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"x\", y=\"y\", data=anscombe.query(\"dataset == 'III'\"),\n", " ci=None, scatter_kws={\"s\": 80});" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "In the presence of outliers, it can be 
useful to fit a robust regression, which uses a different loss function to downweight relatively large residuals:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"x\", y=\"y\", data=anscombe.query(\"dataset == 'III'\"),\n", " robust=True, ci=None, scatter_kws={\"s\": 80});" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "When the ``y`` variable is binary, simple linear regression also \"works\" but provides implausible predictions:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tips[\"big_tip\"] = (tips.tip / tips.total_bill) > .15\n", "sns.lmplot(x=\"total_bill\", y=\"big_tip\", data=tips,\n", " y_jitter=.03);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The solution in this case is to fit a logistic regression, such that the regression line shows the estimated probability of ``y = 1`` for a given value of ``x``:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"total_bill\", y=\"big_tip\", data=tips,\n", " logistic=True, y_jitter=.03);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Note that the logistic regression estimate is considerably more computationally intensive (this is true of robust regression as well) than simple regression, and as the confidence interval around the regression line is computed using a bootstrap procedure, you may wish to turn this off for faster iteration (using ``ci=None``).\n", "\n", "An altogether different approach is to fit a nonparametric regression using a `lowess smoother `_. This approach has the fewest assumptions, although it is computationally intensive and so currently confidence intervals are not computed at all:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"total_bill\", y=\"tip\", data=tips,\n", " lowess=True);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The :func:`residplot` function can be a useful tool for checking whether the simple regression model is appropriate for a dataset. It fits and removes a simple linear regression and then plots the residual values for each observation. Ideally, these values should be randomly scattered around ``y = 0``:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.residplot(x=\"x\", y=\"y\", data=anscombe.query(\"dataset == 'I'\"),\n", " scatter_kws={\"s\": 80});" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "If there is structure in the residuals, it suggests that simple linear regression is not appropriate:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.residplot(x=\"x\", y=\"y\", data=anscombe.query(\"dataset == 'II'\"),\n", " scatter_kws={\"s\": 80});" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Conditioning on other variables\n", "-------------------------------\n", "\n", "The plots above show many ways to explore the relationship between a pair of variables. Often, however, a more interesting question is \"how does the relationship between these two variables change as a function of a third variable?\" This is where the difference between :func:`regplot` and :func:`lmplot` appears. 
While :func:`regplot` always shows a single relationship, :func:`lmplot` combines :func:`regplot` with :class:`FacetGrid` to provide an easy interface to show a linear regression on \"faceted\" plots that allow you to explore interactions with up to three additional categorical variables.\n", "\n", "The best way to separate out a relationship is to plot both levels on the same axes and to use color to distinguish them:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"total_bill\", y=\"tip\", hue=\"smoker\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "In addition to color, it's possible to use different scatterplot markers to make plots that reproduce better in black and white. You also have full control over the colors used:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"total_bill\", y=\"tip\", hue=\"smoker\", data=tips,\n", " markers=[\"o\", \"x\"], palette=\"Set1\");" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "To add another variable, you can draw multiple \"facets\", with each level of the variable appearing in the rows or columns of the grid:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"total_bill\", y=\"tip\", hue=\"smoker\", col=\"time\", data=tips);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"total_bill\", y=\"tip\", hue=\"smoker\",\n", " col=\"time\", row=\"sex\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Controlling the size and shape of the plot\n", "------------------------------------------\n", "\n", "Earlier, we noted that the default plots made by :func:`regplot` and :func:`lmplot` look the same but on axes that have a different size and shape. This is because :func:`regplot` is an \"axes-level\" function that draws onto a specific axes. This means that you can make multi-panel figures yourself and control exactly where the regression plot goes. If no axes object is explicitly provided, it simply uses the \"currently active\" axes, which is why the default plot has the same size and shape as most other matplotlib functions. To control the size, you need to create a figure object yourself." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "f, ax = plt.subplots(figsize=(5, 6))\n", "sns.regplot(x=\"total_bill\", y=\"tip\", data=tips, ax=ax);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "In contrast, the size and shape of the :func:`lmplot` figure is controlled through the :class:`FacetGrid` interface using the ``height`` and ``aspect`` parameters, which apply to each *facet* in the plot, not to the overall figure itself:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"total_bill\", y=\"tip\", col=\"day\", data=tips,\n", " col_wrap=2, height=3);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x=\"total_bill\", y=\"tip\", col=\"day\", data=tips,\n", " aspect=.5);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Plotting a regression in other contexts\n", "---------------------------------------\n", "\n", "A few other seaborn functions use :func:`regplot` in the context of a larger, more complex plot. 
The first is the :func:`jointplot` function that we introduced in the :ref:`distributions tutorial `. In addition to the plot styles previously discussed, :func:`jointplot` can use :func:`regplot` to show the linear regression fit on the joint axes by passing ``kind=\"reg\"``:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.jointplot(x=\"total_bill\", y=\"tip\", data=tips, kind=\"reg\");" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Using the :func:`pairplot` function with ``kind=\"reg\"`` combines :func:`regplot` and :class:`PairGrid` to show the linear relationship between variables in a dataset. Take care to note how this is different from :func:`lmplot`. In the figure below, the two axes don't show the same relationship conditioned on two levels of a third variable; rather, :func:`PairGrid` is used to show multiple relationships between different pairings of the variables in a dataset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.pairplot(tips, x_vars=[\"total_bill\", \"size\"], y_vars=[\"tip\"],\n", " height=5, aspect=.8, kind=\"reg\");" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Like :func:`lmplot`, but unlike :func:`jointplot`, conditioning on an additional categorical variable is built into :func:`pairplot` using the ``hue`` parameter:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.pairplot(tips, x_vars=[\"total_bill\", \"size\"], y_vars=[\"tip\"],\n", " hue=\"smoker\", height=5, aspect=.8, kind=\"reg\");" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. raw:: html\n", "\n", "
" ] } ], "metadata": { "celltoolbar": "Tags", "kernelspec": { "display_name": "Python 3.6 (seaborn-py37-latest)", "language": "python", "name": "seaborn-py37-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.4" } }, "nbformat": 4, "nbformat_minor": 1 } seaborn-0.10.0/doc/tutorial/relational.ipynb000066400000000000000000000525251361256634400210510ustar00rootroot00000000000000{ "cells": [ { "cell_type": "raw", "metadata": {}, "source": [ ".. _relational_tutorial:\n", "\n", ".. currentmodule:: seaborn" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Visualizing statistical relationships\n", "=====================================\n", "\n", ".. raw:: html\n", "\n", "
\n", "\n", "Statistical analysis is a process of understanding how variables in a dataset relate to each other and how those relationships depend on other variables. Visualization can be a core component of this process because, when data are visualized properly, the human visual system can see trends and patterns that indicate a relationship.\n", "\n", "We will discuss three seaborn functions in this tutorial. The one we will use most is :func:`relplot`. This is a :ref:`figure-level function ` for visualizing statistical relationships using two common approaches: scatter plots and line plots. :func:`relplot` combines a :class:`FacetGrid` with one of two axes-level functions:\n", "\n", "- :func:`scatterplot` (with ``kind=\"scatter\"``; the default)\n", "- :func:`lineplot` (with ``kind=\"line\"``)\n", "\n", "As we will see, these functions can be quite illuminating because they use simple and easily-understood representations of data that can nevertheless represent complex dataset structures. They can do so because they plot two-dimensional graphics that can be enhanced by mapping up to three additional variables using the semantics of hue, size, and style." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "import matplotlib.pyplot as plt\n", "import seaborn as sns\n", "sns.set(style=\"darkgrid\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "%matplotlib inline\n", "np.random.seed(sum(map(ord, \"relational\")))" ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _scatterplot_tutorial:\n", "\n", "Relating variables with scatter plots\n", "-------------------------------------\n", "\n", "The scatter plot is a mainstay of statistical visualization. It depicts the joint distribution of two variables using a cloud of points, where each point represents an observation in the dataset. This depiction allows the eye to infer a substantial amount of information about whether there is any meaningful relationship between them.\n", "\n", "There are several ways to draw a scatter plot in seaborn. The most basic, which should be used when both variables are numeric, is the :func:`scatterplot` function. In the :ref:`categorical visualization tutorial `, we will see specialized tools for using scatterplots to visualize categorical data. The :func:`scatterplot` is the default ``kind`` in :func:`relplot` (it can also be forced by setting ``kind=\"scatter\"``):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tips = sns.load_dataset(\"tips\")\n", "sns.relplot(x=\"total_bill\", y=\"tip\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "While the points are plotted in two dimensions, another dimension can be added to the plot by coloring the points according to a third variable. 
In seaborn, this is referred to as using a \"hue semantic\", because the color of the point gains meaning:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"total_bill\", y=\"tip\", hue=\"smoker\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "To emphasize the difference between the classes, and to improve accessibility, you can use a different marker style for each class:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"total_bill\", y=\"tip\", hue=\"smoker\", style=\"smoker\",\n", " data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "It's also possible to represent four variables by changing the hue and style of each point independently. But this should be done carefully, because the eye is much less sensitive to shape than to color:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"total_bill\", y=\"tip\", hue=\"smoker\", style=\"time\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "In the examples above, the hue semantic was categorical, so the default :ref:`qualitative palette ` was applied. If the hue semantic is numeric (specifically, if it can be cast to float), the default coloring switches to a sequential palette:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"total_bill\", y=\"tip\", hue=\"size\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "In both cases, you can customize the color palette. There are many options for doing so. Here, we customize a sequential palette using the string interface to :func:`cubehelix_palette`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"total_bill\", y=\"tip\", hue=\"size\", palette=\"ch:r=-.5,l=.75\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The third kind of semantic variable changes the size of each point:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"total_bill\", y=\"tip\", size=\"size\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Unlike with :func:`matplotlib.pyplot.scatter`, the literal value of the variable is not used to pick the area of the point. Instead, the range of values in data units is normalized into a range in area units. This range can be customized:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"total_bill\", y=\"tip\", size=\"size\", sizes=(15, 200), data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "More examples for customizing how the different semantics are used to show statistical relationships are shown in the :func:`scatterplot` API examples." ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. _lineplot_tutorial:\n", "\n", "Emphasizing continuity with line plots\n", "--------------------------------------\n", "\n", "Scatter plots are highly effective, but there is no universally optimal type of visualisation. Instead, the visual representation should be adapted for the specifics of the dataset and to the question you are trying to answer with the plot.\n", "\n", "With some datasets, you may want to understand changes in one variable as a function of time, or a similarly continuous variable. 
In this situation, a good choice is to draw a line plot. In seaborn, this can be accomplished by the :func:`lineplot` function, either directly or with :func:`relplot` by setting ``kind=\"line\"``:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = pd.DataFrame(dict(time=np.arange(500),\n", " value=np.random.randn(500).cumsum()))\n", "g = sns.relplot(x=\"time\", y=\"value\", kind=\"line\", data=df)\n", "g.fig.autofmt_xdate()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Because :func:`lineplot` assumes that you are most often trying to draw ``y`` as a function of ``x``, the default behavior is to sort the data by the ``x`` values before plotting. However, this can be disabled:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = pd.DataFrame(np.random.randn(500, 2).cumsum(axis=0), columns=[\"x\", \"y\"])\n", "sns.relplot(x=\"x\", y=\"y\", sort=False, kind=\"line\", data=df);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Aggregation and representing uncertainty\n", "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n", "\n", "More complex datasets will have multiple measurements for the same value of the ``x`` variable. The default behavior in seaborn is to aggregate the multiple measurements at each ``x`` value by plotting the mean and the 95% confidence interval around the mean:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fmri = sns.load_dataset(\"fmri\")\n", "sns.relplot(x=\"timepoint\", y=\"signal\", kind=\"line\", data=fmri);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The confidence intervals are computed using bootstrapping, which can be time-intensive for larger datasets. It's therefore possible to disable them:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"timepoint\", y=\"signal\", ci=None, kind=\"line\", data=fmri);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Another good option, especially with larger data, is to represent the spread of the distribution at each timepoint by plotting the standard deviation instead of a confidence interval:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"timepoint\", y=\"signal\", kind=\"line\", ci=\"sd\", data=fmri);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "To turn off aggregation altogether, set the ``estimator`` parameter to ``None`` This might produce a strange effect when the data have multiple observations at each point." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"timepoint\", y=\"signal\", estimator=None, kind=\"line\", data=fmri);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Plotting subsets of data with semantic mappings\n", "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n", "\n", "The :func:`lineplot` function has the same flexibility as :func:`scatterplot`: it can show up to three additional variables by modifying the hue, size, and style of the plot elements. It does so using the same API as :func:`scatterplot`, meaning that we don't need to stop and think about the parameters that control the look of lines vs. points in matplotlib.\n", "\n", "Using semantics in :func:`lineplot` will also determine how the data get aggregated. 
For example, adding a hue semantic with two levels splits the plot into two lines and error bands, coloring each to indicate which subset of the data they correspond to." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"timepoint\", y=\"signal\", hue=\"event\", kind=\"line\", data=fmri);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Adding a style semantic to a line plot changes the pattern of dashes in the line by default:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"timepoint\", y=\"signal\", hue=\"region\", style=\"event\",\n", " kind=\"line\", data=fmri);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "But you can identify subsets by the markers used at each observation, either together with the dashes or instead of them:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"timepoint\", y=\"signal\", hue=\"region\", style=\"event\",\n", " dashes=False, markers=True, kind=\"line\", data=fmri);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "As with scatter plots, be cautious about making line plots using multiple semantics. While sometimes informative, they can also be difficult to parse and interpret. But even when you are only examining changes across one additional variable, it can be useful to alter both the color and style of the lines. This can make the plot more accessible when printed to black-and-white or viewed by someone with color blindness:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"timepoint\", y=\"signal\", hue=\"event\", style=\"event\",\n", " kind=\"line\", data=fmri);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "When you are working with repeated measures data (that is, you have units that were sampled multiple times), you can also plot each sampling unit separately without distinguishing them through semantics. This avoids cluttering the legend:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"timepoint\", y=\"signal\", hue=\"region\",\n", " units=\"subject\", estimator=None,\n", " kind=\"line\", data=fmri.query(\"event == 'stim'\"));" ] }, { "cell_type": "raw", "metadata": { "collapsed": true }, "source": [ "The default colormap and handling of the legend in :func:`lineplot` also depends on whether the hue semantic is categorical or numeric:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dots = sns.load_dataset(\"dots\").query(\"align == 'dots'\")\n", "sns.relplot(x=\"time\", y=\"firing_rate\",\n", " hue=\"coherence\", style=\"choice\",\n", " kind=\"line\", data=dots);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "It may happen that, even though the ``hue`` variable is numeric, it is poorly represented by a linear color scale. That's the case here, where the levels of the ``hue`` variable are logarithmically scaled. 
You can provide specific color values for each line by passing a list or dictionary:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "palette = sns.cubehelix_palette(light=.8, n_colors=6)\n", "sns.relplot(x=\"time\", y=\"firing_rate\",\n", " hue=\"coherence\", style=\"choice\",\n", " palette=palette,\n", " kind=\"line\", data=dots);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Or you can alter how the colormap is normalized:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from matplotlib.colors import LogNorm\n", "palette = sns.cubehelix_palette(light=.7, n_colors=6)\n", "sns.relplot(x=\"time\", y=\"firing_rate\",\n", " hue=\"coherence\", style=\"choice\",\n", " hue_norm=LogNorm(),\n", " kind=\"line\", data=dots);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "The third semantic, size, changes the width of the lines:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"time\", y=\"firing_rate\",\n", " size=\"coherence\", style=\"choice\",\n", " kind=\"line\", data=dots);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "While the ``size`` variable will typically be numeric, it's also possible to map a categorical variable with the width of the lines. Be cautious when doing so, because it will be difficult to distinguish much more than \"thick\" vs \"thin\" lines. However, dashes can be hard to perceive when lines have high-frequency variability, so using different widths may be more effective in that case:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"time\", y=\"firing_rate\",\n", " hue=\"coherence\", size=\"choice\",\n", " palette=palette,\n", " kind=\"line\", data=dots);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Plotting with date data\n", "~~~~~~~~~~~~~~~~~~~~~~~\n", "\n", "Line plots are often used to visualize data associated with real dates and times. These functions pass the data down in their original format to the underlying matplotlib functions, and so they can take advantage of matplotlib's ability to format dates in tick labels. But all of that formatting will have to take place at the matplotlib layer, and you should refer to the matplotlib documentation to see how it works:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = pd.DataFrame(dict(time=pd.date_range(\"2017-1-1\", periods=500),\n", " value=np.random.randn(500).cumsum()))\n", "g = sns.relplot(x=\"time\", y=\"value\", kind=\"line\", data=df)\n", "g.fig.autofmt_xdate()" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "Showing multiple relationships with facets\n", "------------------------------------------\n", "\n", "We've emphasized in this tutorial that, while these functions *can* show several semantic variables at once, it's not always effective to do so. But what about when you do want to understand how a relationship between two variables depends on more than one other variable?\n", "\n", "The best approach may be to make more than one plot. Because :func:`relplot` is based on the :class:`FacetGrid`, this is easy to do. To show the influence of an additional variable, instead of assigning it to one of the semantic roles in the plot, use it to \"facet\" the visualization. 
This means that you make multiple axes and plot subsets of the data on each of them:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"total_bill\", y=\"tip\", hue=\"smoker\",\n", " col=\"time\", data=tips);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "You can also show the influence two variables this way: one by faceting on the columns and one by faceting on the rows. As you start adding more variables to the grid, you may want to decrease the figure size. Remember that the size :class:`FacetGrid` is parameterized by the height and aspect ratio of *each facet*:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide" ] }, "outputs": [], "source": [ "subject_number = fmri[\"subject\"].str[1:].astype(int)\n", "fmri= fmri.iloc[subject_number.argsort()]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"timepoint\", y=\"signal\", hue=\"subject\",\n", " col=\"region\", row=\"event\", height=3,\n", " kind=\"line\", estimator=None, data=fmri);" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "When you want to examine effects across many levels of a variable, it can be a good idea to facet that variable on the columns and then \"wrap\" the facets into the rows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.relplot(x=\"timepoint\", y=\"signal\", hue=\"event\", style=\"event\",\n", " col=\"subject\", col_wrap=5,\n", " height=3, aspect=.75, linewidth=2.5,\n", " kind=\"line\", data=fmri.query(\"region == 'frontal'\"));" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "These visualizations, which are often called \"lattice\" plots or \"small-multiples\", are very effective because they present the data in a format that makes it easy for the eye to detect both overall patterns and deviations from those patterns. While you should make use of the flexibility afforded by :func:`scatterplot` and :func:`relplot`, always try to keep in mind that several simple plots are usually more effective than one complex plot." ] }, { "cell_type": "raw", "metadata": {}, "source": [ ".. raw:: html\n", "\n", "
" ] } ], "metadata": { "celltoolbar": "Tags", "kernelspec": { "display_name": "Python 3.6 (seaborn-py37-latest)", "language": "python", "name": "seaborn-py37-latest" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.4" } }, "nbformat": 4, "nbformat_minor": 2 } seaborn-0.10.0/doc/whatsnew.rst000066400000000000000000000016051361256634400163740ustar00rootroot00000000000000.. _whatsnew: .. currentmodule:: seaborn What's new in each version ========================== This page contains information about what has changed in each new version of ``seaborn``. Each release is also marked with a DOI from `Zenodo `_, which can be used to cite the library. .. raw:: html
.. include:: releases/v0.10.0.txt .. include:: releases/v0.9.1.txt .. include:: releases/v0.9.0.txt .. include:: releases/v0.8.1.txt .. include:: releases/v0.8.0.txt .. include:: releases/v0.7.1.txt .. include:: releases/v0.7.0.txt .. include:: releases/v0.6.0.txt .. include:: releases/v0.5.1.txt .. include:: releases/v0.5.0.txt .. include:: releases/v0.4.0.txt .. include:: releases/v0.3.1.txt .. include:: releases/v0.3.0.txt .. include:: releases/v0.2.1.txt .. include:: releases/v0.2.0.txt .. raw:: html
seaborn-0.10.0/examples/000077500000000000000000000000001361256634400150515ustar00rootroot00000000000000seaborn-0.10.0/examples/.gitignore000066400000000000000000000000201361256634400170310ustar00rootroot00000000000000*.html *_files/ seaborn-0.10.0/examples/anscombes_quartet.py000066400000000000000000000006501361256634400211430ustar00rootroot00000000000000""" Anscombe's quartet ================== _thumb: .4, .4 """ import seaborn as sns sns.set(style="ticks") # Load the example dataset for Anscombe's quartet df = sns.load_dataset("anscombe") # Show the results of a linear regression within each dataset sns.lmplot(x="x", y="y", col="dataset", hue="dataset", data=df, col_wrap=2, ci=None, palette="muted", height=4, scatter_kws={"s": 50, "alpha": 1}) seaborn-0.10.0/examples/color_palettes.py000066400000000000000000000017321361256634400204450ustar00rootroot00000000000000""" Color palette choices ===================== """ import numpy as np import seaborn as sns import matplotlib.pyplot as plt sns.set(style="white", context="talk") rs = np.random.RandomState(8) # Set up the matplotlib figure f, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize=(7, 5), sharex=True) # Generate some sequential data x = np.array(list("ABCDEFGHIJ")) y1 = np.arange(1, 11) sns.barplot(x=x, y=y1, palette="rocket", ax=ax1) ax1.axhline(0, color="k", clip_on=False) ax1.set_ylabel("Sequential") # Center the data to make it diverging y2 = y1 - 5.5 sns.barplot(x=x, y=y2, palette="vlag", ax=ax2) ax2.axhline(0, color="k", clip_on=False) ax2.set_ylabel("Diverging") # Randomly reorder the data to make it qualitative y3 = rs.choice(y1, len(y1), replace=False) sns.barplot(x=x, y=y3, palette="deep", ax=ax3) ax3.axhline(0, color="k", clip_on=False) ax3.set_ylabel("Qualitative") # Finalize the plot sns.despine(bottom=True) plt.setp(f.axes, yticks=[]) plt.tight_layout(h_pad=2) seaborn-0.10.0/examples/cubehelix_palette.py000066400000000000000000000013451361256634400211140ustar00rootroot00000000000000""" Different cubehelix palettes ============================ _thumb: .4, .65 """ import numpy as np import seaborn as sns import matplotlib.pyplot as plt sns.set(style="dark") rs = np.random.RandomState(50) # Set up the matplotlib figure f, axes = plt.subplots(3, 3, figsize=(9, 9), sharex=True, sharey=True) # Rotate the starting point around the cubehelix hue circle for ax, s in zip(axes.flat, np.linspace(0, 3, 10)): # Create a cubehelix colormap to use with kdeplot cmap = sns.cubehelix_palette(start=s, light=1, as_cmap=True) # Generate and plot a random bivariate dataset x, y = rs.randn(2, 50) sns.kdeplot(x, y, cmap=cmap, shade=True, cut=5, ax=ax) ax.set(xlim=(-3, 3), ylim=(-3, 3)) f.tight_layout() seaborn-0.10.0/examples/different_scatter_variables.py000066400000000000000000000014351361256634400231510ustar00rootroot00000000000000""" Scatterplot with categorical and numerical semantics ==================================================== _thumb: .45, .5 """ import seaborn as sns import matplotlib.pyplot as plt sns.set(style="whitegrid") # Load the example diamonds dataset diamonds = sns.load_dataset("diamonds") # Draw a scatter plot while assigning point colors and sizes to different # variables in the dataset f, ax = plt.subplots(figsize=(6.5, 6.5)) sns.despine(f, left=True, bottom=True) clarity_ranking = ["I1", "SI2", "SI1", "VS2", "VS1", "VVS2", "VVS1", "IF"] sns.scatterplot(x="carat", y="price", hue="clarity", size="depth", palette="ch:r=-.2,d=.3_r", hue_order=clarity_ranking, sizes=(1, 8), linewidth=0, data=diamonds, ax=ax) 
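# One possible follow-up (a sketch, not part of the example above): with this
# many semantic levels the in-axes legend gets crowded, and it can be redrawn
# outside the axes using standard matplotlib legend() keywords.
ax.legend(loc="upper left", bbox_to_anchor=(1.02, 1), frameon=False)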
seaborn-0.10.0/examples/distplot_options.py000066400000000000000000000015661361256634400210500ustar00rootroot00000000000000""" Distribution plot options ========================= """ import numpy as np import seaborn as sns import matplotlib.pyplot as plt sns.set(style="white", palette="muted", color_codes=True) rs = np.random.RandomState(10) # Set up the matplotlib figure f, axes = plt.subplots(2, 2, figsize=(7, 7), sharex=True) sns.despine(left=True) # Generate a random univariate dataset d = rs.normal(size=100) # Plot a simple histogram with binsize determined automatically sns.distplot(d, kde=False, color="b", ax=axes[0, 0]) # Plot a kernel density estimate and rug plot sns.distplot(d, hist=False, rug=True, color="r", ax=axes[0, 1]) # Plot a filled kernel density estimate sns.distplot(d, hist=False, color="g", kde_kws={"shade": True}, ax=axes[1, 0]) # Plot a histogram and kernel density estimate sns.distplot(d, color="m", ax=axes[1, 1]) plt.setp(axes, yticks=[]) plt.tight_layout() seaborn-0.10.0/examples/errorband_lineplots.py000066400000000000000000000005751361256634400215010ustar00rootroot00000000000000""" Timeseries plot with error bands ================================ _thumb: .48, .45 """ import seaborn as sns sns.set(style="darkgrid") # Load an example dataset with long-form data fmri = sns.load_dataset("fmri") # Plot the responses for different events and regions sns.lineplot(x="timepoint", y="signal", hue="region", style="event", data=fmri) seaborn-0.10.0/examples/facet_projections.py000066400000000000000000000013451361256634400211270ustar00rootroot00000000000000""" FacetGrid with custom projection ================================ _thumb: .33, .5 """ import numpy as np import pandas as pd import seaborn as sns sns.set() # Generate an example radial datast r = np.linspace(0, 10, num=100) df = pd.DataFrame({'r': r, 'slow': r, 'medium': 2 * r, 'fast': 4 * r}) # Convert the dataframe to long-form or "tidy" format df = pd.melt(df, id_vars=['r'], var_name='speed', value_name='theta') # Set up a grid of axes with a polar projection g = sns.FacetGrid(df, col="speed", hue="speed", subplot_kws=dict(projection='polar'), height=4.5, sharex=False, sharey=False, despine=False) # Draw a scatterplot onto each axes in the grid g.map(sns.scatterplot, "theta", "r") seaborn-0.10.0/examples/faceted_histogram.py000066400000000000000000000006131361256634400210730ustar00rootroot00000000000000""" Facetting histograms by subsets of data ======================================= _thumb: .42, .57 """ import numpy as np import seaborn as sns import matplotlib.pyplot as plt sns.set(style="darkgrid") tips = sns.load_dataset("tips") g = sns.FacetGrid(tips, row="sex", col="time", margin_titles=True) bins = np.linspace(0, 60, 13) g.map(plt.hist, "total_bill", color="steelblue", bins=bins) seaborn-0.10.0/examples/faceted_lineplot.py000066400000000000000000000011751361256634400207300ustar00rootroot00000000000000""" Line plots on multiple facets ============================= _thumb: .45, .42 """ import seaborn as sns sns.set(style="ticks") dots = sns.load_dataset("dots") # Define a palette to ensure that colors will be # shared across the facets palette = dict(zip(dots.coherence.unique(), sns.color_palette("rocket_r", 6))) # Plot the lines on two facets sns.relplot(x="time", y="firing_rate", hue="coherence", size="choice", col="align", size_order=["T1", "T2"], palette=palette, height=5, aspect=.75, facet_kws=dict(sharex=False), kind="line", legend="full", data=dots) 
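# A sketch of further tweaks (assumes the relplot call above is assigned,
# e.g. g = sns.relplot(...)): the returned FacetGrid can then be used to
# adjust axis labels and facet titles:
# g.set_axis_labels("Time", "Firing rate")
# g.set_titles("{col_name} alignment")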
seaborn-0.10.0/examples/grouped_barplot.py000066400000000000000000000006461361256634400206210ustar00rootroot00000000000000""" Grouped barplots ================ _thumb: .45, .5 """ import seaborn as sns sns.set(style="whitegrid") # Load the example Titanic dataset titanic = sns.load_dataset("titanic") # Draw a nested barplot to show survival for class and sex g = sns.catplot(x="class", y="survived", hue="sex", data=titanic, height=6, kind="bar", palette="muted") g.despine(left=True) g.set_ylabels("survival probability") seaborn-0.10.0/examples/grouped_boxplot.py000066400000000000000000000006001361256634400206330ustar00rootroot00000000000000""" Grouped boxplots ================ _thumb: .66, .45 """ import seaborn as sns sns.set(style="ticks", palette="pastel") # Load the example tips dataset tips = sns.load_dataset("tips") # Draw a nested boxplot to show bills by day and time sns.boxplot(x="day", y="total_bill", hue="smoker", palette=["m", "g"], data=tips) sns.despine(offset=10, trim=True) seaborn-0.10.0/examples/grouped_violinplots.py000066400000000000000000000010071361256634400215300ustar00rootroot00000000000000""" Grouped violinplots with split violins ====================================== _thumb: .43, .47 """ import seaborn as sns sns.set(style="whitegrid", palette="pastel", color_codes=True) # Load the example tips dataset tips = sns.load_dataset("tips") # Draw a nested violinplot and split the violins for easier comparison sns.violinplot(x="day", y="total_bill", hue="smoker", split=True, inner="quart", palette={"Yes": "y", "No": "b"}, data=tips) sns.despine(left=True) seaborn-0.10.0/examples/heatmap_annotation.py000066400000000000000000000006571361256634400213040ustar00rootroot00000000000000""" Annotated heatmaps ================== """ import matplotlib.pyplot as plt import seaborn as sns sns.set() # Load the example flights dataset and convert to long-form flights_long = sns.load_dataset("flights") flights = flights_long.pivot("month", "year", "passengers") # Draw a heatmap with the numeric values in each cell f, ax = plt.subplots(figsize=(9, 6)) sns.heatmap(flights, annot=True, fmt="d", linewidths=.5, ax=ax) seaborn-0.10.0/examples/hexbin_marginals.py000066400000000000000000000004711361256634400207370ustar00rootroot00000000000000""" Hexbin plot with marginal distributions ======================================= _thumb: .45, .4 """ import numpy as np import seaborn as sns sns.set(style="ticks") rs = np.random.RandomState(11) x = rs.gamma(2, size=1000) y = -.5 * x + rs.normal(size=1000) sns.jointplot(x, y, kind="hex", color="#4CB391") seaborn-0.10.0/examples/horizontal_barplot.py000066400000000000000000000015361361256634400213440ustar00rootroot00000000000000""" Horizontal bar plots ==================== """ import seaborn as sns import matplotlib.pyplot as plt sns.set(style="whitegrid") # Initialize the matplotlib figure f, ax = plt.subplots(figsize=(6, 15)) # Load the example car crash dataset crashes = sns.load_dataset("car_crashes").sort_values("total", ascending=False) # Plot the total crashes sns.set_color_codes("pastel") sns.barplot(x="total", y="abbrev", data=crashes, label="Total", color="b") # Plot the crashes where alcohol was involved sns.set_color_codes("muted") sns.barplot(x="alcohol", y="abbrev", data=crashes, label="Alcohol-involved", color="b") # Add a legend and informative axis label ax.legend(ncol=2, loc="lower right", frameon=True) ax.set(xlim=(0, 24), ylabel="", xlabel="Automobile collisions per billion miles") sns.despine(left=True, bottom=True) 
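# The same overlay pattern extends to other columns of the dataset (a sketch,
# not part of the original figure); keep drawing the broader category first
# so the narrower bars stay visible on top:
# sns.set_color_codes("dark")
# sns.barplot(x="speeding", y="abbrev", data=crashes,
#             label="Speeding-involved", color="b")
# ax.legend(ncol=3, loc="lower right", frameon=True)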
seaborn-0.10.0/examples/horizontal_boxplot.py000066400000000000000000000013571361256634400213710ustar00rootroot00000000000000""" Horizontal boxplot with observations ==================================== _thumb: .7, .37 """ import seaborn as sns import matplotlib.pyplot as plt sns.set(style="ticks") # Initialize the figure with a logarithmic x axis f, ax = plt.subplots(figsize=(7, 6)) ax.set_xscale("log") # Load the example planets dataset planets = sns.load_dataset("planets") # Plot the orbital period with horizontal boxes sns.boxplot(x="distance", y="method", data=planets, whis="range", palette="vlag") # Add in points to show each observation sns.swarmplot(x="distance", y="method", data=planets, size=2, color=".3", linewidth=0) # Tweak the visual presentation ax.xaxis.grid(True) ax.set(ylabel="") sns.despine(trim=True, left=True) seaborn-0.10.0/examples/jitter_stripplot.py000066400000000000000000000017421361256634400210500ustar00rootroot00000000000000""" Conditional means with observations =================================== """ import pandas as pd import seaborn as sns import matplotlib.pyplot as plt sns.set(style="whitegrid") iris = sns.load_dataset("iris") # "Melt" the dataset to "long-form" or "tidy" representation iris = pd.melt(iris, "species", var_name="measurement") # Initialize the figure f, ax = plt.subplots() sns.despine(bottom=True, left=True) # Show each observation with a scatterplot sns.stripplot(x="value", y="measurement", hue="species", data=iris, dodge=True, alpha=.25, zorder=1) # Show the conditional means sns.pointplot(x="value", y="measurement", hue="species", data=iris, dodge=.532, join=False, palette="dark", markers="d", scale=.75, ci=None) # Improve the legend handles, labels = ax.get_legend_handles_labels() ax.legend(handles[3:], labels[3:], title="species", handletextpad=0, columnspacing=1, loc="lower right", ncol=3, frameon=True) seaborn-0.10.0/examples/joint_kde.py000066400000000000000000000010131361256634400173640ustar00rootroot00000000000000""" Joint kernel density estimate ============================= _thumb: .6, .4 """ import numpy as np import pandas as pd import seaborn as sns sns.set(style="white") # Generate a random correlated bivariate dataset rs = np.random.RandomState(5) mean = [0, 0] cov = [(1, .5), (.5, 1)] x1, x2 = rs.multivariate_normal(mean, cov, 500).T x1 = pd.Series(x1, name="$X_1$") x2 = pd.Series(x2, name="$X_2$") # Show the joint distribution using kernel density estimation g = sns.jointplot(x1, x2, kind="kde", height=7, space=0) seaborn-0.10.0/examples/kde_ridgeplot.py000066400000000000000000000023311361256634400202360ustar00rootroot00000000000000""" Overlapping densities ('ridge plot') ==================================== """ import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt sns.set(style="white", rc={"axes.facecolor": (0, 0, 0, 0)}) # Create the data rs = np.random.RandomState(1979) x = rs.randn(500) g = np.tile(list("ABCDEFGHIJ"), 50) df = pd.DataFrame(dict(x=x, g=g)) m = df.g.map(ord) df["x"] += m # Initialize the FacetGrid object pal = sns.cubehelix_palette(10, rot=-.25, light=.7) g = sns.FacetGrid(df, row="g", hue="g", aspect=15, height=.5, palette=pal) # Draw the densities in a few steps g.map(sns.kdeplot, "x", clip_on=False, shade=True, alpha=1, lw=1.5, bw=.2) g.map(sns.kdeplot, "x", clip_on=False, color="w", lw=2, bw=.2) g.map(plt.axhline, y=0, lw=2, clip_on=False) # Define and use a simple function to label the plot in axes coordinates def label(x, color, label): ax = plt.gca() 
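    # FacetGrid.map sets the current axes before calling this function on
    # each facet, so plt.gca() returns the axes of the facet being drawn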
ax.text(0, .2, label, fontweight="bold", color=color, ha="left", va="center", transform=ax.transAxes) g.map(label, "x") # Set the subplots to overlap g.fig.subplots_adjust(hspace=-.25) # Remove axes details that don't play well with overlap g.set_titles("") g.set(yticks=[]) g.despine(bottom=True, left=True) seaborn-0.10.0/examples/large_distributions.py000066400000000000000000000005541361256634400215030ustar00rootroot00000000000000""" Plotting large distributions ============================ """ import seaborn as sns sns.set(style="whitegrid") diamonds = sns.load_dataset("diamonds") clarity_ranking = ["I1", "SI2", "SI1", "VS2", "VS1", "VVS2", "VVS1", "IF"] sns.boxenplot(x="clarity", y="carat", color="b", order=clarity_ranking, scale="linear", data=diamonds) seaborn-0.10.0/examples/logistic_regression.py000066400000000000000000000010241361256634400214750ustar00rootroot00000000000000""" Faceted logistic regression =========================== _thumb: .58, .5 """ import seaborn as sns sns.set(style="darkgrid") # Load the example Titanic dataset df = sns.load_dataset("titanic") # Make a custom palette with gendered colors pal = dict(male="#6495ED", female="#F08080") # Show the survival probability as a function of age and sex g = sns.lmplot(x="age", y="survived", col="sex", hue="sex", data=df, palette=pal, y_jitter=.02, logistic=True, truncate=False) g.set(xlim=(0, 80), ylim=(-.05, 1.05)) seaborn-0.10.0/examples/many_facets.py000066400000000000000000000021321361256634400177120ustar00rootroot00000000000000""" Plotting on a large number of facets ==================================== _thumb: .4, .3 """ import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt sns.set(style="ticks") # Create a dataset with many short random walks rs = np.random.RandomState(4) pos = rs.randint(-1, 2, (20, 5)).cumsum(axis=1) pos -= pos[:, 0, np.newaxis] step = np.tile(range(5), 20) walk = np.repeat(range(20), 5) df = pd.DataFrame(np.c_[pos.flat, step, walk], columns=["position", "step", "walk"]) # Initialize a grid of plots with an Axes for each walk grid = sns.FacetGrid(df, col="walk", hue="walk", palette="tab20c", col_wrap=4, height=1.5) # Draw a horizontal line to show the starting point grid.map(plt.axhline, y=0, ls=":", c=".5") # Draw a line plot to show the trajectory of each random walk grid.map(plt.plot, "step", "position", marker="o") # Adjust the tick positions and labels grid.set(xticks=np.arange(5), yticks=[-3, 3], xlim=(-.5, 4.5), ylim=(-3.5, 3.5)) # Adjust the arrangement of the plots grid.fig.tight_layout(w_pad=1) seaborn-0.10.0/examples/many_pairwise_correlations.py000066400000000000000000000016111361256634400230550ustar00rootroot00000000000000""" Plotting a diagonal correlation matrix ====================================== _thumb: .3, .6 """ from string import ascii_letters import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt sns.set(style="white") # Generate a large random dataset rs = np.random.RandomState(33) d = pd.DataFrame(data=rs.normal(size=(100, 26)), columns=list(ascii_letters[26:])) # Compute the correlation matrix corr = d.corr() # Generate a mask for the upper triangle mask = np.triu(np.ones_like(corr, dtype=np.bool)) # Set up the matplotlib figure f, ax = plt.subplots(figsize=(11, 9)) # Generate a custom diverging colormap cmap = sns.diverging_palette(220, 10, as_cmap=True) # Draw the heatmap with the mask and correct aspect ratio sns.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, center=0, square=True, 
linewidths=.5, cbar_kws={"shrink": .5}) seaborn-0.10.0/examples/marginal_ticks.py000066400000000000000000000010201361256634400204030ustar00rootroot00000000000000""" Scatterplot with marginal ticks =============================== _thumb: .62, .39 """ import numpy as np import seaborn as sns sns.set(style="white", color_codes=True) # Generate a random bivariate dataset rs = np.random.RandomState(9) mean = [0, 0] cov = [(1, 0), (0, 2)] x, y = rs.multivariate_normal(mean, cov, 100).T # Use JointGrid directly to draw a custom plot grid = sns.JointGrid(x, y, space=0, height=6, ratio=20) grid.plot_joint(sns.scatterplot, color="g") grid.plot_marginals(sns.rugplot, height=1, color="g") seaborn-0.10.0/examples/multiple_joint_kde.py000066400000000000000000000015641361256634400213120ustar00rootroot00000000000000""" Multiple bivariate KDE plots ============================ _thumb: .6, .45 """ import seaborn as sns import matplotlib.pyplot as plt sns.set(style="darkgrid") iris = sns.load_dataset("iris") # Subset the iris dataset by species setosa = iris.query("species == 'setosa'") virginica = iris.query("species == 'virginica'") # Set up the figure f, ax = plt.subplots(figsize=(8, 8)) ax.set_aspect("equal") # Draw the two density plots ax = sns.kdeplot(setosa.sepal_width, setosa.sepal_length, cmap="Reds", shade=True, shade_lowest=False) ax = sns.kdeplot(virginica.sepal_width, virginica.sepal_length, cmap="Blues", shade=True, shade_lowest=False) # Add labels to the plot red = sns.color_palette("Reds")[-2] blue = sns.color_palette("Blues")[-2] ax.text(2.5, 8.2, "virginica", size=16, color=blue) ax.text(3.8, 4.5, "setosa", size=16, color=red) seaborn-0.10.0/examples/multiple_regression.py000066400000000000000000000007101361256634400215140ustar00rootroot00000000000000""" Multiple linear regression ========================== _thumb: .45, .45 """ import seaborn as sns sns.set() # Load the iris dataset iris = sns.load_dataset("iris") # Plot sepal width as a function of sepal_length across days g = sns.lmplot(x="sepal_length", y="sepal_width", hue="species", height=5, data=iris) # Use more informative axis labels than are provided by default g.set_axis_labels("Sepal length (mm)", "Sepal width (mm)") seaborn-0.10.0/examples/pair_grid_with_kde.py000066400000000000000000000004721361256634400212440ustar00rootroot00000000000000""" Paired density and scatterplot matrix ===================================== _thumb: .5, .5 """ import seaborn as sns sns.set(style="white") df = sns.load_dataset("iris") g = sns.PairGrid(df, diag_sharey=False) g.map_upper(sns.scatterplot) g.map_lower(sns.kdeplot, colors="C0") g.map_diag(sns.kdeplot, lw=2) seaborn-0.10.0/examples/paired_pointplots.py000066400000000000000000000010521361256634400211600ustar00rootroot00000000000000""" Paired categorical plots ======================== """ import seaborn as sns sns.set(style="whitegrid") # Load the example Titanic dataset titanic = sns.load_dataset("titanic") # Set up a grid to plot survival probability against several variables g = sns.PairGrid(titanic, y_vars="survived", x_vars=["class", "sex", "who", "alone"], height=5, aspect=.5) # Draw a seaborn pointplot onto each Axes g.map(sns.pointplot, scale=1.3, errwidth=4, color="xkcd:plum") g.set(ylim=(0, 1)) sns.despine(fig=g.fig, left=True) seaborn-0.10.0/examples/pairgrid_dotplot.py000066400000000000000000000020771361256634400207770ustar00rootroot00000000000000""" Dot plot with several variables =============================== _thumb: .3, .3 """ import seaborn as sns sns.set(style="whitegrid") # 
Load the dataset crashes = sns.load_dataset("car_crashes") # Make the PairGrid g = sns.PairGrid(crashes.sort_values("total", ascending=False), x_vars=crashes.columns[:-3], y_vars=["abbrev"], height=10, aspect=.25) # Draw a dot plot using the stripplot function g.map(sns.stripplot, size=10, orient="h", palette="ch:s=1,r=-.1,h=1_r", linewidth=1, edgecolor="w") # Use the same x axis limits on all columns and add better labels g.set(xlim=(0, 25), xlabel="Crashes", ylabel="") # Use semantically meaningful titles for the columns titles = ["Total crashes", "Speeding crashes", "Alcohol crashes", "Not distracted crashes", "No previous crashes"] for ax, title in zip(g.axes.flat, titles): # Set a different title for each axes ax.set(title=title) # Make the grid horizontal instead of vertical ax.xaxis.grid(False) ax.yaxis.grid(True) sns.despine(left=True, bottom=True) seaborn-0.10.0/examples/pointplot_anova.py000066400000000000000000000007231361256634400206410ustar00rootroot00000000000000""" Plotting a three-way ANOVA ========================== _thumb: .42, .5 """ import seaborn as sns sns.set(style="whitegrid") # Load the example exercise dataset df = sns.load_dataset("exercise") # Draw a pointplot to show pulse as a function of three categorical factors g = sns.catplot(x="time", y="pulse", hue="kind", col="diet", capsize=.2, palette="YlGnBu_d", height=6, aspect=.75, kind="point", data=df) g.despine(left=True) seaborn-0.10.0/examples/regression_marginals.py000066400000000000000000000005741361256634400216460ustar00rootroot00000000000000""" Linear regression with marginal distributions ============================================= _thumb: .65, .65 """ import seaborn as sns sns.set(style="darkgrid") tips = sns.load_dataset("tips") g = sns.jointplot("total_bill", "tip", data=tips, kind="reg", truncate=False, xlim=(0, 60), ylim=(0, 12), color="m", height=7) seaborn-0.10.0/examples/residplot.py000066400000000000000000000005401361256634400174270ustar00rootroot00000000000000""" Plotting model residuals ======================== """ import numpy as np import seaborn as sns sns.set(style="whitegrid") # Make an example dataset with y ~ x rs = np.random.RandomState(7) x = rs.normal(2, 1, 75) y = 2 + 1.5 * x + rs.normal(0, 2, 75) # Plot the residuals after fitting a linear model sns.residplot(x, y, lowess=True, color="g") seaborn-0.10.0/examples/scatter_bubbles.py000066400000000000000000000006751361256634400205760ustar00rootroot00000000000000""" Scatterplot with varying point sizes and hues ============================================== _thumb: .45, .5 """ import seaborn as sns sns.set(style="white") # Load the example mpg dataset mpg = sns.load_dataset("mpg") # Plot miles per gallon against horsepower with other semantics sns.relplot(x="horsepower", y="mpg", hue="origin", size="weight", sizes=(40, 400), alpha=.5, palette="muted", height=6, data=mpg) seaborn-0.10.0/examples/scatterplot_categorical.py000066400000000000000000000010121361256634400223160ustar00rootroot00000000000000""" Scatterplot with categorical variables ====================================== """ import pandas as pd import seaborn as sns sns.set(style="whitegrid", palette="muted") # Load the example iris dataset iris = sns.load_dataset("iris") # "Melt" the dataset to "long-form" or "tidy" representation iris = pd.melt(iris, "species", var_name="measurement") # Draw a categorical scatterplot to show each observation sns.swarmplot(x="measurement", y="value", hue="species", palette=["r", "c", "y"], data=iris) 
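# For larger datasets the swarm layout can get slow and very wide; a lighter
# alternative (a sketch, not drawn above) is a jittered strip plot that takes
# the same data arguments:
# sns.stripplot(x="measurement", y="value", hue="species", dodge=True,
#               alpha=.5, palette=["r", "c", "y"], data=iris)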
seaborn-0.10.0/examples/scatterplot_matrix.py000066400000000000000000000002531361256634400213530ustar00rootroot00000000000000""" Scatterplot Matrix ================== _thumb: .5, .43 """ import seaborn as sns sns.set(style="ticks") df = sns.load_dataset("iris") sns.pairplot(df, hue="species") seaborn-0.10.0/examples/scatterplot_sizes.py000066400000000000000000000007151361256634400212070ustar00rootroot00000000000000""" Scatterplot with continuous hues and sizes ========================================== _thumb: .45, .45 """ import seaborn as sns sns.set() # Load the example planets dataset planets = sns.load_dataset("planets") cmap = sns.cubehelix_palette(rot=-.2, as_cmap=True) ax = sns.scatterplot(x="distance", y="orbital_period", hue="year", size="mass", palette=cmap, sizes=(10, 200), data=planets) seaborn-0.10.0/examples/simple_violinplots.py000066400000000000000000000007571361256634400213670ustar00rootroot00000000000000""" Violinplots with observations ============================= """ import numpy as np import seaborn as sns sns.set() # Create a random dataset across several variables rs = np.random.RandomState(0) n, p = 40, 8 d = rs.normal(0, 2, (n, p)) d += np.log(np.arange(1, p + 1)) * -5 + 10 # Use cubehelix to get a custom sequential palette pal = sns.cubehelix_palette(p, rot=-.5, dark=.3) # Show each distribution with both violins and points sns.violinplot(data=d, palette=pal, inner="points") seaborn-0.10.0/examples/structured_heatmap.py000066400000000000000000000020531361256634400213260ustar00rootroot00000000000000""" Discovering structure in heatmap data ===================================== _thumb: .4, .25 """ import pandas as pd import seaborn as sns sns.set() # Load the brain networks example dataset df = sns.load_dataset("brain_networks", header=[0, 1, 2], index_col=0) # Select a subset of the networks used_networks = [1, 5, 6, 7, 8, 12, 13, 17] used_columns = (df.columns.get_level_values("network") .astype(int) .isin(used_networks)) df = df.loc[:, used_columns] # Create a categorical palette to identify the networks network_pal = sns.husl_palette(8, s=.45) network_lut = dict(zip(map(str, used_networks), network_pal)) # Convert the palette to vectors that will be drawn on the side of the matrix networks = df.columns.get_level_values("network") network_colors = pd.Series(networks, index=df.columns).map(network_lut) # Draw the full plot sns.clustermap(df.corr(), center=0, cmap="vlag", row_colors=network_colors, col_colors=network_colors, linewidths=.75, figsize=(13, 13)) seaborn-0.10.0/examples/wide_data_lineplot.py000066400000000000000000000007211361256634400212520ustar00rootroot00000000000000""" Lineplot from a wide-form dataset ================================= _thumb: .52, .5 """ import numpy as np import pandas as pd import seaborn as sns sns.set(style="whitegrid") rs = np.random.RandomState(365) values = rs.randn(365, 4).cumsum(axis=0) dates = pd.date_range("1 1 2016", periods=365, freq="D") data = pd.DataFrame(values, dates, columns=["A", "B", "C", "D"]) data = data.rolling(7).mean() sns.lineplot(data=data, palette="tab10", linewidth=2.5) seaborn-0.10.0/examples/wide_form_violinplot.py000066400000000000000000000020371361256634400216570ustar00rootroot00000000000000""" Violinplot from a wide-form dataset =================================== _thumb: .6, .45 """ import seaborn as sns import matplotlib.pyplot as plt sns.set(style="whitegrid") # Load the example dataset of brain network correlations df = sns.load_dataset("brain_networks", header=[0, 1, 2], index_col=0) # Pull 
out a specific subset of networks used_networks = [1, 3, 4, 5, 6, 7, 8, 11, 12, 13, 16, 17] used_columns = (df.columns.get_level_values("network") .astype(int) .isin(used_networks)) df = df.loc[:, used_columns] # Compute the correlation matrix and average over networks corr_df = df.corr().groupby(level="network").mean() corr_df.index = corr_df.index.astype(int) corr_df = corr_df.sort_index().T # Set up the matplotlib figure f, ax = plt.subplots(figsize=(11, 6)) # Draw a violinplot with a narrower bandwidth than the default sns.violinplot(data=corr_df, palette="Set3", bw=.2, cut=1, linewidth=1) # Finalize the figure ax.set(ylim=(-.7, 1.05)) sns.despine(left=True, bottom=True) seaborn-0.10.0/licences/000077500000000000000000000000001361256634400150205ustar00rootroot00000000000000seaborn-0.10.0/licences/HUSL_LICENSE000066400000000000000000000020431361256634400166570ustar00rootroot00000000000000Copyright (C) 2012 Alexei Boronine Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. seaborn-0.10.0/pytest.ini000066400000000000000000000003011361256634400152560ustar00rootroot00000000000000[pytest] filterwarnings = ; Warnings raised from within pytest itself ignore:Using or importing the ABCs:DeprecationWarning ignore:the imp module is deprecated in favour of importlib seaborn-0.10.0/requirements.txt000066400000000000000000000000361361256634400165160ustar00rootroot00000000000000numpy scipy matplotlib pandas seaborn-0.10.0/seaborn/000077500000000000000000000000001361256634400146645ustar00rootroot00000000000000seaborn-0.10.0/seaborn/__init__.py000066400000000000000000000007361361256634400170030ustar00rootroot00000000000000# Capture the original matplotlib rcParams import matplotlib as mpl _orig_rc_params = mpl.rcParams.copy() # Import seaborn objects from .rcmod import * from .utils import * from .palettes import * from .relational import * from .regression import * from .categorical import * from .distributions import * from .matrix import * from .miscplot import * from .axisgrid import * from .widgets import * from .colors import xkcd_rgb, crayons from . import cm __version__ = "0.10.0" seaborn-0.10.0/seaborn/algorithms.py000066400000000000000000000105621361256634400174130ustar00rootroot00000000000000"""Algorithms to support fitting routines in seaborn plotting functions.""" from __future__ import division import numbers import numpy as np import warnings def bootstrap(*args, **kwargs): """Resample one or more arrays with replacement and store aggregate values. 
Positional arguments are a sequence of arrays to bootstrap along the first axis and pass to a summary function. Keyword arguments: n_boot : int, default 10000 Number of iterations axis : int, default None Will pass axis to ``func`` as a keyword argument. units : array, default None Array of sampling unit IDs. When used the bootstrap resamples units and then observations within units instead of individual datapoints. func : string or callable, default np.mean Function to call on the args that are passed in. If string, tries to use as named method on numpy array. seed : Generator | SeedSequence | RandomState | int | None Seed for the random number generator; useful if you want reproducible resamples. Returns ------- boot_dist: array array of bootstrapped statistic values """ # Ensure list of arrays are same length if len(np.unique(list(map(len, args)))) > 1: raise ValueError("All input arrays must have the same length") n = len(args[0]) # Default keyword arguments n_boot = kwargs.get("n_boot", 10000) func = kwargs.get("func", np.mean) axis = kwargs.get("axis", None) units = kwargs.get("units", None) random_seed = kwargs.get("random_seed", None) if random_seed is not None: msg = "`random_seed` has been renamed to `seed` and will be removed" warnings.warn(msg) seed = kwargs.get("seed", random_seed) if axis is None: func_kwargs = dict() else: func_kwargs = dict(axis=axis) # Initialize the resampler rng = _handle_random_seed(seed) # Coerce to arrays args = list(map(np.asarray, args)) if units is not None: units = np.asarray(units) # Allow for a function that is the name of a method on an array if isinstance(func, str): def f(x): return getattr(x, func)() else: f = func # Handle numpy changes try: integers = rng.integers except AttributeError: integers = rng.randint # Do the bootstrap if units is not None: return _structured_bootstrap(args, n_boot, units, f, func_kwargs, integers) boot_dist = [] for i in range(int(n_boot)): resampler = integers(0, n, n) sample = [a.take(resampler, axis=0) for a in args] boot_dist.append(f(*sample, **func_kwargs)) return np.array(boot_dist) def _structured_bootstrap(args, n_boot, units, func, func_kwargs, integers): """Resample units instead of datapoints.""" unique_units = np.unique(units) n_units = len(unique_units) args = [[a[units == unit] for unit in unique_units] for a in args] boot_dist = [] for i in range(int(n_boot)): resampler = integers(0, n_units, n_units) sample = [np.take(a, resampler, axis=0) for a in args] lengths = map(len, sample[0]) resampler = [integers(0, n, n) for n in lengths] sample = [[c.take(r, axis=0) for c, r in zip(a, resampler)] for a in sample] sample = list(map(np.concatenate, sample)) boot_dist.append(func(*sample, **func_kwargs)) return np.array(boot_dist) def _handle_random_seed(seed=None): """Given a seed in one of many formats, return a random number generator. Generalizes across the numpy 1.17 changes, preferring newer functionality. 
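
    Parameters
    ----------
    seed : Generator | SeedSequence | RandomState | int | None
        Seed in any of the formats accepted by :func:`bootstrap`.

    Returns
    -------
    rng : numpy.random.Generator or numpy.random.RandomState
        Object exposing ``integers`` (numpy >= 1.17) or ``randint`` (older
        numpy), used to draw bootstrap resamples.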
""" if isinstance(seed, np.random.RandomState): rng = seed else: try: # General interface for seeding on numpy >= 1.17 rng = np.random.default_rng(seed) except AttributeError: # We are on numpy < 1.17, handle options ourselves if isinstance(seed, (numbers.Integral, np.integer)): rng = np.random.RandomState(seed) elif seed is None: rng = np.random.RandomState() else: err = "{} cannot be used to seed the randomn number generator" raise ValueError(err.format(seed)) return rng seaborn-0.10.0/seaborn/axisgrid.py000066400000000000000000002477521361256634400170710ustar00rootroot00000000000000from __future__ import division from itertools import product from distutils.version import LooseVersion import warnings from textwrap import dedent import numpy as np import pandas as pd from scipy import stats import matplotlib as mpl import matplotlib.pyplot as plt from . import utils from .palettes import color_palette, blend_palette from .distributions import distplot, kdeplot, _freedman_diaconis_bins __all__ = ["FacetGrid", "PairGrid", "JointGrid", "pairplot", "jointplot"] class Grid(object): """Base class for grids of subplots.""" _margin_titles = False _legend_out = True def set(self, **kwargs): """Set attributes on each subplot Axes.""" for ax in self.axes.flat: ax.set(**kwargs) return self def savefig(self, *args, **kwargs): """Save the figure.""" kwargs = kwargs.copy() kwargs.setdefault("bbox_inches", "tight") self.fig.savefig(*args, **kwargs) def add_legend(self, legend_data=None, title=None, label_order=None, **kwargs): """Draw a legend, maybe placing it outside axes and resizing the figure. Parameters ---------- legend_data : dict, optional Dictionary mapping label names (or two-element tuples where the second element is a label name) to matplotlib artist handles. The default reads from ``self._legend_data``. title : string, optional Title for the legend. The default reads from ``self._hue_var``. label_order : list of labels, optional The order that the legend entries should appear in. The default reads from ``self.hue_names``. kwargs : key, value pairings Other keyword arguments are passed to the underlying legend methods on the Figure or Axes object. Returns ------- self : Grid instance Returns self for easy chaining. 
""" # Find the data for the legend if legend_data is None: legend_data = self._legend_data if label_order is None: if self.hue_names is None: label_order = list(legend_data.keys()) else: label_order = list(map(utils.to_utf8, self.hue_names)) blank_handle = mpl.patches.Patch(alpha=0, linewidth=0) handles = [legend_data.get(l, blank_handle) for l in label_order] title = self._hue_var if title is None else title try: title_size = mpl.rcParams["axes.labelsize"] * .85 except TypeError: # labelsize is something like "large" title_size = mpl.rcParams["axes.labelsize"] # Unpack nested labels from a hierarchical legend labels = [] for entry in label_order: if isinstance(entry, tuple): _, label = entry else: label = entry labels.append(label) # Set default legend kwargs kwargs.setdefault("scatterpoints", 1) if self._legend_out: kwargs.setdefault("frameon", False) kwargs.setdefault("loc", "center right") # Draw a full-figure legend outside the grid figlegend = self.fig.legend(handles, labels, **kwargs) self._legend = figlegend figlegend.set_title(title, prop={"size": title_size}) # Draw the plot to set the bounding boxes correctly if hasattr(self.fig.canvas, "get_renderer"): self.fig.draw(self.fig.canvas.get_renderer()) # Calculate and set the new width of the figure so the legend fits legend_width = figlegend.get_window_extent().width / self.fig.dpi fig_width, fig_height = self.fig.get_size_inches() self.fig.set_size_inches(fig_width + legend_width, fig_height) # Draw the plot again to get the new transformations if hasattr(self.fig.canvas, "get_renderer"): self.fig.draw(self.fig.canvas.get_renderer()) # Now calculate how much space we need on the right side legend_width = figlegend.get_window_extent().width / self.fig.dpi space_needed = legend_width / (fig_width + legend_width) margin = .04 if self._margin_titles else .01 self._space_needed = margin + space_needed right = 1 - self._space_needed # Place the subplot axes to give space for the legend self.fig.subplots_adjust(right=right) else: # Draw a legend in the first axis ax = self.axes.flat[0] kwargs.setdefault("loc", "best") leg = ax.legend(handles, labels, **kwargs) leg.set_title(title, prop={"size": title_size}) return self def _clean_axis(self, ax): """Turn off axis labels and legend.""" ax.set_xlabel("") ax.set_ylabel("") ax.legend_ = None return self def _update_legend_data(self, ax): """Extract the legend data from an axes object and save it.""" handles, labels = ax.get_legend_handles_labels() data = {l: h for h, l in zip(handles, labels)} self._legend_data.update(data) def _get_palette(self, data, hue, hue_order, palette): """Get a list of colors for the hue variable.""" if hue is None: palette = color_palette(n_colors=1) else: hue_names = utils.categorical_order(data[hue], hue_order) n_colors = len(hue_names) # By default use either the current color palette or HUSL if palette is None: current_palette = utils.get_color_cycle() if n_colors > len(current_palette): colors = color_palette("husl", n_colors) else: colors = color_palette(n_colors=n_colors) # Allow for palette to map from hue variable names elif isinstance(palette, dict): color_names = [palette[h] for h in hue_names] colors = color_palette(color_names, n_colors) # Otherwise act as if we just got a list of colors else: colors = color_palette(palette, n_colors) palette = color_palette(colors, n_colors) return palette _facet_docs = dict( data=dedent("""\ data : DataFrame Tidy ("long-form") dataframe where each column is a variable and each row is an observation.\ """), 
col_wrap=dedent("""\ col_wrap : int, optional "Wrap" the column variable at this width, so that the column facets span multiple rows. Incompatible with a ``row`` facet.\ """), share_xy=dedent("""\ share{x,y} : bool, 'col', or 'row' optional If true, the facets will share y axes across columns and/or x axes across rows.\ """), height=dedent("""\ height : scalar, optional Height (in inches) of each facet. See also: ``aspect``.\ """), aspect=dedent("""\ aspect : scalar, optional Aspect ratio of each facet, so that ``aspect * height`` gives the width of each facet in inches.\ """), palette=dedent("""\ palette : palette name, list, or dict, optional Colors to use for the different levels of the ``hue`` variable. Should be something that can be interpreted by :func:`color_palette`, or a dictionary mapping hue levels to matplotlib colors.\ """), legend_out=dedent("""\ legend_out : bool, optional If ``True``, the figure size will be extended, and the legend will be drawn outside the plot on the center right.\ """), margin_titles=dedent("""\ margin_titles : bool, optional If ``True``, the titles for the row variable are drawn to the right of the last column. This option is experimental and may not work in all cases.\ """), ) class FacetGrid(Grid): """Multi-plot grid for plotting conditional relationships.""" def __init__(self, data, row=None, col=None, hue=None, col_wrap=None, sharex=True, sharey=True, height=3, aspect=1, palette=None, row_order=None, col_order=None, hue_order=None, hue_kws=None, dropna=True, legend_out=True, despine=True, margin_titles=False, xlim=None, ylim=None, subplot_kws=None, gridspec_kws=None, size=None): MPL_GRIDSPEC_VERSION = LooseVersion('1.4') OLD_MPL = LooseVersion(mpl.__version__) < MPL_GRIDSPEC_VERSION # Handle deprecations if size is not None: height = size msg = ("The `size` parameter has been renamed to `height`; " "please update your code.") warnings.warn(msg, UserWarning) # Determine the hue facet layer information hue_var = hue if hue is None: hue_names = None else: hue_names = utils.categorical_order(data[hue], hue_order) colors = self._get_palette(data, hue, hue_order, palette) # Set up the lists of names for the row and column facet variables if row is None: row_names = [] else: row_names = utils.categorical_order(data[row], row_order) if col is None: col_names = [] else: col_names = utils.categorical_order(data[col], col_order) # Additional dict of kwarg -> list of values for mapping the hue var hue_kws = hue_kws if hue_kws is not None else {} # Make a boolean mask that is True anywhere there is an NA # value in one of the faceting variables, but only if dropna is True none_na = np.zeros(len(data), np.bool) if dropna: row_na = none_na if row is None else data[row].isnull() col_na = none_na if col is None else data[col].isnull() hue_na = none_na if hue is None else data[hue].isnull() not_na = ~(row_na | col_na | hue_na) else: not_na = ~none_na # Compute the grid shape ncol = 1 if col is None else len(col_names) nrow = 1 if row is None else len(row_names) self._n_facets = ncol * nrow self._col_wrap = col_wrap if col_wrap is not None: if row is not None: err = "Cannot use `row` and `col_wrap` together." 
raise ValueError(err) ncol = col_wrap nrow = int(np.ceil(len(col_names) / col_wrap)) self._ncol = ncol self._nrow = nrow # Calculate the base figure size # This can get stretched later by a legend # TODO this doesn't account for axis labels figsize = (ncol * height * aspect, nrow * height) # Validate some inputs if col_wrap is not None: margin_titles = False # Build the subplot keyword dictionary subplot_kws = {} if subplot_kws is None else subplot_kws.copy() gridspec_kws = {} if gridspec_kws is None else gridspec_kws.copy() if xlim is not None: subplot_kws["xlim"] = xlim if ylim is not None: subplot_kws["ylim"] = ylim # Initialize the subplot grid if col_wrap is None: kwargs = dict(figsize=figsize, squeeze=False, sharex=sharex, sharey=sharey, subplot_kw=subplot_kws, gridspec_kw=gridspec_kws) if OLD_MPL: kwargs.pop('gridspec_kw', None) if gridspec_kws: msg = "gridspec module only available in mpl >= {}" warnings.warn(msg.format(MPL_GRIDSPEC_VERSION)) fig, axes = plt.subplots(nrow, ncol, **kwargs) self.axes = axes else: # If wrapping the col variable we need to make the grid ourselves if gridspec_kws: warnings.warn("`gridspec_kws` ignored when using `col_wrap`") n_axes = len(col_names) fig = plt.figure(figsize=figsize) axes = np.empty(n_axes, object) axes[0] = fig.add_subplot(nrow, ncol, 1, **subplot_kws) if sharex: subplot_kws["sharex"] = axes[0] if sharey: subplot_kws["sharey"] = axes[0] for i in range(1, n_axes): axes[i] = fig.add_subplot(nrow, ncol, i + 1, **subplot_kws) self.axes = axes # Now we turn off labels on the inner axes if sharex: for ax in self._not_bottom_axes: for label in ax.get_xticklabels(): label.set_visible(False) ax.xaxis.offsetText.set_visible(False) if sharey: for ax in self._not_left_axes: for label in ax.get_yticklabels(): label.set_visible(False) ax.yaxis.offsetText.set_visible(False) # Set up the class attributes # --------------------------- # First the public API self.data = data self.fig = fig self.axes = axes self.row_names = row_names self.col_names = col_names self.hue_names = hue_names self.hue_kws = hue_kws # Next the private variables self._nrow = nrow self._row_var = row self._ncol = ncol self._col_var = col self._margin_titles = margin_titles self._col_wrap = col_wrap self._hue_var = hue_var self._colors = colors self._legend_out = legend_out self._legend = None self._legend_data = {} self._x_var = None self._y_var = None self._dropna = dropna self._not_na = not_na # Make the axes look good fig.tight_layout() if despine: self.despine() __init__.__doc__ = dedent("""\ Initialize the matplotlib figure and FacetGrid object. This class maps a dataset onto multiple axes arrayed in a grid of rows and columns that correspond to *levels* of variables in the dataset. The plots it produces are often called "lattice", "trellis", or "small-multiple" graphics. It can also represent levels of a third variable with the ``hue`` parameter, which plots different subsets of data in different colors. This uses color to resolve elements on a third dimension, but only draws subsets on top of each other and will not tailor the ``hue`` parameter for the specific visualization the way that axes-level functions that accept ``hue`` will. When using seaborn functions that infer semantic mappings from a dataset, care must be taken to synchronize those mappings across facets. In most cases, it will be better to use a figure-level function (e.g. :func:`relplot` or :func:`catplot`) than to use :class:`FacetGrid` directly. 
The basic workflow is to initialize the :class:`FacetGrid` object with the dataset and the variables that are used to structure the grid. Then one or more plotting functions can be applied to each subset by calling :meth:`FacetGrid.map` or :meth:`FacetGrid.map_dataframe`. Finally, the plot can be tweaked with other methods to do things like change the axis labels, use different ticks, or add a legend. See the detailed code examples below for more information. See the :ref:`tutorial ` for more information. Parameters ---------- {data} row, col, hue : strings Variables that define subsets of the data, which will be drawn on separate facets in the grid. See the ``*_order`` parameters to control the order of levels of this variable. {col_wrap} {share_xy} {height} {aspect} {palette} {{row,col,hue}}_order : lists, optional Order for the levels of the faceting variables. By default, this will be the order that the levels appear in ``data`` or, if the variables are pandas categoricals, the category order. hue_kws : dictionary of param -> list of values mapping Other keyword arguments to insert into the plotting call to let other plot attributes vary across levels of the hue variable (e.g. the markers in a scatterplot). {legend_out} despine : boolean, optional Remove the top and right spines from the plots. {margin_titles} {{x, y}}lim: tuples, optional Limits for each of the axes on each facet (only relevant when share{{x, y}} is True. subplot_kws : dict, optional Dictionary of keyword arguments passed to matplotlib subplot(s) methods. gridspec_kws : dict, optional Dictionary of keyword arguments passed to matplotlib's ``gridspec`` module (via ``plt.subplots``). Requires matplotlib >= 1.4 and is ignored if ``col_wrap`` is not ``None``. See Also -------- PairGrid : Subplot grid for plotting pairwise relationships. relplot : Combine a relational plot and a :class:`FacetGrid`. catplot : Combine a categorical plot and a :class:`FacetGrid`. lmplot : Combine a regression plot and a :class:`FacetGrid`. Examples -------- Initialize a 2x2 grid of facets using the tips dataset: .. plot:: :context: close-figs >>> import seaborn as sns; sns.set(style="ticks", color_codes=True) >>> tips = sns.load_dataset("tips") >>> g = sns.FacetGrid(tips, col="time", row="smoker") Draw a univariate plot on each facet: .. plot:: :context: close-figs >>> import matplotlib.pyplot as plt >>> g = sns.FacetGrid(tips, col="time", row="smoker") >>> g = g.map(plt.hist, "total_bill") (Note that it's not necessary to re-catch the returned variable; it's the same object, but doing so in the examples makes dealing with the doctests somewhat less annoying). Pass additional keyword arguments to the mapped function: .. plot:: :context: close-figs >>> import numpy as np >>> bins = np.arange(0, 65, 5) >>> g = sns.FacetGrid(tips, col="time", row="smoker") >>> g = g.map(plt.hist, "total_bill", bins=bins, color="r") Plot a bivariate function on each facet: .. plot:: :context: close-figs >>> g = sns.FacetGrid(tips, col="time", row="smoker") >>> g = g.map(plt.scatter, "total_bill", "tip", edgecolor="w") Assign one of the variables to the color of the plot elements: .. plot:: :context: close-figs >>> g = sns.FacetGrid(tips, col="time", hue="smoker") >>> g = (g.map(plt.scatter, "total_bill", "tip", edgecolor="w") ... .add_legend()) Change the height and aspect ratio of each facet: .. 
plot:: :context: close-figs >>> g = sns.FacetGrid(tips, col="day", height=4, aspect=.5) >>> g = g.map(plt.hist, "total_bill", bins=bins) Specify the order for plot elements: .. plot:: :context: close-figs >>> g = sns.FacetGrid(tips, col="smoker", col_order=["Yes", "No"]) >>> g = g.map(plt.hist, "total_bill", bins=bins, color="m") Use a different color palette: .. plot:: :context: close-figs >>> kws = dict(s=50, linewidth=.5, edgecolor="w") >>> g = sns.FacetGrid(tips, col="sex", hue="time", palette="Set1", ... hue_order=["Dinner", "Lunch"]) >>> g = (g.map(plt.scatter, "total_bill", "tip", **kws) ... .add_legend()) Use a dictionary mapping hue levels to colors: .. plot:: :context: close-figs >>> pal = dict(Lunch="seagreen", Dinner="gray") >>> g = sns.FacetGrid(tips, col="sex", hue="time", palette=pal, ... hue_order=["Dinner", "Lunch"]) >>> g = (g.map(plt.scatter, "total_bill", "tip", **kws) ... .add_legend()) Additionally use a different marker for the hue levels: .. plot:: :context: close-figs >>> g = sns.FacetGrid(tips, col="sex", hue="time", palette=pal, ... hue_order=["Dinner", "Lunch"], ... hue_kws=dict(marker=["^", "v"])) >>> g = (g.map(plt.scatter, "total_bill", "tip", **kws) ... .add_legend()) "Wrap" a column variable with many levels into the rows: .. plot:: :context: close-figs >>> att = sns.load_dataset("attention") >>> g = sns.FacetGrid(att, col="subject", col_wrap=5, height=1.5) >>> g = g.map(plt.plot, "solutions", "score", marker=".") Define a custom bivariate function to map onto the grid: .. plot:: :context: close-figs >>> from scipy import stats >>> def qqplot(x, y, **kwargs): ... _, xr = stats.probplot(x, fit=False) ... _, yr = stats.probplot(y, fit=False) ... sns.scatterplot(xr, yr, **kwargs) >>> g = sns.FacetGrid(tips, col="smoker", hue="sex") >>> g = (g.map(qqplot, "total_bill", "tip", **kws) ... .add_legend()) Define a custom function that uses a ``DataFrame`` object and accepts column names as positional variables: .. plot:: :context: close-figs >>> import pandas as pd >>> df = pd.DataFrame( ... data=np.random.randn(90, 4), ... columns=pd.Series(list("ABCD"), name="walk"), ... index=pd.date_range("2015-01-01", "2015-03-31", ... name="date")) >>> df = df.cumsum(axis=0).stack().reset_index(name="val") >>> def dateplot(x, y, **kwargs): ... ax = plt.gca() ... data = kwargs.pop("data") ... data.plot(x=x, y=y, ax=ax, grid=False, **kwargs) >>> g = sns.FacetGrid(df, col="walk", col_wrap=2, height=3.5) >>> g = g.map_dataframe(dateplot, "date", "val") Use different axes labels after plotting: .. plot:: :context: close-figs >>> g = sns.FacetGrid(tips, col="smoker", row="sex") >>> g = (g.map(plt.scatter, "total_bill", "tip", color="g", **kws) ... .set_axis_labels("Total bill (US Dollars)", "Tip")) Set other attributes that are shared across the facetes: .. plot:: :context: close-figs >>> g = sns.FacetGrid(tips, col="smoker", row="sex") >>> g = (g.map(plt.scatter, "total_bill", "tip", color="r", **kws) ... .set(xlim=(0, 60), ylim=(0, 12), ... xticks=[10, 30, 50], yticks=[2, 6, 10])) Use a different template for the facet titles: .. plot:: :context: close-figs >>> g = sns.FacetGrid(tips, col="size", col_wrap=3) >>> g = (g.map(plt.hist, "tip", bins=np.arange(0, 13), color="c") ... .set_titles("{{col_name}} diners")) Tighten the facets: .. plot:: :context: close-figs >>> g = sns.FacetGrid(tips, col="smoker", row="sex", ... margin_titles=True) >>> g = (g.map(plt.scatter, "total_bill", "tip", color="m", **kws) ... .set(xlim=(0, 60), ylim=(0, 12), ... 
xticks=[10, 30, 50], yticks=[2, 6, 10]) ... .fig.subplots_adjust(wspace=.05, hspace=.05)) """).format(**_facet_docs) def facet_data(self): """Generator for name indices and data subsets for each facet. Yields ------ (i, j, k), data_ijk : tuple of ints, DataFrame The ints provide an index into the {row, col, hue}_names attribute, and the dataframe contains a subset of the full data corresponding to each facet. The generator yields subsets that correspond with the self.axes.flat iterator, or self.axes[i, j] when `col_wrap` is None. """ data = self.data # Construct masks for the row variable if self.row_names: row_masks = [data[self._row_var] == n for n in self.row_names] else: row_masks = [np.repeat(True, len(self.data))] # Construct masks for the column variable if self.col_names: col_masks = [data[self._col_var] == n for n in self.col_names] else: col_masks = [np.repeat(True, len(self.data))] # Construct masks for the hue variable if self.hue_names: hue_masks = [data[self._hue_var] == n for n in self.hue_names] else: hue_masks = [np.repeat(True, len(self.data))] # Here is the main generator loop for (i, row), (j, col), (k, hue) in product(enumerate(row_masks), enumerate(col_masks), enumerate(hue_masks)): data_ijk = data[row & col & hue & self._not_na] yield (i, j, k), data_ijk def map(self, func, *args, **kwargs): """Apply a plotting function to each facet's subset of the data. Parameters ---------- func : callable A plotting function that takes data and keyword arguments. It must plot to the currently active matplotlib Axes and take a `color` keyword argument. If faceting on the `hue` dimension, it must also take a `label` keyword argument. args : strings Column names in self.data that identify variables with data to plot. The data for each variable is passed to `func` in the order the variables are specified in the call. kwargs : keyword arguments All keyword arguments are passed to the plotting function. Returns ------- self : object Returns self. 
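Examples
--------
A short sketch of mapping a plotting function over the facets; the
example dataset and column names here are illustrative assumptions:

.. plot::
    :context: close-figs

    >>> import matplotlib.pyplot as plt
    >>> import seaborn as sns; sns.set()
    >>> tips = sns.load_dataset("tips")
    >>> g = sns.FacetGrid(tips, col="sex")
    >>> # Draw one histogram of "total_bill" per facet
    >>> g = g.map(plt.hist, "total_bill")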
""" # If color was a keyword argument, grab it here kw_color = kwargs.pop("color", None) if hasattr(func, "__module__"): func_module = str(func.__module__) else: func_module = "" # Check for categorical plots without order information if func_module == "seaborn.categorical": if "order" not in kwargs: warning = ("Using the {} function without specifying " "`order` is likely to produce an incorrect " "plot.".format(func.__name__)) warnings.warn(warning) if len(args) == 3 and "hue_order" not in kwargs: warning = ("Using the {} function without specifying " "`hue_order` is likely to produce an incorrect " "plot.".format(func.__name__)) warnings.warn(warning) # Iterate over the data subsets for (row_i, col_j, hue_k), data_ijk in self.facet_data(): # If this subset is null, move on if not data_ijk.values.size: continue # Get the current axis ax = self.facet_axis(row_i, col_j) # Decide what color to plot with kwargs["color"] = self._facet_color(hue_k, kw_color) # Insert the other hue aesthetics if appropriate for kw, val_list in self.hue_kws.items(): kwargs[kw] = val_list[hue_k] # Insert a label in the keyword arguments for the legend if self._hue_var is not None: kwargs["label"] = utils.to_utf8(self.hue_names[hue_k]) # Get the actual data we are going to plot with plot_data = data_ijk[list(args)] if self._dropna: plot_data = plot_data.dropna() plot_args = [v for k, v in plot_data.iteritems()] # Some matplotlib functions don't handle pandas objects correctly if func_module.startswith("matplotlib"): plot_args = [v.values for v in plot_args] # Draw the plot self._facet_plot(func, ax, plot_args, kwargs) # Finalize the annotations and layout self._finalize_grid(args[:2]) return self def map_dataframe(self, func, *args, **kwargs): """Like ``.map`` but passes args as strings and inserts data in kwargs. This method is suitable for plotting with functions that accept a long-form DataFrame as a `data` keyword argument and access the data in that DataFrame using string variable names. Parameters ---------- func : callable A plotting function that takes data and keyword arguments. Unlike the `map` method, a function used here must "understand" Pandas objects. It also must plot to the currently active matplotlib Axes and take a `color` keyword argument. If faceting on the `hue` dimension, it must also take a `label` keyword argument. args : strings Column names in self.data that identify variables with data to plot. The data for each variable is passed to `func` in the order the variables are specified in the call. kwargs : keyword arguments All keyword arguments are passed to the plotting function. Returns ------- self : object Returns self. 
""" # If color was a keyword argument, grab it here kw_color = kwargs.pop("color", None) # Iterate over the data subsets for (row_i, col_j, hue_k), data_ijk in self.facet_data(): # If this subset is null, move on if not data_ijk.values.size: continue # Get the current axis ax = self.facet_axis(row_i, col_j) # Decide what color to plot with kwargs["color"] = self._facet_color(hue_k, kw_color) # Insert the other hue aesthetics if appropriate for kw, val_list in self.hue_kws.items(): kwargs[kw] = val_list[hue_k] # Insert a label in the keyword arguments for the legend if self._hue_var is not None: kwargs["label"] = self.hue_names[hue_k] # Stick the facet dataframe into the kwargs if self._dropna: data_ijk = data_ijk.dropna() kwargs["data"] = data_ijk # Draw the plot self._facet_plot(func, ax, args, kwargs) # Finalize the annotations and layout self._finalize_grid(args[:2]) return self def _facet_color(self, hue_index, kw_color): color = self._colors[hue_index] if kw_color is not None: return kw_color elif color is not None: return color def _facet_plot(self, func, ax, plot_args, plot_kwargs): # Draw the plot func(*plot_args, **plot_kwargs) # Sort out the supporting information self._update_legend_data(ax) self._clean_axis(ax) def _finalize_grid(self, axlabels): """Finalize the annotations and layout.""" self.set_axis_labels(*axlabels) self.set_titles() self.fig.tight_layout() def facet_axis(self, row_i, col_j): """Make the axis identified by these indices active and return it.""" # Calculate the actual indices of the axes to plot on if self._col_wrap is not None: ax = self.axes.flat[col_j] else: ax = self.axes[row_i, col_j] # Get a reference to the axes object we want, and make it active plt.sca(ax) return ax def despine(self, **kwargs): """Remove axis spines from the facets.""" utils.despine(self.fig, **kwargs) return self def set_axis_labels(self, x_var=None, y_var=None): """Set axis labels on the left column and bottom row of the grid.""" if x_var is not None: self._x_var = x_var self.set_xlabels(x_var) if y_var is not None: self._y_var = y_var self.set_ylabels(y_var) return self def set_xlabels(self, label=None, **kwargs): """Label the x axis on the bottom row of the grid.""" if label is None: label = self._x_var for ax in self._bottom_axes: ax.set_xlabel(label, **kwargs) return self def set_ylabels(self, label=None, **kwargs): """Label the y axis on the left column of the grid.""" if label is None: label = self._y_var for ax in self._left_axes: ax.set_ylabel(label, **kwargs) return self def set_xticklabels(self, labels=None, step=None, **kwargs): """Set x axis tick labels of the grid.""" for ax in self.axes.flat: if labels is None: curr_labels = [l.get_text() for l in ax.get_xticklabels()] if step is not None: xticks = ax.get_xticks()[::step] curr_labels = curr_labels[::step] ax.set_xticks(xticks) ax.set_xticklabels(curr_labels, **kwargs) else: ax.set_xticklabels(labels, **kwargs) return self def set_yticklabels(self, labels=None, **kwargs): """Set y axis tick labels on the left column of the grid.""" for ax in self.axes.flat: if labels is None: curr_labels = [l.get_text() for l in ax.get_yticklabels()] ax.set_yticklabels(curr_labels, **kwargs) else: ax.set_yticklabels(labels, **kwargs) return self def set_titles(self, template=None, row_template=None, col_template=None, **kwargs): """Draw titles either above each facet or on the grid margins. 
Parameters ---------- template : string Template for all titles with the formatting keys {col_var} and {col_name} (if using a `col` faceting variable) and/or {row_var} and {row_name} (if using a `row` faceting variable). row_template: Template for the row variable when titles are drawn on the grid margins. Must have {row_var} and {row_name} formatting keys. col_template: Template for the column variable when titles are drawn on the grid margins. Must have {col_var} and {col_name} formatting keys. Returns ------- self: object Returns self. """ args = dict(row_var=self._row_var, col_var=self._col_var) kwargs["size"] = kwargs.pop("size", mpl.rcParams["axes.labelsize"]) # Establish default templates if row_template is None: row_template = "{row_var} = {row_name}" if col_template is None: col_template = "{col_var} = {col_name}" if template is None: if self._row_var is None: template = col_template elif self._col_var is None: template = row_template else: template = " | ".join([row_template, col_template]) row_template = utils.to_utf8(row_template) col_template = utils.to_utf8(col_template) template = utils.to_utf8(template) if self._margin_titles: if self.row_names is not None: # Draw the row titles on the right edge of the grid for i, row_name in enumerate(self.row_names): ax = self.axes[i, -1] args.update(dict(row_name=row_name)) title = row_template.format(**args) bgcolor = self.fig.get_facecolor() ax.annotate(title, xy=(1.02, .5), xycoords="axes fraction", rotation=270, ha="left", va="center", backgroundcolor=bgcolor, **kwargs) if self.col_names is not None: # Draw the column titles as normal titles for j, col_name in enumerate(self.col_names): args.update(dict(col_name=col_name)) title = col_template.format(**args) self.axes[0, j].set_title(title, **kwargs) return self # Otherwise title each facet with all the necessary information if (self._row_var is not None) and (self._col_var is not None): for i, row_name in enumerate(self.row_names): for j, col_name in enumerate(self.col_names): args.update(dict(row_name=row_name, col_name=col_name)) title = template.format(**args) self.axes[i, j].set_title(title, **kwargs) elif self.row_names is not None and len(self.row_names): for i, row_name in enumerate(self.row_names): args.update(dict(row_name=row_name)) title = template.format(**args) self.axes[i, 0].set_title(title, **kwargs) elif self.col_names is not None and len(self.col_names): for i, col_name in enumerate(self.col_names): args.update(dict(col_name=col_name)) title = template.format(**args) # Index the flat array so col_wrap works self.axes.flat[i].set_title(title, **kwargs) return self @property def ax(self): """Easy access to single axes.""" if self.axes.shape == (1, 1): return self.axes[0, 0] else: err = ("You must use the `.axes` attribute (an array) when " "there is more than one plot.") raise AttributeError(err) @property def _inner_axes(self): """Return a flat array of the inner axes.""" if self._col_wrap is None: return self.axes[:-1, 1:].flat else: axes = [] n_empty = self._nrow * self._ncol - self._n_facets for i, ax in enumerate(self.axes): append = (i % self._ncol and i < (self._ncol * (self._nrow - 1)) and i < (self._ncol * (self._nrow - 1) - n_empty)) if append: axes.append(ax) return np.array(axes, object).flat @property def _left_axes(self): """Return a flat array of the left column of axes.""" if self._col_wrap is None: return self.axes[:, 0].flat else: axes = [] for i, ax in enumerate(self.axes): if not i % self._ncol: axes.append(ax) return np.array(axes, object).flat
@property def _not_left_axes(self): """Return a flat array of axes that aren't on the left column.""" if self._col_wrap is None: return self.axes[:, 1:].flat else: axes = [] for i, ax in enumerate(self.axes): if i % self._ncol: axes.append(ax) return np.array(axes, object).flat @property def _bottom_axes(self): """Return a flat array of the bottom row of axes.""" if self._col_wrap is None: return self.axes[-1, :].flat else: axes = [] n_empty = self._nrow * self._ncol - self._n_facets for i, ax in enumerate(self.axes): append = (i >= (self._ncol * (self._nrow - 1)) or i >= (self._ncol * (self._nrow - 1) - n_empty)) if append: axes.append(ax) return np.array(axes, object).flat @property def _not_bottom_axes(self): """Return a flat array of axes that aren't on the bottom row.""" if self._col_wrap is None: return self.axes[:-1, :].flat else: axes = [] n_empty = self._nrow * self._ncol - self._n_facets for i, ax in enumerate(self.axes): append = (i < (self._ncol * (self._nrow - 1)) and i < (self._ncol * (self._nrow - 1) - n_empty)) if append: axes.append(ax) return np.array(axes, object).flat class PairGrid(Grid): """Subplot grid for plotting pairwise relationships in a dataset. This class maps each variable in a dataset onto a column and row in a grid of multiple axes. Different axes-level plotting functions can be used to draw bivariate plots in the upper and lower triangles, and the marginal distribution of each variable can be shown on the diagonal. It can also represent an additional level of conditionalization with the ``hue`` parameter, which plots different subsets of data in different colors. This uses color to resolve elements on a third dimension, but only draws subsets on top of each other and will not tailor the ``hue`` parameter for the specific visualization the way that axes-level functions that accept ``hue`` will. See the :ref:`tutorial ` for more information. """ def __init__(self, data, hue=None, hue_order=None, palette=None, hue_kws=None, vars=None, x_vars=None, y_vars=None, corner=False, diag_sharey=True, height=2.5, aspect=1, layout_pad=0, despine=True, dropna=True, size=None): """Initialize the plot figure and PairGrid object. Parameters ---------- data : DataFrame Tidy (long-form) dataframe where each column is a variable and each row is an observation. hue : string (variable name), optional Variable in ``data`` to map plot aspects to different colors. This variable will be excluded from the default x and y variables. hue_order : list of strings Order for the levels of the hue variable in the palette palette : dict or seaborn color palette Set of colors for mapping the ``hue`` variable. If a dict, keys should be values in the ``hue`` variable. hue_kws : dictionary of param -> list of values mapping Other keyword arguments to insert into the plotting call to let other plot attributes vary across levels of the hue variable (e.g. the markers in a scatterplot). vars : list of variable names, optional Variables within ``data`` to use, otherwise use every column with a numeric datatype. {x, y}_vars : lists of variable names, optional Variables within ``data`` to use separately for the rows and columns of the figure; i.e. to make a non-square plot. corner : bool, optional If True, don't add axes to the upper (off-diagonal) triangle of the grid, making this a "corner" plot. height : scalar, optional Height (in inches) of each facet. aspect : scalar, optional Aspect * height gives the width (in inches) of each facet.
layout_pad : scalar, optional Padding between axes; passed to ``fig.tight_layout``. despine : boolean, optional Remove the top and right spines from the plots. dropna : boolean, optional Drop missing values from the data before plotting. See Also -------- pairplot : Easily drawing common uses of :class:`PairGrid`. FacetGrid : Subplot grid for plotting conditional relationships. Examples -------- Draw a scatterplot for each pairwise relationship: .. plot:: :context: close-figs >>> import matplotlib.pyplot as plt >>> import seaborn as sns; sns.set() >>> iris = sns.load_dataset("iris") >>> g = sns.PairGrid(iris) >>> g = g.map(plt.scatter) Show a univariate distribution on the diagonal: .. plot:: :context: close-figs >>> g = sns.PairGrid(iris) >>> g = g.map_diag(plt.hist) >>> g = g.map_offdiag(plt.scatter) (It's not actually necessary to catch the return value every time, as it is the same object, but it makes it easier to deal with the doctests). Color the points using a categorical variable: .. plot:: :context: close-figs >>> g = sns.PairGrid(iris, hue="species") >>> g = g.map_diag(plt.hist) >>> g = g.map_offdiag(plt.scatter) >>> g = g.add_legend() Use a different style to show multiple histograms: .. plot:: :context: close-figs >>> g = sns.PairGrid(iris, hue="species") >>> g = g.map_diag(plt.hist, histtype="step", linewidth=3) >>> g = g.map_offdiag(plt.scatter) >>> g = g.add_legend() Plot a subset of variables .. plot:: :context: close-figs >>> g = sns.PairGrid(iris, vars=["sepal_length", "sepal_width"]) >>> g = g.map(plt.scatter) Pass additional keyword arguments to the functions .. plot:: :context: close-figs >>> g = sns.PairGrid(iris) >>> g = g.map_diag(plt.hist, edgecolor="w") >>> g = g.map_offdiag(plt.scatter, edgecolor="w", s=40) Use different variables for the rows and columns: .. plot:: :context: close-figs >>> g = sns.PairGrid(iris, ... x_vars=["sepal_length", "sepal_width"], ... y_vars=["petal_length", "petal_width"]) >>> g = g.map(plt.scatter) Use different functions on the upper and lower triangles: .. plot:: :context: close-figs >>> g = sns.PairGrid(iris) >>> g = g.map_upper(sns.scatterplot) >>> g = g.map_lower(sns.kdeplot, colors="C0") >>> g = g.map_diag(sns.kdeplot, lw=2) Use different colors and markers for each categorical level: .. plot:: :context: close-figs >>> g = sns.PairGrid(iris, hue="species", palette="Set2", ... 
hue_kws={"marker": ["o", "s", "D"]}) >>> g = g.map(sns.scatterplot, linewidths=1, edgecolor="w", s=40) >>> g = g.add_legend() """ # Handle deprecations if size is not None: height = size msg = ("The `size` parameter has been renamed to `height`; " "please update your code.") warnings.warn(UserWarning(msg)) # Sort out the variables that define the grid if vars is not None: x_vars = list(vars) y_vars = list(vars) elif (x_vars is not None) or (y_vars is not None): if (x_vars is None) or (y_vars is None): raise ValueError("Must specify `x_vars` and `y_vars`") else: numeric_cols = self._find_numeric_cols(data) if hue in numeric_cols: numeric_cols.remove(hue) x_vars = numeric_cols y_vars = numeric_cols if np.isscalar(x_vars): x_vars = [x_vars] if np.isscalar(y_vars): y_vars = [y_vars] self.x_vars = list(x_vars) self.y_vars = list(y_vars) self.square_grid = self.x_vars == self.y_vars # Create the figure and the array of subplots figsize = len(x_vars) * height * aspect, len(y_vars) * height fig, axes = plt.subplots(len(y_vars), len(x_vars), figsize=figsize, sharex="col", sharey="row", squeeze=False) # Possibly remove upper axes to make a corner grid # Note: setting up the axes is usually the most time-intensive part # of using the PairGrid. We are foregoing the speed improvement that # we would get by just not setting up the hidden axes so that we can # avoid implementing plt.subplots ourselves. But worth thinking about. self._corner = corner if corner: hide_indices = np.triu_indices_from(axes, 1) for i, j in zip(*hide_indices): try: axes[i, j].remove() except NotImplementedError: # Problem on old matplotlibs? axes[i, j].set_axis_off() axes[i, j] = None self.fig = fig self.axes = axes self.data = data # Save what we are going to do with the diagonal self.diag_sharey = diag_sharey self.diag_vars = None self.diag_axes = None self._dropna = dropna # Label the axes self._add_axis_labels() # Sort out the hue variable self._hue_var = hue if hue is None: self.hue_names = ["_nolegend_"] self.hue_vals = pd.Series(["_nolegend_"] * len(data), index=data.index) else: hue_names = utils.categorical_order(data[hue], hue_order) if dropna: # Filter NA from the list of unique hue names hue_names = list(filter(pd.notnull, hue_names)) self.hue_names = hue_names self.hue_vals = data[hue] # Additional dict of kwarg -> list of values for mapping the hue var self.hue_kws = hue_kws if hue_kws is not None else {} self.palette = self._get_palette(data, hue, hue_order, palette) self._legend_data = {} # Make the plot look nice if despine: self._despine = True utils.despine(fig=fig) fig.tight_layout(pad=layout_pad) def map(self, func, **kwargs): """Plot with the same function in every subplot. Parameters ---------- func : callable plotting function Must take x, y arrays as positional arguments and draw onto the "currently active" matplotlib Axes. Also needs to accept kwargs called ``color`` and ``label``. """ row_indices, col_indices = np.indices(self.axes.shape) indices = zip(row_indices.flat, col_indices.flat) self._map_bivariate(func, indices, **kwargs) return self def map_lower(self, func, **kwargs): """Plot with a bivariate function on the lower diagonal subplots. Parameters ---------- func : callable plotting function Must take x, y arrays as positional arguments and draw onto the "currently active" matplotlib Axes. Also needs to accept kwargs called ``color`` and ``label``. 
""" indices = zip(*np.tril_indices_from(self.axes, -1)) self._map_bivariate(func, indices, **kwargs) return self def map_upper(self, func, **kwargs): """Plot with a bivariate function on the upper diagonal subplots. Parameters ---------- func : callable plotting function Must take x, y arrays as positional arguments and draw onto the "currently active" matplotlib Axes. Also needs to accept kwargs called ``color`` and ``label``. """ indices = zip(*np.triu_indices_from(self.axes, 1)) self._map_bivariate(func, indices, **kwargs) return self def map_offdiag(self, func, **kwargs): """Plot with a bivariate function on the off-diagonal subplots. Parameters ---------- func : callable plotting function Must take x, y arrays as positional arguments and draw onto the "currently active" matplotlib Axes. Also needs to accept kwargs called ``color`` and ``label``. """ self.map_lower(func, **kwargs) if not self._corner: self.map_upper(func, **kwargs) return self def map_diag(self, func, **kwargs): """Plot with a univariate function on each diagonal subplot. Parameters ---------- func : callable plotting function Must take an x array as a positional argument and draw onto the "currently active" matplotlib Axes. Also needs to accept kwargs called ``color`` and ``label``. """ # Add special diagonal axes for the univariate plot if self.diag_axes is None: diag_vars = [] diag_axes = [] for i, y_var in enumerate(self.y_vars): for j, x_var in enumerate(self.x_vars): if x_var == y_var: # Make the density axes diag_vars.append(x_var) ax = self.axes[i, j] diag_ax = ax.twinx() diag_ax.set_axis_off() diag_axes.append(diag_ax) # Work around matplotlib bug # https://github.com/matplotlib/matplotlib/issues/15188 if not plt.rcParams.get("ytick.left", True): for tick in ax.yaxis.majorTicks: tick.tick1line.set_visible(False) # Remove main y axis from density axes in a corner plot if self._corner: ax.yaxis.set_visible(False) if self._despine: utils.despine(ax=ax, left=True) # TODO add optional density ticks (on the right) # when drawing a corner plot? if self.diag_sharey: # This may change in future matplotlibs # See https://github.com/matplotlib/matplotlib/pull/9923 group = diag_axes[0].get_shared_y_axes() for ax in diag_axes[1:]: group.join(ax, diag_axes[0]) self.diag_vars = np.array(diag_vars, np.object) self.diag_axes = np.array(diag_axes, np.object) # Plot on each of the diagonal axes fixed_color = kwargs.pop("color", None) for var, ax in zip(self.diag_vars, self.diag_axes): hue_grouped = self.data[var].groupby(self.hue_vals) plt.sca(ax) for k, label_k in enumerate(self.hue_names): # Attempt to get data for this level, allowing for empty try: # TODO newer matplotlib(?) 
doesn't need array for hist data_k = np.asarray(hue_grouped.get_group(label_k)) except KeyError: data_k = np.array([]) if fixed_color is None: color = self.palette[k] else: color = fixed_color if self._dropna: data_k = utils.remove_na(data_k) func(data_k, label=label_k, color=color, **kwargs) self._clean_axis(ax) self._add_axis_labels() return self def _map_bivariate(self, func, indices, **kwargs): """Draw a bivariate plot on the indicated axes.""" kws = kwargs.copy() # Use copy as we insert other kwargs kw_color = kws.pop("color", None) for i, j in indices: x_var = self.x_vars[j] y_var = self.y_vars[i] ax = self.axes[i, j] self._plot_bivariate(x_var, y_var, ax, func, kw_color, **kws) self._add_axis_labels() def _plot_bivariate(self, x_var, y_var, ax, func, kw_color, **kwargs): """Draw a bivariate plot on the specified axes.""" plt.sca(ax) if x_var == y_var: axes_vars = [x_var] else: axes_vars = [x_var, y_var] hue_grouped = self.data.groupby(self.hue_vals) for k, label_k in enumerate(self.hue_names): # Attempt to get data for this level, allowing for empty try: data_k = hue_grouped.get_group(label_k) except KeyError: data_k = pd.DataFrame(columns=axes_vars, dtype=np.float) if self._dropna: data_k = data_k[axes_vars].dropna() x = data_k[x_var] y = data_k[y_var] for kw, val_list in self.hue_kws.items(): kwargs[kw] = val_list[k] color = self.palette[k] if kw_color is None else kw_color func(x, y, label=label_k, color=color, **kwargs) self._clean_axis(ax) self._update_legend_data(ax) def _add_axis_labels(self): """Add labels to the left and bottom Axes.""" for ax, label in zip(self.axes[-1, :], self.x_vars): ax.set_xlabel(label) for ax, label in zip(self.axes[:, 0], self.y_vars): ax.set_ylabel(label) if self._corner: self.axes[0, 0].set_ylabel("") def _find_numeric_cols(self, data): """Find which variables in a DataFrame are numeric.""" # This can't be the best way to do this, but I do not # know what the best way might be, so this seems ok numeric_cols = [] for col in data: try: data[col].astype(np.float) numeric_cols.append(col) except (ValueError, TypeError): pass return numeric_cols class JointGrid(object): """Grid for drawing a bivariate plot with marginal univariate plots.""" def __init__(self, x, y, data=None, height=6, ratio=5, space=.2, dropna=True, xlim=None, ylim=None, size=None): """Set up the grid of subplots. Parameters ---------- x, y : strings or vectors Data or names of variables in ``data``. data : DataFrame, optional DataFrame when ``x`` and ``y`` are variable names. height : numeric Size of each side of the figure in inches (it will be square). ratio : numeric Ratio of joint axes size to marginal axes height. space : numeric, optional Space between the joint and marginal axes dropna : bool, optional If True, remove observations that are missing from `x` and `y`. {x, y}lim : two-tuples, optional Axis limits to set before plotting. See Also -------- jointplot : High-level interface for drawing bivariate plots with several different default plot kinds. Examples -------- Initialize the figure but don't draw any plots onto it: .. plot:: :context: close-figs >>> import seaborn as sns; sns.set(style="ticks", color_codes=True) >>> tips = sns.load_dataset("tips") >>> g = sns.JointGrid(x="total_bill", y="tip", data=tips) Add plots using default parameters: .. plot:: :context: close-figs >>> g = sns.JointGrid(x="total_bill", y="tip", data=tips) >>> g = g.plot(sns.regplot, sns.distplot) Draw the joint and marginal plots separately, which allows finer-level control over other parameters: ..
plot:: :context: close-figs >>> import matplotlib.pyplot as plt >>> g = sns.JointGrid(x="total_bill", y="tip", data=tips) >>> g = g.plot_joint(sns.scatterplot, color=".5") >>> g = g.plot_marginals(sns.distplot, kde=False, color=".5") Draw the two marginal plots separately: .. plot:: :context: close-figs >>> import numpy as np >>> g = sns.JointGrid(x="total_bill", y="tip", data=tips) >>> g = g.plot_joint(sns.scatterplot, color="m") >>> _ = g.ax_marg_x.hist(tips["total_bill"], color="b", alpha=.6, ... bins=np.arange(0, 60, 5)) >>> _ = g.ax_marg_y.hist(tips["tip"], color="r", alpha=.6, ... orientation="horizontal", ... bins=np.arange(0, 12, 1)) Remove the space between the joint and marginal axes: .. plot:: :context: close-figs >>> g = sns.JointGrid(x="total_bill", y="tip", data=tips, space=0) >>> g = g.plot_joint(sns.kdeplot, cmap="Blues_d") >>> g = g.plot_marginals(sns.kdeplot, shade=True) Draw a smaller plot with relatively larger marginal axes: .. plot:: :context: close-figs >>> g = sns.JointGrid(x="total_bill", y="tip", data=tips, ... height=5, ratio=2) >>> g = g.plot_joint(sns.kdeplot, cmap="Reds_d") >>> g = g.plot_marginals(sns.kdeplot, color="r", shade=True) Set limits on the axes: .. plot:: :context: close-figs >>> g = sns.JointGrid(x="total_bill", y="tip", data=tips, ... xlim=(0, 50), ylim=(0, 8)) >>> g = g.plot_joint(sns.kdeplot, cmap="Purples_d") >>> g = g.plot_marginals(sns.kdeplot, color="m", shade=True) """ # Handle deprecations if size is not None: height = size msg = ("The `size` parameter has been renamed to `height`; " "please update your code.") warnings.warn(msg, UserWarning) # Set up the subplot grid f = plt.figure(figsize=(height, height)) gs = plt.GridSpec(ratio + 1, ratio + 1) ax_joint = f.add_subplot(gs[1:, :-1]) ax_marg_x = f.add_subplot(gs[0, :-1], sharex=ax_joint) ax_marg_y = f.add_subplot(gs[1:, -1], sharey=ax_joint) self.fig = f self.ax_joint = ax_joint self.ax_marg_x = ax_marg_x self.ax_marg_y = ax_marg_y # Turn off tick visibility for the measure axis on the marginal plots plt.setp(ax_marg_x.get_xticklabels(), visible=False) plt.setp(ax_marg_y.get_yticklabels(), visible=False) # Turn off the ticks on the density axis for the marginal plots plt.setp(ax_marg_x.yaxis.get_majorticklines(), visible=False) plt.setp(ax_marg_x.yaxis.get_minorticklines(), visible=False) plt.setp(ax_marg_y.xaxis.get_majorticklines(), visible=False) plt.setp(ax_marg_y.xaxis.get_minorticklines(), visible=False) plt.setp(ax_marg_x.get_yticklabels(), visible=False) plt.setp(ax_marg_y.get_xticklabels(), visible=False) ax_marg_x.yaxis.grid(False) ax_marg_y.xaxis.grid(False) # Possibly extract the variables from a DataFrame if data is not None: x = data.get(x, x) y = data.get(y, y) for var in [x, y]: if isinstance(var, str): err = "Could not interpret input '{}'".format(var) raise ValueError(err) # Find the names of the variables if hasattr(x, "name"): xlabel = x.name ax_joint.set_xlabel(xlabel) if hasattr(y, "name"): ylabel = y.name ax_joint.set_ylabel(ylabel) # Convert the x and y data to arrays for indexing and plotting x_array = np.asarray(x) y_array = np.asarray(y) # Possibly drop NA if dropna: not_na = pd.notnull(x_array) & pd.notnull(y_array) x_array = x_array[not_na] y_array = y_array[not_na] self.x = x_array self.y = y_array if xlim is not None: ax_joint.set_xlim(xlim) if ylim is not None: ax_joint.set_ylim(ylim) # Make the grid look nice utils.despine(f) utils.despine(ax=ax_marg_x, left=True) utils.despine(ax=ax_marg_y, bottom=True) f.tight_layout() f.subplots_adjust(hspace=space, 
wspace=space) def plot(self, joint_func, marginal_func, annot_func=None): """Shortcut to draw the full plot. Use `plot_joint` and `plot_marginals` directly for more control. Parameters ---------- joint_func, marginal_func: callables Functions to draw the bivariate and univariate plots. Returns ------- self : JointGrid instance Returns `self`. """ self.plot_marginals(marginal_func) self.plot_joint(joint_func) if annot_func is not None: self.annotate(annot_func) return self def plot_joint(self, func, **kwargs): """Draw a bivariate plot of `x` and `y`. Parameters ---------- func : plotting callable This must take two 1d arrays of data as the first two positional arguments, and it must plot on the "current" axes. kwargs : key, value mappings Keyword argument are passed to the plotting function. Returns ------- self : JointGrid instance Returns `self`. """ plt.sca(self.ax_joint) func(self.x, self.y, **kwargs) return self def plot_marginals(self, func, **kwargs): """Draw univariate plots for `x` and `y` separately. Parameters ---------- func : plotting callable This must take a 1d array of data as the first positional argument, it must plot on the "current" axes, and it must accept a "vertical" keyword argument to orient the measure dimension of the plot vertically. kwargs : key, value mappings Keyword argument are passed to the plotting function. Returns ------- self : JointGrid instance Returns `self`. """ kwargs["vertical"] = False plt.sca(self.ax_marg_x) func(self.x, **kwargs) kwargs["vertical"] = True plt.sca(self.ax_marg_y) func(self.y, **kwargs) return self def annotate(self, func, template=None, stat=None, loc="best", **kwargs): """Annotate the plot with a statistic about the relationship. *Deprecated and will be removed in a future version*. Parameters ---------- func : callable Statistical function that maps the x, y vectors either to (val, p) or to val. template : string format template, optional The template must have the format keys "stat" and "val"; if `func` returns a p value, it should also have the key "p". stat : string, optional Name to use for the statistic in the annotation, by default it uses the name of `func`. loc : string or int, optional Matplotlib legend location code; used to place the annotation. kwargs : key, value mappings Other keyword arguments are passed to `ax.legend`, which formats the annotation. Returns ------- self : JointGrid instance. Returns `self`. """ msg = ("JointGrid annotation is deprecated and will be removed " "in a future release.") warnings.warn(UserWarning(msg)) default_template = "{stat} = {val:.2g}; p = {p:.2g}" # Call the function and determine the form of the return value(s) out = func(self.x, self.y) try: val, p = out except TypeError: val, p = out, None default_template, _ = default_template.split(";") # Set the default template if template is None: template = default_template # Default to name of the function if stat is None: stat = func.__name__ # Format the annotation if p is None: annotation = template.format(stat=stat, val=val) else: annotation = template.format(stat=stat, val=val, p=p) # Draw an invisible plot and use the legend to draw the annotation # This is a bit of a hack, but `loc=best` works nicely and is not # easily abstracted. phantom, = self.ax_joint.plot(self.x, self.y, linestyle="", alpha=0) self.ax_joint.legend([phantom], [annotation], loc=loc, **kwargs) phantom.remove() return self def set_axis_labels(self, xlabel="", ylabel="", **kwargs): """Set the axis labels on the bivariate axes. 
Parameters ---------- xlabel, ylabel : strings Label names for the x and y variables. kwargs : key, value mappings Other keyword arguments are passed to the set_xlabel or set_ylabel. Returns ------- self : JointGrid instance Returns `self`. """ self.ax_joint.set_xlabel(xlabel, **kwargs) self.ax_joint.set_ylabel(ylabel, **kwargs) return self def savefig(self, *args, **kwargs): """Wrap figure.savefig defaulting to tight bounding box.""" kwargs.setdefault("bbox_inches", "tight") self.fig.savefig(*args, **kwargs) def pairplot(data, hue=None, hue_order=None, palette=None, vars=None, x_vars=None, y_vars=None, kind="scatter", diag_kind="auto", markers=None, height=2.5, aspect=1, corner=False, dropna=True, plot_kws=None, diag_kws=None, grid_kws=None, size=None): """Plot pairwise relationships in a dataset. By default, this function will create a grid of Axes such that each numeric variable in ``data`` will be shared in the y-axis across a single row and in the x-axis across a single column. The diagonal Axes are treated differently, drawing a plot to show the univariate distribution of the data for the variable in that column. It is also possible to show a subset of variables or plot different variables on the rows and columns. This is a high-level interface for :class:`PairGrid` that is intended to make it easy to draw a few common styles. You should use :class:`PairGrid` directly if you need more flexibility. Parameters ---------- data : DataFrame Tidy (long-form) dataframe where each column is a variable and each row is an observation. hue : string (variable name), optional Variable in ``data`` to map plot aspects to different colors. hue_order : list of strings Order for the levels of the hue variable in the palette palette : dict or seaborn color palette Set of colors for mapping the ``hue`` variable. If a dict, keys should be values in the ``hue`` variable. vars : list of variable names, optional Variables within ``data`` to use, otherwise use every column with a numeric datatype. {x, y}_vars : lists of variable names, optional Variables within ``data`` to use separately for the rows and columns of the figure; i.e. to make a non-square plot. kind : {'scatter', 'reg'}, optional Kind of plot for the non-identity relationships. diag_kind : {'auto', 'hist', 'kde', None}, optional Kind of plot for the diagonal subplots. The default depends on whether ``"hue"`` is used or not. markers : single matplotlib marker code or list, optional Either the marker to use for all datapoints or a list of markers with a length the same as the number of levels in the hue variable so that differently colored points will also have different scatterplot markers. height : scalar, optional Height (in inches) of each facet. aspect : scalar, optional Aspect * height gives the width (in inches) of each facet. corner : bool, optional If True, don't add axes to the upper (off-diagonal) triangle of the grid, making this a "corner" plot. dropna : boolean, optional Drop missing values from the data before plotting. {plot, diag, grid}_kws : dicts, optional Dictionaries of keyword arguments. ``plot_kws`` are passed to the bivariate plotting function, ``diag_kws`` are passed to the univariate plotting function, and ``grid_kws`` are passed to the :class:`PairGrid` constructor. Returns ------- grid : :class:`PairGrid` Returns the underlying :class:`PairGrid` instance for further tweaking. See Also -------- PairGrid : Subplot grid for more flexible plotting of pairwise relationships.
Examples -------- Draw scatterplots for joint relationships and histograms for univariate distributions: .. plot:: :context: close-figs >>> import seaborn as sns; sns.set(style="ticks", color_codes=True) >>> iris = sns.load_dataset("iris") >>> g = sns.pairplot(iris) Show different levels of a categorical variable by the color of plot elements: .. plot:: :context: close-figs >>> g = sns.pairplot(iris, hue="species") Use a different color palette: .. plot:: :context: close-figs >>> g = sns.pairplot(iris, hue="species", palette="husl") Use different markers for each level of the hue variable: .. plot:: :context: close-figs >>> g = sns.pairplot(iris, hue="species", markers=["o", "s", "D"]) Plot a subset of variables: .. plot:: :context: close-figs >>> g = sns.pairplot(iris, vars=["sepal_width", "sepal_length"]) Draw larger plots: .. plot:: :context: close-figs >>> g = sns.pairplot(iris, height=3, ... vars=["sepal_width", "sepal_length"]) Plot different variables in the rows and columns: .. plot:: :context: close-figs >>> g = sns.pairplot(iris, ... x_vars=["sepal_width", "sepal_length"], ... y_vars=["petal_width", "petal_length"]) Plot only the lower triangle of bivariate axes: .. plot:: :context: close-figs >>> g = sns.pairplot(iris, corner=True) Use kernel density estimates for univariate plots: .. plot:: :context: close-figs >>> g = sns.pairplot(iris, diag_kind="kde") Fit linear regression models to the scatter plots: .. plot:: :context: close-figs >>> g = sns.pairplot(iris, kind="reg") Pass keyword arguments down to the underlying functions (it may be easier to use :class:`PairGrid` directly): .. plot:: :context: close-figs >>> g = sns.pairplot(iris, diag_kind="kde", markers="+", ... plot_kws=dict(s=50, edgecolor="b", linewidth=1), ... diag_kws=dict(shade=True)) """ # Handle deprecations if size is not None: height = size msg = ("The `size` parameter has been renamed to `height`; " "please update your code.") warnings.warn(msg, UserWarning) if not isinstance(data, pd.DataFrame): raise TypeError( "'data' must be pandas DataFrame object, not: {typefound}".format( typefound=type(data))) plot_kws = {} if plot_kws is None else plot_kws.copy() diag_kws = {} if diag_kws is None else diag_kws.copy() grid_kws = {} if grid_kws is None else grid_kws.copy() # Set up the PairGrid grid_kws.setdefault("diag_sharey", diag_kind == "hist") grid = PairGrid(data, vars=vars, x_vars=x_vars, y_vars=y_vars, hue=hue, hue_order=hue_order, palette=palette, corner=corner, height=height, aspect=aspect, dropna=dropna, **grid_kws) # Add the markers here as PairGrid has figured out how many levels of the # hue variable are needed and we don't want to duplicate that process if markers is not None: if grid.hue_names is None: n_markers = 1 else: n_markers = len(grid.hue_names) if not isinstance(markers, list): markers = [markers] * n_markers if len(markers) != n_markers: raise ValueError(("markers must be a singleton or a list of " "markers for each level of the hue variable")) grid.hue_kws = {"marker": markers} # Maybe plot on the diagonal if diag_kind == "auto": diag_kind = "hist" if hue is None else "kde" diag_kws = diag_kws.copy() if grid.square_grid: if diag_kind == "hist": grid.map_diag(plt.hist, **diag_kws) elif diag_kind == "kde": diag_kws.setdefault("shade", True) diag_kws["legend"] = False grid.map_diag(kdeplot, **diag_kws) # Maybe plot on the off-diagonals if grid.square_grid and diag_kind is not None: plotter = grid.map_offdiag else: plotter = grid.map if kind == "scatter": from .relational import scatterplot # 
Avoid circular import plotter(scatterplot, **plot_kws) elif kind == "reg": from .regression import regplot # Avoid circular import plotter(regplot, **plot_kws) # Add a legend if hue is not None: grid.add_legend() return grid def jointplot(x, y, data=None, kind="scatter", stat_func=None, color=None, height=6, ratio=5, space=.2, dropna=True, xlim=None, ylim=None, joint_kws=None, marginal_kws=None, annot_kws=None, **kwargs): """Draw a plot of two variables with bivariate and univariate graphs. This function provides a convenient interface to the :class:`JointGrid` class, with several canned plot kinds. This is intended to be a fairly lightweight wrapper; if you need more flexibility, you should use :class:`JointGrid` directly. Parameters ---------- x, y : strings or vectors Data or names of variables in ``data``. data : DataFrame, optional DataFrame when ``x`` and ``y`` are variable names. kind : { "scatter" | "reg" | "resid" | "kde" | "hex" }, optional Kind of plot to draw. stat_func : callable or None, optional *Deprecated* color : matplotlib color, optional Color used for the plot elements. height : numeric, optional Size of the figure (it will be square). ratio : numeric, optional Ratio of joint axes height to marginal axes height. space : numeric, optional Space between the joint and marginal axes dropna : bool, optional If True, remove observations that are missing from ``x`` and ``y``. {x, y}lim : two-tuples, optional Axis limits to set before plotting. {joint, marginal, annot}_kws : dicts, optional Additional keyword arguments for the plot components. kwargs : key, value pairings Additional keyword arguments are passed to the function used to draw the plot on the joint Axes, superseding items in the ``joint_kws`` dictionary. Returns ------- grid : :class:`JointGrid` :class:`JointGrid` object with the plot on it. See Also -------- JointGrid : The Grid class used for drawing this plot. Use it directly if you need more flexibility. Examples -------- Draw a scatterplot with marginal histograms: .. plot:: :context: close-figs >>> import numpy as np, pandas as pd; np.random.seed(0) >>> import seaborn as sns; sns.set(style="white", color_codes=True) >>> tips = sns.load_dataset("tips") >>> g = sns.jointplot(x="total_bill", y="tip", data=tips) Add regression and kernel density fits: .. plot:: :context: close-figs >>> g = sns.jointplot("total_bill", "tip", data=tips, kind="reg") Replace the scatterplot with a joint histogram using hexagonal bins: .. plot:: :context: close-figs >>> g = sns.jointplot("total_bill", "tip", data=tips, kind="hex") Replace the scatterplots and histograms with density estimates and align the marginal Axes tightly with the joint Axes: .. plot:: :context: close-figs >>> iris = sns.load_dataset("iris") >>> g = sns.jointplot("sepal_width", "petal_length", data=iris, ... kind="kde", space=0, color="g") Draw a scatterplot, then add a joint density estimate: .. plot:: :context: close-figs >>> g = (sns.jointplot("sepal_length", "sepal_width", ... data=iris, color="k") ... .plot_joint(sns.kdeplot, zorder=0, n_levels=6)) Pass vectors in directly without using Pandas, then name the axes: .. plot:: :context: close-figs >>> x, y = np.random.randn(2, 300) >>> g = (sns.jointplot(x, y, kind="hex") ... .set_axis_labels("x", "y")) Draw a smaller figure with more space devoted to the marginal plots: .. plot:: :context: close-figs >>> g = sns.jointplot("total_bill", "tip", data=tips, ... height=5, ratio=3, color="g") Pass keyword arguments down to the underlying plots: .. 
plot:: :context: close-figs >>> g = sns.jointplot("petal_length", "sepal_length", data=iris, ... marginal_kws=dict(bins=15, rug=True), ... annot_kws=dict(stat="r"), ... s=40, edgecolor="w", linewidth=1) """ # Handle deprecations if "size" in kwargs: height = kwargs.pop("size") msg = ("The `size` parameter has been renamed to `height`; " "please update your code.") warnings.warn(msg, UserWarning) # Set up empty default kwarg dicts joint_kws = {} if joint_kws is None else joint_kws.copy() joint_kws.update(kwargs) marginal_kws = {} if marginal_kws is None else marginal_kws.copy() annot_kws = {} if annot_kws is None else annot_kws.copy() # Make a colormap based off the plot color if color is None: color = color_palette()[0] color_rgb = mpl.colors.colorConverter.to_rgb(color) colors = [utils.set_hls_values(color_rgb, l=l) # noqa for l in np.linspace(1, 0, 12)] cmap = blend_palette(colors, as_cmap=True) # Initialize the JointGrid object grid = JointGrid(x, y, data, dropna=dropna, height=height, ratio=ratio, space=space, xlim=xlim, ylim=ylim) # Plot the data using the grid if kind == "scatter": joint_kws.setdefault("color", color) grid.plot_joint(plt.scatter, **joint_kws) marginal_kws.setdefault("kde", False) marginal_kws.setdefault("color", color) grid.plot_marginals(distplot, **marginal_kws) elif kind.startswith("hex"): x_bins = min(_freedman_diaconis_bins(grid.x), 50) y_bins = min(_freedman_diaconis_bins(grid.y), 50) gridsize = int(np.mean([x_bins, y_bins])) joint_kws.setdefault("gridsize", gridsize) joint_kws.setdefault("cmap", cmap) grid.plot_joint(plt.hexbin, **joint_kws) marginal_kws.setdefault("kde", False) marginal_kws.setdefault("color", color) grid.plot_marginals(distplot, **marginal_kws) elif kind.startswith("kde"): joint_kws.setdefault("shade", True) joint_kws.setdefault("cmap", cmap) grid.plot_joint(kdeplot, **joint_kws) marginal_kws.setdefault("shade", True) marginal_kws.setdefault("color", color) grid.plot_marginals(kdeplot, **marginal_kws) elif kind.startswith("reg"): from .regression import regplot marginal_kws.setdefault("color", color) grid.plot_marginals(distplot, **marginal_kws) joint_kws.setdefault("color", color) grid.plot_joint(regplot, **joint_kws) elif kind.startswith("resid"): from .regression import residplot joint_kws.setdefault("color", color) grid.plot_joint(residplot, **joint_kws) x, y = grid.ax_joint.collections[0].get_offsets().T marginal_kws.setdefault("color", color) marginal_kws.setdefault("kde", False) distplot(x, ax=grid.ax_marg_x, **marginal_kws) distplot(y, vertical=True, fit=stats.norm, ax=grid.ax_marg_y, **marginal_kws) stat_func = None else: msg = "kind must be either 'scatter', 'reg', 'resid', 'kde', or 'hex'" raise ValueError(msg) if stat_func is not None: grid.annotate(stat_func, **annot_kws) return grid seaborn-0.10.0/seaborn/categorical.py000066400000000000000000004206371361256634400175270ustar00rootroot00000000000000from __future__ import division from textwrap import dedent import colorsys import numpy as np from scipy import stats import pandas as pd import matplotlib as mpl from matplotlib.collections import PatchCollection import matplotlib.patches as Patches import matplotlib.pyplot as plt import warnings from . 
import utils from .utils import iqr, categorical_order, remove_na from .algorithms import bootstrap from .palettes import color_palette, husl_palette, light_palette, dark_palette from .axisgrid import FacetGrid, _facet_docs __all__ = [ "catplot", "factorplot", "stripplot", "swarmplot", "boxplot", "violinplot", "boxenplot", "lvplot", "pointplot", "barplot", "countplot", ] class _CategoricalPlotter(object): width = .8 default_palette = "light" def establish_variables(self, x=None, y=None, hue=None, data=None, orient=None, order=None, hue_order=None, units=None): """Convert input specification into a common representation.""" # Option 1: # We are plotting a wide-form dataset # ----------------------------------- if x is None and y is None: # Do a sanity check on the inputs if hue is not None: error = "Cannot use `hue` without `x` or `y`" raise ValueError(error) # No hue grouping with wide inputs plot_hues = None hue_title = None hue_names = None # No statistical units with wide inputs plot_units = None # We also won't get a axes labels here value_label = None group_label = None # Option 1a: # The input data is a Pandas DataFrame # ------------------------------------ if isinstance(data, pd.DataFrame): # Order the data correctly if order is None: order = [] # Reduce to just numeric columns for col in data: try: data[col].astype(np.float) order.append(col) except ValueError: pass plot_data = data[order] group_names = order group_label = data.columns.name # Convert to a list of arrays, the common representation iter_data = plot_data.iteritems() plot_data = [np.asarray(s, np.float) for k, s in iter_data] # Option 1b: # The input data is an array or list # ---------------------------------- else: # We can't reorder the data if order is not None: error = "Input data must be a pandas object to reorder" raise ValueError(error) # The input data is an array if hasattr(data, "shape"): if len(data.shape) == 1: if np.isscalar(data[0]): plot_data = [data] else: plot_data = list(data) elif len(data.shape) == 2: nr, nc = data.shape if nr == 1 or nc == 1: plot_data = [data.ravel()] else: plot_data = [data[:, i] for i in range(nc)] else: error = ("Input `data` can have no " "more than 2 dimensions") raise ValueError(error) # Check if `data` is None to let us bail out here (for testing) elif data is None: plot_data = [[]] # The input data is a flat list elif np.isscalar(data[0]): plot_data = [data] # The input data is a nested list # This will catch some things that might fail later # but exhaustive checks are hard else: plot_data = data # Convert to a list of arrays, the common representation plot_data = [np.asarray(d, np.float) for d in plot_data] # The group names will just be numeric indices group_names = list(range((len(plot_data)))) # Figure out the plotting orientation orient = "h" if str(orient).startswith("h") else "v" # Option 2: # We are plotting a long-form dataset # ----------------------------------- else: # See if we need to get variables from `data` if data is not None: x = data.get(x, x) y = data.get(y, y) hue = data.get(hue, hue) units = data.get(units, units) # Validate the inputs for var in [x, y, hue, units]: if isinstance(var, str): err = "Could not interpret input '{}'".format(var) raise ValueError(err) # Figure out the plotting orientation orient = self.infer_orient(x, y, orient) # Option 2a: # We are plotting a single set of data # ------------------------------------ if x is None or y is None: # Determine where the data are vals = y if x is None else x # Put them into the common 
representation plot_data = [np.asarray(vals)] # Get a label for the value axis if hasattr(vals, "name"): value_label = vals.name else: value_label = None # This plot will not have group labels or hue nesting groups = None group_label = None group_names = [] plot_hues = None hue_names = None hue_title = None plot_units = None # Option 2b: # We are grouping the data values by another variable # --------------------------------------------------- else: # Determine which role each variable will play if orient == "v": vals, groups = y, x else: vals, groups = x, y # Get the categorical axis label group_label = None if hasattr(groups, "name"): group_label = groups.name # Get the order on the categorical axis group_names = categorical_order(groups, order) # Group the numeric data plot_data, value_label = self._group_longform(vals, groups, group_names) # Now handle the hue levels for nested ordering if hue is None: plot_hues = None hue_title = None hue_names = None else: # Get the order of the hue levels hue_names = categorical_order(hue, hue_order) # Group the hue data plot_hues, hue_title = self._group_longform(hue, groups, group_names) # Now handle the units for nested observations if units is None: plot_units = None else: plot_units, _ = self._group_longform(units, groups, group_names) # Assign object attributes # ------------------------ self.orient = orient self.plot_data = plot_data self.group_label = group_label self.value_label = value_label self.group_names = group_names self.plot_hues = plot_hues self.hue_title = hue_title self.hue_names = hue_names self.plot_units = plot_units def _group_longform(self, vals, grouper, order): """Group a long-form variable by another with correct order.""" # Ensure that the groupby will work if not isinstance(vals, pd.Series): if isinstance(grouper, pd.Series): index = grouper.index else: index = None vals = pd.Series(vals, index=index) # Group the val data grouped_vals = vals.groupby(grouper) out_data = [] for g in order: try: g_vals = grouped_vals.get_group(g) except KeyError: g_vals = np.array([]) out_data.append(g_vals) # Get the vals axis label label = vals.name return out_data, label def establish_colors(self, color, palette, saturation): """Get a list of colors for the main component of the plots.""" if self.hue_names is None: n_colors = len(self.plot_data) else: n_colors = len(self.hue_names) # Determine the main colors if color is None and palette is None: # Determine whether the current palette will have enough values # If not, we'll default to the husl palette so each is distinct current_palette = utils.get_color_cycle() if n_colors <= len(current_palette): colors = color_palette(n_colors=n_colors) else: colors = husl_palette(n_colors, l=.7) # noqa elif palette is None: # When passing a specific color, the interpretation depends # on whether there is a hue variable or not. # If so, we will make a blend palette so that the different # levels have some amount of variation. 
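            # Illustrative sketch (a hypothetical call, assuming the bundled
            # "tips" example dataset):
            #
            #     sns.boxplot(x="day", y="total_bill", hue="sex",
            #                 data=tips, color="seagreen")
            #
            # draws one light-to-dark variation of "seagreen" per hue level,
            # via light_palette() below (dark_palette() for the scatter
            # plotters, which set default_palette = "dark").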
if self.hue_names is None: colors = [color] * n_colors else: if self.default_palette == "light": colors = light_palette(color, n_colors) elif self.default_palette == "dark": colors = dark_palette(color, n_colors) else: raise RuntimeError("No default palette specified") else: # Let `palette` be a dict mapping level to color if isinstance(palette, dict): if self.hue_names is None: levels = self.group_names else: levels = self.hue_names palette = [palette[l] for l in levels] colors = color_palette(palette, n_colors) # Desaturate a bit because these are patches if saturation < 1: colors = color_palette(colors, desat=saturation) # Convert the colors to a common representations rgb_colors = color_palette(colors) # Determine the gray color to use for the lines framing the plot light_vals = [colorsys.rgb_to_hls(*c)[1] for c in rgb_colors] lum = min(light_vals) * .6 gray = mpl.colors.rgb2hex((lum, lum, lum)) # Assign object attributes self.colors = rgb_colors self.gray = gray def infer_orient(self, x, y, orient=None): """Determine how the plot should be oriented based on the data.""" orient = str(orient) def is_categorical(s): try: # Correct way, but does not exist in older Pandas try: return pd.api.types.is_categorical_dtype(s) except AttributeError: return pd.core.common.is_categorical_dtype(s) except AttributeError: # Also works, but feels hackier return str(s.dtype) == "categorical" def is_not_numeric(s): try: np.asarray(s, dtype=np.float) except ValueError: return True return False no_numeric = "Neither the `x` nor `y` variable appears to be numeric." if orient.startswith("v"): return "v" elif orient.startswith("h"): return "h" elif x is None: return "v" elif y is None: return "h" elif is_categorical(y): if is_categorical(x): raise ValueError(no_numeric) else: return "h" elif is_not_numeric(y): if is_not_numeric(x): raise ValueError(no_numeric) else: return "h" else: return "v" @property def hue_offsets(self): """A list of center positions for plots when hue nesting is used.""" n_levels = len(self.hue_names) if self.dodge: each_width = self.width / n_levels offsets = np.linspace(0, self.width - each_width, n_levels) offsets -= offsets.mean() else: offsets = np.zeros(n_levels) return offsets @property def nested_width(self): """A float with the width of plot elements when hue nesting is used.""" if self.dodge: width = self.width / len(self.hue_names) * .98 else: width = self.width return width def annotate_axes(self, ax): """Add descriptive labels to an Axes object.""" if self.orient == "v": xlabel, ylabel = self.group_label, self.value_label else: xlabel, ylabel = self.value_label, self.group_label if xlabel is not None: ax.set_xlabel(xlabel) if ylabel is not None: ax.set_ylabel(ylabel) if self.orient == "v": ax.set_xticks(np.arange(len(self.plot_data))) ax.set_xticklabels(self.group_names) else: ax.set_yticks(np.arange(len(self.plot_data))) ax.set_yticklabels(self.group_names) if self.orient == "v": ax.xaxis.grid(False) ax.set_xlim(-.5, len(self.plot_data) - .5, auto=None) else: ax.yaxis.grid(False) ax.set_ylim(-.5, len(self.plot_data) - .5, auto=None) if self.hue_names is not None: leg = ax.legend(loc="best") if self.hue_title is not None: # Matplotlib rcParams does not expose legend title size? 
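                # Approximate it as 85% of the axes label size; when
                # "axes.labelsize" is a named size such as "large", the
                # multiplication raises TypeError and the value is used
                # unchanged.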
try: title_size = mpl.rcParams["axes.labelsize"] * .85 except TypeError: # labelsize is something like "large" title_size = mpl.rcParams["axes.labelsize"] prop = mpl.font_manager.FontProperties(size=title_size) leg.set_title(self.hue_title, prop=prop) def add_legend_data(self, ax, color, label): """Add a dummy patch object so we can get legend data.""" rect = plt.Rectangle([0, 0], 0, 0, linewidth=self.linewidth / 2, edgecolor=self.gray, facecolor=color, label=label) ax.add_patch(rect) class _BoxPlotter(_CategoricalPlotter): def __init__(self, x, y, hue, data, order, hue_order, orient, color, palette, saturation, width, dodge, fliersize, linewidth): self.establish_variables(x, y, hue, data, orient, order, hue_order) self.establish_colors(color, palette, saturation) self.dodge = dodge self.width = width self.fliersize = fliersize if linewidth is None: linewidth = mpl.rcParams["lines.linewidth"] self.linewidth = linewidth def draw_boxplot(self, ax, kws): """Use matplotlib to draw a boxplot on an Axes.""" vert = self.orient == "v" props = {} for obj in ["box", "whisker", "cap", "median", "flier"]: props[obj] = kws.pop(obj + "props", {}) for i, group_data in enumerate(self.plot_data): if self.plot_hues is None: # Handle case where there is data at this level if group_data.size == 0: continue # Draw a single box or a set of boxes # with a single level of grouping box_data = np.asarray(remove_na(group_data)) # Handle case where there is no non-null data if box_data.size == 0: continue artist_dict = ax.boxplot(box_data, vert=vert, patch_artist=True, positions=[i], widths=self.width, **kws) color = self.colors[i] self.restyle_boxplot(artist_dict, color, props) else: # Draw nested groups of boxes offsets = self.hue_offsets for j, hue_level in enumerate(self.hue_names): # Add a legend for this hue level if not i: self.add_legend_data(ax, self.colors[j], hue_level) # Handle case where there is data at this level if group_data.size == 0: continue hue_mask = self.plot_hues[i] == hue_level box_data = np.asarray(remove_na(group_data[hue_mask])) # Handle case where there is no non-null data if box_data.size == 0: continue center = i + offsets[j] artist_dict = ax.boxplot(box_data, vert=vert, patch_artist=True, positions=[center], widths=self.nested_width, **kws) self.restyle_boxplot(artist_dict, self.colors[j], props) # Add legend data, but just for one set of boxes def restyle_boxplot(self, artist_dict, color, props): """Take a drawn matplotlib boxplot and make it look nice.""" for box in artist_dict["boxes"]: box.update(dict(facecolor=color, zorder=.9, edgecolor=self.gray, linewidth=self.linewidth)) box.update(props["box"]) for whisk in artist_dict["whiskers"]: whisk.update(dict(color=self.gray, linewidth=self.linewidth, linestyle="-")) whisk.update(props["whisker"]) for cap in artist_dict["caps"]: cap.update(dict(color=self.gray, linewidth=self.linewidth)) cap.update(props["cap"]) for med in artist_dict["medians"]: med.update(dict(color=self.gray, linewidth=self.linewidth)) med.update(props["median"]) for fly in artist_dict["fliers"]: fly.update(dict(markerfacecolor=self.gray, marker="d", markeredgecolor=self.gray, markersize=self.fliersize)) fly.update(props["flier"]) def plot(self, ax, boxplot_kws): """Make the plot.""" self.draw_boxplot(ax, boxplot_kws) self.annotate_axes(ax) if self.orient == "h": ax.invert_yaxis() class _ViolinPlotter(_CategoricalPlotter): def __init__(self, x, y, hue, data, order, hue_order, bw, cut, scale, scale_hue, gridsize, width, inner, split, dodge, orient, linewidth, color, 
palette, saturation): self.establish_variables(x, y, hue, data, orient, order, hue_order) self.establish_colors(color, palette, saturation) self.estimate_densities(bw, cut, scale, scale_hue, gridsize) self.gridsize = gridsize self.width = width self.dodge = dodge if inner is not None: if not any([inner.startswith("quart"), inner.startswith("box"), inner.startswith("stick"), inner.startswith("point")]): err = "Inner style '{}' not recognized".format(inner) raise ValueError(err) self.inner = inner if split and self.hue_names is not None and len(self.hue_names) != 2: msg = "There must be exactly two hue levels to use `split`.'" raise ValueError(msg) self.split = split if linewidth is None: linewidth = mpl.rcParams["lines.linewidth"] self.linewidth = linewidth def estimate_densities(self, bw, cut, scale, scale_hue, gridsize): """Find the support and density for all of the data.""" # Initialize data structures to keep track of plotting data if self.hue_names is None: support = [] density = [] counts = np.zeros(len(self.plot_data)) max_density = np.zeros(len(self.plot_data)) else: support = [[] for _ in self.plot_data] density = [[] for _ in self.plot_data] size = len(self.group_names), len(self.hue_names) counts = np.zeros(size) max_density = np.zeros(size) for i, group_data in enumerate(self.plot_data): # Option 1: we have a single level of grouping # -------------------------------------------- if self.plot_hues is None: # Strip missing datapoints kde_data = remove_na(group_data) # Handle special case of no data at this level if kde_data.size == 0: support.append(np.array([])) density.append(np.array([1.])) counts[i] = 0 max_density[i] = 0 continue # Handle special case of a single unique datapoint elif np.unique(kde_data).size == 1: support.append(np.unique(kde_data)) density.append(np.array([1.])) counts[i] = 1 max_density[i] = 0 continue # Fit the KDE and get the used bandwidth size kde, bw_used = self.fit_kde(kde_data, bw) # Determine the support grid and get the density over it support_i = self.kde_support(kde_data, bw_used, cut, gridsize) density_i = kde.evaluate(support_i) # Update the data structures with these results support.append(support_i) density.append(density_i) counts[i] = kde_data.size max_density[i] = density_i.max() # Option 2: we have nested grouping by a hue variable # --------------------------------------------------- else: for j, hue_level in enumerate(self.hue_names): # Handle special case of no data at this category level if not group_data.size: support[i].append(np.array([])) density[i].append(np.array([1.])) counts[i, j] = 0 max_density[i, j] = 0 continue # Select out the observations for this hue level hue_mask = self.plot_hues[i] == hue_level # Strip missing datapoints kde_data = remove_na(group_data[hue_mask]) # Handle special case of no data at this level if kde_data.size == 0: support[i].append(np.array([])) density[i].append(np.array([1.])) counts[i, j] = 0 max_density[i, j] = 0 continue # Handle special case of a single unique datapoint elif np.unique(kde_data).size == 1: support[i].append(np.unique(kde_data)) density[i].append(np.array([1.])) counts[i, j] = 1 max_density[i, j] = 0 continue # Fit the KDE and get the used bandwidth size kde, bw_used = self.fit_kde(kde_data, bw) # Determine the support grid and get the density over it support_ij = self.kde_support(kde_data, bw_used, cut, gridsize) density_ij = kde.evaluate(support_ij) # Update the data structures with these results support[i].append(support_ij) density[i].append(density_ij) counts[i, j] = 
kde_data.size max_density[i, j] = density_ij.max() # Scale the height of the density curve. # For a violinplot the density is non-quantitative. # The objective here is to scale the curves relative to 1 so that # they can be multiplied by the width parameter during plotting. if scale == "area": self.scale_area(density, max_density, scale_hue) elif scale == "width": self.scale_width(density) elif scale == "count": self.scale_count(density, counts, scale_hue) else: raise ValueError("scale method '{}' not recognized".format(scale)) # Set object attributes that will be used while plotting self.support = support self.density = density def fit_kde(self, x, bw): """Estimate a KDE for a vector of data with flexible bandwidth.""" # Allow for the use of old scipy where `bw` is fixed try: kde = stats.gaussian_kde(x, bw) except TypeError: kde = stats.gaussian_kde(x) if bw != "scott": # scipy default msg = ("Ignoring bandwidth choice, " "please upgrade scipy to use a different bandwidth.") warnings.warn(msg, UserWarning) # Extract the numeric bandwidth from the KDE object bw_used = kde.factor # At this point, bw will be a numeric scale factor. # To get the actual bandwidth of the kernel, we multiple by the # unbiased standard deviation of the data, which we will use # elsewhere to compute the range of the support. bw_used = bw_used * x.std(ddof=1) return kde, bw_used def kde_support(self, x, bw, cut, gridsize): """Define a grid of support for the violin.""" support_min = x.min() - bw * cut support_max = x.max() + bw * cut return np.linspace(support_min, support_max, gridsize) def scale_area(self, density, max_density, scale_hue): """Scale the relative area under the KDE curve. This essentially preserves the "standard" KDE scaling, but the resulting maximum density will be 1 so that the curve can be properly multiplied by the violin width. 
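        With a hue variable, ``scale_hue=True`` rescales each curve by the
        largest density within its own group, whereas ``scale_hue=False``
        rescales by the single largest density across all groups.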
""" if self.hue_names is None: for d in density: if d.size > 1: d /= max_density.max() else: for i, group in enumerate(density): for d in group: if scale_hue: max = max_density[i].max() else: max = max_density.max() if d.size > 1: d /= max def scale_width(self, density): """Scale each density curve to the same height.""" if self.hue_names is None: for d in density: d /= d.max() else: for group in density: for d in group: d /= d.max() def scale_count(self, density, counts, scale_hue): """Scale each density curve by the number of observations.""" if self.hue_names is None: if counts.max() == 0: d = 0 else: for count, d in zip(counts, density): d /= d.max() d *= count / counts.max() else: for i, group in enumerate(density): for j, d in enumerate(group): if counts[i].max() == 0: d = 0 else: count = counts[i, j] if scale_hue: scaler = count / counts[i].max() else: scaler = count / counts.max() d /= d.max() d *= scaler @property def dwidth(self): if self.hue_names is None or not self.dodge: return self.width / 2 elif self.split: return self.width / 2 else: return self.width / (2 * len(self.hue_names)) def draw_violins(self, ax): """Draw the violins onto `ax`.""" fill_func = ax.fill_betweenx if self.orient == "v" else ax.fill_between for i, group_data in enumerate(self.plot_data): kws = dict(edgecolor=self.gray, linewidth=self.linewidth) # Option 1: we have a single level of grouping # -------------------------------------------- if self.plot_hues is None: support, density = self.support[i], self.density[i] # Handle special case of no observations in this bin if support.size == 0: continue # Handle special case of a single observation elif support.size == 1: val = support.item() d = density.item() self.draw_single_observation(ax, i, val, d) continue # Draw the violin for this group grid = np.ones(self.gridsize) * i fill_func(support, grid - density * self.dwidth, grid + density * self.dwidth, facecolor=self.colors[i], **kws) # Draw the interior representation of the data if self.inner is None: continue # Get a nan-free vector of datapoints violin_data = remove_na(group_data) # Draw box and whisker information if self.inner.startswith("box"): self.draw_box_lines(ax, violin_data, support, density, i) # Draw quartile lines elif self.inner.startswith("quart"): self.draw_quartiles(ax, violin_data, support, density, i) # Draw stick observations elif self.inner.startswith("stick"): self.draw_stick_lines(ax, violin_data, support, density, i) # Draw point observations elif self.inner.startswith("point"): self.draw_points(ax, violin_data, i) # Option 2: we have nested grouping by a hue variable # --------------------------------------------------- else: offsets = self.hue_offsets for j, hue_level in enumerate(self.hue_names): support, density = self.support[i][j], self.density[i][j] kws["facecolor"] = self.colors[j] # Add legend data, but just for one set of violins if not i: self.add_legend_data(ax, self.colors[j], hue_level) # Handle the special case where we have no observations if support.size == 0: continue # Handle the special case where we have one observation elif support.size == 1: val = support.item() d = density.item() if self.split: d = d / 2 at_group = i + offsets[j] self.draw_single_observation(ax, at_group, val, d) continue # Option 2a: we are drawing a single split violin # ----------------------------------------------- if self.split: grid = np.ones(self.gridsize) * i if j: fill_func(support, grid, grid + density * self.dwidth, **kws) else: fill_func(support, grid - density * self.dwidth, 
grid, **kws) # Draw the interior representation of the data if self.inner is None: continue # Get a nan-free vector of datapoints hue_mask = self.plot_hues[i] == hue_level violin_data = remove_na(group_data[hue_mask]) # Draw quartile lines if self.inner.startswith("quart"): self.draw_quartiles(ax, violin_data, support, density, i, ["left", "right"][j]) # Draw stick observations elif self.inner.startswith("stick"): self.draw_stick_lines(ax, violin_data, support, density, i, ["left", "right"][j]) # The box and point interior plots are drawn for # all data at the group level, so we just do that once if not j: continue # Get the whole vector for this group level violin_data = remove_na(group_data) # Draw box and whisker information if self.inner.startswith("box"): self.draw_box_lines(ax, violin_data, support, density, i) # Draw point observations elif self.inner.startswith("point"): self.draw_points(ax, violin_data, i) # Option 2b: we are drawing full nested violins # ----------------------------------------------- else: grid = np.ones(self.gridsize) * (i + offsets[j]) fill_func(support, grid - density * self.dwidth, grid + density * self.dwidth, **kws) # Draw the interior representation if self.inner is None: continue # Get a nan-free vector of datapoints hue_mask = self.plot_hues[i] == hue_level violin_data = remove_na(group_data[hue_mask]) # Draw box and whisker information if self.inner.startswith("box"): self.draw_box_lines(ax, violin_data, support, density, i + offsets[j]) # Draw quartile lines elif self.inner.startswith("quart"): self.draw_quartiles(ax, violin_data, support, density, i + offsets[j]) # Draw stick observations elif self.inner.startswith("stick"): self.draw_stick_lines(ax, violin_data, support, density, i + offsets[j]) # Draw point observations elif self.inner.startswith("point"): self.draw_points(ax, violin_data, i + offsets[j]) def draw_single_observation(self, ax, at_group, at_quant, density): """Draw a line to mark a single observation.""" d_width = density * self.dwidth if self.orient == "v": ax.plot([at_group - d_width, at_group + d_width], [at_quant, at_quant], color=self.gray, linewidth=self.linewidth) else: ax.plot([at_quant, at_quant], [at_group - d_width, at_group + d_width], color=self.gray, linewidth=self.linewidth) def draw_box_lines(self, ax, data, support, density, center): """Draw boxplot information at center of the density.""" # Compute the boxplot statistics q25, q50, q75 = np.percentile(data, [25, 50, 75]) whisker_lim = 1.5 * iqr(data) h1 = np.min(data[data >= (q25 - whisker_lim)]) h2 = np.max(data[data <= (q75 + whisker_lim)]) # Draw a boxplot using lines and a point if self.orient == "v": ax.plot([center, center], [h1, h2], linewidth=self.linewidth, color=self.gray) ax.plot([center, center], [q25, q75], linewidth=self.linewidth * 3, color=self.gray) ax.scatter(center, q50, zorder=3, color="white", edgecolor=self.gray, s=np.square(self.linewidth * 2)) else: ax.plot([h1, h2], [center, center], linewidth=self.linewidth, color=self.gray) ax.plot([q25, q75], [center, center], linewidth=self.linewidth * 3, color=self.gray) ax.scatter(q50, center, zorder=3, color="white", edgecolor=self.gray, s=np.square(self.linewidth * 2)) def draw_quartiles(self, ax, data, support, density, center, split=False): """Draw the quartiles as lines at width of density.""" q25, q50, q75 = np.percentile(data, [25, 50, 75]) self.draw_to_density(ax, center, q25, support, density, split, linewidth=self.linewidth, dashes=[self.linewidth * 1.5] * 2) self.draw_to_density(ax, center, 
q50, support, density, split, linewidth=self.linewidth, dashes=[self.linewidth * 3] * 2) self.draw_to_density(ax, center, q75, support, density, split, linewidth=self.linewidth, dashes=[self.linewidth * 1.5] * 2) def draw_points(self, ax, data, center): """Draw individual observations as points at middle of the violin.""" kws = dict(s=np.square(self.linewidth * 2), color=self.gray, edgecolor=self.gray) grid = np.ones(len(data)) * center if self.orient == "v": ax.scatter(grid, data, **kws) else: ax.scatter(data, grid, **kws) def draw_stick_lines(self, ax, data, support, density, center, split=False): """Draw individual observations as sticks at width of density.""" for val in data: self.draw_to_density(ax, center, val, support, density, split, linewidth=self.linewidth * .5) def draw_to_density(self, ax, center, val, support, density, split, **kws): """Draw a line orthogonal to the value axis at width of density.""" idx = np.argmin(np.abs(support - val)) width = self.dwidth * density[idx] * .99 kws["color"] = self.gray if self.orient == "v": if split == "left": ax.plot([center - width, center], [val, val], **kws) elif split == "right": ax.plot([center, center + width], [val, val], **kws) else: ax.plot([center - width, center + width], [val, val], **kws) else: if split == "left": ax.plot([val, val], [center - width, center], **kws) elif split == "right": ax.plot([val, val], [center, center + width], **kws) else: ax.plot([val, val], [center - width, center + width], **kws) def plot(self, ax): """Make the violin plot.""" self.draw_violins(ax) self.annotate_axes(ax) if self.orient == "h": ax.invert_yaxis() class _CategoricalScatterPlotter(_CategoricalPlotter): default_palette = "dark" @property def point_colors(self): """Return an index into the palette for each scatter point.""" point_colors = [] for i, group_data in enumerate(self.plot_data): # Initialize the array for this group level group_colors = np.empty(group_data.size, np.int) if isinstance(group_data, pd.Series): group_colors = pd.Series(group_colors, group_data.index) if self.plot_hues is None: # Use the same color for all points at this level # group_color = self.colors[i] group_colors[:] = i else: # Color the points based on the hue level for j, level in enumerate(self.hue_names): # hue_color = self.colors[j] if group_data.size: group_colors[self.plot_hues[i] == level] = j point_colors.append(group_colors) return point_colors def add_legend_data(self, ax): """Add empty scatterplot artists with labels for the legend.""" if self.hue_names is not None: for rgb, label in zip(self.colors, self.hue_names): ax.scatter([], [], color=mpl.colors.rgb2hex(rgb), label=label, s=60) class _StripPlotter(_CategoricalScatterPlotter): """1-d scatterplot with categorical organization.""" def __init__(self, x, y, hue, data, order, hue_order, jitter, dodge, orient, color, palette): """Initialize the plotter.""" self.establish_variables(x, y, hue, data, orient, order, hue_order) self.establish_colors(color, palette, 1) # Set object attributes self.dodge = dodge self.width = .8 if jitter == 1: # Use a good default for `jitter = True` jlim = 0.1 else: jlim = float(jitter) if self.hue_names is not None and dodge: jlim /= len(self.hue_names) self.jitterer = stats.uniform(-jlim, jlim * 2).rvs def draw_stripplot(self, ax, kws): """Draw the points onto `ax`.""" palette = np.asarray(self.colors) for i, group_data in enumerate(self.plot_data): if self.plot_hues is None or not self.dodge: if self.hue_names is None: hue_mask = np.ones(group_data.size, np.bool) else: 
hue_mask = np.array([h in self.hue_names for h in self.plot_hues[i]], np.bool) # Broken on older numpys # hue_mask = np.in1d(self.plot_hues[i], self.hue_names) strip_data = group_data[hue_mask] point_colors = np.asarray(self.point_colors[i][hue_mask]) # Plot the points in centered positions cat_pos = np.ones(strip_data.size) * i cat_pos += self.jitterer(len(strip_data)) kws.update(c=palette[point_colors]) if self.orient == "v": ax.scatter(cat_pos, strip_data, **kws) else: ax.scatter(strip_data, cat_pos, **kws) else: offsets = self.hue_offsets for j, hue_level in enumerate(self.hue_names): hue_mask = self.plot_hues[i] == hue_level strip_data = group_data[hue_mask] point_colors = np.asarray(self.point_colors[i][hue_mask]) # Plot the points in centered positions center = i + offsets[j] cat_pos = np.ones(strip_data.size) * center cat_pos += self.jitterer(len(strip_data)) kws.update(c=palette[point_colors]) if self.orient == "v": ax.scatter(cat_pos, strip_data, **kws) else: ax.scatter(strip_data, cat_pos, **kws) def plot(self, ax, kws): """Make the plot.""" self.draw_stripplot(ax, kws) self.add_legend_data(ax) self.annotate_axes(ax) if self.orient == "h": ax.invert_yaxis() class _SwarmPlotter(_CategoricalScatterPlotter): def __init__(self, x, y, hue, data, order, hue_order, dodge, orient, color, palette): """Initialize the plotter.""" self.establish_variables(x, y, hue, data, orient, order, hue_order) self.establish_colors(color, palette, 1) # Set object attributes self.dodge = dodge self.width = .8 def could_overlap(self, xy_i, swarm, d): """Return a list of all swarm points that could overlap with target. Assumes that swarm is a sorted list of all points below xy_i. """ _, y_i = xy_i neighbors = [] for xy_j in reversed(swarm): _, y_j = xy_j if (y_i - y_j) < d: neighbors.append(xy_j) else: break return np.array(list(reversed(neighbors))) def position_candidates(self, xy_i, neighbors, d): """Return a list of (x, y) coordinates that might be valid.""" candidates = [xy_i] x_i, y_i = xy_i left_first = True for x_j, y_j in neighbors: dy = y_i - y_j dx = np.sqrt(max(d ** 2 - dy ** 2, 0)) * 1.05 cl, cr = (x_j - dx, y_i), (x_j + dx, y_i) if left_first: new_candidates = [cl, cr] else: new_candidates = [cr, cl] candidates.extend(new_candidates) left_first = not left_first return np.array(candidates) def first_non_overlapping_candidate(self, candidates, neighbors, d): """Remove candidates from the list if they overlap with the swarm.""" # IF we have no neighbours, all candidates are good. if len(neighbors) == 0: return candidates[0] neighbors_x = neighbors[:, 0] neighbors_y = neighbors[:, 1] d_square = d ** 2 for xy_i in candidates: x_i, y_i = xy_i dx = neighbors_x - x_i dy = neighbors_y - y_i sq_distances = np.power(dx, 2.0) + np.power(dy, 2.0) # good candidate does not overlap any of neighbors # which means that squared distance between candidate # and any of the neighbours has to be at least # square of the diameter good_candidate = np.all(sq_distances >= d_square) if good_candidate: return xy_i # If `position_candidates` works well # this should never happen raise Exception('No non-overlapping candidates found. 
' 'This should not happen.') def beeswarm(self, orig_xy, d): """Adjust x position of points to avoid overlaps.""" # In this method, ``x`` is always the categorical axis # Center of the swarm, in point coordinates midline = orig_xy[0, 0] # Start the swarm with the first point swarm = [orig_xy[0]] # Loop over the remaining points for xy_i in orig_xy[1:]: # Find the points in the swarm that could possibly # overlap with the point we are currently placing neighbors = self.could_overlap(xy_i, swarm, d) # Find positions that would be valid individually # with respect to each of the swarm neighbors candidates = self.position_candidates(xy_i, neighbors, d) # Sort candidates by their centrality offsets = np.abs(candidates[:, 0] - midline) candidates = candidates[np.argsort(offsets)] # Find the first candidate that does not overlap any neighbours new_xy_i = self.first_non_overlapping_candidate(candidates, neighbors, d) # Place it into the swarm swarm.append(new_xy_i) return np.array(swarm) def add_gutters(self, points, center, width): """Stop points from extending beyond their territory.""" half_width = width / 2 low_gutter = center - half_width off_low = points < low_gutter if off_low.any(): points[off_low] = low_gutter high_gutter = center + half_width off_high = points > high_gutter if off_high.any(): points[off_high] = high_gutter return points def swarm_points(self, ax, points, center, width, s, **kws): """Find new positions on the categorical axis for each point.""" # Convert from point size (area) to diameter default_lw = mpl.rcParams["patch.linewidth"] lw = kws.get("linewidth", kws.get("lw", default_lw)) dpi = ax.figure.dpi d = (np.sqrt(s) + lw) * (dpi / 72) # Transform the data coordinates to point coordinates. # We'll figure out the swarm positions in the latter # and then convert back to data coordinates and replot orig_xy = ax.transData.transform(points.get_offsets()) # Order the variables so that x is the categorical axis if self.orient == "h": orig_xy = orig_xy[:, [1, 0]] # Do the beeswarm in point coordinates new_xy = self.beeswarm(orig_xy, d) # Transform the point coordinates back to data coordinates if self.orient == "h": new_xy = new_xy[:, [1, 0]] new_x, new_y = ax.transData.inverted().transform(new_xy).T # Add gutters if self.orient == "v": self.add_gutters(new_x, center, width) else: self.add_gutters(new_y, center, width) # Reposition the points so they do not overlap points.set_offsets(np.c_[new_x, new_y]) def draw_swarmplot(self, ax, kws): """Plot the data.""" s = kws.pop("s") centers = [] swarms = [] palette = np.asarray(self.colors) # Set the categorical axes limits here for the swarm math if self.orient == "v": ax.set_xlim(-.5, len(self.plot_data) - .5) else: ax.set_ylim(-.5, len(self.plot_data) - .5) # Plot each swarm for i, group_data in enumerate(self.plot_data): if self.plot_hues is None or not self.dodge: width = self.width if self.hue_names is None: hue_mask = np.ones(group_data.size, np.bool) else: hue_mask = np.array([h in self.hue_names for h in self.plot_hues[i]], np.bool) # Broken on older numpys # hue_mask = np.in1d(self.plot_hues[i], self.hue_names) swarm_data = np.asarray(group_data[hue_mask]) point_colors = np.asarray(self.point_colors[i][hue_mask]) # Sort the points for the beeswarm algorithm sorter = np.argsort(swarm_data) swarm_data = swarm_data[sorter] point_colors = point_colors[sorter] # Plot the points in centered positions cat_pos = np.ones(swarm_data.size) * i kws.update(c=palette[point_colors]) if self.orient == "v": points = ax.scatter(cat_pos, 
swarm_data, s=s, **kws) else: points = ax.scatter(swarm_data, cat_pos, s=s, **kws) centers.append(i) swarms.append(points) else: offsets = self.hue_offsets width = self.nested_width for j, hue_level in enumerate(self.hue_names): hue_mask = self.plot_hues[i] == hue_level swarm_data = np.asarray(group_data[hue_mask]) point_colors = np.asarray(self.point_colors[i][hue_mask]) # Sort the points for the beeswarm algorithm sorter = np.argsort(swarm_data) swarm_data = swarm_data[sorter] point_colors = point_colors[sorter] # Plot the points in centered positions center = i + offsets[j] cat_pos = np.ones(swarm_data.size) * center kws.update(c=palette[point_colors]) if self.orient == "v": points = ax.scatter(cat_pos, swarm_data, s=s, **kws) else: points = ax.scatter(swarm_data, cat_pos, s=s, **kws) centers.append(center) swarms.append(points) # Update the position of each point on the categorical axis # Do this after plotting so that the numerical axis limits are correct for center, swarm in zip(centers, swarms): if swarm.get_offsets().size: self.swarm_points(ax, swarm, center, width, s, **kws) def plot(self, ax, kws): """Make the full plot.""" self.draw_swarmplot(ax, kws) self.add_legend_data(ax) self.annotate_axes(ax) if self.orient == "h": ax.invert_yaxis() class _CategoricalStatPlotter(_CategoricalPlotter): @property def nested_width(self): """A float with the width of plot elements when hue nesting is used.""" if self.dodge: width = self.width / len(self.hue_names) else: width = self.width return width def estimate_statistic(self, estimator, ci, n_boot, seed): if self.hue_names is None: statistic = [] confint = [] else: statistic = [[] for _ in self.plot_data] confint = [[] for _ in self.plot_data] for i, group_data in enumerate(self.plot_data): # Option 1: we have a single layer of grouping # -------------------------------------------- if self.plot_hues is None: if self.plot_units is None: stat_data = remove_na(group_data) unit_data = None else: unit_data = self.plot_units[i] have = pd.notnull(np.c_[group_data, unit_data]).all(axis=1) stat_data = group_data[have] unit_data = unit_data[have] # Estimate a statistic from the vector of data if not stat_data.size: statistic.append(np.nan) else: statistic.append(estimator(stat_data)) # Get a confidence interval for this estimate if ci is not None: if stat_data.size < 2: confint.append([np.nan, np.nan]) continue if ci == "sd": estimate = estimator(stat_data) sd = np.std(stat_data) confint.append((estimate - sd, estimate + sd)) else: boots = bootstrap(stat_data, func=estimator, n_boot=n_boot, units=unit_data, seed=seed) confint.append(utils.ci(boots, ci)) # Option 2: we are grouping by a hue layer # ---------------------------------------- else: for j, hue_level in enumerate(self.hue_names): if not self.plot_hues[i].size: statistic[i].append(np.nan) if ci is not None: confint[i].append((np.nan, np.nan)) continue hue_mask = self.plot_hues[i] == hue_level if self.plot_units is None: stat_data = remove_na(group_data[hue_mask]) unit_data = None else: group_units = self.plot_units[i] have = pd.notnull( np.c_[group_data, group_units] ).all(axis=1) stat_data = group_data[hue_mask & have] unit_data = group_units[hue_mask & have] # Estimate a statistic from the vector of data if not stat_data.size: statistic[i].append(np.nan) else: statistic[i].append(estimator(stat_data)) # Get a confidence interval for this estimate if ci is not None: if stat_data.size < 2: confint[i].append([np.nan, np.nan]) continue if ci == "sd": estimate = estimator(stat_data) sd = 
np.std(stat_data) confint[i].append((estimate - sd, estimate + sd)) else: boots = bootstrap(stat_data, func=estimator, n_boot=n_boot, units=unit_data, seed=seed) confint[i].append(utils.ci(boots, ci)) # Save the resulting values for plotting self.statistic = np.array(statistic) self.confint = np.array(confint) def draw_confints(self, ax, at_group, confint, colors, errwidth=None, capsize=None, **kws): if errwidth is not None: kws.setdefault("lw", errwidth) else: kws.setdefault("lw", mpl.rcParams["lines.linewidth"] * 1.8) for at, (ci_low, ci_high), color in zip(at_group, confint, colors): if self.orient == "v": ax.plot([at, at], [ci_low, ci_high], color=color, **kws) if capsize is not None: ax.plot([at - capsize / 2, at + capsize / 2], [ci_low, ci_low], color=color, **kws) ax.plot([at - capsize / 2, at + capsize / 2], [ci_high, ci_high], color=color, **kws) else: ax.plot([ci_low, ci_high], [at, at], color=color, **kws) if capsize is not None: ax.plot([ci_low, ci_low], [at - capsize / 2, at + capsize / 2], color=color, **kws) ax.plot([ci_high, ci_high], [at - capsize / 2, at + capsize / 2], color=color, **kws) class _BarPlotter(_CategoricalStatPlotter): """Show point estimates and confidence intervals with bars.""" def __init__(self, x, y, hue, data, order, hue_order, estimator, ci, n_boot, units, seed, orient, color, palette, saturation, errcolor, errwidth, capsize, dodge): """Initialize the plotter.""" self.establish_variables(x, y, hue, data, orient, order, hue_order, units) self.establish_colors(color, palette, saturation) self.estimate_statistic(estimator, ci, n_boot, seed) self.dodge = dodge self.errcolor = errcolor self.errwidth = errwidth self.capsize = capsize def draw_bars(self, ax, kws): """Draw the bars onto `ax`.""" # Get the right matplotlib function depending on the orientation barfunc = ax.bar if self.orient == "v" else ax.barh barpos = np.arange(len(self.statistic)) if self.plot_hues is None: # Draw the bars barfunc(barpos, self.statistic, self.width, color=self.colors, align="center", **kws) # Draw the confidence intervals errcolors = [self.errcolor] * len(barpos) self.draw_confints(ax, barpos, self.confint, errcolors, self.errwidth, self.capsize) else: for j, hue_level in enumerate(self.hue_names): # Draw the bars offpos = barpos + self.hue_offsets[j] barfunc(offpos, self.statistic[:, j], self.nested_width, color=self.colors[j], align="center", label=hue_level, **kws) # Draw the confidence intervals if self.confint.size: confint = self.confint[:, j] errcolors = [self.errcolor] * len(offpos) self.draw_confints(ax, offpos, confint, errcolors, self.errwidth, self.capsize) def plot(self, ax, bar_kws): """Make the plot.""" self.draw_bars(ax, bar_kws) self.annotate_axes(ax) if self.orient == "h": ax.invert_yaxis() class _PointPlotter(_CategoricalStatPlotter): default_palette = "dark" """Show point estimates and confidence intervals with (joined) points.""" def __init__(self, x, y, hue, data, order, hue_order, estimator, ci, n_boot, units, seed, markers, linestyles, dodge, join, scale, orient, color, palette, errwidth=None, capsize=None): """Initialize the plotter.""" self.establish_variables(x, y, hue, data, orient, order, hue_order, units) self.establish_colors(color, palette, 1) self.estimate_statistic(estimator, ci, n_boot, seed) # Override the default palette for single-color plots if hue is None and color is None and palette is None: self.colors = [color_palette()[0]] * len(self.colors) # Don't join single-layer plots with different colors if hue is None and palette is not 
None: join = False # Use a good default for `dodge=True` if dodge is True and self.hue_names is not None: dodge = .025 * len(self.hue_names) # Make sure we have a marker for each hue level if isinstance(markers, str): markers = [markers] * len(self.colors) self.markers = markers # Make sure we have a line style for each hue level if isinstance(linestyles, str): linestyles = [linestyles] * len(self.colors) self.linestyles = linestyles # Set the other plot components self.dodge = dodge self.join = join self.scale = scale self.errwidth = errwidth self.capsize = capsize @property def hue_offsets(self): """Offsets relative to the center position for each hue level.""" if self.dodge: offset = np.linspace(0, self.dodge, len(self.hue_names)) offset -= offset.mean() else: offset = np.zeros(len(self.hue_names)) return offset def draw_points(self, ax): """Draw the main data components of the plot.""" # Get the center positions on the categorical axis pointpos = np.arange(len(self.statistic)) # Get the size of the plot elements lw = mpl.rcParams["lines.linewidth"] * 1.8 * self.scale mew = lw * .75 markersize = np.pi * np.square(lw) * 2 if self.plot_hues is None: # Draw lines joining each estimate point if self.join: color = self.colors[0] ls = self.linestyles[0] if self.orient == "h": ax.plot(self.statistic, pointpos, color=color, ls=ls, lw=lw) else: ax.plot(pointpos, self.statistic, color=color, ls=ls, lw=lw) # Draw the confidence intervals self.draw_confints(ax, pointpos, self.confint, self.colors, self.errwidth, self.capsize) # Draw the estimate points marker = self.markers[0] colors = [mpl.colors.colorConverter.to_rgb(c) for c in self.colors] if self.orient == "h": x, y = self.statistic, pointpos else: x, y = pointpos, self.statistic ax.scatter(x, y, linewidth=mew, marker=marker, s=markersize, facecolor=colors, edgecolor=colors) else: offsets = self.hue_offsets for j, hue_level in enumerate(self.hue_names): # Determine the values to plot for this level statistic = self.statistic[:, j] # Determine the position on the categorical and z axes offpos = pointpos + offsets[j] z = j + 1 # Draw lines joining each estimate point if self.join: color = self.colors[j] ls = self.linestyles[j] if self.orient == "h": ax.plot(statistic, offpos, color=color, zorder=z, ls=ls, lw=lw) else: ax.plot(offpos, statistic, color=color, zorder=z, ls=ls, lw=lw) # Draw the confidence intervals if self.confint.size: confint = self.confint[:, j] errcolors = [self.colors[j]] * len(offpos) self.draw_confints(ax, offpos, confint, errcolors, self.errwidth, self.capsize, zorder=z) # Draw the estimate points n_points = len(remove_na(offpos)) marker = self.markers[j] color = mpl.colors.colorConverter.to_rgb(self.colors[j]) if self.orient == "h": x, y = statistic, offpos else: x, y = offpos, statistic if not len(remove_na(statistic)): x = y = [np.nan] * n_points ax.scatter(x, y, label=hue_level, facecolor=color, edgecolor=color, linewidth=mew, marker=marker, s=markersize, zorder=z) def plot(self, ax): """Make the plot.""" self.draw_points(ax) self.annotate_axes(ax) if self.orient == "h": ax.invert_yaxis() class _LVPlotter(_CategoricalPlotter): def __init__(self, x, y, hue, data, order, hue_order, orient, color, palette, saturation, width, dodge, k_depth, linewidth, scale, outlier_prop): # TODO assigning variables for None is unneccesary if width is None: width = .8 self.width = width self.dodge = dodge if saturation is None: saturation = .75 self.saturation = saturation if k_depth is None: k_depth = 'proportion' self.k_depth = k_depth 
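        # k_depth selects the method used to compute the letter-value depth
        # (the number of boxes); the recognized values are 'proportion'
        # (the default set above), 'tukey', and 'trustworthy', matching the
        # k_dict lookup in _lv_box_ends below.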
if linewidth is None: linewidth = mpl.rcParams["lines.linewidth"] self.linewidth = linewidth if scale is None: scale = 'exponential' self.scale = scale self.outlier_prop = outlier_prop self.establish_variables(x, y, hue, data, orient, order, hue_order) self.establish_colors(color, palette, saturation) def _lv_box_ends(self, vals, k_depth='proportion', outlier_prop=None): """Get the number of data points and calculate `depth` of letter-value plot.""" vals = np.asarray(vals) vals = vals[np.isfinite(vals)] n = len(vals) # If p is not set, calculate it so that 8 points are outliers if not outlier_prop: # Conventional boxplots assume this proportion of the data are # outliers. p = 0.007 else: if ((outlier_prop > 1.) or (outlier_prop < 0.)): raise ValueError('outlier_prop not in range [0, 1]!') p = outlier_prop # Select the depth, i.e. number of boxes to draw, based on the method k_dict = {'proportion': (np.log2(n)) - int(np.log2(n*p)) + 1, 'tukey': (np.log2(n)) - 3, 'trustworthy': (np.log2(n) - np.log2(2*stats.norm.ppf((1-p))**2)) + 1} k = k_dict[k_depth] try: k = int(k) except ValueError: k = 1 # If the number happens to be less than 0, set k to 0 if k < 1.: k = 1 # Calculate the upper box ends upper = [100*(1 - 0.5**(i+2)) for i in range(k, -1, -1)] # Calculate the lower box ends lower = [100*(0.5**(i+2)) for i in range(k, -1, -1)] # Stitch the box ends together percentile_ends = [(i, j) for i, j in zip(lower, upper)] box_ends = [np.percentile(vals, q) for q in percentile_ends] return box_ends, k def _lv_outliers(self, vals, k): """Find the outliers based on the letter value depth.""" perc_ends = (100*(0.5**(k+2)), 100*(1 - 0.5**(k+2))) edges = np.percentile(vals, perc_ends) lower_out = vals[np.where(vals < edges[0])[0]] upper_out = vals[np.where(vals > edges[1])[0]] return np.concatenate((lower_out, upper_out)) def _width_functions(self, width_func): # Dictionary of functions for computing the width of the boxes width_functions = {'linear': lambda h, i, k: (i + 1.) / k, 'exponential': lambda h, i, k: 2**(-k+i-1), 'area': lambda h, i, k: (1 - 2**(-k+i-2)) / h} return width_functions[width_func] def _lvplot(self, box_data, positions, color=[255. / 256., 185. 
/ 256., 0.], vert=True, widths=1, k_depth='proportion', ax=None, outlier_prop=None, scale='exponential', **kws): x = positions[0] box_data = np.asarray(box_data) # If we only have one data point, plot a line if len(box_data) == 1: kws.update({'color': self.gray, 'linestyle': '-'}) ys = [box_data[0], box_data[0]] xs = [x - widths / 2, x + widths / 2] if vert: xx, yy = xs, ys else: xx, yy = ys, xs ax.plot(xx, yy, **kws) else: # Get the number of data points and calculate "depth" of # letter-value plot box_ends, k = self._lv_box_ends(box_data, k_depth=k_depth, outlier_prop=outlier_prop) # Anonymous functions for calculating the width and height # of the letter value boxes width = self._width_functions(scale) # Function to find height of boxes def height(b): return b[1] - b[0] # Functions to construct the letter value boxes def vert_perc_box(x, b, i, k, w): rect = Patches.Rectangle((x - widths*w / 2, b[0]), widths*w, height(b), fill=True) return rect def horz_perc_box(x, b, i, k, w): rect = Patches.Rectangle((b[0], x - widths*w / 2), height(b), widths*w, fill=True) return rect # Scale the width of the boxes so the biggest starts at 1 w_area = np.array([width(height(b), i, k) for i, b in enumerate(box_ends)]) w_area = w_area / np.max(w_area) # Calculate the medians y = np.median(box_data) # Calculate the outliers and plot outliers = self._lv_outliers(box_data, k) hex_color = mpl.colors.rgb2hex(color) if vert: boxes = [vert_perc_box(x, b[0], i, k, b[1]) for i, b in enumerate(zip(box_ends, w_area))] # Plot the medians ax.plot([x - widths / 2, x + widths / 2], [y, y], c='.15', alpha=.45, **kws) ax.scatter(np.repeat(x, len(outliers)), outliers, marker='d', c=hex_color, **kws) else: boxes = [horz_perc_box(x, b[0], i, k, b[1]) for i, b in enumerate(zip(box_ends, w_area))] # Plot the medians ax.plot([y, y], [x - widths / 2, x + widths / 2], c='.15', alpha=.45, **kws) ax.scatter(outliers, np.repeat(x, len(outliers)), marker='d', c=hex_color, **kws) # Construct a color map from the input color rgb = [[1, 1, 1], hex_color] cmap = mpl.colors.LinearSegmentedColormap.from_list('new_map', rgb) collection = PatchCollection(boxes, cmap=cmap) # Set the color gradation collection.set_array(np.array(np.linspace(0, 1, len(boxes)))) # Plot the boxes ax.add_collection(collection) def draw_letter_value_plot(self, ax, kws): """Use matplotlib to draw a letter value plot on an Axes.""" vert = self.orient == "v" for i, group_data in enumerate(self.plot_data): if self.plot_hues is None: # Handle case where there is data at this level if group_data.size == 0: continue # Draw a single box or a set of boxes # with a single level of grouping box_data = remove_na(group_data) # Handle case where there is no non-null data if box_data.size == 0: continue color = self.colors[i] self._lvplot(box_data, positions=[i], color=color, vert=vert, widths=self.width, k_depth=self.k_depth, ax=ax, scale=self.scale, outlier_prop=self.outlier_prop, **kws) else: # Draw nested groups of boxes offsets = self.hue_offsets for j, hue_level in enumerate(self.hue_names): # Add a legend for this hue level if not i: self.add_legend_data(ax, self.colors[j], hue_level) # Handle case where there is data at this level if group_data.size == 0: continue hue_mask = self.plot_hues[i] == hue_level box_data = remove_na(group_data[hue_mask]) # Handle case where there is no non-null data if box_data.size == 0: continue color = self.colors[j] center = i + offsets[j] self._lvplot(box_data, positions=[center], color=color, vert=vert, widths=self.nested_width, 
k_depth=self.k_depth, ax=ax, scale=self.scale, outlier_prop=self.outlier_prop, **kws) def plot(self, ax, boxplot_kws): """Make the plot.""" self.draw_letter_value_plot(ax, boxplot_kws) self.annotate_axes(ax) if self.orient == "h": ax.invert_yaxis() _categorical_docs = dict( # Shared narrative docs categorical_narrative=dedent("""\ This function always treats one of the variables as categorical and draws data at ordinal positions (0, 1, ... n) on the relevant axis, even when the data has a numeric or date type. See the :ref:`tutorial ` for more information.\ """), main_api_narrative=dedent("""\ Input data can be passed in a variety of formats, including: - Vectors of data represented as lists, numpy arrays, or pandas Series objects passed directly to the ``x``, ``y``, and/or ``hue`` parameters. - A "long-form" DataFrame, in which case the ``x``, ``y``, and ``hue`` variables will determine how the data are plotted. - A "wide-form" DataFrame, such that each numeric column will be plotted. - An array or list of vectors. In most cases, it is possible to use numpy or Python objects, but pandas objects are preferable because the associated names will be used to annotate the axes. Additionally, you can use Categorical types for the grouping variables to control the order of plot elements.\ """), # Shared function parameters input_params=dedent("""\ x, y, hue : names of variables in ``data`` or vector data, optional Inputs for plotting long-form data. See examples for interpretation.\ """), string_input_params=dedent("""\ x, y, hue : names of variables in ``data`` Inputs for plotting long-form data. See examples for interpretation.\ """), categorical_data=dedent("""\ data : DataFrame, array, or list of arrays, optional Dataset for plotting. If ``x`` and ``y`` are absent, this is interpreted as wide-form. Otherwise it is expected to be long-form.\ """), long_form_data=dedent("""\ data : DataFrame Long-form (tidy) dataset for plotting. Each column should correspond to a variable, and each row should correspond to an observation.\ """), order_vars=dedent("""\ order, hue_order : lists of strings, optional Order to plot the categorical levels in, otherwise the levels are inferred from the data objects.\ """), stat_api_params=dedent("""\ estimator : callable that maps vector -> scalar, optional Statistical function to estimate within each categorical bin. ci : float or "sd" or None, optional Size of confidence intervals to draw around estimated values. If "sd", skip bootstrapping and draw the standard deviation of the observations. If ``None``, no bootstrapping will be performed, and error bars will not be drawn. n_boot : int, optional Number of bootstrap iterations to use when computing confidence intervals. units : name of variable in ``data`` or vector data, optional Identifier of sampling units, which will be used to perform a multilevel bootstrap and account for repeated measures design. seed : int, numpy.random.Generator, or numpy.random.RandomState, optional Seed or random number generator for reproducible bootstrapping.\ """), orient=dedent("""\ orient : "v" | "h", optional Orientation of the plot (vertical or horizontal). 
This is usually inferred from the dtype of the input variables, but can be used to specify when the "categorical" variable is a numeric or when plotting wide-form data.\ """), color=dedent("""\ color : matplotlib color, optional Color for all of the elements, or seed for a gradient palette.\ """), palette=dedent("""\ palette : palette name, list, or dict, optional Color palette that maps either the grouping variable or the hue variable. If the palette is a dictionary, keys should be names of levels and values should be matplotlib colors.\ """), saturation=dedent("""\ saturation : float, optional Proportion of the original saturation to draw colors at. Large patches often look better with slightly desaturated colors, but set this to ``1`` if you want the plot colors to perfectly match the input color spec.\ """), capsize=dedent("""\ capsize : float, optional Width of the "caps" on error bars. """), errwidth=dedent("""\ errwidth : float, optional Thickness of error bar lines (and caps).\ """), width=dedent("""\ width : float, optional Width of a full element when not using hue nesting, or width of all the elements for one level of the major grouping variable.\ """), dodge=dedent("""\ dodge : bool, optional When hue nesting is used, whether elements should be shifted along the categorical axis.\ """), linewidth=dedent("""\ linewidth : float, optional Width of the gray lines that frame the plot elements.\ """), ax_in=dedent("""\ ax : matplotlib Axes, optional Axes object to draw the plot onto, otherwise uses the current Axes.\ """), ax_out=dedent("""\ ax : matplotlib Axes Returns the Axes object with the plot drawn onto it.\ """), # Shared see also boxplot=dedent("""\ boxplot : A traditional box-and-whisker plot with a similar API.\ """), violinplot=dedent("""\ violinplot : A combination of boxplot and kernel density estimation.\ """), stripplot=dedent("""\ stripplot : A scatterplot where one variable is categorical. Can be used in conjunction with other plots to show each observation.\ """), swarmplot=dedent("""\ swarmplot : A categorical scatterplot where the points do not overlap. Can be used with other plots to show each observation.\ """), barplot=dedent("""\ barplot : Show point estimates and confidence intervals using bars.\ """), countplot=dedent("""\ countplot : Show the counts of observations in each categorical bin.\ """), pointplot=dedent("""\ pointplot : Show point estimates and confidence intervals using scatterplot glyphs.\ """), catplot=dedent("""\ catplot : Combine a categorical plot with a :class:`FacetGrid`.\ """), boxenplot=dedent("""\ boxenplot : An enhanced boxplot for larger datasets.\ """), ) _categorical_docs.update(_facet_docs) def boxplot(x=None, y=None, hue=None, data=None, order=None, hue_order=None, orient=None, color=None, palette=None, saturation=.75, width=.8, dodge=True, fliersize=5, linewidth=None, whis=1.5, ax=None, **kwargs): plotter = _BoxPlotter(x, y, hue, data, order, hue_order, orient, color, palette, saturation, width, dodge, fliersize, linewidth) if ax is None: ax = plt.gca() kwargs.update(dict(whis=whis)) plotter.plot(ax, kwargs) return ax boxplot.__doc__ = dedent("""\ Draw a box plot to show distributions with respect to categories. A box plot (or box-and-whisker plot) shows the distribution of quantitative data in a way that facilitates comparisons between variables or across levels of a categorical variable. 
The box shows the quartiles of the dataset while the whiskers extend to show the rest of the distribution, except for points that are determined to be "outliers" using a method that is a function of the inter-quartile range. {main_api_narrative} {categorical_narrative} Parameters ---------- {input_params} {categorical_data} {order_vars} {orient} {color} {palette} {saturation} {width} {dodge} fliersize : float, optional Size of the markers used to indicate outlier observations. {linewidth} whis : float, optional Proportion of the IQR past the low and high quartiles to extend the plot whiskers. Points outside this range will be identified as outliers. {ax_in} kwargs : key, value mappings Other keyword arguments are passed through to :meth:`matplotlib.axes.Axes.boxplot`. Returns ------- {ax_out} See Also -------- {violinplot} {stripplot} {swarmplot} {catplot} Examples -------- Draw a single horizontal boxplot: .. plot:: :context: close-figs >>> import seaborn as sns >>> sns.set(style="whitegrid") >>> tips = sns.load_dataset("tips") >>> ax = sns.boxplot(x=tips["total_bill"]) Draw a vertical boxplot grouped by a categorical variable: .. plot:: :context: close-figs >>> ax = sns.boxplot(x="day", y="total_bill", data=tips) Draw a boxplot with nested grouping by two categorical variables: .. plot:: :context: close-figs >>> ax = sns.boxplot(x="day", y="total_bill", hue="smoker", ... data=tips, palette="Set3") Draw a boxplot with nested grouping when some bins are empty: .. plot:: :context: close-figs >>> ax = sns.boxplot(x="day", y="total_bill", hue="time", ... data=tips, linewidth=2.5) Control box order by passing an explicit order: .. plot:: :context: close-figs >>> ax = sns.boxplot(x="time", y="tip", data=tips, ... order=["Dinner", "Lunch"]) Draw a boxplot for each numeric variable in a DataFrame: .. plot:: :context: close-figs >>> iris = sns.load_dataset("iris") >>> ax = sns.boxplot(data=iris, orient="h", palette="Set2") Use ``hue`` without changing box position or width: .. plot:: :context: close-figs >>> tips["weekend"] = tips["day"].isin(["Sat", "Sun"]) >>> ax = sns.boxplot(x="day", y="total_bill", hue="weekend", ... data=tips, dodge=False) Use :func:`swarmplot` to show the datapoints on top of the boxes: .. plot:: :context: close-figs >>> ax = sns.boxplot(x="day", y="total_bill", data=tips) >>> ax = sns.swarmplot(x="day", y="total_bill", data=tips, color=".25") Use :func:`catplot` to combine a :func:`boxplot` and a :class:`FacetGrid`. This allows grouping within additional categorical variables. Using :func:`catplot` is safer than using :class:`FacetGrid` directly, as it ensures synchronization of variable order across facets: .. plot:: :context: close-figs >>> g = sns.catplot(x="sex", y="total_bill", ... hue="smoker", col="time", ... data=tips, kind="box", ... height=4, aspect=.7); """).format(**_categorical_docs) def violinplot(x=None, y=None, hue=None, data=None, order=None, hue_order=None, bw="scott", cut=2, scale="area", scale_hue=True, gridsize=100, width=.8, inner="box", split=False, dodge=True, orient=None, linewidth=None, color=None, palette=None, saturation=.75, ax=None, **kwargs): plotter = _ViolinPlotter(x, y, hue, data, order, hue_order, bw, cut, scale, scale_hue, gridsize, width, inner, split, dodge, orient, linewidth, color, palette, saturation) if ax is None: ax = plt.gca() plotter.plot(ax) return ax violinplot.__doc__ = dedent("""\ Draw a combination of boxplot and kernel density estimate. A violin plot plays a similar role as a box and whisker plot. 
It shows the distribution of quantitative data across several levels of one (or more) categorical variables such that those distributions can be compared. Unlike a box plot, in which all of the plot components correspond to actual datapoints, the violin plot features a kernel density estimation of the underlying distribution. This can be an effective and attractive way to show multiple distributions of data at once, but keep in mind that the estimation procedure is influenced by the sample size, and violins for relatively small samples might look misleadingly smooth. {main_api_narrative} {categorical_narrative} Parameters ---------- {input_params} {categorical_data} {order_vars} bw : {{'scott', 'silverman', float}}, optional Either the name of a reference rule or the scale factor to use when computing the kernel bandwidth. The actual kernel size will be determined by multiplying the scale factor by the standard deviation of the data within each bin. cut : float, optional Distance, in units of bandwidth size, to extend the density past the extreme datapoints. Set to 0 to limit the violin range within the range of the observed data (i.e., to have the same effect as ``trim=True`` in ``ggplot``). scale : {{"area", "count", "width"}}, optional The method used to scale the width of each violin. If ``area``, each violin will have the same area. If ``count``, the width of the violins will be scaled by the number of observations in that bin. If ``width``, each violin will have the same width. scale_hue : bool, optional When nesting violins using a ``hue`` variable, this parameter determines whether the scaling is computed within each level of the major grouping variable (``scale_hue=True``) or across all the violins on the plot (``scale_hue=False``). gridsize : int, optional Number of points in the discrete grid used to compute the kernel density estimate. {width} inner : {{"box", "quartile", "point", "stick", None}}, optional Representation of the datapoints in the violin interior. If ``box``, draw a miniature boxplot. If ``quartile``, draw the quartiles of the distribution. If ``point`` or ``stick``, show each underlying datapoint. Using ``None`` will draw unadorned violins. split : bool, optional When using hue nesting with a variable that takes two levels, setting ``split`` to True will draw half of a violin for each level. This can make it easier to directly compare the distributions. {dodge} {orient} {linewidth} {color} {palette} {saturation} {ax_in} Returns ------- {ax_out} See Also -------- {boxplot} {stripplot} {swarmplot} {catplot} Examples -------- Draw a single horizontal violinplot: .. plot:: :context: close-figs >>> import seaborn as sns >>> sns.set(style="whitegrid") >>> tips = sns.load_dataset("tips") >>> ax = sns.violinplot(x=tips["total_bill"]) Draw a vertical violinplot grouped by a categorical variable: .. plot:: :context: close-figs >>> ax = sns.violinplot(x="day", y="total_bill", data=tips) Draw a violinplot with nested grouping by two categorical variables: .. plot:: :context: close-figs >>> ax = sns.violinplot(x="day", y="total_bill", hue="smoker", ... data=tips, palette="muted") Draw split violins to compare across the hue variable: .. plot:: :context: close-figs >>> ax = sns.violinplot(x="day", y="total_bill", hue="smoker", ... data=tips, palette="muted", split=True) Control violin order by passing an explicit order: .. plot:: :context: close-figs >>> ax = sns.violinplot(x="time", y="tip", data=tips, ...
order=["Dinner", "Lunch"]) Scale the violin width by the number of observations in each bin: .. plot:: :context: close-figs >>> ax = sns.violinplot(x="day", y="total_bill", hue="sex", ... data=tips, palette="Set2", split=True, ... scale="count") Draw the quartiles as horizontal lines instead of a mini-box: .. plot:: :context: close-figs >>> ax = sns.violinplot(x="day", y="total_bill", hue="sex", ... data=tips, palette="Set2", split=True, ... scale="count", inner="quartile") Show each observation with a stick inside the violin: .. plot:: :context: close-figs >>> ax = sns.violinplot(x="day", y="total_bill", hue="sex", ... data=tips, palette="Set2", split=True, ... scale="count", inner="stick") Scale the density relative to the counts across all bins: .. plot:: :context: close-figs >>> ax = sns.violinplot(x="day", y="total_bill", hue="sex", ... data=tips, palette="Set2", split=True, ... scale="count", inner="stick", scale_hue=False) Use a narrow bandwidth to reduce the amount of smoothing: .. plot:: :context: close-figs >>> ax = sns.violinplot(x="day", y="total_bill", hue="sex", ... data=tips, palette="Set2", split=True, ... scale="count", inner="stick", ... scale_hue=False, bw=.2) Draw horizontal violins: .. plot:: :context: close-figs >>> planets = sns.load_dataset("planets") >>> ax = sns.violinplot(x="orbital_period", y="method", ... data=planets[planets.orbital_period < 1000], ... scale="width", palette="Set3") Don't let density extend past extreme values in the data: .. plot:: :context: close-figs >>> ax = sns.violinplot(x="orbital_period", y="method", ... data=planets[planets.orbital_period < 1000], ... cut=0, scale="width", palette="Set3") Use ``hue`` without changing violin position or width: .. plot:: :context: close-figs >>> tips["weekend"] = tips["day"].isin(["Sat", "Sun"]) >>> ax = sns.violinplot(x="day", y="total_bill", hue="weekend", ... data=tips, dodge=False) Use :func:`catplot` to combine a :func:`violinplot` and a :class:`FacetGrid`. This allows grouping within additional categorical variables. Using :func:`catplot` is safer than using :class:`FacetGrid` directly, as it ensures synchronization of variable order across facets: .. plot:: :context: close-figs >>> g = sns.catplot(x="sex", y="total_bill", ... hue="smoker", col="time", ... data=tips, kind="violin", split=True, ... height=4, aspect=.7); """).format(**_categorical_docs) def lvplot(*args, **kwargs): """Deprecated; please use `boxenplot`.""" msg = ( "The `lvplot` function has been renamed to `boxenplot`. The original " "name will be removed in a future release. Please update your code. " ) warnings.warn(msg) return boxenplot(*args, **kwargs) def boxenplot(x=None, y=None, hue=None, data=None, order=None, hue_order=None, orient=None, color=None, palette=None, saturation=.75, width=.8, dodge=True, k_depth='proportion', linewidth=None, scale='exponential', outlier_prop=None, ax=None, **kwargs): plotter = _LVPlotter(x, y, hue, data, order, hue_order, orient, color, palette, saturation, width, dodge, k_depth, linewidth, scale, outlier_prop) if ax is None: ax = plt.gca() plotter.plot(ax, kwargs) return ax boxenplot.__doc__ = dedent("""\ Draw an enhanced box plot for larger datasets. This style of plot was originally named a "letter value" plot because it shows a large number of quantiles that are defined as "letter values". It is similar to a box plot in plotting a nonparametric representation of a distribution in which all features correspond to actual observations. 
By plotting more quantiles, it provides more information about the shape of the distribution, particularly in the tails. For a more extensive explanation, you can read the paper that introduced the plot: https://vita.had.co.nz/papers/letter-value-plot.html {main_api_narrative} {categorical_narrative} Parameters ---------- {input_params} {categorical_data} {order_vars} {orient} {color} {palette} {saturation} {width} {dodge} k_depth : "proportion" | "tukey" | "trustworthy", optional The number of boxes, and by extension number of percentiles, to draw. All methods are detailed in Wickham's paper. Each makes different assumptions about the number of outliers and leverages different statistical properties. {linewidth} scale : "linear" | "exponential" | "area" Method to use for the width of the letter value boxes. All give similar results visually. "linear" reduces the width by a constant linear factor, "exponential" uses the proportion of data not covered, "area" is proportional to the percentage of data covered. outlier_prop : float, optional Proportion of data believed to be outliers. Used in conjunction with k_depth to determine the number of percentiles to draw. Defaults to 0.007 as a proportion of outliers. Should be in range [0, 1]. {ax_in} kwargs : key, value mappings Other keyword arguments are passed through to :meth:`matplotlib.axes.Axes.plot` and :meth:`matplotlib.axes.Axes.scatter`. Returns ------- {ax_out} See Also -------- {violinplot} {boxplot} {catplot} Examples -------- Draw a single horizontal boxen plot: .. plot:: :context: close-figs >>> import seaborn as sns >>> sns.set(style="whitegrid") >>> tips = sns.load_dataset("tips") >>> ax = sns.boxenplot(x=tips["total_bill"]) Draw a vertical boxen plot grouped by a categorical variable: .. plot:: :context: close-figs >>> ax = sns.boxenplot(x="day", y="total_bill", data=tips) Draw a letter value plot with nested grouping by two categorical variables: .. plot:: :context: close-figs >>> ax = sns.boxenplot(x="day", y="total_bill", hue="smoker", ... data=tips, palette="Set3") Draw a boxen plot with nested grouping when some bins are empty: .. plot:: :context: close-figs >>> ax = sns.boxenplot(x="day", y="total_bill", hue="time", ... data=tips, linewidth=2.5) Control box order by passing an explicit order: .. plot:: :context: close-figs >>> ax = sns.boxenplot(x="time", y="tip", data=tips, ... order=["Dinner", "Lunch"]) Draw a boxen plot for each numeric variable in a DataFrame: .. plot:: :context: close-figs >>> iris = sns.load_dataset("iris") >>> ax = sns.boxenplot(data=iris, orient="h", palette="Set2") Use :func:`stripplot` to show the datapoints on top of the boxes: .. plot:: :context: close-figs >>> ax = sns.boxenplot(x="day", y="total_bill", data=tips) >>> ax = sns.stripplot(x="day", y="total_bill", data=tips, ... size=4, color="gray") Use :func:`catplot` to combine :func:`boxenplot` and a :class:`FacetGrid`. This allows grouping within additional categorical variables. Using :func:`catplot` is safer than using :class:`FacetGrid` directly, as it ensures synchronization of variable order across facets: .. plot:: :context: close-figs >>> g = sns.catplot(x="sex", y="total_bill", ... hue="smoker", col="time", ... data=tips, kind="boxen", ... 
height=4, aspect=.7); """).format(**_categorical_docs) def stripplot(x=None, y=None, hue=None, data=None, order=None, hue_order=None, jitter=True, dodge=False, orient=None, color=None, palette=None, size=5, edgecolor="gray", linewidth=0, ax=None, **kwargs): if "split" in kwargs: dodge = kwargs.pop("split") msg = "The `split` parameter has been renamed to `dodge`." warnings.warn(msg, UserWarning) plotter = _StripPlotter(x, y, hue, data, order, hue_order, jitter, dodge, orient, color, palette) if ax is None: ax = plt.gca() kwargs.setdefault("zorder", 3) size = kwargs.get("s", size) if linewidth is None: linewidth = size / 10 if edgecolor == "gray": edgecolor = plotter.gray kwargs.update(dict(s=size ** 2, edgecolor=edgecolor, linewidth=linewidth)) plotter.plot(ax, kwargs) return ax stripplot.__doc__ = dedent("""\ Draw a scatterplot where one variable is categorical. A strip plot can be drawn on its own, but it is also a good complement to a box or violin plot in cases where you want to show all observations along with some representation of the underlying distribution. {main_api_narrative} {categorical_narrative} Parameters ---------- {input_params} {categorical_data} {order_vars} jitter : float, ``True``/``1`` is special-cased, optional Amount of jitter (only along the categorical axis) to apply. This can be useful when you have many points and they overlap, so that it is easier to see the distribution. You can specify the amount of jitter (half the width of the uniform random variable support), or just use ``True`` for a good default. dodge : bool, optional When using ``hue`` nesting, setting this to ``True`` will separate the strips for different hue levels along the categorical axis. Otherwise, the points for each level will be plotted on top of each other. {orient} {color} {palette} size : float, optional Radius of the markers, in points. edgecolor : matplotlib color, "gray" is special-cased, optional Color of the lines around each point. If you pass ``"gray"``, the brightness is determined by the color palette used for the body of the points. {linewidth} {ax_in} kwargs : key, value mappings Other keyword arguments are passed through to :meth:`matplotlib.axes.Axes.scatter`. Returns ------- {ax_out} See Also -------- {swarmplot} {boxplot} {violinplot} {catplot} Examples -------- Draw a single horizontal strip plot: .. plot:: :context: close-figs >>> import seaborn as sns >>> sns.set(style="whitegrid") >>> tips = sns.load_dataset("tips") >>> ax = sns.stripplot(x=tips["total_bill"]) Group the strips by a categorical variable: .. plot:: :context: close-figs >>> ax = sns.stripplot(x="day", y="total_bill", data=tips) Use a smaller amount of jitter: .. plot:: :context: close-figs >>> ax = sns.stripplot(x="day", y="total_bill", data=tips, jitter=0.05) Draw horizontal strips: .. plot:: :context: close-figs >>> ax = sns.stripplot(x="total_bill", y="day", data=tips) Draw outlines around the points: .. plot:: :context: close-figs >>> ax = sns.stripplot(x="total_bill", y="day", data=tips, ... linewidth=1) Nest the strips within a second categorical variable: .. plot:: :context: close-figs >>> ax = sns.stripplot(x="sex", y="total_bill", hue="day", data=tips) Draw each level of the ``hue`` variable at different locations on the major categorical axis: .. plot:: :context: close-figs >>> ax = sns.stripplot(x="day", y="total_bill", hue="smoker", ... data=tips, palette="Set2", dodge=True) Control strip order by passing an explicit order: .. 
plot:: :context: close-figs >>> ax = sns.stripplot(x="time", y="tip", data=tips, ... order=["Dinner", "Lunch"]) Draw strips with large points and different aesthetics: .. plot:: :context: close-figs >>> ax = sns.stripplot("day", "total_bill", "smoker", data=tips, ... palette="Set2", size=20, marker="D", ... edgecolor="gray", alpha=.25) Draw strips of observations on top of a box plot: .. plot:: :context: close-figs >>> import numpy as np >>> ax = sns.boxplot(x="tip", y="day", data=tips, whis=np.inf) >>> ax = sns.stripplot(x="tip", y="day", data=tips, color=".3") Draw strips of observations on top of a violin plot: .. plot:: :context: close-figs >>> ax = sns.violinplot(x="day", y="total_bill", data=tips, ... inner=None, color=".8") >>> ax = sns.stripplot(x="day", y="total_bill", data=tips) Use :func:`catplot` to combine a :func:`stripplot` and a :class:`FacetGrid`. This allows grouping within additional categorical variables. Using :func:`catplot` is safer than using :class:`FacetGrid` directly, as it ensures synchronization of variable order across facets: .. plot:: :context: close-figs >>> g = sns.catplot(x="sex", y="total_bill", ... hue="smoker", col="time", ... data=tips, kind="strip", ... height=4, aspect=.7); """).format(**_categorical_docs) def swarmplot(x=None, y=None, hue=None, data=None, order=None, hue_order=None, dodge=False, orient=None, color=None, palette=None, size=5, edgecolor="gray", linewidth=0, ax=None, **kwargs): if "split" in kwargs: dodge = kwargs.pop("split") msg = "The `split` parameter has been renamed to `dodge`." warnings.warn(msg, UserWarning) plotter = _SwarmPlotter(x, y, hue, data, order, hue_order, dodge, orient, color, palette) if ax is None: ax = plt.gca() kwargs.setdefault("zorder", 3) size = kwargs.get("s", size) if linewidth is None: linewidth = size / 10 if edgecolor == "gray": edgecolor = plotter.gray kwargs.update(dict(s=size ** 2, edgecolor=edgecolor, linewidth=linewidth)) plotter.plot(ax, kwargs) return ax swarmplot.__doc__ = dedent("""\ Draw a categorical scatterplot with non-overlapping points. This function is similar to :func:`stripplot`, but the points are adjusted (only along the categorical axis) so that they don't overlap. This gives a better representation of the distribution of values, but it does not scale well to large numbers of observations. This style of plot is sometimes called a "beeswarm". A swarm plot can be drawn on its own, but it is also a good complement to a box or violin plot in cases where you want to show all observations along with some representation of the underlying distribution. Arranging the points properly requires an accurate transformation between data and point coordinates. This means that non-default axis limits must be set *before* drawing the plot. {main_api_narrative} {categorical_narrative} Parameters ---------- {input_params} {categorical_data} {order_vars} dodge : bool, optional When using ``hue`` nesting, setting this to ``True`` will separate the strips for different hue levels along the categorical axis. Otherwise, the points for each level will be plotted in one swarm. {orient} {color} {palette} size : float, optional Radius of the markers, in points. edgecolor : matplotlib color, "gray" is special-cased, optional Color of the lines around each point. If you pass ``"gray"``, the brightness is determined by the color palette used for the body of the points. {linewidth} {ax_in} kwargs : key, value mappings Other keyword arguments are passed through to :meth:`matplotlib.axes.Axes.scatter`. 
Returns ------- {ax_out} See Also -------- {boxplot} {violinplot} {stripplot} {catplot} Examples -------- Draw a single horizontal swarm plot: .. plot:: :context: close-figs >>> import seaborn as sns >>> sns.set(style="whitegrid") >>> tips = sns.load_dataset("tips") >>> ax = sns.swarmplot(x=tips["total_bill"]) Group the swarms by a categorical variable: .. plot:: :context: close-figs >>> ax = sns.swarmplot(x="day", y="total_bill", data=tips) Draw horizontal swarms: .. plot:: :context: close-figs >>> ax = sns.swarmplot(x="total_bill", y="day", data=tips) Color the points using a second categorical variable: .. plot:: :context: close-figs >>> ax = sns.swarmplot(x="day", y="total_bill", hue="sex", data=tips) Split each level of the ``hue`` variable along the categorical axis: .. plot:: :context: close-figs >>> ax = sns.swarmplot(x="day", y="total_bill", hue="smoker", ... data=tips, palette="Set2", dodge=True) Control swarm order by passing an explicit order: .. plot:: :context: close-figs >>> ax = sns.swarmplot(x="time", y="tip", data=tips, ... order=["Dinner", "Lunch"]) Plot using larger points: .. plot:: :context: close-figs >>> ax = sns.swarmplot(x="time", y="tip", data=tips, size=6) Draw swarms of observations on top of a box plot: .. plot:: :context: close-figs >>> ax = sns.boxplot(x="tip", y="day", data=tips, whis=np.inf) >>> ax = sns.swarmplot(x="tip", y="day", data=tips, color=".2") Draw swarms of observations on top of a violin plot: .. plot:: :context: close-figs >>> ax = sns.violinplot(x="day", y="total_bill", data=tips, inner=None) >>> ax = sns.swarmplot(x="day", y="total_bill", data=tips, ... color="white", edgecolor="gray") Use :func:`catplot` to combine a :func:`swarmplot` and a :class:`FacetGrid`. This allows grouping within additional categorical variables. Using :func:`catplot` is safer than using :class:`FacetGrid` directly, as it ensures synchronization of variable order across facets: .. plot:: :context: close-figs >>> g = sns.catplot(x="sex", y="total_bill", ... hue="smoker", col="time", ... data=tips, kind="swarm", ... height=4, aspect=.7); """).format(**_categorical_docs) def barplot(x=None, y=None, hue=None, data=None, order=None, hue_order=None, estimator=np.mean, ci=95, n_boot=1000, units=None, seed=None, orient=None, color=None, palette=None, saturation=.75, errcolor=".26", errwidth=None, capsize=None, dodge=True, ax=None, **kwargs): plotter = _BarPlotter(x, y, hue, data, order, hue_order, estimator, ci, n_boot, units, seed, orient, color, palette, saturation, errcolor, errwidth, capsize, dodge) if ax is None: ax = plt.gca() plotter.plot(ax, kwargs) return ax barplot.__doc__ = dedent("""\ Show point estimates and confidence intervals as rectangular bars. A bar plot represents an estimate of central tendency for a numeric variable with the height of each rectangle and provides some indication of the uncertainty around that estimate using error bars. Bar plots include 0 in the quantitative axis range, and they are a good choice when 0 is a meaningful value for the quantitative variable, and you want to make comparisons against it. For datasets where 0 is not a meaningful value, a point plot will allow you to focus on differences between levels of one or more categorical variables. It is also important to keep in mind that a bar plot shows only the mean (or other estimator) value, but in many cases it may be more informative to show the distribution of values at each level of the categorical variables. 
In that case, other approaches such as a box or violin plot may be more appropriate. {main_api_narrative} {categorical_narrative} Parameters ---------- {input_params} {categorical_data} {order_vars} {stat_api_params} {orient} {color} {palette} {saturation} errcolor : matplotlib color Color for the lines that represent the confidence interval. {errwidth} {capsize} {dodge} {ax_in} kwargs : key, value mappings Other keyword arguments are passed through to :meth:`matplotlib.axes.Axes.bar`. Returns ------- {ax_out} See Also -------- {countplot} {pointplot} {catplot} Examples -------- Draw a set of vertical bar plots grouped by a categorical variable: .. plot:: :context: close-figs >>> import seaborn as sns >>> sns.set(style="whitegrid") >>> tips = sns.load_dataset("tips") >>> ax = sns.barplot(x="day", y="total_bill", data=tips) Draw a set of vertical bars with nested grouping by a two variables: .. plot:: :context: close-figs >>> ax = sns.barplot(x="day", y="total_bill", hue="sex", data=tips) Draw a set of horizontal bars: .. plot:: :context: close-figs >>> ax = sns.barplot(x="tip", y="day", data=tips) Control bar order by passing an explicit order: .. plot:: :context: close-figs >>> ax = sns.barplot(x="time", y="tip", data=tips, ... order=["Dinner", "Lunch"]) Use median as the estimate of central tendency: .. plot:: :context: close-figs >>> from numpy import median >>> ax = sns.barplot(x="day", y="tip", data=tips, estimator=median) Show the standard error of the mean with the error bars: .. plot:: :context: close-figs >>> ax = sns.barplot(x="day", y="tip", data=tips, ci=68) Show standard deviation of observations instead of a confidence interval: .. plot:: :context: close-figs >>> ax = sns.barplot(x="day", y="tip", data=tips, ci="sd") Add "caps" to the error bars: .. plot:: :context: close-figs >>> ax = sns.barplot(x="day", y="tip", data=tips, capsize=.2) Use a different color palette for the bars: .. plot:: :context: close-figs >>> ax = sns.barplot("size", y="total_bill", data=tips, ... palette="Blues_d") Use ``hue`` without changing bar position or width: .. plot:: :context: close-figs >>> tips["weekend"] = tips["day"].isin(["Sat", "Sun"]) >>> ax = sns.barplot(x="day", y="total_bill", hue="weekend", ... data=tips, dodge=False) Plot all bars in a single color: .. plot:: :context: close-figs >>> ax = sns.barplot("size", y="total_bill", data=tips, ... color="salmon", saturation=.5) Use :meth:`matplotlib.axes.Axes.bar` parameters to control the style. .. plot:: :context: close-figs >>> ax = sns.barplot("day", "total_bill", data=tips, ... linewidth=2.5, facecolor=(1, 1, 1, 0), ... errcolor=".2", edgecolor=".2") Use :func:`catplot` to combine a :func:`barplot` and a :class:`FacetGrid`. This allows grouping within additional categorical variables. Using :func:`catplot` is safer than using :class:`FacetGrid` directly, as it ensures synchronization of variable order across facets: .. plot:: :context: close-figs >>> g = sns.catplot(x="sex", y="total_bill", ... hue="smoker", col="time", ... data=tips, kind="bar", ... 
height=4, aspect=.7); """).format(**_categorical_docs) def pointplot(x=None, y=None, hue=None, data=None, order=None, hue_order=None, estimator=np.mean, ci=95, n_boot=1000, units=None, seed=None, markers="o", linestyles="-", dodge=False, join=True, scale=1, orient=None, color=None, palette=None, errwidth=None, capsize=None, ax=None, **kwargs): plotter = _PointPlotter(x, y, hue, data, order, hue_order, estimator, ci, n_boot, units, seed, markers, linestyles, dodge, join, scale, orient, color, palette, errwidth, capsize) if ax is None: ax = plt.gca() plotter.plot(ax) return ax pointplot.__doc__ = dedent("""\ Show point estimates and confidence intervals using scatter plot glyphs. A point plot represents an estimate of central tendency for a numeric variable by the position of scatter plot points and provides some indication of the uncertainty around that estimate using error bars. Point plots can be more useful than bar plots for focusing comparisons between different levels of one or more categorical variables. They are particularly adept at showing interactions: how the relationship between levels of one categorical variable changes across levels of a second categorical variable. The lines that join each point from the same ``hue`` level allow interactions to be judged by differences in slope, which is easier for the eyes than comparing the heights of several groups of points or bars. It is important to keep in mind that a point plot shows only the mean (or other estimator) value, but in many cases it may be more informative to show the distribution of values at each level of the categorical variables. In that case, other approaches such as a box or violin plot may be more appropriate. {main_api_narrative} {categorical_narrative} Parameters ---------- {input_params} {categorical_data} {order_vars} {stat_api_params} markers : string or list of strings, optional Markers to use for each of the ``hue`` levels. linestyles : string or list of strings, optional Line styles to use for each of the ``hue`` levels. dodge : bool or float, optional Amount to separate the points for each level of the ``hue`` variable along the categorical axis. join : bool, optional If ``True``, lines will be drawn between point estimates at the same ``hue`` level. scale : float, optional Scale factor for the plot elements. {orient} {color} {palette} {errwidth} {capsize} {ax_in} Returns ------- {ax_out} See Also -------- {barplot} {catplot} Examples -------- Draw a set of vertical point plots grouped by a categorical variable: .. plot:: :context: close-figs >>> import seaborn as sns >>> sns.set(style="darkgrid") >>> tips = sns.load_dataset("tips") >>> ax = sns.pointplot(x="time", y="total_bill", data=tips) Draw a set of vertical points with nested grouping by a two variables: .. plot:: :context: close-figs >>> ax = sns.pointplot(x="time", y="total_bill", hue="smoker", ... data=tips) Separate the points for different hue levels along the categorical axis: .. plot:: :context: close-figs >>> ax = sns.pointplot(x="time", y="total_bill", hue="smoker", ... data=tips, dodge=True) Use a different marker and line style for the hue levels: .. plot:: :context: close-figs >>> ax = sns.pointplot(x="time", y="total_bill", hue="smoker", ... data=tips, ... markers=["o", "x"], ... linestyles=["-", "--"]) Draw a set of horizontal points: .. plot:: :context: close-figs >>> ax = sns.pointplot(x="tip", y="day", data=tips) Don't draw a line connecting each point: .. 
plot:: :context: close-figs >>> ax = sns.pointplot(x="tip", y="day", data=tips, join=False) Use a different color for a single-layer plot: .. plot:: :context: close-figs >>> ax = sns.pointplot("time", y="total_bill", data=tips, ... color="#bb3f3f") Use a different color palette for the points: .. plot:: :context: close-figs >>> ax = sns.pointplot(x="time", y="total_bill", hue="smoker", ... data=tips, palette="Set2") Control point order by passing an explicit order: .. plot:: :context: close-figs >>> ax = sns.pointplot(x="time", y="tip", data=tips, ... order=["Dinner", "Lunch"]) Use median as the estimate of central tendency: .. plot:: :context: close-figs >>> from numpy import median >>> ax = sns.pointplot(x="day", y="tip", data=tips, estimator=median) Show the standard error of the mean with the error bars: .. plot:: :context: close-figs >>> ax = sns.pointplot(x="day", y="tip", data=tips, ci=68) Show standard deviation of observations instead of a confidence interval: .. plot:: :context: close-figs >>> ax = sns.pointplot(x="day", y="tip", data=tips, ci="sd") Add "caps" to the error bars: .. plot:: :context: close-figs >>> ax = sns.pointplot(x="day", y="tip", data=tips, capsize=.2) Use :func:`catplot` to combine a :func:`pointplot` and a :class:`FacetGrid`. This allows grouping within additional categorical variables. Using :func:`catplot` is safer than using :class:`FacetGrid` directly, as it ensures synchronization of variable order across facets: .. plot:: :context: close-figs >>> g = sns.catplot(x="sex", y="total_bill", ... hue="smoker", col="time", ... data=tips, kind="point", ... dodge=True, ... height=4, aspect=.7); """).format(**_categorical_docs) def countplot(x=None, y=None, hue=None, data=None, order=None, hue_order=None, orient=None, color=None, palette=None, saturation=.75, dodge=True, ax=None, **kwargs): estimator = len ci = None n_boot = 0 units = None seed = None errcolor = None errwidth = None capsize = None if x is None and y is not None: orient = "h" x = y elif y is None and x is not None: orient = "v" y = x elif x is not None and y is not None: raise TypeError("Cannot pass values for both `x` and `y`") else: raise TypeError("Must pass values for either `x` or `y`") plotter = _BarPlotter(x, y, hue, data, order, hue_order, estimator, ci, n_boot, units, seed, orient, color, palette, saturation, errcolor, errwidth, capsize, dodge) plotter.value_label = "count" if ax is None: ax = plt.gca() plotter.plot(ax, kwargs) return ax countplot.__doc__ = dedent("""\ Show the counts of observations in each categorical bin using bars. A count plot can be thought of as a histogram across a categorical, instead of quantitative, variable. The basic API and options are identical to those for :func:`barplot`, so you can compare counts across nested variables. {main_api_narrative} {categorical_narrative} Parameters ---------- {input_params} {categorical_data} {order_vars} {orient} {color} {palette} {saturation} {dodge} {ax_in} kwargs : key, value mappings Other keyword arguments are passed through to :meth:`matplotlib.axes.Axes.bar`. Returns ------- {ax_out} See Also -------- {barplot} {catplot} Examples -------- Show value counts for a single categorical variable: .. plot:: :context: close-figs >>> import seaborn as sns >>> sns.set(style="darkgrid") >>> titanic = sns.load_dataset("titanic") >>> ax = sns.countplot(x="class", data=titanic) Show value counts for two categorical variables: .. 
plot:: :context: close-figs >>> ax = sns.countplot(x="class", hue="who", data=titanic) Plot the bars horizontally: .. plot:: :context: close-figs >>> ax = sns.countplot(y="class", hue="who", data=titanic) Use a different color palette: .. plot:: :context: close-figs >>> ax = sns.countplot(x="who", data=titanic, palette="Set3") Use :meth:`matplotlib.axes.Axes.bar` parameters to control the style. .. plot:: :context: close-figs >>> ax = sns.countplot(x="who", data=titanic, ... facecolor=(0, 0, 0, 0), ... linewidth=5, ... edgecolor=sns.color_palette("dark", 3)) Use :func:`catplot` to combine a :func:`countplot` and a :class:`FacetGrid`. This allows grouping within additional categorical variables. Using :func:`catplot` is safer than using :class:`FacetGrid` directly, as it ensures synchronization of variable order across facets: .. plot:: :context: close-figs >>> g = sns.catplot(x="class", hue="who", col="survived", ... data=titanic, kind="count", ... height=4, aspect=.7); """).format(**_categorical_docs) def factorplot(*args, **kwargs): """Deprecated; please use `catplot` instead.""" msg = ( "The `factorplot` function has been renamed to `catplot`. The " "original name will be removed in a future release. Please update " "your code. Note that the default `kind` in `factorplot` (`'point'`) " "has changed `'strip'` in `catplot`." ) warnings.warn(msg) if "size" in kwargs: kwargs["height"] = kwargs.pop("size") msg = ("The `size` parameter has been renamed to `height`; " "please update your code.") warnings.warn(msg, UserWarning) kwargs.setdefault("kind", "point") return catplot(*args, **kwargs) def catplot(x=None, y=None, hue=None, data=None, row=None, col=None, col_wrap=None, estimator=np.mean, ci=95, n_boot=1000, units=None, seed=None, order=None, hue_order=None, row_order=None, col_order=None, kind="strip", height=5, aspect=1, orient=None, color=None, palette=None, legend=True, legend_out=True, sharex=True, sharey=True, margin_titles=False, facet_kws=None, **kwargs): # Handle deprecations if "size" in kwargs: height = kwargs.pop("size") msg = ("The `size` parameter has been renamed to `height`; " "please update your code.") warnings.warn(msg, UserWarning) # Determine the plotting function try: plot_func = globals()[kind + "plot"] except KeyError: err = "Plot kind '{}' is not recognized".format(kind) raise ValueError(err) # Alias the input variables to determine categorical order and palette # correctly in the case of a count plot if kind == "count": if x is None and y is not None: x_, y_, orient = y, y, "h" elif y is None and x is not None: x_, y_, orient = x, x, "v" else: raise ValueError("Either `x` or `y` must be None for count plots") else: x_, y_ = x, y # Check for attempt to plot onto specific axes and warn if "ax" in kwargs: msg = ("catplot is a figure-level function and does not accept " "target axes. 
You may wish to try {}".format(kind + "plot")) warnings.warn(msg, UserWarning) kwargs.pop("ax") # Determine the order for the whole dataset, which will be used in all # facets to ensure representation of all data in the final plot p = _CategoricalPlotter() p.establish_variables(x_, y_, hue, data, orient, order, hue_order) order = p.group_names hue_order = p.hue_names # Determine the palette to use # (FacetGrid will pass a value for ``color`` to the plotting function # so we need to define ``palette`` to get default behavior for the # categorical functions) p.establish_colors(color, palette, 1) if kind != "point" or hue is not None: palette = p.colors # Determine keyword arguments for the facets facet_kws = {} if facet_kws is None else facet_kws facet_kws.update( data=data, row=row, col=col, row_order=row_order, col_order=col_order, col_wrap=col_wrap, height=height, aspect=aspect, sharex=sharex, sharey=sharey, legend_out=legend_out, margin_titles=margin_titles, dropna=False, ) # Determine keyword arguments for the plotting function plot_kws = dict( order=order, hue_order=hue_order, orient=orient, color=color, palette=palette, ) plot_kws.update(kwargs) if kind in ["bar", "point"]: plot_kws.update( estimator=estimator, ci=ci, n_boot=n_boot, units=units, seed=seed, ) # Initialize the facets g = FacetGrid(**facet_kws) # Draw the plot onto the facets g.map_dataframe(plot_func, x, y, hue, **plot_kws) # Special case axis labels for a count type plot if kind == "count": if x is None: g.set_axis_labels(x_var="count") if y is None: g.set_axis_labels(y_var="count") if legend and (hue is not None) and (hue not in [x, row, col]): hue_order = list(map(utils.to_utf8, hue_order)) g.add_legend(title=hue, label_order=hue_order) return g catplot.__doc__ = dedent("""\ Figure-level interface for drawing categorical plots onto a :class:`FacetGrid`. This function provides access to several axes-level functions that show the relationship between a numerical and one or more categorical variables using one of several visual representations. The ``kind`` parameter selects the underlying axes-level function to use: Categorical scatterplots: - :func:`stripplot` (with ``kind="strip"``; the default) - :func:`swarmplot` (with ``kind="swarm"``) Categorical distribution plots: - :func:`boxplot` (with ``kind="box"``) - :func:`violinplot` (with ``kind="violin"``) - :func:`boxenplot` (with ``kind="boxen"``) Categorical estimate plots: - :func:`pointplot` (with ``kind="point"``) - :func:`barplot` (with ``kind="bar"``) - :func:`countplot` (with ``kind="count"``) Extra keyword arguments are passed to the underlying function, so you should refer to the documentation for each to see kind-specific options. Note that unlike when using the axes-level functions directly, data must be passed in a long-form DataFrame with variables specified by passing strings to ``x``, ``y``, ``hue``, etc. As in the case with the underlying plot functions, if variables have a ``categorical`` data type, the levels of the categorical variables and their order will be inferred from the objects. Otherwise you may have to alter the dataframe sorting or use the function parameters (``orient``, ``order``, ``hue_order``, etc.) to set up the plot correctly. {categorical_narrative} After plotting, the :class:`FacetGrid` with the plot is returned and can be used directly to tweak supporting plot details or add other layers.
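For example, a minimal, illustrative sketch (it assumes the bundled ``tips`` example dataset is available via ``load_dataset``; the column names and axis labels below are only for demonstration): .. plot::
    :context: close-figs

    >>> import seaborn as sns
    >>> tips = sns.load_dataset("tips")
    >>> g = sns.catplot(x="day", y="total_bill", kind="box", data=tips)
    >>> g = g.set_axis_labels("Day of week", "Total bill")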
Parameters ---------- {string_input_params} {long_form_data} row, col : names of variables in ``data``, optional Categorical variables that will determine the faceting of the grid. {col_wrap} {stat_api_params} {order_vars} row_order, col_order : lists of strings, optional Order to organize the rows and/or columns of the grid in, otherwise the orders are inferred from the data objects. kind : string, optional The kind of plot to draw (corresponds to the name of a categorical plotting function. Options are: "point", "bar", "strip", "swarm", "box", "violin", or "boxen". {height} {aspect} {orient} {color} {palette} legend : bool, optional If ``True`` and there is a ``hue`` variable, draw a legend on the plot. {legend_out} {share_xy} {margin_titles} facet_kws : dict, optional Dictionary of other keyword arguments to pass to :class:`FacetGrid`. kwargs : key, value pairings Other keyword arguments are passed through to the underlying plotting function. Returns ------- g : :class:`FacetGrid` Returns the :class:`FacetGrid` object with the plot on it for further tweaking. Examples -------- Draw a single facet to use the :class:`FacetGrid` legend placement: .. plot:: :context: close-figs >>> import seaborn as sns >>> sns.set(style="ticks") >>> exercise = sns.load_dataset("exercise") >>> g = sns.catplot(x="time", y="pulse", hue="kind", data=exercise) Use a different plot kind to visualize the same data: .. plot:: :context: close-figs >>> g = sns.catplot(x="time", y="pulse", hue="kind", ... data=exercise, kind="violin") Facet along the columns to show a third categorical variable: .. plot:: :context: close-figs >>> g = sns.catplot(x="time", y="pulse", hue="kind", ... col="diet", data=exercise) Use a different height and aspect ratio for the facets: .. plot:: :context: close-figs >>> g = sns.catplot(x="time", y="pulse", hue="kind", ... col="diet", data=exercise, ... height=5, aspect=.8) Make many column facets and wrap them into the rows of the grid: .. plot:: :context: close-figs >>> titanic = sns.load_dataset("titanic") >>> g = sns.catplot("alive", col="deck", col_wrap=4, ... data=titanic[titanic.deck.notnull()], ... kind="count", height=2.5, aspect=.8) Plot horizontally and pass other keyword arguments to the plot function: .. plot:: :context: close-figs >>> g = sns.catplot(x="age", y="embark_town", ... hue="sex", row="class", ... data=titanic[titanic.embark_town.notnull()], ... orient="h", height=2, aspect=3, palette="Set3", ... kind="violin", dodge=True, cut=0, bw=.2) Use methods on the returned :class:`FacetGrid` to tweak the presentation: .. plot:: :context: close-figs >>> g = sns.catplot(x="who", y="survived", col="class", ... data=titanic, saturation=.5, ... kind="bar", ci=None, aspect=.6) >>> (g.set_axis_labels("", "Survival Rate") ... .set_xticklabels(["Men", "Women", "Children"]) ... .set_titles("{{col_name}} {{col_var}}") ... .set(ylim=(0, 1)) ... 
.despine(left=True)) #doctest: +ELLIPSIS """).format(**_categorical_docs) seaborn-0.10.0/seaborn/cm.py000066400000000000000000001270231361256634400156420ustar00rootroot00000000000000from matplotlib import colors, cm as mpl_cm _rocket_lut = [ [ 0.01060815, 0.01808215, 0.10018654], [ 0.01428972, 0.02048237, 0.10374486], [ 0.01831941, 0.0229766 , 0.10738511], [ 0.02275049, 0.02554464, 0.11108639], [ 0.02759119, 0.02818316, 0.11483751], [ 0.03285175, 0.03088792, 0.11863035], [ 0.03853466, 0.03365771, 0.12245873], [ 0.04447016, 0.03648425, 0.12631831], [ 0.05032105, 0.03936808, 0.13020508], [ 0.05611171, 0.04224835, 0.13411624], [ 0.0618531 , 0.04504866, 0.13804929], [ 0.06755457, 0.04778179, 0.14200206], [ 0.0732236 , 0.05045047, 0.14597263], [ 0.0788708 , 0.05305461, 0.14995981], [ 0.08450105, 0.05559631, 0.15396203], [ 0.09011319, 0.05808059, 0.15797687], [ 0.09572396, 0.06050127, 0.16200507], [ 0.10132312, 0.06286782, 0.16604287], [ 0.10692823, 0.06517224, 0.17009175], [ 0.1125315 , 0.06742194, 0.17414848], [ 0.11813947, 0.06961499, 0.17821272], [ 0.12375803, 0.07174938, 0.18228425], [ 0.12938228, 0.07383015, 0.18636053], [ 0.13501631, 0.07585609, 0.19044109], [ 0.14066867, 0.0778224 , 0.19452676], [ 0.14633406, 0.07973393, 0.1986151 ], [ 0.15201338, 0.08159108, 0.20270523], [ 0.15770877, 0.08339312, 0.20679668], [ 0.16342174, 0.0851396 , 0.21088893], [ 0.16915387, 0.08682996, 0.21498104], [ 0.17489524, 0.08848235, 0.2190294 ], [ 0.18065495, 0.09009031, 0.22303512], [ 0.18643324, 0.09165431, 0.22699705], [ 0.19223028, 0.09317479, 0.23091409], [ 0.19804623, 0.09465217, 0.23478512], [ 0.20388117, 0.09608689, 0.23860907], [ 0.20973515, 0.09747934, 0.24238489], [ 0.21560818, 0.09882993, 0.24611154], [ 0.22150014, 0.10013944, 0.2497868 ], [ 0.22741085, 0.10140876, 0.25340813], [ 0.23334047, 0.10263737, 0.25697736], [ 0.23928891, 0.10382562, 0.2604936 ], [ 0.24525608, 0.10497384, 0.26395596], [ 0.25124182, 0.10608236, 0.26736359], [ 0.25724602, 0.10715148, 0.27071569], [ 0.26326851, 0.1081815 , 0.27401148], [ 0.26930915, 0.1091727 , 0.2772502 ], [ 0.27536766, 0.11012568, 0.28043021], [ 0.28144375, 0.11104133, 0.2835489 ], [ 0.2875374 , 0.11191896, 0.28660853], [ 0.29364846, 0.11275876, 0.2896085 ], [ 0.29977678, 0.11356089, 0.29254823], [ 0.30592213, 0.11432553, 0.29542718], [ 0.31208435, 0.11505284, 0.29824485], [ 0.31826327, 0.1157429 , 0.30100076], [ 0.32445869, 0.11639585, 0.30369448], [ 0.33067031, 0.11701189, 0.30632563], [ 0.33689808, 0.11759095, 0.3088938 ], [ 0.34314168, 0.11813362, 0.31139721], [ 0.34940101, 0.11863987, 0.3138355 ], [ 0.355676 , 0.11910909, 0.31620996], [ 0.36196644, 0.1195413 , 0.31852037], [ 0.36827206, 0.11993653, 0.32076656], [ 0.37459292, 0.12029443, 0.32294825], [ 0.38092887, 0.12061482, 0.32506528], [ 0.38727975, 0.12089756, 0.3271175 ], [ 0.39364518, 0.12114272, 0.32910494], [ 0.40002537, 0.12134964, 0.33102734], [ 0.40642019, 0.12151801, 0.33288464], [ 0.41282936, 0.12164769, 0.33467689], [ 0.41925278, 0.12173833, 0.33640407], [ 0.42569057, 0.12178916, 0.33806605], [ 0.43214263, 0.12179973, 0.33966284], [ 0.43860848, 0.12177004, 0.34119475], [ 0.44508855, 0.12169883, 0.34266151], [ 0.45158266, 0.12158557, 0.34406324], [ 0.45809049, 0.12142996, 0.34540024], [ 0.46461238, 0.12123063, 0.34667231], [ 0.47114798, 0.12098721, 0.34787978], [ 0.47769736, 0.12069864, 0.34902273], [ 0.48426077, 0.12036349, 0.35010104], [ 0.49083761, 0.11998161, 0.35111537], [ 0.49742847, 0.11955087, 0.35206533], [ 0.50403286, 0.11907081, 0.35295152], [ 0.51065109, 0.11853959, 
0.35377385], [ 0.51728314, 0.1179558 , 0.35453252], [ 0.52392883, 0.11731817, 0.35522789], [ 0.53058853, 0.11662445, 0.35585982], [ 0.53726173, 0.11587369, 0.35642903], [ 0.54394898, 0.11506307, 0.35693521], [ 0.5506426 , 0.11420757, 0.35737863], [ 0.55734473, 0.11330456, 0.35775059], [ 0.56405586, 0.11235265, 0.35804813], [ 0.57077365, 0.11135597, 0.35827146], [ 0.5774991 , 0.11031233, 0.35841679], [ 0.58422945, 0.10922707, 0.35848469], [ 0.59096382, 0.10810205, 0.35847347], [ 0.59770215, 0.10693774, 0.35838029], [ 0.60444226, 0.10573912, 0.35820487], [ 0.61118304, 0.10450943, 0.35794557], [ 0.61792306, 0.10325288, 0.35760108], [ 0.62466162, 0.10197244, 0.35716891], [ 0.63139686, 0.10067417, 0.35664819], [ 0.63812122, 0.09938212, 0.35603757], [ 0.64483795, 0.0980891 , 0.35533555], [ 0.65154562, 0.09680192, 0.35454107], [ 0.65824241, 0.09552918, 0.3536529 ], [ 0.66492652, 0.09428017, 0.3526697 ], [ 0.67159578, 0.09306598, 0.35159077], [ 0.67824099, 0.09192342, 0.3504148 ], [ 0.684863 , 0.09085633, 0.34914061], [ 0.69146268, 0.0898675 , 0.34776864], [ 0.69803757, 0.08897226, 0.3462986 ], [ 0.70457834, 0.0882129 , 0.34473046], [ 0.71108138, 0.08761223, 0.3430635 ], [ 0.7175507 , 0.08716212, 0.34129974], [ 0.72398193, 0.08688725, 0.33943958], [ 0.73035829, 0.0868623 , 0.33748452], [ 0.73669146, 0.08704683, 0.33543669], [ 0.74297501, 0.08747196, 0.33329799], [ 0.74919318, 0.08820542, 0.33107204], [ 0.75535825, 0.08919792, 0.32876184], [ 0.76145589, 0.09050716, 0.32637117], [ 0.76748424, 0.09213602, 0.32390525], [ 0.77344838, 0.09405684, 0.32136808], [ 0.77932641, 0.09634794, 0.31876642], [ 0.78513609, 0.09892473, 0.31610488], [ 0.79085854, 0.10184672, 0.313391 ], [ 0.7965014 , 0.10506637, 0.31063031], [ 0.80205987, 0.10858333, 0.30783 ], [ 0.80752799, 0.11239964, 0.30499738], [ 0.81291606, 0.11645784, 0.30213802], [ 0.81820481, 0.12080606, 0.29926105], [ 0.82341472, 0.12535343, 0.2963705 ], [ 0.82852822, 0.13014118, 0.29347474], [ 0.83355779, 0.13511035, 0.29057852], [ 0.83850183, 0.14025098, 0.2876878 ], [ 0.84335441, 0.14556683, 0.28480819], [ 0.84813096, 0.15099892, 0.281943 ], [ 0.85281737, 0.15657772, 0.27909826], [ 0.85742602, 0.1622583 , 0.27627462], [ 0.86196552, 0.16801239, 0.27346473], [ 0.86641628, 0.17387796, 0.27070818], [ 0.87079129, 0.17982114, 0.26797378], [ 0.87507281, 0.18587368, 0.26529697], [ 0.87925878, 0.19203259, 0.26268136], [ 0.8833417 , 0.19830556, 0.26014181], [ 0.88731387, 0.20469941, 0.25769539], [ 0.89116859, 0.21121788, 0.2553592 ], [ 0.89490337, 0.21785614, 0.25314362], [ 0.8985026 , 0.22463251, 0.25108745], [ 0.90197527, 0.23152063, 0.24918223], [ 0.90530097, 0.23854541, 0.24748098], [ 0.90848638, 0.24568473, 0.24598324], [ 0.911533 , 0.25292623, 0.24470258], [ 0.9144225 , 0.26028902, 0.24369359], [ 0.91717106, 0.26773821, 0.24294137], [ 0.91978131, 0.27526191, 0.24245973], [ 0.92223947, 0.28287251, 0.24229568], [ 0.92456587, 0.29053388, 0.24242622], [ 0.92676657, 0.29823282, 0.24285536], [ 0.92882964, 0.30598085, 0.24362274], [ 0.93078135, 0.31373977, 0.24468803], [ 0.93262051, 0.3215093 , 0.24606461], [ 0.93435067, 0.32928362, 0.24775328], [ 0.93599076, 0.33703942, 0.24972157], [ 0.93752831, 0.34479177, 0.25199928], [ 0.93899289, 0.35250734, 0.25452808], [ 0.94036561, 0.36020899, 0.25734661], [ 0.94167588, 0.36786594, 0.2603949 ], [ 0.94291042, 0.37549479, 0.26369821], [ 0.94408513, 0.3830811 , 0.26722004], [ 0.94520419, 0.39062329, 0.27094924], [ 0.94625977, 0.39813168, 0.27489742], [ 0.94727016, 0.4055909 , 0.27902322], [ 0.94823505, 0.41300424, 
0.28332283], [ 0.94914549, 0.42038251, 0.28780969], [ 0.95001704, 0.42771398, 0.29244728], [ 0.95085121, 0.43500005, 0.29722817], [ 0.95165009, 0.44224144, 0.30214494], [ 0.9524044 , 0.44944853, 0.3072105 ], [ 0.95312556, 0.45661389, 0.31239776], [ 0.95381595, 0.46373781, 0.31769923], [ 0.95447591, 0.47082238, 0.32310953], [ 0.95510255, 0.47787236, 0.32862553], [ 0.95569679, 0.48489115, 0.33421404], [ 0.95626788, 0.49187351, 0.33985601], [ 0.95681685, 0.49882008, 0.34555431], [ 0.9573439 , 0.50573243, 0.35130912], [ 0.95784842, 0.51261283, 0.35711942], [ 0.95833051, 0.51946267, 0.36298589], [ 0.95879054, 0.52628305, 0.36890904], [ 0.95922872, 0.53307513, 0.3748895 ], [ 0.95964538, 0.53983991, 0.38092784], [ 0.96004345, 0.54657593, 0.3870292 ], [ 0.96042097, 0.55328624, 0.39319057], [ 0.96077819, 0.55997184, 0.39941173], [ 0.9611152 , 0.5666337 , 0.40569343], [ 0.96143273, 0.57327231, 0.41203603], [ 0.96173392, 0.57988594, 0.41844491], [ 0.96201757, 0.58647675, 0.42491751], [ 0.96228344, 0.59304598, 0.43145271], [ 0.96253168, 0.5995944 , 0.43805131], [ 0.96276513, 0.60612062, 0.44471698], [ 0.96298491, 0.6126247 , 0.45145074], [ 0.96318967, 0.61910879, 0.45824902], [ 0.96337949, 0.6255736 , 0.46511271], [ 0.96355923, 0.63201624, 0.47204746], [ 0.96372785, 0.63843852, 0.47905028], [ 0.96388426, 0.64484214, 0.4861196 ], [ 0.96403203, 0.65122535, 0.4932578 ], [ 0.96417332, 0.65758729, 0.50046894], [ 0.9643063 , 0.66393045, 0.5077467 ], [ 0.96443322, 0.67025402, 0.51509334], [ 0.96455845, 0.67655564, 0.52251447], [ 0.96467922, 0.68283846, 0.53000231], [ 0.96479861, 0.68910113, 0.53756026], [ 0.96492035, 0.69534192, 0.5451917 ], [ 0.96504223, 0.7015636 , 0.5528892 ], [ 0.96516917, 0.70776351, 0.5606593 ], [ 0.96530224, 0.71394212, 0.56849894], [ 0.96544032, 0.72010124, 0.57640375], [ 0.96559206, 0.72623592, 0.58438387], [ 0.96575293, 0.73235058, 0.59242739], [ 0.96592829, 0.73844258, 0.60053991], [ 0.96612013, 0.74451182, 0.60871954], [ 0.96632832, 0.75055966, 0.61696136], [ 0.96656022, 0.75658231, 0.62527295], [ 0.96681185, 0.76258381, 0.63364277], [ 0.96709183, 0.76855969, 0.64207921], [ 0.96739773, 0.77451297, 0.65057302], [ 0.96773482, 0.78044149, 0.65912731], [ 0.96810471, 0.78634563, 0.66773889], [ 0.96850919, 0.79222565, 0.6764046 ], [ 0.96893132, 0.79809112, 0.68512266], [ 0.96935926, 0.80395415, 0.69383201], [ 0.9698028 , 0.80981139, 0.70252255], [ 0.97025511, 0.81566605, 0.71120296], [ 0.97071849, 0.82151775, 0.71987163], [ 0.97120159, 0.82736371, 0.72851999], [ 0.97169389, 0.83320847, 0.73716071], [ 0.97220061, 0.83905052, 0.74578903], [ 0.97272597, 0.84488881, 0.75440141], [ 0.97327085, 0.85072354, 0.76299805], [ 0.97383206, 0.85655639, 0.77158353], [ 0.97441222, 0.86238689, 0.78015619], [ 0.97501782, 0.86821321, 0.78871034], [ 0.97564391, 0.87403763, 0.79725261], [ 0.97628674, 0.87986189, 0.8057883 ], [ 0.97696114, 0.88568129, 0.81430324], [ 0.97765722, 0.89149971, 0.82280948], [ 0.97837585, 0.89731727, 0.83130786], [ 0.97912374, 0.90313207, 0.83979337], [ 0.979891 , 0.90894778, 0.84827858], [ 0.98067764, 0.91476465, 0.85676611], [ 0.98137749, 0.92061729, 0.86536915] ] _mako_lut = [ [ 0.04503935, 0.01482344, 0.02092227], [ 0.04933018, 0.01709292, 0.02535719], [ 0.05356262, 0.01950702, 0.03018802], [ 0.05774337, 0.02205989, 0.03545515], [ 0.06188095, 0.02474764, 0.04115287], [ 0.06598247, 0.0275665 , 0.04691409], [ 0.07005374, 0.03051278, 0.05264306], [ 0.07409947, 0.03358324, 0.05834631], [ 0.07812339, 0.03677446, 0.06403249], [ 0.08212852, 0.0400833 , 0.06970862], [ 0.08611731, 
0.04339148, 0.07538208], [ 0.09009161, 0.04664706, 0.08105568], [ 0.09405308, 0.04985685, 0.08673591], [ 0.09800301, 0.05302279, 0.09242646], [ 0.10194255, 0.05614641, 0.09813162], [ 0.10587261, 0.05922941, 0.103854 ], [ 0.1097942 , 0.06227277, 0.10959847], [ 0.11370826, 0.06527747, 0.11536893], [ 0.11761516, 0.06824548, 0.12116393], [ 0.12151575, 0.07117741, 0.12698763], [ 0.12541095, 0.07407363, 0.1328442 ], [ 0.12930083, 0.07693611, 0.13873064], [ 0.13317849, 0.07976988, 0.14465095], [ 0.13701138, 0.08259683, 0.15060265], [ 0.14079223, 0.08542126, 0.15659379], [ 0.14452486, 0.08824175, 0.16262484], [ 0.14820351, 0.09106304, 0.16869476], [ 0.15183185, 0.09388372, 0.17480366], [ 0.15540398, 0.09670855, 0.18094993], [ 0.15892417, 0.09953561, 0.18713384], [ 0.16238588, 0.10236998, 0.19335329], [ 0.16579435, 0.10520905, 0.19960847], [ 0.16914226, 0.10805832, 0.20589698], [ 0.17243586, 0.11091443, 0.21221911], [ 0.17566717, 0.11378321, 0.21857219], [ 0.17884322, 0.11666074, 0.2249565 ], [ 0.18195582, 0.11955283, 0.23136943], [ 0.18501213, 0.12245547, 0.23781116], [ 0.18800459, 0.12537395, 0.24427914], [ 0.19093944, 0.1283047 , 0.25077369], [ 0.19381092, 0.13125179, 0.25729255], [ 0.19662307, 0.13421303, 0.26383543], [ 0.19937337, 0.13719028, 0.27040111], [ 0.20206187, 0.14018372, 0.27698891], [ 0.20469116, 0.14319196, 0.28359861], [ 0.20725547, 0.14621882, 0.29022775], [ 0.20976258, 0.14925954, 0.29687795], [ 0.21220409, 0.15231929, 0.30354703], [ 0.21458611, 0.15539445, 0.31023563], [ 0.21690827, 0.15848519, 0.31694355], [ 0.21916481, 0.16159489, 0.32366939], [ 0.2213631 , 0.16471913, 0.33041431], [ 0.22349947, 0.1678599 , 0.33717781], [ 0.2255714 , 0.1710185 , 0.34395925], [ 0.22758415, 0.17419169, 0.35075983], [ 0.22953569, 0.17738041, 0.35757941], [ 0.23142077, 0.18058733, 0.3644173 ], [ 0.2332454 , 0.18380872, 0.37127514], [ 0.2350092 , 0.18704459, 0.3781528 ], [ 0.23670785, 0.190297 , 0.38504973], [ 0.23834119, 0.19356547, 0.39196711], [ 0.23991189, 0.19684817, 0.39890581], [ 0.24141903, 0.20014508, 0.4058667 ], [ 0.24286214, 0.20345642, 0.4128484 ], [ 0.24423453, 0.20678459, 0.41985299], [ 0.24554109, 0.21012669, 0.42688124], [ 0.2467815 , 0.21348266, 0.43393244], [ 0.24795393, 0.21685249, 0.4410088 ], [ 0.24905614, 0.22023618, 0.448113 ], [ 0.25007383, 0.22365053, 0.45519562], [ 0.25098926, 0.22710664, 0.46223892], [ 0.25179696, 0.23060342, 0.46925447], [ 0.25249346, 0.23414353, 0.47623196], [ 0.25307401, 0.23772973, 0.48316271], [ 0.25353152, 0.24136961, 0.49001976], [ 0.25386167, 0.24506548, 0.49679407], [ 0.25406082, 0.2488164 , 0.50348932], [ 0.25412435, 0.25262843, 0.51007843], [ 0.25404842, 0.25650743, 0.51653282], [ 0.25383134, 0.26044852, 0.52286845], [ 0.2534705 , 0.26446165, 0.52903422], [ 0.25296722, 0.2685428 , 0.53503572], [ 0.2523226 , 0.27269346, 0.54085315], [ 0.25153974, 0.27691629, 0.54645752], [ 0.25062402, 0.28120467, 0.55185939], [ 0.24958205, 0.28556371, 0.55701246], [ 0.24842386, 0.28998148, 0.56194601], [ 0.24715928, 0.29446327, 0.56660884], [ 0.24580099, 0.29899398, 0.57104399], [ 0.24436202, 0.30357852, 0.57519929], [ 0.24285591, 0.30819938, 0.57913247], [ 0.24129828, 0.31286235, 0.58278615], [ 0.23970131, 0.3175495 , 0.5862272 ], [ 0.23807973, 0.32226344, 0.58941872], [ 0.23644557, 0.32699241, 0.59240198], [ 0.2348113 , 0.33173196, 0.59518282], [ 0.23318874, 0.33648036, 0.59775543], [ 0.2315855 , 0.34122763, 0.60016456], [ 0.23001121, 0.34597357, 0.60240251], [ 0.2284748 , 0.35071512, 0.6044784 ], [ 0.22698081, 0.35544612, 0.60642528], [ 0.22553305, 
0.36016515, 0.60825252], [ 0.22413977, 0.36487341, 0.60994938], [ 0.22280246, 0.36956728, 0.61154118], [ 0.22152555, 0.37424409, 0.61304472], [ 0.22030752, 0.37890437, 0.61446646], [ 0.2191538 , 0.38354668, 0.61581561], [ 0.21806257, 0.38817169, 0.61709794], [ 0.21703799, 0.39277882, 0.61831922], [ 0.21607792, 0.39736958, 0.61948028], [ 0.21518463, 0.40194196, 0.62059763], [ 0.21435467, 0.40649717, 0.62167507], [ 0.21358663, 0.41103579, 0.62271724], [ 0.21288172, 0.41555771, 0.62373011], [ 0.21223835, 0.42006355, 0.62471794], [ 0.21165312, 0.42455441, 0.62568371], [ 0.21112526, 0.42903064, 0.6266318 ], [ 0.21065161, 0.43349321, 0.62756504], [ 0.21023306, 0.43794288, 0.62848279], [ 0.20985996, 0.44238227, 0.62938329], [ 0.20951045, 0.44680966, 0.63030696], [ 0.20916709, 0.45122981, 0.63124483], [ 0.20882976, 0.45564335, 0.63219599], [ 0.20849798, 0.46005094, 0.63315928], [ 0.20817199, 0.46445309, 0.63413391], [ 0.20785149, 0.46885041, 0.63511876], [ 0.20753716, 0.47324327, 0.63611321], [ 0.20722876, 0.47763224, 0.63711608], [ 0.20692679, 0.48201774, 0.63812656], [ 0.20663156, 0.48640018, 0.63914367], [ 0.20634336, 0.49078002, 0.64016638], [ 0.20606303, 0.49515755, 0.6411939 ], [ 0.20578999, 0.49953341, 0.64222457], [ 0.20552612, 0.50390766, 0.64325811], [ 0.20527189, 0.50828072, 0.64429331], [ 0.20502868, 0.51265277, 0.64532947], [ 0.20479718, 0.51702417, 0.64636539], [ 0.20457804, 0.52139527, 0.64739979], [ 0.20437304, 0.52576622, 0.64843198], [ 0.20418396, 0.53013715, 0.64946117], [ 0.20401238, 0.53450825, 0.65048638], [ 0.20385896, 0.53887991, 0.65150606], [ 0.20372653, 0.54325208, 0.65251978], [ 0.20361709, 0.5476249 , 0.6535266 ], [ 0.20353258, 0.55199854, 0.65452542], [ 0.20347472, 0.55637318, 0.655515 ], [ 0.20344718, 0.56074869, 0.65649508], [ 0.20345161, 0.56512531, 0.65746419], [ 0.20349089, 0.56950304, 0.65842151], [ 0.20356842, 0.57388184, 0.65936642], [ 0.20368663, 0.57826181, 0.66029768], [ 0.20384884, 0.58264293, 0.6612145 ], [ 0.20405904, 0.58702506, 0.66211645], [ 0.20431921, 0.59140842, 0.66300179], [ 0.20463464, 0.59579264, 0.66387079], [ 0.20500731, 0.60017798, 0.66472159], [ 0.20544449, 0.60456387, 0.66555409], [ 0.20596097, 0.60894927, 0.66636568], [ 0.20654832, 0.61333521, 0.66715744], [ 0.20721003, 0.61772167, 0.66792838], [ 0.20795035, 0.62210845, 0.66867802], [ 0.20877302, 0.62649546, 0.66940555], [ 0.20968223, 0.63088252, 0.6701105 ], [ 0.21068163, 0.63526951, 0.67079211], [ 0.21177544, 0.63965621, 0.67145005], [ 0.21298582, 0.64404072, 0.67208182], [ 0.21430361, 0.64842404, 0.67268861], [ 0.21572716, 0.65280655, 0.67326978], [ 0.21726052, 0.65718791, 0.6738255 ], [ 0.21890636, 0.66156803, 0.67435491], [ 0.220668 , 0.66594665, 0.67485792], [ 0.22255447, 0.67032297, 0.67533374], [ 0.22458372, 0.67469531, 0.67578061], [ 0.22673713, 0.67906542, 0.67620044], [ 0.22901625, 0.6834332 , 0.67659251], [ 0.23142316, 0.68779836, 0.67695703], [ 0.23395924, 0.69216072, 0.67729378], [ 0.23663857, 0.69651881, 0.67760151], [ 0.23946645, 0.70087194, 0.67788018], [ 0.24242624, 0.70522162, 0.67813088], [ 0.24549008, 0.70957083, 0.67835215], [ 0.24863372, 0.71392166, 0.67854868], [ 0.25187832, 0.71827158, 0.67872193], [ 0.25524083, 0.72261873, 0.67887024], [ 0.25870947, 0.72696469, 0.67898912], [ 0.26229238, 0.73130855, 0.67907645], [ 0.26604085, 0.73564353, 0.67914062], [ 0.26993099, 0.73997282, 0.67917264], [ 0.27397488, 0.74429484, 0.67917096], [ 0.27822463, 0.74860229, 0.67914468], [ 0.28264201, 0.75290034, 0.67907959], [ 0.2873016 , 0.75717817, 0.67899164], [ 0.29215894, 
0.76144162, 0.67886578], [ 0.29729823, 0.76567816, 0.67871894], [ 0.30268199, 0.76989232, 0.67853896], [ 0.30835665, 0.77407636, 0.67833512], [ 0.31435139, 0.77822478, 0.67811118], [ 0.3206671 , 0.78233575, 0.67786729], [ 0.32733158, 0.78640315, 0.67761027], [ 0.33437168, 0.79042043, 0.67734882], [ 0.34182112, 0.79437948, 0.67709394], [ 0.34968889, 0.79827511, 0.67685638], [ 0.35799244, 0.80210037, 0.67664969], [ 0.36675371, 0.80584651, 0.67649539], [ 0.3759816 , 0.80950627, 0.67641393], [ 0.38566792, 0.81307432, 0.67642947], [ 0.39579804, 0.81654592, 0.67656899], [ 0.40634556, 0.81991799, 0.67686215], [ 0.41730243, 0.82318339, 0.67735255], [ 0.4285828 , 0.82635051, 0.6780564 ], [ 0.44012728, 0.82942353, 0.67900049], [ 0.45189421, 0.83240398, 0.68021733], [ 0.46378379, 0.83530763, 0.6817062 ], [ 0.47573199, 0.83814472, 0.68347352], [ 0.48769865, 0.84092197, 0.68552698], [ 0.49962354, 0.84365379, 0.68783929], [ 0.5114027 , 0.8463718 , 0.69029789], [ 0.52301693, 0.84908401, 0.69288545], [ 0.53447549, 0.85179048, 0.69561066], [ 0.54578602, 0.8544913 , 0.69848331], [ 0.55695565, 0.85718723, 0.70150427], [ 0.56798832, 0.85987893, 0.70468261], [ 0.57888639, 0.86256715, 0.70802931], [ 0.5896541 , 0.8652532 , 0.71154204], [ 0.60028928, 0.86793835, 0.71523675], [ 0.61079441, 0.87062438, 0.71910895], [ 0.62116633, 0.87331311, 0.72317003], [ 0.63140509, 0.87600675, 0.72741689], [ 0.64150735, 0.87870746, 0.73185717], [ 0.65147219, 0.8814179 , 0.73648495], [ 0.66129632, 0.8841403 , 0.74130658], [ 0.67097934, 0.88687758, 0.74631123], [ 0.68051833, 0.88963189, 0.75150483], [ 0.68991419, 0.89240612, 0.75687187], [ 0.69916533, 0.89520211, 0.76241714], [ 0.70827373, 0.89802257, 0.76812286], [ 0.71723995, 0.90086891, 0.77399039], [ 0.72606665, 0.90374337, 0.7800041 ], [ 0.73475675, 0.90664718, 0.78615802], [ 0.74331358, 0.90958151, 0.79244474], [ 0.75174143, 0.91254787, 0.79884925], [ 0.76004473, 0.91554656, 0.80536823], [ 0.76827704, 0.91856549, 0.81196513], [ 0.77647029, 0.921603 , 0.81855729], [ 0.78462009, 0.92466151, 0.82514119], [ 0.79273542, 0.92773848, 0.83172131], [ 0.8008109 , 0.93083672, 0.83829355], [ 0.80885107, 0.93395528, 0.84485982], [ 0.81685878, 0.9370938 , 0.85142101], [ 0.82483206, 0.94025378, 0.8579751 ], [ 0.83277661, 0.94343371, 0.86452477], [ 0.84069127, 0.94663473, 0.87106853], [ 0.84857662, 0.9498573 , 0.8776059 ], [ 0.8564431 , 0.95309792, 0.88414253], [ 0.86429066, 0.95635719, 0.89067759], [ 0.87218969, 0.95960708, 0.89725384] ] _vlag_lut = [ [ 0.13850039, 0.41331206, 0.74052025], [ 0.15077609, 0.41762684, 0.73970427], [ 0.16235219, 0.4219191 , 0.7389667 ], [ 0.1733322 , 0.42619024, 0.73832537], [ 0.18382538, 0.43044226, 0.73776764], [ 0.19394034, 0.4346772 , 0.73725867], [ 0.20367115, 0.43889576, 0.73685314], [ 0.21313625, 0.44310003, 0.73648045], [ 0.22231173, 0.44729079, 0.73619681], [ 0.23125148, 0.45146945, 0.73597803], [ 0.23998101, 0.45563715, 0.7358223 ], [ 0.24853358, 0.45979489, 0.73571524], [ 0.25691416, 0.4639437 , 0.73566943], [ 0.26513894, 0.46808455, 0.73568319], [ 0.27322194, 0.47221835, 0.73575497], [ 0.28117543, 0.47634598, 0.73588332], [ 0.28901021, 0.48046826, 0.73606686], [ 0.2967358 , 0.48458597, 0.73630433], [ 0.30436071, 0.48869986, 0.73659451], [ 0.3118955 , 0.49281055, 0.73693255], [ 0.31935389, 0.49691847, 0.73730851], [ 0.32672701, 0.5010247 , 0.73774013], [ 0.33402607, 0.50512971, 0.73821941], [ 0.34125337, 0.50923419, 0.73874905], [ 0.34840921, 0.51333892, 0.73933402], [ 0.35551826, 0.51744353, 0.73994642], [ 0.3625676 , 0.52154929, 0.74060763], [ 
0.36956356, 0.52565656, 0.74131327], [ 0.37649902, 0.52976642, 0.74207698], [ 0.38340273, 0.53387791, 0.74286286], [ 0.39025859, 0.53799253, 0.7436962 ], [ 0.39706821, 0.54211081, 0.744578 ], [ 0.40384046, 0.54623277, 0.74549872], [ 0.41058241, 0.55035849, 0.74645094], [ 0.41728385, 0.55448919, 0.74745174], [ 0.42395178, 0.55862494, 0.74849357], [ 0.4305964 , 0.56276546, 0.74956387], [ 0.4372044 , 0.56691228, 0.75068412], [ 0.4437909 , 0.57106468, 0.75183427], [ 0.45035117, 0.5752235 , 0.75302312], [ 0.45687824, 0.57938983, 0.75426297], [ 0.46339713, 0.58356191, 0.75551816], [ 0.46988778, 0.58774195, 0.75682037], [ 0.47635605, 0.59192986, 0.75816245], [ 0.48281101, 0.5961252 , 0.75953212], [ 0.4892374 , 0.60032986, 0.76095418], [ 0.49566225, 0.60454154, 0.76238852], [ 0.50206137, 0.60876307, 0.76387371], [ 0.50845128, 0.61299312, 0.76538551], [ 0.5148258 , 0.61723272, 0.76693475], [ 0.52118385, 0.62148236, 0.76852436], [ 0.52753571, 0.62574126, 0.77013939], [ 0.53386831, 0.63001125, 0.77180152], [ 0.54020159, 0.63429038, 0.7734803 ], [ 0.54651272, 0.63858165, 0.77521306], [ 0.55282975, 0.64288207, 0.77695608], [ 0.55912585, 0.64719519, 0.77875327], [ 0.56542599, 0.65151828, 0.78056551], [ 0.57170924, 0.65585426, 0.78242747], [ 0.57799572, 0.6602009 , 0.78430751], [ 0.58426817, 0.66456073, 0.78623458], [ 0.590544 , 0.66893178, 0.78818117], [ 0.59680758, 0.67331643, 0.79017369], [ 0.60307553, 0.67771273, 0.79218572], [ 0.60934065, 0.68212194, 0.79422987], [ 0.61559495, 0.68654548, 0.7963202 ], [ 0.62185554, 0.69098125, 0.79842918], [ 0.62810662, 0.69543176, 0.80058381], [ 0.63436425, 0.69989499, 0.80275812], [ 0.64061445, 0.70437326, 0.80497621], [ 0.6468706 , 0.70886488, 0.80721641], [ 0.65312213, 0.7133717 , 0.80949719], [ 0.65937818, 0.71789261, 0.81180392], [ 0.66563334, 0.72242871, 0.81414642], [ 0.67189155, 0.72697967, 0.81651872], [ 0.67815314, 0.73154569, 0.81892097], [ 0.68441395, 0.73612771, 0.82136094], [ 0.69068321, 0.74072452, 0.82382353], [ 0.69694776, 0.7453385 , 0.82633199], [ 0.70322431, 0.74996721, 0.8288583 ], [ 0.70949595, 0.75461368, 0.83143221], [ 0.7157774 , 0.75927574, 0.83402904], [ 0.72206299, 0.76395461, 0.83665922], [ 0.72835227, 0.76865061, 0.8393242 ], [ 0.73465238, 0.7733628 , 0.84201224], [ 0.74094862, 0.77809393, 0.84474951], [ 0.74725683, 0.78284158, 0.84750915], [ 0.75357103, 0.78760701, 0.85030217], [ 0.75988961, 0.79239077, 0.85313207], [ 0.76621987, 0.79719185, 0.85598668], [ 0.77255045, 0.8020125 , 0.85888658], [ 0.77889241, 0.80685102, 0.86181298], [ 0.78524572, 0.81170768, 0.86476656], [ 0.79159841, 0.81658489, 0.86776906], [ 0.79796459, 0.82148036, 0.8707962 ], [ 0.80434168, 0.82639479, 0.87385315], [ 0.8107221 , 0.83132983, 0.87695392], [ 0.81711301, 0.8362844 , 0.88008641], [ 0.82351479, 0.84125863, 0.88325045], [ 0.82992772, 0.84625263, 0.88644594], [ 0.83634359, 0.85126806, 0.8896878 ], [ 0.84277295, 0.85630293, 0.89295721], [ 0.84921192, 0.86135782, 0.89626076], [ 0.85566206, 0.866432 , 0.89959467], [ 0.86211514, 0.87152627, 0.90297183], [ 0.86857483, 0.87663856, 0.90638248], [ 0.87504231, 0.88176648, 0.90981938], [ 0.88151194, 0.88690782, 0.91328493], [ 0.88797938, 0.89205857, 0.91677544], [ 0.89443865, 0.89721298, 0.9202854 ], [ 0.90088204, 0.90236294, 0.92380601], [ 0.90729768, 0.90749778, 0.92732797], [ 0.91367037, 0.91260329, 0.93083814], [ 0.91998105, 0.91766106, 0.93431861], [ 0.92620596, 0.92264789, 0.93774647], [ 0.93231683, 0.9275351 , 0.94109192], [ 0.93827772, 0.9322888 , 0.94432312], [ 0.94404755, 0.93686925, 0.94740137], [ 
0.94958284, 0.94123072, 0.95027696], [ 0.95482682, 0.9453245 , 0.95291103], [ 0.9597248 , 0.94909728, 0.95525103], [ 0.96422552, 0.95249273, 0.95723271], [ 0.96826161, 0.95545812, 0.95882188], [ 0.97178458, 0.95793984, 0.95995705], [ 0.97474105, 0.95989142, 0.96059997], [ 0.97708604, 0.96127366, 0.96071853], [ 0.97877855, 0.96205832, 0.96030095], [ 0.97978484, 0.96222949, 0.95935496], [ 0.9805997 , 0.96155216, 0.95813083], [ 0.98152619, 0.95993719, 0.95639322], [ 0.9819726 , 0.95766608, 0.95399269], [ 0.98191855, 0.9547873 , 0.95098107], [ 0.98138514, 0.95134771, 0.94740644], [ 0.98040845, 0.94739906, 0.94332125], [ 0.97902107, 0.94300131, 0.93878672], [ 0.97729348, 0.93820409, 0.93385135], [ 0.9752533 , 0.933073 , 0.92858252], [ 0.97297834, 0.92765261, 0.92302309], [ 0.97049104, 0.92200317, 0.91723505], [ 0.96784372, 0.91616744, 0.91126063], [ 0.96507281, 0.91018664, 0.90514124], [ 0.96222034, 0.90409203, 0.89890756], [ 0.9593079 , 0.89791478, 0.89259122], [ 0.95635626, 0.89167908, 0.88621654], [ 0.95338303, 0.88540373, 0.87980238], [ 0.95040174, 0.87910333, 0.87336339], [ 0.94742246, 0.87278899, 0.86691076], [ 0.94445249, 0.86646893, 0.86045277], [ 0.94150476, 0.86014606, 0.85399191], [ 0.93857394, 0.85382798, 0.84753642], [ 0.93566206, 0.84751766, 0.84108935], [ 0.93277194, 0.8412164 , 0.83465197], [ 0.92990106, 0.83492672, 0.82822708], [ 0.92704736, 0.82865028, 0.82181656], [ 0.92422703, 0.82238092, 0.81541333], [ 0.92142581, 0.81612448, 0.80902415], [ 0.91864501, 0.80988032, 0.80264838], [ 0.91587578, 0.80365187, 0.79629001], [ 0.9131367 , 0.79743115, 0.78994 ], [ 0.91041602, 0.79122265, 0.78360361], [ 0.90771071, 0.78502727, 0.77728196], [ 0.90501581, 0.77884674, 0.7709771 ], [ 0.90235365, 0.77267117, 0.76467793], [ 0.8997019 , 0.76650962, 0.75839484], [ 0.89705346, 0.76036481, 0.752131 ], [ 0.89444021, 0.75422253, 0.74587047], [ 0.89183355, 0.74809474, 0.73962689], [ 0.88923216, 0.74198168, 0.73340061], [ 0.88665892, 0.73587283, 0.72717995], [ 0.88408839, 0.72977904, 0.72097718], [ 0.88153537, 0.72369332, 0.71478461], [ 0.87899389, 0.7176179 , 0.70860487], [ 0.87645157, 0.71155805, 0.7024439 ], [ 0.8739399 , 0.70549893, 0.6962854 ], [ 0.87142626, 0.6994551 , 0.69014561], [ 0.8689268 , 0.69341868, 0.68401597], [ 0.86643562, 0.687392 , 0.67789917], [ 0.86394434, 0.68137863, 0.67179927], [ 0.86147586, 0.67536728, 0.665704 ], [ 0.85899928, 0.66937226, 0.6596292 ], [ 0.85654668, 0.66337773, 0.6535577 ], [ 0.85408818, 0.65739772, 0.64750494], [ 0.85164413, 0.65142189, 0.64145983], [ 0.84920091, 0.6454565 , 0.63542932], [ 0.84676427, 0.63949827, 0.62941 ], [ 0.84433231, 0.63354773, 0.62340261], [ 0.84190106, 0.62760645, 0.61740899], [ 0.83947935, 0.62166951, 0.61142404], [ 0.8370538 , 0.61574332, 0.60545478], [ 0.83463975, 0.60981951, 0.59949247], [ 0.83221877, 0.60390724, 0.593547 ], [ 0.82980985, 0.59799607, 0.58760751], [ 0.82740268, 0.59209095, 0.58167944], [ 0.82498638, 0.5861973 , 0.57576866], [ 0.82258181, 0.5803034 , 0.56986307], [ 0.82016611, 0.57442123, 0.56397539], [ 0.81776305, 0.56853725, 0.55809173], [ 0.81534551, 0.56266602, 0.55222741], [ 0.81294293, 0.55679056, 0.5463651 ], [ 0.81052113, 0.55092973, 0.54052443], [ 0.80811509, 0.54506305, 0.53468464], [ 0.80568952, 0.53921036, 0.52886622], [ 0.80327506, 0.53335335, 0.52305077], [ 0.80084727, 0.52750583, 0.51725256], [ 0.79842217, 0.5216578 , 0.51146173], [ 0.79599382, 0.51581223, 0.50568155], [ 0.79355781, 0.50997127, 0.49991444], [ 0.79112596, 0.50412707, 0.49415289], [ 0.78867442, 0.49829386, 0.48841129], [ 0.7862306 , 
0.49245398, 0.48267247], [ 0.7837687 , 0.48662309, 0.47695216], [ 0.78130809, 0.4807883 , 0.47123805], [ 0.77884467, 0.47495151, 0.46553236], [ 0.77636283, 0.46912235, 0.45984473], [ 0.77388383, 0.46328617, 0.45416141], [ 0.77138912, 0.45745466, 0.44849398], [ 0.76888874, 0.45162042, 0.44283573], [ 0.76638802, 0.44577901, 0.43718292], [ 0.76386116, 0.43994762, 0.43155211], [ 0.76133542, 0.43410655, 0.42592523], [ 0.75880631, 0.42825801, 0.42030488], [ 0.75624913, 0.42241905, 0.41470727], [ 0.7536919 , 0.41656866, 0.40911347], [ 0.75112748, 0.41071104, 0.40352792], [ 0.74854331, 0.40485474, 0.3979589 ], [ 0.74594723, 0.39899309, 0.39240088], [ 0.74334332, 0.39312199, 0.38685075], [ 0.74073277, 0.38723941, 0.3813074 ], [ 0.73809409, 0.38136133, 0.37578553], [ 0.73544692, 0.37547129, 0.37027123], [ 0.73278943, 0.36956954, 0.36476549], [ 0.73011829, 0.36365761, 0.35927038], [ 0.72743485, 0.35773314, 0.35378465], [ 0.72472722, 0.35180504, 0.34831662], [ 0.72200473, 0.34586421, 0.34285937], [ 0.71927052, 0.33990649, 0.33741033], [ 0.71652049, 0.33393396, 0.33197219], [ 0.71375362, 0.32794602, 0.32654545], [ 0.71096951, 0.32194148, 0.32113016], [ 0.70816772, 0.31591904, 0.31572637], [ 0.70534784, 0.30987734, 0.31033414], [ 0.70250944, 0.30381489, 0.30495353], [ 0.69965211, 0.2977301 , 0.2995846 ], [ 0.6967754 , 0.29162126, 0.29422741], [ 0.69388446, 0.28548074, 0.28887769], [ 0.69097561, 0.2793096 , 0.28353795], [ 0.68803513, 0.27311993, 0.27821876], [ 0.6850794 , 0.26689144, 0.27290694], [ 0.682108 , 0.26062114, 0.26760246], [ 0.67911013, 0.2543177 , 0.26231367], [ 0.67609393, 0.24796818, 0.25703372], [ 0.67305921, 0.24156846, 0.25176238], [ 0.67000176, 0.23511902, 0.24650278], [ 0.66693423, 0.22859879, 0.24124404], [ 0.6638441 , 0.22201742, 0.2359961 ], [ 0.66080672, 0.21526712, 0.23069468] ] _icefire_lut = [ [ 0.73936227, 0.90443867, 0.85757238], [ 0.72888063, 0.89639109, 0.85488394], [ 0.71834255, 0.88842162, 0.8521605 ], [ 0.70773866, 0.88052939, 0.849422 ], [ 0.69706215, 0.87271313, 0.84668315], [ 0.68629021, 0.86497329, 0.84398721], [ 0.67543654, 0.85730617, 0.84130969], [ 0.66448539, 0.84971123, 0.83868005], [ 0.65342679, 0.84218728, 0.83611512], [ 0.64231804, 0.83471867, 0.83358584], [ 0.63117745, 0.827294 , 0.83113431], [ 0.62000484, 0.81991069, 0.82876741], [ 0.60879435, 0.81256797, 0.82648905], [ 0.59754118, 0.80526458, 0.82430414], [ 0.58624247, 0.79799884, 0.82221573], [ 0.57489525, 0.7907688 , 0.82022901], [ 0.56349779, 0.78357215, 0.81834861], [ 0.55204294, 0.77640827, 0.81657563], [ 0.54052516, 0.76927562, 0.81491462], [ 0.52894085, 0.76217215, 0.81336913], [ 0.51728854, 0.75509528, 0.81194156], [ 0.50555676, 0.74804469, 0.81063503], [ 0.49373871, 0.7410187 , 0.80945242], [ 0.48183174, 0.73401449, 0.80839675], [ 0.46982587, 0.72703075, 0.80747097], [ 0.45770893, 0.72006648, 0.80667756], [ 0.44547249, 0.71311941, 0.80601991], [ 0.43318643, 0.70617126, 0.80549278], [ 0.42110294, 0.69916972, 0.80506683], [ 0.40925101, 0.69211059, 0.80473246], [ 0.3976693 , 0.68498786, 0.80448272], [ 0.38632002, 0.67781125, 0.80431024], [ 0.37523981, 0.67057537, 0.80420832], [ 0.36442578, 0.66328229, 0.80417474], [ 0.35385939, 0.65593699, 0.80420591], [ 0.34358916, 0.64853177, 0.8043 ], [ 0.33355526, 0.64107876, 0.80445484], [ 0.32383062, 0.63356578, 0.80467091], [ 0.31434372, 0.62600624, 0.8049475 ], [ 0.30516161, 0.618389 , 0.80528692], [ 0.29623491, 0.61072284, 0.80569021], [ 0.28759072, 0.60300319, 0.80616055], [ 0.27923924, 0.59522877, 0.80669803], [ 0.27114651, 0.5874047 , 0.80730545], [ 
0.26337153, 0.57952055, 0.80799113], [ 0.25588696, 0.57157984, 0.80875922], [ 0.248686 , 0.56358255, 0.80961366], [ 0.24180668, 0.55552289, 0.81055123], [ 0.23526251, 0.54739477, 0.8115939 ], [ 0.22921445, 0.53918506, 0.81267292], [ 0.22397687, 0.53086094, 0.8137141 ], [ 0.21977058, 0.52241482, 0.81457651], [ 0.21658989, 0.51384321, 0.81528511], [ 0.21452772, 0.50514155, 0.81577278], [ 0.21372783, 0.49630865, 0.81589566], [ 0.21409503, 0.48734861, 0.81566163], [ 0.2157176 , 0.47827123, 0.81487615], [ 0.21842857, 0.46909168, 0.81351614], [ 0.22211705, 0.45983212, 0.81146983], [ 0.22665681, 0.45052233, 0.80860217], [ 0.23176013, 0.44119137, 0.80494325], [ 0.23727775, 0.43187704, 0.80038017], [ 0.24298285, 0.42261123, 0.79493267], [ 0.24865068, 0.41341842, 0.78869164], [ 0.25423116, 0.40433127, 0.78155831], [ 0.25950239, 0.39535521, 0.77376848], [ 0.2644736 , 0.38651212, 0.76524809], [ 0.26901584, 0.37779582, 0.75621942], [ 0.27318141, 0.36922056, 0.746605 ], [ 0.27690355, 0.3607736 , 0.73659374], [ 0.28023585, 0.35244234, 0.72622103], [ 0.28306009, 0.34438449, 0.71500731], [ 0.28535896, 0.33660243, 0.70303975], [ 0.28708711, 0.32912157, 0.69034504], [ 0.28816354, 0.32200604, 0.67684067], [ 0.28862749, 0.31519824, 0.66278813], [ 0.28847904, 0.30869064, 0.6482815 ], [ 0.28770912, 0.30250126, 0.63331265], [ 0.28640325, 0.29655509, 0.61811374], [ 0.28458943, 0.29082155, 0.60280913], [ 0.28233561, 0.28527482, 0.58742866], [ 0.27967038, 0.2798938 , 0.57204225], [ 0.27665361, 0.27465357, 0.55667809], [ 0.27332564, 0.2695165 , 0.54145387], [ 0.26973851, 0.26447054, 0.52634916], [ 0.2659204 , 0.25949691, 0.511417 ], [ 0.26190145, 0.25458123, 0.49668768], [ 0.2577151 , 0.24971691, 0.48214874], [ 0.25337618, 0.24490494, 0.46778758], [ 0.24890842, 0.24013332, 0.45363816], [ 0.24433654, 0.23539226, 0.4397245 ], [ 0.23967922, 0.23067729, 0.4260591 ], [ 0.23495608, 0.22598894, 0.41262952], [ 0.23018113, 0.22132414, 0.39945577], [ 0.22534609, 0.21670847, 0.38645794], [ 0.22048761, 0.21211723, 0.37372555], [ 0.2156198 , 0.20755389, 0.36125301], [ 0.21074637, 0.20302717, 0.34903192], [ 0.20586893, 0.19855368, 0.33701661], [ 0.20101757, 0.19411573, 0.32529173], [ 0.19619947, 0.18972425, 0.31383846], [ 0.19140726, 0.18540157, 0.30260777], [ 0.1866769 , 0.1811332 , 0.29166583], [ 0.18201285, 0.17694992, 0.28088776], [ 0.17745228, 0.17282141, 0.27044211], [ 0.17300684, 0.16876921, 0.26024893], [ 0.16868273, 0.16479861, 0.25034479], [ 0.16448691, 0.16091728, 0.24075373], [ 0.16043195, 0.15714351, 0.23141745], [ 0.15652427, 0.15348248, 0.22238175], [ 0.15277065, 0.14994111, 0.21368395], [ 0.14918274, 0.14653431, 0.20529486], [ 0.14577095, 0.14327403, 0.19720829], [ 0.14254381, 0.14016944, 0.18944326], [ 0.13951035, 0.13723063, 0.18201072], [ 0.13667798, 0.13446606, 0.17493774], [ 0.13405762, 0.13188822, 0.16820842], [ 0.13165767, 0.12950667, 0.16183275], [ 0.12948748, 0.12733187, 0.15580631], [ 0.12755435, 0.1253723 , 0.15014098], [ 0.12586516, 0.12363617, 0.1448459 ], [ 0.12442647, 0.12213143, 0.13992571], [ 0.12324241, 0.12086419, 0.13539995], [ 0.12232067, 0.11984278, 0.13124644], [ 0.12166209, 0.11907077, 0.12749671], [ 0.12126982, 0.11855309, 0.12415079], [ 0.12114244, 0.11829179, 0.1212385 ], [ 0.12127766, 0.11828837, 0.11878534], [ 0.12284806, 0.1179729 , 0.11772022], [ 0.12619498, 0.11721796, 0.11770203], [ 0.129968 , 0.11663788, 0.11792377], [ 0.13410011, 0.11625146, 0.11839138], [ 0.13855459, 0.11606618, 0.11910584], [ 0.14333775, 0.11607038, 0.1200606 ], [ 0.148417 , 0.11626929, 0.12125453], [ 
0.15377389, 0.11666192, 0.12268364], [ 0.15941427, 0.11723486, 0.12433911], [ 0.16533376, 0.11797856, 0.12621303], [ 0.17152547, 0.11888403, 0.12829735], [ 0.17797765, 0.11994436, 0.13058435], [ 0.18468769, 0.12114722, 0.13306426], [ 0.19165663, 0.12247737, 0.13572616], [ 0.19884415, 0.12394381, 0.1385669 ], [ 0.20627181, 0.12551883, 0.14157124], [ 0.21394877, 0.12718055, 0.14472604], [ 0.22184572, 0.12893119, 0.14802579], [ 0.22994394, 0.13076731, 0.15146314], [ 0.23823937, 0.13267611, 0.15502793], [ 0.24676041, 0.13462172, 0.15870321], [ 0.25546457, 0.13661751, 0.16248722], [ 0.26433628, 0.13865956, 0.16637301], [ 0.27341345, 0.14070412, 0.17034221], [ 0.28264773, 0.14277192, 0.1743957 ], [ 0.29202272, 0.14486161, 0.17852793], [ 0.30159648, 0.14691224, 0.1827169 ], [ 0.31129002, 0.14897583, 0.18695213], [ 0.32111555, 0.15103351, 0.19119629], [ 0.33107961, 0.1530674 , 0.19543758], [ 0.34119892, 0.15504762, 0.1996803 ], [ 0.35142388, 0.15701131, 0.20389086], [ 0.36178937, 0.1589124 , 0.20807639], [ 0.37229381, 0.16073993, 0.21223189], [ 0.38288348, 0.16254006, 0.2163249 ], [ 0.39359592, 0.16426336, 0.22036577], [ 0.40444332, 0.16588767, 0.22434027], [ 0.41537995, 0.16745325, 0.2282297 ], [ 0.42640867, 0.16894939, 0.23202755], [ 0.43754706, 0.17034847, 0.23572899], [ 0.44878564, 0.1716535 , 0.23932344], [ 0.4601126 , 0.17287365, 0.24278607], [ 0.47151732, 0.17401641, 0.24610337], [ 0.48300689, 0.17506676, 0.2492737 ], [ 0.49458302, 0.17601892, 0.25227688], [ 0.50623876, 0.17687777, 0.255096 ], [ 0.5179623 , 0.17765528, 0.2577162 ], [ 0.52975234, 0.17835232, 0.2601134 ], [ 0.54159776, 0.17898292, 0.26226847], [ 0.55348804, 0.17956232, 0.26416003], [ 0.56541729, 0.18010175, 0.26575971], [ 0.57736669, 0.180631 , 0.26704888], [ 0.58932081, 0.18117827, 0.26800409], [ 0.60127582, 0.18175888, 0.26858488], [ 0.61319563, 0.1824336 , 0.2687872 ], [ 0.62506376, 0.18324015, 0.26858301], [ 0.63681202, 0.18430173, 0.26795276], [ 0.64842603, 0.18565472, 0.26689463], [ 0.65988195, 0.18734638, 0.26543435], [ 0.67111966, 0.18948885, 0.26357955], [ 0.68209194, 0.19216636, 0.26137175], [ 0.69281185, 0.19535326, 0.25887063], [ 0.70335022, 0.19891271, 0.25617971], [ 0.71375229, 0.20276438, 0.25331365], [ 0.72401436, 0.20691287, 0.25027366], [ 0.73407638, 0.21145051, 0.24710661], [ 0.74396983, 0.21631913, 0.24380715], [ 0.75361506, 0.22163653, 0.24043996], [ 0.7630579 , 0.22731637, 0.23700095], [ 0.77222228, 0.23346231, 0.23356628], [ 0.78115441, 0.23998404, 0.23013825], [ 0.78979746, 0.24694858, 0.22678822], [ 0.79819286, 0.25427223, 0.22352658], [ 0.80630444, 0.26198807, 0.22040877], [ 0.81417437, 0.27001406, 0.21744645], [ 0.82177364, 0.27837336, 0.21468316], [ 0.82915955, 0.28696963, 0.21210766], [ 0.83628628, 0.2958499 , 0.20977813], [ 0.84322168, 0.30491136, 0.20766435], [ 0.84995458, 0.31415945, 0.2057863 ], [ 0.85648867, 0.32358058, 0.20415327], [ 0.86286243, 0.33312058, 0.20274969], [ 0.86908321, 0.34276705, 0.20157271], [ 0.87512876, 0.3525416 , 0.20064949], [ 0.88100349, 0.36243385, 0.19999078], [ 0.8866469 , 0.37249496, 0.1997976 ], [ 0.89203964, 0.38273475, 0.20013431], [ 0.89713496, 0.39318156, 0.20121514], [ 0.90195099, 0.40380687, 0.20301555], [ 0.90648379, 0.41460191, 0.20558847], [ 0.9106967 , 0.42557857, 0.20918529], [ 0.91463791, 0.43668557, 0.21367954], [ 0.91830723, 0.44790913, 0.21916352], [ 0.92171507, 0.45922856, 0.22568002], [ 0.92491786, 0.4705936 , 0.23308207], [ 0.92790792, 0.48200153, 0.24145932], [ 0.93073701, 0.49341219, 0.25065486], [ 0.93343918, 0.5048017 , 0.26056148], [ 
0.93602064, 0.51616486, 0.27118485], [ 0.93850535, 0.52748892, 0.28242464], [ 0.94092933, 0.53875462, 0.29416042], [ 0.94330011, 0.5499628 , 0.30634189], [ 0.94563159, 0.56110987, 0.31891624], [ 0.94792955, 0.57219822, 0.33184256], [ 0.95020929, 0.5832232 , 0.34508419], [ 0.95247324, 0.59419035, 0.35859866], [ 0.95471709, 0.60510869, 0.37236035], [ 0.95698411, 0.61595766, 0.38629631], [ 0.95923863, 0.62676473, 0.40043317], [ 0.9615041 , 0.6375203 , 0.41474106], [ 0.96371553, 0.64826619, 0.42928335], [ 0.96591497, 0.65899621, 0.44380444], [ 0.96809871, 0.66971662, 0.45830232], [ 0.9702495 , 0.6804394 , 0.47280492], [ 0.9723881 , 0.69115622, 0.48729272], [ 0.97450723, 0.70187358, 0.50178034], [ 0.9766108 , 0.712592 , 0.51626837], [ 0.97871716, 0.72330511, 0.53074053], [ 0.98082222, 0.73401769, 0.54520694], [ 0.9829001 , 0.74474445, 0.5597019 ], [ 0.98497466, 0.75547635, 0.57420239], [ 0.98705581, 0.76621129, 0.58870185], [ 0.98913325, 0.77695637, 0.60321626], [ 0.99119918, 0.78771716, 0.61775821], [ 0.9932672 , 0.79848979, 0.63231691], [ 0.99535958, 0.80926704, 0.64687278], [ 0.99740544, 0.82008078, 0.66150571], [ 0.9992197 , 0.83100723, 0.6764127 ] ] _luts = [_rocket_lut, _mako_lut, _vlag_lut, _icefire_lut] _names = ["rocket", "mako", "vlag", "icefire"] for _lut, _name in zip(_luts, _names): _cmap = colors.ListedColormap(_lut, _name) locals()[_name] = _cmap _cmap_r = colors.ListedColormap(_lut[::-1], _name + "_r") locals()[_name + "_r"] = _cmap_r mpl_cm.register_cmap(_name, _cmap) mpl_cm.register_cmap(_name + "_r", _cmap_r) seaborn-0.10.0/seaborn/colors/000077500000000000000000000000001361256634400161655ustar00rootroot00000000000000seaborn-0.10.0/seaborn/colors/__init__.py000066400000000000000000000000741361256634400202770ustar00rootroot00000000000000from .xkcd_rgb import xkcd_rgb from .crayons import crayons seaborn-0.10.0/seaborn/colors/crayons.py000066400000000000000000000103521361256634400202160ustar00rootroot00000000000000crayons = {'Almond': '#EFDECD', 'Antique Brass': '#CD9575', 'Apricot': '#FDD9B5', 'Aquamarine': '#78DBE2', 'Asparagus': '#87A96B', 'Atomic Tangerine': '#FFA474', 'Banana Mania': '#FAE7B5', 'Beaver': '#9F8170', 'Bittersweet': '#FD7C6E', 'Black': '#000000', 'Blue': '#1F75FE', 'Blue Bell': '#A2A2D0', 'Blue Green': '#0D98BA', 'Blue Violet': '#7366BD', 'Blush': '#DE5D83', 'Brick Red': '#CB4154', 'Brown': '#B4674D', 'Burnt Orange': '#FF7F49', 'Burnt Sienna': '#EA7E5D', 'Cadet Blue': '#B0B7C6', 'Canary': '#FFFF99', 'Caribbean Green': '#00CC99', 'Carnation Pink': '#FFAACC', 'Cerise': '#DD4492', 'Cerulean': '#1DACD6', 'Chestnut': '#BC5D58', 'Copper': '#DD9475', 'Cornflower': '#9ACEEB', 'Cotton Candy': '#FFBCD9', 'Dandelion': '#FDDB6D', 'Denim': '#2B6CC4', 'Desert Sand': '#EFCDB8', 'Eggplant': '#6E5160', 'Electric Lime': '#CEFF1D', 'Fern': '#71BC78', 'Forest Green': '#6DAE81', 'Fuchsia': '#C364C5', 'Fuzzy Wuzzy': '#CC6666', 'Gold': '#E7C697', 'Goldenrod': '#FCD975', 'Granny Smith Apple': '#A8E4A0', 'Gray': '#95918C', 'Green': '#1CAC78', 'Green Yellow': '#F0E891', 'Hot Magenta': '#FF1DCE', 'Inchworm': '#B2EC5D', 'Indigo': '#5D76CB', 'Jazzberry Jam': '#CA3767', 'Jungle Green': '#3BB08F', 'Laser Lemon': '#FEFE22', 'Lavender': '#FCB4D5', 'Macaroni and Cheese': '#FFBD88', 'Magenta': '#F664AF', 'Mahogany': '#CD4A4C', 'Manatee': '#979AAA', 'Mango Tango': '#FF8243', 'Maroon': '#C8385A', 'Mauvelous': '#EF98AA', 'Melon': '#FDBCB4', 'Midnight Blue': '#1A4876', 'Mountain Meadow': '#30BA8F', 'Navy Blue': '#1974D2', 'Neon Carrot': '#FFA343', 'Olive Green': '#BAB86C', 'Orange': '#FF7538', 
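The loop that closes the colormap module (cm.py) above registers each lookup table, and its reversed copy, as a named matplotlib colormap, so the palettes become addressable anywhere a colormap name is accepted. A minimal usage sketch, assuming only that seaborn has been imported (which executes that registration) and using a made-up array purely for illustration:

import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns  # importing seaborn runs cm.py and registers the palettes

data = np.random.rand(10, 12)  # hypothetical data, for illustration only

# The names registered above (and their "_r" reversed forms) now resolve
# like any built-in colormap name.
cmap = plt.get_cmap("mako_r")
ax = sns.heatmap(data, cmap="rocket")
plt.show()

The ListedColormap objects are also bound as module-level attributes, so they remain reachable directly as, for example, seaborn.cm.rocket. The crayons dictionary that follows (and the xkcd_rgb dictionary in the next module) are plain name-to-hex mappings, which seaborn exposes for palette construction through crayon_palette() and xkcd_palette().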
'Orchid': '#E6A8D7', 'Outer Space': '#414A4C', 'Outrageous Orange': '#FF6E4A', 'Pacific Blue': '#1CA9C9', 'Peach': '#FFCFAB', 'Periwinkle': '#C5D0E6', 'Piggy Pink': '#FDDDE6', 'Pine Green': '#158078', 'Pink Flamingo': '#FC74FD', 'Pink Sherbert': '#F78FA7', 'Plum': '#8E4585', 'Purple Heart': '#7442C8', "Purple Mountains' Majesty": '#9D81BA', 'Purple Pizzazz': '#FE4EDA', 'Radical Red': '#FF496C', 'Raw Sienna': '#D68A59', 'Razzle Dazzle Rose': '#FF48D0', 'Razzmatazz': '#E3256B', 'Red': '#EE204D', 'Red Orange': '#FF5349', 'Red Violet': '#C0448F', "Robin's Egg Blue": '#1FCECB', 'Royal Purple': '#7851A9', 'Salmon': '#FF9BAA', 'Scarlet': '#FC2847', "Screamin' Green": '#76FF7A', 'Sea Green': '#93DFB8', 'Sepia': '#A5694F', 'Shadow': '#8A795D', 'Shamrock': '#45CEA2', 'Shocking Pink': '#FB7EFD', 'Silver': '#CDC5C2', 'Sky Blue': '#80DAEB', 'Spring Green': '#ECEABE', 'Sunglow': '#FFCF48', 'Sunset Orange': '#FD5E53', 'Tan': '#FAA76C', 'Tickle Me Pink': '#FC89AC', 'Timberwolf': '#DBD7D2', 'Tropical Rain Forest': '#17806D', 'Tumbleweed': '#DEAA88', 'Turquoise Blue': '#77DDE7', 'Unmellow Yellow': '#FFFF66', 'Violet (Purple)': '#926EAE', 'Violet Red': '#F75394', 'Vivid Tangerine': '#FFA089', 'Vivid Violet': '#8F509D', 'White': '#FFFFFF', 'Wild Blue Yonder': '#A2ADD0', 'Wild Strawberry': '#FF43A4', 'Wild Watermelon': '#FC6C85', 'Wisteria': '#CDA4DE', 'Yellow': '#FCE883', 'Yellow Green': '#C5E384', 'Yellow Orange': '#FFAE42'} seaborn-0.10.0/seaborn/colors/xkcd_rgb.py000066400000000000000000001050631361256634400203270ustar00rootroot00000000000000xkcd_rgb = {'acid green': '#8ffe09', 'adobe': '#bd6c48', 'algae': '#54ac68', 'algae green': '#21c36f', 'almost black': '#070d0d', 'amber': '#feb308', 'amethyst': '#9b5fc0', 'apple': '#6ecb3c', 'apple green': '#76cd26', 'apricot': '#ffb16d', 'aqua': '#13eac9', 'aqua blue': '#02d8e9', 'aqua green': '#12e193', 'aqua marine': '#2ee8bb', 'aquamarine': '#04d8b2', 'army green': '#4b5d16', 'asparagus': '#77ab56', 'aubergine': '#3d0734', 'auburn': '#9a3001', 'avocado': '#90b134', 'avocado green': '#87a922', 'azul': '#1d5dec', 'azure': '#069af3', 'baby blue': '#a2cffe', 'baby green': '#8cff9e', 'baby pink': '#ffb7ce', 'baby poo': '#ab9004', 'baby poop': '#937c00', 'baby poop green': '#8f9805', 'baby puke green': '#b6c406', 'baby purple': '#ca9bf7', 'baby shit brown': '#ad900d', 'baby shit green': '#889717', 'banana': '#ffff7e', 'banana yellow': '#fafe4b', 'barbie pink': '#fe46a5', 'barf green': '#94ac02', 'barney': '#ac1db8', 'barney purple': '#a00498', 'battleship grey': '#6b7c85', 'beige': '#e6daa6', 'berry': '#990f4b', 'bile': '#b5c306', 'black': '#000000', 'bland': '#afa88b', 'blood': '#770001', 'blood orange': '#fe4b03', 'blood red': '#980002', 'blue': '#0343df', 'blue blue': '#2242c7', 'blue green': '#137e6d', 'blue grey': '#607c8e', 'blue purple': '#5729ce', 'blue violet': '#5d06e9', 'blue with a hint of purple': '#533cc6', 'blue/green': '#0f9b8e', 'blue/grey': '#758da3', 'blue/purple': '#5a06ef', 'blueberry': '#464196', 'bluegreen': '#017a79', 'bluegrey': '#85a3b2', 'bluey green': '#2bb179', 'bluey grey': '#89a0b0', 'bluey purple': '#6241c7', 'bluish': '#2976bb', 'bluish green': '#10a674', 'bluish grey': '#748b97', 'bluish purple': '#703be7', 'blurple': '#5539cc', 'blush': '#f29e8e', 'blush pink': '#fe828c', 'booger': '#9bb53c', 'booger green': '#96b403', 'bordeaux': '#7b002c', 'boring green': '#63b365', 'bottle green': '#044a05', 'brick': '#a03623', 'brick orange': '#c14a09', 'brick red': '#8f1402', 'bright aqua': '#0bf9ea', 'bright blue': '#0165fc', 'bright cyan': 
'#41fdfe', 'bright green': '#01ff07', 'bright lavender': '#c760ff', 'bright light blue': '#26f7fd', 'bright light green': '#2dfe54', 'bright lilac': '#c95efb', 'bright lime': '#87fd05', 'bright lime green': '#65fe08', 'bright magenta': '#ff08e8', 'bright olive': '#9cbb04', 'bright orange': '#ff5b00', 'bright pink': '#fe01b1', 'bright purple': '#be03fd', 'bright red': '#ff000d', 'bright sea green': '#05ffa6', 'bright sky blue': '#02ccfe', 'bright teal': '#01f9c6', 'bright turquoise': '#0ffef9', 'bright violet': '#ad0afd', 'bright yellow': '#fffd01', 'bright yellow green': '#9dff00', 'british racing green': '#05480d', 'bronze': '#a87900', 'brown': '#653700', 'brown green': '#706c11', 'brown grey': '#8d8468', 'brown orange': '#b96902', 'brown red': '#922b05', 'brown yellow': '#b29705', 'brownish': '#9c6d57', 'brownish green': '#6a6e09', 'brownish grey': '#86775f', 'brownish orange': '#cb7723', 'brownish pink': '#c27e79', 'brownish purple': '#76424e', 'brownish red': '#9e3623', 'brownish yellow': '#c9b003', 'browny green': '#6f6c0a', 'browny orange': '#ca6b02', 'bruise': '#7e4071', 'bubble gum pink': '#ff69af', 'bubblegum': '#ff6cb5', 'bubblegum pink': '#fe83cc', 'buff': '#fef69e', 'burgundy': '#610023', 'burnt orange': '#c04e01', 'burnt red': '#9f2305', 'burnt siena': '#b75203', 'burnt sienna': '#b04e0f', 'burnt umber': '#a0450e', 'burnt yellow': '#d5ab09', 'burple': '#6832e3', 'butter': '#ffff81', 'butter yellow': '#fffd74', 'butterscotch': '#fdb147', 'cadet blue': '#4e7496', 'camel': '#c69f59', 'camo': '#7f8f4e', 'camo green': '#526525', 'camouflage green': '#4b6113', 'canary': '#fdff63', 'canary yellow': '#fffe40', 'candy pink': '#ff63e9', 'caramel': '#af6f09', 'carmine': '#9d0216', 'carnation': '#fd798f', 'carnation pink': '#ff7fa7', 'carolina blue': '#8ab8fe', 'celadon': '#befdb7', 'celery': '#c1fd95', 'cement': '#a5a391', 'cerise': '#de0c62', 'cerulean': '#0485d1', 'cerulean blue': '#056eee', 'charcoal': '#343837', 'charcoal grey': '#3c4142', 'chartreuse': '#c1f80a', 'cherry': '#cf0234', 'cherry red': '#f7022a', 'chestnut': '#742802', 'chocolate': '#3d1c02', 'chocolate brown': '#411900', 'cinnamon': '#ac4f06', 'claret': '#680018', 'clay': '#b66a50', 'clay brown': '#b2713d', 'clear blue': '#247afd', 'cloudy blue': '#acc2d9', 'cobalt': '#1e488f', 'cobalt blue': '#030aa7', 'cocoa': '#875f42', 'coffee': '#a6814c', 'cool blue': '#4984b8', 'cool green': '#33b864', 'cool grey': '#95a3a6', 'copper': '#b66325', 'coral': '#fc5a50', 'coral pink': '#ff6163', 'cornflower': '#6a79f7', 'cornflower blue': '#5170d7', 'cranberry': '#9e003a', 'cream': '#ffffc2', 'creme': '#ffffb6', 'crimson': '#8c000f', 'custard': '#fffd78', 'cyan': '#00ffff', 'dandelion': '#fedf08', 'dark': '#1b2431', 'dark aqua': '#05696b', 'dark aquamarine': '#017371', 'dark beige': '#ac9362', 'dark blue': '#00035b', 'dark blue green': '#005249', 'dark blue grey': '#1f3b4d', 'dark brown': '#341c02', 'dark coral': '#cf524e', 'dark cream': '#fff39a', 'dark cyan': '#0a888a', 'dark forest green': '#002d04', 'dark fuchsia': '#9d0759', 'dark gold': '#b59410', 'dark grass green': '#388004', 'dark green': '#033500', 'dark green blue': '#1f6357', 'dark grey': '#363737', 'dark grey blue': '#29465b', 'dark hot pink': '#d90166', 'dark indigo': '#1f0954', 'dark khaki': '#9b8f55', 'dark lavender': '#856798', 'dark lilac': '#9c6da5', 'dark lime': '#84b701', 'dark lime green': '#7ebd01', 'dark magenta': '#960056', 'dark maroon': '#3c0008', 'dark mauve': '#874c62', 'dark mint': '#48c072', 'dark mint green': '#20c073', 'dark mustard': '#a88905', 'dark 
navy': '#000435', 'dark navy blue': '#00022e', 'dark olive': '#373e02', 'dark olive green': '#3c4d03', 'dark orange': '#c65102', 'dark pastel green': '#56ae57', 'dark peach': '#de7e5d', 'dark periwinkle': '#665fd1', 'dark pink': '#cb416b', 'dark plum': '#3f012c', 'dark purple': '#35063e', 'dark red': '#840000', 'dark rose': '#b5485d', 'dark royal blue': '#02066f', 'dark sage': '#598556', 'dark salmon': '#c85a53', 'dark sand': '#a88f59', 'dark sea green': '#11875d', 'dark seafoam': '#1fb57a', 'dark seafoam green': '#3eaf76', 'dark sky blue': '#448ee4', 'dark slate blue': '#214761', 'dark tan': '#af884a', 'dark taupe': '#7f684e', 'dark teal': '#014d4e', 'dark turquoise': '#045c5a', 'dark violet': '#34013f', 'dark yellow': '#d5b60a', 'dark yellow green': '#728f02', 'darkblue': '#030764', 'darkgreen': '#054907', 'darkish blue': '#014182', 'darkish green': '#287c37', 'darkish pink': '#da467d', 'darkish purple': '#751973', 'darkish red': '#a90308', 'deep aqua': '#08787f', 'deep blue': '#040273', 'deep brown': '#410200', 'deep green': '#02590f', 'deep lavender': '#8d5eb7', 'deep lilac': '#966ebd', 'deep magenta': '#a0025c', 'deep orange': '#dc4d01', 'deep pink': '#cb0162', 'deep purple': '#36013f', 'deep red': '#9a0200', 'deep rose': '#c74767', 'deep sea blue': '#015482', 'deep sky blue': '#0d75f8', 'deep teal': '#00555a', 'deep turquoise': '#017374', 'deep violet': '#490648', 'denim': '#3b638c', 'denim blue': '#3b5b92', 'desert': '#ccad60', 'diarrhea': '#9f8303', 'dirt': '#8a6e45', 'dirt brown': '#836539', 'dirty blue': '#3f829d', 'dirty green': '#667e2c', 'dirty orange': '#c87606', 'dirty pink': '#ca7b80', 'dirty purple': '#734a65', 'dirty yellow': '#cdc50a', 'dodger blue': '#3e82fc', 'drab': '#828344', 'drab green': '#749551', 'dried blood': '#4b0101', 'duck egg blue': '#c3fbf4', 'dull blue': '#49759c', 'dull brown': '#876e4b', 'dull green': '#74a662', 'dull orange': '#d8863b', 'dull pink': '#d5869d', 'dull purple': '#84597e', 'dull red': '#bb3f3f', 'dull teal': '#5f9e8f', 'dull yellow': '#eedc5b', 'dusk': '#4e5481', 'dusk blue': '#26538d', 'dusky blue': '#475f94', 'dusky pink': '#cc7a8b', 'dusky purple': '#895b7b', 'dusky rose': '#ba6873', 'dust': '#b2996e', 'dusty blue': '#5a86ad', 'dusty green': '#76a973', 'dusty lavender': '#ac86a8', 'dusty orange': '#f0833a', 'dusty pink': '#d58a94', 'dusty purple': '#825f87', 'dusty red': '#b9484e', 'dusty rose': '#c0737a', 'dusty teal': '#4c9085', 'earth': '#a2653e', 'easter green': '#8cfd7e', 'easter purple': '#c071fe', 'ecru': '#feffca', 'egg shell': '#fffcc4', 'eggplant': '#380835', 'eggplant purple': '#430541', 'eggshell': '#ffffd4', 'eggshell blue': '#c4fff7', 'electric blue': '#0652ff', 'electric green': '#21fc0d', 'electric lime': '#a8ff04', 'electric pink': '#ff0490', 'electric purple': '#aa23ff', 'emerald': '#01a049', 'emerald green': '#028f1e', 'evergreen': '#05472a', 'faded blue': '#658cbb', 'faded green': '#7bb274', 'faded orange': '#f0944d', 'faded pink': '#de9dac', 'faded purple': '#916e99', 'faded red': '#d3494e', 'faded yellow': '#feff7f', 'fawn': '#cfaf7b', 'fern': '#63a950', 'fern green': '#548d44', 'fire engine red': '#fe0002', 'flat blue': '#3c73a8', 'flat green': '#699d4c', 'fluorescent green': '#08ff08', 'fluro green': '#0aff02', 'foam green': '#90fda9', 'forest': '#0b5509', 'forest green': '#06470c', 'forrest green': '#154406', 'french blue': '#436bad', 'fresh green': '#69d84f', 'frog green': '#58bc08', 'fuchsia': '#ed0dd9', 'gold': '#dbb40c', 'golden': '#f5bf03', 'golden brown': '#b27a01', 'golden rod': '#f9bc08', 'golden 
yellow': '#fec615', 'goldenrod': '#fac205', 'grape': '#6c3461', 'grape purple': '#5d1451', 'grapefruit': '#fd5956', 'grass': '#5cac2d', 'grass green': '#3f9b0b', 'grassy green': '#419c03', 'green': '#15b01a', 'green apple': '#5edc1f', 'green blue': '#06b48b', 'green brown': '#544e03', 'green grey': '#77926f', 'green teal': '#0cb577', 'green yellow': '#c9ff27', 'green/blue': '#01c08d', 'green/yellow': '#b5ce08', 'greenblue': '#23c48b', 'greenish': '#40a368', 'greenish beige': '#c9d179', 'greenish blue': '#0b8b87', 'greenish brown': '#696112', 'greenish cyan': '#2afeb7', 'greenish grey': '#96ae8d', 'greenish tan': '#bccb7a', 'greenish teal': '#32bf84', 'greenish turquoise': '#00fbb0', 'greenish yellow': '#cdfd02', 'greeny blue': '#42b395', 'greeny brown': '#696006', 'greeny grey': '#7ea07a', 'greeny yellow': '#c6f808', 'grey': '#929591', 'grey blue': '#6b8ba4', 'grey brown': '#7f7053', 'grey green': '#789b73', 'grey pink': '#c3909b', 'grey purple': '#826d8c', 'grey teal': '#5e9b8a', 'grey/blue': '#647d8e', 'grey/green': '#86a17d', 'greyblue': '#77a1b5', 'greyish': '#a8a495', 'greyish blue': '#5e819d', 'greyish brown': '#7a6a4f', 'greyish green': '#82a67d', 'greyish pink': '#c88d94', 'greyish purple': '#887191', 'greyish teal': '#719f91', 'gross green': '#a0bf16', 'gunmetal': '#536267', 'hazel': '#8e7618', 'heather': '#a484ac', 'heliotrope': '#d94ff5', 'highlighter green': '#1bfc06', 'hospital green': '#9be5aa', 'hot green': '#25ff29', 'hot magenta': '#f504c9', 'hot pink': '#ff028d', 'hot purple': '#cb00f5', 'hunter green': '#0b4008', 'ice': '#d6fffa', 'ice blue': '#d7fffe', 'icky green': '#8fae22', 'indian red': '#850e04', 'indigo': '#380282', 'indigo blue': '#3a18b1', 'iris': '#6258c4', 'irish green': '#019529', 'ivory': '#ffffcb', 'jade': '#1fa774', 'jade green': '#2baf6a', 'jungle green': '#048243', 'kelley green': '#009337', 'kelly green': '#02ab2e', 'kermit green': '#5cb200', 'key lime': '#aeff6e', 'khaki': '#aaa662', 'khaki green': '#728639', 'kiwi': '#9cef43', 'kiwi green': '#8ee53f', 'lavender': '#c79fef', 'lavender blue': '#8b88f8', 'lavender pink': '#dd85d7', 'lawn green': '#4da409', 'leaf': '#71aa34', 'leaf green': '#5ca904', 'leafy green': '#51b73b', 'leather': '#ac7434', 'lemon': '#fdff52', 'lemon green': '#adf802', 'lemon lime': '#bffe28', 'lemon yellow': '#fdff38', 'lichen': '#8fb67b', 'light aqua': '#8cffdb', 'light aquamarine': '#7bfdc7', 'light beige': '#fffeb6', 'light blue': '#95d0fc', 'light blue green': '#7efbb3', 'light blue grey': '#b7c9e2', 'light bluish green': '#76fda8', 'light bright green': '#53fe5c', 'light brown': '#ad8150', 'light burgundy': '#a8415b', 'light cyan': '#acfffc', 'light eggplant': '#894585', 'light forest green': '#4f9153', 'light gold': '#fddc5c', 'light grass green': '#9af764', 'light green': '#96f97b', 'light green blue': '#56fca2', 'light greenish blue': '#63f7b4', 'light grey': '#d8dcd6', 'light grey blue': '#9dbcd4', 'light grey green': '#b7e1a1', 'light indigo': '#6d5acf', 'light khaki': '#e6f2a2', 'light lavendar': '#efc0fe', 'light lavender': '#dfc5fe', 'light light blue': '#cafffb', 'light light green': '#c8ffb0', 'light lilac': '#edc8ff', 'light lime': '#aefd6c', 'light lime green': '#b9ff66', 'light magenta': '#fa5ff7', 'light maroon': '#a24857', 'light mauve': '#c292a1', 'light mint': '#b6ffbb', 'light mint green': '#a6fbb2', 'light moss green': '#a6c875', 'light mustard': '#f7d560', 'light navy': '#155084', 'light navy blue': '#2e5a88', 'light neon green': '#4efd54', 'light olive': '#acbf69', 'light olive green': '#a4be5c', 'light 
orange': '#fdaa48', 'light pastel green': '#b2fba5', 'light pea green': '#c4fe82', 'light peach': '#ffd8b1', 'light periwinkle': '#c1c6fc', 'light pink': '#ffd1df', 'light plum': '#9d5783', 'light purple': '#bf77f6', 'light red': '#ff474c', 'light rose': '#ffc5cb', 'light royal blue': '#3a2efe', 'light sage': '#bcecac', 'light salmon': '#fea993', 'light sea green': '#98f6b0', 'light seafoam': '#a0febf', 'light seafoam green': '#a7ffb5', 'light sky blue': '#c6fcff', 'light tan': '#fbeeac', 'light teal': '#90e4c1', 'light turquoise': '#7ef4cc', 'light urple': '#b36ff6', 'light violet': '#d6b4fc', 'light yellow': '#fffe7a', 'light yellow green': '#ccfd7f', 'light yellowish green': '#c2ff89', 'lightblue': '#7bc8f6', 'lighter green': '#75fd63', 'lighter purple': '#a55af4', 'lightgreen': '#76ff7b', 'lightish blue': '#3d7afd', 'lightish green': '#61e160', 'lightish purple': '#a552e6', 'lightish red': '#fe2f4a', 'lilac': '#cea2fd', 'liliac': '#c48efd', 'lime': '#aaff32', 'lime green': '#89fe05', 'lime yellow': '#d0fe1d', 'lipstick': '#d5174e', 'lipstick red': '#c0022f', 'macaroni and cheese': '#efb435', 'magenta': '#c20078', 'mahogany': '#4a0100', 'maize': '#f4d054', 'mango': '#ffa62b', 'manilla': '#fffa86', 'marigold': '#fcc006', 'marine': '#042e60', 'marine blue': '#01386a', 'maroon': '#650021', 'mauve': '#ae7181', 'medium blue': '#2c6fbb', 'medium brown': '#7f5112', 'medium green': '#39ad48', 'medium grey': '#7d7f7c', 'medium pink': '#f36196', 'medium purple': '#9e43a2', 'melon': '#ff7855', 'merlot': '#730039', 'metallic blue': '#4f738e', 'mid blue': '#276ab3', 'mid green': '#50a747', 'midnight': '#03012d', 'midnight blue': '#020035', 'midnight purple': '#280137', 'military green': '#667c3e', 'milk chocolate': '#7f4e1e', 'mint': '#9ffeb0', 'mint green': '#8fff9f', 'minty green': '#0bf77d', 'mocha': '#9d7651', 'moss': '#769958', 'moss green': '#658b38', 'mossy green': '#638b27', 'mud': '#735c12', 'mud brown': '#60460f', 'mud green': '#606602', 'muddy brown': '#886806', 'muddy green': '#657432', 'muddy yellow': '#bfac05', 'mulberry': '#920a4e', 'murky green': '#6c7a0e', 'mushroom': '#ba9e88', 'mustard': '#ceb301', 'mustard brown': '#ac7e04', 'mustard green': '#a8b504', 'mustard yellow': '#d2bd0a', 'muted blue': '#3b719f', 'muted green': '#5fa052', 'muted pink': '#d1768f', 'muted purple': '#805b87', 'nasty green': '#70b23f', 'navy': '#01153e', 'navy blue': '#001146', 'navy green': '#35530a', 'neon blue': '#04d9ff', 'neon green': '#0cff0c', 'neon pink': '#fe019a', 'neon purple': '#bc13fe', 'neon red': '#ff073a', 'neon yellow': '#cfff04', 'nice blue': '#107ab0', 'night blue': '#040348', 'ocean': '#017b92', 'ocean blue': '#03719c', 'ocean green': '#3d9973', 'ocher': '#bf9b0c', 'ochre': '#bf9005', 'ocre': '#c69c04', 'off blue': '#5684ae', 'off green': '#6ba353', 'off white': '#ffffe4', 'off yellow': '#f1f33f', 'old pink': '#c77986', 'old rose': '#c87f89', 'olive': '#6e750e', 'olive brown': '#645403', 'olive drab': '#6f7632', 'olive green': '#677a04', 'olive yellow': '#c2b709', 'orange': '#f97306', 'orange brown': '#be6400', 'orange pink': '#ff6f52', 'orange red': '#fd411e', 'orange yellow': '#ffad01', 'orangeish': '#fd8d49', 'orangered': '#fe420f', 'orangey brown': '#b16002', 'orangey red': '#fa4224', 'orangey yellow': '#fdb915', 'orangish': '#fc824a', 'orangish brown': '#b25f03', 'orangish red': '#f43605', 'orchid': '#c875c4', 'pale': '#fff9d0', 'pale aqua': '#b8ffeb', 'pale blue': '#d0fefe', 'pale brown': '#b1916e', 'pale cyan': '#b7fffa', 'pale gold': '#fdde6c', 'pale green': '#c7fdb5', 'pale 
grey': '#fdfdfe', 'pale lavender': '#eecffe', 'pale light green': '#b1fc99', 'pale lilac': '#e4cbff', 'pale lime': '#befd73', 'pale lime green': '#b1ff65', 'pale magenta': '#d767ad', 'pale mauve': '#fed0fc', 'pale olive': '#b9cc81', 'pale olive green': '#b1d27b', 'pale orange': '#ffa756', 'pale peach': '#ffe5ad', 'pale pink': '#ffcfdc', 'pale purple': '#b790d4', 'pale red': '#d9544d', 'pale rose': '#fdc1c5', 'pale salmon': '#ffb19a', 'pale sky blue': '#bdf6fe', 'pale teal': '#82cbb2', 'pale turquoise': '#a5fbd5', 'pale violet': '#ceaefa', 'pale yellow': '#ffff84', 'parchment': '#fefcaf', 'pastel blue': '#a2bffe', 'pastel green': '#b0ff9d', 'pastel orange': '#ff964f', 'pastel pink': '#ffbacd', 'pastel purple': '#caa0ff', 'pastel red': '#db5856', 'pastel yellow': '#fffe71', 'pea': '#a4bf20', 'pea green': '#8eab12', 'pea soup': '#929901', 'pea soup green': '#94a617', 'peach': '#ffb07c', 'peachy pink': '#ff9a8a', 'peacock blue': '#016795', 'pear': '#cbf85f', 'periwinkle': '#8e82fe', 'periwinkle blue': '#8f99fb', 'perrywinkle': '#8f8ce7', 'petrol': '#005f6a', 'pig pink': '#e78ea5', 'pine': '#2b5d34', 'pine green': '#0a481e', 'pink': '#ff81c0', 'pink purple': '#db4bda', 'pink red': '#f5054f', 'pink/purple': '#ef1de7', 'pinkish': '#d46a7e', 'pinkish brown': '#b17261', 'pinkish grey': '#c8aca9', 'pinkish orange': '#ff724c', 'pinkish purple': '#d648d7', 'pinkish red': '#f10c45', 'pinkish tan': '#d99b82', 'pinky': '#fc86aa', 'pinky purple': '#c94cbe', 'pinky red': '#fc2647', 'piss yellow': '#ddd618', 'pistachio': '#c0fa8b', 'plum': '#580f41', 'plum purple': '#4e0550', 'poison green': '#40fd14', 'poo': '#8f7303', 'poo brown': '#885f01', 'poop': '#7f5e00', 'poop brown': '#7a5901', 'poop green': '#6f7c00', 'powder blue': '#b1d1fc', 'powder pink': '#ffb2d0', 'primary blue': '#0804f9', 'prussian blue': '#004577', 'puce': '#a57e52', 'puke': '#a5a502', 'puke brown': '#947706', 'puke green': '#9aae07', 'puke yellow': '#c2be0e', 'pumpkin': '#e17701', 'pumpkin orange': '#fb7d07', 'pure blue': '#0203e2', 'purple': '#7e1e9c', 'purple blue': '#632de9', 'purple brown': '#673a3f', 'purple grey': '#866f85', 'purple pink': '#e03fd8', 'purple red': '#990147', 'purple/blue': '#5d21d0', 'purple/pink': '#d725de', 'purpleish': '#98568d', 'purpleish blue': '#6140ef', 'purpleish pink': '#df4ec8', 'purpley': '#8756e4', 'purpley blue': '#5f34e7', 'purpley grey': '#947e94', 'purpley pink': '#c83cb9', 'purplish': '#94568c', 'purplish blue': '#601ef9', 'purplish brown': '#6b4247', 'purplish grey': '#7a687f', 'purplish pink': '#ce5dae', 'purplish red': '#b0054b', 'purply': '#983fb2', 'purply blue': '#661aee', 'purply pink': '#f075e6', 'putty': '#beae8a', 'racing green': '#014600', 'radioactive green': '#2cfa1f', 'raspberry': '#b00149', 'raw sienna': '#9a6200', 'raw umber': '#a75e09', 'really light blue': '#d4ffff', 'red': '#e50000', 'red brown': '#8b2e16', 'red orange': '#fd3c06', 'red pink': '#fa2a55', 'red purple': '#820747', 'red violet': '#9e0168', 'red wine': '#8c0034', 'reddish': '#c44240', 'reddish brown': '#7f2b0a', 'reddish grey': '#997570', 'reddish orange': '#f8481c', 'reddish pink': '#fe2c54', 'reddish purple': '#910951', 'reddy brown': '#6e1005', 'rich blue': '#021bf9', 'rich purple': '#720058', 'robin egg blue': '#8af1fe', "robin's egg": '#6dedfd', "robin's egg blue": '#98eff9', 'rosa': '#fe86a4', 'rose': '#cf6275', 'rose pink': '#f7879a', 'rose red': '#be013c', 'rosy pink': '#f6688e', 'rouge': '#ab1239', 'royal': '#0c1793', 'royal blue': '#0504aa', 'royal purple': '#4b006e', 'ruby': '#ca0147', 'russet': '#a13905', 
'rust': '#a83c09', 'rust brown': '#8b3103', 'rust orange': '#c45508', 'rust red': '#aa2704', 'rusty orange': '#cd5909', 'rusty red': '#af2f0d', 'saffron': '#feb209', 'sage': '#87ae73', 'sage green': '#88b378', 'salmon': '#ff796c', 'salmon pink': '#fe7b7c', 'sand': '#e2ca76', 'sand brown': '#cba560', 'sand yellow': '#fce166', 'sandstone': '#c9ae74', 'sandy': '#f1da7a', 'sandy brown': '#c4a661', 'sandy yellow': '#fdee73', 'sap green': '#5c8b15', 'sapphire': '#2138ab', 'scarlet': '#be0119', 'sea': '#3c9992', 'sea blue': '#047495', 'sea green': '#53fca1', 'seafoam': '#80f9ad', 'seafoam blue': '#78d1b6', 'seafoam green': '#7af9ab', 'seaweed': '#18d17b', 'seaweed green': '#35ad6b', 'sepia': '#985e2b', 'shamrock': '#01b44c', 'shamrock green': '#02c14d', 'shit': '#7f5f00', 'shit brown': '#7b5804', 'shit green': '#758000', 'shocking pink': '#fe02a2', 'sick green': '#9db92c', 'sickly green': '#94b21c', 'sickly yellow': '#d0e429', 'sienna': '#a9561e', 'silver': '#c5c9c7', 'sky': '#82cafc', 'sky blue': '#75bbfd', 'slate': '#516572', 'slate blue': '#5b7c99', 'slate green': '#658d6d', 'slate grey': '#59656d', 'slime green': '#99cc04', 'snot': '#acbb0d', 'snot green': '#9dc100', 'soft blue': '#6488ea', 'soft green': '#6fc276', 'soft pink': '#fdb0c0', 'soft purple': '#a66fb5', 'spearmint': '#1ef876', 'spring green': '#a9f971', 'spruce': '#0a5f38', 'squash': '#f2ab15', 'steel': '#738595', 'steel blue': '#5a7d9a', 'steel grey': '#6f828a', 'stone': '#ada587', 'stormy blue': '#507b9c', 'straw': '#fcf679', 'strawberry': '#fb2943', 'strong blue': '#0c06f7', 'strong pink': '#ff0789', 'sun yellow': '#ffdf22', 'sunflower': '#ffc512', 'sunflower yellow': '#ffda03', 'sunny yellow': '#fff917', 'sunshine yellow': '#fffd37', 'swamp': '#698339', 'swamp green': '#748500', 'tan': '#d1b26f', 'tan brown': '#ab7e4c', 'tan green': '#a9be70', 'tangerine': '#ff9408', 'taupe': '#b9a281', 'tea': '#65ab7c', 'tea green': '#bdf8a3', 'teal': '#029386', 'teal blue': '#01889f', 'teal green': '#25a36f', 'tealish': '#24bca8', 'tealish green': '#0cdc73', 'terra cotta': '#c9643b', 'terracota': '#cb6843', 'terracotta': '#ca6641', 'tiffany blue': '#7bf2da', 'tomato': '#ef4026', 'tomato red': '#ec2d01', 'topaz': '#13bbaf', 'toupe': '#c7ac7d', 'toxic green': '#61de2a', 'tree green': '#2a7e19', 'true blue': '#010fcc', 'true green': '#089404', 'turquoise': '#06c2ac', 'turquoise blue': '#06b1c4', 'turquoise green': '#04f489', 'turtle green': '#75b84f', 'twilight': '#4e518b', 'twilight blue': '#0a437a', 'ugly blue': '#31668a', 'ugly brown': '#7d7103', 'ugly green': '#7a9703', 'ugly pink': '#cd7584', 'ugly purple': '#a442a0', 'ugly yellow': '#d0c101', 'ultramarine': '#2000b1', 'ultramarine blue': '#1805db', 'umber': '#b26400', 'velvet': '#750851', 'vermillion': '#f4320c', 'very dark blue': '#000133', 'very dark brown': '#1d0200', 'very dark green': '#062e03', 'very dark purple': '#2a0134', 'very light blue': '#d5ffff', 'very light brown': '#d3b683', 'very light green': '#d1ffbd', 'very light pink': '#fff4f2', 'very light purple': '#f6cefc', 'very pale blue': '#d6fffe', 'very pale green': '#cffdbc', 'vibrant blue': '#0339f8', 'vibrant green': '#0add08', 'vibrant purple': '#ad03de', 'violet': '#9a0eea', 'violet blue': '#510ac9', 'violet pink': '#fb5ffc', 'violet red': '#a50055', 'viridian': '#1e9167', 'vivid blue': '#152eff', 'vivid green': '#2fef10', 'vivid purple': '#9900fa', 'vomit': '#a2a415', 'vomit green': '#89a203', 'vomit yellow': '#c7c10c', 'warm blue': '#4b57db', 'warm brown': '#964e02', 'warm grey': '#978a84', 'warm pink': '#fb5581', 'warm 
purple': '#952e8f', 'washed out green': '#bcf5a6', 'water blue': '#0e87cc', 'watermelon': '#fd4659', 'weird green': '#3ae57f', 'wheat': '#fbdd7e', 'white': '#ffffff', 'windows blue': '#3778bf', 'wine': '#80013f', 'wine red': '#7b0323', 'wintergreen': '#20f986', 'wisteria': '#a87dc2', 'yellow': '#ffff14', 'yellow brown': '#b79400', 'yellow green': '#c0fb2d', 'yellow ochre': '#cb9d06', 'yellow orange': '#fcb001', 'yellow tan': '#ffe36e', 'yellow/green': '#c8fd3d', 'yellowgreen': '#bbf90f', 'yellowish': '#faee66', 'yellowish brown': '#9b7a01', 'yellowish green': '#b0dd16', 'yellowish orange': '#ffab0f', 'yellowish tan': '#fcfc81', 'yellowy brown': '#ae8b0c', 'yellowy green': '#bff128'} seaborn-0.10.0/seaborn/conftest.py000066400000000000000000000003341361256634400170630ustar00rootroot00000000000000import numpy as np import matplotlib.pyplot as plt import pytest @pytest.fixture(autouse=True) def close_figs(): yield plt.close("all") @pytest.fixture(autouse=True) def random_seed(): np.random.seed(47) seaborn-0.10.0/seaborn/distributions.py000066400000000000000000000605431361256634400201500ustar00rootroot00000000000000"""Plotting functions for visualizing distributions.""" from __future__ import division import numpy as np from scipy import stats import pandas as pd import matplotlib as mpl import matplotlib.pyplot as plt import matplotlib.transforms as tx from matplotlib.collections import LineCollection import warnings from distutils.version import LooseVersion try: import statsmodels.nonparametric.api as smnp _has_statsmodels = True except ImportError: _has_statsmodels = False from .utils import iqr, _kde_support, remove_na from .palettes import color_palette, light_palette, dark_palette, blend_palette __all__ = ["distplot", "kdeplot", "rugplot"] def _freedman_diaconis_bins(a): """Calculate number of hist bins using Freedman-Diaconis rule.""" # From https://stats.stackexchange.com/questions/798/ a = np.asarray(a) if len(a) < 2: return 1 h = 2 * iqr(a) / (len(a) ** (1 / 3)) # fall back to sqrt(a) bins if iqr is 0 if h == 0: return int(np.sqrt(a.size)) else: return int(np.ceil((a.max() - a.min()) / h)) def distplot(a, bins=None, hist=True, kde=True, rug=False, fit=None, hist_kws=None, kde_kws=None, rug_kws=None, fit_kws=None, color=None, vertical=False, norm_hist=False, axlabel=None, label=None, ax=None): """Flexibly plot a univariate distribution of observations. This function combines the matplotlib ``hist`` function (with automatic calculation of a good default bin size) with the seaborn :func:`kdeplot` and :func:`rugplot` functions. It can also fit ``scipy.stats`` distributions and plot the estimated PDF over the data. Parameters ---------- a : Series, 1d-array, or list. Observed data. If this is a Series object with a ``name`` attribute, the name will be used to label the data axis. bins : argument for matplotlib hist(), or None, optional Specification of hist bins. If unspecified, as reference rule is used that tries to find a useful default. hist : bool, optional Whether to plot a (normed) histogram. kde : bool, optional Whether to plot a gaussian kernel density estimate. rug : bool, optional Whether to draw a rugplot on the support axis. fit : random variable object, optional An object with `fit` method, returning a tuple that can be passed to a `pdf` method a positional arguments following a grid of values to evaluate the pdf on. hist_kws : dict, optional Keyword arguments for :meth:`matplotlib.axes.Axes.hist`. kde_kws : dict, optional Keyword arguments for :func:`kdeplot`. 
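As a concrete illustration of the _freedman_diaconis_bins helper defined above: the rule takes the bin width as twice the interquartile range divided by the cube root of the sample size, then divides the data range by that width. A small sketch with arbitrary made-up data; scipy.stats.iqr is used here as a stand-in for seaborn's own utils.iqr, which computes the same quantity:

import numpy as np
from scipy.stats import iqr

a = np.random.randn(1000)                  # arbitrary sample, for illustration only
h = 2 * iqr(a) / (len(a) ** (1 / 3))       # Freedman-Diaconis bin width
n_bins = int(np.ceil((a.max() - a.min()) / h))

# distplot caps the automatic choice at 50 bins:
# bins = min(_freedman_diaconis_bins(a), 50)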
rug_kws : dict, optional Keyword arguments for :func:`rugplot`. color : matplotlib color, optional Color to plot everything but the fitted curve in. vertical : bool, optional If True, observed values are on y-axis. norm_hist : bool, optional If True, the histogram height shows a density rather than a count. This is implied if a KDE or fitted density is plotted. axlabel : string, False, or None, optional Name for the support axis label. If None, will try to get it from a.name if False, do not set a label. label : string, optional Legend label for the relevant component of the plot. ax : matplotlib axis, optional If provided, plot on this axis. Returns ------- ax : matplotlib Axes Returns the Axes object with the plot for further tweaking. See Also -------- kdeplot : Show a univariate or bivariate distribution with a kernel density estimate. rugplot : Draw small vertical lines to show each observation in a distribution. Examples -------- Show a default plot with a kernel density estimate and histogram with bin size determined automatically with a reference rule: .. plot:: :context: close-figs >>> import seaborn as sns, numpy as np >>> sns.set(); np.random.seed(0) >>> x = np.random.randn(100) >>> ax = sns.distplot(x) Use Pandas objects to get an informative axis label: .. plot:: :context: close-figs >>> import pandas as pd >>> x = pd.Series(x, name="x variable") >>> ax = sns.distplot(x) Plot the distribution with a kernel density estimate and rug plot: .. plot:: :context: close-figs >>> ax = sns.distplot(x, rug=True, hist=False) Plot the distribution with a histogram and maximum likelihood gaussian distribution fit: .. plot:: :context: close-figs >>> from scipy.stats import norm >>> ax = sns.distplot(x, fit=norm, kde=False) Plot the distribution on the vertical axis: .. plot:: :context: close-figs >>> ax = sns.distplot(x, vertical=True) Change the color of all the plot elements: .. plot:: :context: close-figs >>> sns.set_color_codes() >>> ax = sns.distplot(x, color="y") Pass specific parameters to the underlying plot functions: .. plot:: :context: close-figs >>> ax = sns.distplot(x, rug=True, rug_kws={"color": "g"}, ... kde_kws={"color": "k", "lw": 3, "label": "KDE"}, ... hist_kws={"histtype": "step", "linewidth": 3, ... 
"alpha": 1, "color": "g"}) """ if ax is None: ax = plt.gca() # Intelligently label the support axis label_ax = bool(axlabel) if axlabel is None and hasattr(a, "name"): axlabel = a.name if axlabel is not None: label_ax = True # Make a a 1-d float array a = np.asarray(a, np.float) if a.ndim > 1: a = a.squeeze() # Drop null values from array a = remove_na(a) # Decide if the hist is normed norm_hist = norm_hist or kde or (fit is not None) # Handle dictionary defaults hist_kws = {} if hist_kws is None else hist_kws.copy() kde_kws = {} if kde_kws is None else kde_kws.copy() rug_kws = {} if rug_kws is None else rug_kws.copy() fit_kws = {} if fit_kws is None else fit_kws.copy() # Get the color from the current color cycle if color is None: if vertical: line, = ax.plot(0, a.mean()) else: line, = ax.plot(a.mean(), 0) color = line.get_color() line.remove() # Plug the label into the right kwarg dictionary if label is not None: if hist: hist_kws["label"] = label elif kde: kde_kws["label"] = label elif rug: rug_kws["label"] = label elif fit: fit_kws["label"] = label if hist: if bins is None: bins = min(_freedman_diaconis_bins(a), 50) hist_kws.setdefault("alpha", 0.4) if LooseVersion(mpl.__version__) < LooseVersion("2.2"): hist_kws.setdefault("normed", norm_hist) else: hist_kws.setdefault("density", norm_hist) orientation = "horizontal" if vertical else "vertical" hist_color = hist_kws.pop("color", color) ax.hist(a, bins, orientation=orientation, color=hist_color, **hist_kws) if hist_color != color: hist_kws["color"] = hist_color if kde: kde_color = kde_kws.pop("color", color) kdeplot(a, vertical=vertical, ax=ax, color=kde_color, **kde_kws) if kde_color != color: kde_kws["color"] = kde_color if rug: rug_color = rug_kws.pop("color", color) axis = "y" if vertical else "x" rugplot(a, axis=axis, ax=ax, color=rug_color, **rug_kws) if rug_color != color: rug_kws["color"] = rug_color if fit is not None: def pdf(x): return fit.pdf(x, *params) fit_color = fit_kws.pop("color", "#282828") gridsize = fit_kws.pop("gridsize", 200) cut = fit_kws.pop("cut", 3) clip = fit_kws.pop("clip", (-np.inf, np.inf)) bw = stats.gaussian_kde(a).scotts_factor() * a.std(ddof=1) x = _kde_support(a, bw, gridsize, cut, clip) params = fit.fit(a) y = pdf(x) if vertical: x, y = y, x ax.plot(x, y, color=fit_color, **fit_kws) if fit_color != "#282828": fit_kws["color"] = fit_color if label_ax: if vertical: ax.set_ylabel(axlabel) else: ax.set_xlabel(axlabel) return ax def _univariate_kdeplot(data, shade, vertical, kernel, bw, gridsize, cut, clip, legend, ax, cumulative=False, **kwargs): """Plot a univariate kernel density estimate on one of the axes.""" # Sort out the clipping if clip is None: clip = (-np.inf, np.inf) # Preprocess the data data = remove_na(data) # Calculate the KDE if np.nan_to_num(data.var()) == 0: # Don't try to compute KDE on singular data msg = "Data must have variance to compute a kernel density estimate." warnings.warn(msg, UserWarning) x, y = np.array([]), np.array([]) elif _has_statsmodels: # Prefer using statsmodels for kernel flexibility x, y = _statsmodels_univariate_kde(data, kernel, bw, gridsize, cut, clip, cumulative=cumulative) else: # Fall back to scipy if missing statsmodels if kernel != "gau": kernel = "gau" msg = "Kernel other than `gau` requires statsmodels." warnings.warn(msg, UserWarning) if cumulative: raise ImportError("Cumulative distributions are currently " "only implemented in statsmodels. 
" "Please install statsmodels.") x, y = _scipy_univariate_kde(data, bw, gridsize, cut, clip) # Make sure the density is nonnegative y = np.amax(np.c_[np.zeros_like(y), y], axis=1) # Flip the data if the plot should be on the y axis if vertical: x, y = y, x # Check if a label was specified in the call label = kwargs.pop("label", None) # Otherwise check if the data object has a name if label is None and hasattr(data, "name"): label = data.name # Decide if we're going to add a legend legend = label is not None and legend label = "_nolegend_" if label is None else label # Use the active color cycle to find the plot color facecolor = kwargs.pop("facecolor", None) line, = ax.plot(x, y, **kwargs) color = line.get_color() line.remove() kwargs.pop("color", None) facecolor = color if facecolor is None else facecolor # Draw the KDE plot and, optionally, shade ax.plot(x, y, color=color, label=label, **kwargs) shade_kws = dict( facecolor=facecolor, alpha=kwargs.get("alpha", 0.25), clip_on=kwargs.get("clip_on", True), zorder=kwargs.get("zorder", 1), ) if shade: if vertical: ax.fill_betweenx(y, 0, x, **shade_kws) else: ax.fill_between(x, 0, y, **shade_kws) # Set the density axis minimum to 0 if vertical: ax.set_xlim(0, auto=None) else: ax.set_ylim(0, auto=None) # Draw the legend here handles, labels = ax.get_legend_handles_labels() if legend and handles: ax.legend(loc="best") return ax def _statsmodels_univariate_kde(data, kernel, bw, gridsize, cut, clip, cumulative=False): """Compute a univariate kernel density estimate using statsmodels.""" fft = kernel == "gau" kde = smnp.KDEUnivariate(data) kde.fit(kernel, bw, fft, gridsize=gridsize, cut=cut, clip=clip) if cumulative: grid, y = kde.support, kde.cdf else: grid, y = kde.support, kde.density return grid, y def _scipy_univariate_kde(data, bw, gridsize, cut, clip): """Compute a univariate kernel density estimate using scipy.""" try: kde = stats.gaussian_kde(data, bw_method=bw) except TypeError: kde = stats.gaussian_kde(data) if bw != "scott": # scipy default msg = ("Ignoring bandwidth choice, " "please upgrade scipy to use a different bandwidth.") warnings.warn(msg, UserWarning) if isinstance(bw, str): bw = "scotts" if bw == "scott" else bw bw = getattr(kde, "%s_factor" % bw)() * np.std(data) grid = _kde_support(data, bw, gridsize, cut, clip) y = kde(grid) return grid, y def _bivariate_kdeplot(x, y, filled, fill_lowest, kernel, bw, gridsize, cut, clip, axlabel, cbar, cbar_ax, cbar_kws, ax, **kwargs): """Plot a joint KDE estimate as a bivariate contour plot.""" # Determine the clipping if clip is None: clip = [(-np.inf, np.inf), (-np.inf, np.inf)] elif np.ndim(clip) == 1: clip = [clip, clip] # Calculate the KDE if _has_statsmodels: xx, yy, z = _statsmodels_bivariate_kde(x, y, bw, gridsize, cut, clip) else: xx, yy, z = _scipy_bivariate_kde(x, y, bw, gridsize, cut, clip) # Plot the contours n_levels = kwargs.pop("n_levels", 10) scout, = ax.plot([], []) default_color = scout.get_color() scout.remove() cmap = kwargs.pop("cmap", None) color = kwargs.pop("color", None) if cmap is None and "colors" not in kwargs: if color is None: color = default_color if filled: cmap = light_palette(color, as_cmap=True) else: cmap = dark_palette(color, as_cmap=True) if isinstance(cmap, str): if cmap.endswith("_d"): pal = ["#333333"] pal.extend(color_palette(cmap.replace("_d", "_r"), 2)) cmap = blend_palette(pal, as_cmap=True) else: cmap = mpl.cm.get_cmap(cmap) label = kwargs.pop("label", None) kwargs["cmap"] = cmap contour_func = ax.contourf if filled else ax.contour cset = 
contour_func(xx, yy, z, n_levels, **kwargs) if filled and not fill_lowest: cset.collections[0].set_alpha(0) kwargs["n_levels"] = n_levels if cbar: cbar_kws = {} if cbar_kws is None else cbar_kws ax.figure.colorbar(cset, cbar_ax, ax, **cbar_kws) # Label the axes if hasattr(x, "name") and axlabel: ax.set_xlabel(x.name) if hasattr(y, "name") and axlabel: ax.set_ylabel(y.name) if label is not None: legend_color = cmap(.95) if color is None else color if filled: ax.fill_between([], [], color=legend_color, label=label) else: ax.plot([], [], color=legend_color, label=label) return ax def _statsmodels_bivariate_kde(x, y, bw, gridsize, cut, clip): """Compute a bivariate kde using statsmodels.""" if isinstance(bw, str): bw_func = getattr(smnp.bandwidths, "bw_" + bw) x_bw = bw_func(x) y_bw = bw_func(y) bw = [x_bw, y_bw] elif np.isscalar(bw): bw = [bw, bw] if isinstance(x, pd.Series): x = x.values if isinstance(y, pd.Series): y = y.values kde = smnp.KDEMultivariate([x, y], "cc", bw) x_support = _kde_support(x, kde.bw[0], gridsize, cut, clip[0]) y_support = _kde_support(y, kde.bw[1], gridsize, cut, clip[1]) xx, yy = np.meshgrid(x_support, y_support) z = kde.pdf([xx.ravel(), yy.ravel()]).reshape(xx.shape) return xx, yy, z def _scipy_bivariate_kde(x, y, bw, gridsize, cut, clip): """Compute a bivariate kde using scipy.""" data = np.c_[x, y] kde = stats.gaussian_kde(data.T, bw_method=bw) data_std = data.std(axis=0, ddof=1) if isinstance(bw, str): bw = "scotts" if bw == "scott" else bw bw_x = getattr(kde, "%s_factor" % bw)() * data_std[0] bw_y = getattr(kde, "%s_factor" % bw)() * data_std[1] elif np.isscalar(bw): bw_x, bw_y = bw, bw else: msg = ("Cannot specify a different bandwidth for each dimension " "with the scipy backend. You should install statsmodels.") raise ValueError(msg) x_support = _kde_support(data[:, 0], bw_x, gridsize, cut, clip[0]) y_support = _kde_support(data[:, 1], bw_y, gridsize, cut, clip[1]) xx, yy = np.meshgrid(x_support, y_support) z = kde([xx.ravel(), yy.ravel()]).reshape(xx.shape) return xx, yy, z def kdeplot(data, data2=None, shade=False, vertical=False, kernel="gau", bw="scott", gridsize=100, cut=3, clip=None, legend=True, cumulative=False, shade_lowest=True, cbar=False, cbar_ax=None, cbar_kws=None, ax=None, **kwargs): """Fit and plot a univariate or bivariate kernel density estimate. Parameters ---------- data : 1d array-like Input data. data2: 1d array-like, optional Second input data. If present, a bivariate KDE will be estimated. shade : bool, optional If True, shade in the area under the KDE curve (or draw with filled contours when data is bivariate). vertical : bool, optional If True, density is on x-axis. kernel : {'gau' | 'cos' | 'biw' | 'epa' | 'tri' | 'triw' }, optional Code for shape of kernel to fit with. Bivariate KDE can only use gaussian kernel. bw : {'scott' | 'silverman' | scalar | pair of scalars }, optional Name of reference method to determine kernel size, scalar factor, or scalar for each dimension of the bivariate plot. Note that the underlying computational libraries have different interperetations for this parameter: ``statsmodels`` uses it directly, but ``scipy`` treats it as a scaling factor for the standard deviation of the data. gridsize : int, optional Number of discrete points in the evaluation grid. cut : scalar, optional Draw the estimate to cut * bw from the extreme data points. clip : pair of scalars, or pair of pair of scalars, optional Lower and upper bounds for datapoints used to fit KDE. 
Can provide a pair of (low, high) bounds for bivariate plots. legend : bool, optional If True, add a legend or label the axes when possible. cumulative : bool, optional If True, draw the cumulative distribution estimated by the kde. shade_lowest : bool, optional If True, shade the lowest contour of a bivariate KDE plot. Not relevant when drawing a univariate plot or when ``shade=False``. Setting this to ``False`` can be useful when you want multiple densities on the same Axes. cbar : bool, optional If True and drawing a bivariate KDE plot, add a colorbar. cbar_ax : matplotlib axes, optional Existing axes to draw the colorbar onto, otherwise space is taken from the main axes. cbar_kws : dict, optional Keyword arguments for ``fig.colorbar()``. ax : matplotlib axes, optional Axes to plot on, otherwise uses current axes. kwargs : key, value pairings Other keyword arguments are passed to ``plt.plot()`` or ``plt.contour{f}`` depending on whether a univariate or bivariate plot is being drawn. Returns ------- ax : matplotlib Axes Axes with plot. See Also -------- distplot: Flexibly plot a univariate distribution of observations. jointplot: Plot a joint dataset with bivariate and marginal distributions. Examples -------- Plot a basic univariate density: .. plot:: :context: close-figs >>> import numpy as np; np.random.seed(10) >>> import seaborn as sns; sns.set(color_codes=True) >>> mean, cov = [0, 2], [(1, .5), (.5, 1)] >>> x, y = np.random.multivariate_normal(mean, cov, size=50).T >>> ax = sns.kdeplot(x) Shade under the density curve and use a different color: .. plot:: :context: close-figs >>> ax = sns.kdeplot(x, shade=True, color="r") Plot a bivariate density: .. plot:: :context: close-figs >>> ax = sns.kdeplot(x, y) Use filled contours: .. plot:: :context: close-figs >>> ax = sns.kdeplot(x, y, shade=True) Use more contour levels and a different color palette: .. plot:: :context: close-figs >>> ax = sns.kdeplot(x, y, n_levels=30, cmap="Purples_d") Use a narrower bandwith: .. plot:: :context: close-figs >>> ax = sns.kdeplot(x, bw=.15) Plot the density on the vertical axis: .. plot:: :context: close-figs >>> ax = sns.kdeplot(y, vertical=True) Limit the density curve within the range of the data: .. plot:: :context: close-figs >>> ax = sns.kdeplot(x, cut=0) Add a colorbar for the contours: .. plot:: :context: close-figs >>> ax = sns.kdeplot(x, y, cbar=True) Plot two shaded bivariate densities: .. plot:: :context: close-figs >>> iris = sns.load_dataset("iris") >>> setosa = iris.loc[iris.species == "setosa"] >>> virginica = iris.loc[iris.species == "virginica"] >>> ax = sns.kdeplot(setosa.sepal_width, setosa.sepal_length, ... cmap="Reds", shade=True, shade_lowest=False) >>> ax = sns.kdeplot(virginica.sepal_width, virginica.sepal_length, ... 
cmap="Blues", shade=True, shade_lowest=False) """ if ax is None: ax = plt.gca() if isinstance(data, list): data = np.asarray(data) if len(data) == 0: return ax data = data.astype(np.float64) if data2 is not None: if isinstance(data2, list): data2 = np.asarray(data2) data2 = data2.astype(np.float64) warn = False bivariate = False if isinstance(data, np.ndarray) and np.ndim(data) > 1: warn = True bivariate = True x, y = data.T elif isinstance(data, pd.DataFrame) and np.ndim(data) > 1: warn = True bivariate = True x = data.iloc[:, 0].values y = data.iloc[:, 1].values elif data2 is not None: bivariate = True x = data y = data2 if warn: warn_msg = ("Passing a 2D dataset for a bivariate plot is deprecated " "in favor of kdeplot(x, y), and it will cause an error in " "future versions. Please update your code.") warnings.warn(warn_msg, UserWarning) if bivariate and cumulative: raise TypeError("Cumulative distribution plots are not" "supported for bivariate distributions.") if bivariate: ax = _bivariate_kdeplot(x, y, shade, shade_lowest, kernel, bw, gridsize, cut, clip, legend, cbar, cbar_ax, cbar_kws, ax, **kwargs) else: ax = _univariate_kdeplot(data, shade, vertical, kernel, bw, gridsize, cut, clip, legend, ax, cumulative=cumulative, **kwargs) return ax def rugplot(a, height=.05, axis="x", ax=None, **kwargs): """Plot datapoints in an array as sticks on an axis. Parameters ---------- a : vector 1D array of observations. height : scalar, optional Height of ticks as proportion of the axis. axis : {'x' | 'y'}, optional Axis to draw rugplot on. ax : matplotlib axes, optional Axes to draw plot into; otherwise grabs current axes. kwargs : key, value pairings Other keyword arguments are passed to ``LineCollection``. Returns ------- ax : matplotlib axes The Axes object with the plot on it. 
""" if ax is None: ax = plt.gca() a = np.asarray(a) vertical = kwargs.pop("vertical", axis == "y") alias_map = dict(linewidth="lw", linestyle="ls", color="c") for attr, alias in alias_map.items(): if alias in kwargs: kwargs[attr] = kwargs.pop(alias) kwargs.setdefault("linewidth", 1) if vertical: trans = tx.blended_transform_factory(ax.transAxes, ax.transData) xy_pairs = np.column_stack([np.tile([0, height], len(a)), np.repeat(a, 2)]) else: trans = tx.blended_transform_factory(ax.transData, ax.transAxes) xy_pairs = np.column_stack([np.repeat(a, 2), np.tile([0, height], len(a))]) line_segs = xy_pairs.reshape([len(a), 2, 2]) ax.add_collection(LineCollection(line_segs, transform=trans, **kwargs)) ax.autoscale_view(scalex=not vertical, scaley=vertical) return ax seaborn-0.10.0/seaborn/external/000077500000000000000000000000001361256634400165065ustar00rootroot00000000000000seaborn-0.10.0/seaborn/external/__init__.py000066400000000000000000000000001361256634400206050ustar00rootroot00000000000000seaborn-0.10.0/seaborn/external/husl.py000066400000000000000000000150121361256634400200320ustar00rootroot00000000000000import operator import math __version__ = "2.1.0" m = [ [3.2406, -1.5372, -0.4986], [-0.9689, 1.8758, 0.0415], [0.0557, -0.2040, 1.0570] ] m_inv = [ [0.4124, 0.3576, 0.1805], [0.2126, 0.7152, 0.0722], [0.0193, 0.1192, 0.9505] ] # Hard-coded D65 illuminant refX = 0.95047 refY = 1.00000 refZ = 1.08883 refU = 0.19784 refV = 0.46834 lab_e = 0.008856 lab_k = 903.3 # Public API def husl_to_rgb(h, s, l): return lch_to_rgb(*husl_to_lch([h, s, l])) def husl_to_hex(h, s, l): return rgb_to_hex(husl_to_rgb(h, s, l)) def rgb_to_husl(r, g, b): return lch_to_husl(rgb_to_lch(r, g, b)) def hex_to_husl(hex): return rgb_to_husl(*hex_to_rgb(hex)) def huslp_to_rgb(h, s, l): return lch_to_rgb(*huslp_to_lch([h, s, l])) def huslp_to_hex(h, s, l): return rgb_to_hex(huslp_to_rgb(h, s, l)) def rgb_to_huslp(r, g, b): return lch_to_huslp(rgb_to_lch(r, g, b)) def hex_to_huslp(hex): return rgb_to_huslp(*hex_to_rgb(hex)) def lch_to_rgb(l, c, h): return xyz_to_rgb(luv_to_xyz(lch_to_luv([l, c, h]))) def rgb_to_lch(r, g, b): return luv_to_lch(xyz_to_luv(rgb_to_xyz([r, g, b]))) def max_chroma(L, H): hrad = math.radians(H) sinH = (math.sin(hrad)) cosH = (math.cos(hrad)) sub1 = (math.pow(L + 16, 3.0) / 1560896.0) sub2 = sub1 if sub1 > 0.008856 else (L / 903.3) result = float("inf") for row in m: m1 = row[0] m2 = row[1] m3 = row[2] top = ((0.99915 * m1 + 1.05122 * m2 + 1.14460 * m3) * sub2) rbottom = (0.86330 * m3 - 0.17266 * m2) lbottom = (0.12949 * m3 - 0.38848 * m1) bottom = (rbottom * sinH + lbottom * cosH) * sub2 for t in (0.0, 1.0): C = (L * (top - 1.05122 * t) / (bottom + 0.17266 * sinH * t)) if C > 0.0 and C < result: result = C return result def _hrad_extremum(L): lhs = (math.pow(L, 3.0) + 48.0 * math.pow(L, 2.0) + 768.0 * L + 4096.0) / 1560896.0 rhs = 1107.0 / 125000.0 sub = lhs if lhs > rhs else 10.0 * L / 9033.0 chroma = float("inf") result = None for row in m: for limit in (0.0, 1.0): [m1, m2, m3] = row top = -3015466475.0 * m3 * sub + 603093295.0 * m2 * sub - 603093295.0 * limit bottom = 1356959916.0 * m1 * sub - 452319972.0 * m3 * sub hrad = math.atan2(top, bottom) # This is a math hack to deal with tan quadrants, I'm too lazy to figure # out how to do this properly if limit == 0.0: hrad += math.pi test = max_chroma(L, math.degrees(hrad)) if test < chroma: chroma = test result = hrad return result def max_chroma_pastel(L): H = math.degrees(_hrad_extremum(L)) return max_chroma(L, H) def dot_product(a, b): return 
sum(map(operator.mul, a, b)) def f(t): if t > lab_e: return (math.pow(t, 1.0 / 3.0)) else: return (7.787 * t + 16.0 / 116.0) def f_inv(t): if math.pow(t, 3.0) > lab_e: return (math.pow(t, 3.0)) else: return (116.0 * t - 16.0) / lab_k def from_linear(c): if c <= 0.0031308: return 12.92 * c else: return (1.055 * math.pow(c, 1.0 / 2.4) - 0.055) def to_linear(c): a = 0.055 if c > 0.04045: return (math.pow((c + a) / (1.0 + a), 2.4)) else: return (c / 12.92) def rgb_prepare(triple): ret = [] for ch in triple: ch = round(ch, 3) if ch < -0.0001 or ch > 1.0001: raise Exception("Illegal RGB value %f" % ch) if ch < 0: ch = 0 if ch > 1: ch = 1 # Fix for Python 3 which by default rounds 4.5 down to 4.0 # instead of Python 2 which is rounded to 5.0 which caused # a couple off by one errors in the tests. Tests now all pass # in Python 2 and Python 3 ret.append(int(round(ch * 255 + 0.001, 0))) return ret def hex_to_rgb(hex): if hex.startswith('#'): hex = hex[1:] r = int(hex[0:2], 16) / 255.0 g = int(hex[2:4], 16) / 255.0 b = int(hex[4:6], 16) / 255.0 return [r, g, b] def rgb_to_hex(triple): [r, g, b] = triple return '#%02x%02x%02x' % tuple(rgb_prepare([r, g, b])) def xyz_to_rgb(triple): xyz = map(lambda row: dot_product(row, triple), m) return list(map(from_linear, xyz)) def rgb_to_xyz(triple): rgbl = list(map(to_linear, triple)) return list(map(lambda row: dot_product(row, rgbl), m_inv)) def xyz_to_luv(triple): X, Y, Z = triple if X == Y == Z == 0.0: return [0.0, 0.0, 0.0] varU = (4.0 * X) / (X + (15.0 * Y) + (3.0 * Z)) varV = (9.0 * Y) / (X + (15.0 * Y) + (3.0 * Z)) L = 116.0 * f(Y / refY) - 16.0 # Black will create a divide-by-zero error if L == 0.0: return [0.0, 0.0, 0.0] U = 13.0 * L * (varU - refU) V = 13.0 * L * (varV - refV) return [L, U, V] def luv_to_xyz(triple): L, U, V = triple if L == 0: return [0.0, 0.0, 0.0] varY = f_inv((L + 16.0) / 116.0) varU = U / (13.0 * L) + refU varV = V / (13.0 * L) + refV Y = varY * refY X = 0.0 - (9.0 * Y * varU) / ((varU - 4.0) * varV - varU * varV) Z = (9.0 * Y - (15.0 * varV * Y) - (varV * X)) / (3.0 * varV) return [X, Y, Z] def luv_to_lch(triple): L, U, V = triple C = (math.pow(math.pow(U, 2) + math.pow(V, 2), (1.0 / 2.0))) hrad = (math.atan2(V, U)) H = math.degrees(hrad) if H < 0.0: H = 360.0 + H return [L, C, H] def lch_to_luv(triple): L, C, H = triple Hrad = math.radians(H) U = (math.cos(Hrad) * C) V = (math.sin(Hrad) * C) return [L, U, V] def husl_to_lch(triple): H, S, L = triple if L > 99.9999999: return [100, 0.0, H] if L < 0.00000001: return [0.0, 0.0, H] mx = max_chroma(L, H) C = mx / 100.0 * S return [L, C, H] def lch_to_husl(triple): L, C, H = triple if L > 99.9999999: return [H, 0.0, 100.0] if L < 0.00000001: return [H, 0.0, 0.0] mx = max_chroma(L, H) S = C / mx * 100.0 return [H, S, L] def huslp_to_lch(triple): H, S, L = triple if L > 99.9999999: return [100, 0.0, H] if L < 0.00000001: return [0.0, 0.0, H] mx = max_chroma_pastel(L) C = mx / 100.0 * S return [L, C, H] def lch_to_huslp(triple): L, C, H = triple if L > 99.9999999: return [H, 0.0, 100.0] if L < 0.00000001: return [H, 0.0, 0.0] mx = max_chroma_pastel(L) S = C / mx * 100.0 return [H, S, L] seaborn-0.10.0/seaborn/matrix.py000066400000000000000000001417571361256634400165610ustar00rootroot00000000000000"""Functions to visualize matrices of data.""" from __future__ import division import itertools import warnings import matplotlib as mpl from matplotlib.collections import LineCollection import matplotlib.pyplot as plt from matplotlib import gridspec import numpy as np import pandas as pd 
from scipy.cluster import hierarchy from . import cm from .axisgrid import Grid from .utils import (despine, axis_ticklabels_overlap, relative_luminance, to_utf8) __all__ = ["heatmap", "clustermap"] def _index_to_label(index): """Convert a pandas index or multiindex to an axis label.""" if isinstance(index, pd.MultiIndex): return "-".join(map(to_utf8, index.names)) else: return index.name def _index_to_ticklabels(index): """Convert a pandas index or multiindex into ticklabels.""" if isinstance(index, pd.MultiIndex): return ["-".join(map(to_utf8, i)) for i in index.values] else: return index.values def _convert_colors(colors): """Convert either a list of colors or nested lists of colors to RGB.""" to_rgb = mpl.colors.colorConverter.to_rgb if isinstance(colors, pd.DataFrame): # Convert dataframe return pd.DataFrame({col: colors[col].map(to_rgb) for col in colors}) elif isinstance(colors, pd.Series): return colors.map(to_rgb) else: try: to_rgb(colors[0]) # If this works, there is only one level of colors return list(map(to_rgb, colors)) except ValueError: # If we get here, we have nested lists return [list(map(to_rgb, l)) for l in colors] def _matrix_mask(data, mask): """Ensure that data and mask are compatabile and add missing values. Values will be plotted for cells where ``mask`` is ``False``. ``data`` is expected to be a DataFrame; ``mask`` can be an array or a DataFrame. """ if mask is None: mask = np.zeros(data.shape, np.bool) if isinstance(mask, np.ndarray): # For array masks, ensure that shape matches data then convert if mask.shape != data.shape: raise ValueError("Mask must have the same shape as data.") mask = pd.DataFrame(mask, index=data.index, columns=data.columns, dtype=np.bool) elif isinstance(mask, pd.DataFrame): # For DataFrame masks, ensure that semantic labels match data if not mask.index.equals(data.index) \ and mask.columns.equals(data.columns): err = "Mask must have the same index and columns as data." 
raise ValueError(err) # Add any cells with missing data to the mask # This works around an issue where `plt.pcolormesh` doesn't represent # missing data properly mask = mask | pd.isnull(data) return mask class _HeatMapper(object): """Draw a heatmap plot of a matrix with nice labels and colormaps.""" def __init__(self, data, vmin, vmax, cmap, center, robust, annot, fmt, annot_kws, cbar, cbar_kws, xticklabels=True, yticklabels=True, mask=None): """Initialize the plotting object.""" # We always want to have a DataFrame with semantic information # and an ndarray to pass to matplotlib if isinstance(data, pd.DataFrame): plot_data = data.values else: plot_data = np.asarray(data) data = pd.DataFrame(plot_data) # Validate the mask and convet to DataFrame mask = _matrix_mask(data, mask) plot_data = np.ma.masked_where(np.asarray(mask), plot_data) # Get good names for the rows and columns xtickevery = 1 if isinstance(xticklabels, int): xtickevery = xticklabels xticklabels = _index_to_ticklabels(data.columns) elif xticklabels is True: xticklabels = _index_to_ticklabels(data.columns) elif xticklabels is False: xticklabels = [] ytickevery = 1 if isinstance(yticklabels, int): ytickevery = yticklabels yticklabels = _index_to_ticklabels(data.index) elif yticklabels is True: yticklabels = _index_to_ticklabels(data.index) elif yticklabels is False: yticklabels = [] # Get the positions and used label for the ticks nx, ny = data.T.shape if not len(xticklabels): self.xticks = [] self.xticklabels = [] elif isinstance(xticklabels, str) and xticklabels == "auto": self.xticks = "auto" self.xticklabels = _index_to_ticklabels(data.columns) else: self.xticks, self.xticklabels = self._skip_ticks(xticklabels, xtickevery) if not len(yticklabels): self.yticks = [] self.yticklabels = [] elif isinstance(yticklabels, str) and yticklabels == "auto": self.yticks = "auto" self.yticklabels = _index_to_ticklabels(data.index) else: self.yticks, self.yticklabels = self._skip_ticks(yticklabels, ytickevery) # Get good names for the axis labels xlabel = _index_to_label(data.columns) ylabel = _index_to_label(data.index) self.xlabel = xlabel if xlabel is not None else "" self.ylabel = ylabel if ylabel is not None else "" # Determine good default values for the colormapping self._determine_cmap_params(plot_data, vmin, vmax, cmap, center, robust) # Sort out the annotations if annot is None or annot is False: annot = False annot_data = None else: if isinstance(annot, bool): annot_data = plot_data else: annot_data = np.asarray(annot) if annot_data.shape != plot_data.shape: err = "`data` and `annot` must have same shape." 
raise ValueError(err) annot = True # Save other attributes to the object self.data = data self.plot_data = plot_data self.annot = annot self.annot_data = annot_data self.fmt = fmt self.annot_kws = {} if annot_kws is None else annot_kws.copy() self.cbar = cbar self.cbar_kws = {} if cbar_kws is None else cbar_kws.copy() def _determine_cmap_params(self, plot_data, vmin, vmax, cmap, center, robust): """Use some heuristics to set good defaults for colorbar and range.""" calc_data = plot_data.data[~np.isnan(plot_data.data)] if vmin is None: vmin = np.percentile(calc_data, 2) if robust else calc_data.min() if vmax is None: vmax = np.percentile(calc_data, 98) if robust else calc_data.max() self.vmin, self.vmax = vmin, vmax # Choose default colormaps if not provided if cmap is None: if center is None: self.cmap = cm.rocket else: self.cmap = cm.icefire elif isinstance(cmap, str): self.cmap = mpl.cm.get_cmap(cmap) elif isinstance(cmap, list): self.cmap = mpl.colors.ListedColormap(cmap) else: self.cmap = cmap # Recenter a divergent colormap if center is not None: vrange = max(vmax - center, center - vmin) normlize = mpl.colors.Normalize(center - vrange, center + vrange) cmin, cmax = normlize([vmin, vmax]) cc = np.linspace(cmin, cmax, 256) self.cmap = mpl.colors.ListedColormap(self.cmap(cc)) def _annotate_heatmap(self, ax, mesh): """Add textual labels with the value in each cell.""" mesh.update_scalarmappable() height, width = self.annot_data.shape xpos, ypos = np.meshgrid(np.arange(width) + .5, np.arange(height) + .5) for x, y, m, color, val in zip(xpos.flat, ypos.flat, mesh.get_array(), mesh.get_facecolors(), self.annot_data.flat): if m is not np.ma.masked: lum = relative_luminance(color) text_color = ".15" if lum > .408 else "w" annotation = ("{:" + self.fmt + "}").format(val) text_kwargs = dict(color=text_color, ha="center", va="center") text_kwargs.update(self.annot_kws) ax.text(x, y, annotation, **text_kwargs) def _skip_ticks(self, labels, tickevery): """Return ticks and labels at evenly spaced intervals.""" n = len(labels) if tickevery == 0: ticks, labels = [], [] elif tickevery == 1: ticks, labels = np.arange(n) + .5, labels else: start, end, step = 0, n, tickevery ticks = np.arange(start, end, step) + .5 labels = labels[start:end:step] return ticks, labels def _auto_ticks(self, ax, labels, axis): """Determine ticks and ticklabels that minimize overlap.""" transform = ax.figure.dpi_scale_trans.inverted() bbox = ax.get_window_extent().transformed(transform) size = [bbox.width, bbox.height][axis] axis = [ax.xaxis, ax.yaxis][axis] tick, = axis.set_ticks([0]) fontsize = tick.label1.get_size() max_ticks = int(size // (fontsize / 72)) if max_ticks < 1: return [], [] tick_every = len(labels) // max_ticks + 1 tick_every = 1 if tick_every == 0 else tick_every ticks, labels = self._skip_ticks(labels, tick_every) return ticks, labels def plot(self, ax, cax, kws): """Draw the heatmap on the provided Axes.""" # Remove all the Axes spines despine(ax=ax, left=True, bottom=True) # Draw the heatmap mesh = ax.pcolormesh(self.plot_data, vmin=self.vmin, vmax=self.vmax, cmap=self.cmap, **kws) # Set the axis limits ax.set(xlim=(0, self.data.shape[1]), ylim=(0, self.data.shape[0])) # Invert the y axis to show the plot in matrix form ax.invert_yaxis() # Possibly add a colorbar if self.cbar: cb = ax.figure.colorbar(mesh, cax, ax, **self.cbar_kws) cb.outline.set_linewidth(0) # If rasterized is passed to pcolormesh, also rasterize the # colorbar to avoid white lines on the PDF rendering if kws.get('rasterized', False): 
cb.solids.set_rasterized(True) # Add row and column labels if isinstance(self.xticks, str) and self.xticks == "auto": xticks, xticklabels = self._auto_ticks(ax, self.xticklabels, 0) else: xticks, xticklabels = self.xticks, self.xticklabels if isinstance(self.yticks, str) and self.yticks == "auto": yticks, yticklabels = self._auto_ticks(ax, self.yticklabels, 1) else: yticks, yticklabels = self.yticks, self.yticklabels ax.set(xticks=xticks, yticks=yticks) xtl = ax.set_xticklabels(xticklabels) ytl = ax.set_yticklabels(yticklabels, rotation="vertical") # Possibly rotate them if they overlap if hasattr(ax.figure.canvas, "get_renderer"): ax.figure.draw(ax.figure.canvas.get_renderer()) if axis_ticklabels_overlap(xtl): plt.setp(xtl, rotation="vertical") if axis_ticklabels_overlap(ytl): plt.setp(ytl, rotation="horizontal") # Add the axis labels ax.set(xlabel=self.xlabel, ylabel=self.ylabel) # Annotate the cells with the formatted values if self.annot: self._annotate_heatmap(ax, mesh) def heatmap(data, vmin=None, vmax=None, cmap=None, center=None, robust=False, annot=None, fmt=".2g", annot_kws=None, linewidths=0, linecolor="white", cbar=True, cbar_kws=None, cbar_ax=None, square=False, xticklabels="auto", yticklabels="auto", mask=None, ax=None, **kwargs): """Plot rectangular data as a color-encoded matrix. This is an Axes-level function and will draw the heatmap into the currently-active Axes if none is provided to the ``ax`` argument. Part of this Axes space will be taken and used to plot a colormap, unless ``cbar`` is False or a separate Axes is provided to ``cbar_ax``. Parameters ---------- data : rectangular dataset 2D dataset that can be coerced into an ndarray. If a Pandas DataFrame is provided, the index/column information will be used to label the columns and rows. vmin, vmax : floats, optional Values to anchor the colormap, otherwise they are inferred from the data and other keyword arguments. cmap : matplotlib colormap name or object, or list of colors, optional The mapping from data values to color space. If not provided, the default will depend on whether ``center`` is set. center : float, optional The value at which to center the colormap when plotting divergant data. Using this parameter will change the default ``cmap`` if none is specified. robust : bool, optional If True and ``vmin`` or ``vmax`` are absent, the colormap range is computed with robust quantiles instead of the extreme values. annot : bool or rectangular dataset, optional If True, write the data value in each cell. If an array-like with the same shape as ``data``, then use this to annotate the heatmap instead of the data. Note that DataFrames will match on position, not index. fmt : string, optional String formatting code to use when adding annotations. annot_kws : dict of key, value mappings, optional Keyword arguments for ``ax.text`` when ``annot`` is True. linewidths : float, optional Width of the lines that will divide each cell. linecolor : color, optional Color of the lines that will divide each cell. cbar : boolean, optional Whether to draw a colorbar. cbar_kws : dict of key, value mappings, optional Keyword arguments for `fig.colorbar`. cbar_ax : matplotlib Axes, optional Axes in which to draw the colorbar, otherwise take space from the main Axes. square : boolean, optional If True, set the Axes aspect to "equal" so each cell will be square-shaped. xticklabels, yticklabels : "auto", bool, list-like, or int, optional If True, plot the column names of the dataframe. If False, don't plot the column names. 
If list-like, plot these alternate labels as the xticklabels. If an integer, use the column names but plot only every n label. If "auto", try to densely plot non-overlapping labels. mask : boolean array or DataFrame, optional If passed, data will not be shown in cells where ``mask`` is True. Cells with missing values are automatically masked. ax : matplotlib Axes, optional Axes in which to draw the plot, otherwise use the currently-active Axes. kwargs : other keyword arguments All other keyword arguments are passed to :func:`matplotlib.axes.Axes.pcolormesh`. Returns ------- ax : matplotlib Axes Axes object with the heatmap. See also -------- clustermap : Plot a matrix using hierachical clustering to arrange the rows and columns. Examples -------- Plot a heatmap for a numpy array: .. plot:: :context: close-figs >>> import numpy as np; np.random.seed(0) >>> import seaborn as sns; sns.set() >>> uniform_data = np.random.rand(10, 12) >>> ax = sns.heatmap(uniform_data) Change the limits of the colormap: .. plot:: :context: close-figs >>> ax = sns.heatmap(uniform_data, vmin=0, vmax=1) Plot a heatmap for data centered on 0 with a diverging colormap: .. plot:: :context: close-figs >>> normal_data = np.random.randn(10, 12) >>> ax = sns.heatmap(normal_data, center=0) Plot a dataframe with meaningful row and column labels: .. plot:: :context: close-figs >>> flights = sns.load_dataset("flights") >>> flights = flights.pivot("month", "year", "passengers") >>> ax = sns.heatmap(flights) Annotate each cell with the numeric value using integer formatting: .. plot:: :context: close-figs >>> ax = sns.heatmap(flights, annot=True, fmt="d") Add lines between each cell: .. plot:: :context: close-figs >>> ax = sns.heatmap(flights, linewidths=.5) Use a different colormap: .. plot:: :context: close-figs >>> ax = sns.heatmap(flights, cmap="YlGnBu") Center the colormap at a specific value: .. plot:: :context: close-figs >>> ax = sns.heatmap(flights, center=flights.loc["January", 1955]) Plot every other column label and don't plot row labels: .. plot:: :context: close-figs >>> data = np.random.randn(50, 20) >>> ax = sns.heatmap(data, xticklabels=2, yticklabels=False) Don't draw a colorbar: .. plot:: :context: close-figs >>> ax = sns.heatmap(flights, cbar=False) Use different axes for the colorbar: .. plot:: :context: close-figs >>> grid_kws = {"height_ratios": (.9, .05), "hspace": .3} >>> f, (ax, cbar_ax) = plt.subplots(2, gridspec_kw=grid_kws) >>> ax = sns.heatmap(flights, ax=ax, ... cbar_ax=cbar_ax, ... cbar_kws={"orientation": "horizontal"}) Use a mask to plot only part of a matrix .. plot:: :context: close-figs >>> corr = np.corrcoef(np.random.randn(10, 200)) >>> mask = np.zeros_like(corr) >>> mask[np.triu_indices_from(mask)] = True >>> with sns.axes_style("white"): ... f, ax = plt.subplots(figsize=(7, 5)) ... 
ax = sns.heatmap(corr, mask=mask, vmax=.3, square=True) """ # Initialize the plotter object plotter = _HeatMapper(data, vmin, vmax, cmap, center, robust, annot, fmt, annot_kws, cbar, cbar_kws, xticklabels, yticklabels, mask) # Add the pcolormesh kwargs here kwargs["linewidths"] = linewidths kwargs["edgecolor"] = linecolor # Draw the plot and return the Axes if ax is None: ax = plt.gca() if square: ax.set_aspect("equal") plotter.plot(ax, cbar_ax, kwargs) return ax class _DendrogramPlotter(object): """Object for drawing tree of similarities between data rows/columns""" def __init__(self, data, linkage, metric, method, axis, label, rotate): """Plot a dendrogram of the relationships between the columns of data Parameters ---------- data : pandas.DataFrame Rectangular data """ self.axis = axis if self.axis == 1: data = data.T if isinstance(data, pd.DataFrame): array = data.values else: array = np.asarray(data) data = pd.DataFrame(array) self.array = array self.data = data self.shape = self.data.shape self.metric = metric self.method = method self.axis = axis self.label = label self.rotate = rotate if linkage is None: self.linkage = self.calculated_linkage else: self.linkage = linkage self.dendrogram = self.calculate_dendrogram() # Dendrogram ends are always at multiples of 5, who knows why ticks = 10 * np.arange(self.data.shape[0]) + 5 if self.label: ticklabels = _index_to_ticklabels(self.data.index) ticklabels = [ticklabels[i] for i in self.reordered_ind] if self.rotate: self.xticks = [] self.yticks = ticks self.xticklabels = [] self.yticklabels = ticklabels self.ylabel = _index_to_label(self.data.index) self.xlabel = '' else: self.xticks = ticks self.yticks = [] self.xticklabels = ticklabels self.yticklabels = [] self.ylabel = '' self.xlabel = _index_to_label(self.data.index) else: self.xticks, self.yticks = [], [] self.yticklabels, self.xticklabels = [], [] self.xlabel, self.ylabel = '', '' self.dependent_coord = self.dendrogram['dcoord'] self.independent_coord = self.dendrogram['icoord'] def _calculate_linkage_scipy(self): linkage = hierarchy.linkage(self.array, method=self.method, metric=self.metric) return linkage def _calculate_linkage_fastcluster(self): import fastcluster # Fastcluster has a memory-saving vectorized version, but only # with certain linkage methods, and mostly with euclidean metric # vector_methods = ('single', 'centroid', 'median', 'ward') euclidean_methods = ('centroid', 'median', 'ward') euclidean = self.metric == 'euclidean' and self.method in \ euclidean_methods if euclidean or self.method == 'single': return fastcluster.linkage_vector(self.array, method=self.method, metric=self.metric) else: linkage = fastcluster.linkage(self.array, method=self.method, metric=self.metric) return linkage @property def calculated_linkage(self): try: return self._calculate_linkage_fastcluster() except ImportError: if np.product(self.shape) >= 10000: msg = ("Clustering large matrix with scipy. Installing " "`fastcluster` may give better performance.") warnings.warn(msg) return self._calculate_linkage_scipy() def calculate_dendrogram(self): """Calculates a dendrogram based on the linkage matrix Made a separate function, not a property because don't want to recalculate the dendrogram every time it is accessed. Returns ------- dendrogram : dict Dendrogram dictionary as returned by scipy.cluster.hierarchy .dendrogram. 
The important key-value pairing is "reordered_ind" which indicates the re-ordering of the matrix """ return hierarchy.dendrogram(self.linkage, no_plot=True, color_threshold=-np.inf) @property def reordered_ind(self): """Indices of the matrix, reordered by the dendrogram""" return self.dendrogram['leaves'] def plot(self, ax, tree_kws): """Plots a dendrogram of the similarities between data on the axes Parameters ---------- ax : matplotlib.axes.Axes Axes object upon which the dendrogram is plotted """ tree_kws = {} if tree_kws is None else tree_kws.copy() tree_kws.setdefault("linewidths", .5) tree_kws.setdefault("colors", ".2") if self.rotate and self.axis == 0: coords = zip(self.dependent_coord, self.independent_coord) else: coords = zip(self.independent_coord, self.dependent_coord) lines = LineCollection([list(zip(x, y)) for x, y in coords], **tree_kws) ax.add_collection(lines) number_of_leaves = len(self.reordered_ind) max_dependent_coord = max(map(max, self.dependent_coord)) if self.rotate: ax.yaxis.set_ticks_position('right') # Constants 10 and 1.05 come from # `scipy.cluster.hierarchy._plot_dendrogram` ax.set_ylim(0, number_of_leaves * 10) ax.set_xlim(0, max_dependent_coord * 1.05) ax.invert_xaxis() ax.invert_yaxis() else: # Constants 10 and 1.05 come from # `scipy.cluster.hierarchy._plot_dendrogram` ax.set_xlim(0, number_of_leaves * 10) ax.set_ylim(0, max_dependent_coord * 1.05) despine(ax=ax, bottom=True, left=True) ax.set(xticks=self.xticks, yticks=self.yticks, xlabel=self.xlabel, ylabel=self.ylabel) xtl = ax.set_xticklabels(self.xticklabels) ytl = ax.set_yticklabels(self.yticklabels, rotation='vertical') # Force a draw of the plot to avoid matplotlib window error if hasattr(ax.figure.canvas, "get_renderer"): ax.figure.draw(ax.figure.canvas.get_renderer()) if len(ytl) > 0 and axis_ticklabels_overlap(ytl): plt.setp(ytl, rotation="horizontal") if len(xtl) > 0 and axis_ticklabels_overlap(xtl): plt.setp(xtl, rotation="vertical") return self def dendrogram(data, linkage=None, axis=1, label=True, metric='euclidean', method='average', rotate=False, tree_kws=None, ax=None): """Draw a tree diagram of relationships within a matrix Parameters ---------- data : pandas.DataFrame Rectangular data linkage : numpy.array, optional Linkage matrix axis : int, optional Which axis to use to calculate linkage. 0 is rows, 1 is columns. label : bool, optional If True, label the dendrogram at leaves with column or row names metric : str, optional Distance metric. Anything valid for scipy.spatial.distance.pdist method : str, optional Linkage method to use. Anything valid for scipy.cluster.hierarchy.linkage rotate : bool, optional When plotting the matrix, whether to rotate it 90 degrees counter-clockwise, so the leaves face right tree_kws : dict, optional Keyword arguments for the ``matplotlib.collections.LineCollection`` that is used for plotting the lines of the dendrogram tree. ax : matplotlib axis, optional Axis to plot on, otherwise uses current axis Returns ------- dendrogramplotter : _DendrogramPlotter A Dendrogram plotter object. 
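A minimal sketch of how this is typically driven (an illustrative addition, not from the original docstring; the ``flights`` pivot follows the :func:`heatmap` examples, and reaching the function as ``sns.matrix.dendrogram`` is an assumption about how the module is imported):

    >>> import seaborn as sns; import matplotlib.pyplot as plt; sns.set()
    >>> flights = sns.load_dataset("flights").pivot("month", "year", "passengers")
    >>> f, ax = plt.subplots()
    >>> plotter = sns.matrix.dendrogram(flights, axis=1, ax=ax)
    >>> column_order = plotter.reordered_ind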
Notes ----- Access the reordered dendrogram indices with dendrogramplotter.reordered_ind """ plotter = _DendrogramPlotter(data, linkage=linkage, axis=axis, metric=metric, method=method, label=label, rotate=rotate) if ax is None: ax = plt.gca() return plotter.plot(ax=ax, tree_kws=tree_kws) class ClusterGrid(Grid): def __init__(self, data, pivot_kws=None, z_score=None, standard_scale=None, figsize=None, row_colors=None, col_colors=None, mask=None, dendrogram_ratio=None, colors_ratio=None, cbar_pos=None): """Grid object for organizing clustered heatmap input on to axes""" if isinstance(data, pd.DataFrame): self.data = data else: self.data = pd.DataFrame(data) self.data2d = self.format_data(self.data, pivot_kws, z_score, standard_scale) self.mask = _matrix_mask(self.data2d, mask) self.fig = plt.figure(figsize=figsize) self.row_colors, self.row_color_labels = \ self._preprocess_colors(data, row_colors, axis=0) self.col_colors, self.col_color_labels = \ self._preprocess_colors(data, col_colors, axis=1) try: row_dendrogram_ratio, col_dendrogram_ratio = dendrogram_ratio except TypeError: row_dendrogram_ratio = col_dendrogram_ratio = dendrogram_ratio try: row_colors_ratio, col_colors_ratio = colors_ratio except TypeError: row_colors_ratio = col_colors_ratio = colors_ratio width_ratios = self.dim_ratios(self.row_colors, row_dendrogram_ratio, row_colors_ratio) height_ratios = self.dim_ratios(self.col_colors, col_dendrogram_ratio, col_colors_ratio) nrows = 2 if self.col_colors is None else 3 ncols = 2 if self.row_colors is None else 3 self.gs = gridspec.GridSpec(nrows, ncols, width_ratios=width_ratios, height_ratios=height_ratios) self.ax_row_dendrogram = self.fig.add_subplot(self.gs[-1, 0]) self.ax_col_dendrogram = self.fig.add_subplot(self.gs[0, -1]) self.ax_row_dendrogram.set_axis_off() self.ax_col_dendrogram.set_axis_off() self.ax_row_colors = None self.ax_col_colors = None if self.row_colors is not None: self.ax_row_colors = self.fig.add_subplot( self.gs[-1, 1]) if self.col_colors is not None: self.ax_col_colors = self.fig.add_subplot( self.gs[1, -1]) self.ax_heatmap = self.fig.add_subplot(self.gs[-1, -1]) if cbar_pos is None: self.ax_cbar = self.cax = None else: # Initialize the colorbar axes in the gridspec so that tight_layout # works. We will move it where it belongs later. This is a hack. 
self.ax_cbar = self.fig.add_subplot(self.gs[0, 0]) self.cax = self.ax_cbar # Backwards compatability self.cbar_pos = cbar_pos self.dendrogram_row = None self.dendrogram_col = None def _preprocess_colors(self, data, colors, axis): """Preprocess {row/col}_colors to extract labels and convert colors.""" labels = None if colors is not None: if isinstance(colors, (pd.DataFrame, pd.Series)): # Ensure colors match data indices if axis == 0: colors = colors.reindex(data.index) else: colors = colors.reindex(data.columns) # Replace na's with background color # TODO We should set these to transparent instead colors = colors.fillna('white') # Extract color values and labels from frame/series if isinstance(colors, pd.DataFrame): labels = list(colors.columns) colors = colors.T.values else: if colors.name is None: labels = [""] else: labels = [colors.name] colors = colors.values colors = _convert_colors(colors) return colors, labels def format_data(self, data, pivot_kws, z_score=None, standard_scale=None): """Extract variables from data or use directly.""" # Either the data is already in 2d matrix format, or need to do a pivot if pivot_kws is not None: data2d = data.pivot(**pivot_kws) else: data2d = data if z_score is not None and standard_scale is not None: raise ValueError( 'Cannot perform both z-scoring and standard-scaling on data') if z_score is not None: data2d = self.z_score(data2d, z_score) if standard_scale is not None: data2d = self.standard_scale(data2d, standard_scale) return data2d @staticmethod def z_score(data2d, axis=1): """Standarize the mean and variance of the data axis Parameters ---------- data2d : pandas.DataFrame Data to normalize axis : int Which axis to normalize across. If 0, normalize across rows, if 1, normalize across columns. Returns ------- normalized : pandas.DataFrame Noramlized data with a mean of 0 and variance of 1 across the specified axis. """ if axis == 1: z_scored = data2d else: z_scored = data2d.T z_scored = (z_scored - z_scored.mean()) / z_scored.std() if axis == 1: return z_scored else: return z_scored.T @staticmethod def standard_scale(data2d, axis=1): """Divide the data by the difference between the max and min Parameters ---------- data2d : pandas.DataFrame Data to normalize axis : int Which axis to normalize across. If 0, normalize across rows, if 1, normalize across columns. vmin : int If 0, then subtract the minimum of the data before dividing by the range. Returns ------- standardized : pandas.DataFrame Noramlized data with a mean of 0 and variance of 1 across the specified axis. """ # Normalize these values to range from 0 to 1 if axis == 1: standardized = data2d else: standardized = data2d.T subtract = standardized.min() standardized = (standardized - subtract) / ( standardized.max() - standardized.min()) if axis == 1: return standardized else: return standardized.T def dim_ratios(self, colors, dendrogram_ratio, colors_ratio): """Get the proportions of the figure taken up by each axes.""" ratios = [dendrogram_ratio] if colors is not None: # Colors are encoded as rgb, so ther is an extra dimention if np.ndim(colors) > 2: n_colors = len(colors) else: n_colors = 1 ratios += [n_colors * colors_ratio] # Add the ratio for the heatmap itself ratios.append(1 - sum(ratios)) return ratios @staticmethod def color_list_to_matrix_and_cmap(colors, ind, axis=0): """Turns a list of colors into a numpy matrix and matplotlib colormap These arguments can now be plotted using heatmap(matrix, cmap) and the provided colors will be plotted. 
Parameters ---------- colors : list of matplotlib colors Colors to label the rows or columns of a dataframe. ind : list of ints Ordering of the rows or columns, to reorder the original colors by the clustered dendrogram order axis : int Which axis this is labeling Returns ------- matrix : numpy.array A numpy array of integer values, where each corresponds to a color from the originally provided list of colors cmap : matplotlib.colors.ListedColormap """ # check for nested lists/color palettes. # Will fail if matplotlib color is list not tuple if any(issubclass(type(x), list) for x in colors): all_colors = set(itertools.chain(*colors)) n = len(colors) m = len(colors[0]) else: all_colors = set(colors) n = 1 m = len(colors) colors = [colors] color_to_value = dict((col, i) for i, col in enumerate(all_colors)) matrix = np.array([color_to_value[c] for color in colors for c in color]) shape = (n, m) matrix = matrix.reshape(shape) matrix = matrix[:, ind] if axis == 0: # row-side: matrix = matrix.T cmap = mpl.colors.ListedColormap(all_colors) return matrix, cmap def savefig(self, *args, **kwargs): if 'bbox_inches' not in kwargs: kwargs['bbox_inches'] = 'tight' self.fig.savefig(*args, **kwargs) def plot_dendrograms(self, row_cluster, col_cluster, metric, method, row_linkage, col_linkage, tree_kws): # Plot the row dendrogram if row_cluster: self.dendrogram_row = dendrogram( self.data2d, metric=metric, method=method, label=False, axis=0, ax=self.ax_row_dendrogram, rotate=True, linkage=row_linkage, tree_kws=tree_kws ) else: self.ax_row_dendrogram.set_xticks([]) self.ax_row_dendrogram.set_yticks([]) # PLot the column dendrogram if col_cluster: self.dendrogram_col = dendrogram( self.data2d, metric=metric, method=method, label=False, axis=1, ax=self.ax_col_dendrogram, linkage=col_linkage, tree_kws=tree_kws ) else: self.ax_col_dendrogram.set_xticks([]) self.ax_col_dendrogram.set_yticks([]) despine(ax=self.ax_row_dendrogram, bottom=True, left=True) despine(ax=self.ax_col_dendrogram, bottom=True, left=True) def plot_colors(self, xind, yind, **kws): """Plots color labels between the dendrogram and the heatmap Parameters ---------- heatmap_kws : dict Keyword arguments heatmap """ # Remove any custom colormap and centering # TODO this code has consistently caused problems when we # have missed kwargs that need to be excluded that it might # be better to rewrite *in*clusively. 
kws = kws.copy() kws.pop('cmap', None) kws.pop('norm', None) kws.pop('center', None) kws.pop('annot', None) kws.pop('vmin', None) kws.pop('vmax', None) kws.pop('robust', None) kws.pop('xticklabels', None) kws.pop('yticklabels', None) # Plot the row colors if self.row_colors is not None: matrix, cmap = self.color_list_to_matrix_and_cmap( self.row_colors, yind, axis=0) # Get row_color labels if self.row_color_labels is not None: row_color_labels = self.row_color_labels else: row_color_labels = False heatmap(matrix, cmap=cmap, cbar=False, ax=self.ax_row_colors, xticklabels=row_color_labels, yticklabels=False, **kws) # Adjust rotation of labels if row_color_labels is not False: plt.setp(self.ax_row_colors.get_xticklabels(), rotation=90) else: despine(self.ax_row_colors, left=True, bottom=True) # Plot the column colors if self.col_colors is not None: matrix, cmap = self.color_list_to_matrix_and_cmap( self.col_colors, xind, axis=1) # Get col_color labels if self.col_color_labels is not None: col_color_labels = self.col_color_labels else: col_color_labels = False heatmap(matrix, cmap=cmap, cbar=False, ax=self.ax_col_colors, xticklabels=False, yticklabels=col_color_labels, **kws) # Adjust rotation of labels, place on right side if col_color_labels is not False: self.ax_col_colors.yaxis.tick_right() plt.setp(self.ax_col_colors.get_yticklabels(), rotation=0) else: despine(self.ax_col_colors, left=True, bottom=True) def plot_matrix(self, colorbar_kws, xind, yind, **kws): self.data2d = self.data2d.iloc[yind, xind] self.mask = self.mask.iloc[yind, xind] # Try to reorganize specified tick labels, if provided xtl = kws.pop("xticklabels", "auto") try: xtl = np.asarray(xtl)[xind] except (TypeError, IndexError): pass ytl = kws.pop("yticklabels", "auto") try: ytl = np.asarray(ytl)[yind] except (TypeError, IndexError): pass # Reorganize the annotations to match the heatmap annot = kws.pop("annot", None) if annot is None: pass else: if isinstance(annot, bool): annot_data = self.data2d else: annot_data = np.asarray(annot) if annot_data.shape != self.data2d.shape: err = "`data` and `annot` must have same shape." raise ValueError(err) annot_data = annot_data[yind][:, xind] annot = annot_data # Setting ax_cbar=None in clustermap call implies no colorbar kws.setdefault("cbar", self.ax_cbar is not None) heatmap(self.data2d, ax=self.ax_heatmap, cbar_ax=self.ax_cbar, cbar_kws=colorbar_kws, mask=self.mask, xticklabels=xtl, yticklabels=ytl, annot=annot, **kws) ytl = self.ax_heatmap.get_yticklabels() ytl_rot = None if not ytl else ytl[0].get_rotation() self.ax_heatmap.yaxis.set_ticks_position('right') self.ax_heatmap.yaxis.set_label_position('right') if ytl_rot is not None: ytl = self.ax_heatmap.get_yticklabels() plt.setp(ytl, rotation=ytl_rot) tight_params = dict(h_pad=.02, w_pad=.02) if self.ax_cbar is None: self.fig.tight_layout(**tight_params) else: # Turn the colorbar axes off for tight layout so that its # ticks don't interfere with the rest of the plot layout. # Then move it. 
self.ax_cbar.set_axis_off() self.fig.tight_layout(**tight_params) self.ax_cbar.set_axis_on() self.ax_cbar.set_position(self.cbar_pos) def plot(self, metric, method, colorbar_kws, row_cluster, col_cluster, row_linkage, col_linkage, tree_kws, **kws): # heatmap square=True sets the aspect ratio on the axes, but that is # not compatible with the multi-axes layout of clustergrid if kws.get("square", False): msg = "``square=True`` ignored in clustermap" warnings.warn(msg) kws.pop("square") colorbar_kws = {} if colorbar_kws is None else colorbar_kws self.plot_dendrograms(row_cluster, col_cluster, metric, method, row_linkage=row_linkage, col_linkage=col_linkage, tree_kws=tree_kws) try: xind = self.dendrogram_col.reordered_ind except AttributeError: xind = np.arange(self.data2d.shape[1]) try: yind = self.dendrogram_row.reordered_ind except AttributeError: yind = np.arange(self.data2d.shape[0]) self.plot_colors(xind, yind, **kws) self.plot_matrix(colorbar_kws, xind, yind, **kws) return self def clustermap(data, pivot_kws=None, method='average', metric='euclidean', z_score=None, standard_scale=None, figsize=(10, 10), cbar_kws=None, row_cluster=True, col_cluster=True, row_linkage=None, col_linkage=None, row_colors=None, col_colors=None, mask=None, dendrogram_ratio=.2, colors_ratio=0.03, cbar_pos=(.02, .8, .05, .18), tree_kws=None, **kwargs): """Plot a matrix dataset as a hierarchically-clustered heatmap. Parameters ---------- data: 2D array-like Rectangular data for clustering. Cannot contain NAs. pivot_kws : dict, optional If `data` is a tidy dataframe, can provide keyword arguments for pivot to create a rectangular dataframe. method : str, optional Linkage method to use for calculating clusters. See scipy.cluster.hierarchy.linkage documentation for more information: https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.linkage.html metric : str, optional Distance metric to use for the data. See scipy.spatial.distance.pdist documentation for more options https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.pdist.html To use different metrics (or methods) for rows and columns, you may construct each linkage matrix yourself and provide them as {row,col}_linkage. z_score : int or None, optional Either 0 (rows) or 1 (columns). Whether or not to calculate z-scores for the rows or the columns. Z scores are: z = (x - mean)/std, so values in each row (column) will get the mean of the row (column) subtracted, then divided by the standard deviation of the row (column). This ensures that each row (column) has mean of 0 and variance of 1. standard_scale : int or None, optional Either 0 (rows) or 1 (columns). Whether or not to standardize that dimension, meaning for each row or column, subtract the minimum and divide each by its maximum. figsize: (width, height), optional Overall size of the figure. cbar_kws : dict, optional Keyword arguments to pass to ``cbar_kws`` in ``heatmap``, e.g. to add a label to the colorbar. {row,col}_cluster : bool, optional If True, cluster the {rows, columns}. {row,col}_linkage : numpy.array, optional Precomputed linkage matrix for the rows or columns. See scipy.cluster.hierarchy.linkage for specific formats. {row,col}_colors : list-like or pandas DataFrame/Series, optional List of colors to label for either the rows or columns. Useful to evaluate whether samples within a group are clustered together. Can use nested lists or DataFrame for multiple color levels of labeling. 
If given as a DataFrame or Series, labels for the colors are extracted from the DataFrames column names or from the name of the Series. DataFrame/Series colors are also matched to the data by their index, ensuring colors are drawn in the correct order. mask : boolean array or DataFrame, optional If passed, data will not be shown in cells where ``mask`` is True. Cells with missing values are automatically masked. Only used for visualizing, not for calculating. {dendrogram,colors}_ratio: float, or pair of floats, optional Proportion of the figure size devoted to the two marginal elements. If a pair is given, they correspond to (row, col) ratios. cbar_pos : (left, bottom, width, height), optional Position of the colorbar axes in the figure. Setting to ``None`` will disable the colorbar. tree_kws : dict, optional Parameters for the :class:`matplotlib.collections.LineCollection` that is used to plot the lines of the dendrogram tree. kwargs : other keyword arguments All other keyword arguments are passed to :func:`heatmap` Returns ------- clustergrid : ClusterGrid A ClusterGrid instance. Notes ----- The returned object has a ``savefig`` method that should be used if you want to save the figure object without clipping the dendrograms. To access the reordered row indices, use: ``clustergrid.dendrogram_row.reordered_ind`` Column indices, use: ``clustergrid.dendrogram_col.reordered_ind`` Examples -------- Plot a clustered heatmap: .. plot:: :context: close-figs >>> import seaborn as sns; sns.set(color_codes=True) >>> iris = sns.load_dataset("iris") >>> species = iris.pop("species") >>> g = sns.clustermap(iris) Change the size and layout of the figure: .. plot:: :context: close-figs >>> g = sns.clustermap(iris, ... figsize=(7, 5), ... row_cluster=False, ... dendrogram_ratio=(.1, .2), ... cbar_pos=(0, .2, .03, .4)) Add colored labels to identify observations: .. plot:: :context: close-figs >>> lut = dict(zip(species.unique(), "rbg")) >>> row_colors = species.map(lut) >>> g = sns.clustermap(iris, row_colors=row_colors) Use a different colormap and adjust the limits of the color range: .. plot:: :context: close-figs >>> g = sns.clustermap(iris, cmap="mako", vmin=0, vmax=10) Use a different similarity metric: .. plot:: :context: close-figs >>> g = sns.clustermap(iris, metric="correlation") Use a different clustering method: .. plot:: :context: close-figs >>> g = sns.clustermap(iris, method="single") Standardize the data within the columns: .. plot:: :context: close-figs >>> g = sns.clustermap(iris, standard_scale=1) Normalize the data within the rows: .. plot:: :context: close-figs >>> g = sns.clustermap(iris, z_score=0, cmap="vlag") """ plotter = ClusterGrid(data, pivot_kws=pivot_kws, figsize=figsize, row_colors=row_colors, col_colors=col_colors, z_score=z_score, standard_scale=standard_scale, mask=mask, dendrogram_ratio=dendrogram_ratio, colors_ratio=colors_ratio, cbar_pos=cbar_pos) return plotter.plot(metric=metric, method=method, colorbar_kws=cbar_kws, row_cluster=row_cluster, col_cluster=col_cluster, row_linkage=row_linkage, col_linkage=col_linkage, tree_kws=tree_kws, **kwargs) seaborn-0.10.0/seaborn/miscplot.py000066400000000000000000000024021361256634400170660ustar00rootroot00000000000000from __future__ import division import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt __all__ = ["palplot", "dogplot"] def palplot(pal, size=1): """Plot the values in a color palette as a horizontal array. Parameters ---------- pal : sequence of matplotlib colors colors, i.e. 
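
A short usage sketch of the points made in the clustermap Notes above: the reordered indices expose the clustered ordering, and the grid's own ``savefig`` avoids clipping the dendrograms. The output filename is just a placeholder:

import seaborn as sns

iris = sns.load_dataset("iris")
iris.pop("species")

g = sns.clustermap(iris)

row_order = g.dendrogram_row.reordered_ind   # row indices after clustering
col_order = g.dendrogram_col.reordered_ind   # column indices after clustering
clustered = iris.iloc[row_order, col_order]  # the data in plotted order

g.savefig("clustermap.png")                  # keeps the dendrograms in frame
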
as returned by seaborn.color_palette() size : scaling factor for size of plot """ n = len(pal) f, ax = plt.subplots(1, 1, figsize=(n * size, size)) ax.imshow(np.arange(n).reshape(1, n), cmap=mpl.colors.ListedColormap(list(pal)), interpolation="nearest", aspect="auto") ax.set_xticks(np.arange(n) - .5) ax.set_yticks([-.5, .5]) ax.set_xticklabels([]) ax.set_yticklabels([]) def dogplot(*_, **__): """Who's a good boy?""" try: from urllib.request import urlopen except ImportError: from urllib2 import urlopen from io import BytesIO url = "https://github.com/mwaskom/seaborn-data/raw/master/png/img{}.png" pic = np.random.randint(2, 7) data = BytesIO(urlopen(url.format(pic)).read()) img = plt.imread(data) f, ax = plt.subplots(figsize=(5, 5), dpi=100) f.subplots_adjust(0, 0, 1, 1) ax.imshow(img) ax.set_axis_off() seaborn-0.10.0/seaborn/palettes.py000066400000000000000000001013121361256634400170550ustar00rootroot00000000000000from __future__ import division import colorsys from itertools import cycle import numpy as np import matplotlib as mpl from .external import husl from .utils import desaturate, set_hls_values, get_color_cycle from .colors import xkcd_rgb, crayons __all__ = ["color_palette", "hls_palette", "husl_palette", "mpl_palette", "dark_palette", "light_palette", "diverging_palette", "blend_palette", "xkcd_palette", "crayon_palette", "cubehelix_palette", "set_color_codes"] SEABORN_PALETTES = dict( deep=["#4C72B0", "#DD8452", "#55A868", "#C44E52", "#8172B3", "#937860", "#DA8BC3", "#8C8C8C", "#CCB974", "#64B5CD"], deep6=["#4C72B0", "#55A868", "#C44E52", "#8172B3", "#CCB974", "#64B5CD"], muted=["#4878D0", "#EE854A", "#6ACC64", "#D65F5F", "#956CB4", "#8C613C", "#DC7EC0", "#797979", "#D5BB67", "#82C6E2"], muted6=["#4878D0", "#6ACC64", "#D65F5F", "#956CB4", "#D5BB67", "#82C6E2"], pastel=["#A1C9F4", "#FFB482", "#8DE5A1", "#FF9F9B", "#D0BBFF", "#DEBB9B", "#FAB0E4", "#CFCFCF", "#FFFEA3", "#B9F2F0"], pastel6=["#A1C9F4", "#8DE5A1", "#FF9F9B", "#D0BBFF", "#FFFEA3", "#B9F2F0"], bright=["#023EFF", "#FF7C00", "#1AC938", "#E8000B", "#8B2BE2", "#9F4800", "#F14CC1", "#A3A3A3", "#FFC400", "#00D7FF"], bright6=["#023EFF", "#1AC938", "#E8000B", "#8B2BE2", "#FFC400", "#00D7FF"], dark=["#001C7F", "#B1400D", "#12711C", "#8C0800", "#591E71", "#592F0D", "#A23582", "#3C3C3C", "#B8850A", "#006374"], dark6=["#001C7F", "#12711C", "#8C0800", "#591E71", "#B8850A", "#006374"], colorblind=["#0173B2", "#DE8F05", "#029E73", "#D55E00", "#CC78BC", "#CA9161", "#FBAFE4", "#949494", "#ECE133", "#56B4E9"], colorblind6=["#0173B2", "#029E73", "#D55E00", "#CC78BC", "#ECE133", "#56B4E9"] ) MPL_QUAL_PALS = { "tab10": 10, "tab20": 20, "tab20b": 20, "tab20c": 20, "Set1": 9, "Set2": 8, "Set3": 12, "Accent": 8, "Paired": 12, "Pastel1": 9, "Pastel2": 8, "Dark2": 8, } QUAL_PALETTE_SIZES = MPL_QUAL_PALS.copy() QUAL_PALETTE_SIZES.update({k: len(v) for k, v in SEABORN_PALETTES.items()}) QUAL_PALETTES = list(QUAL_PALETTE_SIZES.keys()) class _ColorPalette(list): """Set the color palette in a with statement, otherwise be a list.""" def __enter__(self): """Open the context.""" from .rcmod import set_palette self._orig_palette = color_palette() set_palette(self) return self def __exit__(self, *args): """Close the context.""" from .rcmod import set_palette set_palette(self._orig_palette) def as_hex(self): """Return a color palette with hex codes instead of RGB values.""" hex = [mpl.colors.rgb2hex(rgb) for rgb in self] return _ColorPalette(hex) def color_palette(palette=None, n_colors=None, desat=None): """Return a list of colors defining a color 
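
A small sketch of the ``_ColorPalette`` behavior defined above: the object returned by ``color_palette()`` acts like a list, converts to hex codes, and works as a context manager that restores the previous palette on exit. The printed values are indicative only:

import matplotlib.pyplot as plt
import seaborn as sns

pal = sns.color_palette("deep")
print(pal.as_hex())           # e.g. ['#4c72b0', '#dd8452', ...]

with sns.color_palette("husl", 8):
    plt.plot([0, 1], [0, 1])  # drawn with the first husl color
# the previous palette is restored here
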
palette. Available seaborn palette names: deep, muted, bright, pastel, dark, colorblind Other options: name of matplotlib cmap, 'ch:', 'hls', 'husl', or a list of colors in any format matplotlib accepts Calling this function with ``palette=None`` will return the current matplotlib color cycle. Matplotlib palettes can be specified as reversed palettes by appending "_r" to the name or as "dark" palettes by appending "_d" to the name. (These options are mutually exclusive, but the resulting list of colors can also be reversed). This function can also be used in a ``with`` statement to temporarily set the color cycle for a plot or set of plots. See the :ref:`tutorial ` for more information. Parameters ---------- palette: None, string, or sequence, optional Name of palette or None to return current palette. If a sequence, input colors are used but possibly cycled and desaturated. n_colors : int, optional Number of colors in the palette. If ``None``, the default will depend on how ``palette`` is specified. Named palettes default to 6 colors, but grabbing the current palette or passing in a list of colors will not change the number of colors unless this is specified. Asking for more colors than exist in the palette will cause it to cycle. desat : float, optional Proportion to desaturate each color by. Returns ------- palette : list of RGB tuples. Color palette. Behaves like a list, but can be used as a context manager and possesses an ``as_hex`` method to convert to hex color codes. See Also -------- set_palette : Set the default color cycle for all plots. set_color_codes : Reassign color codes like ``"b"``, ``"g"``, etc. to colors from one of the seaborn palettes. Examples -------- Calling with no arguments returns all colors from the current default color cycle: .. plot:: :context: close-figs >>> import seaborn as sns; sns.set() >>> sns.palplot(sns.color_palette()) Show one of the other "seaborn palettes", which have the same basic order of hues as the default matplotlib color cycle but more attractive colors. Calling with the name of a palette will return 6 colors by default: .. plot:: :context: close-figs >>> sns.palplot(sns.color_palette("muted")) Use discrete values from one of the built-in matplotlib colormaps: .. plot:: :context: close-figs >>> sns.palplot(sns.color_palette("RdBu", n_colors=7)) Make a customized cubehelix color palette: .. plot:: :context: close-figs >>> sns.palplot(sns.color_palette("ch:2.5,-.2,dark=.3")) Use a categorical matplotlib palette and add some desaturation: .. plot:: :context: close-figs >>> sns.palplot(sns.color_palette("Set1", n_colors=8, desat=.5)) Make a "dark" matplotlib sequential palette variant. (This can be good when coloring multiple lines or points that correspond to an ordered variable, where you don't want the lightest lines to be invisible): .. plot:: :context: close-figs >>> sns.palplot(sns.color_palette("Blues_d")) Use as a context manager: .. plot:: :context: close-figs >>> import numpy as np, matplotlib.pyplot as plt >>> with sns.color_palette("husl", 8): ... 
_ = plt.plot(np.c_[np.zeros(8), np.arange(8)].T) """ if palette is None: palette = get_color_cycle() if n_colors is None: n_colors = len(palette) elif not isinstance(palette, str): palette = palette if n_colors is None: n_colors = len(palette) else: if n_colors is None: # Use all colors in a qualitative palette or 6 of another kind n_colors = QUAL_PALETTE_SIZES.get(palette, 6) if palette in SEABORN_PALETTES: # Named "seaborn variant" of old matplotlib default palette palette = SEABORN_PALETTES[palette] elif palette == "hls": # Evenly spaced colors in cylindrical RGB space palette = hls_palette(n_colors) elif palette == "husl": # Evenly spaced colors in cylindrical Lab space palette = husl_palette(n_colors) elif palette.lower() == "jet": # Paternalism raise ValueError("No.") elif palette.startswith("ch:"): # Cubehelix palette with params specified in string args, kwargs = _parse_cubehelix_args(palette) palette = cubehelix_palette(n_colors, *args, **kwargs) else: try: # Perhaps a named matplotlib colormap? palette = mpl_palette(palette, n_colors) except ValueError: raise ValueError("%s is not a valid palette name" % palette) if desat is not None: palette = [desaturate(c, desat) for c in palette] # Always return as many colors as we asked for pal_cycle = cycle(palette) palette = [next(pal_cycle) for _ in range(n_colors)] # Always return in r, g, b tuple format try: palette = map(mpl.colors.colorConverter.to_rgb, palette) palette = _ColorPalette(palette) except ValueError: raise ValueError("Could not generate a palette for %s" % str(palette)) return palette def hls_palette(n_colors=6, h=.01, l=.6, s=.65): # noqa """Get a set of evenly spaced colors in HLS hue space. h, l, and s should be between 0 and 1 Parameters ---------- n_colors : int number of colors in the palette h : float first hue l : float lightness s : float saturation Returns ------- palette : seaborn color palette List-like object of colors as RGB tuples. See Also -------- husl_palette : Make a palette using evently spaced circular hues in the HUSL system. Examples -------- Create a palette of 10 colors with the default parameters: .. plot:: :context: close-figs >>> import seaborn as sns; sns.set() >>> sns.palplot(sns.hls_palette(10)) Create a palette of 10 colors that begins at a different hue value: .. plot:: :context: close-figs >>> sns.palplot(sns.hls_palette(10, h=.5)) Create a palette of 10 colors that are darker than the default: .. plot:: :context: close-figs >>> sns.palplot(sns.hls_palette(10, l=.4)) Create a palette of 10 colors that are less saturated than the default: .. plot:: :context: close-figs >>> sns.palplot(sns.hls_palette(10, s=.4)) """ hues = np.linspace(0, 1, int(n_colors) + 1)[:-1] hues += h hues %= 1 hues -= hues.astype(int) palette = [colorsys.hls_to_rgb(h_i, l, s) for h_i in hues] return _ColorPalette(palette) def husl_palette(n_colors=6, h=.01, s=.9, l=.65): # noqa """Get a set of evenly spaced colors in HUSL hue space. h, s, and l should be between 0 and 1 Parameters ---------- n_colors : int number of colors in the palette h : float first hue s : float saturation l : float lightness Returns ------- palette : seaborn color palette List-like object of colors as RGB tuples. See Also -------- hls_palette : Make a palette using evently spaced circular hues in the HSL system. Examples -------- Create a palette of 10 colors with the default parameters: .. 
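
A brief sketch of two behaviors implemented above: asking for more colors than a qualitative palette provides makes the palette cycle, and ``desat`` reduces the saturation of every returned color:

import seaborn as sns

pal = sns.color_palette("Set2", n_colors=10)   # Set2 defines 8 colors; 2 repeat
assert pal[0] == pal[8] and pal[1] == pal[9]

muted = sns.color_palette("deep", desat=.6)    # same hues, less saturated
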
plot:: :context: close-figs >>> import seaborn as sns; sns.set() >>> sns.palplot(sns.husl_palette(10)) Create a palette of 10 colors that begins at a different hue value: .. plot:: :context: close-figs >>> sns.palplot(sns.husl_palette(10, h=.5)) Create a palette of 10 colors that are darker than the default: .. plot:: :context: close-figs >>> sns.palplot(sns.husl_palette(10, l=.4)) Create a palette of 10 colors that are less saturated than the default: .. plot:: :context: close-figs >>> sns.palplot(sns.husl_palette(10, s=.4)) """ hues = np.linspace(0, 1, int(n_colors) + 1)[:-1] hues += h hues %= 1 hues *= 359 s *= 99 l *= 99 # noqa palette = [husl.husl_to_rgb(h_i, s, l) for h_i in hues] return _ColorPalette(palette) def mpl_palette(name, n_colors=6): """Return discrete colors from a matplotlib palette. Note that this handles the qualitative colorbrewer palettes properly, although if you ask for more colors than a particular qualitative palette can provide you will get fewer than you are expecting. In contrast, asking for qualitative color brewer palettes using :func:`color_palette` will return the expected number of colors, but they will cycle. If you are using the IPython notebook, you can also use the function :func:`choose_colorbrewer_palette` to interactively select palettes. Parameters ---------- name : string Name of the palette. This should be a named matplotlib colormap. n_colors : int Number of discrete colors in the palette. Returns ------- palette or cmap : seaborn color palette or matplotlib colormap List-like object of colors as RGB tuples, or colormap object that can map continuous values to colors, depending on the value of the ``as_cmap`` parameter. Examples -------- Create a qualitative colorbrewer palette with 8 colors: .. plot:: :context: close-figs >>> import seaborn as sns; sns.set() >>> sns.palplot(sns.mpl_palette("Set2", 8)) Create a sequential colorbrewer palette: .. plot:: :context: close-figs >>> sns.palplot(sns.mpl_palette("Blues")) Create a diverging palette: .. plot:: :context: close-figs >>> sns.palplot(sns.mpl_palette("seismic", 8)) Create a "dark" sequential palette: .. plot:: :context: close-figs >>> sns.palplot(sns.mpl_palette("GnBu_d")) """ if name.endswith("_d"): pal = ["#333333"] pal.extend(color_palette(name.replace("_d", "_r"), 2)) cmap = blend_palette(pal, n_colors, as_cmap=True) else: cmap = mpl.cm.get_cmap(name) if cmap is None: raise ValueError("{} is not a valid colormap".format(name)) if name in MPL_QUAL_PALS: bins = np.linspace(0, 1, MPL_QUAL_PALS[name])[:n_colors] else: bins = np.linspace(0, 1, int(n_colors) + 2)[1:-1] palette = list(map(tuple, cmap(bins)[:, :3])) return _ColorPalette(palette) def _color_to_rgb(color, input): """Add some more flexibility to color choices.""" if input == "hls": color = colorsys.hls_to_rgb(*color) elif input == "husl": color = husl.husl_to_rgb(*color) elif input == "xkcd": color = xkcd_rgb[color] return color def dark_palette(color, n_colors=6, reverse=False, as_cmap=False, input="rgb"): """Make a sequential palette that blends from dark to ``color``. This kind of palette is good for data that range between relatively uninteresting low values and interesting high values. The ``color`` parameter can be specified in a number of ways, including all options for defining a color in matplotlib and several additional color spaces that are handled by seaborn. You can also use the database of named colors from the XKCD color survey. 
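
A sketch of the input color spaces handled by ``_color_to_rgb`` above; the xkcd name used here is assumed to exist in ``seaborn.xkcd_rgb``:

import seaborn as sns

pal_rgb = sns.dark_palette("#2ecc71")                      # hex or html name
pal_husl = sns.dark_palette((260, 75, 60), input="husl")   # husl tuple
pal_xkcd = sns.dark_palette("windows blue", input="xkcd")  # xkcd survey name
cmap = sns.dark_palette("#2ecc71", as_cmap=True)           # colormap instead of list
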
If you are using the IPython notebook, you can also choose this palette interactively with the :func:`choose_dark_palette` function. Parameters ---------- color : base color for high values hex, rgb-tuple, or html color name n_colors : int, optional number of colors in the palette reverse : bool, optional if True, reverse the direction of the blend as_cmap : bool, optional if True, return as a matplotlib colormap instead of list input : {'rgb', 'hls', 'husl', xkcd'} Color space to interpret the input color. The first three options apply to tuple inputs and the latter applies to string inputs. Returns ------- palette or cmap : seaborn color palette or matplotlib colormap List-like object of colors as RGB tuples, or colormap object that can map continuous values to colors, depending on the value of the ``as_cmap`` parameter. See Also -------- light_palette : Create a sequential palette with bright low values. diverging_palette : Create a diverging palette with two colors. Examples -------- Generate a palette from an HTML color: .. plot:: :context: close-figs >>> import seaborn as sns; sns.set() >>> sns.palplot(sns.dark_palette("purple")) Generate a palette that decreases in lightness: .. plot:: :context: close-figs >>> sns.palplot(sns.dark_palette("seagreen", reverse=True)) Generate a palette from an HUSL-space seed: .. plot:: :context: close-figs >>> sns.palplot(sns.dark_palette((260, 75, 60), input="husl")) Generate a colormap object: .. plot:: :context: close-figs >>> from numpy import arange >>> x = arange(25).reshape(5, 5) >>> cmap = sns.dark_palette("#2ecc71", as_cmap=True) >>> ax = sns.heatmap(x, cmap=cmap) """ color = _color_to_rgb(color, input) gray = "#222222" colors = [color, gray] if reverse else [gray, color] return blend_palette(colors, n_colors, as_cmap) def light_palette(color, n_colors=6, reverse=False, as_cmap=False, input="rgb"): """Make a sequential palette that blends from light to ``color``. This kind of palette is good for data that range between relatively uninteresting low values and interesting high values. The ``color`` parameter can be specified in a number of ways, including all options for defining a color in matplotlib and several additional color spaces that are handled by seaborn. You can also use the database of named colors from the XKCD color survey. If you are using the IPython notebook, you can also choose this palette interactively with the :func:`choose_light_palette` function. Parameters ---------- color : base color for high values hex code, html color name, or tuple in ``input`` space. n_colors : int, optional number of colors in the palette reverse : bool, optional if True, reverse the direction of the blend as_cmap : bool, optional if True, return as a matplotlib colormap instead of list input : {'rgb', 'hls', 'husl', xkcd'} Color space to interpret the input color. The first three options apply to tuple inputs and the latter applies to string inputs. Returns ------- palette or cmap : seaborn color palette or matplotlib colormap List-like object of colors as RGB tuples, or colormap object that can map continuous values to colors, depending on the value of the ``as_cmap`` parameter. See Also -------- dark_palette : Create a sequential palette with dark low values. diverging_palette : Create a diverging palette with two colors. Examples -------- Generate a palette from an HTML color: .. plot:: :context: close-figs >>> import seaborn as sns; sns.set() >>> sns.palplot(sns.light_palette("purple")) Generate a palette that increases in lightness: .. 
plot:: :context: close-figs >>> sns.palplot(sns.light_palette("seagreen", reverse=True)) Generate a palette from an HUSL-space seed: .. plot:: :context: close-figs >>> sns.palplot(sns.light_palette((260, 75, 60), input="husl")) Generate a colormap object: .. plot:: :context: close-figs >>> from numpy import arange >>> x = arange(25).reshape(5, 5) >>> cmap = sns.light_palette("#2ecc71", as_cmap=True) >>> ax = sns.heatmap(x, cmap=cmap) """ color = _color_to_rgb(color, input) light = set_hls_values(color, l=.95) # noqa colors = [color, light] if reverse else [light, color] return blend_palette(colors, n_colors, as_cmap) def _flat_palette(color, n_colors=6, reverse=False, as_cmap=False, input="rgb"): """Make a sequential palette that blends from gray to ``color``. Parameters ---------- color : matplotlib color hex, rgb-tuple, or html color name n_colors : int, optional number of colors in the palette reverse : bool, optional if True, reverse the direction of the blend as_cmap : bool, optional if True, return as a matplotlib colormap instead of list Returns ------- palette : list or colormap dark_palette : Create a sequential palette with dark low values. """ color = _color_to_rgb(color, input) flat = desaturate(color, 0) colors = [color, flat] if reverse else [flat, color] return blend_palette(colors, n_colors, as_cmap) def diverging_palette(h_neg, h_pos, s=75, l=50, sep=10, n=6, # noqa center="light", as_cmap=False): """Make a diverging palette between two HUSL colors. If you are using the IPython notebook, you can also choose this palette interactively with the :func:`choose_diverging_palette` function. Parameters ---------- h_neg, h_pos : float in [0, 359] Anchor hues for negative and positive extents of the map. s : float in [0, 100], optional Anchor saturation for both extents of the map. l : float in [0, 100], optional Anchor lightness for both extents of the map. sep : int, optional Size of the intermediate region. n : int, optional Number of colors in the palette (if not returning a cmap) center : {"light", "dark"}, optional Whether the center of the palette is light or dark as_cmap : bool, optional If true, return a matplotlib colormap object rather than a list of colors. Returns ------- palette or cmap : seaborn color palette or matplotlib colormap List-like object of colors as RGB tuples, or colormap object that can map continuous values to colors, depending on the value of the ``as_cmap`` parameter. See Also -------- dark_palette : Create a sequential palette with dark values. light_palette : Create a sequential palette with light values. Examples -------- Generate a blue-white-red palette: .. plot:: :context: close-figs >>> import seaborn as sns; sns.set() >>> sns.palplot(sns.diverging_palette(240, 10, n=9)) Generate a brighter green-white-purple palette: .. plot:: :context: close-figs >>> sns.palplot(sns.diverging_palette(150, 275, s=80, l=55, n=9)) Generate a blue-black-red palette: .. plot:: :context: close-figs >>> sns.palplot(sns.diverging_palette(250, 15, s=75, l=40, ... n=9, center="dark")) Generate a colormap object: .. 
plot:: :context: close-figs >>> from numpy import arange >>> x = arange(25).reshape(5, 5) >>> cmap = sns.diverging_palette(220, 20, sep=20, as_cmap=True) >>> ax = sns.heatmap(x, cmap=cmap) """ palfunc = dark_palette if center == "dark" else light_palette n_half = int(128 - (sep // 2)) neg = palfunc((h_neg, s, l), n_half, reverse=True, input="husl") pos = palfunc((h_pos, s, l), n_half, input="husl") midpoint = dict(light=[(.95, .95, .95)], dark=[(.133, .133, .133)])[center] mid = midpoint * sep pal = blend_palette(np.concatenate([neg, mid, pos]), n, as_cmap=as_cmap) return pal def blend_palette(colors, n_colors=6, as_cmap=False, input="rgb"): """Make a palette that blends between a list of colors. Parameters ---------- colors : sequence of colors in various formats interpreted by ``input`` hex code, html color name, or tuple in ``input`` space. n_colors : int, optional Number of colors in the palette. as_cmap : bool, optional If True, return as a matplotlib colormap instead of list. Returns ------- palette or cmap : seaborn color palette or matplotlib colormap List-like object of colors as RGB tuples, or colormap object that can map continuous values to colors, depending on the value of the ``as_cmap`` parameter. """ colors = [_color_to_rgb(color, input) for color in colors] name = "blend" pal = mpl.colors.LinearSegmentedColormap.from_list(name, colors) if not as_cmap: rgb_array = pal(np.linspace(0, 1, int(n_colors)))[:, :3] # no alpha pal = _ColorPalette(map(tuple, rgb_array)) return pal def xkcd_palette(colors): """Make a palette with color names from the xkcd color survey. See xkcd for the full list of colors: https://xkcd.com/color/rgb/ This is just a simple wrapper around the ``seaborn.xkcd_rgb`` dictionary. Parameters ---------- colors : list of strings List of keys in the ``seaborn.xkcd_rgb`` dictionary. Returns ------- palette : seaborn color palette Returns the list of colors as RGB tuples in an object that behaves like other seaborn color palettes. See Also -------- crayon_palette : Make a palette with Crayola crayon colors. """ palette = [xkcd_rgb[name] for name in colors] return color_palette(palette, len(palette)) def crayon_palette(colors): """Make a palette with color names from Crayola crayons. Colors are taken from here: https://en.wikipedia.org/wiki/List_of_Crayola_crayon_colors This is just a simple wrapper around the ``seaborn.crayons`` dictionary. Parameters ---------- colors : list of strings List of keys in the ``seaborn.crayons`` dictionary. Returns ------- palette : seaborn color palette Returns the list of colors as rgb tuples in an object that behaves like other seaborn color palettes. See Also -------- xkcd_palette : Make a palette with named colors from the XKCD color survey. """ palette = [crayons[name] for name in colors] return color_palette(palette, len(palette)) def cubehelix_palette(n_colors=6, start=0, rot=.4, gamma=1.0, hue=0.8, light=.85, dark=.15, reverse=False, as_cmap=False): """Make a sequential palette from the cubehelix system. This produces a colormap with linearly-decreasing (or increasing) brightness. That means that information will be preserved if printed to black and white or viewed by someone who is colorblind. "cubehelix" is also available as a matplotlib-based palette, but this function gives the user more control over the look of the palette and has a different set of defaults. In addition to using this function, it is also possible to generate a cubehelix palette generally in seaborn using a string-shorthand; see the example below. 
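
A sketch of the string shorthand mentioned above: ``"ch:"`` strings are parsed by ``_parse_cubehelix_args`` into ``cubehelix_palette()`` arguments (``s``=start, ``r``=rot, ``g``=gamma, ``h``=hue, ``l``=light, ``d``=dark), and a trailing ``"_r"`` reverses the palette:

import seaborn as sns

pal_a = sns.color_palette("ch:2.5,-.2,dark=.3")          # positional start, rot
pal_b = sns.color_palette("ch:s=.25,r=-.25,l=.7")        # abbreviated keywords
pal_c = sns.color_palette("ch:start=2,rot=.3,hue=.5_r")  # full names, reversed
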
Parameters ---------- n_colors : int Number of colors in the palette. start : float, 0 <= start <= 3 The hue at the start of the helix. rot : float Rotations around the hue wheel over the range of the palette. gamma : float 0 <= gamma Gamma factor to emphasize darker (gamma < 1) or lighter (gamma > 1) colors. hue : float, 0 <= hue <= 1 Saturation of the colors. dark : float 0 <= dark <= 1 Intensity of the darkest color in the palette. light : float 0 <= light <= 1 Intensity of the lightest color in the palette. reverse : bool If True, the palette will go from dark to light. as_cmap : bool If True, return a matplotlib colormap instead of a list of colors. Returns ------- palette or cmap : seaborn color palette or matplotlib colormap List-like object of colors as RGB tuples, or colormap object that can map continuous values to colors, depending on the value of the ``as_cmap`` parameter. See Also -------- choose_cubehelix_palette : Launch an interactive widget to select cubehelix palette parameters. dark_palette : Create a sequential palette with dark low values. light_palette : Create a sequential palette with bright low values. References ---------- Green, D. A. (2011). "A colour scheme for the display of astronomical intensity images". Bulletin of the Astromical Society of India, Vol. 39, p. 289-295. Examples -------- Generate the default palette: .. plot:: :context: close-figs >>> import seaborn as sns; sns.set() >>> sns.palplot(sns.cubehelix_palette()) Rotate backwards from the same starting location: .. plot:: :context: close-figs >>> sns.palplot(sns.cubehelix_palette(rot=-.4)) Use a different starting point and shorter rotation: .. plot:: :context: close-figs >>> sns.palplot(sns.cubehelix_palette(start=2.8, rot=.1)) Reverse the direction of the lightness ramp: .. plot:: :context: close-figs >>> sns.palplot(sns.cubehelix_palette(reverse=True)) Generate a colormap object: .. plot:: :context: close-figs >>> from numpy import arange >>> x = arange(25).reshape(5, 5) >>> cmap = sns.cubehelix_palette(as_cmap=True) >>> ax = sns.heatmap(x, cmap=cmap) Use the full lightness range: .. plot:: :context: close-figs >>> cmap = sns.cubehelix_palette(dark=0, light=1, as_cmap=True) >>> ax = sns.heatmap(x, cmap=cmap) Use through the :func:`color_palette` interface: .. plot:: :context: close-figs >>> sns.palplot(sns.color_palette("ch:2,r=.2,l=.6")) """ def get_color_function(p0, p1): # Copied from matplotlib because it lives in private module def color(x): # Apply gamma factor to emphasise low or high intensity values xg = x ** gamma # Calculate amplitude and angle of deviation from the black # to white diagonal in the plane of constant # perceived intensity. 
a = hue * xg * (1 - xg) / 2 phi = 2 * np.pi * (start / 3 + rot * x) return xg + a * (p0 * np.cos(phi) + p1 * np.sin(phi)) return color cdict = { "red": get_color_function(-0.14861, 1.78277), "green": get_color_function(-0.29227, -0.90649), "blue": get_color_function(1.97294, 0.0), } cmap = mpl.colors.LinearSegmentedColormap("cubehelix", cdict) x = np.linspace(light, dark, int(n_colors)) pal = cmap(x)[:, :3].tolist() if reverse: pal = pal[::-1] if as_cmap: x_256 = np.linspace(light, dark, 256) if reverse: x_256 = x_256[::-1] pal_256 = cmap(x_256) cmap = mpl.colors.ListedColormap(pal_256, "seaborn_cubehelix") return cmap else: return _ColorPalette(pal) def _parse_cubehelix_args(argstr): """Turn stringified cubehelix params into args/kwargs.""" if argstr.startswith("ch:"): argstr = argstr[3:] if argstr.endswith("_r"): reverse = True argstr = argstr[:-2] else: reverse = False if not argstr: return [], {"reverse": reverse} all_args = argstr.split(",") args = [float(a.strip(" ")) for a in all_args if "=" not in a] kwargs = [a.split("=") for a in all_args if "=" in a] kwargs = {k.strip(" "): float(v.strip(" ")) for k, v in kwargs} kwarg_map = dict( s="start", r="rot", g="gamma", h="hue", l="light", d="dark", # noqa: E741 ) kwargs = {kwarg_map.get(k, k): v for k, v in kwargs.items()} if reverse: kwargs["reverse"] = True return args, kwargs def set_color_codes(palette="deep"): """Change how matplotlib color shorthands are interpreted. Calling this will change how shorthand codes like "b" or "g" are interpreted by matplotlib in subsequent plots. Parameters ---------- palette : {deep, muted, pastel, dark, bright, colorblind} Named seaborn palette to use as the source of colors. See Also -------- set : Color codes can be set through the high-level seaborn style manager. set_palette : Color codes can also be set through the function that sets the matplotlib color cycle. Examples -------- Map matplotlib color codes to the default seaborn palette. .. plot:: :context: close-figs >>> import matplotlib.pyplot as plt >>> import seaborn as sns; sns.set() >>> sns.set_color_codes() >>> _ = plt.plot([0, 1], color="r") Use a different seaborn palette. .. plot:: :context: close-figs >>> sns.set_color_codes("dark") >>> _ = plt.plot([0, 1], color="g") >>> _ = plt.plot([0, 2], color="m") """ if palette == "reset": colors = [(0., 0., 1.), (0., .5, 0.), (1., 0., 0.), (.75, 0., .75), (.75, .75, 0.), (0., .75, .75), (0., 0., 0.)] elif not isinstance(palette, str): err = "set_color_codes requires a named seaborn palette" raise TypeError(err) elif palette in SEABORN_PALETTES: if not palette.endswith("6"): palette = palette + "6" colors = SEABORN_PALETTES[palette] + [(.1, .1, .1)] else: err = "Cannot set colors with palette '{}'".format(palette) raise ValueError(err) for code, color in zip("bgrmyck", colors): rgb = mpl.colors.colorConverter.to_rgb(color) mpl.colors.colorConverter.colors[code] = rgb mpl.colors.colorConverter.cache[code] = rgb seaborn-0.10.0/seaborn/rcmod.py000066400000000000000000000405371361256634400163530ustar00rootroot00000000000000"""Control plot style and scaling using the matplotlib rcParams interface.""" from distutils.version import LooseVersion import functools import matplotlib as mpl import warnings from . 
import palettes, _orig_rc_params mpl_ge_150 = LooseVersion(mpl.__version__) >= '1.5.0' mpl_ge_2 = LooseVersion(mpl.__version__) >= '2.0' __all__ = ["set", "reset_defaults", "reset_orig", "axes_style", "set_style", "plotting_context", "set_context", "set_palette"] _style_keys = [ "axes.facecolor", "axes.edgecolor", "axes.grid", "axes.axisbelow", "axes.labelcolor", "figure.facecolor", "grid.color", "grid.linestyle", "text.color", "xtick.color", "ytick.color", "xtick.direction", "ytick.direction", "lines.solid_capstyle", "patch.edgecolor", "image.cmap", "font.family", "font.sans-serif", ] if mpl_ge_2: _style_keys.extend([ "patch.force_edgecolor", "xtick.bottom", "xtick.top", "ytick.left", "ytick.right", "axes.spines.left", "axes.spines.bottom", "axes.spines.right", "axes.spines.top", ]) _context_keys = [ "font.size", "axes.labelsize", "axes.titlesize", "xtick.labelsize", "ytick.labelsize", "legend.fontsize", "axes.linewidth", "grid.linewidth", "lines.linewidth", "lines.markersize", "patch.linewidth", "xtick.major.width", "ytick.major.width", "xtick.minor.width", "ytick.minor.width", "xtick.major.size", "ytick.major.size", "xtick.minor.size", "ytick.minor.size", ] def set(context="notebook", style="darkgrid", palette="deep", font="sans-serif", font_scale=1, color_codes=True, rc=None): """Set aesthetic parameters in one step. Each set of parameters can be set directly or temporarily, see the referenced functions below for more information. Parameters ---------- context : string or dict Plotting context parameters, see :func:`plotting_context` style : string or dict Axes style parameters, see :func:`axes_style` palette : string or sequence Color palette, see :func:`color_palette` font : string Font family, see matplotlib font manager. font_scale : float, optional Separate scaling factor to independently scale the size of the font elements. color_codes : bool If ``True`` and ``palette`` is a seaborn palette, remap the shorthand color codes (e.g. "b", "g", "r", etc.) to the colors from this palette. rc : dict or None Dictionary of rc parameter mappings to override the above. """ set_context(context, font_scale) set_style(style, rc={"font.family": font}) set_palette(palette, color_codes=color_codes) if rc is not None: mpl.rcParams.update(rc) def reset_defaults(): """Restore all RC params to default settings.""" mpl.rcParams.update(mpl.rcParamsDefault) def reset_orig(): """Restore all RC params to original settings (respects custom rc).""" with warnings.catch_warnings(): warnings.simplefilter('ignore', mpl.cbook.MatplotlibDeprecationWarning) mpl.rcParams.update(_orig_rc_params) def axes_style(style=None, rc=None): """Return a parameter dict for the aesthetic style of the plots. This affects things like the color of the axes, whether a grid is enabled by default, and other aesthetic elements. This function returns an object that can be used in a ``with`` statement to temporarily change the style parameters. Parameters ---------- style : dict, None, or one of {darkgrid, whitegrid, dark, white, ticks} A dictionary of parameters or the name of a preconfigured set. rc : dict, optional Parameter mappings to override the values in the preset seaborn style dictionaries. This only updates parameters that are considered part of the style definition. Examples -------- >>> st = axes_style("whitegrid") >>> set_style("ticks", {"xtick.major.size": 8, "ytick.major.size": 8}) >>> import matplotlib.pyplot as plt >>> with axes_style("white"): ... f, ax = plt.subplots() ... 
ax.plot(x, y) # doctest: +SKIP See Also -------- set_style : set the matplotlib parameters for a seaborn theme plotting_context : return a parameter dict to to scale plot elements color_palette : define the color palette for a plot """ if style is None: style_dict = {k: mpl.rcParams[k] for k in _style_keys} elif isinstance(style, dict): style_dict = style else: styles = ["white", "dark", "whitegrid", "darkgrid", "ticks"] if style not in styles: raise ValueError("style must be one of %s" % ", ".join(styles)) # Define colors here dark_gray = ".15" light_gray = ".8" # Common parameters style_dict = { "figure.facecolor": "white", "axes.labelcolor": dark_gray, "xtick.direction": "out", "ytick.direction": "out", "xtick.color": dark_gray, "ytick.color": dark_gray, "axes.axisbelow": True, "grid.linestyle": "-", "text.color": dark_gray, "font.family": ["sans-serif"], "font.sans-serif": ["Arial", "DejaVu Sans", "Liberation Sans", "Bitstream Vera Sans", "sans-serif"], "lines.solid_capstyle": "round", "patch.edgecolor": "w", "patch.force_edgecolor": True, "image.cmap": "rocket", "xtick.top": False, "ytick.right": False, } # Set grid on or off if "grid" in style: style_dict.update({ "axes.grid": True, }) else: style_dict.update({ "axes.grid": False, }) # Set the color of the background, spines, and grids if style.startswith("dark"): style_dict.update({ "axes.facecolor": "#EAEAF2", "axes.edgecolor": "white", "grid.color": "white", "axes.spines.left": True, "axes.spines.bottom": True, "axes.spines.right": True, "axes.spines.top": True, }) elif style == "whitegrid": style_dict.update({ "axes.facecolor": "white", "axes.edgecolor": light_gray, "grid.color": light_gray, "axes.spines.left": True, "axes.spines.bottom": True, "axes.spines.right": True, "axes.spines.top": True, }) elif style in ["white", "ticks"]: style_dict.update({ "axes.facecolor": "white", "axes.edgecolor": dark_gray, "grid.color": light_gray, "axes.spines.left": True, "axes.spines.bottom": True, "axes.spines.right": True, "axes.spines.top": True, }) # Show or hide the axes ticks if style == "ticks": style_dict.update({ "xtick.bottom": True, "ytick.left": True, }) else: style_dict.update({ "xtick.bottom": False, "ytick.left": False, }) # Remove entries that are not defined in the base list of valid keys # This lets us handle matplotlib <=/> 2.0 style_dict = {k: v for k, v in style_dict.items() if k in _style_keys} # Override these settings with the provided rc dictionary if rc is not None: rc = {k: v for k, v in rc.items() if k in _style_keys} style_dict.update(rc) # Wrap in an _AxesStyle object so this can be used in a with statement style_object = _AxesStyle(style_dict) return style_object def set_style(style=None, rc=None): """Set the aesthetic style of the plots. This affects things like the color of the axes, whether a grid is enabled by default, and other aesthetic elements. Parameters ---------- style : dict, None, or one of {darkgrid, whitegrid, dark, white, ticks} A dictionary of parameters or the name of a preconfigured set. rc : dict, optional Parameter mappings to override the values in the preset seaborn style dictionaries. This only updates parameters that are considered part of the style definition. Examples -------- >>> set_style("whitegrid") >>> set_style("ticks", {"xtick.major.size": 8, "ytick.major.size": 8}) See Also -------- axes_style : return a dict of parameters or use in a ``with`` statement to temporarily set the style. 
set_context : set parameters to scale plot elements set_palette : set the default color palette for figures """ style_object = axes_style(style, rc) mpl.rcParams.update(style_object) def plotting_context(context=None, font_scale=1, rc=None): """Return a parameter dict to scale elements of the figure. This affects things like the size of the labels, lines, and other elements of the plot, but not the overall style. The base context is "notebook", and the other contexts are "paper", "talk", and "poster", which are version of the notebook parameters scaled by .8, 1.3, and 1.6, respectively. This function returns an object that can be used in a ``with`` statement to temporarily change the context parameters. Parameters ---------- context : dict, None, or one of {paper, notebook, talk, poster} A dictionary of parameters or the name of a preconfigured set. font_scale : float, optional Separate scaling factor to independently scale the size of the font elements. rc : dict, optional Parameter mappings to override the values in the preset seaborn context dictionaries. This only updates parameters that are considered part of the context definition. Examples -------- >>> c = plotting_context("poster") >>> c = plotting_context("notebook", font_scale=1.5) >>> c = plotting_context("talk", rc={"lines.linewidth": 2}) >>> import matplotlib.pyplot as plt >>> with plotting_context("paper"): ... f, ax = plt.subplots() ... ax.plot(x, y) # doctest: +SKIP See Also -------- set_context : set the matplotlib parameters to scale plot elements axes_style : return a dict of parameters defining a figure style color_palette : define the color palette for a plot """ if context is None: context_dict = {k: mpl.rcParams[k] for k in _context_keys} elif isinstance(context, dict): context_dict = context else: contexts = ["paper", "notebook", "talk", "poster"] if context not in contexts: raise ValueError("context must be in %s" % ", ".join(contexts)) # Set up dictionary of default parameters base_context = { "font.size": 12, "axes.labelsize": 12, "axes.titlesize": 12, "xtick.labelsize": 11, "ytick.labelsize": 11, "legend.fontsize": 11, "axes.linewidth": 1.25, "grid.linewidth": 1, "lines.linewidth": 1.5, "lines.markersize": 6, "patch.linewidth": 1, "xtick.major.width": 1.25, "ytick.major.width": 1.25, "xtick.minor.width": 1, "ytick.minor.width": 1, "xtick.major.size": 6, "ytick.major.size": 6, "xtick.minor.size": 4, "ytick.minor.size": 4, } # Scale all the parameters by the same factor depending on the context scaling = dict(paper=.8, notebook=1, talk=1.5, poster=2)[context] context_dict = {k: v * scaling for k, v in base_context.items()} # Now independently scale the fonts font_keys = ["axes.labelsize", "axes.titlesize", "legend.fontsize", "xtick.labelsize", "ytick.labelsize", "font.size"] font_dict = {k: context_dict[k] * font_scale for k in font_keys} context_dict.update(font_dict) # Override these settings with the provided rc dictionary if rc is not None: rc = {k: v for k, v in rc.items() if k in _context_keys} context_dict.update(rc) # Wrap in a _PlottingContext object so this can be used in a with statement context_object = _PlottingContext(context_dict) return context_object def set_context(context=None, font_scale=1, rc=None): """Set the plotting context parameters. This affects things like the size of the labels, lines, and other elements of the plot, but not the overall style. 
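
A short sketch combining the context and style machinery described here: both ``plotting_context`` and ``axes_style`` can be used as context managers, while ``font_scale`` and ``rc`` tweak individual values. The plotted data are placeholders:

import matplotlib.pyplot as plt
import seaborn as sns

with sns.plotting_context("talk", font_scale=1.2), sns.axes_style("whitegrid"):
    f, ax = plt.subplots()
    ax.plot([0, 1, 2], [0, 1, 4])

# or set them globally
sns.set_context("paper", rc={"lines.linewidth": 2})
sns.set_style("ticks")
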
The base context is "notebook", and the other contexts are "paper", "talk", and "poster", which are version of the notebook parameters scaled by .8, 1.3, and 1.6, respectively. Parameters ---------- context : dict, None, or one of {paper, notebook, talk, poster} A dictionary of parameters or the name of a preconfigured set. font_scale : float, optional Separate scaling factor to independently scale the size of the font elements. rc : dict, optional Parameter mappings to override the values in the preset seaborn context dictionaries. This only updates parameters that are considered part of the context definition. Examples -------- >>> set_context("paper") >>> set_context("talk", font_scale=1.4) >>> set_context("talk", rc={"lines.linewidth": 2}) See Also -------- plotting_context : return a dictionary of rc parameters, or use in a ``with`` statement to temporarily set the context. set_style : set the default parameters for figure style set_palette : set the default color palette for figures """ context_object = plotting_context(context, font_scale, rc) mpl.rcParams.update(context_object) class _RCAesthetics(dict): def __enter__(self): rc = mpl.rcParams self._orig = {k: rc[k] for k in self._keys} self._set(self) def __exit__(self, exc_type, exc_value, exc_tb): self._set(self._orig) def __call__(self, func): @functools.wraps(func) def wrapper(*args, **kwargs): with self: return func(*args, **kwargs) return wrapper class _AxesStyle(_RCAesthetics): """Light wrapper on a dict to set style temporarily.""" _keys = _style_keys _set = staticmethod(set_style) class _PlottingContext(_RCAesthetics): """Light wrapper on a dict to set context temporarily.""" _keys = _context_keys _set = staticmethod(set_context) def set_palette(palette, n_colors=None, desat=None, color_codes=False): """Set the matplotlib color cycle using a seaborn palette. Parameters ---------- palette : seaborn color paltte | matplotlib colormap | hls | husl Palette definition. Should be something that :func:`color_palette` can process. n_colors : int Number of colors in the cycle. The default number of colors will depend on the format of ``palette``, see the :func:`color_palette` documentation for more information. desat : float Proportion to desaturate each color by. color_codes : bool If ``True`` and ``palette`` is a seaborn palette, remap the shorthand color codes (e.g. "b", "g", "r", etc.) to the colors from this palette. Examples -------- >>> set_palette("Reds") >>> set_palette("Set1", 8, .75) See Also -------- color_palette : build a color palette or set the color cycle temporarily in a ``with`` statement. 
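
A minimal sketch of what ``set_palette`` (defined above) does on modern matplotlib: the chosen colors are installed on the ``axes.prop_cycle`` rc parameter, so subsequent plots pick them up automatically:

import matplotlib as mpl
import seaborn as sns

sns.set_palette("husl", 8)
colors = mpl.rcParams["axes.prop_cycle"].by_key()["color"]
print(len(colors))   # 8
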
set_context : set parameters to scale plot elements set_style : set the default parameters for figure style """ colors = palettes.color_palette(palette, n_colors, desat) if mpl_ge_150: from cycler import cycler cyl = cycler('color', colors) mpl.rcParams['axes.prop_cycle'] = cyl else: mpl.rcParams["axes.color_cycle"] = list(colors) mpl.rcParams["patch.facecolor"] = colors[0] if color_codes: try: palettes.set_color_codes(palette) except (ValueError, TypeError): pass seaborn-0.10.0/seaborn/regression.py000066400000000000000000001133001361256634400174140ustar00rootroot00000000000000"""Plotting functions for linear models (broadly construed).""" from __future__ import division import copy from textwrap import dedent import warnings import numpy as np import pandas as pd from scipy.spatial import distance import matplotlib as mpl import matplotlib.pyplot as plt try: import statsmodels assert statsmodels _has_statsmodels = True except ImportError: _has_statsmodels = False from . import utils from . import algorithms as algo from .axisgrid import FacetGrid, _facet_docs __all__ = ["lmplot", "regplot", "residplot"] class _LinearPlotter(object): """Base class for plotting relational data in tidy format. To get anything useful done you'll have to inherit from this, but setup code that can be abstracted out should be put here. """ def establish_variables(self, data, **kws): """Extract variables from data or use directly.""" self.data = data # Validate the inputs any_strings = any([isinstance(v, str) for v in kws.values()]) if any_strings and data is None: raise ValueError("Must pass `data` if using named variables.") # Set the variables for var, val in kws.items(): if isinstance(val, str): vector = data[val] elif isinstance(val, list): vector = np.asarray(val) else: vector = val if vector is not None: vector = np.squeeze(vector) if np.ndim(vector) > 1: err = "regplot inputs must be 1d" raise ValueError(err) setattr(self, var, vector) def dropna(self, *vars): """Remove observations with missing data.""" vals = [getattr(self, var) for var in vars] vals = [v for v in vals if v is not None] not_na = np.all(np.column_stack([pd.notnull(v) for v in vals]), axis=1) for var in vars: val = getattr(self, var) if val is not None: setattr(self, var, val[not_na]) def plot(self, ax): raise NotImplementedError class _RegressionPlotter(_LinearPlotter): """Plotter for numeric independent variables with regression model. This does the computations and drawing for the `regplot` function, and is thus also used indirectly by `lmplot`. 
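
A sketch of the input handling in ``establish_variables`` above: variables may be column names resolved against ``data`` or array-likes passed directly, but they must be one-dimensional, and named variables require ``data``. The tips dataset is used only for illustration:

import numpy as np
import seaborn as sns

tips = sns.load_dataset("tips")

sns.regplot(x="total_bill", y="tip", data=tips)         # names looked up in data
sns.regplot(x=tips.total_bill, y=tips.tip)              # vectors passed directly
sns.regplot(x=np.arange(50), y=np.random.normal(size=50))

# sns.regplot(x="total_bill", y="tip")  # ValueError: `data` needed for names
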
""" def __init__(self, x, y, data=None, x_estimator=None, x_bins=None, x_ci="ci", scatter=True, fit_reg=True, ci=95, n_boot=1000, units=None, seed=None, order=1, logistic=False, lowess=False, robust=False, logx=False, x_partial=None, y_partial=None, truncate=False, dropna=True, x_jitter=None, y_jitter=None, color=None, label=None): # Set member attributes self.x_estimator = x_estimator self.ci = ci self.x_ci = ci if x_ci == "ci" else x_ci self.n_boot = n_boot self.seed = seed self.scatter = scatter self.fit_reg = fit_reg self.order = order self.logistic = logistic self.lowess = lowess self.robust = robust self.logx = logx self.truncate = truncate self.x_jitter = x_jitter self.y_jitter = y_jitter self.color = color self.label = label # Validate the regression options: if sum((order > 1, logistic, robust, lowess, logx)) > 1: raise ValueError("Mutually exclusive regression options.") # Extract the data vals from the arguments or passed dataframe self.establish_variables(data, x=x, y=y, units=units, x_partial=x_partial, y_partial=y_partial) # Drop null observations if dropna: self.dropna("x", "y", "units", "x_partial", "y_partial") # Regress nuisance variables out of the data if self.x_partial is not None: self.x = self.regress_out(self.x, self.x_partial) if self.y_partial is not None: self.y = self.regress_out(self.y, self.y_partial) # Possibly bin the predictor variable, which implies a point estimate if x_bins is not None: self.x_estimator = np.mean if x_estimator is None else x_estimator x_discrete, x_bins = self.bin_predictor(x_bins) self.x_discrete = x_discrete else: self.x_discrete = self.x # Save the range of the x variable for the grid later self.x_range = self.x.min(), self.x.max() @property def scatter_data(self): """Data where each observation is a point.""" x_j = self.x_jitter if x_j is None: x = self.x else: x = self.x + np.random.uniform(-x_j, x_j, len(self.x)) y_j = self.y_jitter if y_j is None: y = self.y else: y = self.y + np.random.uniform(-y_j, y_j, len(self.y)) return x, y @property def estimate_data(self): """Data with a point estimate and CI for each discrete x value.""" x, y = self.x_discrete, self.y vals = sorted(np.unique(x)) points, cis = [], [] for val in vals: # Get the point estimate of the y variable _y = y[x == val] est = self.x_estimator(_y) points.append(est) # Compute the confidence interval for this estimate if self.x_ci is None: cis.append(None) else: units = None if self.x_ci == "sd": sd = np.std(_y) _ci = est - sd, est + sd else: if self.units is not None: units = self.units[x == val] boots = algo.bootstrap(_y, func=self.x_estimator, n_boot=self.n_boot, units=units, seed=self.seed) _ci = utils.ci(boots, self.x_ci) cis.append(_ci) return vals, points, cis def fit_regression(self, ax=None, x_range=None, grid=None): """Fit the regression model.""" # Create the grid for the regression if grid is None: if self.truncate: x_min, x_max = self.x_range else: if ax is None: x_min, x_max = x_range else: x_min, x_max = ax.get_xlim() grid = np.linspace(x_min, x_max, 100) ci = self.ci # Fit the regression if self.order > 1: yhat, yhat_boots = self.fit_poly(grid, self.order) elif self.logistic: from statsmodels.genmod.generalized_linear_model import GLM from statsmodels.genmod.families import Binomial yhat, yhat_boots = self.fit_statsmodels(grid, GLM, family=Binomial()) elif self.lowess: ci = None grid, yhat = self.fit_lowess() elif self.robust: from statsmodels.robust.robust_linear_model import RLM yhat, yhat_boots = self.fit_statsmodels(grid, RLM) elif self.logx: yhat, 
yhat_boots = self.fit_logx(grid) else: yhat, yhat_boots = self.fit_fast(grid) # Compute the confidence interval at each grid point if ci is None: err_bands = None else: err_bands = utils.ci(yhat_boots, ci, axis=0) return grid, yhat, err_bands def fit_fast(self, grid): """Low-level regression and prediction using linear algebra.""" def reg_func(_x, _y): return np.linalg.pinv(_x).dot(_y) X, y = np.c_[np.ones(len(self.x)), self.x], self.y grid = np.c_[np.ones(len(grid)), grid] yhat = grid.dot(reg_func(X, y)) if self.ci is None: return yhat, None beta_boots = algo.bootstrap(X, y, func=reg_func, n_boot=self.n_boot, units=self.units, seed=self.seed).T yhat_boots = grid.dot(beta_boots).T return yhat, yhat_boots def fit_poly(self, grid, order): """Regression using numpy polyfit for higher-order trends.""" def reg_func(_x, _y): return np.polyval(np.polyfit(_x, _y, order), grid) x, y = self.x, self.y yhat = reg_func(x, y) if self.ci is None: return yhat, None yhat_boots = algo.bootstrap(x, y, func=reg_func, n_boot=self.n_boot, units=self.units, seed=self.seed) return yhat, yhat_boots def fit_statsmodels(self, grid, model, **kwargs): """More general regression function using statsmodels objects.""" import statsmodels.genmod.generalized_linear_model as glm X, y = np.c_[np.ones(len(self.x)), self.x], self.y grid = np.c_[np.ones(len(grid)), grid] def reg_func(_x, _y): try: yhat = model(_y, _x, **kwargs).fit().predict(grid) except glm.PerfectSeparationError: yhat = np.empty(len(grid)) yhat.fill(np.nan) return yhat yhat = reg_func(X, y) if self.ci is None: return yhat, None yhat_boots = algo.bootstrap(X, y, func=reg_func, n_boot=self.n_boot, units=self.units, seed=self.seed) return yhat, yhat_boots def fit_lowess(self): """Fit a locally-weighted regression, which returns its own grid.""" from statsmodels.nonparametric.smoothers_lowess import lowess grid, yhat = lowess(self.y, self.x).T return grid, yhat def fit_logx(self, grid): """Fit the model in log-space.""" X, y = np.c_[np.ones(len(self.x)), self.x], self.y grid = np.c_[np.ones(len(grid)), np.log(grid)] def reg_func(_x, _y): _x = np.c_[_x[:, 0], np.log(_x[:, 1])] return np.linalg.pinv(_x).dot(_y) yhat = grid.dot(reg_func(X, y)) if self.ci is None: return yhat, None beta_boots = algo.bootstrap(X, y, func=reg_func, n_boot=self.n_boot, units=self.units, seed=self.seed).T yhat_boots = grid.dot(beta_boots).T return yhat, yhat_boots def bin_predictor(self, bins): """Discretize a predictor by assigning value to closest bin.""" x = self.x if np.isscalar(bins): percentiles = np.linspace(0, 100, bins + 2)[1:-1] bins = np.c_[utils.percentiles(x, percentiles)] else: bins = np.c_[np.ravel(bins)] dist = distance.cdist(np.c_[x], bins) x_binned = bins[np.argmin(dist, axis=1)].ravel() return x_binned, bins.ravel() def regress_out(self, a, b): """Regress b from a keeping a's original mean.""" a_mean = a.mean() a = a - a_mean b = b - b.mean() b = np.c_[b] a_prime = a - b.dot(np.linalg.pinv(b).dot(a)) return np.asarray(a_prime + a_mean).reshape(a.shape) def plot(self, ax, scatter_kws, line_kws): """Draw the full plot.""" # Insert the plot label into the correct set of keyword arguments if self.scatter: scatter_kws["label"] = self.label else: line_kws["label"] = self.label # Use the current color cycle state as a default if self.color is None: lines, = ax.plot([], []) color = lines.get_color() lines.remove() else: color = self.color # Ensure that color is hex to avoid matplotlib weirdness color = mpl.colors.rgb2hex(mpl.colors.colorConverter.to_rgb(color)) # Let color in 
keyword arguments override overall plot color scatter_kws.setdefault("color", color) line_kws.setdefault("color", color) # Draw the constituent plots if self.scatter: self.scatterplot(ax, scatter_kws) if self.fit_reg: self.lineplot(ax, line_kws) # Label the axes if hasattr(self.x, "name"): ax.set_xlabel(self.x.name) if hasattr(self.y, "name"): ax.set_ylabel(self.y.name) def scatterplot(self, ax, kws): """Draw the data.""" # Treat the line-based markers specially, explicitly setting larger # linewidth than is provided by the seaborn style defaults. # This would ideally be handled better in matplotlib (i.e., distinguish # between edgewidth for solid glyphs and linewidth for line glyphs # but this should do for now. line_markers = ["1", "2", "3", "4", "+", "x", "|", "_"] if self.x_estimator is None: if "marker" in kws and kws["marker"] in line_markers: lw = mpl.rcParams["lines.linewidth"] else: lw = mpl.rcParams["lines.markeredgewidth"] kws.setdefault("linewidths", lw) if not hasattr(kws['color'], 'shape') or kws['color'].shape[1] < 4: kws.setdefault("alpha", .8) x, y = self.scatter_data ax.scatter(x, y, **kws) else: # TODO abstraction ci_kws = {"color": kws["color"]} ci_kws["linewidth"] = mpl.rcParams["lines.linewidth"] * 1.75 kws.setdefault("s", 50) xs, ys, cis = self.estimate_data if [ci for ci in cis if ci is not None]: for x, ci in zip(xs, cis): ax.plot([x, x], ci, **ci_kws) ax.scatter(xs, ys, **kws) def lineplot(self, ax, kws): """Draw the model.""" # Fit the regression model grid, yhat, err_bands = self.fit_regression(ax) edges = grid[0], grid[-1] # Get set default aesthetics fill_color = kws["color"] lw = kws.pop("lw", mpl.rcParams["lines.linewidth"] * 1.5) kws.setdefault("linewidth", lw) # Draw the regression line and confidence interval line, = ax.plot(grid, yhat, **kws) try: line.sticky_edges.x[:] = edges # Prevent mpl from adding margin except AttributeError: msg = "Cannot set sticky_edges; requires newer matplotlib." warnings.warn(msg, UserWarning) if err_bands is not None: ax.fill_between(grid, *err_bands, facecolor=fill_color, alpha=.15) _regression_docs = dict( model_api=dedent("""\ There are a number of mutually exclusive options for estimating the regression model. See the :ref:`tutorial ` for more information.\ """), regplot_vs_lmplot=dedent("""\ The :func:`regplot` and :func:`lmplot` functions are closely related, but the former is an axes-level function while the latter is a figure-level function that combines :func:`regplot` and :class:`FacetGrid`.\ """), x_estimator=dedent("""\ x_estimator : callable that maps vector -> scalar, optional Apply this function to each unique value of ``x`` and plot the resulting estimate. This is useful when ``x`` is a discrete variable. If ``x_ci`` is given, this estimate will be bootstrapped and a confidence interval will be drawn.\ """), x_bins=dedent("""\ x_bins : int or vector, optional Bin the ``x`` variable into discrete bins and then estimate the central tendency and a confidence interval. This binning only influences how the scatterplot is drawn; the regression is still fit to the original data. This parameter is interpreted either as the number of evenly-sized (not necessary spaced) bins or the positions of the bin centers. When this parameter is used, it implies that the default of ``x_estimator`` is ``numpy.mean``.\ """), x_ci=dedent("""\ x_ci : "ci", "sd", int in [0, 100] or None, optional Size of the confidence interval used when plotting a central tendency for discrete values of ``x``. 
If ``"ci"``, defer to the value of the ``ci`` parameter. If ``"sd"``, skip bootstrapping and show the standard deviation of the observations in each bin.\ """), scatter=dedent("""\ scatter : bool, optional If ``True``, draw a scatterplot with the underlying observations (or the ``x_estimator`` values).\ """), fit_reg=dedent("""\ fit_reg : bool, optional If ``True``, estimate and plot a regression model relating the ``x`` and ``y`` variables.\ """), ci=dedent("""\ ci : int in [0, 100] or None, optional Size of the confidence interval for the regression estimate. This will be drawn using translucent bands around the regression line. The confidence interval is estimated using a bootstrap; for large datasets, it may be advisable to avoid that computation by setting this parameter to None.\ """), n_boot=dedent("""\ n_boot : int, optional Number of bootstrap resamples used to estimate the ``ci``. The default value attempts to balance time and stability; you may want to increase this value for "final" versions of plots.\ """), units=dedent("""\ units : variable name in ``data``, optional If the ``x`` and ``y`` observations are nested within sampling units, those can be specified here. This will be taken into account when computing the confidence intervals by performing a multilevel bootstrap that resamples both units and observations (within unit). This does not otherwise influence how the regression is estimated or drawn.\ """), seed=dedent("""\ seed : int, numpy.random.Generator, or numpy.random.RandomState, optional Seed or random number generator for reproducible bootstrapping.\ """), order=dedent("""\ order : int, optional If ``order`` is greater than 1, use ``numpy.polyfit`` to estimate a polynomial regression.\ """), logistic=dedent("""\ logistic : bool, optional If ``True``, assume that ``y`` is a binary variable and use ``statsmodels`` to estimate a logistic regression model. Note that this is substantially more computationally intensive than linear regression, so you may wish to decrease the number of bootstrap resamples (``n_boot``) or set ``ci`` to None.\ """), lowess=dedent("""\ lowess : bool, optional If ``True``, use ``statsmodels`` to estimate a nonparametric lowess model (locally weighted linear regression). Note that confidence intervals cannot currently be drawn for this kind of model.\ """), robust=dedent("""\ robust : bool, optional If ``True``, use ``statsmodels`` to estimate a robust regression. This will de-weight outliers. Note that this is substantially more computationally intensive than standard linear regression, so you may wish to decrease the number of bootstrap resamples (``n_boot``) or set ``ci`` to None.\ """), logx=dedent("""\ logx : bool, optional If ``True``, estimate a linear regression of the form y ~ log(x), but plot the scatterplot and regression model in the input space. Note that ``x`` must be positive for this to work.\ """), xy_partial=dedent("""\ {x,y}_partial : strings in ``data`` or matrices Confounding variables to regress out of the ``x`` or ``y`` variables before plotting.\ """), truncate=dedent("""\ truncate : bool, optional By default, the regression line is drawn to fill the x axis limits after the scatterplot is drawn. If ``truncate`` is ``True``, it will instead be bounded by the data limits.\ """), xy_jitter=dedent("""\ {x,y}_jitter : floats, optional Add uniform random noise of this size to either the ``x`` or ``y`` variables.
The noise is added to a copy of the data after fitting the regression, and only influences the look of the scatterplot. This can be helpful when plotting variables that take discrete values.\ """), scatter_line_kws=dedent("""\ {scatter,line}_kws : dictionaries Additional keyword arguments to pass to ``plt.scatter`` and ``plt.plot``.\ """), ) _regression_docs.update(_facet_docs) def lmplot(x, y, data, hue=None, col=None, row=None, palette=None, col_wrap=None, height=5, aspect=1, markers="o", sharex=True, sharey=True, hue_order=None, col_order=None, row_order=None, legend=True, legend_out=True, x_estimator=None, x_bins=None, x_ci="ci", scatter=True, fit_reg=True, ci=95, n_boot=1000, units=None, seed=None, order=1, logistic=False, lowess=False, robust=False, logx=False, x_partial=None, y_partial=None, truncate=True, x_jitter=None, y_jitter=None, scatter_kws=None, line_kws=None, size=None): # Handle deprecations if size is not None: height = size msg = ("The `size` parameter has been renamed to `height`; " "please update your code.") warnings.warn(msg, UserWarning) # Reduce the dataframe to only needed columns need_cols = [x, y, hue, col, row, units, x_partial, y_partial] cols = np.unique([a for a in need_cols if a is not None]).tolist() data = data[cols] # Initialize the grid facets = FacetGrid(data, row, col, hue, palette=palette, row_order=row_order, col_order=col_order, hue_order=hue_order, height=height, aspect=aspect, col_wrap=col_wrap, sharex=sharex, sharey=sharey, legend_out=legend_out) # Add the markers here as FacetGrid has figured out how many levels of the # hue variable are needed and we don't want to duplicate that process if facets.hue_names is None: n_markers = 1 else: n_markers = len(facets.hue_names) if not isinstance(markers, list): markers = [markers] * n_markers if len(markers) != n_markers: raise ValueError(("markers must be a singleton or a list of markers " "for each level of the hue variable")) facets.hue_kws = {"marker": markers} # Hack to set the x limits properly, which needs to happen here # because the extent of the regression estimate is determined # by the limits of the plot if sharex: for ax in facets.axes.flat: ax.scatter(data[x], np.ones(len(data)) * data[y].mean()).remove() # Draw the regression plot on each facet regplot_kws = dict( x_estimator=x_estimator, x_bins=x_bins, x_ci=x_ci, scatter=scatter, fit_reg=fit_reg, ci=ci, n_boot=n_boot, units=units, seed=seed, order=order, logistic=logistic, lowess=lowess, robust=robust, logx=logx, x_partial=x_partial, y_partial=y_partial, truncate=truncate, x_jitter=x_jitter, y_jitter=y_jitter, scatter_kws=scatter_kws, line_kws=line_kws, ) facets.map_dataframe(regplot, x, y, **regplot_kws) # Add a legend if legend and (hue is not None) and (hue not in [col, row]): facets.add_legend() return facets lmplot.__doc__ = dedent("""\ Plot data and regression model fits across a FacetGrid. This function combines :func:`regplot` and :class:`FacetGrid`. It is intended as a convenient interface to fit regression models across conditional subsets of a dataset. When thinking about how to assign variables to different facets, a general rule is that it makes sense to use ``hue`` for the most important comparison, followed by ``col`` and ``row``. However, always think about your particular dataset and the goals of the visualization you are creating.
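For example, following that rule of thumb with the ``tips`` dataset used in the Examples below, one illustrative assignment would be ``sns.lmplot(x="total_bill", y="tip", hue="smoker", col="time", data=tips)``, reserving ``row`` for a third, less central grouping.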
{model_api} The parameters to this function span most of the options in :class:`FacetGrid`, although there may be occasional cases where you will want to use that class and :func:`regplot` directly. Parameters ---------- x, y : strings, optional Input variables; these should be column names in ``data``. {data} hue, col, row : strings Variables that define subsets of the data, which will be drawn on separate facets in the grid. See the ``*_order`` parameters to control the order of levels of this variable. {palette} {col_wrap} {height} {aspect} markers : matplotlib marker code or list of marker codes, optional Markers for the scatterplot. If a list, each marker in the list will be used for each level of the ``hue`` variable. {share_xy} {{hue,col,row}}_order : lists, optional Order for the levels of the faceting variables. By default, this will be the order that the levels appear in ``data`` or, if the variables are pandas categoricals, the category order. legend : bool, optional If ``True`` and there is a ``hue`` variable, add a legend. {legend_out} {x_estimator} {x_bins} {x_ci} {scatter} {fit_reg} {ci} {n_boot} {units} {seed} {order} {logistic} {lowess} {robust} {logx} {xy_partial} {truncate} {xy_jitter} {scatter_line_kws} See Also -------- regplot : Plot data and a conditional model fit. FacetGrid : Subplot grid for plotting conditional relationships. pairplot : Combine :func:`regplot` and :class:`PairGrid` (when used with ``kind="reg"``). Notes ----- {regplot_vs_lmplot} Examples -------- These examples focus on basic regression model plots to exhibit the various faceting options; see the :func:`regplot` docs for demonstrations of the other options for plotting the data and models. There are also other examples for how to manipulate plot using the returned object on the :class:`FacetGrid` docs. Plot a simple linear relationship between two variables: .. plot:: :context: close-figs >>> import seaborn as sns; sns.set(color_codes=True) >>> tips = sns.load_dataset("tips") >>> g = sns.lmplot(x="total_bill", y="tip", data=tips) Condition on a third variable and plot the levels in different colors: .. plot:: :context: close-figs >>> g = sns.lmplot(x="total_bill", y="tip", hue="smoker", data=tips) Use different markers as well as colors so the plot will reproduce to black-and-white more easily: .. plot:: :context: close-figs >>> g = sns.lmplot(x="total_bill", y="tip", hue="smoker", data=tips, ... markers=["o", "x"]) Use a different color palette: .. plot:: :context: close-figs >>> g = sns.lmplot(x="total_bill", y="tip", hue="smoker", data=tips, ... palette="Set1") Map ``hue`` levels to colors with a dictionary: .. plot:: :context: close-figs >>> g = sns.lmplot(x="total_bill", y="tip", hue="smoker", data=tips, ... palette=dict(Yes="g", No="m")) Plot the levels of the third variable across different columns: .. plot:: :context: close-figs >>> g = sns.lmplot(x="total_bill", y="tip", col="smoker", data=tips) Change the height and aspect ratio of the facets: .. plot:: :context: close-figs >>> g = sns.lmplot(x="size", y="total_bill", hue="day", col="day", ... data=tips, height=6, aspect=.4, x_jitter=.1) Wrap the levels of the column variable into multiple rows: .. plot:: :context: close-figs >>> g = sns.lmplot(x="total_bill", y="tip", col="day", hue="day", ... data=tips, col_wrap=2, height=3) Condition on two variables to make a full grid: .. plot:: :context: close-figs >>> g = sns.lmplot(x="total_bill", y="tip", row="sex", col="time", ... 
data=tips, height=3) Use methods on the returned :class:`FacetGrid` instance to further tweak the plot: .. plot:: :context: close-figs >>> g = sns.lmplot(x="total_bill", y="tip", row="sex", col="time", ... data=tips, height=3) >>> g = (g.set_axis_labels("Total bill (US Dollars)", "Tip") ... .set(xlim=(0, 60), ylim=(0, 12), ... xticks=[10, 30, 50], yticks=[2, 6, 10]) ... .fig.subplots_adjust(wspace=.02)) """).format(**_regression_docs) def regplot(x, y, data=None, x_estimator=None, x_bins=None, x_ci="ci", scatter=True, fit_reg=True, ci=95, n_boot=1000, units=None, seed=None, order=1, logistic=False, lowess=False, robust=False, logx=False, x_partial=None, y_partial=None, truncate=True, dropna=True, x_jitter=None, y_jitter=None, label=None, color=None, marker="o", scatter_kws=None, line_kws=None, ax=None): plotter = _RegressionPlotter(x, y, data, x_estimator, x_bins, x_ci, scatter, fit_reg, ci, n_boot, units, seed, order, logistic, lowess, robust, logx, x_partial, y_partial, truncate, dropna, x_jitter, y_jitter, color, label) if ax is None: ax = plt.gca() scatter_kws = {} if scatter_kws is None else copy.copy(scatter_kws) scatter_kws["marker"] = marker line_kws = {} if line_kws is None else copy.copy(line_kws) plotter.plot(ax, scatter_kws, line_kws) return ax regplot.__doc__ = dedent("""\ Plot data and a linear regression model fit. {model_api} Parameters ---------- x, y : string, series, or vector array Input variables. If strings, these should correspond with column names in ``data``. When pandas objects are used, axes will be labeled with the series name. {data} {x_estimator} {x_bins} {x_ci} {scatter} {fit_reg} {ci} {n_boot} {units} {seed} {order} {logistic} {lowess} {robust} {logx} {xy_partial} {truncate} {xy_jitter} label : string Label to apply to either the scatterplot or regression line (if ``scatter`` is ``False``) for use in a legend. color : matplotlib color Color to apply to all plot elements; will be superseded by colors passed in ``scatter_kws`` or ``line_kws``. marker : matplotlib marker code Marker to use for the scatterplot glyphs. {scatter_line_kws} ax : matplotlib Axes, optional Axes object to draw the plot onto, otherwise uses the current Axes. Returns ------- ax : matplotlib Axes The Axes object containing the plot. See Also -------- lmplot : Combine :func:`regplot` and :class:`FacetGrid` to plot multiple linear relationships in a dataset. jointplot : Combine :func:`regplot` and :class:`JointGrid` (when used with ``kind="reg"``). pairplot : Combine :func:`regplot` and :class:`PairGrid` (when used with ``kind="reg"``). residplot : Plot the residuals of a linear regression model. Notes ----- {regplot_vs_lmplot} It's also easy to combine :func:`regplot` and :class:`JointGrid` or :class:`PairGrid` through the :func:`jointplot` and :func:`pairplot` functions, although these do not directly accept all of :func:`regplot`'s parameters. Examples -------- Plot the relationship between two variables in a DataFrame: .. plot:: :context: close-figs >>> import seaborn as sns; sns.set(color_codes=True) >>> tips = sns.load_dataset("tips") >>> ax = sns.regplot(x="total_bill", y="tip", data=tips) Plot with two variables defined as numpy arrays; use a different color: .. plot:: :context: close-figs >>> import numpy as np; np.random.seed(8) >>> mean, cov = [4, 6], [(1.5, .7), (.7, 1)] >>> x, y = np.random.multivariate_normal(mean, cov, 80).T >>> ax = sns.regplot(x=x, y=y, color="g") Plot with two variables defined as pandas Series; use a different marker: ..
plot:: :context: close-figs >>> import pandas as pd >>> x, y = pd.Series(x, name="x_var"), pd.Series(y, name="y_var") >>> ax = sns.regplot(x=x, y=y, marker="+") Use a 68% confidence interval, which corresponds with the standard error of the estimate, and extend the regression line to the axis limits: .. plot:: :context: close-figs >>> ax = sns.regplot(x=x, y=y, ci=68, truncate=False) Plot with a discrete ``x`` variable and add some jitter: .. plot:: :context: close-figs >>> ax = sns.regplot(x="size", y="total_bill", data=tips, x_jitter=.1) Plot with a discrete ``x`` variable showing means and confidence intervals for unique values: .. plot:: :context: close-figs >>> ax = sns.regplot(x="size", y="total_bill", data=tips, ... x_estimator=np.mean) Plot with a continuous variable divided into discrete bins: .. plot:: :context: close-figs >>> ax = sns.regplot(x=x, y=y, x_bins=4) Fit a higher-order polynomial regression: .. plot:: :context: close-figs >>> ans = sns.load_dataset("anscombe") >>> ax = sns.regplot(x="x", y="y", data=ans.loc[ans.dataset == "II"], ... scatter_kws={{"s": 80}}, ... order=2, ci=None) Fit a robust regression and don't plot a confidence interval: .. plot:: :context: close-figs >>> ax = sns.regplot(x="x", y="y", data=ans.loc[ans.dataset == "III"], ... scatter_kws={{"s": 80}}, ... robust=True, ci=None) Fit a logistic regression; jitter the y variable and use fewer bootstrap iterations: .. plot:: :context: close-figs >>> tips["big_tip"] = (tips.tip / tips.total_bill) > .175 >>> ax = sns.regplot(x="total_bill", y="big_tip", data=tips, ... logistic=True, n_boot=500, y_jitter=.03) Fit the regression model using log(x): .. plot:: :context: close-figs >>> ax = sns.regplot(x="size", y="total_bill", data=tips, ... x_estimator=np.mean, logx=True) """).format(**_regression_docs) def residplot(x, y, data=None, lowess=False, x_partial=None, y_partial=None, order=1, robust=False, dropna=True, label=None, color=None, scatter_kws=None, line_kws=None, ax=None): """Plot the residuals of a linear regression. This function will regress y on x (possibly as a robust or polynomial regression) and then draw a scatterplot of the residuals. You can optionally fit a lowess smoother to the residual plot, which can help in determining if there is structure to the residuals. Parameters ---------- x : vector or string Data or column name in `data` for the predictor variable. y : vector or string Data or column name in `data` for the response variable. data : DataFrame, optional DataFrame to use if `x` and `y` are column names. lowess : boolean, optional Fit a lowess smoother to the residual scatterplot. {x, y}_partial : matrix or string(s) , optional Matrix with same first dimension as `x`, or column name(s) in `data`. These variables are treated as confounding and are removed from the `x` or `y` variables before plotting. order : int, optional Order of the polynomial to fit when calculating the residuals. robust : boolean, optional Fit a robust linear regression when calculating the residuals. dropna : boolean, optional If True, ignore observations with missing data when fitting and plotting. label : string, optional Label that will be used in any plot legends. color : matplotlib color, optional Color to use for all elements of the plot. {scatter, line}_kws : dictionaries, optional Additional keyword arguments passed to scatter() and plot() for drawing the components of the plot. ax : matplotlib axis, optional Plot into this axis, otherwise grab the current axis or make a new one if not existing. 
Returns ------- ax: matplotlib axes Axes with the regression plot. See Also -------- regplot : Plot a simple linear regression model. jointplot : Draw a :func:`residplot` with univariate marginal distributions (when used with ``kind="resid"``). """ plotter = _RegressionPlotter(x, y, data, ci=None, order=order, robust=robust, x_partial=x_partial, y_partial=y_partial, dropna=dropna, color=color, label=label) if ax is None: ax = plt.gca() # Calculate the residual from a linear regression _, yhat, _ = plotter.fit_regression(grid=plotter.x) plotter.y = plotter.y - yhat # Set the regression option on the plotter if lowess: plotter.lowess = True else: plotter.fit_reg = False # Plot a horizontal line at 0 ax.axhline(0, ls=":", c=".2") # Draw the scatterplot scatter_kws = {} if scatter_kws is None else scatter_kws.copy() line_kws = {} if line_kws is None else line_kws.copy() plotter.plot(ax, scatter_kws, line_kws) return ax seaborn-0.10.0/seaborn/relational.py000066400000000000000000002016131361256634400173730ustar00rootroot00000000000000from __future__ import division from itertools import product from textwrap import dedent from distutils.version import LooseVersion import warnings import numpy as np import pandas as pd import matplotlib as mpl import matplotlib.pyplot as plt from . import utils from .utils import (categorical_order, get_color_cycle, ci_to_errsize, sort_df, remove_na, locator_to_legend_entries) from .algorithms import bootstrap from .palettes import (color_palette, cubehelix_palette, _parse_cubehelix_args, QUAL_PALETTES) from .axisgrid import FacetGrid, _facet_docs __all__ = ["relplot", "scatterplot", "lineplot"] class _RelationalPlotter(object): if LooseVersion(mpl.__version__) >= "2.0": default_markers = ["o", "X", "s", "P", "D", "^", "v", "p"] else: default_markers = ["o", "s", "D", "^", "v", "p"] default_dashes = ["", (4, 1.5), (1, 1), (3, 1, 1.5, 1), (5, 1, 1, 1), (5, 1, 2, 1, 2, 1)] def establish_variables(self, x=None, y=None, hue=None, size=None, style=None, units=None, data=None): """Parse the inputs to define data for plotting.""" # Initialize label variables x_label = y_label = hue_label = size_label = style_label = None # Option 1: # We have a wide-form datast # -------------------------- if x is None and y is None: self.input_format = "wide" # Option 1a: # The input data is a Pandas DataFrame # ------------------------------------ # We will assign the index to x, the values to y, # and the columns names to both hue and style # TODO accept a dict and try to coerce to a dataframe? if isinstance(data, pd.DataFrame): # Enforce numeric values try: data.astype(np.float) except ValueError: err = "A wide-form input must have only numeric values." 
raise ValueError(err) plot_data = data.copy() plot_data.loc[:, "x"] = data.index plot_data = pd.melt(plot_data, "x", var_name="hue", value_name="y") plot_data["style"] = plot_data["hue"] x_label = getattr(data.index, "name", None) hue_label = style_label = getattr(plot_data.columns, "name", None) # Option 1b: # The input data is an array or list # ---------------------------------- else: if not len(data): plot_data = pd.DataFrame(columns=["x", "y"]) elif np.isscalar(np.asarray(data)[0]): # The input data is a flat list(like): # We assign a numeric index for x and use the values for y x = getattr(data, "index", np.arange(len(data))) plot_data = pd.DataFrame(dict(x=x, y=data)) elif hasattr(data, "shape"): # The input data is an array(like): # We either use the index or assign a numeric index to x, # the values to y, and id keys to both hue and style plot_data = pd.DataFrame(data) plot_data.loc[:, "x"] = plot_data.index plot_data = pd.melt(plot_data, "x", var_name="hue", value_name="y") plot_data["style"] = plot_data["hue"] else: # The input data is a nested list: We will either use the # index or assign a numeric index for x, use the values # for y, and use numeric hue/style identifiers. plot_data = [] for i, data_i in enumerate(data): x = getattr(data_i, "index", np.arange(len(data_i))) n = getattr(data_i, "name", i) data_i = dict(x=x, y=data_i, hue=n, style=n, size=None) plot_data.append(pd.DataFrame(data_i)) plot_data = pd.concat(plot_data) # Option 2: # We have long-form data # ---------------------- elif x is not None and y is not None: self.input_format = "long" # Use variables as from the dataframe if specified if data is not None: x = data.get(x, x) y = data.get(y, y) hue = data.get(hue, hue) size = data.get(size, size) style = data.get(style, style) units = data.get(units, units) # Validate the inputs for var in [x, y, hue, size, style, units]: if isinstance(var, str): err = "Could not interpret input '{}'".format(var) raise ValueError(err) # Extract variable names x_label = getattr(x, "name", None) y_label = getattr(y, "name", None) hue_label = getattr(hue, "name", None) size_label = getattr(size, "name", None) style_label = getattr(style, "name", None) # Reassemble into a DataFrame plot_data = dict( x=x, y=y, hue=hue, style=style, size=size, units=units ) plot_data = pd.DataFrame(plot_data) # Option 3: # Only one variable argument # -------------------------- else: err = ("Either both or neither of `x` and `y` must be specified " "(but try passing to `data`, which is more flexible).") raise ValueError(err) # ---- Post-processing # Assign default values for missing attribute variables for attr in ["hue", "style", "size", "units"]: if attr not in plot_data: plot_data[attr] = None # Determine which semantics have (some) data plot_valid = plot_data.notnull().any() semantics = ["x", "y"] + [ name for name in ["hue", "size", "style"] if plot_valid[name] ] self.x_label = x_label self.y_label = y_label self.hue_label = hue_label self.size_label = size_label self.style_label = style_label self.plot_data = plot_data self.semantics = semantics return plot_data def categorical_to_palette(self, data, order, palette): """Determine colors when the hue variable is qualitative.""" # -- Identify the order and name of the levels if order is None: levels = categorical_order(data) else: levels = order n_colors = len(levels) # -- Identify the set of colors to use if isinstance(palette, dict): missing = set(levels) - set(palette) if any(missing): err = "The palette dictionary is missing keys: {}" raise 
ValueError(err.format(missing)) else: if palette is None: if n_colors <= len(get_color_cycle()): colors = color_palette(None, n_colors) else: colors = color_palette("husl", n_colors) elif isinstance(palette, list): if len(palette) != n_colors: err = "The palette list has the wrong number of colors." raise ValueError(err) colors = palette else: colors = color_palette(palette, n_colors) palette = dict(zip(levels, colors)) return levels, palette def numeric_to_palette(self, data, order, palette, norm): """Determine colors when the hue variable is quantitative.""" levels = list(np.sort(remove_na(data.unique()))) # TODO do we want to do something complicated to ensure contrast # at the extremes of the colormap against the background? # Identify the colormap to use palette = "ch:" if palette is None else palette if isinstance(palette, mpl.colors.Colormap): cmap = palette elif str(palette).startswith("ch:"): args, kwargs = _parse_cubehelix_args(palette) cmap = cubehelix_palette(0, *args, as_cmap=True, **kwargs) elif isinstance(palette, dict): colors = [palette[k] for k in sorted(palette)] cmap = mpl.colors.ListedColormap(colors) else: try: cmap = mpl.cm.get_cmap(palette) except (ValueError, TypeError): err = "Palette {} not understood" raise ValueError(err.format(palette)) if norm is None: norm = mpl.colors.Normalize() elif isinstance(norm, tuple): norm = mpl.colors.Normalize(*norm) elif not isinstance(norm, mpl.colors.Normalize): err = "``hue_norm`` must be None, tuple, or Normalize object." raise ValueError(err) if not norm.scaled(): norm(np.asarray(data.dropna())) # TODO this should also use color_lookup, but that needs the # class attributes that get set after using this function... if not isinstance(palette, dict): palette = dict(zip(levels, cmap(norm(levels)))) # palette = {l: cmap(norm([l, 1]))[0] for l in levels} return levels, palette, cmap, norm def color_lookup(self, key): """Return the color corresponding to the hue level.""" if self.hue_type == "numeric": normed = self.hue_norm(key) if np.ma.is_masked(normed): normed = np.nan return self.cmap(normed) elif self.hue_type == "categorical": return self.palette[key] def size_lookup(self, key): """Return the size corresponding to the size level.""" if self.size_type == "numeric": min_size, max_size = self.size_range val = self.size_norm(key) if np.ma.is_masked(val): return 0 return min_size + val * (max_size - min_size) elif self.size_type == "categorical": return self.sizes[key] def style_to_attributes(self, levels, style, defaults, name): """Convert a style argument to a dict of matplotlib attributes.""" if style is True: attrdict = dict(zip(levels, defaults)) elif style and isinstance(style, dict): attrdict = style elif style: attrdict = dict(zip(levels, style)) else: attrdict = {} if attrdict: missing_levels = set(levels) - set(attrdict) if any(missing_levels): err = "These `style` levels are missing {}: {}" raise ValueError(err.format(name, missing_levels)) return attrdict def subset_data(self): """Return (x, y) data for each subset defined by semantics.""" data = self.plot_data all_true = pd.Series(True, data.index) iter_levels = product(self.hue_levels, self.size_levels, self.style_levels) for hue, size, style in iter_levels: hue_rows = all_true if hue is None else data["hue"] == hue size_rows = all_true if size is None else data["size"] == size style_rows = all_true if style is None else data["style"] == style rows = hue_rows & size_rows & style_rows data["units"] = data.units.fillna("") subset_data = data.loc[rows, ["units", "x", "y"]].dropna()
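# Illustrative note (a sketch, not part of the original logic): each iteration yields one semantic subset keyed by its levels, and the plotters below consume it roughly as ``for (hue, size, style), subset in self.subset_data(): ...``, drawing one artist from ``subset["x"]`` and ``subset["y"]`` with the levels mapped through ``self.palette``, ``self.sizes``, and ``self.markers``/``self.dashes``.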
if not len(subset_data): continue if self.sort: subset_data = sort_df(subset_data, ["units", "x", "y"]) if self.units is None: subset_data = subset_data.drop("units", axis=1) yield (hue, size, style), subset_data def parse_hue(self, data, palette, order, norm): """Determine what colors to use given data characteristics.""" if self._empty_data(data): # Set default values when not using a hue mapping levels = [None] limits = None norm = None palette = {} var_type = None cmap = None else: # Determine what kind of hue mapping we want var_type = self._semantic_type(data) # Override depending on the type of the palette argument if palette in QUAL_PALETTES: var_type = "categorical" elif norm is not None: var_type = "numeric" elif isinstance(palette, (dict, list)): var_type = "categorical" # -- Option 1: categorical color palette if var_type == "categorical": cmap = None limits = None levels, palette = self.categorical_to_palette( # List comprehension here is required to # overcome differences in the way pandas # externalizes numpy datetime64 list(data), order, palette ) # -- Option 2: sequential color palette elif var_type == "numeric": data = pd.to_numeric(data) levels, palette, cmap, norm = self.numeric_to_palette( data, order, palette, norm ) limits = norm.vmin, norm.vmax self.hue_levels = levels self.hue_norm = norm self.hue_limits = limits self.hue_type = var_type self.palette = palette self.cmap = cmap # Update data as it may have changed dtype self.plot_data["hue"] = data def parse_size(self, data, sizes, order, norm): """Determine the linewidths given data characteristics.""" # TODO could break out two options like parse_hue does for clarity if self._empty_data(data): levels = [None] limits = None norm = None sizes = {} var_type = None width_range = None else: var_type = self._semantic_type(data) # Override depending on the type of the sizes argument if norm is not None: var_type = "numeric" elif isinstance(sizes, (dict, list)): var_type = "categorical" if var_type == "categorical": levels = categorical_order(data, order) numbers = np.arange(1, 1 + len(levels))[::-1] elif var_type == "numeric": data = pd.to_numeric(data) levels = numbers = np.sort(remove_na(data.unique())) if isinstance(sizes, (dict, list)): # Use literal size values if isinstance(sizes, list): if len(sizes) != len(levels): err = "The `sizes` list has wrong number of levels" raise ValueError(err) sizes = dict(zip(levels, sizes)) missing = set(levels) - set(sizes) if any(missing): err = "Missing sizes for the following levels: {}" raise ValueError(err.format(missing)) width_range = min(sizes.values()), max(sizes.values()) try: limits = min(sizes.keys()), max(sizes.keys()) except TypeError: limits = None else: # Infer the range of sizes to use if sizes is None: min_width, max_width = self._default_size_range else: try: min_width, max_width = sizes except (TypeError, ValueError): err = "sizes argument {} not understood".format(sizes) raise ValueError(err) width_range = min_width, max_width if norm is None: norm = mpl.colors.Normalize() elif isinstance(norm, tuple): norm = mpl.colors.Normalize(*norm) elif not isinstance(norm, mpl.colors.Normalize): err = ("``size_norm`` must be None, tuple, " "or Normalize object.") raise ValueError(err) norm.clip = True if not norm.scaled(): norm(np.asarray(numbers)) limits = norm.vmin, norm.vmax scl = norm(numbers) widths = np.asarray(min_width + scl * (max_width - min_width)) if scl.mask.any(): widths[scl.mask] = 0 sizes = dict(zip(levels, widths)) # sizes = {l: min_width + norm(n) * 
(max_width - min_width) # for l, n in zip(levels, numbers)} if var_type == "categorical": # Don't keep a reference to the norm, which will avoid # downstream code from switching to numerical interpretation norm = None self.sizes = sizes self.size_type = var_type self.size_levels = levels self.size_norm = norm self.size_limits = limits self.size_range = width_range # Update data as it may have changed dtype self.plot_data["size"] = data def parse_style(self, data, markers, dashes, order): """Determine the markers and line dashes.""" if self._empty_data(data): levels = [None] dashes = {} markers = {} else: if order is None: # List comprehension here is required to # overcome differences in the way pandas # coerces numpy datatypes levels = categorical_order(list(data)) else: levels = order markers = self.style_to_attributes( levels, markers, self.default_markers, "markers" ) dashes = self.style_to_attributes( levels, dashes, self.default_dashes, "dashes" ) paths = {} filled_markers = [] for k, m in markers.items(): if not isinstance(m, mpl.markers.MarkerStyle): m = mpl.markers.MarkerStyle(m) paths[k] = m.get_path().transformed(m.get_transform()) filled_markers.append(m.is_filled()) # Mixture of filled and unfilled markers will show line art markers # in the edge color, which defaults to white. This can be handled, # but there would be additional complexity with specifying the # weight of the line art markers without overwhelming the filled # ones with the edges. So for now, we will disallow mixtures. if any(filled_markers) and not all(filled_markers): err = "Filled and line art markers cannot be mixed" raise ValueError(err) self.style_levels = levels self.dashes = dashes self.markers = markers self.paths = paths def _empty_data(self, data): """Test if a series is completely missing.""" return data.isnull().all() def _semantic_type(self, data): """Determine if data should considered numeric or categorical.""" if self.input_format == "wide": return "categorical" elif isinstance(data, pd.Series) and data.dtype.name == "category": return "categorical" else: try: float_data = data.astype(np.float) values = np.unique(float_data.dropna()) # TODO replace with isin when pinned np version >= 1.13 if np.all(np.in1d(values, np.array([0., 1.]))): return "categorical" return "numeric" except (ValueError, TypeError): return "categorical" def label_axes(self, ax): """Set x and y labels with visibility that matches the ticklabels.""" if self.x_label is not None: x_visible = any(t.get_visible() for t in ax.get_xticklabels()) ax.set_xlabel(self.x_label, visible=x_visible) if self.y_label is not None: y_visible = any(t.get_visible() for t in ax.get_yticklabels()) ax.set_ylabel(self.y_label, visible=y_visible) def add_legend_data(self, ax): """Add labeled artists to represent the different plot semantics.""" verbosity = self.legend if verbosity not in ["brief", "full"]: err = "`legend` must be 'brief', 'full', or False" raise ValueError(err) legend_kwargs = {} keys = [] title_kws = dict(color="w", s=0, linewidth=0, marker="", dashes="") def update(var_name, val_name, **kws): key = var_name, val_name if key in legend_kwargs: legend_kwargs[key].update(**kws) else: keys.append(key) legend_kwargs[key] = dict(**kws) # -- Add a legend for hue semantics if verbosity == "brief" and self.hue_type == "numeric": if isinstance(self.hue_norm, mpl.colors.LogNorm): locator = mpl.ticker.LogLocator(numticks=3) else: locator = mpl.ticker.MaxNLocator(nbins=3) hue_levels, hue_formatted_levels = locator_to_legend_entries( locator, 
self.hue_limits, self.plot_data["hue"].dtype ) else: hue_levels = hue_formatted_levels = self.hue_levels # Add the hue semantic subtitle if self.hue_label is not None: update((self.hue_label, "title"), self.hue_label, **title_kws) # Add the hue semantic labels for level, formatted_level in zip(hue_levels, hue_formatted_levels): if level is not None: color = self.color_lookup(level) update(self.hue_label, formatted_level, color=color) # -- Add a legend for size semantics if verbosity == "brief" and self.size_type == "numeric": if isinstance(self.size_norm, mpl.colors.LogNorm): locator = mpl.ticker.LogLocator(numticks=3) else: locator = mpl.ticker.MaxNLocator(nbins=3) size_levels, size_formatted_levels = locator_to_legend_entries( locator, self.size_limits, self.plot_data["size"].dtype) else: size_levels = size_formatted_levels = self.size_levels # Add the size semantic subtitle if self.size_label is not None: update((self.size_label, "title"), self.size_label, **title_kws) # Add the size semantic labels for level, formatted_level in zip(size_levels, size_formatted_levels): if level is not None: size = self.size_lookup(level) update( self.size_label, formatted_level, linewidth=size, s=size) # -- Add a legend for style semantics # Add the style semantic title if self.style_label is not None: update((self.style_label, "title"), self.style_label, **title_kws) # Add the style semantic labels for level in self.style_levels: if level is not None: update(self.style_label, level, marker=self.markers.get(level, ""), dashes=self.dashes.get(level, "")) func = getattr(ax, self._legend_func) legend_data = {} legend_order = [] for key in keys: _, label = key kws = legend_kwargs[key] kws.setdefault("color", ".2") use_kws = {} for attr in self._legend_attributes + ["visible"]: if attr in kws: use_kws[attr] = kws[attr] artist = func([], [], label=label, **use_kws) if self._legend_func == "plot": artist = artist[0] legend_data[key] = artist legend_order.append(key) self.legend_data = legend_data self.legend_order = legend_order class _LinePlotter(_RelationalPlotter): _legend_attributes = ["color", "linewidth", "marker", "dashes"] _legend_func = "plot" def __init__(self, x=None, y=None, hue=None, size=None, style=None, data=None, palette=None, hue_order=None, hue_norm=None, sizes=None, size_order=None, size_norm=None, dashes=None, markers=None, style_order=None, units=None, estimator=None, ci=None, n_boot=None, seed=None, sort=True, err_style=None, err_kws=None, legend=None): plot_data = self.establish_variables( x, y, hue, size, style, units, data ) self._default_size_range = ( np.r_[.5, 2] * mpl.rcParams["lines.linewidth"] ) self.parse_hue(plot_data["hue"], palette, hue_order, hue_norm) self.parse_size(plot_data["size"], sizes, size_order, size_norm) self.parse_style(plot_data["style"], markers, dashes, style_order) self.units = units self.estimator = estimator self.ci = ci self.n_boot = n_boot self.seed = seed self.sort = sort self.err_style = err_style self.err_kws = {} if err_kws is None else err_kws self.legend = legend def aggregate(self, vals, grouper, units=None): """Compute an estimate and confidence interval using grouper.""" func = self.estimator ci = self.ci n_boot = self.n_boot seed = self.seed # Define a "null" CI for when we only have one value null_ci = pd.Series(index=["low", "high"], dtype=np.float) # Function to bootstrap in the context of a pandas group by def bootstrapped_cis(vals): if len(vals) <= 1: return null_ci boots = bootstrap(vals, func=func, n_boot=n_boot, seed=seed) cis = 
utils.ci(boots, ci) return pd.Series(cis, ["low", "high"]) # Group and get the aggregation estimate grouped = vals.groupby(grouper, sort=self.sort) est = grouped.agg(func) # Exit early if we don't want a confidence interval if ci is None: return est.index, est, None # Compute the error bar extents if ci == "sd": sd = grouped.std() cis = pd.DataFrame(np.c_[est - sd, est + sd], index=est.index, columns=["low", "high"]).stack() else: cis = grouped.apply(bootstrapped_cis) # Unpack the CIs into "wide" format for plotting if cis.notnull().any(): cis = cis.unstack().reindex(est.index) else: cis = None return est.index, est, cis def plot(self, ax, kws): """Draw the plot onto an axes, passing matplotlib kwargs.""" # Draw a test plot, using the passed in kwargs. The goal here is to # honor both (a) the current state of the plot cycler and (b) the # specified kwargs on all the lines we will draw, overriding when # relevant with the data semantics. Note that we won't cycle # internally; in other words, if ``hue`` is not used, all elements will # have the same color, but they will have the color that you would have # gotten from the corresponding matplotlib function, and calling the # function will advance the axes property cycle. scout, = ax.plot([], [], **kws) orig_color = kws.pop("color", scout.get_color()) orig_marker = kws.pop("marker", scout.get_marker()) orig_linewidth = kws.pop("linewidth", kws.pop("lw", scout.get_linewidth())) orig_dashes = kws.pop("dashes", "") kws.setdefault("markeredgewidth", kws.pop("mew", .75)) kws.setdefault("markeredgecolor", kws.pop("mec", "w")) scout.remove() # Set default error kwargs err_kws = self.err_kws.copy() if self.err_style == "band": err_kws.setdefault("alpha", .2) elif self.err_style == "bars": pass elif self.err_style is not None: err = "`err_style` must be 'band' or 'bars', not {}" raise ValueError(err.format(self.err_style)) # Loop over the semantic subsets and draw a line for each for semantics, data in self.subset_data(): hue, size, style = semantics x, y, units = data["x"], data["y"], data.get("units", None) if self.estimator is not None: if self.units is not None: err = "estimator must be None when specifying units" raise ValueError(err) x, y, y_ci = self.aggregate(y, x, units) else: y_ci = None kws["color"] = self.palette.get(hue, orig_color) kws["dashes"] = self.dashes.get(style, orig_dashes) kws["marker"] = self.markers.get(style, orig_marker) kws["linewidth"] = self.sizes.get(size, orig_linewidth) line, = ax.plot([], [], **kws) line_color = line.get_color() line_alpha = line.get_alpha() line_capstyle = line.get_solid_capstyle() line.remove() # --- Draw the main line x, y = np.asarray(x), np.asarray(y) if self.units is None: line, = ax.plot(x, y, **kws) else: for u in units.unique(): rows = np.asarray(units == u) ax.plot(x[rows], y[rows], **kws) # --- Draw the confidence intervals if y_ci is not None: low, high = np.asarray(y_ci["low"]), np.asarray(y_ci["high"]) if self.err_style == "band": ax.fill_between(x, low, high, color=line_color, **err_kws) elif self.err_style == "bars": y_err = ci_to_errsize((low, high), y) ebars = ax.errorbar(x, y, y_err, linestyle="", color=line_color, alpha=line_alpha, **err_kws) # Set the capstyle properly on the error bars for obj in ebars.get_children(): try: obj.set_capstyle(line_capstyle) except AttributeError: # Does not exist on mpl < 2.2 pass # Finalize the axes details self.label_axes(ax) if self.legend: self.add_legend_data(ax) handles, _ = ax.get_legend_handles_labels() if handles: ax.legend() class 
_ScatterPlotter(_RelationalPlotter): _legend_attributes = ["color", "s", "marker"] _legend_func = "scatter" def __init__(self, x=None, y=None, hue=None, size=None, style=None, data=None, palette=None, hue_order=None, hue_norm=None, sizes=None, size_order=None, size_norm=None, dashes=None, markers=None, style_order=None, x_bins=None, y_bins=None, units=None, estimator=None, ci=None, n_boot=None, alpha=None, x_jitter=None, y_jitter=None, legend=None): plot_data = self.establish_variables( x, y, hue, size, style, units, data ) self._default_size_range = ( np.r_[.5, 2] * np.square(mpl.rcParams["lines.markersize"]) ) self.parse_hue(plot_data["hue"], palette, hue_order, hue_norm) self.parse_size(plot_data["size"], sizes, size_order, size_norm) self.parse_style(plot_data["style"], markers, None, style_order) self.units = units self.alpha = alpha self.legend = legend def plot(self, ax, kws): # Draw a test plot, using the passed in kwargs. The goal here is to # honor both (a) the current state of the plot cycler and (b) the # specified kwargs on all the lines we will draw, overriding when # relevant with the data semantics. Note that we won't cycle # internally; in other words, if ``hue`` is not used, all elements will # have the same color, but they will have the color that you would have # gotten from the corresponding matplotlib function, and calling the # function will advance the axes property cycle. scout = ax.scatter([], [], **kws) s = kws.pop("s", scout.get_sizes()) c = kws.pop("c", scout.get_facecolors()) scout.remove() kws.pop("color", None) # TODO is this optimal? kws.setdefault("linewidth", .75) # TODO scale with marker size? kws.setdefault("edgecolor", "w") if self.markers: # Use a representative marker so scatter sets the edgecolor # properly for line art markers. We currently enforce either # all or none line art so this works. example_marker = list(self.markers.values())[0] kws.setdefault("marker", example_marker) # TODO this makes it impossible to vary alpha with hue which might # otherwise be useful? Should we just pass None? kws["alpha"] = 1 if self.alpha == "auto" else self.alpha # Assign arguments for plt.scatter and draw the plot data = self.plot_data[self.semantics].dropna() if not data.size: return x = data["x"] y = data["y"] if self.palette: c = [self.palette.get(val) for val in data["hue"]] if self.sizes: s = [self.sizes.get(val) for val in data["size"]] args = np.asarray(x), np.asarray(y), np.asarray(s), np.asarray(c) points = ax.scatter(*args, **kws) # Update the paths to get different marker shapes. This has to be # done here because plt.scatter allows varying sizes and colors # but only a single marker shape per call. if self.paths: p = [self.paths.get(val) for val in data["style"]] points.set_paths(p) # Finalize the axes details self.label_axes(ax) if self.legend: self.add_legend_data(ax) handles, _ = ax.get_legend_handles_labels() if handles: ax.legend() _relational_docs = dict( # --- Introductory prose main_api_narrative=dedent("""\ The relationship between ``x`` and ``y`` can be shown for different subsets of the data using the ``hue``, ``size``, and ``style`` parameters. These parameters control what visual semantics are used to identify the different subsets. It is possible to show up to three dimensions independently by using all three semantic types, but this style of plot can be hard to interpret and is often ineffective. Using redundant semantics (i.e. both ``hue`` and ``style`` for the same variable) can be helpful for making graphics more accessible. 
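For instance, giving ``hue`` and ``style`` the same assignment (e.g. ``hue="event", style="event"``) encodes one grouping with both color and marker or dash pattern.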
See the :ref:`tutorial ` for more information.\ """), relational_semantic_narrative=dedent("""\ The default treatment of the ``hue`` (and to a lesser extent, ``size``) semantic, if present, depends on whether the variable is inferred to represent "numeric" or "categorical" data. In particular, numeric variables are represented with a sequential colormap by default, and the legend entries show regular "ticks" with values that may or may not exist in the data. This behavior can be controlled through various parameters, as described and illustrated below.\ """), # --- Shared function parameters data_vars=dedent("""\ x, y : names of variables in ``data`` or vector data, optional Input data variables; must be numeric. Can pass data directly or reference columns in ``data``.\ """), data=dedent("""\ data : DataFrame, array, or list of arrays, optional Input data structure. If ``x`` and ``y`` are specified as names, this should be a "long-form" DataFrame containing those columns. Otherwise it is treated as "wide-form" data and grouping variables are ignored. See the examples for the various ways this parameter can be specified and the different effects of each.\ """), palette=dedent("""\ palette : string, list, dict, or matplotlib colormap An object that determines how colors are chosen when ``hue`` is used. It can be the name of a seaborn palette or matplotlib colormap, a list of colors (anything matplotlib understands), a dict mapping levels of the ``hue`` variable to colors, or a matplotlib colormap object.\ """), hue_order=dedent("""\ hue_order : list, optional Specified order for the appearance of the ``hue`` variable levels, otherwise they are determined from the data. Not relevant when the ``hue`` variable is numeric.\ """), hue_norm=dedent("""\ hue_norm : tuple or Normalize object, optional Normalization in data units for colormap applied to the ``hue`` variable when it is numeric. Not relevant if it is categorical.\ """), sizes=dedent("""\ sizes : list, dict, or tuple, optional An object that determines how sizes are chosen when ``size`` is used. It can always be a list of size values or a dict mapping levels of the ``size`` variable to sizes. When ``size`` is numeric, it can also be a tuple specifying the minimum and maximum size to use such that other values are normalized within this range.\ """), size_order=dedent("""\ size_order : list, optional Specified order for appearance of the ``size`` variable levels, otherwise they are determined from the data. Not relevant when the ``size`` variable is numeric.\ """), size_norm=dedent("""\ size_norm : tuple or Normalize object, optional Normalization in data units for scaling plot objects when the ``size`` variable is numeric.\ """), markers=dedent("""\ markers : boolean, list, or dictionary, optional Object determining how to draw the markers for different levels of the ``style`` variable. Setting to ``True`` will use default markers, or you can pass a list of markers or a dictionary mapping levels of the ``style`` variable to markers. Setting to ``False`` will draw marker-less lines. Markers are specified as in matplotlib.\ """), style_order=dedent("""\ style_order : list, optional Specified order for appearance of the ``style`` variable levels otherwise they are determined from the data. Not relevant when the ``style`` variable is numeric.\ """), units=dedent("""\ units : {long_form_var} Grouping variable identifying sampling units. 
When used, a separate line will be drawn for each unit with appropriate semantics, but no legend entry will be added. Useful for showing distribution of experimental replicates when exact identities are not needed. """), estimator=dedent("""\ estimator : name of pandas method or callable or None, optional Method for aggregating across multiple observations of the ``y`` variable at the same ``x`` level. If ``None``, all observations will be drawn.\ """), ci=dedent("""\ ci : int or "sd" or None, optional Size of the confidence interval to draw when aggregating with an estimator. "sd" means to draw the standard deviation of the data. Setting to ``None`` will skip bootstrapping.\ """), n_boot=dedent("""\ n_boot : int, optional Number of bootstraps to use for computing the confidence interval.\ """), seed=dedent("""\ seed : int, numpy.random.Generator, or numpy.random.RandomState, optional Seed or random number generator for reproducible bootstrapping.\ """), legend=dedent("""\ legend : "brief", "full", or False, optional How to draw the legend. If "brief", numeric ``hue`` and ``size`` variables will be represented with a sample of evenly spaced values. If "full", every group will get an entry in the legend. If ``False``, no legend data is added and no legend is drawn.\ """), ax_in=dedent("""\ ax : matplotlib Axes, optional Axes object to draw the plot onto, otherwise uses the current Axes.\ """), ax_out=dedent("""\ ax : matplotlib Axes Returns the Axes object with the plot drawn onto it.\ """), # --- Repeated phrases long_form_var="name of variables in ``data`` or vector data, optional", ) _relational_docs.update(_facet_docs) def lineplot(x=None, y=None, hue=None, size=None, style=None, data=None, palette=None, hue_order=None, hue_norm=None, sizes=None, size_order=None, size_norm=None, dashes=True, markers=None, style_order=None, units=None, estimator="mean", ci=95, n_boot=1000, seed=None, sort=True, err_style="band", err_kws=None, legend="brief", ax=None, **kwargs): p = _LinePlotter( x=x, y=y, hue=hue, size=size, style=style, data=data, palette=palette, hue_order=hue_order, hue_norm=hue_norm, sizes=sizes, size_order=size_order, size_norm=size_norm, dashes=dashes, markers=markers, style_order=style_order, units=units, estimator=estimator, ci=ci, n_boot=n_boot, seed=seed, sort=sort, err_style=err_style, err_kws=err_kws, legend=legend, ) if ax is None: ax = plt.gca() p.plot(ax, kwargs) return ax lineplot.__doc__ = dedent("""\ Draw a line plot with possibility of several semantic groupings. {main_api_narrative} {relational_semantic_narrative} By default, the plot aggregates over multiple ``y`` values at each value of ``x`` and shows an estimate of the central tendency and a confidence interval for that estimate. Parameters ---------- {data_vars} hue : {long_form_var} Grouping variable that will produce lines with different colors. Can be either categorical or numeric, although color mapping will behave differently in latter case. size : {long_form_var} Grouping variable that will produce lines with different widths. Can be either categorical or numeric, although size mapping will behave differently in latter case. style : {long_form_var} Grouping variable that will produce lines with different dashes and/or markers. Can have a numeric dtype but will always be treated as categorical. {data} {palette} {hue_order} {hue_norm} {sizes} {size_order} {size_norm} dashes : boolean, list, or dictionary, optional Object determining how to draw the lines for different levels of the ``style`` variable. 
Setting to ``True`` will use default dash codes, or you can pass a list of dash codes or a dictionary mapping levels of the ``style`` variable to dash codes. Setting to ``False`` will use solid lines for all subsets. Dashes are specified as in matplotlib: a tuple of ``(segment, gap)`` lengths, or an empty string to draw a solid line. {markers} {style_order} {units} {estimator} {ci} {n_boot} {seed} sort : boolean, optional If True, the data will be sorted by the x and y variables, otherwise lines will connect points in the order they appear in the dataset. err_style : "band" or "bars", optional Whether to draw the confidence intervals with translucent error bands or discrete error bars. err_kws : dict of keyword arguments Additional parameters to control the aesthetics of the error bars. The kwargs are passed either to :meth:`matplotlib.axes.Axes.fill_between` or :meth:`matplotlib.axes.Axes.errorbar`, depending on ``err_style``. {legend} {ax_in} kwargs : key, value mappings Other keyword arguments are passed down to :meth:`matplotlib.axes.Axes.plot`. Returns ------- {ax_out} See Also -------- scatterplot : Show the relationship between two variables without emphasizing continuity of the ``x`` variable. pointplot : Show the relationship between two variables when one is categorical. Examples -------- Draw a single line plot with error bands showing a confidence interval: .. plot:: :context: close-figs >>> import seaborn as sns; sns.set() >>> import matplotlib.pyplot as plt >>> fmri = sns.load_dataset("fmri") >>> ax = sns.lineplot(x="timepoint", y="signal", data=fmri) Group by another variable and show the groups with different colors: .. plot:: :context: close-figs >>> ax = sns.lineplot(x="timepoint", y="signal", hue="event", ... data=fmri) Show the grouping variable with both color and line dashing: .. plot:: :context: close-figs >>> ax = sns.lineplot(x="timepoint", y="signal", ... hue="event", style="event", data=fmri) Use color and line dashing to represent two different grouping variables: .. plot:: :context: close-figs >>> ax = sns.lineplot(x="timepoint", y="signal", ... hue="region", style="event", data=fmri) Use markers instead of the dashes to identify groups: .. plot:: :context: close-figs >>> ax = sns.lineplot(x="timepoint", y="signal", ... hue="event", style="event", ... markers=True, dashes=False, data=fmri) Show error bars instead of error bands and plot the standard error: .. plot:: :context: close-figs >>> ax = sns.lineplot(x="timepoint", y="signal", hue="event", ... err_style="bars", ci=68, data=fmri) Show experimental replicates instead of aggregating: .. plot:: :context: close-figs >>> ax = sns.lineplot(x="timepoint", y="signal", hue="event", ... units="subject", estimator=None, lw=1, ... data=fmri.query("region == 'frontal'")) Use a quantitative color mapping: .. plot:: :context: close-figs >>> dots = sns.load_dataset("dots").query("align == 'dots'") >>> ax = sns.lineplot(x="time", y="firing_rate", ... hue="coherence", style="choice", ... data=dots) Use a different normalization for the colormap: .. plot:: :context: close-figs >>> from matplotlib.colors import LogNorm >>> ax = sns.lineplot(x="time", y="firing_rate", ... hue="coherence", style="choice", ... hue_norm=LogNorm(), data=dots) Use a different color palette: .. plot:: :context: close-figs >>> ax = sns.lineplot(x="time", y="firing_rate", ... hue="coherence", style="choice", ... palette="ch:2.5,.25", data=dots) Use specific color values, treating the hue variable as categorical: ..
plot:: :context: close-figs >>> palette = sns.color_palette("mako_r", 6) >>> ax = sns.lineplot(x="time", y="firing_rate", ... hue="coherence", style="choice", ... palette=palette, data=dots) Change the width of the lines with a quantitative variable: .. plot:: :context: close-figs >>> ax = sns.lineplot(x="time", y="firing_rate", ... size="coherence", hue="choice", ... legend="full", data=dots) Change the range of line widths used to normalize the size variable: .. plot:: :context: close-figs >>> ax = sns.lineplot(x="time", y="firing_rate", ... size="coherence", hue="choice", ... sizes=(.25, 2.5), data=dots) Plot from a wide-form DataFrame: .. plot:: :context: close-figs >>> import numpy as np, pandas as pd; plt.close("all") >>> index = pd.date_range("1 1 2000", periods=100, ... freq="m", name="date") >>> data = np.random.randn(100, 4).cumsum(axis=0) >>> wide_df = pd.DataFrame(data, index, ["a", "b", "c", "d"]) >>> ax = sns.lineplot(data=wide_df) Plot from a list of Series: .. plot:: :context: close-figs >>> list_data = [wide_df.loc[:"2005", "a"], wide_df.loc["2003":, "b"]] >>> ax = sns.lineplot(data=list_data) Plot a single Series, pass kwargs to :meth:`matplotlib.axes.Axes.plot`: .. plot:: :context: close-figs >>> ax = sns.lineplot(data=wide_df["a"], color="coral", label="line") Draw lines at points as they appear in the dataset: .. plot:: :context: close-figs >>> x, y = np.random.randn(2, 5000).cumsum(axis=1) >>> ax = sns.lineplot(x=x, y=y, sort=False, lw=1) Use :func:`relplot` to combine :func:`lineplot` and :class:`FacetGrid`: This allows grouping within additional categorical variables. Using :func:`relplot` is safer than using :class:`FacetGrid` directly, as it ensures synchronization of the semantic mappings across facets. .. plot:: :context: close-figs >>> g = sns.relplot(x="timepoint", y="signal", ... col="region", hue="event", style="event", ... kind="line", data=fmri) """).format(**_relational_docs) def scatterplot(x=None, y=None, hue=None, style=None, size=None, data=None, palette=None, hue_order=None, hue_norm=None, sizes=None, size_order=None, size_norm=None, markers=True, style_order=None, x_bins=None, y_bins=None, units=None, estimator=None, ci=95, n_boot=1000, alpha="auto", x_jitter=None, y_jitter=None, legend="brief", ax=None, **kwargs): p = _ScatterPlotter( x=x, y=y, hue=hue, style=style, size=size, data=data, palette=palette, hue_order=hue_order, hue_norm=hue_norm, sizes=sizes, size_order=size_order, size_norm=size_norm, markers=markers, style_order=style_order, x_bins=x_bins, y_bins=y_bins, estimator=estimator, ci=ci, n_boot=n_boot, alpha=alpha, x_jitter=x_jitter, y_jitter=y_jitter, legend=legend, ) if ax is None: ax = plt.gca() p.plot(ax, kwargs) return ax scatterplot.__doc__ = dedent("""\ Draw a scatter plot with possibility of several semantic groupings. {main_api_narrative} {relational_semantic_narrative} Parameters ---------- {data_vars} hue : {long_form_var} Grouping variable that will produce points with different colors. Can be either categorical or numeric, although color mapping will behave differently in latter case. size : {long_form_var} Grouping variable that will produce points with different sizes. Can be either categorical or numeric, although size mapping will behave differently in latter case. style : {long_form_var} Grouping variable that will produce points with different markers. Can have a numeric dtype but will always be treated as categorical. 
{data} {palette} {hue_order} {hue_norm} {sizes} {size_order} {size_norm} {markers} {style_order} {{x,y}}_bins : lists or arrays or functions *Currently non-functional.* {units} *Currently non-functional.* {estimator} *Currently non-functional.* {ci} *Currently non-functional.* {n_boot} *Currently non-functional.* alpha : float Proportional opacity of the points. {{x,y}}_jitter : booleans or floats *Currently non-functional.* {legend} {ax_in} kwargs : key, value mappings Other keyword arguments are passed down to :meth:`matplotlib.axes.Axes.scatter`. Returns ------- {ax_out} See Also -------- lineplot : Show the relationship between two variables connected with lines to emphasize continuity. swarmplot : Draw a scatter plot with one categorical variable, arranging the points to show the distribution of values. Examples -------- Draw a simple scatter plot between two variables: .. plot:: :context: close-figs >>> import seaborn as sns; sns.set() >>> import matplotlib.pyplot as plt >>> tips = sns.load_dataset("tips") >>> ax = sns.scatterplot(x="total_bill", y="tip", data=tips) Group by another variable and show the groups with different colors: .. plot:: :context: close-figs >>> ax = sns.scatterplot(x="total_bill", y="tip", hue="time", ... data=tips) Show the grouping variable by varying both color and marker: .. plot:: :context: close-figs >>> ax = sns.scatterplot(x="total_bill", y="tip", ... hue="time", style="time", data=tips) Vary colors and markers to show two different grouping variables: .. plot:: :context: close-figs >>> ax = sns.scatterplot(x="total_bill", y="tip", ... hue="day", style="time", data=tips) Show a quantitative variable by varying the size of the points: .. plot:: :context: close-figs >>> ax = sns.scatterplot(x="total_bill", y="tip", size="size", ... data=tips) Also show the quantitative variable by also using continuous colors: .. plot:: :context: close-figs >>> ax = sns.scatterplot(x="total_bill", y="tip", ... hue="size", size="size", ... data=tips) Use a different continuous color map: .. plot:: :context: close-figs >>> cmap = sns.cubehelix_palette(dark=.3, light=.8, as_cmap=True) >>> ax = sns.scatterplot(x="total_bill", y="tip", ... hue="size", size="size", ... palette=cmap, ... data=tips) Change the minimum and maximum point size and show all sizes in legend: .. plot:: :context: close-figs >>> cmap = sns.cubehelix_palette(dark=.3, light=.8, as_cmap=True) >>> ax = sns.scatterplot(x="total_bill", y="tip", ... hue="size", size="size", ... sizes=(20, 200), palette=cmap, ... legend="full", data=tips) Use a narrower range of color map intensities: .. plot:: :context: close-figs >>> cmap = sns.cubehelix_palette(dark=.3, light=.8, as_cmap=True) >>> ax = sns.scatterplot(x="total_bill", y="tip", ... hue="size", size="size", ... sizes=(20, 200), hue_norm=(0, 7), ... legend="full", data=tips) Vary the size with a categorical variable, and use a different palette: .. plot:: :context: close-figs >>> cmap = sns.cubehelix_palette(dark=.3, light=.8, as_cmap=True) >>> ax = sns.scatterplot(x="total_bill", y="tip", ... hue="day", size="smoker", ... palette="Set2", ... data=tips) Use a specific set of markers: .. plot:: :context: close-figs >>> markers = {{"Lunch": "s", "Dinner": "X"}} >>> ax = sns.scatterplot(x="total_bill", y="tip", style="time", ... markers=markers, ... data=tips) Control plot attributes using matplotlib parameters: .. plot:: :context: close-figs >>> ax = sns.scatterplot(x="total_bill", y="tip", ... s=100, color=".2", marker="+", ... 
data=tips) Pass data vectors instead of names in a data frame: .. plot:: :context: close-figs >>> iris = sns.load_dataset("iris") >>> ax = sns.scatterplot(x=iris.sepal_length, y=iris.sepal_width, ... hue=iris.species, style=iris.species) Pass a wide-form dataset and plot against its index: .. plot:: :context: close-figs >>> import numpy as np, pandas as pd; plt.close("all") >>> index = pd.date_range("1 1 2000", periods=100, ... freq="m", name="date") >>> data = np.random.randn(100, 4).cumsum(axis=0) >>> wide_df = pd.DataFrame(data, index, ["a", "b", "c", "d"]) >>> ax = sns.scatterplot(data=wide_df) Use :func:`relplot` to combine :func:`scatterplot` and :class:`FacetGrid`: This allows grouping within additional categorical variables. Using :func:`relplot` is safer than using :class:`FacetGrid` directly, as it ensures synchronization of the semantic mappings across facets. .. plot:: :context: close-figs >>> g = sns.relplot(x="total_bill", y="tip", ... col="time", hue="day", style="day", ... kind="scatter", data=tips) """).format(**_relational_docs) def relplot(x=None, y=None, hue=None, size=None, style=None, data=None, row=None, col=None, col_wrap=None, row_order=None, col_order=None, palette=None, hue_order=None, hue_norm=None, sizes=None, size_order=None, size_norm=None, markers=None, dashes=None, style_order=None, legend="brief", kind="scatter", height=5, aspect=1, facet_kws=None, **kwargs): if kind == "scatter": plotter = _ScatterPlotter func = scatterplot markers = True if markers is None else markers elif kind == "line": plotter = _LinePlotter func = lineplot dashes = True if dashes is None else dashes else: err = "Plot kind {} not recognized".format(kind) raise ValueError(err) # Check for attempt to plot onto specific axes and warn if "ax" in kwargs: msg = ("relplot is a figure-level function and does not accept " "target axes. 
You may wish to try {}".format(kind + "plot")) warnings.warn(msg, UserWarning) kwargs.pop("ax") # Use the full dataset to establish how to draw the semantics p = plotter( x=x, y=y, hue=hue, size=size, style=style, data=data, palette=palette, hue_order=hue_order, hue_norm=hue_norm, sizes=sizes, size_order=size_order, size_norm=size_norm, markers=markers, dashes=dashes, style_order=style_order, legend=legend, ) palette = p.palette if p.palette else None hue_order = p.hue_levels if any(p.hue_levels) else None hue_norm = p.hue_norm if p.hue_norm is not None else None sizes = p.sizes if p.sizes else None size_order = p.size_levels if any(p.size_levels) else None size_norm = p.size_norm if p.size_norm is not None else None markers = p.markers if p.markers else None dashes = p.dashes if p.dashes else None style_order = p.style_levels if any(p.style_levels) else None plot_kws = dict( palette=palette, hue_order=hue_order, hue_norm=p.hue_norm, sizes=sizes, size_order=size_order, size_norm=p.size_norm, markers=markers, dashes=dashes, style_order=style_order, legend=False, ) plot_kws.update(kwargs) if kind == "scatter": plot_kws.pop("dashes") # Set up the FacetGrid object facet_kws = {} if facet_kws is None else facet_kws g = FacetGrid( data=data, row=row, col=col, col_wrap=col_wrap, row_order=row_order, col_order=col_order, height=height, aspect=aspect, dropna=False, **facet_kws ) # Draw the plot g.map_dataframe(func, x, y, hue=hue, size=size, style=style, **plot_kws) # Show the legend if legend: p.add_legend_data(g.axes.flat[0]) if p.legend_data: g.add_legend(legend_data=p.legend_data, label_order=p.legend_order) return g relplot.__doc__ = dedent("""\ Figure-level interface for drawing relational plots onto a FacetGrid. This function provides access to several different axes-level functions that show the relationship between two variables with semantic mappings of subsets. The ``kind`` parameter selects the underlying axes-level function to use: - :func:`scatterplot` (with ``kind="scatter"``; the default) - :func:`lineplot` (with ``kind="line"``) Extra keyword arguments are passed to the underlying function, so you should refer to the documentation for each to see kind-specific options. {main_api_narrative} {relational_semantic_narrative} After plotting, the :class:`FacetGrid` with the plot is returned and can be used directly to tweak supporting plot details or add other layers. Note that, unlike when using the underlying plotting functions directly, data must be passed in a long-form DataFrame with variables specified by passing strings to ``x``, ``y``, and other parameters. Parameters ---------- x, y : names of variables in ``data`` Input data variables; must be numeric. hue : name in ``data``, optional Grouping variable that will produce elements with different colors. Can be either categorical or numeric, although color mapping will behave differently in latter case. size : name in ``data``, optional Grouping variable that will produce elements with different sizes. Can be either categorical or numeric, although size mapping will behave differently in latter case. style : name in ``data``, optional Grouping variable that will produce elements with different styles. Can have a numeric dtype but will always be treated as categorical. {data} row, col : names of variables in ``data``, optional Categorical variables that will determine the faceting of the grid. 
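As noted above, the :class:`FacetGrid` that :func:`relplot` returns can be tweaked after plotting; a minimal sketch, assuming the ``tips`` example dataset loaded as in the Examples below:

.. plot::
    :context: close-figs

    >>> import seaborn as sns
    >>> sns.set(style="ticks")
    >>> tips = sns.load_dataset("tips")
    >>> g = sns.relplot(x="total_bill", y="tip", col="time", data=tips)
    >>> g = g.set_axis_labels("Total bill", "Tip")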
{col_wrap} row_order, col_order : lists of strings, optional Order to organize the rows and/or columns of the grid in, otherwise the orders are inferred from the data objects. {palette} {hue_order} {hue_norm} {sizes} {size_order} {size_norm} {legend} kind : string, optional Kind of plot to draw, corresponding to a seaborn relational plot. Options are {{``scatter`` and ``line``}}. {height} {aspect} facet_kws : dict, optional Dictionary of other keyword arguments to pass to :class:`FacetGrid`. kwargs : key, value pairings Other keyword arguments are passed through to the underlying plotting function. Returns ------- g : :class:`FacetGrid` Returns the :class:`FacetGrid` object with the plot on it for further tweaking. Examples -------- Draw a single facet to use the :class:`FacetGrid` legend placement: .. plot:: :context: close-figs >>> import seaborn as sns >>> sns.set(style="ticks") >>> tips = sns.load_dataset("tips") >>> g = sns.relplot(x="total_bill", y="tip", hue="day", data=tips) Facet on the columns with another variable: .. plot:: :context: close-figs >>> g = sns.relplot(x="total_bill", y="tip", ... hue="day", col="time", data=tips) Facet on the columns and rows: .. plot:: :context: close-figs >>> g = sns.relplot(x="total_bill", y="tip", hue="day", ... col="time", row="sex", data=tips) "Wrap" many column facets into multiple rows: .. plot:: :context: close-figs >>> g = sns.relplot(x="total_bill", y="tip", hue="time", ... col="day", col_wrap=2, data=tips) Use multiple semantic variables on each facet with specified attributes: .. plot:: :context: close-figs >>> g = sns.relplot(x="total_bill", y="tip", hue="time", size="size", ... palette=["b", "r"], sizes=(10, 100), ... col="time", data=tips) Use a different kind of plot: .. plot:: :context: close-figs >>> fmri = sns.load_dataset("fmri") >>> g = sns.relplot(x="timepoint", y="signal", ... hue="event", style="event", col="region", ... kind="line", data=fmri) Change the size of each facet: .. plot:: :context: close-figs >>> g = sns.relplot(x="timepoint", y="signal", ... hue="event", style="event", col="region", ... height=5, aspect=.7, kind="line", data=fmri) """).format(**_relational_docs) seaborn-0.10.0/seaborn/tests/000077500000000000000000000000001361256634400160265ustar00rootroot00000000000000seaborn-0.10.0/seaborn/tests/__init__.py000066400000000000000000000000001361256634400201250ustar00rootroot00000000000000seaborn-0.10.0/seaborn/tests/test_algorithms.py000066400000000000000000000151641361256634400216170ustar00rootroot00000000000000import numpy as np import numpy.random as npr import pytest from numpy.testing import assert_array_equal from distutils.version import LooseVersion from .. 
import algorithms as algo @pytest.fixture def random(): np.random.seed(sum(map(ord, "test_algorithms"))) def test_bootstrap(random): """Test that bootstrapping gives the right answer in dumb cases.""" a_ones = np.ones(10) n_boot = 5 out1 = algo.bootstrap(a_ones, n_boot=n_boot) assert_array_equal(out1, np.ones(n_boot)) out2 = algo.bootstrap(a_ones, n_boot=n_boot, func=np.median) assert_array_equal(out2, np.ones(n_boot)) def test_bootstrap_length(random): """Test that we get a bootstrap array of the right shape.""" a_norm = np.random.randn(1000) out = algo.bootstrap(a_norm) assert len(out) == 10000 n_boot = 100 out = algo.bootstrap(a_norm, n_boot=n_boot) assert len(out) == n_boot def test_bootstrap_range(random): """Test that boostrapping a random array stays within the right range.""" a_norm = np.random.randn(1000) amin, amax = a_norm.min(), a_norm.max() out = algo.bootstrap(a_norm) assert amin <= out.min() assert amax >= out.max() def test_bootstrap_multiarg(random): """Test that bootstrap works with multiple input arrays.""" x = np.vstack([[1, 10] for i in range(10)]) y = np.vstack([[5, 5] for i in range(10)]) def f(x, y): return np.vstack((x, y)).max(axis=0) out_actual = algo.bootstrap(x, y, n_boot=2, func=f) out_wanted = np.array([[5, 10], [5, 10]]) assert_array_equal(out_actual, out_wanted) def test_bootstrap_axis(random): """Test axis kwarg to bootstrap function.""" x = np.random.randn(10, 20) n_boot = 100 out_default = algo.bootstrap(x, n_boot=n_boot) assert out_default.shape == (n_boot,) out_axis = algo.bootstrap(x, n_boot=n_boot, axis=0) assert out_axis.shape, (n_boot, x.shape[1]) def test_bootstrap_seed(random): """Test that we can get reproducible resamples by seeding the RNG.""" data = np.random.randn(50) seed = 42 boots1 = algo.bootstrap(data, seed=seed) boots2 = algo.bootstrap(data, seed=seed) assert_array_equal(boots1, boots2) def test_bootstrap_ols(random): """Test bootstrap of OLS model fit.""" def ols_fit(X, y): XtXinv = np.linalg.inv(np.dot(X.T, X)) return XtXinv.dot(X.T).dot(y) X = np.column_stack((np.random.randn(50, 4), np.ones(50))) w = [2, 4, 0, 3, 5] y_noisy = np.dot(X, w) + np.random.randn(50) * 20 y_lownoise = np.dot(X, w) + np.random.randn(50) n_boot = 500 w_boot_noisy = algo.bootstrap(X, y_noisy, n_boot=n_boot, func=ols_fit) w_boot_lownoise = algo.bootstrap(X, y_lownoise, n_boot=n_boot, func=ols_fit) assert w_boot_noisy.shape == (n_boot, 5) assert w_boot_lownoise.shape == (n_boot, 5) assert w_boot_noisy.std() > w_boot_lownoise.std() def test_bootstrap_units(random): """Test that results make sense when passing unit IDs to bootstrap.""" data = np.random.randn(50) ids = np.repeat(range(10), 5) bwerr = np.random.normal(0, 2, 10) bwerr = bwerr[ids] data_rm = data + bwerr seed = 77 boots_orig = algo.bootstrap(data_rm, seed=seed) boots_rm = algo.bootstrap(data_rm, units=ids, seed=seed) assert boots_rm.std() > boots_orig.std() def test_bootstrap_arglength(): """Test that different length args raise ValueError.""" with pytest.raises(ValueError): algo.bootstrap(np.arange(5), np.arange(10)) def test_bootstrap_string_func(): """Test that named numpy methods are the same as the numpy function.""" x = np.random.randn(100) res_a = algo.bootstrap(x, func="mean", seed=0) res_b = algo.bootstrap(x, func=np.mean, seed=0) assert np.array_equal(res_a, res_b) res_a = algo.bootstrap(x, func="std", seed=0) res_b = algo.bootstrap(x, func=np.std, seed=0) assert np.array_equal(res_a, res_b) with pytest.raises(AttributeError): algo.bootstrap(x, func="not_a_method_name") def 
test_bootstrap_reproducibility(random): """Test that bootstrapping uses the internal random state.""" data = np.random.randn(50) boots1 = algo.bootstrap(data, seed=100) boots2 = algo.bootstrap(data, seed=100) assert_array_equal(boots1, boots2) with pytest.warns(UserWarning): # Deprecatd, remove when removing random_seed boots1 = algo.bootstrap(data, random_seed=100) boots2 = algo.bootstrap(data, random_seed=100) assert_array_equal(boots1, boots2) @pytest.mark.skipif(LooseVersion(np.__version__) < "1.17", reason="Tests new numpy random functionality") def test_seed_new(): # Can't use pytest parametrize because tests will fail where the new # Generator object and related function are not defined test_bank = [ (None, None, npr.Generator, False), (npr.RandomState(0), npr.RandomState(0), npr.RandomState, True), (npr.RandomState(0), npr.RandomState(1), npr.RandomState, False), (npr.default_rng(1), npr.default_rng(1), npr.Generator, True), (npr.default_rng(1), npr.default_rng(2), npr.Generator, False), (npr.SeedSequence(10), npr.SeedSequence(10), npr.Generator, True), (npr.SeedSequence(10), npr.SeedSequence(20), npr.Generator, False), (100, 100, npr.Generator, True), (100, 200, npr.Generator, False), ] for seed1, seed2, rng_class, match in test_bank: rng1 = algo._handle_random_seed(seed1) rng2 = algo._handle_random_seed(seed2) assert isinstance(rng1, rng_class) assert isinstance(rng2, rng_class) assert (rng1.uniform() == rng2.uniform()) == match @pytest.mark.skipif(LooseVersion(np.__version__) >= "1.17", reason="Tests old numpy random functionality") @pytest.mark.parametrize("seed1, seed2, match", [ (None, None, False), (npr.RandomState(0), npr.RandomState(0), True), (npr.RandomState(0), npr.RandomState(1), False), (100, 100, True), (100, 200, False), ]) def test_seed_old(seed1, seed2, match): rng1 = algo._handle_random_seed(seed1) rng2 = algo._handle_random_seed(seed2) assert isinstance(rng1, np.random.RandomState) assert isinstance(rng2, np.random.RandomState) assert (rng1.uniform() == rng2.uniform()) == match @pytest.mark.skipif(LooseVersion(np.__version__) >= "1.17", reason="Tests old numpy random functionality") def test_bad_seed_old(): with pytest.raises(ValueError): algo._handle_random_seed("not_a_random_seed") seaborn-0.10.0/seaborn/tests/test_axisgrid.py000066400000000000000000001534751361256634400212700ustar00rootroot00000000000000import warnings import numpy as np import pandas as pd from scipy import stats import matplotlib as mpl import matplotlib.pyplot as plt import pytest import nose.tools as nt import numpy.testing as npt try: import pandas.testing as tm except ImportError: import pandas.util.testing as tm from distutils.version import LooseVersion from .. import axisgrid as ag from .. 
import rcmod from ..palettes import color_palette from ..distributions import kdeplot, _freedman_diaconis_bins from ..categorical import pointplot from ..utils import categorical_order rs = np.random.RandomState(0) class TestFacetGrid(object): df = pd.DataFrame(dict(x=rs.normal(size=60), y=rs.gamma(4, size=60), a=np.repeat(list("abc"), 20), b=np.tile(list("mn"), 30), c=np.tile(list("tuv"), 20), d=np.tile(list("abcdefghijkl"), 5))) def test_self_data(self): g = ag.FacetGrid(self.df) nt.assert_is(g.data, self.df) def test_self_fig(self): g = ag.FacetGrid(self.df) nt.assert_is_instance(g.fig, plt.Figure) def test_self_axes(self): g = ag.FacetGrid(self.df, row="a", col="b", hue="c") for ax in g.axes.flat: nt.assert_is_instance(ax, plt.Axes) def test_axes_array_size(self): g1 = ag.FacetGrid(self.df) nt.assert_equal(g1.axes.shape, (1, 1)) g2 = ag.FacetGrid(self.df, row="a") nt.assert_equal(g2.axes.shape, (3, 1)) g3 = ag.FacetGrid(self.df, col="b") nt.assert_equal(g3.axes.shape, (1, 2)) g4 = ag.FacetGrid(self.df, hue="c") nt.assert_equal(g4.axes.shape, (1, 1)) g5 = ag.FacetGrid(self.df, row="a", col="b", hue="c") nt.assert_equal(g5.axes.shape, (3, 2)) for ax in g5.axes.flat: nt.assert_is_instance(ax, plt.Axes) def test_single_axes(self): g1 = ag.FacetGrid(self.df) nt.assert_is_instance(g1.ax, plt.Axes) g2 = ag.FacetGrid(self.df, row="a") with nt.assert_raises(AttributeError): g2.ax g3 = ag.FacetGrid(self.df, col="a") with nt.assert_raises(AttributeError): g3.ax g4 = ag.FacetGrid(self.df, col="a", row="b") with nt.assert_raises(AttributeError): g4.ax def test_col_wrap(self): n = len(self.df.d.unique()) g = ag.FacetGrid(self.df, col="d") assert g.axes.shape == (1, n) assert g.facet_axis(0, 8) is g.axes[0, 8] g_wrap = ag.FacetGrid(self.df, col="d", col_wrap=4) assert g_wrap.axes.shape == (n,) assert g_wrap.facet_axis(0, 8) is g_wrap.axes[8] assert g_wrap._ncol == 4 assert g_wrap._nrow == (n / 4) with pytest.raises(ValueError): g = ag.FacetGrid(self.df, row="b", col="d", col_wrap=4) df = self.df.copy() df.loc[df.d == "j"] = np.nan g_missing = ag.FacetGrid(df, col="d") assert g_missing.axes.shape == (1, n - 1) g_missing_wrap = ag.FacetGrid(df, col="d", col_wrap=4) assert g_missing_wrap.axes.shape == (n - 1,) g = ag.FacetGrid(self.df, col="d", col_wrap=1) assert len(list(g.facet_data())) == n def test_normal_axes(self): null = np.empty(0, object).flat g = ag.FacetGrid(self.df) npt.assert_array_equal(g._bottom_axes, g.axes.flat) npt.assert_array_equal(g._not_bottom_axes, null) npt.assert_array_equal(g._left_axes, g.axes.flat) npt.assert_array_equal(g._not_left_axes, null) npt.assert_array_equal(g._inner_axes, null) g = ag.FacetGrid(self.df, col="c") npt.assert_array_equal(g._bottom_axes, g.axes.flat) npt.assert_array_equal(g._not_bottom_axes, null) npt.assert_array_equal(g._left_axes, g.axes[:, 0].flat) npt.assert_array_equal(g._not_left_axes, g.axes[:, 1:].flat) npt.assert_array_equal(g._inner_axes, null) g = ag.FacetGrid(self.df, row="c") npt.assert_array_equal(g._bottom_axes, g.axes[-1, :].flat) npt.assert_array_equal(g._not_bottom_axes, g.axes[:-1, :].flat) npt.assert_array_equal(g._left_axes, g.axes.flat) npt.assert_array_equal(g._not_left_axes, null) npt.assert_array_equal(g._inner_axes, null) g = ag.FacetGrid(self.df, col="a", row="c") npt.assert_array_equal(g._bottom_axes, g.axes[-1, :].flat) npt.assert_array_equal(g._not_bottom_axes, g.axes[:-1, :].flat) npt.assert_array_equal(g._left_axes, g.axes[:, 0].flat) npt.assert_array_equal(g._not_left_axes, g.axes[:, 1:].flat) 
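        # _inner_axes are the facets that sit in neither the bottom row nor
        # the left column; FacetGrid keeps these positional groups so that
        # methods such as set_xticklabels / set_yticklabels (exercised further
        # down) only touch the outer facets.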
npt.assert_array_equal(g._inner_axes, g.axes[:-1, 1:].flat) def test_wrapped_axes(self): null = np.empty(0, object).flat g = ag.FacetGrid(self.df, col="a", col_wrap=2) npt.assert_array_equal(g._bottom_axes, g.axes[np.array([1, 2])].flat) npt.assert_array_equal(g._not_bottom_axes, g.axes[:1].flat) npt.assert_array_equal(g._left_axes, g.axes[np.array([0, 2])].flat) npt.assert_array_equal(g._not_left_axes, g.axes[np.array([1])].flat) npt.assert_array_equal(g._inner_axes, null) def test_figure_size(self): g = ag.FacetGrid(self.df, row="a", col="b") npt.assert_array_equal(g.fig.get_size_inches(), (6, 9)) g = ag.FacetGrid(self.df, row="a", col="b", height=6) npt.assert_array_equal(g.fig.get_size_inches(), (12, 18)) g = ag.FacetGrid(self.df, col="c", height=4, aspect=.5) npt.assert_array_equal(g.fig.get_size_inches(), (6, 4)) def test_figure_size_with_legend(self): g1 = ag.FacetGrid(self.df, col="a", hue="c", height=4, aspect=.5) npt.assert_array_equal(g1.fig.get_size_inches(), (6, 4)) g1.add_legend() nt.assert_greater(g1.fig.get_size_inches()[0], 6) g2 = ag.FacetGrid(self.df, col="a", hue="c", height=4, aspect=.5, legend_out=False) npt.assert_array_equal(g2.fig.get_size_inches(), (6, 4)) g2.add_legend() npt.assert_array_equal(g2.fig.get_size_inches(), (6, 4)) def test_legend_data(self): g1 = ag.FacetGrid(self.df, hue="a") g1.map(plt.plot, "x", "y") g1.add_legend() palette = color_palette(n_colors=3) nt.assert_equal(g1._legend.get_title().get_text(), "a") a_levels = sorted(self.df.a.unique()) lines = g1._legend.get_lines() nt.assert_equal(len(lines), len(a_levels)) for line, hue in zip(lines, palette): nt.assert_equal(line.get_color(), hue) labels = g1._legend.get_texts() nt.assert_equal(len(labels), len(a_levels)) for label, level in zip(labels, a_levels): nt.assert_equal(label.get_text(), level) def test_legend_data_missing_level(self): g1 = ag.FacetGrid(self.df, hue="a", hue_order=list("azbc")) g1.map(plt.plot, "x", "y") g1.add_legend() b, g, r, p = color_palette(n_colors=4) palette = [b, r, p] nt.assert_equal(g1._legend.get_title().get_text(), "a") a_levels = sorted(self.df.a.unique()) lines = g1._legend.get_lines() nt.assert_equal(len(lines), len(a_levels)) for line, hue in zip(lines, palette): nt.assert_equal(line.get_color(), hue) labels = g1._legend.get_texts() nt.assert_equal(len(labels), 4) for label, level in zip(labels, list("azbc")): nt.assert_equal(label.get_text(), level) def test_get_boolean_legend_data(self): self.df["b_bool"] = self.df.b == "m" g1 = ag.FacetGrid(self.df, hue="b_bool") g1.map(plt.plot, "x", "y") g1.add_legend() palette = color_palette(n_colors=2) nt.assert_equal(g1._legend.get_title().get_text(), "b_bool") b_levels = list(map(str, categorical_order(self.df.b_bool))) lines = g1._legend.get_lines() nt.assert_equal(len(lines), len(b_levels)) for line, hue in zip(lines, palette): nt.assert_equal(line.get_color(), hue) labels = g1._legend.get_texts() nt.assert_equal(len(labels), len(b_levels)) for label, level in zip(labels, b_levels): nt.assert_equal(label.get_text(), level) def test_legend_tuples(self): g = ag.FacetGrid(self.df, hue="a") g.map(plt.plot, "x", "y") handles, labels = g.ax.get_legend_handles_labels() label_tuples = [("", l) for l in labels] legend_data = dict(zip(label_tuples, handles)) g.add_legend(legend_data, label_tuples) for entry, label in zip(g._legend.get_texts(), labels): assert entry.get_text() == label def test_legend_options(self): g1 = ag.FacetGrid(self.df, hue="b") g1.map(plt.plot, "x", "y") g1.add_legend() def 
test_legendout_with_colwrap(self): g = ag.FacetGrid(self.df, col="d", hue='b', col_wrap=4, legend_out=False) g.map(plt.plot, "x", "y", linewidth=3) g.add_legend() def test_subplot_kws(self): g = ag.FacetGrid(self.df, despine=False, subplot_kws=dict(projection="polar")) for ax in g.axes.flat: nt.assert_true("PolarAxesSubplot" in str(type(ax))) def test_gridspec_kws(self): ratios = [3, 1, 2] gskws = dict(width_ratios=ratios) g = ag.FacetGrid(self.df, col='c', row='a', gridspec_kws=gskws) for ax in g.axes.flat: ax.set_xticks([]) ax.set_yticks([]) g.fig.tight_layout() for (l, m, r) in g.axes: assert l.get_position().width > m.get_position().width assert r.get_position().width > m.get_position().width def test_gridspec_kws_col_wrap(self): ratios = [3, 1, 2, 1, 1] gskws = dict(width_ratios=ratios) with warnings.catch_warnings(): warnings.resetwarnings() warnings.simplefilter("always") npt.assert_warns(UserWarning, ag.FacetGrid, self.df, col='d', col_wrap=5, gridspec_kws=gskws) def test_data_generator(self): g = ag.FacetGrid(self.df, row="a") d = list(g.facet_data()) nt.assert_equal(len(d), 3) tup, data = d[0] nt.assert_equal(tup, (0, 0, 0)) nt.assert_true((data["a"] == "a").all()) tup, data = d[1] nt.assert_equal(tup, (1, 0, 0)) nt.assert_true((data["a"] == "b").all()) g = ag.FacetGrid(self.df, row="a", col="b") d = list(g.facet_data()) nt.assert_equal(len(d), 6) tup, data = d[0] nt.assert_equal(tup, (0, 0, 0)) nt.assert_true((data["a"] == "a").all()) nt.assert_true((data["b"] == "m").all()) tup, data = d[1] nt.assert_equal(tup, (0, 1, 0)) nt.assert_true((data["a"] == "a").all()) nt.assert_true((data["b"] == "n").all()) tup, data = d[2] nt.assert_equal(tup, (1, 0, 0)) nt.assert_true((data["a"] == "b").all()) nt.assert_true((data["b"] == "m").all()) g = ag.FacetGrid(self.df, hue="c") d = list(g.facet_data()) nt.assert_equal(len(d), 3) tup, data = d[1] nt.assert_equal(tup, (0, 0, 1)) nt.assert_true((data["c"] == "u").all()) def test_map(self): g = ag.FacetGrid(self.df, row="a", col="b", hue="c") g.map(plt.plot, "x", "y", linewidth=3) lines = g.axes[0, 0].lines nt.assert_equal(len(lines), 3) line1, _, _ = lines nt.assert_equal(line1.get_linewidth(), 3) x, y = line1.get_data() mask = (self.df.a == "a") & (self.df.b == "m") & (self.df.c == "t") npt.assert_array_equal(x, self.df.x[mask]) npt.assert_array_equal(y, self.df.y[mask]) def test_map_dataframe(self): g = ag.FacetGrid(self.df, row="a", col="b", hue="c") def plot(x, y, data=None, **kws): plt.plot(data[x], data[y], **kws) g.map_dataframe(plot, "x", "y", linestyle="--") lines = g.axes[0, 0].lines nt.assert_equal(len(lines), 3) line1, _, _ = lines nt.assert_equal(line1.get_linestyle(), "--") x, y = line1.get_data() mask = (self.df.a == "a") & (self.df.b == "m") & (self.df.c == "t") npt.assert_array_equal(x, self.df.x[mask]) npt.assert_array_equal(y, self.df.y[mask]) def test_set(self): g = ag.FacetGrid(self.df, row="a", col="b") xlim = (-2, 5) ylim = (3, 6) xticks = [-2, 0, 3, 5] yticks = [3, 4.5, 6] g.set(xlim=xlim, ylim=ylim, xticks=xticks, yticks=yticks) for ax in g.axes.flat: npt.assert_array_equal(ax.get_xlim(), xlim) npt.assert_array_equal(ax.get_ylim(), ylim) npt.assert_array_equal(ax.get_xticks(), xticks) npt.assert_array_equal(ax.get_yticks(), yticks) def test_set_titles(self): g = ag.FacetGrid(self.df, row="a", col="b") g.map(plt.plot, "x", "y") # Test the default titles nt.assert_equal(g.axes[0, 0].get_title(), "a = a | b = m") nt.assert_equal(g.axes[0, 1].get_title(), "a = a | b = n") nt.assert_equal(g.axes[1, 0].get_title(), "a = b | 
b = m") # Test a provided title g.set_titles("{row_var} == {row_name} \\/ {col_var} == {col_name}") nt.assert_equal(g.axes[0, 0].get_title(), "a == a \\/ b == m") nt.assert_equal(g.axes[0, 1].get_title(), "a == a \\/ b == n") nt.assert_equal(g.axes[1, 0].get_title(), "a == b \\/ b == m") # Test a single row g = ag.FacetGrid(self.df, col="b") g.map(plt.plot, "x", "y") # Test the default titles nt.assert_equal(g.axes[0, 0].get_title(), "b = m") nt.assert_equal(g.axes[0, 1].get_title(), "b = n") # test with dropna=False g = ag.FacetGrid(self.df, col="b", hue="b", dropna=False) g.map(plt.plot, 'x', 'y') def test_set_titles_margin_titles(self): g = ag.FacetGrid(self.df, row="a", col="b", margin_titles=True) g.map(plt.plot, "x", "y") # Test the default titles nt.assert_equal(g.axes[0, 0].get_title(), "b = m") nt.assert_equal(g.axes[0, 1].get_title(), "b = n") nt.assert_equal(g.axes[1, 0].get_title(), "") # Test the row "titles" nt.assert_equal(g.axes[0, 1].texts[0].get_text(), "a = a") nt.assert_equal(g.axes[1, 1].texts[0].get_text(), "a = b") # Test a provided title g.set_titles(col_template="{col_var} == {col_name}") nt.assert_equal(g.axes[0, 0].get_title(), "b == m") nt.assert_equal(g.axes[0, 1].get_title(), "b == n") nt.assert_equal(g.axes[1, 0].get_title(), "") def test_set_ticklabels(self): g = ag.FacetGrid(self.df, row="a", col="b") g.map(plt.plot, "x", "y") xlab = [l.get_text() + "h" for l in g.axes[1, 0].get_xticklabels()] ylab = [l.get_text() for l in g.axes[1, 0].get_yticklabels()] g.set_xticklabels(xlab) g.set_yticklabels(ylab) got_x = [l.get_text() for l in g.axes[1, 1].get_xticklabels()] got_y = [l.get_text() for l in g.axes[0, 0].get_yticklabels()] npt.assert_array_equal(got_x, xlab) npt.assert_array_equal(got_y, ylab) x, y = np.arange(10), np.arange(10) df = pd.DataFrame(np.c_[x, y], columns=["x", "y"]) g = ag.FacetGrid(df).map(pointplot, "x", "y", order=x) g.set_xticklabels(step=2) got_x = [int(l.get_text()) for l in g.axes[0, 0].get_xticklabels()] npt.assert_array_equal(x[::2], got_x) g = ag.FacetGrid(self.df, col="d", col_wrap=5) g.map(plt.plot, "x", "y") g.set_xticklabels(rotation=45) g.set_yticklabels(rotation=75) for ax in g._bottom_axes: for l in ax.get_xticklabels(): nt.assert_equal(l.get_rotation(), 45) for ax in g._left_axes: for l in ax.get_yticklabels(): nt.assert_equal(l.get_rotation(), 75) def test_set_axis_labels(self): g = ag.FacetGrid(self.df, row="a", col="b") g.map(plt.plot, "x", "y") xlab = 'xx' ylab = 'yy' g.set_axis_labels(xlab, ylab) got_x = [ax.get_xlabel() for ax in g.axes[-1, :]] got_y = [ax.get_ylabel() for ax in g.axes[:, 0]] npt.assert_array_equal(got_x, xlab) npt.assert_array_equal(got_y, ylab) def test_axis_lims(self): g = ag.FacetGrid(self.df, row="a", col="b", xlim=(0, 4), ylim=(-2, 3)) nt.assert_equal(g.axes[0, 0].get_xlim(), (0, 4)) nt.assert_equal(g.axes[0, 0].get_ylim(), (-2, 3)) def test_data_orders(self): g = ag.FacetGrid(self.df, row="a", col="b", hue="c") nt.assert_equal(g.row_names, list("abc")) nt.assert_equal(g.col_names, list("mn")) nt.assert_equal(g.hue_names, list("tuv")) nt.assert_equal(g.axes.shape, (3, 2)) g = ag.FacetGrid(self.df, row="a", col="b", hue="c", row_order=list("bca"), col_order=list("nm"), hue_order=list("vtu")) nt.assert_equal(g.row_names, list("bca")) nt.assert_equal(g.col_names, list("nm")) nt.assert_equal(g.hue_names, list("vtu")) nt.assert_equal(g.axes.shape, (3, 2)) g = ag.FacetGrid(self.df, row="a", col="b", hue="c", row_order=list("bcda"), col_order=list("nom"), hue_order=list("qvtu")) 
nt.assert_equal(g.row_names, list("bcda")) nt.assert_equal(g.col_names, list("nom")) nt.assert_equal(g.hue_names, list("qvtu")) nt.assert_equal(g.axes.shape, (4, 3)) def test_palette(self): rcmod.set() g = ag.FacetGrid(self.df, hue="c") assert g._colors == color_palette(n_colors=len(self.df.c.unique())) g = ag.FacetGrid(self.df, hue="d") assert g._colors == color_palette("husl", len(self.df.d.unique())) g = ag.FacetGrid(self.df, hue="c", palette="Set2") assert g._colors == color_palette("Set2", len(self.df.c.unique())) dict_pal = dict(t="red", u="green", v="blue") list_pal = color_palette(["red", "green", "blue"], 3) g = ag.FacetGrid(self.df, hue="c", palette=dict_pal) assert g._colors == list_pal list_pal = color_palette(["green", "blue", "red"], 3) g = ag.FacetGrid(self.df, hue="c", hue_order=list("uvt"), palette=dict_pal) assert g._colors == list_pal def test_hue_kws(self): kws = dict(marker=["o", "s", "D"]) g = ag.FacetGrid(self.df, hue="c", hue_kws=kws) g.map(plt.plot, "x", "y") for line, marker in zip(g.axes[0, 0].lines, kws["marker"]): nt.assert_equal(line.get_marker(), marker) def test_dropna(self): df = self.df.copy() hasna = pd.Series(np.tile(np.arange(6), 10), dtype=np.float) hasna[hasna == 5] = np.nan df["hasna"] = hasna g = ag.FacetGrid(df, dropna=False, row="hasna") nt.assert_equal(g._not_na.sum(), 60) g = ag.FacetGrid(df, dropna=True, row="hasna") nt.assert_equal(g._not_na.sum(), 50) def test_unicode_column_label_with_rows(self): # use a smaller copy of the default testing data frame: df = self.df.copy() df = df[["a", "b", "x"]] # rename column 'a' (which will be used for the columns in the grid) # by using a Unicode string: unicode_column_label = u"\u01ff\u02ff\u03ff" df = df.rename(columns={"a": unicode_column_label}) # ensure that the data frame columns have the expected names: nt.assert_equal(list(df.columns), [unicode_column_label, "b", "x"]) # plot the grid -- if successful, no UnicodeEncodingError should # occur: g = ag.FacetGrid(df, col=unicode_column_label, row="b") g = g.map(plt.plot, "x") def test_unicode_column_label_no_rows(self): # use a smaller copy of the default testing data frame: df = self.df.copy() df = df[["a", "x"]] # rename column 'a' (which will be used for the columns in the grid) # by using a Unicode string: unicode_column_label = u"\u01ff\u02ff\u03ff" df = df.rename(columns={"a": unicode_column_label}) # ensure that the data frame columns have the expected names: nt.assert_equal(list(df.columns), [unicode_column_label, "x"]) # plot the grid -- if successful, no UnicodeEncodingError should # occur: g = ag.FacetGrid(df, col=unicode_column_label) g = g.map(plt.plot, "x") def test_unicode_row_label_with_columns(self): # use a smaller copy of the default testing data frame: df = self.df.copy() df = df[["a", "b", "x"]] # rename column 'b' (which will be used for the rows in the grid) # by using a Unicode string: unicode_row_label = u"\u01ff\u02ff\u03ff" df = df.rename(columns={"b": unicode_row_label}) # ensure that the data frame columns have the expected names: nt.assert_equal(list(df.columns), ["a", unicode_row_label, "x"]) # plot the grid -- if successful, no UnicodeEncodingError should # occur: g = ag.FacetGrid(df, col="a", row=unicode_row_label) g = g.map(plt.plot, "x") def test_unicode_row_label_no_columns(self): # use a smaller copy of the default testing data frame: df = self.df.copy() df = df[["b", "x"]] # rename column 'b' (which will be used for the rows in the grid) # by using a Unicode string: unicode_row_label = u"\u01ff\u02ff\u03ff" df = 
df.rename(columns={"b": unicode_row_label}) # ensure that the data frame columns have the expected names: nt.assert_equal(list(df.columns), [unicode_row_label, "x"]) # plot the grid -- if successful, no UnicodeEncodingError should # occur: g = ag.FacetGrid(df, row=unicode_row_label) g = g.map(plt.plot, "x") @pytest.mark.skipif(pd.__version__.startswith("0.24"), reason="known bug in pandas") def test_unicode_content_with_row_and_column(self): df = self.df.copy() # replace content of column 'a' (which will form the columns in the # grid) by Unicode characters: unicode_column_val = np.repeat((u'\u01ff', u'\u02ff', u'\u03ff'), 20) df["a"] = unicode_column_val # make sure that the replacement worked as expected: nt.assert_equal( list(df["a"]), [u'\u01ff'] * 20 + [u'\u02ff'] * 20 + [u'\u03ff'] * 20) # plot the grid -- if successful, no UnicodeEncodingError should # occur: g = ag.FacetGrid(df, col="a", row="b") g = g.map(plt.plot, "x") @pytest.mark.skipif(pd.__version__.startswith("0.24"), reason="known bug in pandas") def test_unicode_content_no_rows(self): df = self.df.copy() # replace content of column 'a' (which will form the columns in the # grid) by Unicode characters: unicode_column_val = np.repeat((u'\u01ff', u'\u02ff', u'\u03ff'), 20) df["a"] = unicode_column_val # make sure that the replacement worked as expected: nt.assert_equal( list(df["a"]), [u'\u01ff'] * 20 + [u'\u02ff'] * 20 + [u'\u03ff'] * 20) # plot the grid -- if successful, no UnicodeEncodingError should # occur: g = ag.FacetGrid(df, col="a") g = g.map(plt.plot, "x") @pytest.mark.skipif(pd.__version__.startswith("0.24"), reason="known bug in pandas") def test_unicode_content_no_columns(self): df = self.df.copy() # replace content of column 'a' (which will form the rows in the # grid) by Unicode characters: unicode_column_val = np.repeat((u'\u01ff', u'\u02ff', u'\u03ff'), 20) df["b"] = unicode_column_val # make sure that the replacement worked as expected: nt.assert_equal( list(df["b"]), [u'\u01ff'] * 20 + [u'\u02ff'] * 20 + [u'\u03ff'] * 20) # plot the grid -- if successful, no UnicodeEncodingError should # occur: g = ag.FacetGrid(df, row="b") g = g.map(plt.plot, "x") def test_categorical_column_missing_categories(self): df = self.df.copy() df['a'] = df['a'].astype('category') g = ag.FacetGrid(df[df['a'] == 'a'], col="a", col_wrap=1) nt.assert_equal(g.axes.shape, (len(df['a'].cat.categories),)) def test_categorical_warning(self): g = ag.FacetGrid(self.df, col="b") with warnings.catch_warnings(): warnings.resetwarnings() warnings.simplefilter("always") npt.assert_warns(UserWarning, g.map, pointplot, "b", "x") class TestPairGrid(object): rs = np.random.RandomState(sum(map(ord, "PairGrid"))) df = pd.DataFrame(dict(x=rs.normal(size=60), y=rs.randint(0, 4, size=(60)), z=rs.gamma(3, size=60), a=np.repeat(list("abc"), 20), b=np.repeat(list("abcdefghijkl"), 5))) def test_self_data(self): g = ag.PairGrid(self.df) nt.assert_is(g.data, self.df) def test_ignore_datelike_data(self): df = self.df.copy() df['date'] = pd.date_range('2010-01-01', periods=len(df), freq='d') result = ag.PairGrid(self.df).data expected = df.drop('date', axis=1) tm.assert_frame_equal(result, expected) def test_self_fig(self): g = ag.PairGrid(self.df) nt.assert_is_instance(g.fig, plt.Figure) def test_self_axes(self): g = ag.PairGrid(self.df) for ax in g.axes.flat: nt.assert_is_instance(ax, plt.Axes) def test_default_axes(self): g = ag.PairGrid(self.df) nt.assert_equal(g.axes.shape, (3, 3)) nt.assert_equal(g.x_vars, ["x", "y", "z"]) nt.assert_equal(g.y_vars, ["x", 
"y", "z"]) nt.assert_true(g.square_grid) def test_specific_square_axes(self): vars = ["z", "x"] g = ag.PairGrid(self.df, vars=vars) nt.assert_equal(g.axes.shape, (len(vars), len(vars))) nt.assert_equal(g.x_vars, vars) nt.assert_equal(g.y_vars, vars) nt.assert_true(g.square_grid) def test_remove_hue_from_default(self): hue = "z" g = ag.PairGrid(self.df, hue=hue) assert hue not in g.x_vars assert hue not in g.y_vars vars = ["x", "y", "z"] g = ag.PairGrid(self.df, hue=hue, vars=vars) assert hue in g.x_vars assert hue in g.y_vars def test_specific_nonsquare_axes(self): x_vars = ["x", "y"] y_vars = ["z", "y", "x"] g = ag.PairGrid(self.df, x_vars=x_vars, y_vars=y_vars) nt.assert_equal(g.axes.shape, (len(y_vars), len(x_vars))) nt.assert_equal(g.x_vars, x_vars) nt.assert_equal(g.y_vars, y_vars) nt.assert_true(not g.square_grid) x_vars = ["x", "y"] y_vars = "z" g = ag.PairGrid(self.df, x_vars=x_vars, y_vars=y_vars) nt.assert_equal(g.axes.shape, (len(y_vars), len(x_vars))) nt.assert_equal(g.x_vars, list(x_vars)) nt.assert_equal(g.y_vars, list(y_vars)) nt.assert_true(not g.square_grid) def test_specific_square_axes_with_array(self): vars = np.array(["z", "x"]) g = ag.PairGrid(self.df, vars=vars) nt.assert_equal(g.axes.shape, (len(vars), len(vars))) nt.assert_equal(g.x_vars, list(vars)) nt.assert_equal(g.y_vars, list(vars)) nt.assert_true(g.square_grid) def test_specific_nonsquare_axes_with_array(self): x_vars = np.array(["x", "y"]) y_vars = np.array(["z", "y", "x"]) g = ag.PairGrid(self.df, x_vars=x_vars, y_vars=y_vars) nt.assert_equal(g.axes.shape, (len(y_vars), len(x_vars))) nt.assert_equal(g.x_vars, list(x_vars)) nt.assert_equal(g.y_vars, list(y_vars)) nt.assert_true(not g.square_grid) @pytest.mark.xfail(LooseVersion(mpl.__version__) < "1.5", reason="Expected failure on older matplotlib") def test_corner(self): plot_vars = ["x", "y", "z"] g1 = ag.PairGrid(self.df, vars=plot_vars, corner=True) corner_size = sum([i + 1 for i in range(len(plot_vars))]) assert len(g1.fig.axes) == corner_size g1.map_diag(plt.hist) assert len(g1.fig.axes) == (corner_size + len(plot_vars)) for ax in np.diag(g1.axes): assert not ax.yaxis.get_visible() assert not g1.axes[0, 0].get_ylabel() def test_size(self): g1 = ag.PairGrid(self.df, height=3) npt.assert_array_equal(g1.fig.get_size_inches(), (9, 9)) g2 = ag.PairGrid(self.df, height=4, aspect=.5) npt.assert_array_equal(g2.fig.get_size_inches(), (6, 12)) g3 = ag.PairGrid(self.df, y_vars=["z"], x_vars=["x", "y"], height=2, aspect=2) npt.assert_array_equal(g3.fig.get_size_inches(), (8, 2)) def test_map(self): vars = ["x", "y", "z"] g1 = ag.PairGrid(self.df) g1.map(plt.scatter) for i, axes_i in enumerate(g1.axes): for j, ax in enumerate(axes_i): x_in = self.df[vars[j]] y_in = self.df[vars[i]] x_out, y_out = ax.collections[0].get_offsets().T npt.assert_array_equal(x_in, x_out) npt.assert_array_equal(y_in, y_out) g2 = ag.PairGrid(self.df, "a") g2.map(plt.scatter) for i, axes_i in enumerate(g2.axes): for j, ax in enumerate(axes_i): x_in = self.df[vars[j]] y_in = self.df[vars[i]] for k, k_level in enumerate(self.df.a.unique()): x_in_k = x_in[self.df.a == k_level] y_in_k = y_in[self.df.a == k_level] x_out, y_out = ax.collections[k].get_offsets().T npt.assert_array_equal(x_in_k, x_out) npt.assert_array_equal(y_in_k, y_out) def test_map_nonsquare(self): x_vars = ["x"] y_vars = ["y", "z"] g = ag.PairGrid(self.df, x_vars=x_vars, y_vars=y_vars) g.map(plt.scatter) x_in = self.df.x for i, i_var in enumerate(y_vars): ax = g.axes[i, 0] y_in = self.df[i_var] x_out, y_out = 
ax.collections[0].get_offsets().T npt.assert_array_equal(x_in, x_out) npt.assert_array_equal(y_in, y_out) def test_map_lower(self): vars = ["x", "y", "z"] g = ag.PairGrid(self.df) g.map_lower(plt.scatter) for i, j in zip(*np.tril_indices_from(g.axes, -1)): ax = g.axes[i, j] x_in = self.df[vars[j]] y_in = self.df[vars[i]] x_out, y_out = ax.collections[0].get_offsets().T npt.assert_array_equal(x_in, x_out) npt.assert_array_equal(y_in, y_out) for i, j in zip(*np.triu_indices_from(g.axes)): ax = g.axes[i, j] nt.assert_equal(len(ax.collections), 0) def test_map_upper(self): vars = ["x", "y", "z"] g = ag.PairGrid(self.df) g.map_upper(plt.scatter) for i, j in zip(*np.triu_indices_from(g.axes, 1)): ax = g.axes[i, j] x_in = self.df[vars[j]] y_in = self.df[vars[i]] x_out, y_out = ax.collections[0].get_offsets().T npt.assert_array_equal(x_in, x_out) npt.assert_array_equal(y_in, y_out) for i, j in zip(*np.tril_indices_from(g.axes)): ax = g.axes[i, j] nt.assert_equal(len(ax.collections), 0) def test_map_diag(self): g1 = ag.PairGrid(self.df) g1.map_diag(plt.hist) for var, ax in zip(g1.diag_vars, g1.diag_axes): nt.assert_equal(len(ax.patches), 10) assert pytest.approx(ax.patches[0].get_x()) == self.df[var].min() g2 = ag.PairGrid(self.df, hue="a") g2.map_diag(plt.hist) for ax in g2.diag_axes: nt.assert_equal(len(ax.patches), 30) g3 = ag.PairGrid(self.df, hue="a") g3.map_diag(plt.hist, histtype='step') for ax in g3.diag_axes: for ptch in ax.patches: nt.assert_equal(ptch.fill, False) def test_map_diag_rectangular(self): x_vars = ["x", "y"] y_vars = ["x", "y", "z"] g1 = ag.PairGrid(self.df, x_vars=x_vars, y_vars=y_vars) g1.map_diag(plt.hist) assert set(g1.diag_vars) == (set(x_vars) & set(y_vars)) for var, ax in zip(g1.diag_vars, g1.diag_axes): nt.assert_equal(len(ax.patches), 10) assert pytest.approx(ax.patches[0].get_x()) == self.df[var].min() for i, ax in enumerate(np.diag(g1.axes)): assert ax.bbox.bounds == g1.diag_axes[i].bbox.bounds g2 = ag.PairGrid(self.df, x_vars=x_vars, y_vars=y_vars, hue="a") g2.map_diag(plt.hist) assert set(g2.diag_vars) == (set(x_vars) & set(y_vars)) for ax in g2.diag_axes: nt.assert_equal(len(ax.patches), 30) x_vars = ["x", "y", "z"] y_vars = ["x", "y"] g3 = ag.PairGrid(self.df, x_vars=x_vars, y_vars=y_vars) g3.map_diag(plt.hist) assert set(g3.diag_vars) == (set(x_vars) & set(y_vars)) for var, ax in zip(g3.diag_vars, g3.diag_axes): nt.assert_equal(len(ax.patches), 10) assert pytest.approx(ax.patches[0].get_x()) == self.df[var].min() for i, ax in enumerate(np.diag(g3.axes)): assert ax.bbox.bounds == g3.diag_axes[i].bbox.bounds def test_map_diag_color(self): color = "red" rgb_color = mpl.colors.colorConverter.to_rgba(color) g1 = ag.PairGrid(self.df) g1.map_diag(plt.hist, color=color) for ax in g1.diag_axes: for patch in ax.patches: assert patch.get_facecolor() == rgb_color g2 = ag.PairGrid(self.df) g2.map_diag(kdeplot, color='red') for ax in g2.diag_axes: for line in ax.lines: assert line.get_color() == color def test_map_diag_palette(self): pal = color_palette(n_colors=len(self.df.a.unique())) g = ag.PairGrid(self.df, hue="a") g.map_diag(kdeplot) for ax in g.diag_axes: for line, color in zip(ax.lines, pal): assert line.get_color() == color def test_map_diag_and_offdiag(self): vars = ["x", "y", "z"] g = ag.PairGrid(self.df) g.map_offdiag(plt.scatter) g.map_diag(plt.hist) for ax in g.diag_axes: nt.assert_equal(len(ax.patches), 10) for i, j in zip(*np.triu_indices_from(g.axes, 1)): ax = g.axes[i, j] x_in = self.df[vars[j]] y_in = self.df[vars[i]] x_out, y_out = 
ax.collections[0].get_offsets().T npt.assert_array_equal(x_in, x_out) npt.assert_array_equal(y_in, y_out) for i, j in zip(*np.tril_indices_from(g.axes, -1)): ax = g.axes[i, j] x_in = self.df[vars[j]] y_in = self.df[vars[i]] x_out, y_out = ax.collections[0].get_offsets().T npt.assert_array_equal(x_in, x_out) npt.assert_array_equal(y_in, y_out) for i, j in zip(*np.diag_indices_from(g.axes)): ax = g.axes[i, j] nt.assert_equal(len(ax.collections), 0) def test_diag_sharey(self): g = ag.PairGrid(self.df, diag_sharey=True) g.map_diag(kdeplot) for ax in g.diag_axes[1:]: assert ax.get_ylim() == g.diag_axes[0].get_ylim() def test_palette(self): rcmod.set() g = ag.PairGrid(self.df, hue="a") assert g.palette == color_palette(n_colors=len(self.df.a.unique())) g = ag.PairGrid(self.df, hue="b") assert g.palette == color_palette("husl", len(self.df.b.unique())) g = ag.PairGrid(self.df, hue="a", palette="Set2") assert g.palette == color_palette("Set2", len(self.df.a.unique())) dict_pal = dict(a="red", b="green", c="blue") list_pal = color_palette(["red", "green", "blue"]) g = ag.PairGrid(self.df, hue="a", palette=dict_pal) assert g.palette == list_pal list_pal = color_palette(["blue", "red", "green"]) g = ag.PairGrid(self.df, hue="a", hue_order=list("cab"), palette=dict_pal) assert g.palette == list_pal def test_hue_kws(self): kws = dict(marker=["o", "s", "d", "+"]) g = ag.PairGrid(self.df, hue="a", hue_kws=kws) g.map(plt.plot) for line, marker in zip(g.axes[0, 0].lines, kws["marker"]): nt.assert_equal(line.get_marker(), marker) g = ag.PairGrid(self.df, hue="a", hue_kws=kws, hue_order=list("dcab")) g.map(plt.plot) for line, marker in zip(g.axes[0, 0].lines, kws["marker"]): nt.assert_equal(line.get_marker(), marker) def test_hue_order(self): order = list("dcab") g = ag.PairGrid(self.df, hue="a", hue_order=order) g.map(plt.plot) for line, level in zip(g.axes[1, 0].lines, order): x, y = line.get_xydata().T npt.assert_array_equal(x, self.df.loc[self.df.a == level, "x"]) npt.assert_array_equal(y, self.df.loc[self.df.a == level, "y"]) plt.close("all") g = ag.PairGrid(self.df, hue="a", hue_order=order) g.map_diag(plt.plot) for line, level in zip(g.axes[0, 0].lines, order): x, y = line.get_xydata().T npt.assert_array_equal(x, self.df.loc[self.df.a == level, "x"]) npt.assert_array_equal(y, self.df.loc[self.df.a == level, "x"]) plt.close("all") g = ag.PairGrid(self.df, hue="a", hue_order=order) g.map_lower(plt.plot) for line, level in zip(g.axes[1, 0].lines, order): x, y = line.get_xydata().T npt.assert_array_equal(x, self.df.loc[self.df.a == level, "x"]) npt.assert_array_equal(y, self.df.loc[self.df.a == level, "y"]) plt.close("all") g = ag.PairGrid(self.df, hue="a", hue_order=order) g.map_upper(plt.plot) for line, level in zip(g.axes[0, 1].lines, order): x, y = line.get_xydata().T npt.assert_array_equal(x, self.df.loc[self.df.a == level, "y"]) npt.assert_array_equal(y, self.df.loc[self.df.a == level, "x"]) plt.close("all") def test_hue_order_missing_level(self): order = list("dcaeb") g = ag.PairGrid(self.df, hue="a", hue_order=order) g.map(plt.plot) for line, level in zip(g.axes[1, 0].lines, order): x, y = line.get_xydata().T npt.assert_array_equal(x, self.df.loc[self.df.a == level, "x"]) npt.assert_array_equal(y, self.df.loc[self.df.a == level, "y"]) plt.close("all") g = ag.PairGrid(self.df, hue="a", hue_order=order) g.map_diag(plt.plot) for line, level in zip(g.axes[0, 0].lines, order): x, y = line.get_xydata().T npt.assert_array_equal(x, self.df.loc[self.df.a == level, "x"]) npt.assert_array_equal(y, 
self.df.loc[self.df.a == level, "x"]) plt.close("all") g = ag.PairGrid(self.df, hue="a", hue_order=order) g.map_lower(plt.plot) for line, level in zip(g.axes[1, 0].lines, order): x, y = line.get_xydata().T npt.assert_array_equal(x, self.df.loc[self.df.a == level, "x"]) npt.assert_array_equal(y, self.df.loc[self.df.a == level, "y"]) plt.close("all") g = ag.PairGrid(self.df, hue="a", hue_order=order) g.map_upper(plt.plot) for line, level in zip(g.axes[0, 1].lines, order): x, y = line.get_xydata().T npt.assert_array_equal(x, self.df.loc[self.df.a == level, "y"]) npt.assert_array_equal(y, self.df.loc[self.df.a == level, "x"]) plt.close("all") def test_nondefault_index(self): df = self.df.copy().set_index("b") plot_vars = ["x", "y", "z"] g1 = ag.PairGrid(df) g1.map(plt.scatter) for i, axes_i in enumerate(g1.axes): for j, ax in enumerate(axes_i): x_in = self.df[plot_vars[j]] y_in = self.df[plot_vars[i]] x_out, y_out = ax.collections[0].get_offsets().T npt.assert_array_equal(x_in, x_out) npt.assert_array_equal(y_in, y_out) g2 = ag.PairGrid(df, "a") g2.map(plt.scatter) for i, axes_i in enumerate(g2.axes): for j, ax in enumerate(axes_i): x_in = self.df[plot_vars[j]] y_in = self.df[plot_vars[i]] for k, k_level in enumerate(self.df.a.unique()): x_in_k = x_in[self.df.a == k_level] y_in_k = y_in[self.df.a == k_level] x_out, y_out = ax.collections[k].get_offsets().T npt.assert_array_equal(x_in_k, x_out) npt.assert_array_equal(y_in_k, y_out) def test_dropna(self): df = self.df.copy() n_null = 20 df.loc[np.arange(n_null), "x"] = np.nan plot_vars = ["x", "y", "z"] g1 = ag.PairGrid(df, vars=plot_vars, dropna=True) g1.map(plt.scatter) for i, axes_i in enumerate(g1.axes): for j, ax in enumerate(axes_i): x_in = df[plot_vars[j]] y_in = df[plot_vars[i]] x_out, y_out = ax.collections[0].get_offsets().T n_valid = (x_in * y_in).notnull().sum() assert n_valid == len(x_out) assert n_valid == len(y_out) def test_pairplot(self): vars = ["x", "y", "z"] g = ag.pairplot(self.df) for ax in g.diag_axes: assert len(ax.patches) > 1 for i, j in zip(*np.triu_indices_from(g.axes, 1)): ax = g.axes[i, j] x_in = self.df[vars[j]] y_in = self.df[vars[i]] x_out, y_out = ax.collections[0].get_offsets().T npt.assert_array_equal(x_in, x_out) npt.assert_array_equal(y_in, y_out) for i, j in zip(*np.tril_indices_from(g.axes, -1)): ax = g.axes[i, j] x_in = self.df[vars[j]] y_in = self.df[vars[i]] x_out, y_out = ax.collections[0].get_offsets().T npt.assert_array_equal(x_in, x_out) npt.assert_array_equal(y_in, y_out) for i, j in zip(*np.diag_indices_from(g.axes)): ax = g.axes[i, j] nt.assert_equal(len(ax.collections), 0) g = ag.pairplot(self.df, hue="a") n = len(self.df.a.unique()) for ax in g.diag_axes: assert len(ax.lines) == n assert len(ax.collections) == n def test_pairplot_reg(self): vars = ["x", "y", "z"] g = ag.pairplot(self.df, diag_kind="hist", kind="reg") for ax in g.diag_axes: nt.assert_equal(len(ax.patches), 10) for i, j in zip(*np.triu_indices_from(g.axes, 1)): ax = g.axes[i, j] x_in = self.df[vars[j]] y_in = self.df[vars[i]] x_out, y_out = ax.collections[0].get_offsets().T npt.assert_array_equal(x_in, x_out) npt.assert_array_equal(y_in, y_out) nt.assert_equal(len(ax.lines), 1) nt.assert_equal(len(ax.collections), 2) for i, j in zip(*np.tril_indices_from(g.axes, -1)): ax = g.axes[i, j] x_in = self.df[vars[j]] y_in = self.df[vars[i]] x_out, y_out = ax.collections[0].get_offsets().T npt.assert_array_equal(x_in, x_out) npt.assert_array_equal(y_in, y_out) nt.assert_equal(len(ax.lines), 1) nt.assert_equal(len(ax.collections), 2) for 
i, j in zip(*np.diag_indices_from(g.axes)): ax = g.axes[i, j] nt.assert_equal(len(ax.collections), 0) def test_pairplot_kde(self): vars = ["x", "y", "z"] g = ag.pairplot(self.df, diag_kind="kde") for ax in g.diag_axes: nt.assert_equal(len(ax.lines), 1) for i, j in zip(*np.triu_indices_from(g.axes, 1)): ax = g.axes[i, j] x_in = self.df[vars[j]] y_in = self.df[vars[i]] x_out, y_out = ax.collections[0].get_offsets().T npt.assert_array_equal(x_in, x_out) npt.assert_array_equal(y_in, y_out) for i, j in zip(*np.tril_indices_from(g.axes, -1)): ax = g.axes[i, j] x_in = self.df[vars[j]] y_in = self.df[vars[i]] x_out, y_out = ax.collections[0].get_offsets().T npt.assert_array_equal(x_in, x_out) npt.assert_array_equal(y_in, y_out) for i, j in zip(*np.diag_indices_from(g.axes)): ax = g.axes[i, j] nt.assert_equal(len(ax.collections), 0) def test_pairplot_markers(self): vars = ["x", "y", "z"] markers = ["o", "x", "s"] g = ag.pairplot(self.df, hue="a", vars=vars, markers=markers) assert g.hue_kws["marker"] == markers plt.close("all") with pytest.raises(ValueError): g = ag.pairplot(self.df, hue="a", vars=vars, markers=markers[:-2]) class TestJointGrid(object): rs = np.random.RandomState(sum(map(ord, "JointGrid"))) x = rs.randn(100) y = rs.randn(100) x_na = x.copy() x_na[10] = np.nan x_na[20] = np.nan data = pd.DataFrame(dict(x=x, y=y, x_na=x_na)) def test_margin_grid_from_lists(self): g = ag.JointGrid(self.x.tolist(), self.y.tolist()) npt.assert_array_equal(g.x, self.x) npt.assert_array_equal(g.y, self.y) def test_margin_grid_from_arrays(self): g = ag.JointGrid(self.x, self.y) npt.assert_array_equal(g.x, self.x) npt.assert_array_equal(g.y, self.y) def test_margin_grid_from_series(self): g = ag.JointGrid(self.data.x, self.data.y) npt.assert_array_equal(g.x, self.x) npt.assert_array_equal(g.y, self.y) def test_margin_grid_from_dataframe(self): g = ag.JointGrid("x", "y", self.data) npt.assert_array_equal(g.x, self.x) npt.assert_array_equal(g.y, self.y) def test_margin_grid_from_dataframe_bad_variable(self): with nt.assert_raises(ValueError): ag.JointGrid("x", "bad_column", self.data) def test_margin_grid_axis_labels(self): g = ag.JointGrid("x", "y", self.data) xlabel, ylabel = g.ax_joint.get_xlabel(), g.ax_joint.get_ylabel() nt.assert_equal(xlabel, "x") nt.assert_equal(ylabel, "y") g.set_axis_labels("x variable", "y variable") xlabel, ylabel = g.ax_joint.get_xlabel(), g.ax_joint.get_ylabel() nt.assert_equal(xlabel, "x variable") nt.assert_equal(ylabel, "y variable") def test_dropna(self): g = ag.JointGrid("x_na", "y", self.data, dropna=False) nt.assert_equal(len(g.x), len(self.x_na)) g = ag.JointGrid("x_na", "y", self.data, dropna=True) nt.assert_equal(len(g.x), pd.notnull(self.x_na).sum()) def test_axlims(self): lim = (-3, 3) g = ag.JointGrid("x", "y", self.data, xlim=lim, ylim=lim) nt.assert_equal(g.ax_joint.get_xlim(), lim) nt.assert_equal(g.ax_joint.get_ylim(), lim) nt.assert_equal(g.ax_marg_x.get_xlim(), lim) nt.assert_equal(g.ax_marg_y.get_ylim(), lim) def test_marginal_ticks(self): g = ag.JointGrid("x", "y", self.data) nt.assert_true(~len(g.ax_marg_x.get_xticks())) nt.assert_true(~len(g.ax_marg_y.get_yticks())) def test_bivariate_plot(self): g = ag.JointGrid("x", "y", self.data) g.plot_joint(plt.plot) x, y = g.ax_joint.lines[0].get_xydata().T npt.assert_array_equal(x, self.x) npt.assert_array_equal(y, self.y) def test_univariate_plot(self): g = ag.JointGrid("x", "x", self.data) g.plot_marginals(kdeplot) _, y1 = g.ax_marg_x.lines[0].get_xydata().T y2, _ = g.ax_marg_y.lines[0].get_xydata().T 
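        # With the same variable on both axes, the two marginal densities should
        # be identical; the y-axis marginal is drawn rotated, so its x data is
        # compared against the top marginal's y data.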
npt.assert_array_equal(y1, y2) def test_plot(self): g = ag.JointGrid("x", "x", self.data) g.plot(plt.plot, kdeplot) x, y = g.ax_joint.lines[0].get_xydata().T npt.assert_array_equal(x, self.x) npt.assert_array_equal(y, self.x) _, y1 = g.ax_marg_x.lines[0].get_xydata().T y2, _ = g.ax_marg_y.lines[0].get_xydata().T npt.assert_array_equal(y1, y2) def test_annotate(self): g = ag.JointGrid("x", "y", self.data) rp = stats.pearsonr(self.x, self.y) with pytest.warns(UserWarning): g.annotate(stats.pearsonr) annotation = g.ax_joint.legend_.texts[0].get_text() nt.assert_equal(annotation, "pearsonr = %.2g; p = %.2g" % rp) with pytest.warns(UserWarning): g.annotate(stats.pearsonr, stat="correlation") annotation = g.ax_joint.legend_.texts[0].get_text() nt.assert_equal(annotation, "correlation = %.2g; p = %.2g" % rp) def rsquared(x, y): return stats.pearsonr(x, y)[0] ** 2 r2 = rsquared(self.x, self.y) with pytest.warns(UserWarning): g.annotate(rsquared) annotation = g.ax_joint.legend_.texts[0].get_text() nt.assert_equal(annotation, "rsquared = %.2g" % r2) template = "{stat} = {val:.3g} (p = {p:.3g})" with pytest.warns(UserWarning): g.annotate(stats.pearsonr, template=template) annotation = g.ax_joint.legend_.texts[0].get_text() nt.assert_equal(annotation, template.format(stat="pearsonr", val=rp[0], p=rp[1])) def test_space(self): g = ag.JointGrid("x", "y", self.data, space=0) joint_bounds = g.ax_joint.bbox.bounds marg_x_bounds = g.ax_marg_x.bbox.bounds marg_y_bounds = g.ax_marg_y.bbox.bounds nt.assert_equal(joint_bounds[2], marg_x_bounds[2]) nt.assert_equal(joint_bounds[3], marg_y_bounds[3]) class TestJointPlot(object): rs = np.random.RandomState(sum(map(ord, "jointplot"))) x = rs.randn(100) y = rs.randn(100) data = pd.DataFrame(dict(x=x, y=y)) def test_scatter(self): g = ag.jointplot("x", "y", self.data) nt.assert_equal(len(g.ax_joint.collections), 1) x, y = g.ax_joint.collections[0].get_offsets().T npt.assert_array_equal(self.x, x) npt.assert_array_equal(self.y, y) x_bins = _freedman_diaconis_bins(self.x) nt.assert_equal(len(g.ax_marg_x.patches), x_bins) y_bins = _freedman_diaconis_bins(self.y) nt.assert_equal(len(g.ax_marg_y.patches), y_bins) def test_reg(self): g = ag.jointplot("x", "y", self.data, kind="reg") nt.assert_equal(len(g.ax_joint.collections), 2) x, y = g.ax_joint.collections[0].get_offsets().T npt.assert_array_equal(self.x, x) npt.assert_array_equal(self.y, y) x_bins = _freedman_diaconis_bins(self.x) nt.assert_equal(len(g.ax_marg_x.patches), x_bins) y_bins = _freedman_diaconis_bins(self.y) nt.assert_equal(len(g.ax_marg_y.patches), y_bins) nt.assert_equal(len(g.ax_joint.lines), 1) nt.assert_equal(len(g.ax_marg_x.lines), 1) nt.assert_equal(len(g.ax_marg_y.lines), 1) def test_resid(self): g = ag.jointplot("x", "y", self.data, kind="resid") nt.assert_equal(len(g.ax_joint.collections), 1) nt.assert_equal(len(g.ax_joint.lines), 1) nt.assert_equal(len(g.ax_marg_x.lines), 0) nt.assert_equal(len(g.ax_marg_y.lines), 1) def test_hex(self): g = ag.jointplot("x", "y", self.data, kind="hex") nt.assert_equal(len(g.ax_joint.collections), 1) x_bins = _freedman_diaconis_bins(self.x) nt.assert_equal(len(g.ax_marg_x.patches), x_bins) y_bins = _freedman_diaconis_bins(self.y) nt.assert_equal(len(g.ax_marg_y.patches), y_bins) def test_kde(self): g = ag.jointplot("x", "y", self.data, kind="kde") nt.assert_true(len(g.ax_joint.collections) > 0) nt.assert_equal(len(g.ax_marg_x.collections), 1) nt.assert_equal(len(g.ax_marg_y.collections), 1) nt.assert_equal(len(g.ax_marg_x.lines), 1) 
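        # With kind="kde", each marginal axis is expected to hold one shaded
        # density (a collection) and one density curve (a line).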
nt.assert_equal(len(g.ax_marg_y.lines), 1) def test_color(self): g = ag.jointplot("x", "y", self.data, color="purple") purple = mpl.colors.colorConverter.to_rgb("purple") scatter_color = g.ax_joint.collections[0].get_facecolor()[0, :3] nt.assert_equal(tuple(scatter_color), purple) hist_color = g.ax_marg_x.patches[0].get_facecolor()[:3] nt.assert_equal(hist_color, purple) def test_annotation(self): with pytest.warns(UserWarning): g = ag.jointplot("x", "y", self.data, stat_func=stats.pearsonr) nt.assert_equal(len(g.ax_joint.legend_.get_texts()), 1) g = ag.jointplot("x", "y", self.data, stat_func=None) nt.assert_is(g.ax_joint.legend_, None) def test_hex_customise(self): # test that default gridsize can be overridden g = ag.jointplot("x", "y", self.data, kind="hex", joint_kws=dict(gridsize=5)) nt.assert_equal(len(g.ax_joint.collections), 1) a = g.ax_joint.collections[0].get_array() nt.assert_equal(28, a.shape[0]) # 28 hexagons expected for gridsize 5 def test_bad_kind(self): with nt.assert_raises(ValueError): ag.jointplot("x", "y", self.data, kind="not_a_kind") def test_leaky_dict(self): # Validate input dicts are unchanged by jointplot plotting function for kwarg in ("joint_kws", "marginal_kws", "annot_kws"): for kind in ("hex", "kde", "resid", "reg", "scatter"): empty_dict = {} ag.jointplot("x", "y", self.data, kind=kind, **{kwarg: empty_dict}) assert empty_dict == {} seaborn-0.10.0/seaborn/tests/test_categorical.py000066400000000000000000003106361361256634400217250ustar00rootroot00000000000000import numpy as np import pandas as pd import scipy from scipy import stats, spatial import matplotlib as mpl import matplotlib.pyplot as plt from matplotlib.colors import rgb2hex from distutils.version import LooseVersion import pytest import nose.tools as nt import numpy.testing as npt from .. import categorical as cat from .. 
import palettes

pandas_has_categoricals = LooseVersion(pd.__version__) >= "0.15"
mpl_barplot_change = LooseVersion("2.0.1")


class CategoricalFixture(object):
    """Test boxplot (also base class for things like violinplots)."""
    rs = np.random.RandomState(30)
    n_total = 60
    x = rs.randn(int(n_total / 3), 3)
    x_df = pd.DataFrame(x, columns=pd.Series(list("XYZ"), name="big"))
    y = pd.Series(rs.randn(n_total), name="y_data")
    y_perm = y.reindex(rs.choice(y.index, y.size, replace=False))
    g = pd.Series(np.repeat(list("abc"), int(n_total / 3)), name="small")
    h = pd.Series(np.tile(list("mn"), int(n_total / 2)), name="medium")
    u = pd.Series(np.tile(list("jkh"), int(n_total / 3)))
    df = pd.DataFrame(dict(y=y, g=g, h=h, u=u))
    x_df["W"] = g


class TestCategoricalPlotter(CategoricalFixture):

    def test_wide_df_data(self):

        p = cat._CategoricalPlotter()

        # Test basic wide DataFrame
        p.establish_variables(data=self.x_df)

        # Check data attribute
        for x, y, in zip(p.plot_data, self.x_df[["X", "Y", "Z"]].values.T):
            npt.assert_array_equal(x, y)

        # Check semantic attributes
        nt.assert_equal(p.orient, "v")
        nt.assert_is(p.plot_hues, None)
        nt.assert_is(p.group_label, "big")
        nt.assert_is(p.value_label, None)

        # Test wide dataframe with forced horizontal orientation
        p.establish_variables(data=self.x_df, orient="horiz")
        nt.assert_equal(p.orient, "h")

        # Test exception by trying to hue-group with a wide dataframe
        with nt.assert_raises(ValueError):
            p.establish_variables(hue="d", data=self.x_df)

    def test_1d_input_data(self):

        p = cat._CategoricalPlotter()

        # Test basic vector data
        x_1d_array = self.x.ravel()
        p.establish_variables(data=x_1d_array)
        nt.assert_equal(len(p.plot_data), 1)
        nt.assert_equal(len(p.plot_data[0]), self.n_total)
        nt.assert_is(p.group_label, None)
        nt.assert_is(p.value_label, None)

        # Test basic vector data in list form
        x_1d_list = x_1d_array.tolist()
        p.establish_variables(data=x_1d_list)
        nt.assert_equal(len(p.plot_data), 1)
        nt.assert_equal(len(p.plot_data[0]), self.n_total)
        nt.assert_is(p.group_label, None)
        nt.assert_is(p.value_label, None)

        # Test an object array that looks 1D but isn't
        x_notreally_1d = np.array([self.x.ravel(),
                                   self.x.ravel()[:int(self.n_total / 2)]])
        p.establish_variables(data=x_notreally_1d)
        nt.assert_equal(len(p.plot_data), 2)
        nt.assert_equal(len(p.plot_data[0]), self.n_total)
        nt.assert_equal(len(p.plot_data[1]), self.n_total / 2)
        nt.assert_is(p.group_label, None)
        nt.assert_is(p.value_label, None)

    def test_2d_input_data(self):

        p = cat._CategoricalPlotter()

        x = self.x[:, 0]

        # Test vector data that looks 2D but doesn't really have columns
        p.establish_variables(data=x[:, np.newaxis])
        nt.assert_equal(len(p.plot_data), 1)
        nt.assert_equal(len(p.plot_data[0]), self.x.shape[0])
        nt.assert_is(p.group_label, None)
        nt.assert_is(p.value_label, None)

        # Test vector data that looks 2D but doesn't really have rows
        p.establish_variables(data=x[np.newaxis, :])
        nt.assert_equal(len(p.plot_data), 1)
        nt.assert_equal(len(p.plot_data[0]), self.x.shape[0])
        nt.assert_is(p.group_label, None)
        nt.assert_is(p.value_label, None)

    def test_3d_input_data(self):

        p = cat._CategoricalPlotter()

        # Test that passing actually 3D data raises
        x = np.zeros((5, 5, 5))
        with nt.assert_raises(ValueError):
            p.establish_variables(data=x)

    def test_list_of_array_input_data(self):

        p = cat._CategoricalPlotter()

        # Test 2D input in list form
        x_list = self.x.T.tolist()
        p.establish_variables(data=x_list)
        nt.assert_equal(len(p.plot_data), 3)
        lengths = [len(v_i) for v_i in p.plot_data]
        nt.assert_equal(lengths, [self.n_total / 3] * 3)
        nt.assert_is(p.group_label,
None) nt.assert_is(p.value_label, None) def test_wide_array_input_data(self): p = cat._CategoricalPlotter() # Test 2D input in array form p.establish_variables(data=self.x) nt.assert_equal(np.shape(p.plot_data), (3, self.n_total / 3)) npt.assert_array_equal(p.plot_data, self.x.T) nt.assert_is(p.group_label, None) nt.assert_is(p.value_label, None) def test_single_long_direct_inputs(self): p = cat._CategoricalPlotter() # Test passing a series to the x variable p.establish_variables(x=self.y) npt.assert_equal(p.plot_data, [self.y]) nt.assert_equal(p.orient, "h") nt.assert_equal(p.value_label, "y_data") nt.assert_is(p.group_label, None) # Test passing a series to the y variable p.establish_variables(y=self.y) npt.assert_equal(p.plot_data, [self.y]) nt.assert_equal(p.orient, "v") nt.assert_equal(p.value_label, "y_data") nt.assert_is(p.group_label, None) # Test passing an array to the y variable p.establish_variables(y=self.y.values) npt.assert_equal(p.plot_data, [self.y]) nt.assert_equal(p.orient, "v") nt.assert_is(p.value_label, None) nt.assert_is(p.group_label, None) # Test array and series with non-default index x = pd.Series([1, 1, 1, 1], index=[0, 2, 4, 6]) y = np.array([1, 2, 3, 4]) p.establish_variables(x, y) assert len(p.plot_data[0]) == 4 def test_single_long_indirect_inputs(self): p = cat._CategoricalPlotter() # Test referencing a DataFrame series in the x variable p.establish_variables(x="y", data=self.df) npt.assert_equal(p.plot_data, [self.y]) nt.assert_equal(p.orient, "h") nt.assert_equal(p.value_label, "y") nt.assert_is(p.group_label, None) # Test referencing a DataFrame series in the y variable p.establish_variables(y="y", data=self.df) npt.assert_equal(p.plot_data, [self.y]) nt.assert_equal(p.orient, "v") nt.assert_equal(p.value_label, "y") nt.assert_is(p.group_label, None) def test_longform_groupby(self): p = cat._CategoricalPlotter() # Test a vertically oriented grouped and nested plot p.establish_variables("g", "y", "h", data=self.df) nt.assert_equal(len(p.plot_data), 3) nt.assert_equal(len(p.plot_hues), 3) nt.assert_equal(p.orient, "v") nt.assert_equal(p.value_label, "y") nt.assert_equal(p.group_label, "g") nt.assert_equal(p.hue_title, "h") for group, vals in zip(["a", "b", "c"], p.plot_data): npt.assert_array_equal(vals, self.y[self.g == group]) for group, hues in zip(["a", "b", "c"], p.plot_hues): npt.assert_array_equal(hues, self.h[self.g == group]) # Test a grouped and nested plot with direct array value data p.establish_variables("g", self.y.values, "h", self.df) nt.assert_is(p.value_label, None) nt.assert_equal(p.group_label, "g") for group, vals in zip(["a", "b", "c"], p.plot_data): npt.assert_array_equal(vals, self.y[self.g == group]) # Test a grouped and nested plot with direct array hue data p.establish_variables("g", "y", self.h.values, self.df) for group, hues in zip(["a", "b", "c"], p.plot_hues): npt.assert_array_equal(hues, self.h[self.g == group]) # Test categorical grouping data if pandas_has_categoricals: df = self.df.copy() df.g = df.g.astype("category") # Test that horizontal orientation is automatically detected p.establish_variables("y", "g", "h", data=df) nt.assert_equal(len(p.plot_data), 3) nt.assert_equal(len(p.plot_hues), 3) nt.assert_equal(p.orient, "h") nt.assert_equal(p.value_label, "y") nt.assert_equal(p.group_label, "g") nt.assert_equal(p.hue_title, "h") for group, vals in zip(["a", "b", "c"], p.plot_data): npt.assert_array_equal(vals, self.y[self.g == group]) for group, hues in zip(["a", "b", "c"], p.plot_hues): npt.assert_array_equal(hues, 
self.h[self.g == group]) # Test grouped data that matches on index p1 = cat._CategoricalPlotter() p1.establish_variables(self.g, self.y, self.h) p2 = cat._CategoricalPlotter() p2.establish_variables(self.g, self.y[::-1], self.h) for i, (d1, d2) in enumerate(zip(p1.plot_data, p2.plot_data)): assert np.array_equal(d1.sort_index(), d2.sort_index()) def test_input_validation(self): p = cat._CategoricalPlotter() kws = dict(x="g", y="y", hue="h", units="u", data=self.df) for var in ["x", "y", "hue", "units"]: input_kws = kws.copy() input_kws[var] = "bad_input" with nt.assert_raises(ValueError): p.establish_variables(**input_kws) def test_order(self): p = cat._CategoricalPlotter() # Test inferred order from a wide dataframe input p.establish_variables(data=self.x_df) nt.assert_equal(p.group_names, ["X", "Y", "Z"]) # Test specified order with a wide dataframe input p.establish_variables(data=self.x_df, order=["Y", "Z", "X"]) nt.assert_equal(p.group_names, ["Y", "Z", "X"]) for group, vals in zip(["Y", "Z", "X"], p.plot_data): npt.assert_array_equal(vals, self.x_df[group]) with nt.assert_raises(ValueError): p.establish_variables(data=self.x, order=[1, 2, 0]) # Test inferred order from a grouped longform input p.establish_variables("g", "y", data=self.df) nt.assert_equal(p.group_names, ["a", "b", "c"]) # Test specified order from a grouped longform input p.establish_variables("g", "y", data=self.df, order=["b", "a", "c"]) nt.assert_equal(p.group_names, ["b", "a", "c"]) for group, vals in zip(["b", "a", "c"], p.plot_data): npt.assert_array_equal(vals, self.y[self.g == group]) # Test inferred order from a grouped input with categorical groups if pandas_has_categoricals: df = self.df.copy() df.g = df.g.astype("category") df.g = df.g.cat.reorder_categories(["c", "b", "a"]) p.establish_variables("g", "y", data=df) nt.assert_equal(p.group_names, ["c", "b", "a"]) for group, vals in zip(["c", "b", "a"], p.plot_data): npt.assert_array_equal(vals, self.y[self.g == group]) df.g = (df.g.cat.add_categories("d") .cat.reorder_categories(["c", "b", "d", "a"])) p.establish_variables("g", "y", data=df) nt.assert_equal(p.group_names, ["c", "b", "d", "a"]) def test_hue_order(self): p = cat._CategoricalPlotter() # Test inferred hue order p.establish_variables("g", "y", "h", data=self.df) nt.assert_equal(p.hue_names, ["m", "n"]) # Test specified hue order p.establish_variables("g", "y", "h", data=self.df, hue_order=["n", "m"]) nt.assert_equal(p.hue_names, ["n", "m"]) # Test inferred hue order from a categorical hue input if pandas_has_categoricals: df = self.df.copy() df.h = df.h.astype("category") df.h = df.h.cat.reorder_categories(["n", "m"]) p.establish_variables("g", "y", "h", data=df) nt.assert_equal(p.hue_names, ["n", "m"]) df.h = (df.h.cat.add_categories("o") .cat.reorder_categories(["o", "m", "n"])) p.establish_variables("g", "y", "h", data=df) nt.assert_equal(p.hue_names, ["o", "m", "n"]) def test_plot_units(self): p = cat._CategoricalPlotter() p.establish_variables("g", "y", "h", data=self.df) nt.assert_is(p.plot_units, None) p.establish_variables("g", "y", "h", data=self.df, units="u") for group, units in zip(["a", "b", "c"], p.plot_units): npt.assert_array_equal(units, self.u[self.g == group]) def test_infer_orient(self): p = cat._CategoricalPlotter() cats = pd.Series(["a", "b", "c"] * 10) nums = pd.Series(self.rs.randn(30)) nt.assert_equal(p.infer_orient(cats, nums), "v") nt.assert_equal(p.infer_orient(nums, cats), "h") nt.assert_equal(p.infer_orient(nums, None), "h") nt.assert_equal(p.infer_orient(None, 
nums), "v")

        nt.assert_equal(p.infer_orient(nums, nums, "vert"), "v")
        nt.assert_equal(p.infer_orient(nums, nums, "hori"), "h")

        with nt.assert_raises(ValueError):
            p.infer_orient(cats, cats)

        if pandas_has_categoricals:
            cats = pd.Series([0, 1, 2] * 10, dtype="category")
            nt.assert_equal(p.infer_orient(cats, nums), "v")
            nt.assert_equal(p.infer_orient(nums, cats), "h")

            with nt.assert_raises(ValueError):
                p.infer_orient(cats, cats)

    def test_default_palettes(self):

        p = cat._CategoricalPlotter()

        # Test palette mapping the x position
        p.establish_variables("g", "y", data=self.df)
        p.establish_colors(None, None, 1)
        nt.assert_equal(p.colors, palettes.color_palette(n_colors=3))

        # Test palette mapping the hue position
        p.establish_variables("g", "y", "h", data=self.df)
        p.establish_colors(None, None, 1)
        nt.assert_equal(p.colors, palettes.color_palette(n_colors=2))

    def test_default_palette_with_many_levels(self):

        with palettes.color_palette(["blue", "red"], 2):
            p = cat._CategoricalPlotter()
            p.establish_variables("g", "y", data=self.df)
            p.establish_colors(None, None, 1)
            npt.assert_array_equal(p.colors, palettes.husl_palette(3, l=.7))  # noqa

    def test_specific_color(self):

        p = cat._CategoricalPlotter()

        # Test the same color for each x position
        p.establish_variables("g", "y", data=self.df)
        p.establish_colors("blue", None, 1)
        blue_rgb = mpl.colors.colorConverter.to_rgb("blue")
        nt.assert_equal(p.colors, [blue_rgb] * 3)

        # Test a color-based blend for the hue mapping
        p.establish_variables("g", "y", "h", data=self.df)
        p.establish_colors("#ff0022", None, 1)
        rgba_array = np.array(palettes.light_palette("#ff0022", 2))
        npt.assert_array_almost_equal(p.colors, rgba_array[:, :3])

    def test_specific_palette(self):

        p = cat._CategoricalPlotter()

        # Test palette mapping the x position
        p.establish_variables("g", "y", data=self.df)
        p.establish_colors(None, "dark", 1)
        nt.assert_equal(p.colors, palettes.color_palette("dark", 3))

        # Test palette mapping the hue position
        p.establish_variables("g", "y", "h", data=self.df)
        p.establish_colors(None, "muted", 1)
        nt.assert_equal(p.colors, palettes.color_palette("muted", 2))

        # Test that specified palette overrides specified color
        p = cat._CategoricalPlotter()
        p.establish_variables("g", "y", data=self.df)
        p.establish_colors("blue", "deep", 1)
        nt.assert_equal(p.colors, palettes.color_palette("deep", 3))

    def test_dict_as_palette(self):

        p = cat._CategoricalPlotter()
        p.establish_variables("g", "y", "h", data=self.df)
        pal = {"m": (0, 0, 1), "n": (1, 0, 0)}
        p.establish_colors(None, pal, 1)
        nt.assert_equal(p.colors, [(0, 0, 1), (1, 0, 0)])

    def test_palette_desaturation(self):

        p = cat._CategoricalPlotter()
        p.establish_variables("g", "y", data=self.df)

        p.establish_colors((0, 0, 1), None, .5)
        nt.assert_equal(p.colors, [(.25, .25, .75)] * 3)

        p.establish_colors(None, [(0, 0, 1), (1, 0, 0), "w"], .5)
        nt.assert_equal(p.colors, [(.25, .25, .75),
                                   (.75, .25, .25),
                                   (1, 1, 1)])


class TestCategoricalStatPlotter(CategoricalFixture):

    def test_no_bootstrapping(self):

        p = cat._CategoricalStatPlotter()
        p.establish_variables("g", "y", data=self.df)
        p.estimate_statistic(np.mean, None, 100, None)
        npt.assert_array_equal(p.confint, np.array([]))

        p.establish_variables("g", "y", "h", data=self.df)
        p.estimate_statistic(np.mean, None, 100, None)
        npt.assert_array_equal(p.confint, np.array([[], [], []]))

    def test_single_layer_stats(self):

        p = cat._CategoricalStatPlotter()

        g = pd.Series(np.repeat(list("abc"), 100))
        y = pd.Series(np.random.RandomState(0).randn(300))

        p.establish_variables(g, y)
p.estimate_statistic(np.mean, 95, 10000, None) nt.assert_equal(p.statistic.shape, (3,)) nt.assert_equal(p.confint.shape, (3, 2)) npt.assert_array_almost_equal(p.statistic, y.groupby(g).mean()) for ci, (_, grp_y) in zip(p.confint, y.groupby(g)): sem = stats.sem(grp_y) mean = grp_y.mean() stats.norm.ppf(.975) half_ci = stats.norm.ppf(.975) * sem ci_want = mean - half_ci, mean + half_ci npt.assert_array_almost_equal(ci_want, ci, 2) def test_single_layer_stats_with_units(self): p = cat._CategoricalStatPlotter() g = pd.Series(np.repeat(list("abc"), 90)) y = pd.Series(np.random.RandomState(0).randn(270)) u = pd.Series(np.repeat(np.tile(list("xyz"), 30), 3)) y[u == "x"] -= 3 y[u == "y"] += 3 p.establish_variables(g, y) p.estimate_statistic(np.mean, 95, 10000, None) stat1, ci1 = p.statistic, p.confint p.establish_variables(g, y, units=u) p.estimate_statistic(np.mean, 95, 10000, None) stat2, ci2 = p.statistic, p.confint npt.assert_array_equal(stat1, stat2) ci1_size = ci1[:, 1] - ci1[:, 0] ci2_size = ci2[:, 1] - ci2[:, 0] npt.assert_array_less(ci1_size, ci2_size) def test_single_layer_stats_with_missing_data(self): p = cat._CategoricalStatPlotter() g = pd.Series(np.repeat(list("abc"), 100)) y = pd.Series(np.random.RandomState(0).randn(300)) p.establish_variables(g, y, order=list("abdc")) p.estimate_statistic(np.mean, 95, 10000, None) nt.assert_equal(p.statistic.shape, (4,)) nt.assert_equal(p.confint.shape, (4, 2)) mean = y[g == "b"].mean() sem = stats.sem(y[g == "b"]) half_ci = stats.norm.ppf(.975) * sem ci = mean - half_ci, mean + half_ci npt.assert_almost_equal(p.statistic[1], mean) npt.assert_array_almost_equal(p.confint[1], ci, 2) npt.assert_equal(p.statistic[2], np.nan) npt.assert_array_equal(p.confint[2], (np.nan, np.nan)) def test_nested_stats(self): p = cat._CategoricalStatPlotter() g = pd.Series(np.repeat(list("abc"), 100)) h = pd.Series(np.tile(list("xy"), 150)) y = pd.Series(np.random.RandomState(0).randn(300)) p.establish_variables(g, y, h) p.estimate_statistic(np.mean, 95, 50000, None) nt.assert_equal(p.statistic.shape, (3, 2)) nt.assert_equal(p.confint.shape, (3, 2, 2)) npt.assert_array_almost_equal(p.statistic, y.groupby([g, h]).mean().unstack()) for ci_g, (_, grp_y) in zip(p.confint, y.groupby(g)): for ci, hue_y in zip(ci_g, [grp_y[::2], grp_y[1::2]]): sem = stats.sem(hue_y) mean = hue_y.mean() half_ci = stats.norm.ppf(.975) * sem ci_want = mean - half_ci, mean + half_ci npt.assert_array_almost_equal(ci_want, ci, 2) def test_bootstrap_seed(self): p = cat._CategoricalStatPlotter() g = pd.Series(np.repeat(list("abc"), 100)) h = pd.Series(np.tile(list("xy"), 150)) y = pd.Series(np.random.RandomState(0).randn(300)) p.establish_variables(g, y, h) p.estimate_statistic(np.mean, 95, 1000, 0) confint_1 = p.confint p.estimate_statistic(np.mean, 95, 1000, 0) confint_2 = p.confint npt.assert_array_equal(confint_1, confint_2) def test_nested_stats_with_units(self): p = cat._CategoricalStatPlotter() g = pd.Series(np.repeat(list("abc"), 90)) h = pd.Series(np.tile(list("xy"), 135)) u = pd.Series(np.repeat(list("ijkijk"), 45)) y = pd.Series(np.random.RandomState(0).randn(270)) y[u == "i"] -= 3 y[u == "k"] += 3 p.establish_variables(g, y, h) p.estimate_statistic(np.mean, 95, 10000, None) stat1, ci1 = p.statistic, p.confint p.establish_variables(g, y, h, units=u) p.estimate_statistic(np.mean, 95, 10000, None) stat2, ci2 = p.statistic, p.confint npt.assert_array_equal(stat1, stat2) ci1_size = ci1[:, 0, 1] - ci1[:, 0, 0] ci2_size = ci2[:, 0, 1] - ci2[:, 0, 0] npt.assert_array_less(ci1_size, ci2_size) def 
test_nested_stats_with_missing_data(self): p = cat._CategoricalStatPlotter() g = pd.Series(np.repeat(list("abc"), 100)) y = pd.Series(np.random.RandomState(0).randn(300)) h = pd.Series(np.tile(list("xy"), 150)) p.establish_variables(g, y, h, order=list("abdc"), hue_order=list("zyx")) p.estimate_statistic(np.mean, 95, 50000, None) nt.assert_equal(p.statistic.shape, (4, 3)) nt.assert_equal(p.confint.shape, (4, 3, 2)) mean = y[(g == "b") & (h == "x")].mean() sem = stats.sem(y[(g == "b") & (h == "x")]) half_ci = stats.norm.ppf(.975) * sem ci = mean - half_ci, mean + half_ci npt.assert_almost_equal(p.statistic[1, 2], mean) npt.assert_array_almost_equal(p.confint[1, 2], ci, 2) npt.assert_array_equal(p.statistic[:, 0], [np.nan] * 4) npt.assert_array_equal(p.statistic[2], [np.nan] * 3) npt.assert_array_equal(p.confint[:, 0], np.zeros((4, 2)) * np.nan) npt.assert_array_equal(p.confint[2], np.zeros((3, 2)) * np.nan) def test_sd_error_bars(self): p = cat._CategoricalStatPlotter() g = pd.Series(np.repeat(list("abc"), 100)) y = pd.Series(np.random.RandomState(0).randn(300)) p.establish_variables(g, y) p.estimate_statistic(np.mean, "sd", None, None) nt.assert_equal(p.statistic.shape, (3,)) nt.assert_equal(p.confint.shape, (3, 2)) npt.assert_array_almost_equal(p.statistic, y.groupby(g).mean()) for ci, (_, grp_y) in zip(p.confint, y.groupby(g)): mean = grp_y.mean() half_ci = np.std(grp_y) ci_want = mean - half_ci, mean + half_ci npt.assert_array_almost_equal(ci_want, ci, 2) def test_nested_sd_error_bars(self): p = cat._CategoricalStatPlotter() g = pd.Series(np.repeat(list("abc"), 100)) h = pd.Series(np.tile(list("xy"), 150)) y = pd.Series(np.random.RandomState(0).randn(300)) p.establish_variables(g, y, h) p.estimate_statistic(np.mean, "sd", None, None) nt.assert_equal(p.statistic.shape, (3, 2)) nt.assert_equal(p.confint.shape, (3, 2, 2)) npt.assert_array_almost_equal(p.statistic, y.groupby([g, h]).mean().unstack()) for ci_g, (_, grp_y) in zip(p.confint, y.groupby(g)): for ci, hue_y in zip(ci_g, [grp_y[::2], grp_y[1::2]]): mean = hue_y.mean() half_ci = np.std(hue_y) ci_want = mean - half_ci, mean + half_ci npt.assert_array_almost_equal(ci_want, ci, 2) def test_draw_cis(self): p = cat._CategoricalStatPlotter() # Test vertical CIs p.orient = "v" f, ax = plt.subplots() at_group = [0, 1] confints = [(.5, 1.5), (.25, .8)] colors = [".2", ".3"] p.draw_confints(ax, at_group, confints, colors) lines = ax.lines for line, at, ci, c in zip(lines, at_group, confints, colors): x, y = line.get_xydata().T npt.assert_array_equal(x, [at, at]) npt.assert_array_equal(y, ci) nt.assert_equal(line.get_color(), c) plt.close("all") # Test horizontal CIs p.orient = "h" f, ax = plt.subplots() p.draw_confints(ax, at_group, confints, colors) lines = ax.lines for line, at, ci, c in zip(lines, at_group, confints, colors): x, y = line.get_xydata().T npt.assert_array_equal(x, ci) npt.assert_array_equal(y, [at, at]) nt.assert_equal(line.get_color(), c) plt.close("all") # Test vertical CIs with endcaps p.orient = "v" f, ax = plt.subplots() p.draw_confints(ax, at_group, confints, colors, capsize=0.3) capline = ax.lines[len(ax.lines) - 1] caplinestart = capline.get_xdata()[0] caplineend = capline.get_xdata()[1] caplinelength = abs(caplineend - caplinestart) nt.assert_almost_equal(caplinelength, 0.3) nt.assert_equal(len(ax.lines), 6) plt.close("all") # Test horizontal CIs with endcaps p.orient = "h" f, ax = plt.subplots() p.draw_confints(ax, at_group, confints, colors, capsize=0.3) capline = ax.lines[len(ax.lines) - 1] caplinestart = 
capline.get_ydata()[0] caplineend = capline.get_ydata()[1] caplinelength = abs(caplineend - caplinestart) nt.assert_almost_equal(caplinelength, 0.3) nt.assert_equal(len(ax.lines), 6) # Test extra keyword arguments f, ax = plt.subplots() p.draw_confints(ax, at_group, confints, colors, lw=4) line = ax.lines[0] nt.assert_equal(line.get_linewidth(), 4) plt.close("all") # Test errwidth is set appropriately f, ax = plt.subplots() p.draw_confints(ax, at_group, confints, colors, errwidth=2) capline = ax.lines[len(ax.lines)-1] nt.assert_equal(capline._linewidth, 2) nt.assert_equal(len(ax.lines), 2) plt.close("all") class TestBoxPlotter(CategoricalFixture): default_kws = dict(x=None, y=None, hue=None, data=None, order=None, hue_order=None, orient=None, color=None, palette=None, saturation=.75, width=.8, dodge=True, fliersize=5, linewidth=None) def test_nested_width(self): kws = self.default_kws.copy() p = cat._BoxPlotter(**kws) p.establish_variables("g", "y", "h", data=self.df) nt.assert_equal(p.nested_width, .4 * .98) kws = self.default_kws.copy() kws["width"] = .6 p = cat._BoxPlotter(**kws) p.establish_variables("g", "y", "h", data=self.df) nt.assert_equal(p.nested_width, .3 * .98) kws = self.default_kws.copy() kws["dodge"] = False p = cat._BoxPlotter(**kws) p.establish_variables("g", "y", "h", data=self.df) nt.assert_equal(p.nested_width, .8) def test_hue_offsets(self): p = cat._BoxPlotter(**self.default_kws) p.establish_variables("g", "y", "h", data=self.df) npt.assert_array_equal(p.hue_offsets, [-.2, .2]) kws = self.default_kws.copy() kws["width"] = .6 p = cat._BoxPlotter(**kws) p.establish_variables("g", "y", "h", data=self.df) npt.assert_array_equal(p.hue_offsets, [-.15, .15]) p = cat._BoxPlotter(**kws) p.establish_variables("h", "y", "g", data=self.df) npt.assert_array_almost_equal(p.hue_offsets, [-.2, 0, .2]) def test_axes_data(self): ax = cat.boxplot("g", "y", data=self.df) nt.assert_equal(len(ax.artists), 3) plt.close("all") ax = cat.boxplot("g", "y", "h", data=self.df) nt.assert_equal(len(ax.artists), 6) plt.close("all") def test_box_colors(self): ax = cat.boxplot("g", "y", data=self.df, saturation=1) pal = palettes.color_palette(n_colors=3) for patch, color in zip(ax.artists, pal): nt.assert_equal(patch.get_facecolor()[:3], color) plt.close("all") ax = cat.boxplot("g", "y", "h", data=self.df, saturation=1) pal = palettes.color_palette(n_colors=2) for patch, color in zip(ax.artists, pal * 2): nt.assert_equal(patch.get_facecolor()[:3], color) plt.close("all") def test_draw_missing_boxes(self): ax = cat.boxplot("g", "y", data=self.df, order=["a", "b", "c", "d"]) nt.assert_equal(len(ax.artists), 3) def test_missing_data(self): x = ["a", "a", "b", "b", "c", "c", "d", "d"] h = ["x", "y", "x", "y", "x", "y", "x", "y"] y = self.rs.randn(8) y[-2:] = np.nan ax = cat.boxplot(x, y) nt.assert_equal(len(ax.artists), 3) plt.close("all") y[-1] = 0 ax = cat.boxplot(x, y, h) nt.assert_equal(len(ax.artists), 7) plt.close("all") def test_unaligned_index(self): f, (ax1, ax2) = plt.subplots(2) cat.boxplot(self.g, self.y, ax=ax1) cat.boxplot(self.g, self.y_perm, ax=ax2) for l1, l2 in zip(ax1.lines, ax2.lines): assert np.array_equal(l1.get_xydata(), l2.get_xydata()) f, (ax1, ax2) = plt.subplots(2) hue_order = self.h.unique() cat.boxplot(self.g, self.y, self.h, hue_order=hue_order, ax=ax1) cat.boxplot(self.g, self.y_perm, self.h, hue_order=hue_order, ax=ax2) for l1, l2 in zip(ax1.lines, ax2.lines): assert np.array_equal(l1.get_xydata(), l2.get_xydata()) def test_boxplots(self): # Smoke test the high level 
boxplot options cat.boxplot("y", data=self.df) plt.close("all") cat.boxplot(y="y", data=self.df) plt.close("all") cat.boxplot("g", "y", data=self.df) plt.close("all") cat.boxplot("y", "g", data=self.df, orient="h") plt.close("all") cat.boxplot("g", "y", "h", data=self.df) plt.close("all") cat.boxplot("g", "y", "h", order=list("nabc"), data=self.df) plt.close("all") cat.boxplot("g", "y", "h", hue_order=list("omn"), data=self.df) plt.close("all") cat.boxplot("y", "g", "h", data=self.df, orient="h") plt.close("all") def test_axes_annotation(self): ax = cat.boxplot("g", "y", data=self.df) nt.assert_equal(ax.get_xlabel(), "g") nt.assert_equal(ax.get_ylabel(), "y") nt.assert_equal(ax.get_xlim(), (-.5, 2.5)) npt.assert_array_equal(ax.get_xticks(), [0, 1, 2]) npt.assert_array_equal([l.get_text() for l in ax.get_xticklabels()], ["a", "b", "c"]) plt.close("all") ax = cat.boxplot("g", "y", "h", data=self.df) nt.assert_equal(ax.get_xlabel(), "g") nt.assert_equal(ax.get_ylabel(), "y") npt.assert_array_equal(ax.get_xticks(), [0, 1, 2]) npt.assert_array_equal([l.get_text() for l in ax.get_xticklabels()], ["a", "b", "c"]) npt.assert_array_equal([l.get_text() for l in ax.legend_.get_texts()], ["m", "n"]) plt.close("all") ax = cat.boxplot("y", "g", data=self.df, orient="h") nt.assert_equal(ax.get_xlabel(), "y") nt.assert_equal(ax.get_ylabel(), "g") nt.assert_equal(ax.get_ylim(), (2.5, -.5)) npt.assert_array_equal(ax.get_yticks(), [0, 1, 2]) npt.assert_array_equal([l.get_text() for l in ax.get_yticklabels()], ["a", "b", "c"]) plt.close("all") class TestViolinPlotter(CategoricalFixture): default_kws = dict(x=None, y=None, hue=None, data=None, order=None, hue_order=None, bw="scott", cut=2, scale="area", scale_hue=True, gridsize=100, width=.8, inner="box", split=False, dodge=True, orient=None, linewidth=None, color=None, palette=None, saturation=.75) def test_split_error(self): kws = self.default_kws.copy() kws.update(dict(x="h", y="y", hue="g", data=self.df, split=True)) with nt.assert_raises(ValueError): cat._ViolinPlotter(**kws) def test_no_observations(self): p = cat._ViolinPlotter(**self.default_kws) x = ["a", "a", "b"] y = self.rs.randn(3) y[-1] = np.nan p.establish_variables(x, y) p.estimate_densities("scott", 2, "area", True, 20) nt.assert_equal(len(p.support[0]), 20) nt.assert_equal(len(p.support[1]), 0) nt.assert_equal(len(p.density[0]), 20) nt.assert_equal(len(p.density[1]), 1) nt.assert_equal(p.density[1].item(), 1) p.estimate_densities("scott", 2, "count", True, 20) nt.assert_equal(p.density[1].item(), 0) x = ["a"] * 4 + ["b"] * 2 y = self.rs.randn(6) h = ["m", "n"] * 2 + ["m"] * 2 p.establish_variables(x, y, h) p.estimate_densities("scott", 2, "area", True, 20) nt.assert_equal(len(p.support[1][0]), 20) nt.assert_equal(len(p.support[1][1]), 0) nt.assert_equal(len(p.density[1][0]), 20) nt.assert_equal(len(p.density[1][1]), 1) nt.assert_equal(p.density[1][1].item(), 1) p.estimate_densities("scott", 2, "count", False, 20) nt.assert_equal(p.density[1][1].item(), 0) def test_single_observation(self): p = cat._ViolinPlotter(**self.default_kws) x = ["a", "a", "b"] y = self.rs.randn(3) p.establish_variables(x, y) p.estimate_densities("scott", 2, "area", True, 20) nt.assert_equal(len(p.support[0]), 20) nt.assert_equal(len(p.support[1]), 1) nt.assert_equal(len(p.density[0]), 20) nt.assert_equal(len(p.density[1]), 1) nt.assert_equal(p.density[1].item(), 1) p.estimate_densities("scott", 2, "count", True, 20) nt.assert_equal(p.density[1].item(), .5) x = ["b"] * 4 + ["a"] * 3 y = self.rs.randn(7) h = (["m", "n"] 
* 4)[:-1] p.establish_variables(x, y, h) p.estimate_densities("scott", 2, "area", True, 20) nt.assert_equal(len(p.support[1][0]), 20) nt.assert_equal(len(p.support[1][1]), 1) nt.assert_equal(len(p.density[1][0]), 20) nt.assert_equal(len(p.density[1][1]), 1) nt.assert_equal(p.density[1][1].item(), 1) p.estimate_densities("scott", 2, "count", False, 20) nt.assert_equal(p.density[1][1].item(), .5) def test_dwidth(self): kws = self.default_kws.copy() kws.update(dict(x="g", y="y", data=self.df)) p = cat._ViolinPlotter(**kws) nt.assert_equal(p.dwidth, .4) kws.update(dict(width=.4)) p = cat._ViolinPlotter(**kws) nt.assert_equal(p.dwidth, .2) kws.update(dict(hue="h", width=.8)) p = cat._ViolinPlotter(**kws) nt.assert_equal(p.dwidth, .2) kws.update(dict(split=True)) p = cat._ViolinPlotter(**kws) nt.assert_equal(p.dwidth, .4) def test_scale_area(self): kws = self.default_kws.copy() kws["scale"] = "area" p = cat._ViolinPlotter(**kws) # Test single layer of grouping p.hue_names = None density = [self.rs.uniform(0, .8, 50), self.rs.uniform(0, .2, 50)] max_before = np.array([d.max() for d in density]) p.scale_area(density, max_before, False) max_after = np.array([d.max() for d in density]) nt.assert_equal(max_after[0], 1) before_ratio = max_before[1] / max_before[0] after_ratio = max_after[1] / max_after[0] nt.assert_equal(before_ratio, after_ratio) # Test nested grouping scaling across all densities p.hue_names = ["foo", "bar"] density = [[self.rs.uniform(0, .8, 50), self.rs.uniform(0, .2, 50)], [self.rs.uniform(0, .1, 50), self.rs.uniform(0, .02, 50)]] max_before = np.array([[r.max() for r in row] for row in density]) p.scale_area(density, max_before, False) max_after = np.array([[r.max() for r in row] for row in density]) nt.assert_equal(max_after[0, 0], 1) before_ratio = max_before[1, 1] / max_before[0, 0] after_ratio = max_after[1, 1] / max_after[0, 0] nt.assert_equal(before_ratio, after_ratio) # Test nested grouping scaling within hue p.hue_names = ["foo", "bar"] density = [[self.rs.uniform(0, .8, 50), self.rs.uniform(0, .2, 50)], [self.rs.uniform(0, .1, 50), self.rs.uniform(0, .02, 50)]] max_before = np.array([[r.max() for r in row] for row in density]) p.scale_area(density, max_before, True) max_after = np.array([[r.max() for r in row] for row in density]) nt.assert_equal(max_after[0, 0], 1) nt.assert_equal(max_after[1, 0], 1) before_ratio = max_before[1, 1] / max_before[1, 0] after_ratio = max_after[1, 1] / max_after[1, 0] nt.assert_equal(before_ratio, after_ratio) def test_scale_width(self): kws = self.default_kws.copy() kws["scale"] = "width" p = cat._ViolinPlotter(**kws) # Test single layer of grouping p.hue_names = None density = [self.rs.uniform(0, .8, 50), self.rs.uniform(0, .2, 50)] p.scale_width(density) max_after = np.array([d.max() for d in density]) npt.assert_array_equal(max_after, [1, 1]) # Test nested grouping p.hue_names = ["foo", "bar"] density = [[self.rs.uniform(0, .8, 50), self.rs.uniform(0, .2, 50)], [self.rs.uniform(0, .1, 50), self.rs.uniform(0, .02, 50)]] p.scale_width(density) max_after = np.array([[r.max() for r in row] for row in density]) npt.assert_array_equal(max_after, [[1, 1], [1, 1]]) def test_scale_count(self): kws = self.default_kws.copy() kws["scale"] = "count" p = cat._ViolinPlotter(**kws) # Test single layer of grouping p.hue_names = None density = [self.rs.uniform(0, .8, 20), self.rs.uniform(0, .2, 40)] counts = np.array([20, 40]) p.scale_count(density, counts, False) max_after = np.array([d.max() for d in density]) npt.assert_array_equal(max_after, [.5, 
1]) # Test nested grouping scaling across all densities p.hue_names = ["foo", "bar"] density = [[self.rs.uniform(0, .8, 5), self.rs.uniform(0, .2, 40)], [self.rs.uniform(0, .1, 100), self.rs.uniform(0, .02, 50)]] counts = np.array([[5, 40], [100, 50]]) p.scale_count(density, counts, False) max_after = np.array([[r.max() for r in row] for row in density]) npt.assert_array_equal(max_after, [[.05, .4], [1, .5]]) # Test nested grouping scaling within hue p.hue_names = ["foo", "bar"] density = [[self.rs.uniform(0, .8, 5), self.rs.uniform(0, .2, 40)], [self.rs.uniform(0, .1, 100), self.rs.uniform(0, .02, 50)]] counts = np.array([[5, 40], [100, 50]]) p.scale_count(density, counts, True) max_after = np.array([[r.max() for r in row] for row in density]) npt.assert_array_equal(max_after, [[.125, 1], [1, .5]]) def test_bad_scale(self): kws = self.default_kws.copy() kws["scale"] = "not_a_scale_type" with nt.assert_raises(ValueError): cat._ViolinPlotter(**kws) def test_kde_fit(self): p = cat._ViolinPlotter(**self.default_kws) data = self.y data_std = data.std(ddof=1) # Bandwidth behavior depends on scipy version if LooseVersion(scipy.__version__) < "0.11": # Test ignoring custom bandwidth on old scipy kde, bw = p.fit_kde(self.y, .2) nt.assert_is_instance(kde, stats.gaussian_kde) nt.assert_equal(kde.factor, kde.scotts_factor()) else: # Test reference rule bandwidth kde, bw = p.fit_kde(data, "scott") nt.assert_is_instance(kde, stats.gaussian_kde) nt.assert_equal(kde.factor, kde.scotts_factor()) nt.assert_equal(bw, kde.scotts_factor() * data_std) # Test numeric scale factor kde, bw = p.fit_kde(self.y, .2) nt.assert_is_instance(kde, stats.gaussian_kde) nt.assert_equal(kde.factor, .2) nt.assert_equal(bw, .2 * data_std) def test_draw_to_density(self): p = cat._ViolinPlotter(**self.default_kws) # p.dwidth will be 1 for easier testing p.width = 2 # Test verical plots support = np.array([.2, .6]) density = np.array([.1, .4]) # Test full vertical plot _, ax = plt.subplots() p.draw_to_density(ax, 0, .5, support, density, False) x, y = ax.lines[0].get_xydata().T npt.assert_array_equal(x, [.99 * -.4, .99 * .4]) npt.assert_array_equal(y, [.5, .5]) plt.close("all") # Test left vertical plot _, ax = plt.subplots() p.draw_to_density(ax, 0, .5, support, density, "left") x, y = ax.lines[0].get_xydata().T npt.assert_array_equal(x, [.99 * -.4, 0]) npt.assert_array_equal(y, [.5, .5]) plt.close("all") # Test right vertical plot _, ax = plt.subplots() p.draw_to_density(ax, 0, .5, support, density, "right") x, y = ax.lines[0].get_xydata().T npt.assert_array_equal(x, [0, .99 * .4]) npt.assert_array_equal(y, [.5, .5]) plt.close("all") # Switch orientation to test horizontal plots p.orient = "h" support = np.array([.2, .5]) density = np.array([.3, .7]) # Test full horizontal plot _, ax = plt.subplots() p.draw_to_density(ax, 0, .6, support, density, False) x, y = ax.lines[0].get_xydata().T npt.assert_array_equal(x, [.6, .6]) npt.assert_array_equal(y, [.99 * -.7, .99 * .7]) plt.close("all") # Test left horizontal plot _, ax = plt.subplots() p.draw_to_density(ax, 0, .6, support, density, "left") x, y = ax.lines[0].get_xydata().T npt.assert_array_equal(x, [.6, .6]) npt.assert_array_equal(y, [.99 * -.7, 0]) plt.close("all") # Test right horizontal plot _, ax = plt.subplots() p.draw_to_density(ax, 0, .6, support, density, "right") x, y = ax.lines[0].get_xydata().T npt.assert_array_equal(x, [.6, .6]) npt.assert_array_equal(y, [0, .99 * .7]) plt.close("all") def test_draw_single_observations(self): p = 
cat._ViolinPlotter(**self.default_kws) p.width = 2 # Test vertical plot _, ax = plt.subplots() p.draw_single_observation(ax, 1, 1.5, 1) x, y = ax.lines[0].get_xydata().T npt.assert_array_equal(x, [0, 2]) npt.assert_array_equal(y, [1.5, 1.5]) plt.close("all") # Test horizontal plot p.orient = "h" _, ax = plt.subplots() p.draw_single_observation(ax, 2, 2.2, .5) x, y = ax.lines[0].get_xydata().T npt.assert_array_equal(x, [2.2, 2.2]) npt.assert_array_equal(y, [1.5, 2.5]) plt.close("all") def test_draw_box_lines(self): # Test vertical plot kws = self.default_kws.copy() kws.update(dict(y="y", data=self.df, inner=None)) p = cat._ViolinPlotter(**kws) _, ax = plt.subplots() p.draw_box_lines(ax, self.y, p.support[0], p.density[0], 0) nt.assert_equal(len(ax.lines), 2) q25, q50, q75 = np.percentile(self.y, [25, 50, 75]) _, y = ax.lines[1].get_xydata().T npt.assert_array_equal(y, [q25, q75]) _, y = ax.collections[0].get_offsets().T nt.assert_equal(y, q50) plt.close("all") # Test horizontal plot kws = self.default_kws.copy() kws.update(dict(x="y", data=self.df, inner=None)) p = cat._ViolinPlotter(**kws) _, ax = plt.subplots() p.draw_box_lines(ax, self.y, p.support[0], p.density[0], 0) nt.assert_equal(len(ax.lines), 2) q25, q50, q75 = np.percentile(self.y, [25, 50, 75]) x, _ = ax.lines[1].get_xydata().T npt.assert_array_equal(x, [q25, q75]) x, _ = ax.collections[0].get_offsets().T nt.assert_equal(x, q50) plt.close("all") def test_draw_quartiles(self): kws = self.default_kws.copy() kws.update(dict(y="y", data=self.df, inner=None)) p = cat._ViolinPlotter(**kws) _, ax = plt.subplots() p.draw_quartiles(ax, self.y, p.support[0], p.density[0], 0) for val, line in zip(np.percentile(self.y, [25, 50, 75]), ax.lines): _, y = line.get_xydata().T npt.assert_array_equal(y, [val, val]) def test_draw_points(self): p = cat._ViolinPlotter(**self.default_kws) # Test vertical plot _, ax = plt.subplots() p.draw_points(ax, self.y, 0) x, y = ax.collections[0].get_offsets().T npt.assert_array_equal(x, np.zeros_like(self.y)) npt.assert_array_equal(y, self.y) plt.close("all") # Test horizontal plot p.orient = "h" _, ax = plt.subplots() p.draw_points(ax, self.y, 0) x, y = ax.collections[0].get_offsets().T npt.assert_array_equal(x, self.y) npt.assert_array_equal(y, np.zeros_like(self.y)) plt.close("all") def test_draw_sticks(self): kws = self.default_kws.copy() kws.update(dict(y="y", data=self.df, inner=None)) p = cat._ViolinPlotter(**kws) # Test vertical plot _, ax = plt.subplots() p.draw_stick_lines(ax, self.y, p.support[0], p.density[0], 0) for val, line in zip(self.y, ax.lines): _, y = line.get_xydata().T npt.assert_array_equal(y, [val, val]) plt.close("all") # Test horizontal plot p.orient = "h" _, ax = plt.subplots() p.draw_stick_lines(ax, self.y, p.support[0], p.density[0], 0) for val, line in zip(self.y, ax.lines): x, _ = line.get_xydata().T npt.assert_array_equal(x, [val, val]) plt.close("all") def test_validate_inner(self): kws = self.default_kws.copy() kws.update(dict(inner="bad_inner")) with nt.assert_raises(ValueError): cat._ViolinPlotter(**kws) def test_draw_violinplots(self): kws = self.default_kws.copy() # Test single vertical violin kws.update(dict(y="y", data=self.df, inner=None, saturation=1, color=(1, 0, 0, 1))) p = cat._ViolinPlotter(**kws) _, ax = plt.subplots() p.draw_violins(ax) nt.assert_equal(len(ax.collections), 1) npt.assert_array_equal(ax.collections[0].get_facecolors(), [(1, 0, 0, 1)]) plt.close("all") # Test single horizontal violin kws.update(dict(x="y", y=None, color=(0, 1, 0, 1))) p = 
cat._ViolinPlotter(**kws) _, ax = plt.subplots() p.draw_violins(ax) nt.assert_equal(len(ax.collections), 1) npt.assert_array_equal(ax.collections[0].get_facecolors(), [(0, 1, 0, 1)]) plt.close("all") # Test multiple vertical violins kws.update(dict(x="g", y="y", color=None,)) p = cat._ViolinPlotter(**kws) _, ax = plt.subplots() p.draw_violins(ax) nt.assert_equal(len(ax.collections), 3) for violin, color in zip(ax.collections, palettes.color_palette()): npt.assert_array_equal(violin.get_facecolors()[0, :-1], color) plt.close("all") # Test multiple violins with hue nesting kws.update(dict(hue="h")) p = cat._ViolinPlotter(**kws) _, ax = plt.subplots() p.draw_violins(ax) nt.assert_equal(len(ax.collections), 6) for violin, color in zip(ax.collections, palettes.color_palette(n_colors=2) * 3): npt.assert_array_equal(violin.get_facecolors()[0, :-1], color) plt.close("all") # Test multiple split violins kws.update(dict(split=True, palette="muted")) p = cat._ViolinPlotter(**kws) _, ax = plt.subplots() p.draw_violins(ax) nt.assert_equal(len(ax.collections), 6) for violin, color in zip(ax.collections, palettes.color_palette("muted", n_colors=2) * 3): npt.assert_array_equal(violin.get_facecolors()[0, :-1], color) plt.close("all") def test_draw_violinplots_no_observations(self): kws = self.default_kws.copy() kws["inner"] = None # Test single layer of grouping x = ["a", "a", "b"] y = self.rs.randn(3) y[-1] = np.nan kws.update(x=x, y=y) p = cat._ViolinPlotter(**kws) _, ax = plt.subplots() p.draw_violins(ax) nt.assert_equal(len(ax.collections), 1) nt.assert_equal(len(ax.lines), 0) plt.close("all") # Test nested hue grouping x = ["a"] * 4 + ["b"] * 2 y = self.rs.randn(6) h = ["m", "n"] * 2 + ["m"] * 2 kws.update(x=x, y=y, hue=h) p = cat._ViolinPlotter(**kws) _, ax = plt.subplots() p.draw_violins(ax) nt.assert_equal(len(ax.collections), 3) nt.assert_equal(len(ax.lines), 0) plt.close("all") def test_draw_violinplots_single_observations(self): kws = self.default_kws.copy() kws["inner"] = None # Test single layer of grouping x = ["a", "a", "b"] y = self.rs.randn(3) kws.update(x=x, y=y) p = cat._ViolinPlotter(**kws) _, ax = plt.subplots() p.draw_violins(ax) nt.assert_equal(len(ax.collections), 1) nt.assert_equal(len(ax.lines), 1) plt.close("all") # Test nested hue grouping x = ["b"] * 4 + ["a"] * 3 y = self.rs.randn(7) h = (["m", "n"] * 4)[:-1] kws.update(x=x, y=y, hue=h) p = cat._ViolinPlotter(**kws) _, ax = plt.subplots() p.draw_violins(ax) nt.assert_equal(len(ax.collections), 3) nt.assert_equal(len(ax.lines), 1) plt.close("all") # Test nested hue grouping with split kws["split"] = True p = cat._ViolinPlotter(**kws) _, ax = plt.subplots() p.draw_violins(ax) nt.assert_equal(len(ax.collections), 3) nt.assert_equal(len(ax.lines), 1) plt.close("all") def test_violinplots(self): # Smoke test the high level violinplot options cat.violinplot("y", data=self.df) plt.close("all") cat.violinplot(y="y", data=self.df) plt.close("all") cat.violinplot("g", "y", data=self.df) plt.close("all") cat.violinplot("y", "g", data=self.df, orient="h") plt.close("all") cat.violinplot("g", "y", "h", data=self.df) plt.close("all") cat.violinplot("g", "y", "h", order=list("nabc"), data=self.df) plt.close("all") cat.violinplot("g", "y", "h", hue_order=list("omn"), data=self.df) plt.close("all") cat.violinplot("y", "g", "h", data=self.df, orient="h") plt.close("all") for inner in ["box", "quart", "point", "stick", None]: cat.violinplot("g", "y", data=self.df, inner=inner) plt.close("all") cat.violinplot("g", "y", "h", data=self.df, 
inner=inner) plt.close("all") cat.violinplot("g", "y", "h", data=self.df, inner=inner, split=True) plt.close("all") class TestCategoricalScatterPlotter(CategoricalFixture): def test_group_point_colors(self): p = cat._CategoricalScatterPlotter() p.establish_variables(x="g", y="y", data=self.df) p.establish_colors(None, "deep", 1) point_colors = p.point_colors n_colors = self.g.unique().size assert len(point_colors) == n_colors for i, group_colors in enumerate(point_colors): for color in group_colors: assert color == i def test_hue_point_colors(self): p = cat._CategoricalScatterPlotter() hue_order = self.h.unique().tolist() p.establish_variables(x="g", y="y", hue="h", hue_order=hue_order, data=self.df) p.establish_colors(None, "deep", 1) point_colors = p.point_colors assert len(point_colors) == self.g.unique().size for i, group_colors in enumerate(point_colors): group_hues = np.asarray(p.plot_hues[i]) for point_hue, point_color in zip(group_hues, group_colors): assert point_color == p.hue_names.index(point_hue) # hue_level = np.asarray(p.plot_hues[i])[j] # palette_color = deep_colors[hue_order.index(hue_level)] # assert tuple(point_color) == palette_color def test_scatterplot_legend(self): p = cat._CategoricalScatterPlotter() hue_order = ["m", "n"] p.establish_variables(x="g", y="y", hue="h", hue_order=hue_order, data=self.df) p.establish_colors(None, "deep", 1) deep_colors = palettes.color_palette("deep", self.h.unique().size) f, ax = plt.subplots() p.add_legend_data(ax) leg = ax.legend() for i, t in enumerate(leg.get_texts()): nt.assert_equal(t.get_text(), hue_order[i]) for i, h in enumerate(leg.legendHandles): rgb = h.get_facecolor()[0, :3] nt.assert_equal(tuple(rgb), tuple(deep_colors[i])) class TestStripPlotter(CategoricalFixture): def test_stripplot_vertical(self): pal = palettes.color_palette() ax = cat.stripplot("g", "y", jitter=False, data=self.df) for i, (_, vals) in enumerate(self.y.groupby(self.g)): x, y = ax.collections[i].get_offsets().T npt.assert_array_equal(x, np.ones(len(x)) * i) npt.assert_array_equal(y, vals) npt.assert_equal(ax.collections[i].get_facecolors()[0, :3], pal[i]) def test_stripplot_horiztonal(self): df = self.df.copy() df.g = df.g.astype("category") ax = cat.stripplot("y", "g", jitter=False, data=df) for i, (_, vals) in enumerate(self.y.groupby(self.g)): x, y = ax.collections[i].get_offsets().T npt.assert_array_equal(x, vals) npt.assert_array_equal(y, np.ones(len(x)) * i) def test_stripplot_jitter(self): pal = palettes.color_palette() ax = cat.stripplot("g", "y", data=self.df, jitter=True) for i, (_, vals) in enumerate(self.y.groupby(self.g)): x, y = ax.collections[i].get_offsets().T npt.assert_array_less(np.ones(len(x)) * i - .1, x) npt.assert_array_less(x, np.ones(len(x)) * i + .1) npt.assert_array_equal(y, vals) npt.assert_equal(ax.collections[i].get_facecolors()[0, :3], pal[i]) def test_dodge_nested_stripplot_vertical(self): pal = palettes.color_palette() ax = cat.stripplot("g", "y", "h", data=self.df, jitter=False, dodge=True) for i, (_, group_vals) in enumerate(self.y.groupby(self.g)): for j, (_, vals) in enumerate(group_vals.groupby(self.h)): x, y = ax.collections[i * 2 + j].get_offsets().T npt.assert_array_equal(x, np.ones(len(x)) * i + [-.2, .2][j]) npt.assert_array_equal(y, vals) fc = ax.collections[i * 2 + j].get_facecolors()[0, :3] assert tuple(fc) == pal[j] def test_dodge_nested_stripplot_horizontal(self): df = self.df.copy() df.g = df.g.astype("category") ax = cat.stripplot("y", "g", "h", data=df, jitter=False, dodge=True) for i, (_, group_vals) 
in enumerate(self.y.groupby(self.g)): for j, (_, vals) in enumerate(group_vals.groupby(self.h)): x, y = ax.collections[i * 2 + j].get_offsets().T npt.assert_array_equal(x, vals) npt.assert_array_equal(y, np.ones(len(x)) * i + [-.2, .2][j]) def test_nested_stripplot_vertical(self): # Test a simple vertical strip plot ax = cat.stripplot("g", "y", "h", data=self.df, jitter=False, dodge=False) for i, (_, group_vals) in enumerate(self.y.groupby(self.g)): x, y = ax.collections[i].get_offsets().T npt.assert_array_equal(x, np.ones(len(x)) * i) npt.assert_array_equal(y, group_vals) def test_nested_stripplot_horizontal(self): df = self.df.copy() df.g = df.g.astype("category") ax = cat.stripplot("y", "g", "h", data=df, jitter=False, dodge=False) for i, (_, group_vals) in enumerate(self.y.groupby(self.g)): x, y = ax.collections[i].get_offsets().T npt.assert_array_equal(x, group_vals) npt.assert_array_equal(y, np.ones(len(x)) * i) def test_three_strip_points(self): x = np.arange(3) ax = cat.stripplot(x=x) facecolors = ax.collections[0].get_facecolor() nt.assert_equal(facecolors.shape, (3, 4)) npt.assert_array_equal(facecolors[0], facecolors[1]) def test_unaligned_index(self): f, (ax1, ax2) = plt.subplots(2) cat.stripplot(self.g, self.y, ax=ax1) cat.stripplot(self.g, self.y_perm, ax=ax2) for p1, p2 in zip(ax1.collections, ax2.collections): y1, y2 = p1.get_offsets()[:, 1], p2.get_offsets()[:, 1] assert np.array_equal(np.sort(y1), np.sort(y2)) assert np.array_equal(p1.get_facecolors()[np.argsort(y1)], p2.get_facecolors()[np.argsort(y2)]) f, (ax1, ax2) = plt.subplots(2) hue_order = self.h.unique() cat.stripplot(self.g, self.y, self.h, hue_order=hue_order, ax=ax1) cat.stripplot(self.g, self.y_perm, self.h, hue_order=hue_order, ax=ax2) for p1, p2 in zip(ax1.collections, ax2.collections): y1, y2 = p1.get_offsets()[:, 1], p2.get_offsets()[:, 1] assert np.array_equal(np.sort(y1), np.sort(y2)) assert np.array_equal(p1.get_facecolors()[np.argsort(y1)], p2.get_facecolors()[np.argsort(y2)]) f, (ax1, ax2) = plt.subplots(2) hue_order = self.h.unique() cat.stripplot(self.g, self.y, self.h, dodge=True, hue_order=hue_order, ax=ax1) cat.stripplot(self.g, self.y_perm, self.h, dodge=True, hue_order=hue_order, ax=ax2) for p1, p2 in zip(ax1.collections, ax2.collections): y1, y2 = p1.get_offsets()[:, 1], p2.get_offsets()[:, 1] assert np.array_equal(np.sort(y1), np.sort(y2)) assert np.array_equal(p1.get_facecolors()[np.argsort(y1)], p2.get_facecolors()[np.argsort(y2)]) class TestSwarmPlotter(CategoricalFixture): default_kws = dict(x=None, y=None, hue=None, data=None, order=None, hue_order=None, dodge=False, orient=None, color=None, palette=None) def test_could_overlap(self): p = cat._SwarmPlotter(**self.default_kws) neighbors = p.could_overlap((1, 1), [(0, 0), (1, .5), (.5, .5)], 1) npt.assert_array_equal(neighbors, [(1, .5), (.5, .5)]) def test_position_candidates(self): p = cat._SwarmPlotter(**self.default_kws) xy_i = (0, 1) neighbors = [(0, 1), (0, 1.5)] candidates = p.position_candidates(xy_i, neighbors, 1) dx1 = 1.05 dx2 = np.sqrt(1 - .5 ** 2) * 1.05 npt.assert_array_equal(candidates, [(0, 1), (-dx1, 1), (dx1, 1), (dx2, 1), (-dx2, 1)]) def test_find_first_non_overlapping_candidate(self): p = cat._SwarmPlotter(**self.default_kws) candidates = [(.5, 1), (1, 1), (1.5, 1)] neighbors = np.array([(0, 1)]) first = p.first_non_overlapping_candidate(candidates, neighbors, 1) npt.assert_array_equal(first, (1, 1)) def test_beeswarm(self): p = cat._SwarmPlotter(**self.default_kws) d = self.y.diff().mean() * 1.5 x = 
np.zeros(self.y.size) y = np.sort(self.y) orig_xy = np.c_[x, y] swarm = p.beeswarm(orig_xy, d) dmat = spatial.distance.cdist(swarm, swarm) triu = dmat[np.triu_indices_from(dmat, 1)] npt.assert_array_less(d, triu) npt.assert_array_equal(y, swarm[:, 1]) def test_add_gutters(self): p = cat._SwarmPlotter(**self.default_kws) points = np.array([0, -1, .4, .8]) points = p.add_gutters(points, 0, 1) npt.assert_array_equal(points, np.array([0, -.5, .4, .5])) def test_swarmplot_vertical(self): pal = palettes.color_palette() ax = cat.swarmplot("g", "y", data=self.df) for i, (_, vals) in enumerate(self.y.groupby(self.g)): x, y = ax.collections[i].get_offsets().T npt.assert_array_almost_equal(y, np.sort(vals)) fc = ax.collections[i].get_facecolors()[0, :3] npt.assert_equal(fc, pal[i]) def test_swarmplot_horizontal(self): pal = palettes.color_palette() ax = cat.swarmplot("y", "g", data=self.df, orient="h") for i, (_, vals) in enumerate(self.y.groupby(self.g)): x, y = ax.collections[i].get_offsets().T npt.assert_array_almost_equal(x, np.sort(vals)) fc = ax.collections[i].get_facecolors()[0, :3] npt.assert_equal(fc, pal[i]) def test_dodge_nested_swarmplot_vertical(self): pal = palettes.color_palette() ax = cat.swarmplot("g", "y", "h", data=self.df, dodge=True) for i, (_, group_vals) in enumerate(self.y.groupby(self.g)): for j, (_, vals) in enumerate(group_vals.groupby(self.h)): x, y = ax.collections[i * 2 + j].get_offsets().T npt.assert_array_almost_equal(y, np.sort(vals)) fc = ax.collections[i * 2 + j].get_facecolors()[0, :3] assert tuple(fc) == pal[j] def test_dodge_nested_swarmplot_horizontal(self): pal = palettes.color_palette() ax = cat.swarmplot("y", "g", "h", data=self.df, orient="h", dodge=True) for i, (_, group_vals) in enumerate(self.y.groupby(self.g)): for j, (_, vals) in enumerate(group_vals.groupby(self.h)): x, y = ax.collections[i * 2 + j].get_offsets().T npt.assert_array_almost_equal(x, np.sort(vals)) fc = ax.collections[i * 2 + j].get_facecolors()[0, :3] assert tuple(fc) == pal[j] def test_nested_swarmplot_vertical(self): ax = cat.swarmplot("g", "y", "h", data=self.df) pal = palettes.color_palette() hue_names = self.h.unique().tolist() grouped_hues = list(self.h.groupby(self.g)) for i, (_, vals) in enumerate(self.y.groupby(self.g)): points = ax.collections[i] x, y = points.get_offsets().T sorter = np.argsort(vals) npt.assert_array_almost_equal(y, vals.iloc[sorter]) _, hue_vals = grouped_hues[i] for hue, fc in zip(hue_vals.values[sorter.values], points.get_facecolors()): assert tuple(fc[:3]) == pal[hue_names.index(hue)] def test_nested_swarmplot_horizontal(self): ax = cat.swarmplot("y", "g", "h", data=self.df, orient="h") pal = palettes.color_palette() hue_names = self.h.unique().tolist() grouped_hues = list(self.h.groupby(self.g)) for i, (_, vals) in enumerate(self.y.groupby(self.g)): points = ax.collections[i] x, y = points.get_offsets().T sorter = np.argsort(vals) npt.assert_array_almost_equal(x, vals.iloc[sorter]) _, hue_vals = grouped_hues[i] for hue, fc in zip(hue_vals.values[sorter.values], points.get_facecolors()): assert tuple(fc[:3]) == pal[hue_names.index(hue)] def test_unaligned_index(self): f, (ax1, ax2) = plt.subplots(2) cat.swarmplot(self.g, self.y, ax=ax1) cat.swarmplot(self.g, self.y_perm, ax=ax2) for p1, p2 in zip(ax1.collections, ax2.collections): assert np.allclose(p1.get_offsets()[:, 1], p2.get_offsets()[:, 1]) assert np.array_equal(p1.get_facecolors(), p2.get_facecolors()) f, (ax1, ax2) = plt.subplots(2) hue_order = self.h.unique() cat.swarmplot(self.g, self.y, self.h, 
hue_order=hue_order, ax=ax1) cat.swarmplot(self.g, self.y_perm, self.h, hue_order=hue_order, ax=ax2) for p1, p2 in zip(ax1.collections, ax2.collections): assert np.allclose(p1.get_offsets()[:, 1], p2.get_offsets()[:, 1]) assert np.array_equal(p1.get_facecolors(), p2.get_facecolors()) f, (ax1, ax2) = plt.subplots(2) hue_order = self.h.unique() cat.swarmplot(self.g, self.y, self.h, dodge=True, hue_order=hue_order, ax=ax1) cat.swarmplot(self.g, self.y_perm, self.h, dodge=True, hue_order=hue_order, ax=ax2) for p1, p2 in zip(ax1.collections, ax2.collections): assert np.allclose(p1.get_offsets()[:, 1], p2.get_offsets()[:, 1]) assert np.array_equal(p1.get_facecolors(), p2.get_facecolors()) class TestBarPlotter(CategoricalFixture): default_kws = dict( x=None, y=None, hue=None, data=None, estimator=np.mean, ci=95, n_boot=100, units=None, seed=None, order=None, hue_order=None, orient=None, color=None, palette=None, saturation=.75, errcolor=".26", errwidth=None, capsize=None, dodge=True ) def test_nested_width(self): kws = self.default_kws.copy() p = cat._BarPlotter(**kws) p.establish_variables("g", "y", "h", data=self.df) nt.assert_equal(p.nested_width, .8 / 2) p = cat._BarPlotter(**kws) p.establish_variables("h", "y", "g", data=self.df) nt.assert_equal(p.nested_width, .8 / 3) kws["dodge"] = False p = cat._BarPlotter(**kws) p.establish_variables("h", "y", "g", data=self.df) nt.assert_equal(p.nested_width, .8) def test_draw_vertical_bars(self): kws = self.default_kws.copy() kws.update(x="g", y="y", data=self.df) p = cat._BarPlotter(**kws) f, ax = plt.subplots() p.draw_bars(ax, {}) nt.assert_equal(len(ax.patches), len(p.plot_data)) nt.assert_equal(len(ax.lines), len(p.plot_data)) for bar, color in zip(ax.patches, p.colors): nt.assert_equal(bar.get_facecolor()[:-1], color) positions = np.arange(len(p.plot_data)) - p.width / 2 for bar, pos, stat in zip(ax.patches, positions, p.statistic): nt.assert_equal(bar.get_x(), pos) nt.assert_equal(bar.get_width(), p.width) if mpl.__version__ >= mpl_barplot_change: nt.assert_equal(bar.get_y(), 0) nt.assert_equal(bar.get_height(), stat) else: nt.assert_equal(bar.get_y(), min(0, stat)) nt.assert_equal(bar.get_height(), abs(stat)) def test_draw_horizontal_bars(self): kws = self.default_kws.copy() kws.update(x="y", y="g", orient="h", data=self.df) p = cat._BarPlotter(**kws) f, ax = plt.subplots() p.draw_bars(ax, {}) nt.assert_equal(len(ax.patches), len(p.plot_data)) nt.assert_equal(len(ax.lines), len(p.plot_data)) for bar, color in zip(ax.patches, p.colors): nt.assert_equal(bar.get_facecolor()[:-1], color) positions = np.arange(len(p.plot_data)) - p.width / 2 for bar, pos, stat in zip(ax.patches, positions, p.statistic): nt.assert_equal(bar.get_y(), pos) nt.assert_equal(bar.get_height(), p.width) if mpl.__version__ >= mpl_barplot_change: nt.assert_equal(bar.get_x(), 0) nt.assert_equal(bar.get_width(), stat) else: nt.assert_equal(bar.get_x(), min(0, stat)) nt.assert_equal(bar.get_width(), abs(stat)) def test_draw_nested_vertical_bars(self): kws = self.default_kws.copy() kws.update(x="g", y="y", hue="h", data=self.df) p = cat._BarPlotter(**kws) f, ax = plt.subplots() p.draw_bars(ax, {}) n_groups, n_hues = len(p.plot_data), len(p.hue_names) nt.assert_equal(len(ax.patches), n_groups * n_hues) nt.assert_equal(len(ax.lines), n_groups * n_hues) for bar in ax.patches[:n_groups]: nt.assert_equal(bar.get_facecolor()[:-1], p.colors[0]) for bar in ax.patches[n_groups:]: nt.assert_equal(bar.get_facecolor()[:-1], p.colors[1]) positions = np.arange(len(p.plot_data)) for bar, pos in 
zip(ax.patches[:n_groups], positions): nt.assert_almost_equal(bar.get_x(), pos - p.width / 2) nt.assert_almost_equal(bar.get_width(), p.nested_width) for bar, stat in zip(ax.patches, p.statistic.T.flat): if LooseVersion(mpl.__version__) >= mpl_barplot_change: nt.assert_almost_equal(bar.get_y(), 0) nt.assert_almost_equal(bar.get_height(), stat) else: nt.assert_almost_equal(bar.get_y(), min(0, stat)) nt.assert_almost_equal(bar.get_height(), abs(stat)) def test_draw_nested_horizontal_bars(self): kws = self.default_kws.copy() kws.update(x="y", y="g", hue="h", orient="h", data=self.df) p = cat._BarPlotter(**kws) f, ax = plt.subplots() p.draw_bars(ax, {}) n_groups, n_hues = len(p.plot_data), len(p.hue_names) nt.assert_equal(len(ax.patches), n_groups * n_hues) nt.assert_equal(len(ax.lines), n_groups * n_hues) for bar in ax.patches[:n_groups]: nt.assert_equal(bar.get_facecolor()[:-1], p.colors[0]) for bar in ax.patches[n_groups:]: nt.assert_equal(bar.get_facecolor()[:-1], p.colors[1]) positions = np.arange(len(p.plot_data)) for bar, pos in zip(ax.patches[:n_groups], positions): nt.assert_almost_equal(bar.get_y(), pos - p.width / 2) nt.assert_almost_equal(bar.get_height(), p.nested_width) for bar, stat in zip(ax.patches, p.statistic.T.flat): if LooseVersion(mpl.__version__) >= mpl_barplot_change: nt.assert_almost_equal(bar.get_x(), 0) nt.assert_almost_equal(bar.get_width(), stat) else: nt.assert_almost_equal(bar.get_x(), min(0, stat)) nt.assert_almost_equal(bar.get_width(), abs(stat)) def test_draw_missing_bars(self): kws = self.default_kws.copy() order = list("abcd") kws.update(x="g", y="y", order=order, data=self.df) p = cat._BarPlotter(**kws) f, ax = plt.subplots() p.draw_bars(ax, {}) nt.assert_equal(len(ax.patches), len(order)) nt.assert_equal(len(ax.lines), len(order)) plt.close("all") hue_order = list("mno") kws.update(x="g", y="y", hue="h", hue_order=hue_order, data=self.df) p = cat._BarPlotter(**kws) f, ax = plt.subplots() p.draw_bars(ax, {}) nt.assert_equal(len(ax.patches), len(p.plot_data) * len(hue_order)) nt.assert_equal(len(ax.lines), len(p.plot_data) * len(hue_order)) plt.close("all") def test_unaligned_index(self): f, (ax1, ax2) = plt.subplots(2) cat.barplot(self.g, self.y, ci="sd", ax=ax1) cat.barplot(self.g, self.y_perm, ci="sd", ax=ax2) for l1, l2 in zip(ax1.lines, ax2.lines): assert pytest.approx(l1.get_xydata()) == l2.get_xydata() for p1, p2 in zip(ax1.patches, ax2.patches): assert pytest.approx(p1.get_xy()) == p2.get_xy() assert pytest.approx(p1.get_height()) == p2.get_height() assert pytest.approx(p1.get_width()) == p2.get_width() f, (ax1, ax2) = plt.subplots(2) hue_order = self.h.unique() cat.barplot(self.g, self.y, self.h, hue_order=hue_order, ci="sd", ax=ax1) cat.barplot(self.g, self.y_perm, self.h, hue_order=hue_order, ci="sd", ax=ax2) for l1, l2 in zip(ax1.lines, ax2.lines): assert pytest.approx(l1.get_xydata()) == l2.get_xydata() for p1, p2 in zip(ax1.patches, ax2.patches): assert pytest.approx(p1.get_xy()) == p2.get_xy() assert pytest.approx(p1.get_height()) == p2.get_height() assert pytest.approx(p1.get_width()) == p2.get_width() def test_barplot_colors(self): # Test unnested palette colors kws = self.default_kws.copy() kws.update(x="g", y="y", data=self.df, saturation=1, palette="muted") p = cat._BarPlotter(**kws) f, ax = plt.subplots() p.draw_bars(ax, {}) palette = palettes.color_palette("muted", len(self.g.unique())) for patch, pal_color in zip(ax.patches, palette): nt.assert_equal(patch.get_facecolor()[:-1], pal_color) plt.close("all") # Test single color color = 
(.2, .2, .3, 1) kws = self.default_kws.copy() kws.update(x="g", y="y", data=self.df, saturation=1, color=color) p = cat._BarPlotter(**kws) f, ax = plt.subplots() p.draw_bars(ax, {}) for patch in ax.patches: nt.assert_equal(patch.get_facecolor(), color) plt.close("all") # Test nested palette colors kws = self.default_kws.copy() kws.update(x="g", y="y", hue="h", data=self.df, saturation=1, palette="Set2") p = cat._BarPlotter(**kws) f, ax = plt.subplots() p.draw_bars(ax, {}) palette = palettes.color_palette("Set2", len(self.h.unique())) for patch in ax.patches[:len(self.g.unique())]: nt.assert_equal(patch.get_facecolor()[:-1], palette[0]) for patch in ax.patches[len(self.g.unique()):]: nt.assert_equal(patch.get_facecolor()[:-1], palette[1]) plt.close("all") def test_simple_barplots(self): ax = cat.barplot("g", "y", data=self.df) nt.assert_equal(len(ax.patches), len(self.g.unique())) nt.assert_equal(ax.get_xlabel(), "g") nt.assert_equal(ax.get_ylabel(), "y") plt.close("all") ax = cat.barplot("y", "g", orient="h", data=self.df) nt.assert_equal(len(ax.patches), len(self.g.unique())) nt.assert_equal(ax.get_xlabel(), "y") nt.assert_equal(ax.get_ylabel(), "g") plt.close("all") ax = cat.barplot("g", "y", "h", data=self.df) nt.assert_equal(len(ax.patches), len(self.g.unique()) * len(self.h.unique())) nt.assert_equal(ax.get_xlabel(), "g") nt.assert_equal(ax.get_ylabel(), "y") plt.close("all") ax = cat.barplot("y", "g", "h", orient="h", data=self.df) nt.assert_equal(len(ax.patches), len(self.g.unique()) * len(self.h.unique())) nt.assert_equal(ax.get_xlabel(), "y") nt.assert_equal(ax.get_ylabel(), "g") plt.close("all") class TestPointPlotter(CategoricalFixture): default_kws = dict( x=None, y=None, hue=None, data=None, estimator=np.mean, ci=95, n_boot=100, units=None, seed=None, order=None, hue_order=None, markers="o", linestyles="-", dodge=0, join=True, scale=1, orient=None, color=None, palette=None, ) def test_different_defualt_colors(self): kws = self.default_kws.copy() kws.update(dict(x="g", y="y", data=self.df)) p = cat._PointPlotter(**kws) color = palettes.color_palette()[0] npt.assert_array_equal(p.colors, [color, color, color]) def test_hue_offsets(self): kws = self.default_kws.copy() kws.update(dict(x="g", y="y", hue="h", data=self.df)) p = cat._PointPlotter(**kws) npt.assert_array_equal(p.hue_offsets, [0, 0]) kws.update(dict(dodge=.5)) p = cat._PointPlotter(**kws) npt.assert_array_equal(p.hue_offsets, [-.25, .25]) kws.update(dict(x="h", hue="g", dodge=0)) p = cat._PointPlotter(**kws) npt.assert_array_equal(p.hue_offsets, [0, 0, 0]) kws.update(dict(dodge=.3)) p = cat._PointPlotter(**kws) npt.assert_array_equal(p.hue_offsets, [-.15, 0, .15]) def test_draw_vertical_points(self): kws = self.default_kws.copy() kws.update(x="g", y="y", data=self.df) p = cat._PointPlotter(**kws) f, ax = plt.subplots() p.draw_points(ax) nt.assert_equal(len(ax.collections), 1) nt.assert_equal(len(ax.lines), len(p.plot_data) + 1) points = ax.collections[0] nt.assert_equal(len(points.get_offsets()), len(p.plot_data)) x, y = points.get_offsets().T npt.assert_array_equal(x, np.arange(len(p.plot_data))) npt.assert_array_equal(y, p.statistic) for got_color, want_color in zip(points.get_facecolors(), p.colors): npt.assert_array_equal(got_color[:-1], want_color) def test_draw_horizontal_points(self): kws = self.default_kws.copy() kws.update(x="y", y="g", orient="h", data=self.df) p = cat._PointPlotter(**kws) f, ax = plt.subplots() p.draw_points(ax) nt.assert_equal(len(ax.collections), 1) nt.assert_equal(len(ax.lines), 
len(p.plot_data) + 1) points = ax.collections[0] nt.assert_equal(len(points.get_offsets()), len(p.plot_data)) x, y = points.get_offsets().T npt.assert_array_equal(x, p.statistic) npt.assert_array_equal(y, np.arange(len(p.plot_data))) for got_color, want_color in zip(points.get_facecolors(), p.colors): npt.assert_array_equal(got_color[:-1], want_color) def test_draw_vertical_nested_points(self): kws = self.default_kws.copy() kws.update(x="g", y="y", hue="h", data=self.df) p = cat._PointPlotter(**kws) f, ax = plt.subplots() p.draw_points(ax) nt.assert_equal(len(ax.collections), 2) nt.assert_equal(len(ax.lines), len(p.plot_data) * len(p.hue_names) + len(p.hue_names)) for points, numbers, color in zip(ax.collections, p.statistic.T, p.colors): nt.assert_equal(len(points.get_offsets()), len(p.plot_data)) x, y = points.get_offsets().T npt.assert_array_equal(x, np.arange(len(p.plot_data))) npt.assert_array_equal(y, numbers) for got_color in points.get_facecolors(): npt.assert_array_equal(got_color[:-1], color) def test_draw_horizontal_nested_points(self): kws = self.default_kws.copy() kws.update(x="y", y="g", hue="h", orient="h", data=self.df) p = cat._PointPlotter(**kws) f, ax = plt.subplots() p.draw_points(ax) nt.assert_equal(len(ax.collections), 2) nt.assert_equal(len(ax.lines), len(p.plot_data) * len(p.hue_names) + len(p.hue_names)) for points, numbers, color in zip(ax.collections, p.statistic.T, p.colors): nt.assert_equal(len(points.get_offsets()), len(p.plot_data)) x, y = points.get_offsets().T npt.assert_array_equal(x, numbers) npt.assert_array_equal(y, np.arange(len(p.plot_data))) for got_color in points.get_facecolors(): npt.assert_array_equal(got_color[:-1], color) def test_draw_missing_points(self): kws = self.default_kws.copy() df = self.df.copy() kws.update(x="g", y="y", hue="h", hue_order=["x", "y"], data=df) p = cat._PointPlotter(**kws) f, ax = plt.subplots() p.draw_points(ax) df.loc[df["h"] == "m", "y"] = np.nan kws.update(x="g", y="y", hue="h", data=df) p = cat._PointPlotter(**kws) f, ax = plt.subplots() p.draw_points(ax) def test_unaligned_index(self): f, (ax1, ax2) = plt.subplots(2) cat.pointplot(self.g, self.y, ci="sd", ax=ax1) cat.pointplot(self.g, self.y_perm, ci="sd", ax=ax2) for l1, l2 in zip(ax1.lines, ax2.lines): assert pytest.approx(l1.get_xydata()) == l2.get_xydata() for p1, p2 in zip(ax1.collections, ax2.collections): assert pytest.approx(p1.get_offsets()) == p2.get_offsets() f, (ax1, ax2) = plt.subplots(2) hue_order = self.h.unique() cat.pointplot(self.g, self.y, self.h, hue_order=hue_order, ci="sd", ax=ax1) cat.pointplot(self.g, self.y_perm, self.h, hue_order=hue_order, ci="sd", ax=ax2) for l1, l2 in zip(ax1.lines, ax2.lines): assert pytest.approx(l1.get_xydata()) == l2.get_xydata() for p1, p2 in zip(ax1.collections, ax2.collections): assert pytest.approx(p1.get_offsets()) == p2.get_offsets() def test_pointplot_colors(self): # Test a single-color unnested plot color = (.2, .2, .3, 1) kws = self.default_kws.copy() kws.update(x="g", y="y", data=self.df, color=color) p = cat._PointPlotter(**kws) f, ax = plt.subplots() p.draw_points(ax) for line in ax.lines: nt.assert_equal(line.get_color(), color[:-1]) for got_color in ax.collections[0].get_facecolors(): npt.assert_array_equal(rgb2hex(got_color), rgb2hex(color)) plt.close("all") # Test a multi-color unnested plot palette = palettes.color_palette("Set1", 3) kws.update(x="g", y="y", data=self.df, palette="Set1") p = cat._PointPlotter(**kws) nt.assert_true(not p.join) f, ax = plt.subplots() p.draw_points(ax) for line, 
pal_color in zip(ax.lines, palette): npt.assert_array_equal(line.get_color(), pal_color) for point_color, pal_color in zip(ax.collections[0].get_facecolors(), palette): npt.assert_array_equal(rgb2hex(point_color), rgb2hex(pal_color)) plt.close("all") # Test a multi-colored nested plot palette = palettes.color_palette("dark", 2) kws.update(x="g", y="y", hue="h", data=self.df, palette="dark") p = cat._PointPlotter(**kws) f, ax = plt.subplots() p.draw_points(ax) for line in ax.lines[:(len(p.plot_data) + 1)]: nt.assert_equal(line.get_color(), palette[0]) for line in ax.lines[(len(p.plot_data) + 1):]: nt.assert_equal(line.get_color(), palette[1]) for i, pal_color in enumerate(palette): for point_color in ax.collections[i].get_facecolors(): npt.assert_array_equal(point_color[:-1], pal_color) plt.close("all") def test_simple_pointplots(self): ax = cat.pointplot("g", "y", data=self.df) nt.assert_equal(len(ax.collections), 1) nt.assert_equal(len(ax.lines), len(self.g.unique()) + 1) nt.assert_equal(ax.get_xlabel(), "g") nt.assert_equal(ax.get_ylabel(), "y") plt.close("all") ax = cat.pointplot("y", "g", orient="h", data=self.df) nt.assert_equal(len(ax.collections), 1) nt.assert_equal(len(ax.lines), len(self.g.unique()) + 1) nt.assert_equal(ax.get_xlabel(), "y") nt.assert_equal(ax.get_ylabel(), "g") plt.close("all") ax = cat.pointplot("g", "y", "h", data=self.df) nt.assert_equal(len(ax.collections), len(self.h.unique())) nt.assert_equal(len(ax.lines), (len(self.g.unique()) * len(self.h.unique()) + len(self.h.unique()))) nt.assert_equal(ax.get_xlabel(), "g") nt.assert_equal(ax.get_ylabel(), "y") plt.close("all") ax = cat.pointplot("y", "g", "h", orient="h", data=self.df) nt.assert_equal(len(ax.collections), len(self.h.unique())) nt.assert_equal(len(ax.lines), (len(self.g.unique()) * len(self.h.unique()) + len(self.h.unique()))) nt.assert_equal(ax.get_xlabel(), "y") nt.assert_equal(ax.get_ylabel(), "g") plt.close("all") class TestCountPlot(CategoricalFixture): def test_plot_elements(self): ax = cat.countplot("g", data=self.df) nt.assert_equal(len(ax.patches), self.g.unique().size) for p in ax.patches: nt.assert_equal(p.get_y(), 0) nt.assert_equal(p.get_height(), self.g.size / self.g.unique().size) plt.close("all") ax = cat.countplot(y="g", data=self.df) nt.assert_equal(len(ax.patches), self.g.unique().size) for p in ax.patches: nt.assert_equal(p.get_x(), 0) nt.assert_equal(p.get_width(), self.g.size / self.g.unique().size) plt.close("all") ax = cat.countplot("g", hue="h", data=self.df) nt.assert_equal(len(ax.patches), self.g.unique().size * self.h.unique().size) plt.close("all") ax = cat.countplot(y="g", hue="h", data=self.df) nt.assert_equal(len(ax.patches), self.g.unique().size * self.h.unique().size) plt.close("all") def test_input_error(self): with nt.assert_raises(TypeError): cat.countplot() with nt.assert_raises(TypeError): cat.countplot(x="g", y="h", data=self.df) class TestCatPlot(CategoricalFixture): def test_facet_organization(self): g = cat.catplot("g", "y", data=self.df) nt.assert_equal(g.axes.shape, (1, 1)) g = cat.catplot("g", "y", col="h", data=self.df) nt.assert_equal(g.axes.shape, (1, 2)) g = cat.catplot("g", "y", row="h", data=self.df) nt.assert_equal(g.axes.shape, (2, 1)) g = cat.catplot("g", "y", col="u", row="h", data=self.df) nt.assert_equal(g.axes.shape, (2, 3)) def test_plot_elements(self): g = cat.catplot("g", "y", data=self.df, kind="point") nt.assert_equal(len(g.ax.collections), 1) want_lines = self.g.unique().size + 1 nt.assert_equal(len(g.ax.lines), want_lines) g = 
cat.catplot("g", "y", "h", data=self.df, kind="point") want_collections = self.h.unique().size nt.assert_equal(len(g.ax.collections), want_collections) want_lines = (self.g.unique().size + 1) * self.h.unique().size nt.assert_equal(len(g.ax.lines), want_lines) g = cat.catplot("g", "y", data=self.df, kind="bar") want_elements = self.g.unique().size nt.assert_equal(len(g.ax.patches), want_elements) nt.assert_equal(len(g.ax.lines), want_elements) g = cat.catplot("g", "y", "h", data=self.df, kind="bar") want_elements = self.g.unique().size * self.h.unique().size nt.assert_equal(len(g.ax.patches), want_elements) nt.assert_equal(len(g.ax.lines), want_elements) g = cat.catplot("g", data=self.df, kind="count") want_elements = self.g.unique().size nt.assert_equal(len(g.ax.patches), want_elements) nt.assert_equal(len(g.ax.lines), 0) g = cat.catplot("g", hue="h", data=self.df, kind="count") want_elements = self.g.unique().size * self.h.unique().size nt.assert_equal(len(g.ax.patches), want_elements) nt.assert_equal(len(g.ax.lines), 0) g = cat.catplot("g", "y", data=self.df, kind="box") want_artists = self.g.unique().size nt.assert_equal(len(g.ax.artists), want_artists) g = cat.catplot("g", "y", "h", data=self.df, kind="box") want_artists = self.g.unique().size * self.h.unique().size nt.assert_equal(len(g.ax.artists), want_artists) g = cat.catplot("g", "y", data=self.df, kind="violin", inner=None) want_elements = self.g.unique().size nt.assert_equal(len(g.ax.collections), want_elements) g = cat.catplot("g", "y", "h", data=self.df, kind="violin", inner=None) want_elements = self.g.unique().size * self.h.unique().size nt.assert_equal(len(g.ax.collections), want_elements) g = cat.catplot("g", "y", data=self.df, kind="strip") want_elements = self.g.unique().size nt.assert_equal(len(g.ax.collections), want_elements) g = cat.catplot("g", "y", "h", data=self.df, kind="strip") want_elements = self.g.unique().size + self.h.unique().size nt.assert_equal(len(g.ax.collections), want_elements) def test_bad_plot_kind_error(self): with nt.assert_raises(ValueError): cat.catplot("g", "y", data=self.df, kind="not_a_kind") def test_count_x_and_y(self): with nt.assert_raises(ValueError): cat.catplot("g", "y", data=self.df, kind="count") def test_plot_colors(self): ax = cat.barplot("g", "y", data=self.df) g = cat.catplot("g", "y", data=self.df, kind="bar") for p1, p2 in zip(ax.patches, g.ax.patches): nt.assert_equal(p1.get_facecolor(), p2.get_facecolor()) plt.close("all") ax = cat.barplot("g", "y", data=self.df, color="purple") g = cat.catplot("g", "y", data=self.df, kind="bar", color="purple") for p1, p2 in zip(ax.patches, g.ax.patches): nt.assert_equal(p1.get_facecolor(), p2.get_facecolor()) plt.close("all") ax = cat.barplot("g", "y", data=self.df, palette="Set2") g = cat.catplot("g", "y", data=self.df, kind="bar", palette="Set2") for p1, p2 in zip(ax.patches, g.ax.patches): nt.assert_equal(p1.get_facecolor(), p2.get_facecolor()) plt.close("all") ax = cat.pointplot("g", "y", data=self.df) g = cat.catplot("g", "y", data=self.df) for l1, l2 in zip(ax.lines, g.ax.lines): nt.assert_equal(l1.get_color(), l2.get_color()) plt.close("all") ax = cat.pointplot("g", "y", data=self.df, color="purple") g = cat.catplot("g", "y", data=self.df, color="purple") for l1, l2 in zip(ax.lines, g.ax.lines): nt.assert_equal(l1.get_color(), l2.get_color()) plt.close("all") ax = cat.pointplot("g", "y", data=self.df, palette="Set2") g = cat.catplot("g", "y", data=self.df, palette="Set2") for l1, l2 in zip(ax.lines, g.ax.lines): 
nt.assert_equal(l1.get_color(), l2.get_color()) plt.close("all") def test_ax_kwarg_removal(self): f, ax = plt.subplots() with pytest.warns(UserWarning): g = cat.catplot("g", "y", data=self.df, ax=ax) assert len(ax.collections) == 0 assert len(g.ax.collections) > 0 def test_factorplot(self): with pytest.warns(UserWarning): g = cat.factorplot("g", "y", data=self.df) nt.assert_equal(len(g.ax.collections), 1) want_lines = self.g.unique().size + 1 nt.assert_equal(len(g.ax.lines), want_lines) class TestBoxenPlotter(CategoricalFixture): default_kws = dict(x=None, y=None, hue=None, data=None, order=None, hue_order=None, orient=None, color=None, palette=None, saturation=.75, width=.8, dodge=True, k_depth='proportion', linewidth=None, scale='exponential', outlier_prop=None) def ispatch(self, c): return isinstance(c, mpl.collections.PatchCollection) def edge_calc(self, n, data): q = np.asanyarray([0.5 ** n, 1 - 0.5 ** n]) * 100 q = list(np.unique(q)) return np.percentile(data, q) def test_box_ends_finite(self): p = cat._LVPlotter(**self.default_kws) p.establish_variables("g", "y", data=self.df) box_k = np.asarray([[b, k] for b, k in map(p._lv_box_ends, p.plot_data)]) box_ends = box_k[:, 0] k_vals = box_k[:, 1] # Check that all the box ends are finite and are within # the bounds of the data b_e = map(lambda a: np.all(np.isfinite(a)), box_ends) assert np.sum(list(b_e)) == len(box_ends) def within(t): a, d = t return ((np.ravel(a) <= d.max()) & (np.ravel(a) >= d.min())).all() b_w = map(within, zip(box_ends, p.plot_data)) assert np.sum(list(b_w)) == len(box_ends) k_f = map(lambda k: (k > 0.) & np.isfinite(k), k_vals) assert np.sum(list(k_f)) == len(k_vals) def test_box_ends_correct(self): n = 100 linear_data = np.arange(n) expected_k = int(np.log2(n)) - int(np.log2(n * 0.007)) + 1 expected_edges = [self.edge_calc(i, linear_data) for i in range(expected_k + 2, 1, -1)] p = cat._LVPlotter(**self.default_kws) calc_edges, calc_k = p._lv_box_ends(linear_data) assert np.array_equal(expected_edges, calc_edges) assert expected_k == calc_k def test_outliers(self): n = 100 outlier_data = np.append(np.arange(n - 1), 2 * n) expected_k = int(np.log2(n)) - int(np.log2(n * 0.007)) + 1 expected_edges = [self.edge_calc(i, outlier_data) for i in range(expected_k + 2, 1, -1)] p = cat._LVPlotter(**self.default_kws) calc_edges, calc_k = p._lv_box_ends(outlier_data) npt.assert_equal(list(expected_edges), calc_edges) npt.assert_equal(expected_k, calc_k) out_calc = p._lv_outliers(outlier_data, calc_k) out_exp = p._lv_outliers(outlier_data, expected_k) npt.assert_equal(out_exp, out_calc) def test_hue_offsets(self): p = cat._LVPlotter(**self.default_kws) p.establish_variables("g", "y", "h", data=self.df) npt.assert_array_equal(p.hue_offsets, [-.2, .2]) kws = self.default_kws.copy() kws["width"] = .6 p = cat._LVPlotter(**kws) p.establish_variables("g", "y", "h", data=self.df) npt.assert_array_equal(p.hue_offsets, [-.15, .15]) p = cat._LVPlotter(**kws) p.establish_variables("h", "y", "g", data=self.df) npt.assert_array_almost_equal(p.hue_offsets, [-.2, 0, .2]) def test_axes_data(self): ax = cat.boxenplot("g", "y", data=self.df) patches = filter(self.ispatch, ax.collections) nt.assert_equal(len(list(patches)), 3) plt.close("all") ax = cat.boxenplot("g", "y", "h", data=self.df) patches = filter(self.ispatch, ax.collections) nt.assert_equal(len(list(patches)), 6) plt.close("all") def test_box_colors(self): ax = cat.boxenplot("g", "y", data=self.df, saturation=1) pal = palettes.color_palette(n_colors=3) for patch, color in 
zip(ax.artists, pal): nt.assert_equal(patch.get_facecolor()[:3], color) plt.close("all") ax = cat.boxenplot("g", "y", "h", data=self.df, saturation=1) pal = palettes.color_palette(n_colors=2) for patch, color in zip(ax.artists, pal * 2): nt.assert_equal(patch.get_facecolor()[:3], color) plt.close("all") def test_draw_missing_boxes(self): ax = cat.boxenplot("g", "y", data=self.df, order=["a", "b", "c", "d"]) patches = filter(self.ispatch, ax.collections) nt.assert_equal(len(list(patches)), 3) plt.close("all") def test_unaligned_index(self): f, (ax1, ax2) = plt.subplots(2) cat.boxenplot(self.g, self.y, ax=ax1) cat.boxenplot(self.g, self.y_perm, ax=ax2) for l1, l2 in zip(ax1.lines, ax2.lines): assert np.array_equal(l1.get_xydata(), l2.get_xydata()) f, (ax1, ax2) = plt.subplots(2) hue_order = self.h.unique() cat.boxenplot(self.g, self.y, self.h, hue_order=hue_order, ax=ax1) cat.boxenplot(self.g, self.y_perm, self.h, hue_order=hue_order, ax=ax2) for l1, l2 in zip(ax1.lines, ax2.lines): assert np.array_equal(l1.get_xydata(), l2.get_xydata()) def test_missing_data(self): x = ["a", "a", "b", "b", "c", "c", "d", "d"] h = ["x", "y", "x", "y", "x", "y", "x", "y"] y = self.rs.randn(8) y[-2:] = np.nan ax = cat.boxenplot(x, y) nt.assert_equal(len(ax.lines), 3) plt.close("all") y[-1] = 0 ax = cat.boxenplot(x, y, h) nt.assert_equal(len(ax.lines), 7) plt.close("all") def test_boxenplots(self): # Smoke test the high level boxenplot options cat.boxenplot("y", data=self.df) plt.close("all") cat.boxenplot(y="y", data=self.df) plt.close("all") cat.boxenplot("g", "y", data=self.df) plt.close("all") cat.boxenplot("y", "g", data=self.df, orient="h") plt.close("all") cat.boxenplot("g", "y", "h", data=self.df) plt.close("all") cat.boxenplot("g", "y", "h", order=list("nabc"), data=self.df) plt.close("all") cat.boxenplot("g", "y", "h", hue_order=list("omn"), data=self.df) plt.close("all") cat.boxenplot("y", "g", "h", data=self.df, orient="h") plt.close("all") cat.boxenplot("y", "g", "h", data=self.df, orient="h", palette="Set2") plt.close("all") cat.boxenplot("y", "g", "h", data=self.df, orient="h", color="b") plt.close("all") def test_axes_annotation(self): ax = cat.boxenplot("g", "y", data=self.df) nt.assert_equal(ax.get_xlabel(), "g") nt.assert_equal(ax.get_ylabel(), "y") nt.assert_equal(ax.get_xlim(), (-.5, 2.5)) npt.assert_array_equal(ax.get_xticks(), [0, 1, 2]) npt.assert_array_equal([l.get_text() for l in ax.get_xticklabels()], ["a", "b", "c"]) plt.close("all") ax = cat.boxenplot("g", "y", "h", data=self.df) nt.assert_equal(ax.get_xlabel(), "g") nt.assert_equal(ax.get_ylabel(), "y") npt.assert_array_equal(ax.get_xticks(), [0, 1, 2]) npt.assert_array_equal([l.get_text() for l in ax.get_xticklabels()], ["a", "b", "c"]) npt.assert_array_equal([l.get_text() for l in ax.legend_.get_texts()], ["m", "n"]) plt.close("all") with plt.rc_context(rc={"axes.labelsize": "large"}): ax = cat.boxenplot("g", "y", "h", data=self.df) plt.close("all") ax = cat.boxenplot("y", "g", data=self.df, orient="h") nt.assert_equal(ax.get_xlabel(), "y") nt.assert_equal(ax.get_ylabel(), "g") nt.assert_equal(ax.get_ylim(), (2.5, -.5)) npt.assert_array_equal(ax.get_yticks(), [0, 1, 2]) npt.assert_array_equal([l.get_text() for l in ax.get_yticklabels()], ["a", "b", "c"]) plt.close("all") def test_lvplot(self): with pytest.warns(UserWarning): ax = cat.lvplot("g", "y", data=self.df) patches = filter(self.ispatch, ax.collections) nt.assert_equal(len(list(patches)), 3) plt.close("all") 
seaborn-0.10.0/seaborn/tests/test_distributions.py000066400000000000000000000254061361256634400223500ustar00rootroot00000000000000import numpy as np import pandas as pd import matplotlib.pyplot as plt from scipy import stats import pytest import nose.tools as nt import numpy.testing as npt from .. import distributions as dist _no_statsmodels = not dist._has_statsmodels class TestDistPlot(object): rs = np.random.RandomState(0) x = rs.randn(100) def test_hist_bins(self): try: fd_edges = np.histogram_bin_edges(self.x, "fd") except AttributeError: pytest.skip("Requires numpy >= 1.15") ax = dist.distplot(self.x) for edge, bar in zip(fd_edges, ax.patches): assert pytest.approx(edge) == bar.get_x() plt.close(ax.figure) n = 25 n_edges = np.histogram_bin_edges(self.x, n) ax = dist.distplot(self.x, bins=n) for edge, bar in zip(n_edges, ax.patches): assert pytest.approx(edge) == bar.get_x() def test_elements(self): n = 10 ax = dist.distplot(self.x, bins=n, hist=True, kde=False, rug=False, fit=None) assert len(ax.patches) == 10 assert len(ax.lines) == 0 assert len(ax.collections) == 0 plt.close(ax.figure) ax = dist.distplot(self.x, hist=False, kde=True, rug=False, fit=None) assert len(ax.patches) == 0 assert len(ax.lines) == 1 assert len(ax.collections) == 0 plt.close(ax.figure) ax = dist.distplot(self.x, hist=False, kde=False, rug=True, fit=None) assert len(ax.patches) == 0 assert len(ax.lines) == 0 assert len(ax.collections) == 1 plt.close(ax.figure) ax = dist.distplot(self.x, hist=False, kde=False, rug=False, fit=stats.norm) assert len(ax.patches) == 0 assert len(ax.lines) == 1 assert len(ax.collections) == 0 def test_distplot_with_nans(self): f, (ax1, ax2) = plt.subplots(2) x_null = np.append(self.x, [np.nan]) dist.distplot(self.x, ax=ax1) dist.distplot(x_null, ax=ax2) line1 = ax1.lines[0] line2 = ax2.lines[0] assert np.array_equal(line1.get_xydata(), line2.get_xydata()) for bar1, bar2 in zip(ax1.patches, ax2.patches): assert bar1.get_xy() == bar2.get_xy() assert bar1.get_height() == bar2.get_height() class TestKDE(object): rs = np.random.RandomState(0) x = rs.randn(50) y = rs.randn(50) kernel = "gau" bw = "scott" gridsize = 128 clip = (-np.inf, np.inf) cut = 3 def test_scipy_univariate_kde(self): """Test the univariate KDE estimation with scipy.""" grid, y = dist._scipy_univariate_kde(self.x, self.bw, self.gridsize, self.cut, self.clip) nt.assert_equal(len(grid), self.gridsize) nt.assert_equal(len(y), self.gridsize) for bw in ["silverman", .2]: dist._scipy_univariate_kde(self.x, bw, self.gridsize, self.cut, self.clip) @pytest.mark.skipif(_no_statsmodels, reason="no statsmodels") def test_statsmodels_univariate_kde(self): """Test the univariate KDE estimation with statsmodels.""" grid, y = dist._statsmodels_univariate_kde(self.x, self.kernel, self.bw, self.gridsize, self.cut, self.clip) nt.assert_equal(len(grid), self.gridsize) nt.assert_equal(len(y), self.gridsize) for bw in ["silverman", .2]: dist._statsmodels_univariate_kde(self.x, self.kernel, bw, self.gridsize, self.cut, self.clip) def test_scipy_bivariate_kde(self): """Test the bivariate KDE estimation with scipy.""" clip = [self.clip, self.clip] x, y, z = dist._scipy_bivariate_kde(self.x, self.y, self.bw, self.gridsize, self.cut, clip) nt.assert_equal(x.shape, (self.gridsize, self.gridsize)) nt.assert_equal(y.shape, (self.gridsize, self.gridsize)) nt.assert_equal(len(z), self.gridsize) # Test a specific bandwidth clip = [self.clip, self.clip] x, y, z = dist._scipy_bivariate_kde(self.x, self.y, 1, self.gridsize, self.cut, clip) # Test that 
we get an error with an invalid bandwidth with nt.assert_raises(ValueError): dist._scipy_bivariate_kde(self.x, self.y, (1, 2), self.gridsize, self.cut, clip) @pytest.mark.skipif(_no_statsmodels, reason="no statsmodels") def test_statsmodels_bivariate_kde(self): """Test the bivariate KDE estimation with statsmodels.""" clip = [self.clip, self.clip] x, y, z = dist._statsmodels_bivariate_kde(self.x, self.y, self.bw, self.gridsize, self.cut, clip) nt.assert_equal(x.shape, (self.gridsize, self.gridsize)) nt.assert_equal(y.shape, (self.gridsize, self.gridsize)) nt.assert_equal(len(z), self.gridsize) @pytest.mark.skipif(_no_statsmodels, reason="no statsmodels") def test_statsmodels_kde_cumulative(self): """Test computation of cumulative KDE.""" grid, y = dist._statsmodels_univariate_kde(self.x, self.kernel, self.bw, self.gridsize, self.cut, self.clip, cumulative=True) nt.assert_equal(len(grid), self.gridsize) nt.assert_equal(len(y), self.gridsize) # make sure y is monotonically increasing npt.assert_((np.diff(y) > 0).all()) def test_kde_cummulative_2d(self): """Check error if args indicate bivariate KDE and cumulative.""" with npt.assert_raises(TypeError): dist.kdeplot(self.x, data2=self.y, cumulative=True) def test_kde_singular(self): with pytest.warns(UserWarning): ax = dist.kdeplot(np.ones(10)) line = ax.lines[0] assert not line.get_xydata().size with pytest.warns(UserWarning): ax = dist.kdeplot(np.ones(10) * np.nan) line = ax.lines[1] assert not line.get_xydata().size @pytest.mark.parametrize("cumulative", [True, False]) def test_kdeplot_with_nans(self, cumulative): if cumulative and _no_statsmodels: pytest.skip("no statsmodels") x_missing = np.append(self.x, [np.nan, np.nan]) f, ax = plt.subplots() dist.kdeplot(self.x, cumulative=cumulative) dist.kdeplot(x_missing, cumulative=cumulative) line1, line2 = ax.lines assert np.array_equal(line1.get_xydata(), line2.get_xydata()) def test_bivariate_kde_series(self): df = pd.DataFrame({'x': self.x, 'y': self.y}) ax_series = dist.kdeplot(df.x, df.y) ax_values = dist.kdeplot(df.x.values, df.y.values) nt.assert_equal(len(ax_series.collections), len(ax_values.collections)) nt.assert_equal(ax_series.collections[0].get_paths(), ax_values.collections[0].get_paths()) def test_bivariate_kde_colorbar(self): f, ax = plt.subplots() dist.kdeplot(self.x, self.y, cbar=True, cbar_kws=dict(label="density"), ax=ax) nt.assert_equal(len(f.axes), 2) nt.assert_equal(f.axes[1].get_ylabel(), "density") def test_legend(self): f, ax = plt.subplots() dist.kdeplot(self.x, self.y, label="test1") line = ax.lines[-1] assert line.get_label() == "test1" f, ax = plt.subplots() dist.kdeplot(self.x, self.y, shade=True, label="test2") fill = ax.collections[-1] assert fill.get_label() == "test2" def test_contour_color(self): rgb = (.1, .5, .7) f, ax = plt.subplots() dist.kdeplot(self.x, self.y, color=rgb) contour = ax.collections[-1] assert np.array_equal(contour.get_color()[0, :3], rgb) low = ax.collections[0].get_color().mean() high = ax.collections[-1].get_color().mean() assert low < high f, ax = plt.subplots() dist.kdeplot(self.x, self.y, shade=True, color=rgb) contour = ax.collections[-1] low = ax.collections[0].get_facecolor().mean() high = ax.collections[-1].get_facecolor().mean() assert low > high f, ax = plt.subplots() dist.kdeplot(self.x, self.y, shade=True, colors=[rgb]) for level in ax.collections: level_rgb = tuple(level.get_facecolor().squeeze()[:3]) assert level_rgb == rgb class TestRugPlot(object): @pytest.fixture def list_data(self): return np.random.randn(20).tolist() 
@pytest.fixture def array_data(self): return np.random.randn(20) @pytest.fixture def series_data(self): return pd.Series(np.random.randn(20)) def test_rugplot(self, list_data, array_data, series_data): h = .1 for data in [list_data, array_data, series_data]: f, ax = plt.subplots() dist.rugplot(data, h) rug, = ax.collections segments = np.array(rug.get_segments()) assert len(segments) == len(data) assert np.array_equal(segments[:, 0, 0], data) assert np.array_equal(segments[:, 1, 0], data) assert np.array_equal(segments[:, 0, 1], np.zeros_like(data)) assert np.array_equal(segments[:, 1, 1], np.ones_like(data) * h) plt.close(f) f, ax = plt.subplots() dist.rugplot(data, h, axis="y") rug, = ax.collections segments = np.array(rug.get_segments()) assert len(segments) == len(data) assert np.array_equal(segments[:, 0, 1], data) assert np.array_equal(segments[:, 1, 1], data) assert np.array_equal(segments[:, 0, 0], np.zeros_like(data)) assert np.array_equal(segments[:, 1, 0], np.ones_like(data) * h) plt.close(f) f, ax = plt.subplots() dist.rugplot(data, axis="y") dist.rugplot(data, vertical=True) c1, c2 = ax.collections assert np.array_equal(c1.get_segments(), c2.get_segments()) plt.close(f) f, ax = plt.subplots() dist.rugplot(data) dist.rugplot(data, lw=2) dist.rugplot(data, linewidth=3, alpha=.5) for c, lw in zip(ax.collections, [1, 2, 3]): assert np.squeeze(c.get_linewidth()).item() == lw assert c.get_alpha() == .5 plt.close(f) seaborn-0.10.0/seaborn/tests/test_matrix.py000066400000000000000000001312341361256634400207470ustar00rootroot00000000000000import itertools import tempfile import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt import pandas as pd from scipy.spatial import distance from scipy.cluster import hierarchy import nose.tools as nt import numpy.testing as npt try: import pandas.testing as pdt except ImportError: import pandas.util.testing as pdt import pytest from .. import matrix as mat from .. 
import color_palette try: import fastcluster assert fastcluster _no_fastcluster = False except ImportError: _no_fastcluster = True class TestHeatmap(object): rs = np.random.RandomState(sum(map(ord, "heatmap"))) x_norm = rs.randn(4, 8) letters = pd.Series(["A", "B", "C", "D"], name="letters") df_norm = pd.DataFrame(x_norm, index=letters) x_unif = rs.rand(20, 13) df_unif = pd.DataFrame(x_unif) default_kws = dict(vmin=None, vmax=None, cmap=None, center=None, robust=False, annot=False, fmt=".2f", annot_kws=None, cbar=True, cbar_kws=None, mask=None) def test_ndarray_input(self): p = mat._HeatMapper(self.x_norm, **self.default_kws) npt.assert_array_equal(p.plot_data, self.x_norm) pdt.assert_frame_equal(p.data, pd.DataFrame(self.x_norm)) npt.assert_array_equal(p.xticklabels, np.arange(8)) npt.assert_array_equal(p.yticklabels, np.arange(4)) nt.assert_equal(p.xlabel, "") nt.assert_equal(p.ylabel, "") def test_df_input(self): p = mat._HeatMapper(self.df_norm, **self.default_kws) npt.assert_array_equal(p.plot_data, self.x_norm) pdt.assert_frame_equal(p.data, self.df_norm) npt.assert_array_equal(p.xticklabels, np.arange(8)) npt.assert_array_equal(p.yticklabels, self.letters.values) nt.assert_equal(p.xlabel, "") nt.assert_equal(p.ylabel, "letters") def test_df_multindex_input(self): df = self.df_norm.copy() index = pd.MultiIndex.from_tuples([("A", 1), ("B", 2), ("C", 3), ("D", 4)], names=["letter", "number"]) index.name = "letter-number" df.index = index p = mat._HeatMapper(df, **self.default_kws) combined_tick_labels = ["A-1", "B-2", "C-3", "D-4"] npt.assert_array_equal(p.yticklabels, combined_tick_labels) nt.assert_equal(p.ylabel, "letter-number") p = mat._HeatMapper(df.T, **self.default_kws) npt.assert_array_equal(p.xticklabels, combined_tick_labels) nt.assert_equal(p.xlabel, "letter-number") def test_mask_input(self): kws = self.default_kws.copy() mask = self.x_norm > 0 kws['mask'] = mask p = mat._HeatMapper(self.x_norm, **kws) plot_data = np.ma.masked_where(mask, self.x_norm) npt.assert_array_equal(p.plot_data, plot_data) def test_default_vlims(self): p = mat._HeatMapper(self.df_unif, **self.default_kws) nt.assert_equal(p.vmin, self.x_unif.min()) nt.assert_equal(p.vmax, self.x_unif.max()) def test_robust_vlims(self): kws = self.default_kws.copy() kws["robust"] = True p = mat._HeatMapper(self.df_unif, **kws) nt.assert_equal(p.vmin, np.percentile(self.x_unif, 2)) nt.assert_equal(p.vmax, np.percentile(self.x_unif, 98)) def test_custom_sequential_vlims(self): kws = self.default_kws.copy() kws["vmin"] = 0 kws["vmax"] = 1 p = mat._HeatMapper(self.df_unif, **kws) nt.assert_equal(p.vmin, 0) nt.assert_equal(p.vmax, 1) def test_custom_diverging_vlims(self): kws = self.default_kws.copy() kws["vmin"] = -4 kws["vmax"] = 5 kws["center"] = 0 p = mat._HeatMapper(self.df_norm, **kws) nt.assert_equal(p.vmin, -4) nt.assert_equal(p.vmax, 5) def test_array_with_nans(self): x1 = self.rs.rand(10, 10) nulls = np.zeros(10) * np.nan x2 = np.c_[x1, nulls] m1 = mat._HeatMapper(x1, **self.default_kws) m2 = mat._HeatMapper(x2, **self.default_kws) nt.assert_equal(m1.vmin, m2.vmin) nt.assert_equal(m1.vmax, m2.vmax) def test_mask(self): df = pd.DataFrame(data={'a': [1, 1, 1], 'b': [2, np.nan, 2], 'c': [3, 3, np.nan]}) kws = self.default_kws.copy() kws["mask"] = np.isnan(df.values) m = mat._HeatMapper(df, **kws) npt.assert_array_equal(np.isnan(m.plot_data.data), m.plot_data.mask) def test_custom_cmap(self): kws = self.default_kws.copy() kws["cmap"] = "BuGn" p = mat._HeatMapper(self.df_unif, **kws) nt.assert_equal(p.cmap, 
mpl.cm.BuGn) def test_centered_vlims(self): kws = self.default_kws.copy() kws["center"] = .5 p = mat._HeatMapper(self.df_unif, **kws) nt.assert_equal(p.vmin, self.df_unif.values.min()) nt.assert_equal(p.vmax, self.df_unif.values.max()) def test_default_colors(self): vals = np.linspace(.2, 1, 9) cmap = mpl.cm.binary ax = mat.heatmap([vals], cmap=cmap) fc = ax.collections[0].get_facecolors() cvals = np.linspace(0, 1, 9) npt.assert_array_almost_equal(fc, cmap(cvals), 2) def test_custom_vlim_colors(self): vals = np.linspace(.2, 1, 9) cmap = mpl.cm.binary ax = mat.heatmap([vals], vmin=0, cmap=cmap) fc = ax.collections[0].get_facecolors() npt.assert_array_almost_equal(fc, cmap(vals), 2) def test_custom_center_colors(self): vals = np.linspace(.2, 1, 9) cmap = mpl.cm.binary ax = mat.heatmap([vals], center=.5, cmap=cmap) fc = ax.collections[0].get_facecolors() npt.assert_array_almost_equal(fc, cmap(vals), 2) def test_tickabels_off(self): kws = self.default_kws.copy() kws['xticklabels'] = False kws['yticklabels'] = False p = mat._HeatMapper(self.df_norm, **kws) nt.assert_equal(p.xticklabels, []) nt.assert_equal(p.yticklabels, []) def test_custom_ticklabels(self): kws = self.default_kws.copy() xticklabels = list('iheartheatmaps'[:self.df_norm.shape[1]]) yticklabels = list('heatmapsarecool'[:self.df_norm.shape[0]]) kws['xticklabels'] = xticklabels kws['yticklabels'] = yticklabels p = mat._HeatMapper(self.df_norm, **kws) nt.assert_equal(p.xticklabels, xticklabels) nt.assert_equal(p.yticklabels, yticklabels) def test_custom_ticklabel_interval(self): kws = self.default_kws.copy() xstep, ystep = 2, 3 kws['xticklabels'] = xstep kws['yticklabels'] = ystep p = mat._HeatMapper(self.df_norm, **kws) nx, ny = self.df_norm.T.shape npt.assert_array_equal(p.xticks, np.arange(0, nx, xstep) + .5) npt.assert_array_equal(p.yticks, np.arange(0, ny, ystep) + .5) npt.assert_array_equal(p.xticklabels, self.df_norm.columns[0:nx:xstep]) npt.assert_array_equal(p.yticklabels, self.df_norm.index[0:ny:ystep]) def test_heatmap_annotation(self): ax = mat.heatmap(self.df_norm, annot=True, fmt=".1f", annot_kws={"fontsize": 14}) for val, text in zip(self.x_norm.flat, ax.texts): nt.assert_equal(text.get_text(), "{:.1f}".format(val)) nt.assert_equal(text.get_fontsize(), 14) def test_heatmap_annotation_overwrite_kws(self): annot_kws = dict(color="0.3", va="bottom", ha="left") ax = mat.heatmap(self.df_norm, annot=True, fmt=".1f", annot_kws=annot_kws) for text in ax.texts: nt.assert_equal(text.get_color(), "0.3") nt.assert_equal(text.get_ha(), "left") nt.assert_equal(text.get_va(), "bottom") def test_heatmap_annotation_with_mask(self): df = pd.DataFrame(data={'a': [1, 1, 1], 'b': [2, np.nan, 2], 'c': [3, 3, np.nan]}) mask = np.isnan(df.values) df_masked = np.ma.masked_where(mask, df) ax = mat.heatmap(df, annot=True, fmt='.1f', mask=mask) nt.assert_equal(len(df_masked.compressed()), len(ax.texts)) for val, text in zip(df_masked.compressed(), ax.texts): nt.assert_equal("{:.1f}".format(val), text.get_text()) def test_heatmap_annotation_mesh_colors(self): ax = mat.heatmap(self.df_norm, annot=True) mesh = ax.collections[0] nt.assert_equal(len(mesh.get_facecolors()), self.df_norm.values.size) plt.close("all") def test_heatmap_annotation_other_data(self): annot_data = self.df_norm + 10 ax = mat.heatmap(self.df_norm, annot=annot_data, fmt=".1f", annot_kws={"fontsize": 14}) for val, text in zip(annot_data.values.flat, ax.texts): nt.assert_equal(text.get_text(), "{:.1f}".format(val)) nt.assert_equal(text.get_fontsize(), 14) def 
test_heatmap_annotation_with_limited_ticklabels(self): ax = mat.heatmap(self.df_norm, fmt=".2f", annot=True, xticklabels=False, yticklabels=False) for val, text in zip(self.x_norm.flat, ax.texts): nt.assert_equal(text.get_text(), "{:.2f}".format(val)) def test_heatmap_cbar(self): f = plt.figure() mat.heatmap(self.df_norm) nt.assert_equal(len(f.axes), 2) plt.close(f) f = plt.figure() mat.heatmap(self.df_norm, cbar=False) nt.assert_equal(len(f.axes), 1) plt.close(f) f, (ax1, ax2) = plt.subplots(2) mat.heatmap(self.df_norm, ax=ax1, cbar_ax=ax2) nt.assert_equal(len(f.axes), 2) plt.close(f) @pytest.mark.xfail(mpl.__version__ == "3.1.1", reason="matplotlib 3.1.1 bug") def test_heatmap_axes(self): ax = mat.heatmap(self.df_norm) xtl = [int(l.get_text()) for l in ax.get_xticklabels()] nt.assert_equal(xtl, list(self.df_norm.columns)) ytl = [l.get_text() for l in ax.get_yticklabels()] nt.assert_equal(ytl, list(self.df_norm.index)) nt.assert_equal(ax.get_xlabel(), "") nt.assert_equal(ax.get_ylabel(), "letters") nt.assert_equal(ax.get_xlim(), (0, 8)) nt.assert_equal(ax.get_ylim(), (4, 0)) def test_heatmap_ticklabel_rotation(self): f, ax = plt.subplots(figsize=(2, 2)) mat.heatmap(self.df_norm, xticklabels=1, yticklabels=1, ax=ax) for t in ax.get_xticklabels(): nt.assert_equal(t.get_rotation(), 0) for t in ax.get_yticklabels(): nt.assert_equal(t.get_rotation(), 90) plt.close(f) df = self.df_norm.copy() df.columns = [str(c) * 10 for c in df.columns] df.index = [i * 10 for i in df.index] f, ax = plt.subplots(figsize=(2, 2)) mat.heatmap(df, xticklabels=1, yticklabels=1, ax=ax) for t in ax.get_xticklabels(): nt.assert_equal(t.get_rotation(), 90) for t in ax.get_yticklabels(): nt.assert_equal(t.get_rotation(), 0) plt.close(f) def test_heatmap_inner_lines(self): c = (0, 0, 1, 1) ax = mat.heatmap(self.df_norm, linewidths=2, linecolor=c) mesh = ax.collections[0] nt.assert_equal(mesh.get_linewidths()[0], 2) nt.assert_equal(tuple(mesh.get_edgecolor()[0]), c) def test_square_aspect(self): ax = mat.heatmap(self.df_norm, square=True) nt.assert_equal(ax.get_aspect(), "equal") def test_mask_validation(self): mask = mat._matrix_mask(self.df_norm, None) nt.assert_equal(mask.shape, self.df_norm.shape) nt.assert_equal(mask.values.sum(), 0) with nt.assert_raises(ValueError): bad_array_mask = self.rs.randn(3, 6) > 0 mat._matrix_mask(self.df_norm, bad_array_mask) with nt.assert_raises(ValueError): bad_df_mask = pd.DataFrame(self.rs.randn(4, 8) > 0) mat._matrix_mask(self.df_norm, bad_df_mask) def test_missing_data_mask(self): data = pd.DataFrame(np.arange(4, dtype=np.float).reshape(2, 2)) data.loc[0, 0] = np.nan mask = mat._matrix_mask(data, None) npt.assert_array_equal(mask, [[True, False], [False, False]]) mask_in = np.array([[False, True], [False, False]]) mask_out = mat._matrix_mask(data, mask_in) npt.assert_array_equal(mask_out, [[True, True], [False, False]]) def test_cbar_ticks(self): f, (ax1, ax2) = plt.subplots(2) mat.heatmap(self.df_norm, ax=ax1, cbar_ax=ax2, cbar_kws=dict(drawedges=True)) assert len(ax2.collections) == 2 class TestDendrogram(object): rs = np.random.RandomState(sum(map(ord, "dendrogram"))) x_norm = rs.randn(4, 8) + np.arange(8) x_norm = (x_norm.T + np.arange(4)).T letters = pd.Series(["A", "B", "C", "D", "E", "F", "G", "H"], name="letters") df_norm = pd.DataFrame(x_norm, columns=letters) try: import fastcluster x_norm_linkage = fastcluster.linkage_vector(x_norm.T, metric='euclidean', method='single') except ImportError: x_norm_distances = distance.pdist(x_norm.T, metric='euclidean') x_norm_linkage = 
hierarchy.linkage(x_norm_distances, method='single') x_norm_dendrogram = hierarchy.dendrogram(x_norm_linkage, no_plot=True, color_threshold=-np.inf) x_norm_leaves = x_norm_dendrogram['leaves'] df_norm_leaves = np.asarray(df_norm.columns[x_norm_leaves]) default_kws = dict(linkage=None, metric='euclidean', method='single', axis=1, label=True, rotate=False) def test_ndarray_input(self): p = mat._DendrogramPlotter(self.x_norm, **self.default_kws) npt.assert_array_equal(p.array.T, self.x_norm) pdt.assert_frame_equal(p.data.T, pd.DataFrame(self.x_norm)) npt.assert_array_equal(p.linkage, self.x_norm_linkage) nt.assert_dict_equal(p.dendrogram, self.x_norm_dendrogram) npt.assert_array_equal(p.reordered_ind, self.x_norm_leaves) npt.assert_array_equal(p.xticklabels, self.x_norm_leaves) npt.assert_array_equal(p.yticklabels, []) nt.assert_equal(p.xlabel, None) nt.assert_equal(p.ylabel, '') def test_df_input(self): p = mat._DendrogramPlotter(self.df_norm, **self.default_kws) npt.assert_array_equal(p.array.T, np.asarray(self.df_norm)) pdt.assert_frame_equal(p.data.T, self.df_norm) npt.assert_array_equal(p.linkage, self.x_norm_linkage) nt.assert_dict_equal(p.dendrogram, self.x_norm_dendrogram) npt.assert_array_equal(p.xticklabels, np.asarray(self.df_norm.columns)[ self.x_norm_leaves]) npt.assert_array_equal(p.yticklabels, []) nt.assert_equal(p.xlabel, 'letters') nt.assert_equal(p.ylabel, '') def test_df_multindex_input(self): df = self.df_norm.copy() index = pd.MultiIndex.from_tuples([("A", 1), ("B", 2), ("C", 3), ("D", 4)], names=["letter", "number"]) index.name = "letter-number" df.index = index kws = self.default_kws.copy() kws['label'] = True p = mat._DendrogramPlotter(df.T, **kws) xticklabels = ["A-1", "B-2", "C-3", "D-4"] xticklabels = [xticklabels[i] for i in p.reordered_ind] npt.assert_array_equal(p.xticklabels, xticklabels) npt.assert_array_equal(p.yticklabels, []) nt.assert_equal(p.xlabel, "letter-number") def test_axis0_input(self): kws = self.default_kws.copy() kws['axis'] = 0 p = mat._DendrogramPlotter(self.df_norm.T, **kws) npt.assert_array_equal(p.array, np.asarray(self.df_norm.T)) pdt.assert_frame_equal(p.data, self.df_norm.T) npt.assert_array_equal(p.linkage, self.x_norm_linkage) nt.assert_dict_equal(p.dendrogram, self.x_norm_dendrogram) npt.assert_array_equal(p.xticklabels, self.df_norm_leaves) npt.assert_array_equal(p.yticklabels, []) nt.assert_equal(p.xlabel, 'letters') nt.assert_equal(p.ylabel, '') def test_rotate_input(self): kws = self.default_kws.copy() kws['rotate'] = True p = mat._DendrogramPlotter(self.df_norm, **kws) npt.assert_array_equal(p.array.T, np.asarray(self.df_norm)) pdt.assert_frame_equal(p.data.T, self.df_norm) npt.assert_array_equal(p.xticklabels, []) npt.assert_array_equal(p.yticklabels, self.df_norm_leaves) nt.assert_equal(p.xlabel, '') nt.assert_equal(p.ylabel, 'letters') def test_rotate_axis0_input(self): kws = self.default_kws.copy() kws['rotate'] = True kws['axis'] = 0 p = mat._DendrogramPlotter(self.df_norm.T, **kws) npt.assert_array_equal(p.reordered_ind, self.x_norm_leaves) def test_custom_linkage(self): kws = self.default_kws.copy() try: import fastcluster linkage = fastcluster.linkage_vector(self.x_norm, method='single', metric='euclidean') except ImportError: d = distance.pdist(self.x_norm, metric='euclidean') linkage = hierarchy.linkage(d, method='single') dendrogram = hierarchy.dendrogram(linkage, no_plot=True, color_threshold=-np.inf) kws['linkage'] = linkage p = mat._DendrogramPlotter(self.df_norm, **kws) npt.assert_array_equal(p.linkage, linkage) 
nt.assert_dict_equal(p.dendrogram, dendrogram) def test_label_false(self): kws = self.default_kws.copy() kws['label'] = False p = mat._DendrogramPlotter(self.df_norm, **kws) nt.assert_equal(p.xticks, []) nt.assert_equal(p.yticks, []) nt.assert_equal(p.xticklabels, []) nt.assert_equal(p.yticklabels, []) nt.assert_equal(p.xlabel, "") nt.assert_equal(p.ylabel, "") def test_linkage_scipy(self): p = mat._DendrogramPlotter(self.x_norm, **self.default_kws) scipy_linkage = p._calculate_linkage_scipy() from scipy.spatial import distance from scipy.cluster import hierarchy dists = distance.pdist(self.x_norm.T, metric=self.default_kws['metric']) linkage = hierarchy.linkage(dists, method=self.default_kws['method']) npt.assert_array_equal(scipy_linkage, linkage) @pytest.mark.skipif(_no_fastcluster, reason="fastcluster not installed") def test_fastcluster_other_method(self): import fastcluster kws = self.default_kws.copy() kws['method'] = 'average' linkage = fastcluster.linkage(self.x_norm.T, method='average', metric='euclidean') p = mat._DendrogramPlotter(self.x_norm, **kws) npt.assert_array_equal(p.linkage, linkage) @pytest.mark.skipif(_no_fastcluster, reason="fastcluster not installed") def test_fastcluster_non_euclidean(self): import fastcluster kws = self.default_kws.copy() kws['metric'] = 'cosine' kws['method'] = 'average' linkage = fastcluster.linkage(self.x_norm.T, method=kws['method'], metric=kws['metric']) p = mat._DendrogramPlotter(self.x_norm, **kws) npt.assert_array_equal(p.linkage, linkage) def test_dendrogram_plot(self): d = mat.dendrogram(self.x_norm, **self.default_kws) ax = plt.gca() xlim = ax.get_xlim() # 10 comes from _plot_dendrogram in scipy.cluster.hierarchy xmax = len(d.reordered_ind) * 10 nt.assert_equal(xlim[0], 0) nt.assert_equal(xlim[1], xmax) nt.assert_equal(len(ax.collections[0].get_paths()), len(d.dependent_coord)) @pytest.mark.xfail(mpl.__version__ == "3.1.1", reason="matplotlib 3.1.1 bug") def test_dendrogram_rotate(self): kws = self.default_kws.copy() kws['rotate'] = True d = mat.dendrogram(self.x_norm, **kws) ax = plt.gca() ylim = ax.get_ylim() # 10 comes from _plot_dendrogram in scipy.cluster.hierarchy ymax = len(d.reordered_ind) * 10 # Since y axis is inverted, ylim is (80, 0) # and therefore not (0, 80) as usual: nt.assert_equal(ylim[1], 0) nt.assert_equal(ylim[0], ymax) def test_dendrogram_ticklabel_rotation(self): f, ax = plt.subplots(figsize=(2, 2)) mat.dendrogram(self.df_norm, ax=ax) for t in ax.get_xticklabels(): nt.assert_equal(t.get_rotation(), 0) plt.close(f) df = self.df_norm.copy() df.columns = [str(c) * 10 for c in df.columns] df.index = [i * 10 for i in df.index] f, ax = plt.subplots(figsize=(2, 2)) mat.dendrogram(df, ax=ax) for t in ax.get_xticklabels(): nt.assert_equal(t.get_rotation(), 90) plt.close(f) f, ax = plt.subplots(figsize=(2, 2)) mat.dendrogram(df.T, axis=0, rotate=True) for t in ax.get_yticklabels(): nt.assert_equal(t.get_rotation(), 0) plt.close(f) class TestClustermap(object): rs = np.random.RandomState(sum(map(ord, "clustermap"))) x_norm = rs.randn(4, 8) + np.arange(8) x_norm = (x_norm.T + np.arange(4)).T letters = pd.Series(["A", "B", "C", "D", "E", "F", "G", "H"], name="letters") df_norm = pd.DataFrame(x_norm, columns=letters) try: import fastcluster x_norm_linkage = fastcluster.linkage_vector(x_norm.T, metric='euclidean', method='single') except ImportError: x_norm_distances = distance.pdist(x_norm.T, metric='euclidean') x_norm_linkage = hierarchy.linkage(x_norm_distances, method='single') x_norm_dendrogram = 
hierarchy.dendrogram(x_norm_linkage, no_plot=True, color_threshold=-np.inf) x_norm_leaves = x_norm_dendrogram['leaves'] df_norm_leaves = np.asarray(df_norm.columns[x_norm_leaves]) default_kws = dict(pivot_kws=None, z_score=None, standard_scale=None, figsize=(10, 10), row_colors=None, col_colors=None, dendrogram_ratio=.2, colors_ratio=.03, cbar_pos=(0, .8, .05, .2)) default_plot_kws = dict(metric='euclidean', method='average', colorbar_kws=None, row_cluster=True, col_cluster=True, row_linkage=None, col_linkage=None, tree_kws=None) row_colors = color_palette('Set2', df_norm.shape[0]) col_colors = color_palette('Dark2', df_norm.shape[1]) def test_ndarray_input(self): cm = mat.ClusterGrid(self.x_norm, **self.default_kws) pdt.assert_frame_equal(cm.data, pd.DataFrame(self.x_norm)) nt.assert_equal(len(cm.fig.axes), 4) nt.assert_equal(cm.ax_row_colors, None) nt.assert_equal(cm.ax_col_colors, None) def test_df_input(self): cm = mat.ClusterGrid(self.df_norm, **self.default_kws) pdt.assert_frame_equal(cm.data, self.df_norm) def test_corr_df_input(self): df = self.df_norm.corr() cg = mat.ClusterGrid(df, **self.default_kws) cg.plot(**self.default_plot_kws) diag = cg.data2d.values[np.diag_indices_from(cg.data2d)] npt.assert_array_equal(diag, np.ones(cg.data2d.shape[0])) def test_pivot_input(self): df_norm = self.df_norm.copy() df_norm.index.name = 'numbers' df_long = pd.melt(df_norm.reset_index(), var_name='letters', id_vars='numbers') kws = self.default_kws.copy() kws['pivot_kws'] = dict(index='numbers', columns='letters', values='value') cm = mat.ClusterGrid(df_long, **kws) pdt.assert_frame_equal(cm.data2d, df_norm) def test_colors_input(self): kws = self.default_kws.copy() kws['row_colors'] = self.row_colors kws['col_colors'] = self.col_colors cm = mat.ClusterGrid(self.df_norm, **kws) npt.assert_array_equal(cm.row_colors, self.row_colors) npt.assert_array_equal(cm.col_colors, self.col_colors) nt.assert_equal(len(cm.fig.axes), 6) def test_nested_colors_input(self): kws = self.default_kws.copy() row_colors = [self.row_colors, self.row_colors] col_colors = [self.col_colors, self.col_colors] kws['row_colors'] = row_colors kws['col_colors'] = col_colors cm = mat.ClusterGrid(self.df_norm, **kws) npt.assert_array_equal(cm.row_colors, row_colors) npt.assert_array_equal(cm.col_colors, col_colors) nt.assert_equal(len(cm.fig.axes), 6) def test_colors_input_custom_cmap(self): kws = self.default_kws.copy() kws['cmap'] = mpl.cm.PRGn kws['row_colors'] = self.row_colors kws['col_colors'] = self.col_colors cm = mat.clustermap(self.df_norm, **kws) npt.assert_array_equal(cm.row_colors, self.row_colors) npt.assert_array_equal(cm.col_colors, self.col_colors) nt.assert_equal(len(cm.fig.axes), 6) def test_z_score(self): df = self.df_norm.copy() df = (df - df.mean()) / df.std() kws = self.default_kws.copy() kws['z_score'] = 1 cm = mat.ClusterGrid(self.df_norm, **kws) pdt.assert_frame_equal(cm.data2d, df) def test_z_score_axis0(self): df = self.df_norm.copy() df = df.T df = (df - df.mean()) / df.std() df = df.T kws = self.default_kws.copy() kws['z_score'] = 0 cm = mat.ClusterGrid(self.df_norm, **kws) pdt.assert_frame_equal(cm.data2d, df) def test_standard_scale(self): df = self.df_norm.copy() df = (df - df.min()) / (df.max() - df.min()) kws = self.default_kws.copy() kws['standard_scale'] = 1 cm = mat.ClusterGrid(self.df_norm, **kws) pdt.assert_frame_equal(cm.data2d, df) def test_standard_scale_axis0(self): df = self.df_norm.copy() df = df.T df = (df - df.min()) / (df.max() - df.min()) df = df.T kws = self.default_kws.copy() 
kws['standard_scale'] = 0 cm = mat.ClusterGrid(self.df_norm, **kws) pdt.assert_frame_equal(cm.data2d, df) def test_z_score_standard_scale(self): kws = self.default_kws.copy() kws['z_score'] = True kws['standard_scale'] = True with nt.assert_raises(ValueError): mat.ClusterGrid(self.df_norm, **kws) def test_color_list_to_matrix_and_cmap(self): matrix, cmap = mat.ClusterGrid.color_list_to_matrix_and_cmap( self.col_colors, self.x_norm_leaves) colors_set = set(self.col_colors) col_to_value = dict((col, i) for i, col in enumerate(colors_set)) matrix_test = np.array([col_to_value[col] for col in self.col_colors])[self.x_norm_leaves] shape = len(self.col_colors), 1 matrix_test = matrix_test.reshape(shape) cmap_test = mpl.colors.ListedColormap(colors_set) npt.assert_array_equal(matrix, matrix_test) npt.assert_array_equal(cmap.colors, cmap_test.colors) def test_nested_color_list_to_matrix_and_cmap(self): colors = [self.col_colors, self.col_colors] matrix, cmap = mat.ClusterGrid.color_list_to_matrix_and_cmap( colors, self.x_norm_leaves) all_colors = set(itertools.chain(*colors)) color_to_value = dict((col, i) for i, col in enumerate(all_colors)) matrix_test = np.array( [color_to_value[c] for color in colors for c in color]) shape = len(colors), len(colors[0]) matrix_test = matrix_test.reshape(shape) matrix_test = matrix_test[:, self.x_norm_leaves] matrix_test = matrix_test.T cmap_test = mpl.colors.ListedColormap(all_colors) npt.assert_array_equal(matrix, matrix_test) npt.assert_array_equal(cmap.colors, cmap_test.colors) def test_color_list_to_matrix_and_cmap_axis1(self): matrix, cmap = mat.ClusterGrid.color_list_to_matrix_and_cmap( self.col_colors, self.x_norm_leaves, axis=1) colors_set = set(self.col_colors) col_to_value = dict((col, i) for i, col in enumerate(colors_set)) matrix_test = np.array([col_to_value[col] for col in self.col_colors])[self.x_norm_leaves] shape = 1, len(self.col_colors) matrix_test = matrix_test.reshape(shape) cmap_test = mpl.colors.ListedColormap(colors_set) npt.assert_array_equal(matrix, matrix_test) npt.assert_array_equal(cmap.colors, cmap_test.colors) def test_savefig(self): # Not sure if this is the right way to test.... 
cm = mat.ClusterGrid(self.df_norm, **self.default_kws) cm.plot(**self.default_plot_kws) cm.savefig(tempfile.NamedTemporaryFile(), format='png') def test_plot_dendrograms(self): cm = mat.clustermap(self.df_norm, **self.default_kws) nt.assert_equal(len(cm.ax_row_dendrogram.collections[0].get_paths()), len(cm.dendrogram_row.independent_coord)) nt.assert_equal(len(cm.ax_col_dendrogram.collections[0].get_paths()), len(cm.dendrogram_col.independent_coord)) data2d = self.df_norm.iloc[cm.dendrogram_row.reordered_ind, cm.dendrogram_col.reordered_ind] pdt.assert_frame_equal(cm.data2d, data2d) def test_cluster_false(self): kws = self.default_kws.copy() kws['row_cluster'] = False kws['col_cluster'] = False cm = mat.clustermap(self.df_norm, **kws) nt.assert_equal(len(cm.ax_row_dendrogram.lines), 0) nt.assert_equal(len(cm.ax_col_dendrogram.lines), 0) nt.assert_equal(len(cm.ax_row_dendrogram.get_xticks()), 0) nt.assert_equal(len(cm.ax_row_dendrogram.get_yticks()), 0) nt.assert_equal(len(cm.ax_col_dendrogram.get_xticks()), 0) nt.assert_equal(len(cm.ax_col_dendrogram.get_yticks()), 0) pdt.assert_frame_equal(cm.data2d, self.df_norm) def test_row_col_colors(self): kws = self.default_kws.copy() kws['row_colors'] = self.row_colors kws['col_colors'] = self.col_colors cm = mat.clustermap(self.df_norm, **kws) nt.assert_equal(len(cm.ax_row_colors.collections), 1) nt.assert_equal(len(cm.ax_col_colors.collections), 1) def test_cluster_false_row_col_colors(self): kws = self.default_kws.copy() kws['row_cluster'] = False kws['col_cluster'] = False kws['row_colors'] = self.row_colors kws['col_colors'] = self.col_colors cm = mat.clustermap(self.df_norm, **kws) nt.assert_equal(len(cm.ax_row_dendrogram.lines), 0) nt.assert_equal(len(cm.ax_col_dendrogram.lines), 0) nt.assert_equal(len(cm.ax_row_dendrogram.get_xticks()), 0) nt.assert_equal(len(cm.ax_row_dendrogram.get_yticks()), 0) nt.assert_equal(len(cm.ax_col_dendrogram.get_xticks()), 0) nt.assert_equal(len(cm.ax_col_dendrogram.get_yticks()), 0) nt.assert_equal(len(cm.ax_row_colors.collections), 1) nt.assert_equal(len(cm.ax_col_colors.collections), 1) pdt.assert_frame_equal(cm.data2d, self.df_norm) def test_row_col_colors_df(self): kws = self.default_kws.copy() kws['row_colors'] = pd.DataFrame({'row_1': list(self.row_colors), 'row_2': list(self.row_colors)}, index=self.df_norm.index, columns=['row_1', 'row_2']) kws['col_colors'] = pd.DataFrame({'col_1': list(self.col_colors), 'col_2': list(self.col_colors)}, index=self.df_norm.columns, columns=['col_1', 'col_2']) cm = mat.clustermap(self.df_norm, **kws) row_labels = [l.get_text() for l in cm.ax_row_colors.get_xticklabels()] nt.assert_equal(cm.row_color_labels, ['row_1', 'row_2']) nt.assert_equal(row_labels, cm.row_color_labels) col_labels = [l.get_text() for l in cm.ax_col_colors.get_yticklabels()] nt.assert_equal(cm.col_color_labels, ['col_1', 'col_2']) nt.assert_equal(col_labels, cm.col_color_labels) def test_row_col_colors_df_shuffled(self): # Tests if colors are properly matched, even if given in wrong order m, n = self.df_norm.shape shuffled_inds = [self.df_norm.index[i] for i in list(range(0, m, 2)) + list(range(1, m, 2))] shuffled_cols = [self.df_norm.columns[i] for i in list(range(0, n, 2)) + list(range(1, n, 2))] kws = self.default_kws.copy() row_colors = pd.DataFrame({'row_annot': list(self.row_colors)}, index=self.df_norm.index) kws['row_colors'] = row_colors.loc[shuffled_inds] col_colors = pd.DataFrame({'col_annot': list(self.col_colors)}, index=self.df_norm.columns) kws['col_colors'] = 
col_colors.loc[shuffled_cols] cm = mat.clustermap(self.df_norm, **kws) nt.assert_equal(list(cm.col_colors)[0], list(self.col_colors)) nt.assert_equal(list(cm.row_colors)[0], list(self.row_colors)) def test_row_col_colors_df_missing(self): kws = self.default_kws.copy() row_colors = pd.DataFrame({'row_annot': list(self.row_colors)}, index=self.df_norm.index) kws['row_colors'] = row_colors.drop(self.df_norm.index[0]) col_colors = pd.DataFrame({'col_annot': list(self.col_colors)}, index=self.df_norm.columns) kws['col_colors'] = col_colors.drop(self.df_norm.columns[0]) cm = mat.clustermap(self.df_norm, **kws) nt.assert_equal(list(cm.col_colors)[0], [(1.0, 1.0, 1.0)] + list(self.col_colors[1:])) nt.assert_equal(list(cm.row_colors)[0], [(1.0, 1.0, 1.0)] + list(self.row_colors[1:])) def test_row_col_colors_df_one_axis(self): # Test case with only row annotation. kws1 = self.default_kws.copy() kws1['row_colors'] = pd.DataFrame({'row_1': list(self.row_colors), 'row_2': list(self.row_colors)}, index=self.df_norm.index, columns=['row_1', 'row_2']) cm1 = mat.clustermap(self.df_norm, **kws1) row_labels = [l.get_text() for l in cm1.ax_row_colors.get_xticklabels()] nt.assert_equal(cm1.row_color_labels, ['row_1', 'row_2']) nt.assert_equal(row_labels, cm1.row_color_labels) # Test case with onl col annotation. kws2 = self.default_kws.copy() kws2['col_colors'] = pd.DataFrame({'col_1': list(self.col_colors), 'col_2': list(self.col_colors)}, index=self.df_norm.columns, columns=['col_1', 'col_2']) cm2 = mat.clustermap(self.df_norm, **kws2) col_labels = [l.get_text() for l in cm2.ax_col_colors.get_yticklabels()] nt.assert_equal(cm2.col_color_labels, ['col_1', 'col_2']) nt.assert_equal(col_labels, cm2.col_color_labels) def test_row_col_colors_series(self): kws = self.default_kws.copy() kws['row_colors'] = pd.Series(list(self.row_colors), name='row_annot', index=self.df_norm.index) kws['col_colors'] = pd.Series(list(self.col_colors), name='col_annot', index=self.df_norm.columns) cm = mat.clustermap(self.df_norm, **kws) row_labels = [l.get_text() for l in cm.ax_row_colors.get_xticklabels()] nt.assert_equal(cm.row_color_labels, ['row_annot']) nt.assert_equal(row_labels, cm.row_color_labels) col_labels = [l.get_text() for l in cm.ax_col_colors.get_yticklabels()] nt.assert_equal(cm.col_color_labels, ['col_annot']) nt.assert_equal(col_labels, cm.col_color_labels) def test_row_col_colors_series_shuffled(self): # Tests if colors are properly matched, even if given in wrong order m, n = self.df_norm.shape shuffled_inds = [self.df_norm.index[i] for i in list(range(0, m, 2)) + list(range(1, m, 2))] shuffled_cols = [self.df_norm.columns[i] for i in list(range(0, n, 2)) + list(range(1, n, 2))] kws = self.default_kws.copy() row_colors = pd.Series(list(self.row_colors), name='row_annot', index=self.df_norm.index) kws['row_colors'] = row_colors.loc[shuffled_inds] col_colors = pd.Series(list(self.col_colors), name='col_annot', index=self.df_norm.columns) kws['col_colors'] = col_colors.loc[shuffled_cols] cm = mat.clustermap(self.df_norm, **kws) nt.assert_equal(list(cm.col_colors), list(self.col_colors)) nt.assert_equal(list(cm.row_colors), list(self.row_colors)) def test_row_col_colors_series_missing(self): kws = self.default_kws.copy() row_colors = pd.Series(list(self.row_colors), name='row_annot', index=self.df_norm.index) kws['row_colors'] = row_colors.drop(self.df_norm.index[0]) col_colors = pd.Series(list(self.col_colors), name='col_annot', index=self.df_norm.columns) kws['col_colors'] = 
col_colors.drop(self.df_norm.columns[0]) cm = mat.clustermap(self.df_norm, **kws) nt.assert_equal(list(cm.col_colors), [(1.0, 1.0, 1.0)] + list(self.col_colors[1:])) nt.assert_equal(list(cm.row_colors), [(1.0, 1.0, 1.0)] + list(self.row_colors[1:])) def test_row_col_colors_ignore_heatmap_kwargs(self): g = mat.clustermap(self.rs.uniform(0, 200, self.df_norm.shape), row_colors=self.row_colors, col_colors=self.col_colors, cmap="Spectral", norm=mpl.colors.LogNorm(), vmax=100) assert np.array_equal( np.array(self.row_colors)[g.dendrogram_row.reordered_ind], g.ax_row_colors.collections[0].get_facecolors()[:, :3] ) assert np.array_equal( np.array(self.col_colors)[g.dendrogram_col.reordered_ind], g.ax_col_colors.collections[0].get_facecolors()[:, :3] ) def test_mask_reorganization(self): kws = self.default_kws.copy() kws["mask"] = self.df_norm > 0 g = mat.clustermap(self.df_norm, **kws) npt.assert_array_equal(g.data2d.index, g.mask.index) npt.assert_array_equal(g.data2d.columns, g.mask.columns) npt.assert_array_equal(g.mask.index, self.df_norm.index[ g.dendrogram_row.reordered_ind]) npt.assert_array_equal(g.mask.columns, self.df_norm.columns[ g.dendrogram_col.reordered_ind]) def test_ticklabel_reorganization(self): kws = self.default_kws.copy() xtl = np.arange(self.df_norm.shape[1]) kws["xticklabels"] = list(xtl) ytl = self.letters.loc[:self.df_norm.shape[0]] kws["yticklabels"] = ytl g = mat.clustermap(self.df_norm, **kws) xtl_actual = [t.get_text() for t in g.ax_heatmap.get_xticklabels()] ytl_actual = [t.get_text() for t in g.ax_heatmap.get_yticklabels()] xtl_want = xtl[g.dendrogram_col.reordered_ind].astype(" g1.ax_col_dendrogram.get_position().height) assert (g2.ax_col_colors.get_position().height > g1.ax_col_colors.get_position().height) assert (g2.ax_heatmap.get_position().height < g1.ax_heatmap.get_position().height) assert (g2.ax_row_dendrogram.get_position().width > g1.ax_row_dendrogram.get_position().width) assert (g2.ax_row_colors.get_position().width > g1.ax_row_colors.get_position().width) assert (g2.ax_heatmap.get_position().width < g1.ax_heatmap.get_position().width) kws1 = self.default_kws.copy() kws1.update(col_colors=self.col_colors) kws2 = kws1.copy() kws2.update(col_colors=[self.col_colors, self.col_colors]) g1 = mat.clustermap(self.df_norm, **kws1) g2 = mat.clustermap(self.df_norm, **kws2) assert (g2.ax_col_colors.get_position().height > g1.ax_col_colors.get_position().height) kws1 = self.default_kws.copy() kws1.update(dendrogram_ratio=(.2, .2)) kws2 = kws1.copy() kws2.update(dendrogram_ratio=(.2, .3)) g1 = mat.clustermap(self.df_norm, **kws1) g2 = mat.clustermap(self.df_norm, **kws2) assert (g2.ax_row_dendrogram.get_position().width == g1.ax_row_dendrogram.get_position().width) assert (g2.ax_col_dendrogram.get_position().height > g1.ax_col_dendrogram.get_position().height) def test_cbar_pos(self): kws = self.default_kws.copy() kws["cbar_pos"] = (.2, .1, .4, .3) g = mat.clustermap(self.df_norm, **kws) pos = g.ax_cbar.get_position() assert pytest.approx(tuple(pos.p0)) == kws["cbar_pos"][:2] assert pytest.approx(pos.width) == kws["cbar_pos"][2] assert pytest.approx(pos.height) == kws["cbar_pos"][3] kws["cbar_pos"] = None g = mat.clustermap(self.df_norm, **kws) assert g.ax_cbar is None def test_square_warning(self): kws = self.default_kws.copy() g1 = mat.clustermap(self.df_norm, **kws) with pytest.warns(UserWarning): kws["square"] = True g2 = mat.clustermap(self.df_norm, **kws) g1_shape = g1.ax_heatmap.get_position().get_points() g2_shape = 
g2.ax_heatmap.get_position().get_points() assert np.array_equal(g1_shape, g2_shape) def test_clustermap_annotation(self): g = mat.clustermap(self.df_norm, annot=True, fmt=".1f") for val, text in zip(np.asarray(g.data2d).flat, g.ax_heatmap.texts): assert text.get_text() == "{:.1f}".format(val) g = mat.clustermap(self.df_norm, annot=self.df_norm, fmt=".1f") for val, text in zip(np.asarray(g.data2d).flat, g.ax_heatmap.texts): assert text.get_text() == "{:.1f}".format(val) def test_tree_kws(self): rgb = (1, .5, .2) g = mat.clustermap(self.df_norm, tree_kws=dict(color=rgb)) for ax in [g.ax_col_dendrogram, g.ax_row_dendrogram]: tree, = ax.collections assert tuple(tree.get_color().squeeze())[:3] == rgb seaborn-0.10.0/seaborn/tests/test_miscplot.py000066400000000000000000000017031361256634400212720ustar00rootroot00000000000000import nose.tools as nt import matplotlib.pyplot as plt from .. import miscplot as misc from ..palettes import color_palette from ..utils import _network class TestPalPlot(object): """Test the function that visualizes a color palette.""" def test_palplot_size(self): pal4 = color_palette("husl", 4) misc.palplot(pal4) size4 = plt.gcf().get_size_inches() nt.assert_equal(tuple(size4), (4, 1)) pal5 = color_palette("husl", 5) misc.palplot(pal5) size5 = plt.gcf().get_size_inches() nt.assert_equal(tuple(size5), (5, 1)) palbig = color_palette("husl", 3) misc.palplot(palbig, 2) sizebig = plt.gcf().get_size_inches() nt.assert_equal(tuple(sizebig), (6, 2)) class TestDogPlot(object): @_network(url="https://github.com/mwaskom/seaborn-data") def test_dogplot(self): misc.dogplot() ax = plt.gca() assert len(ax.images) == 1 seaborn-0.10.0/seaborn/tests/test_palettes.py000066400000000000000000000300111361256634400212530ustar00rootroot00000000000000import colorsys import numpy as np import matplotlib as mpl import pytest import nose.tools as nt import numpy.testing as npt import matplotlib.pyplot as plt from .. 
import palettes, utils, rcmod from ..external import husl from ..colors import xkcd_rgb, crayons from distutils.version import LooseVersion mpl_ge_150 = LooseVersion(mpl.__version__) >= '1.5.0' class TestColorPalettes(object): def test_current_palette(self): pal = palettes.color_palette(["red", "blue", "green"]) rcmod.set_palette(pal) assert pal == utils.get_color_cycle() rcmod.set() def test_palette_context(self): default_pal = palettes.color_palette() context_pal = palettes.color_palette("muted") with palettes.color_palette(context_pal): nt.assert_equal(utils.get_color_cycle(), context_pal) nt.assert_equal(utils.get_color_cycle(), default_pal) def test_big_palette_context(self): original_pal = palettes.color_palette("deep", n_colors=8) context_pal = palettes.color_palette("husl", 10) rcmod.set_palette(original_pal) with palettes.color_palette(context_pal, 10): nt.assert_equal(utils.get_color_cycle(), context_pal) nt.assert_equal(utils.get_color_cycle(), original_pal) # Reset default rcmod.set() def test_palette_size(self): pal = palettes.color_palette("deep") assert len(pal) == palettes.QUAL_PALETTE_SIZES["deep"] pal = palettes.color_palette("pastel6") assert len(pal) == palettes.QUAL_PALETTE_SIZES["pastel6"] pal = palettes.color_palette("Set3") assert len(pal) == palettes.QUAL_PALETTE_SIZES["Set3"] pal = palettes.color_palette("husl") assert len(pal) == 6 pal = palettes.color_palette("Greens") assert len(pal) == 6 def test_seaborn_palettes(self): pals = "deep", "muted", "pastel", "bright", "dark", "colorblind" for name in pals: full = palettes.color_palette(name, 10).as_hex() short = palettes.color_palette(name + "6", 6).as_hex() b, _, g, r, m, _, _, _, y, c = full assert [b, g, r, m, y, c] == list(short) def test_hls_palette(self): hls_pal1 = palettes.hls_palette() hls_pal2 = palettes.color_palette("hls") npt.assert_array_equal(hls_pal1, hls_pal2) def test_husl_palette(self): husl_pal1 = palettes.husl_palette() husl_pal2 = palettes.color_palette("husl") npt.assert_array_equal(husl_pal1, husl_pal2) def test_mpl_palette(self): mpl_pal1 = palettes.mpl_palette("Reds") mpl_pal2 = palettes.color_palette("Reds") npt.assert_array_equal(mpl_pal1, mpl_pal2) def test_mpl_dark_palette(self): mpl_pal1 = palettes.mpl_palette("Blues_d") mpl_pal2 = palettes.color_palette("Blues_d") npt.assert_array_equal(mpl_pal1, mpl_pal2) def test_bad_palette_name(self): with nt.assert_raises(ValueError): palettes.color_palette("IAmNotAPalette") def test_terrible_palette_name(self): with nt.assert_raises(ValueError): palettes.color_palette("jet") def test_bad_palette_colors(self): pal = ["red", "blue", "iamnotacolor"] with nt.assert_raises(ValueError): palettes.color_palette(pal) def test_palette_desat(self): pal1 = palettes.husl_palette(6) pal1 = [utils.desaturate(c, .5) for c in pal1] pal2 = palettes.color_palette("husl", desat=.5) npt.assert_array_equal(pal1, pal2) def test_palette_is_list_of_tuples(self): pal_in = np.array(["red", "blue", "green"]) pal_out = palettes.color_palette(pal_in, 3) nt.assert_is_instance(pal_out, list) nt.assert_is_instance(pal_out[0], tuple) nt.assert_is_instance(pal_out[0][0], float) nt.assert_equal(len(pal_out[0]), 3) def test_palette_cycles(self): deep = palettes.color_palette("deep6") double_deep = palettes.color_palette("deep6", 12) nt.assert_equal(double_deep, deep + deep) def test_hls_values(self): pal1 = palettes.hls_palette(6, h=0) pal2 = palettes.hls_palette(6, h=.5) pal2 = pal2[3:] + pal2[:3] npt.assert_array_almost_equal(pal1, pal2) pal_dark = palettes.hls_palette(5, l=.2) 
# noqa pal_bright = palettes.hls_palette(5, l=.8) # noqa npt.assert_array_less(list(map(sum, pal_dark)), list(map(sum, pal_bright))) pal_flat = palettes.hls_palette(5, s=.1) pal_bold = palettes.hls_palette(5, s=.9) npt.assert_array_less(list(map(np.std, pal_flat)), list(map(np.std, pal_bold))) def test_husl_values(self): pal1 = palettes.husl_palette(6, h=0) pal2 = palettes.husl_palette(6, h=.5) pal2 = pal2[3:] + pal2[:3] npt.assert_array_almost_equal(pal1, pal2) pal_dark = palettes.husl_palette(5, l=.2) # noqa pal_bright = palettes.husl_palette(5, l=.8) # noqa npt.assert_array_less(list(map(sum, pal_dark)), list(map(sum, pal_bright))) pal_flat = palettes.husl_palette(5, s=.1) pal_bold = palettes.husl_palette(5, s=.9) npt.assert_array_less(list(map(np.std, pal_flat)), list(map(np.std, pal_bold))) def test_cbrewer_qual(self): pal_short = palettes.mpl_palette("Set1", 4) pal_long = palettes.mpl_palette("Set1", 6) nt.assert_equal(pal_short, pal_long[:4]) pal_full = palettes.mpl_palette("Set2", 8) pal_long = palettes.mpl_palette("Set2", 10) nt.assert_equal(pal_full, pal_long[:8]) def test_mpl_reversal(self): pal_forward = palettes.mpl_palette("BuPu", 6) pal_reverse = palettes.mpl_palette("BuPu_r", 6) npt.assert_array_almost_equal(pal_forward, pal_reverse[::-1]) def test_rgb_from_hls(self): color = .5, .8, .4 rgb_got = palettes._color_to_rgb(color, "hls") rgb_want = colorsys.hls_to_rgb(*color) nt.assert_equal(rgb_got, rgb_want) def test_rgb_from_husl(self): color = 120, 50, 40 rgb_got = palettes._color_to_rgb(color, "husl") rgb_want = husl.husl_to_rgb(*color) nt.assert_equal(rgb_got, rgb_want) def test_rgb_from_xkcd(self): color = "dull red" rgb_got = palettes._color_to_rgb(color, "xkcd") rgb_want = xkcd_rgb[color] nt.assert_equal(rgb_got, rgb_want) def test_light_palette(self): pal_forward = palettes.light_palette("red") pal_reverse = palettes.light_palette("red", reverse=True) assert np.allclose(pal_forward, pal_reverse[::-1]) red = mpl.colors.colorConverter.to_rgb("red") nt.assert_equal(pal_forward[-1], red) pal_cmap = palettes.light_palette("blue", as_cmap=True) nt.assert_is_instance(pal_cmap, mpl.colors.LinearSegmentedColormap) def test_dark_palette(self): pal_forward = palettes.dark_palette("red") pal_reverse = palettes.dark_palette("red", reverse=True) assert np.allclose(pal_forward, pal_reverse[::-1]) red = mpl.colors.colorConverter.to_rgb("red") assert pal_forward[-1] == red pal_cmap = palettes.dark_palette("blue", as_cmap=True) assert isinstance(pal_cmap, mpl.colors.LinearSegmentedColormap) def test_diverging_palette(self): h_neg, h_pos = 100, 200 sat, lum = 70, 50 args = h_neg, h_pos, sat, lum n = 12 pal = palettes.diverging_palette(*args, n=n) neg_pal = palettes.light_palette((h_neg, sat, lum), int(n // 2), input="husl") pos_pal = palettes.light_palette((h_pos, sat, lum), int(n // 2), input="husl") assert len(pal) == n assert pal[0] == neg_pal[-1] assert pal[-1] == pos_pal[-1] pal_dark = palettes.diverging_palette(*args, n=n, center="dark") assert np.mean(pal[int(n / 2)]) > np.mean(pal_dark[int(n / 2)]) pal_cmap = palettes.diverging_palette(*args, as_cmap=True) assert isinstance(pal_cmap, mpl.colors.LinearSegmentedColormap) def test_blend_palette(self): colors = ["red", "yellow", "white"] pal_cmap = palettes.blend_palette(colors, as_cmap=True) nt.assert_is_instance(pal_cmap, mpl.colors.LinearSegmentedColormap) def test_cubehelix_against_matplotlib(self): x = np.linspace(0, 1, 8) mpl_pal = mpl.cm.cubehelix(x)[:, :3].tolist() sns_pal = palettes.cubehelix_palette(8, start=0.5, rot=-1.5, 
hue=1, dark=0, light=1, reverse=True) nt.assert_list_equal(sns_pal, mpl_pal) def test_cubehelix_n_colors(self): for n in [3, 5, 8]: pal = palettes.cubehelix_palette(n) nt.assert_equal(len(pal), n) def test_cubehelix_reverse(self): pal_forward = palettes.cubehelix_palette() pal_reverse = palettes.cubehelix_palette(reverse=True) nt.assert_list_equal(pal_forward, pal_reverse[::-1]) def test_cubehelix_cmap(self): cmap = palettes.cubehelix_palette(as_cmap=True) nt.assert_is_instance(cmap, mpl.colors.ListedColormap) pal = palettes.cubehelix_palette() x = np.linspace(0, 1, 6) npt.assert_array_equal(cmap(x)[:, :3], pal) cmap_rev = palettes.cubehelix_palette(as_cmap=True, reverse=True) x = np.linspace(0, 1, 6) pal_forward = cmap(x).tolist() pal_reverse = cmap_rev(x[::-1]).tolist() nt.assert_list_equal(pal_forward, pal_reverse) def test_cubehelix_code(self): color_palette = palettes.color_palette cubehelix_palette = palettes.cubehelix_palette pal1 = color_palette("ch:", 8) pal2 = color_palette(cubehelix_palette(8)) assert pal1 == pal2 pal1 = color_palette("ch:.5, -.25,hue = .5,light=.75", 8) pal2 = color_palette(cubehelix_palette(8, .5, -.25, hue=.5, light=.75)) assert pal1 == pal2 pal1 = color_palette("ch:h=1,r=.5", 9) pal2 = color_palette(cubehelix_palette(9, hue=1, rot=.5)) assert pal1 == pal2 pal1 = color_palette("ch:_r", 6) pal2 = color_palette(cubehelix_palette(6, reverse=True)) assert pal1 == pal2 def test_xkcd_palette(self): names = list(xkcd_rgb.keys())[10:15] colors = palettes.xkcd_palette(names) for name, color in zip(names, colors): as_hex = mpl.colors.rgb2hex(color) nt.assert_equal(as_hex, xkcd_rgb[name]) def test_crayon_palette(self): names = list(crayons.keys())[10:15] colors = palettes.crayon_palette(names) for name, color in zip(names, colors): as_hex = mpl.colors.rgb2hex(color) nt.assert_equal(as_hex, crayons[name].lower()) def test_color_codes(self): palettes.set_color_codes("deep") colors = palettes.color_palette("deep6") + [".1"] for code, color in zip("bgrmyck", colors): rgb_want = mpl.colors.colorConverter.to_rgb(color) rgb_got = mpl.colors.colorConverter.to_rgb(code) nt.assert_equal(rgb_want, rgb_got) palettes.set_color_codes("reset") with pytest.raises(ValueError): palettes.set_color_codes("Set1") def test_as_hex(self): pal = palettes.color_palette("deep") for rgb, hex in zip(pal, pal.as_hex()): nt.assert_equal(mpl.colors.rgb2hex(rgb), hex) def test_preserved_palette_length(self): pal_in = palettes.color_palette("Set1", 10) pal_out = palettes.color_palette(pal_in) nt.assert_equal(pal_in, pal_out) def test_get_color_cycle(self): if mpl_ge_150: colors = [(1., 0., 0.), (0, 1., 0.)] prop_cycle = plt.cycler(color=colors) with plt.rc_context({"axes.prop_cycle": prop_cycle}): result = utils.get_color_cycle() assert result == colors seaborn-0.10.0/seaborn/tests/test_rcmod.py000066400000000000000000000206231361256634400205460ustar00rootroot00000000000000import numpy as np import matplotlib as mpl from distutils.version import LooseVersion import nose import matplotlib.pyplot as plt import nose.tools as nt import numpy.testing as npt from .. 
import rcmod, palettes, utils class RCParamTester(object): def flatten_list(self, orig_list): iter_list = map(np.atleast_1d, orig_list) flat_list = [item for sublist in iter_list for item in sublist] return flat_list def assert_rc_params(self, params): for k, v in params.items(): if isinstance(v, np.ndarray): npt.assert_array_equal(mpl.rcParams[k], v) else: nt.assert_equal((k, mpl.rcParams[k]), (k, v)) class TestAxesStyle(RCParamTester): styles = ["white", "dark", "whitegrid", "darkgrid", "ticks"] def test_default_return(self): current = rcmod.axes_style() self.assert_rc_params(current) def test_key_usage(self): _style_keys = set(rcmod._style_keys) for style in self.styles: nt.assert_true(not set(rcmod.axes_style(style)) ^ _style_keys) def test_bad_style(self): with nt.assert_raises(ValueError): rcmod.axes_style("i_am_not_a_style") def test_rc_override(self): rc = {"axes.facecolor": "blue", "foo.notaparam": "bar"} out = rcmod.axes_style("darkgrid", rc) nt.assert_equal(out["axes.facecolor"], "blue") nt.assert_not_in("foo.notaparam", out) def test_set_style(self): for style in self.styles: style_dict = rcmod.axes_style(style) rcmod.set_style(style) self.assert_rc_params(style_dict) def test_style_context_manager(self): rcmod.set_style("darkgrid") orig_params = rcmod.axes_style() context_params = rcmod.axes_style("whitegrid") with rcmod.axes_style("whitegrid"): self.assert_rc_params(context_params) self.assert_rc_params(orig_params) @rcmod.axes_style("whitegrid") def func(): self.assert_rc_params(context_params) func() self.assert_rc_params(orig_params) def test_style_context_independence(self): nt.assert_true(set(rcmod._style_keys) ^ set(rcmod._context_keys)) def test_set_rc(self): rcmod.set(rc={"lines.linewidth": 4}) nt.assert_equal(mpl.rcParams["lines.linewidth"], 4) rcmod.set() def test_set_with_palette(self): rcmod.reset_orig() rcmod.set(palette="deep") assert utils.get_color_cycle() == palettes.color_palette("deep", 10) rcmod.reset_orig() rcmod.set(palette="deep", color_codes=False) assert utils.get_color_cycle() == palettes.color_palette("deep", 10) rcmod.reset_orig() pal = palettes.color_palette("deep") rcmod.set(palette=pal) assert utils.get_color_cycle() == palettes.color_palette("deep", 10) rcmod.reset_orig() rcmod.set(palette=pal, color_codes=False) assert utils.get_color_cycle() == palettes.color_palette("deep", 10) rcmod.reset_orig() rcmod.set() def test_reset_defaults(self): # Changes to the rc parameters make this test hard to manage # on older versions of matplotlib, so we'll skip it if LooseVersion(mpl.__version__) < LooseVersion("1.3"): raise nose.SkipTest rcmod.reset_defaults() self.assert_rc_params(mpl.rcParamsDefault) rcmod.set() def test_reset_orig(self): # Changes to the rc parameters make this test hard to manage # on older versions of matplotlib, so we'll skip it if LooseVersion(mpl.__version__) < LooseVersion("1.3"): raise nose.SkipTest rcmod.reset_orig() self.assert_rc_params(mpl.rcParamsOrig) rcmod.set() class TestPlottingContext(RCParamTester): contexts = ["paper", "notebook", "talk", "poster"] def test_default_return(self): current = rcmod.plotting_context() self.assert_rc_params(current) def test_key_usage(self): _context_keys = set(rcmod._context_keys) for context in self.contexts: missing = set(rcmod.plotting_context(context)) ^ _context_keys nt.assert_true(not missing) def test_bad_context(self): with nt.assert_raises(ValueError): rcmod.plotting_context("i_am_not_a_context") def test_font_scale(self): notebook_ref = rcmod.plotting_context("notebook") 
notebook_big = rcmod.plotting_context("notebook", 2) font_keys = ["axes.labelsize", "axes.titlesize", "legend.fontsize", "xtick.labelsize", "ytick.labelsize", "font.size"] for k in font_keys: nt.assert_equal(notebook_ref[k] * 2, notebook_big[k]) def test_rc_override(self): key, val = "grid.linewidth", 5 rc = {key: val, "foo": "bar"} out = rcmod.plotting_context("talk", rc=rc) nt.assert_equal(out[key], val) nt.assert_not_in("foo", out) def test_set_context(self): for context in self.contexts: context_dict = rcmod.plotting_context(context) rcmod.set_context(context) self.assert_rc_params(context_dict) def test_context_context_manager(self): rcmod.set_context("notebook") orig_params = rcmod.plotting_context() context_params = rcmod.plotting_context("paper") with rcmod.plotting_context("paper"): self.assert_rc_params(context_params) self.assert_rc_params(orig_params) @rcmod.plotting_context("paper") def func(): self.assert_rc_params(context_params) func() self.assert_rc_params(orig_params) class TestPalette(object): def test_set_palette(self): rcmod.set_palette("deep") assert utils.get_color_cycle() == palettes.color_palette("deep", 10) rcmod.set_palette("pastel6") assert utils.get_color_cycle() == palettes.color_palette("pastel6", 6) rcmod.set_palette("dark", 4) assert utils.get_color_cycle() == palettes.color_palette("dark", 4) rcmod.set_palette("Set2", color_codes=True) assert utils.get_color_cycle() == palettes.color_palette("Set2", 8) class TestFonts(object): def test_set_font(self): rcmod.set(font="Verdana") _, ax = plt.subplots() ax.set_xlabel("foo") try: nt.assert_equal(ax.xaxis.label.get_fontname(), "Verdana") except AssertionError: if has_verdana(): raise else: raise nose.SkipTest("Verdana font is not present") finally: rcmod.set() def test_set_serif_font(self): rcmod.set(font="serif") _, ax = plt.subplots() ax.set_xlabel("foo") nt.assert_in(ax.xaxis.label.get_fontname(), mpl.rcParams["font.serif"]) rcmod.set() def test_different_sans_serif(self): if LooseVersion(mpl.__version__) < LooseVersion("1.4"): raise nose.SkipTest rcmod.set() rcmod.set_style(rc={"font.sans-serif": ["Verdana"]}) _, ax = plt.subplots() ax.set_xlabel("foo") try: nt.assert_equal(ax.xaxis.label.get_fontname(), "Verdana") except AssertionError: if has_verdana(): raise else: raise nose.SkipTest("Verdana font is not present") finally: rcmod.set() def has_verdana(): """Helper to verify if Verdana font is present""" # This import is relatively lengthy, so to prevent its import for # testing other tests in this module not requiring this knowledge, # import font_manager here import matplotlib.font_manager as mplfm try: verdana_font = mplfm.findfont('Verdana', fallback_to_default=False) except: # noqa # if https://github.com/matplotlib/matplotlib/pull/3435 # gets accepted return False # otherwise check if not matching the logic for a 'default' one try: unlikely_font = mplfm.findfont("very_unlikely_to_exist1234", fallback_to_default=False) except: # noqa # if matched verdana but not unlikely, Verdana must exist return True # otherwise -- if they match, must be the same default return verdana_font != unlikely_font seaborn-0.10.0/seaborn/tests/test_regression.py000066400000000000000000000521561361256634400216300ustar00rootroot00000000000000import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt import pandas as pd import pytest import nose.tools as nt import numpy.testing as npt try: import pandas.testing as pdt except ImportError: import pandas.util.testing as pdt from nose import SkipTest from 
distutils.version import LooseVersion try: import statsmodels.regression.linear_model as smlm _no_statsmodels = False except ImportError: _no_statsmodels = True from .. import regression as lm from ..palettes import color_palette rs = np.random.RandomState(0) class TestLinearPlotter(object): rs = np.random.RandomState(77) df = pd.DataFrame(dict(x=rs.normal(size=60), d=rs.randint(-2, 3, 60), y=rs.gamma(4, size=60), s=np.tile(list("abcdefghij"), 6))) df["z"] = df.y + rs.randn(60) df["y_na"] = df.y.copy() df.loc[[10, 20, 30], 'y_na'] = np.nan def test_establish_variables_from_frame(self): p = lm._LinearPlotter() p.establish_variables(self.df, x="x", y="y") pdt.assert_series_equal(p.x, self.df.x) pdt.assert_series_equal(p.y, self.df.y) pdt.assert_frame_equal(p.data, self.df) def test_establish_variables_from_series(self): p = lm._LinearPlotter() p.establish_variables(None, x=self.df.x, y=self.df.y) pdt.assert_series_equal(p.x, self.df.x) pdt.assert_series_equal(p.y, self.df.y) nt.assert_is(p.data, None) def test_establish_variables_from_array(self): p = lm._LinearPlotter() p.establish_variables(None, x=self.df.x.values, y=self.df.y.values) npt.assert_array_equal(p.x, self.df.x) npt.assert_array_equal(p.y, self.df.y) nt.assert_is(p.data, None) def test_establish_variables_from_lists(self): p = lm._LinearPlotter() p.establish_variables(None, x=self.df.x.values.tolist(), y=self.df.y.values.tolist()) npt.assert_array_equal(p.x, self.df.x) npt.assert_array_equal(p.y, self.df.y) nt.assert_is(p.data, None) def test_establish_variables_from_mix(self): p = lm._LinearPlotter() p.establish_variables(self.df, x="x", y=self.df.y) pdt.assert_series_equal(p.x, self.df.x) pdt.assert_series_equal(p.y, self.df.y) pdt.assert_frame_equal(p.data, self.df) def test_establish_variables_from_bad(self): p = lm._LinearPlotter() with nt.assert_raises(ValueError): p.establish_variables(None, x="x", y=self.df.y) def test_dropna(self): p = lm._LinearPlotter() p.establish_variables(self.df, x="x", y_na="y_na") pdt.assert_series_equal(p.x, self.df.x) pdt.assert_series_equal(p.y_na, self.df.y_na) p.dropna("x", "y_na") mask = self.df.y_na.notnull() pdt.assert_series_equal(p.x, self.df.x[mask]) pdt.assert_series_equal(p.y_na, self.df.y_na[mask]) class TestRegressionPlotter(object): rs = np.random.RandomState(49) grid = np.linspace(-3, 3, 30) n_boot = 100 bins_numeric = 3 bins_given = [-1, 0, 1] df = pd.DataFrame(dict(x=rs.normal(size=60), d=rs.randint(-2, 3, 60), y=rs.gamma(4, size=60), s=np.tile(list(range(6)), 10))) df["z"] = df.y + rs.randn(60) df["y_na"] = df.y.copy() bw_err = rs.randn(6)[df.s.values] * 2 df.y += bw_err p = 1 / (1 + np.exp(-(df.x * 2 + rs.randn(60)))) df["c"] = [rs.binomial(1, p_i) for p_i in p] df.loc[[10, 20, 30], 'y_na'] = np.nan def test_variables_from_frame(self): p = lm._RegressionPlotter("x", "y", data=self.df, units="s") pdt.assert_series_equal(p.x, self.df.x) pdt.assert_series_equal(p.y, self.df.y) pdt.assert_series_equal(p.units, self.df.s) pdt.assert_frame_equal(p.data, self.df) def test_variables_from_series(self): p = lm._RegressionPlotter(self.df.x, self.df.y, units=self.df.s) npt.assert_array_equal(p.x, self.df.x) npt.assert_array_equal(p.y, self.df.y) npt.assert_array_equal(p.units, self.df.s) nt.assert_is(p.data, None) def test_variables_from_mix(self): p = lm._RegressionPlotter("x", self.df.y + 1, data=self.df) npt.assert_array_equal(p.x, self.df.x) npt.assert_array_equal(p.y, self.df.y + 1) pdt.assert_frame_equal(p.data, self.df) def test_variables_must_be_1d(self): array_2d = 
np.random.randn(20, 2) array_1d = np.random.randn(20) with pytest.raises(ValueError): lm._RegressionPlotter(array_2d, array_1d) with pytest.raises(ValueError): lm._RegressionPlotter(array_1d, array_2d) def test_dropna(self): p = lm._RegressionPlotter("x", "y_na", data=self.df) nt.assert_equal(len(p.x), pd.notnull(self.df.y_na).sum()) p = lm._RegressionPlotter("x", "y_na", data=self.df, dropna=False) nt.assert_equal(len(p.x), len(self.df.y_na)) def test_ci(self): p = lm._RegressionPlotter("x", "y", data=self.df, ci=95) nt.assert_equal(p.ci, 95) nt.assert_equal(p.x_ci, 95) p = lm._RegressionPlotter("x", "y", data=self.df, ci=95, x_ci=68) nt.assert_equal(p.ci, 95) nt.assert_equal(p.x_ci, 68) p = lm._RegressionPlotter("x", "y", data=self.df, ci=95, x_ci="sd") nt.assert_equal(p.ci, 95) nt.assert_equal(p.x_ci, "sd") @pytest.mark.skipif(_no_statsmodels, reason="no statsmodels") def test_fast_regression(self): p = lm._RegressionPlotter("x", "y", data=self.df, n_boot=self.n_boot) # Fit with the "fast" function, which just does linear algebra yhat_fast, _ = p.fit_fast(self.grid) # Fit using the statsmodels function with an OLS model yhat_smod, _ = p.fit_statsmodels(self.grid, smlm.OLS) # Compare the vector of y_hat values npt.assert_array_almost_equal(yhat_fast, yhat_smod) @pytest.mark.skipif(_no_statsmodels, reason="no statsmodels") def test_regress_poly(self): p = lm._RegressionPlotter("x", "y", data=self.df, n_boot=self.n_boot) # Fit an first-order polynomial yhat_poly, _ = p.fit_poly(self.grid, 1) # Fit using the statsmodels function with an OLS model yhat_smod, _ = p.fit_statsmodels(self.grid, smlm.OLS) # Compare the vector of y_hat values npt.assert_array_almost_equal(yhat_poly, yhat_smod) def test_regress_logx(self): x = np.arange(1, 10) y = np.arange(1, 10) grid = np.linspace(1, 10, 100) p = lm._RegressionPlotter(x, y, n_boot=self.n_boot) yhat_lin, _ = p.fit_fast(grid) yhat_log, _ = p.fit_logx(grid) nt.assert_greater(yhat_lin[0], yhat_log[0]) nt.assert_greater(yhat_log[20], yhat_lin[20]) nt.assert_greater(yhat_lin[90], yhat_log[90]) @pytest.mark.skipif(_no_statsmodels, reason="no statsmodels") def test_regress_n_boot(self): p = lm._RegressionPlotter("x", "y", data=self.df, n_boot=self.n_boot) # Fast (linear algebra) version _, boots_fast = p.fit_fast(self.grid) npt.assert_equal(boots_fast.shape, (self.n_boot, self.grid.size)) # Slower (np.polyfit) version _, boots_poly = p.fit_poly(self.grid, 1) npt.assert_equal(boots_poly.shape, (self.n_boot, self.grid.size)) # Slowest (statsmodels) version _, boots_smod = p.fit_statsmodels(self.grid, smlm.OLS) npt.assert_equal(boots_smod.shape, (self.n_boot, self.grid.size)) @pytest.mark.skipif(_no_statsmodels, reason="no statsmodels") def test_regress_without_bootstrap(self): p = lm._RegressionPlotter("x", "y", data=self.df, n_boot=self.n_boot, ci=None) # Fast (linear algebra) version _, boots_fast = p.fit_fast(self.grid) nt.assert_is(boots_fast, None) # Slower (np.polyfit) version _, boots_poly = p.fit_poly(self.grid, 1) nt.assert_is(boots_poly, None) # Slowest (statsmodels) version _, boots_smod = p.fit_statsmodels(self.grid, smlm.OLS) nt.assert_is(boots_smod, None) def test_regress_bootstrap_seed(self): seed = 200 p1 = lm._RegressionPlotter("x", "y", data=self.df, n_boot=self.n_boot, seed=seed) p2 = lm._RegressionPlotter("x", "y", data=self.df, n_boot=self.n_boot, seed=seed) _, boots1 = p1.fit_fast(self.grid) _, boots2 = p2.fit_fast(self.grid) npt.assert_array_equal(boots1, boots2) def test_numeric_bins(self): p = lm._RegressionPlotter(self.df.x, 
self.df.y) x_binned, bins = p.bin_predictor(self.bins_numeric) npt.assert_equal(len(bins), self.bins_numeric) npt.assert_array_equal(np.unique(x_binned), bins) def test_provided_bins(self): p = lm._RegressionPlotter(self.df.x, self.df.y) x_binned, bins = p.bin_predictor(self.bins_given) npt.assert_array_equal(np.unique(x_binned), self.bins_given) def test_bin_results(self): p = lm._RegressionPlotter(self.df.x, self.df.y) x_binned, bins = p.bin_predictor(self.bins_given) nt.assert_greater(self.df.x[x_binned == 0].min(), self.df.x[x_binned == -1].max()) nt.assert_greater(self.df.x[x_binned == 1].min(), self.df.x[x_binned == 0].max()) def test_scatter_data(self): p = lm._RegressionPlotter(self.df.x, self.df.y) x, y = p.scatter_data npt.assert_array_equal(x, self.df.x) npt.assert_array_equal(y, self.df.y) p = lm._RegressionPlotter(self.df.d, self.df.y) x, y = p.scatter_data npt.assert_array_equal(x, self.df.d) npt.assert_array_equal(y, self.df.y) p = lm._RegressionPlotter(self.df.d, self.df.y, x_jitter=.1) x, y = p.scatter_data nt.assert_true((x != self.df.d).any()) npt.assert_array_less(np.abs(self.df.d - x), np.repeat(.1, len(x))) npt.assert_array_equal(y, self.df.y) p = lm._RegressionPlotter(self.df.d, self.df.y, y_jitter=.05) x, y = p.scatter_data npt.assert_array_equal(x, self.df.d) npt.assert_array_less(np.abs(self.df.y - y), np.repeat(.1, len(y))) def test_estimate_data(self): p = lm._RegressionPlotter(self.df.d, self.df.y, x_estimator=np.mean) x, y, ci = p.estimate_data npt.assert_array_equal(x, np.sort(np.unique(self.df.d))) npt.assert_array_almost_equal(y, self.df.groupby("d").y.mean()) npt.assert_array_less(np.array(ci)[:, 0], y) npt.assert_array_less(y, np.array(ci)[:, 1]) def test_estimate_cis(self): seed = 123 p = lm._RegressionPlotter(self.df.d, self.df.y, x_estimator=np.mean, ci=95, seed=seed) _, _, ci_big = p.estimate_data p = lm._RegressionPlotter(self.df.d, self.df.y, x_estimator=np.mean, ci=50, seed=seed) _, _, ci_wee = p.estimate_data npt.assert_array_less(np.diff(ci_wee), np.diff(ci_big)) p = lm._RegressionPlotter(self.df.d, self.df.y, x_estimator=np.mean, ci=None) _, _, ci_nil = p.estimate_data npt.assert_array_equal(ci_nil, [None] * len(ci_nil)) def test_estimate_units(self): # Seed the RNG locally seed = 345 p = lm._RegressionPlotter("x", "y", data=self.df, units="s", seed=seed, x_bins=3) _, _, ci_big = p.estimate_data ci_big = np.diff(ci_big, axis=1) p = lm._RegressionPlotter("x", "y", data=self.df, seed=seed, x_bins=3) _, _, ci_wee = p.estimate_data ci_wee = np.diff(ci_wee, axis=1) npt.assert_array_less(ci_wee, ci_big) def test_partial(self): x = self.rs.randn(100) y = x + self.rs.randn(100) z = x + self.rs.randn(100) p = lm._RegressionPlotter(y, z) _, r_orig = np.corrcoef(p.x, p.y)[0] p = lm._RegressionPlotter(y, z, y_partial=x) _, r_semipartial = np.corrcoef(p.x, p.y)[0] nt.assert_less(r_semipartial, r_orig) p = lm._RegressionPlotter(y, z, x_partial=x, y_partial=x) _, r_partial = np.corrcoef(p.x, p.y)[0] nt.assert_less(r_partial, r_orig) x = pd.Series(x) y = pd.Series(y) p = lm._RegressionPlotter(y, z, x_partial=x, y_partial=x) _, r_partial = np.corrcoef(p.x, p.y)[0] nt.assert_less(r_partial, r_orig) @pytest.mark.skipif(_no_statsmodels, reason="no statsmodels") def test_logistic_regression(self): p = lm._RegressionPlotter("x", "c", data=self.df, logistic=True, n_boot=self.n_boot) _, yhat, _ = p.fit_regression(x_range=(-3, 3)) npt.assert_array_less(yhat, 1) npt.assert_array_less(0, yhat) @pytest.mark.skipif(_no_statsmodels, reason="no statsmodels") def 
test_logistic_perfect_separation(self): y = self.df.x > self.df.x.mean() p = lm._RegressionPlotter("x", y, data=self.df, logistic=True, n_boot=10) with np.errstate(all="ignore"): _, yhat, _ = p.fit_regression(x_range=(-3, 3)) nt.assert_true(np.isnan(yhat).all()) @pytest.mark.skipif(_no_statsmodels, reason="no statsmodels") def test_robust_regression(self): p_ols = lm._RegressionPlotter("x", "y", data=self.df, n_boot=self.n_boot) _, ols_yhat, _ = p_ols.fit_regression(x_range=(-3, 3)) p_robust = lm._RegressionPlotter("x", "y", data=self.df, robust=True, n_boot=self.n_boot) _, robust_yhat, _ = p_robust.fit_regression(x_range=(-3, 3)) nt.assert_equal(len(ols_yhat), len(robust_yhat)) @pytest.mark.skipif(_no_statsmodels, reason="no statsmodels") def test_lowess_regression(self): p = lm._RegressionPlotter("x", "y", data=self.df, lowess=True) grid, yhat, err_bands = p.fit_regression(x_range=(-3, 3)) nt.assert_equal(len(grid), len(yhat)) nt.assert_is(err_bands, None) def test_regression_options(self): with nt.assert_raises(ValueError): lm._RegressionPlotter("x", "y", data=self.df, lowess=True, order=2) with nt.assert_raises(ValueError): lm._RegressionPlotter("x", "y", data=self.df, lowess=True, logistic=True) def test_regression_limits(self): f, ax = plt.subplots() ax.scatter(self.df.x, self.df.y) p = lm._RegressionPlotter("x", "y", data=self.df) grid, _, _ = p.fit_regression(ax) xlim = ax.get_xlim() nt.assert_equal(grid.min(), xlim[0]) nt.assert_equal(grid.max(), xlim[1]) p = lm._RegressionPlotter("x", "y", data=self.df, truncate=True) grid, _, _ = p.fit_regression() nt.assert_equal(grid.min(), self.df.x.min()) nt.assert_equal(grid.max(), self.df.x.max()) class TestRegressionPlots(object): rs = np.random.RandomState(56) df = pd.DataFrame(dict(x=rs.randn(90), y=rs.randn(90) + 5, z=rs.randint(0, 1, 90), g=np.repeat(list("abc"), 30), h=np.tile(list("xy"), 45), u=np.tile(np.arange(6), 15))) bw_err = rs.randn(6)[df.u.values] df.y += bw_err def test_regplot_basic(self): f, ax = plt.subplots() lm.regplot("x", "y", self.df) nt.assert_equal(len(ax.lines), 1) nt.assert_equal(len(ax.collections), 2) x, y = ax.collections[0].get_offsets().T npt.assert_array_equal(x, self.df.x) npt.assert_array_equal(y, self.df.y) def test_regplot_selective(self): f, ax = plt.subplots() ax = lm.regplot("x", "y", self.df, scatter=False, ax=ax) nt.assert_equal(len(ax.lines), 1) nt.assert_equal(len(ax.collections), 1) ax.clear() f, ax = plt.subplots() ax = lm.regplot("x", "y", self.df, fit_reg=False) nt.assert_equal(len(ax.lines), 0) nt.assert_equal(len(ax.collections), 1) ax.clear() f, ax = plt.subplots() ax = lm.regplot("x", "y", self.df, ci=None) nt.assert_equal(len(ax.lines), 1) nt.assert_equal(len(ax.collections), 1) ax.clear() def test_regplot_scatter_kws_alpha(self): f, ax = plt.subplots() color = np.array([[0.3, 0.8, 0.5, 0.5]]) ax = lm.regplot("x", "y", self.df, scatter_kws={'color': color}) nt.assert_is(ax.collections[0]._alpha, None) nt.assert_equal(ax.collections[0]._facecolors[0, 3], 0.5) f, ax = plt.subplots() color = np.array([[0.3, 0.8, 0.5]]) ax = lm.regplot("x", "y", self.df, scatter_kws={'color': color}) nt.assert_equal(ax.collections[0]._alpha, 0.8) f, ax = plt.subplots() color = np.array([[0.3, 0.8, 0.5]]) ax = lm.regplot("x", "y", self.df, scatter_kws={'color': color, 'alpha': 0.4}) nt.assert_equal(ax.collections[0]._alpha, 0.4) f, ax = plt.subplots() color = 'r' ax = lm.regplot("x", "y", self.df, scatter_kws={'color': color}) nt.assert_equal(ax.collections[0]._alpha, 0.8) def test_regplot_binned(self): ax = 
lm.regplot("x", "y", self.df, x_bins=5) nt.assert_equal(len(ax.lines), 6) nt.assert_equal(len(ax.collections), 2) def test_lmplot_basic(self): g = lm.lmplot("x", "y", self.df) ax = g.axes[0, 0] nt.assert_equal(len(ax.lines), 1) nt.assert_equal(len(ax.collections), 2) x, y = ax.collections[0].get_offsets().T npt.assert_array_equal(x, self.df.x) npt.assert_array_equal(y, self.df.y) def test_lmplot_hue(self): g = lm.lmplot("x", "y", data=self.df, hue="h") ax = g.axes[0, 0] nt.assert_equal(len(ax.lines), 2) nt.assert_equal(len(ax.collections), 4) def test_lmplot_markers(self): g1 = lm.lmplot("x", "y", data=self.df, hue="h", markers="s") nt.assert_equal(g1.hue_kws, {"marker": ["s", "s"]}) g2 = lm.lmplot("x", "y", data=self.df, hue="h", markers=["o", "s"]) nt.assert_equal(g2.hue_kws, {"marker": ["o", "s"]}) with nt.assert_raises(ValueError): lm.lmplot("x", "y", data=self.df, hue="h", markers=["o", "s", "d"]) def test_lmplot_marker_linewidths(self): if mpl.__version__ == "1.4.2": raise SkipTest g = lm.lmplot("x", "y", data=self.df, hue="h", fit_reg=False, markers=["o", "+"]) c = g.axes[0, 0].collections nt.assert_equal(c[1].get_linewidths()[0], mpl.rcParams["lines.linewidth"]) def test_lmplot_facets(self): g = lm.lmplot("x", "y", data=self.df, row="g", col="h") nt.assert_equal(g.axes.shape, (3, 2)) g = lm.lmplot("x", "y", data=self.df, col="u", col_wrap=4) nt.assert_equal(g.axes.shape, (6,)) g = lm.lmplot("x", "y", data=self.df, hue="h", col="u") nt.assert_equal(g.axes.shape, (1, 6)) def test_lmplot_hue_col_nolegend(self): g = lm.lmplot("x", "y", data=self.df, col="h", hue="h") nt.assert_is(g._legend, None) def test_lmplot_scatter_kws(self): g = lm.lmplot("x", "y", hue="h", data=self.df, ci=None) red_scatter, blue_scatter = g.axes[0, 0].collections red, blue = color_palette(n_colors=2) npt.assert_array_equal(red, red_scatter.get_facecolors()[0, :3]) npt.assert_array_equal(blue, blue_scatter.get_facecolors()[0, :3]) def test_residplot(self): x, y = self.df.x, self.df.y ax = lm.residplot(x, y) resid = y - np.polyval(np.polyfit(x, y, 1), x) x_plot, y_plot = ax.collections[0].get_offsets().T npt.assert_array_equal(x, x_plot) npt.assert_array_almost_equal(resid, y_plot) @pytest.mark.skipif(_no_statsmodels, reason="no statsmodels") def test_residplot_lowess(self): ax = lm.residplot("x", "y", self.df, lowess=True) nt.assert_equal(len(ax.lines), 2) x, y = ax.lines[1].get_xydata().T npt.assert_array_equal(x, np.sort(self.df.x)) def test_three_point_colors(self): x, y = np.random.randn(2, 3) ax = lm.regplot(x, y, color=(1, 0, 0)) color = ax.collections[0].get_facecolors() npt.assert_almost_equal(color[0, :3], (1, 0, 0)) @pytest.mark.skipif(LooseVersion(mpl.__version__) < "2.0", reason="not supported on old matplotlib") def test_regplot_xlim(self): f, ax = plt.subplots() x, y1, y2 = np.random.randn(3, 50) lm.regplot(x, y1, truncate=False) lm.regplot(x, y2, truncate=False) line1, line2 = ax.lines assert np.array_equal(line1.get_xdata(), line2.get_xdata()) seaborn-0.10.0/seaborn/tests/test_relational.py000066400000000000000000001736301361256634400216030ustar00rootroot00000000000000from __future__ import division from itertools import product import warnings import numpy as np import pandas as pd import matplotlib as mpl import matplotlib.pyplot as plt import pytest from .. 
import relational as rel from ..palettes import color_palette from ..utils import categorical_order, sort_df class TestRelationalPlotter(object): def scatter_rgbs(self, collections): rgbs = [] for col in collections: rgb = tuple(col.get_facecolor().squeeze()[:3]) rgbs.append(rgb) return rgbs def colors_equal(self, *args): equal = True for c1, c2 in zip(*args): c1 = mpl.colors.colorConverter.to_rgb(np.squeeze(c1)) c2 = mpl.colors.colorConverter.to_rgb(np.squeeze(c1)) equal &= c1 == c2 return equal def paths_equal(self, *args): equal = True for p1, p2 in zip(*args): equal &= np.array_equal(p1.vertices, p2.vertices) equal &= np.array_equal(p1.codes, p2.codes) return equal @pytest.fixture def wide_df(self): columns = list("abc") index = pd.Int64Index(np.arange(10, 50, 2), name="wide_index") values = np.random.randn(len(index), len(columns)) return pd.DataFrame(values, index=index, columns=columns) @pytest.fixture def wide_array(self): return np.random.randn(20, 3) @pytest.fixture def flat_array(self): return np.random.randn(20) @pytest.fixture def flat_series(self): index = pd.Int64Index(np.arange(10, 30), name="t") return pd.Series(np.random.randn(20), index, name="s") @pytest.fixture def wide_list(self): return [np.random.randn(20), np.random.randn(10)] @pytest.fixture def wide_list_of_series(self): return [pd.Series(np.random.randn(20), np.arange(20), name="a"), pd.Series(np.random.randn(10), np.arange(5, 15), name="b")] @pytest.fixture def long_df(self): n = 100 rs = np.random.RandomState() df = pd.DataFrame(dict( x=rs.randint(0, 20, n), y=rs.randn(n), a=np.take(list("abc"), rs.randint(0, 3, n)), b=np.take(list("mnop"), rs.randint(0, 4, n)), c=np.take(list([0, 1]), rs.randint(0, 2, n)), d=np.repeat(np.datetime64('2005-02-25'), n), s=np.take([2, 4, 8], rs.randint(0, 3, n)), f=np.take(list([0.2, 0.3]), rs.randint(0, 2, n)), )) df["s_cat"] = df["s"].astype("category") return df @pytest.fixture def repeated_df(self): n = 100 rs = np.random.RandomState() return pd.DataFrame(dict( x=np.tile(np.arange(n // 2), 2), y=rs.randn(n), a=np.take(list("abc"), rs.randint(0, 3, n)), u=np.repeat(np.arange(2), n // 2), )) @pytest.fixture def missing_df(self): n = 100 rs = np.random.RandomState() df = pd.DataFrame(dict( x=rs.randint(0, 20, n), y=rs.randn(n), a=np.take(list("abc"), rs.randint(0, 3, n)), b=np.take(list("mnop"), rs.randint(0, 4, n)), s=np.take([2, 4, 8], rs.randint(0, 3, n)), )) for col in df: idx = rs.permutation(df.index)[:10] df.loc[idx, col] = np.nan return df @pytest.fixture def null_column(self): return pd.Series(index=np.arange(20)) def test_wide_df_variables(self, wide_df): p = rel._RelationalPlotter() p.establish_variables(data=wide_df) assert p.input_format == "wide" assert p.semantics == ["x", "y", "hue", "style"] assert len(p.plot_data) == np.product(wide_df.shape) x = p.plot_data["x"] expected_x = np.tile(wide_df.index, wide_df.shape[1]) assert np.array_equal(x, expected_x) y = p.plot_data["y"] expected_y = wide_df.values.ravel(order="f") assert np.array_equal(y, expected_y) hue = p.plot_data["hue"] expected_hue = np.repeat(wide_df.columns.values, wide_df.shape[0]) assert np.array_equal(hue, expected_hue) style = p.plot_data["style"] expected_style = expected_hue assert np.array_equal(style, expected_style) assert p.plot_data["size"].isnull().all() assert p.x_label == wide_df.index.name assert p.y_label is None assert p.hue_label == wide_df.columns.name assert p.size_label is None assert p.style_label == wide_df.columns.name def test_wide_df_variables_check(self, wide_df): p = 
rel._RelationalPlotter() wide_df = wide_df.copy() wide_df.loc[:, "not_numeric"] = "a" with pytest.raises(ValueError): p.establish_variables(data=wide_df) def test_wide_array_variables(self, wide_array): p = rel._RelationalPlotter() p.establish_variables(data=wide_array) assert p.input_format == "wide" assert p.semantics == ["x", "y", "hue", "style"] assert len(p.plot_data) == np.product(wide_array.shape) nrow, ncol = wide_array.shape x = p.plot_data["x"] expected_x = np.tile(np.arange(nrow), ncol) assert np.array_equal(x, expected_x) y = p.plot_data["y"] expected_y = wide_array.ravel(order="f") assert np.array_equal(y, expected_y) hue = p.plot_data["hue"] expected_hue = np.repeat(np.arange(ncol), nrow) assert np.array_equal(hue, expected_hue) style = p.plot_data["style"] expected_style = expected_hue assert np.array_equal(style, expected_style) assert p.plot_data["size"].isnull().all() assert p.x_label is None assert p.y_label is None assert p.hue_label is None assert p.size_label is None assert p.style_label is None def test_flat_array_variables(self, flat_array): p = rel._RelationalPlotter() p.establish_variables(data=flat_array) assert p.input_format == "wide" assert p.semantics == ["x", "y"] assert len(p.plot_data) == np.product(flat_array.shape) x = p.plot_data["x"] expected_x = np.arange(flat_array.shape[0]) assert np.array_equal(x, expected_x) y = p.plot_data["y"] expected_y = flat_array assert np.array_equal(y, expected_y) assert p.plot_data["hue"].isnull().all() assert p.plot_data["style"].isnull().all() assert p.plot_data["size"].isnull().all() assert p.x_label is None assert p.y_label is None assert p.hue_label is None assert p.size_label is None assert p.style_label is None def test_flat_series_variables(self, flat_series): p = rel._RelationalPlotter() p.establish_variables(data=flat_series) assert p.input_format == "wide" assert p.semantics == ["x", "y"] assert len(p.plot_data) == len(flat_series) x = p.plot_data["x"] expected_x = flat_series.index assert np.array_equal(x, expected_x) y = p.plot_data["y"] expected_y = flat_series assert np.array_equal(y, expected_y) assert p.x_label is None assert p.y_label is None assert p.hue_label is None assert p.size_label is None assert p.style_label is None def test_wide_list_variables(self, wide_list): p = rel._RelationalPlotter() p.establish_variables(data=wide_list) assert p.input_format == "wide" assert p.semantics == ["x", "y", "hue", "style"] assert len(p.plot_data) == sum(len(l) for l in wide_list) x = p.plot_data["x"] expected_x = np.concatenate([np.arange(len(l)) for l in wide_list]) assert np.array_equal(x, expected_x) y = p.plot_data["y"] expected_y = np.concatenate(wide_list) assert np.array_equal(y, expected_y) hue = p.plot_data["hue"] expected_hue = np.concatenate([ np.ones_like(l) * i for i, l in enumerate(wide_list) ]) assert np.array_equal(hue, expected_hue) style = p.plot_data["style"] expected_style = expected_hue assert np.array_equal(style, expected_style) assert p.plot_data["size"].isnull().all() assert p.x_label is None assert p.y_label is None assert p.hue_label is None assert p.size_label is None assert p.style_label is None def test_wide_list_of_series_variables(self, wide_list_of_series): p = rel._RelationalPlotter() p.establish_variables(data=wide_list_of_series) assert p.input_format == "wide" assert p.semantics == ["x", "y", "hue", "style"] assert len(p.plot_data) == sum(len(l) for l in wide_list_of_series) x = p.plot_data["x"] expected_x = np.concatenate([s.index for s in wide_list_of_series]) assert 
np.array_equal(x, expected_x) y = p.plot_data["y"] expected_y = np.concatenate(wide_list_of_series) assert np.array_equal(y, expected_y) hue = p.plot_data["hue"] expected_hue = np.concatenate([ np.full(len(s), s.name, object) for s in wide_list_of_series ]) assert np.array_equal(hue, expected_hue) style = p.plot_data["style"] expected_style = expected_hue assert np.array_equal(style, expected_style) assert p.plot_data["size"].isnull().all() assert p.x_label is None assert p.y_label is None assert p.hue_label is None assert p.size_label is None assert p.style_label is None def test_long_df(self, long_df): p = rel._RelationalPlotter() p.establish_variables(x="x", y="y", data=long_df) assert p.input_format == "long" assert p.semantics == ["x", "y"] assert np.array_equal(p.plot_data["x"], long_df["x"]) assert np.array_equal(p.plot_data["y"], long_df["y"]) for col in ["hue", "style", "size"]: assert p.plot_data[col].isnull().all() assert (p.x_label, p.y_label) == ("x", "y") assert p.hue_label is None assert p.size_label is None assert p.style_label is None p.establish_variables(x=long_df.x, y="y", data=long_df) assert p.semantics == ["x", "y"] assert np.array_equal(p.plot_data["x"], long_df["x"]) assert np.array_equal(p.plot_data["y"], long_df["y"]) assert (p.x_label, p.y_label) == ("x", "y") p.establish_variables(x="x", y=long_df.y, data=long_df) assert p.semantics == ["x", "y"] assert np.array_equal(p.plot_data["x"], long_df["x"]) assert np.array_equal(p.plot_data["y"], long_df["y"]) assert (p.x_label, p.y_label) == ("x", "y") p.establish_variables(x="x", y="y", hue="a", data=long_df) assert p.semantics == ["x", "y", "hue"] assert np.array_equal(p.plot_data["hue"], long_df["a"]) for col in ["style", "size"]: assert p.plot_data[col].isnull().all() assert p.hue_label == "a" assert p.size_label is None and p.style_label is None p.establish_variables(x="x", y="y", hue="a", style="a", data=long_df) assert p.semantics == ["x", "y", "hue", "style"] assert np.array_equal(p.plot_data["hue"], long_df["a"]) assert np.array_equal(p.plot_data["style"], long_df["a"]) assert p.plot_data["size"].isnull().all() assert p.hue_label == p.style_label == "a" assert p.size_label is None p.establish_variables(x="x", y="y", hue="a", style="b", data=long_df) assert p.semantics == ["x", "y", "hue", "style"] assert np.array_equal(p.plot_data["hue"], long_df["a"]) assert np.array_equal(p.plot_data["style"], long_df["b"]) assert p.plot_data["size"].isnull().all() p.establish_variables(x="x", y="y", size="y", data=long_df) assert p.semantics == ["x", "y", "size"] assert np.array_equal(p.plot_data["size"], long_df["y"]) assert p.size_label == "y" assert p.hue_label is None and p.style_label is None def test_bad_input(self, long_df): p = rel._RelationalPlotter() with pytest.raises(ValueError): p.establish_variables(x=long_df.x) with pytest.raises(ValueError): p.establish_variables(y=long_df.y) with pytest.raises(ValueError): p.establish_variables(x="not_in_df", data=long_df) with pytest.raises(ValueError): p.establish_variables(x="x", y="not_in_df", data=long_df) with pytest.raises(ValueError): p.establish_variables(x="x", y="not_in_df", data=long_df) def test_empty_input(self): p = rel._RelationalPlotter() p.establish_variables(data=[]) p.establish_variables(data=np.array([])) p.establish_variables(data=pd.DataFrame()) p.establish_variables(x=[], y=[]) def test_units(self, repeated_df): p = rel._RelationalPlotter() p.establish_variables(x="x", y="y", units="u", data=repeated_df) assert np.array_equal(p.plot_data["units"], 
repeated_df["u"]) def test_parse_hue_null(self, wide_df, null_column): p = rel._LinePlotter(data=wide_df) p.parse_hue(null_column, "Blues", None, None) assert p.hue_levels == [None] assert p.palette == {} assert p.hue_type is None assert p.cmap is None def test_parse_hue_categorical(self, wide_df, long_df): p = rel._LinePlotter(data=wide_df) assert p.hue_levels == wide_df.columns.tolist() assert p.hue_type == "categorical" assert p.cmap is None # Test named palette palette = "Blues" expected_colors = color_palette(palette, wide_df.shape[1]) expected_palette = dict(zip(wide_df.columns, expected_colors)) p.parse_hue(p.plot_data.hue, palette, None, None) assert p.palette == expected_palette # Test list palette palette = color_palette("Reds", wide_df.shape[1]) p.parse_hue(p.plot_data.hue, palette, None, None) expected_palette = dict(zip(wide_df.columns, palette)) assert p.palette == expected_palette # Test dict palette colors = color_palette("Set1", 8) palette = dict(zip(wide_df.columns, colors)) p.parse_hue(p.plot_data.hue, palette, None, None) assert p.palette == palette # Test dict with missing keys palette = dict(zip(wide_df.columns[:-1], colors)) with pytest.raises(ValueError): p.parse_hue(p.plot_data.hue, palette, None, None) # Test list with wrong number of colors palette = colors[:-1] with pytest.raises(ValueError): p.parse_hue(p.plot_data.hue, palette, None, None) # Test hue order hue_order = ["a", "c", "d"] p.parse_hue(p.plot_data.hue, None, hue_order, None) assert p.hue_levels == hue_order # Test long data p = rel._LinePlotter(x="x", y="y", hue="a", data=long_df) assert p.hue_levels == categorical_order(long_df.a) assert p.hue_type == "categorical" assert p.cmap is None # Test default palette p.parse_hue(p.plot_data.hue, None, None, None) hue_levels = categorical_order(long_df.a) expected_colors = color_palette(n_colors=len(hue_levels)) expected_palette = dict(zip(hue_levels, expected_colors)) assert p.palette == expected_palette # Test default palette with many levels levels = pd.Series(list("abcdefghijklmnopqrstuvwxyz")) p.parse_hue(levels, None, None, None) expected_colors = color_palette("husl", n_colors=len(levels)) expected_palette = dict(zip(levels, expected_colors)) assert p.palette == expected_palette # Test binary data p = rel._LinePlotter(x="x", y="y", hue="c", data=long_df) assert p.hue_levels == [0, 1] assert p.hue_type == "categorical" df = long_df[long_df["c"] == 0] p = rel._LinePlotter(x="x", y="y", hue="c", data=df) assert p.hue_levels == [0] assert p.hue_type == "categorical" df = long_df[long_df["c"] == 1] p = rel._LinePlotter(x="x", y="y", hue="c", data=df) assert p.hue_levels == [1] assert p.hue_type == "categorical" # Test Timestamp data p = rel._LinePlotter(x="x", y="y", hue="d", data=long_df) assert p.hue_levels == [pd.Timestamp('2005-02-25')] assert p.hue_type == "categorical" # Test numeric data with category type p = rel._LinePlotter(x="x", y="y", hue="s_cat", data=long_df) assert p.hue_levels == categorical_order(long_df.s_cat) assert p.hue_type == "categorical" assert p.cmap is None # Test categorical palette specified for numeric data palette = "deep" p = rel._LinePlotter(x="x", y="y", hue="s", palette=palette, data=long_df) expected_colors = color_palette(palette, n_colors=len(levels)) hue_levels = categorical_order(long_df["s"]) expected_palette = dict(zip(hue_levels, expected_colors)) assert p.palette == expected_palette assert p.hue_type == "categorical" def test_parse_hue_numeric(self, long_df): p = rel._LinePlotter(x="x", y="y", hue="s", 
data=long_df) hue_levels = list(np.sort(long_df.s.unique())) assert p.hue_levels == hue_levels assert p.hue_type == "numeric" assert p.cmap.name == "seaborn_cubehelix" # Test named colormap palette = "Purples" p.parse_hue(p.plot_data.hue, palette, None, None) assert p.cmap is mpl.cm.get_cmap(palette) # Test colormap object palette = mpl.cm.get_cmap("Greens") p.parse_hue(p.plot_data.hue, palette, None, None) assert p.cmap is palette # Test cubehelix shorthand palette = "ch:2,0,light=.2" p.parse_hue(p.plot_data.hue, palette, None, None) assert isinstance(p.cmap, mpl.colors.ListedColormap) # Test default hue limits p.parse_hue(p.plot_data.hue, None, None, None) assert p.hue_limits == (p.plot_data.hue.min(), p.plot_data.hue.max()) # Test specified hue limits hue_norm = 1, 4 p.parse_hue(p.plot_data.hue, None, None, hue_norm) assert p.hue_limits == hue_norm assert isinstance(p.hue_norm, mpl.colors.Normalize) assert p.hue_norm.vmin == hue_norm[0] assert p.hue_norm.vmax == hue_norm[1] # Test Normalize object hue_norm = mpl.colors.PowerNorm(2, vmin=1, vmax=10) p.parse_hue(p.plot_data.hue, None, None, hue_norm) assert p.hue_limits == (hue_norm.vmin, hue_norm.vmax) assert p.hue_norm is hue_norm # Test default colormap values hmin, hmax = p.plot_data.hue.min(), p.plot_data.hue.max() p.parse_hue(p.plot_data.hue, None, None, None) assert p.palette[hmin] == pytest.approx(p.cmap(0.0)) assert p.palette[hmax] == pytest.approx(p.cmap(1.0)) # Test specified colormap values hue_norm = hmin - 1, hmax - 1 p.parse_hue(p.plot_data.hue, None, None, hue_norm) norm_min = (hmin - hue_norm[0]) / (hue_norm[1] - hue_norm[0]) assert p.palette[hmin] == pytest.approx(p.cmap(norm_min)) assert p.palette[hmax] == pytest.approx(p.cmap(1.0)) # Test list of colors hue_levels = list(np.sort(long_df.s.unique())) palette = color_palette("Blues", len(hue_levels)) p.parse_hue(p.plot_data.hue, palette, None, None) assert p.palette == dict(zip(hue_levels, palette)) palette = color_palette("Blues", len(hue_levels) + 1) with pytest.raises(ValueError): p.parse_hue(p.plot_data.hue, palette, None, None) # Test dictionary of colors palette = dict(zip(hue_levels, color_palette("Reds"))) p.parse_hue(p.plot_data.hue, palette, None, None) assert p.palette == palette palette.pop(hue_levels[0]) with pytest.raises(ValueError): p.parse_hue(p.plot_data.hue, palette, None, None) # Test invalid palette palette = "not_a_valid_palette" with pytest.raises(ValueError): p.parse_hue(p.plot_data.hue, palette, None, None) # Test bad norm argument hue_norm = "not a norm" with pytest.raises(ValueError): p.parse_hue(p.plot_data.hue, None, None, hue_norm) def test_parse_size(self, long_df): p = rel._LinePlotter(x="x", y="y", size="s", data=long_df) # Test default size limits and range default_linewidth = mpl.rcParams["lines.linewidth"] default_limits = p.plot_data["size"].min(), p.plot_data["size"].max() default_range = .5 * default_linewidth, 2 * default_linewidth p.parse_size(p.plot_data["size"], None, None, None) assert p.size_limits == default_limits size_range = min(p.sizes.values()), max(p.sizes.values()) assert size_range == default_range # Test specified size limits size_limits = (1, 5) p.parse_size(p.plot_data["size"], None, None, size_limits) assert p.size_limits == size_limits # Test specified size range sizes = (.1, .5) p.parse_size(p.plot_data["size"], sizes, None, None) assert p.size_limits == default_limits # Test size values with normalization range sizes = (1, 5) size_norm = (1, 10) p.parse_size(p.plot_data["size"], sizes, None, size_norm) 
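# Note (descriptive comment, not in the original test): the loop below checks that each size level maps to a linewidth interpolated linearly between the requested `sizes`, using a clipped Normalize over the `size_norm` range.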
normalize = mpl.colors.Normalize(*size_norm, clip=True) for level, width in p.sizes.items(): assert width == sizes[0] + (sizes[1] - sizes[0]) * normalize(level) # Test size values with normalization object sizes = (1, 5) size_norm = mpl.colors.LogNorm(1, 10, clip=False) p.parse_size(p.plot_data["size"], sizes, None, size_norm) assert p.size_norm.clip for level, width in p.sizes.items(): assert width == sizes[0] + (sizes[1] - sizes[0]) * size_norm(level) # Test specified size order var = "a" levels = long_df[var].unique() sizes = [1, 4, 6] size_order = [levels[1], levels[2], levels[0]] p = rel._LinePlotter(x="x", y="y", size=var, data=long_df) p.parse_size(p.plot_data["size"], sizes, size_order, None) assert p.sizes == dict(zip(size_order, sizes)) # Test list of sizes var = "a" levels = categorical_order(long_df[var]) sizes = list(np.random.rand(len(levels))) p = rel._LinePlotter(x="x", y="y", size=var, data=long_df) p.parse_size(p.plot_data["size"], sizes, None, None) assert p.sizes == dict(zip(levels, sizes)) # Test dict of sizes var = "a" levels = categorical_order(long_df[var]) sizes = dict(zip(levels, np.random.rand(len(levels)))) p = rel._LinePlotter(x="x", y="y", size=var, data=long_df) p.parse_size(p.plot_data["size"], sizes, None, None) assert p.sizes == sizes # Test sizes list with wrong length sizes = list(np.random.rand(len(levels) + 1)) with pytest.raises(ValueError): p.parse_size(p.plot_data["size"], sizes, None, None) # Test sizes dict with missing levels sizes = dict(zip(levels, np.random.rand(len(levels) - 1))) with pytest.raises(ValueError): p.parse_size(p.plot_data["size"], sizes, None, None) # Test bad sizes argument sizes = "bad_size" with pytest.raises(ValueError): p.parse_size(p.plot_data["size"], sizes, None, None) # Test bad norm argument size_norm = "not a norm" p = rel._LinePlotter(x="x", y="y", size="s", data=long_df) with pytest.raises(ValueError): p.parse_size(p.plot_data["size"], None, None, size_norm) def test_parse_style(self, long_df): p = rel._LinePlotter(x="x", y="y", style="a", data=long_df) # Test defaults markers, dashes = True, True p.parse_style(p.plot_data["style"], markers, dashes, None) assert p.markers == dict(zip(p.style_levels, p.default_markers)) assert p.dashes == dict(zip(p.style_levels, p.default_dashes)) # Test lists markers, dashes = ["o", "s", "d"], [(1, 0), (1, 1), (2, 1, 3, 1)] p.parse_style(p.plot_data["style"], markers, dashes, None) assert p.markers == dict(zip(p.style_levels, markers)) assert p.dashes == dict(zip(p.style_levels, dashes)) # Test dicts markers = dict(zip(p.style_levels, markers)) dashes = dict(zip(p.style_levels, dashes)) p.parse_style(p.plot_data["style"], markers, dashes, None) assert p.markers == markers assert p.dashes == dashes # Test style order with defaults style_order = np.take(p.style_levels, [1, 2, 0]) markers = dashes = True p.parse_style(p.plot_data["style"], markers, dashes, style_order) assert p.markers == dict(zip(style_order, p.default_markers)) assert p.dashes == dict(zip(style_order, p.default_dashes)) # Test too many levels with style lists markers, dashes = ["o", "s"], False with pytest.raises(ValueError): p.parse_style(p.plot_data["style"], markers, dashes, None) markers, dashes = False, [(2, 1)] with pytest.raises(ValueError): p.parse_style(p.plot_data["style"], markers, dashes, None) # Test too many levels with style dicts markers, dashes = {"a": "o", "b": "s"}, False with pytest.raises(ValueError): p.parse_style(p.plot_data["style"], markers, dashes, None) markers, dashes = False, {"a": (1, 
0), "b": (2, 1)} with pytest.raises(ValueError): p.parse_style(p.plot_data["style"], markers, dashes, None) # Test mixture of filled and unfilled markers markers, dashes = ["o", "x", "s"], None with pytest.raises(ValueError): p.parse_style(p.plot_data["style"], markers, dashes, None) def test_subset_data_quantities(self, long_df): p = rel._LinePlotter(x="x", y="y", data=long_df) assert len(list(p.subset_data())) == 1 # -- var = "a" n_subsets = len(long_df[var].unique()) p = rel._LinePlotter(x="x", y="y", hue=var, data=long_df) assert len(list(p.subset_data())) == n_subsets p = rel._LinePlotter(x="x", y="y", style=var, data=long_df) assert len(list(p.subset_data())) == n_subsets n_subsets = len(long_df[var].unique()) p = rel._LinePlotter(x="x", y="y", size=var, data=long_df) assert len(list(p.subset_data())) == n_subsets # -- var = "a" n_subsets = len(long_df[var].unique()) p = rel._LinePlotter(x="x", y="y", hue=var, style=var, data=long_df) assert len(list(p.subset_data())) == n_subsets # -- var1, var2 = "a", "s" n_subsets = len(set(list(map(tuple, long_df[[var1, var2]].values)))) p = rel._LinePlotter(x="x", y="y", hue=var1, style=var2, data=long_df) assert len(list(p.subset_data())) == n_subsets p = rel._LinePlotter(x="x", y="y", hue=var1, size=var2, style=var1, data=long_df) assert len(list(p.subset_data())) == n_subsets # -- var1, var2, var3 = "a", "s", "b" cols = [var1, var2, var3] n_subsets = len(set(list(map(tuple, long_df[cols].values)))) p = rel._LinePlotter(x="x", y="y", hue=var1, size=var2, style=var3, data=long_df) assert len(list(p.subset_data())) == n_subsets def test_subset_data_keys(self, long_df): p = rel._LinePlotter(x="x", y="y", data=long_df) for (hue, size, style), _ in p.subset_data(): assert hue is None assert size is None assert style is None # -- var = "a" p = rel._LinePlotter(x="x", y="y", hue=var, data=long_df) for (hue, size, style), _ in p.subset_data(): assert hue in long_df[var].values assert size is None assert style is None p = rel._LinePlotter(x="x", y="y", style=var, data=long_df) for (hue, size, style), _ in p.subset_data(): assert hue is None assert size is None assert style in long_df[var].values p = rel._LinePlotter(x="x", y="y", hue=var, style=var, data=long_df) for (hue, size, style), _ in p.subset_data(): assert hue in long_df[var].values assert size is None assert style in long_df[var].values p = rel._LinePlotter(x="x", y="y", size=var, data=long_df) for (hue, size, style), _ in p.subset_data(): assert hue is None assert size in long_df[var].values assert style is None # -- var1, var2 = "a", "s" p = rel._LinePlotter(x="x", y="y", hue=var1, size=var2, data=long_df) for (hue, size, style), _ in p.subset_data(): assert hue in long_df[var1].values assert size in long_df[var2].values assert style is None def test_subset_data_values(self, long_df): p = rel._LinePlotter(x="x", y="y", data=long_df) _, data = next(p.subset_data()) expected = sort_df(p.plot_data.loc[:, ["x", "y"]], ["x", "y"]) assert np.array_equal(data.values, expected) p = rel._LinePlotter(x="x", y="y", data=long_df, sort=False) _, data = next(p.subset_data()) expected = p.plot_data.loc[:, ["x", "y"]] assert np.array_equal(data.values, expected) p = rel._LinePlotter(x="x", y="y", hue="a", data=long_df) for (hue, _, _), data in p.subset_data(): rows = p.plot_data["hue"] == hue cols = ["x", "y"] expected = sort_df(p.plot_data.loc[rows, cols], cols) assert np.array_equal(data.values, expected.values) p = rel._LinePlotter(x="x", y="y", hue="a", data=long_df, sort=False) for (hue, _, _), data 
in p.subset_data(): rows = p.plot_data["hue"] == hue cols = ["x", "y"] expected = p.plot_data.loc[rows, cols] assert np.array_equal(data.values, expected.values) p = rel._LinePlotter(x="x", y="y", hue="a", style="a", data=long_df) for (hue, _, _), data in p.subset_data(): rows = p.plot_data["hue"] == hue cols = ["x", "y"] expected = sort_df(p.plot_data.loc[rows, cols], cols) assert np.array_equal(data.values, expected.values) p = rel._LinePlotter(x="x", y="y", hue="a", size="s", data=long_df) for (hue, size, _), data in p.subset_data(): rows = (p.plot_data["hue"] == hue) & (p.plot_data["size"] == size) cols = ["x", "y"] expected = sort_df(p.plot_data.loc[rows, cols], cols) assert np.array_equal(data.values, expected.values) class TestLinePlotter(TestRelationalPlotter): def test_aggregate(self, long_df): p = rel._LinePlotter(x="x", y="y", data=long_df) p.n_boot = 10000 p.sort = False x = pd.Series(np.tile([1, 2], 100)) y = pd.Series(np.random.randn(200)) y_mean = y.groupby(x).mean() def sem(x): return np.std(x) / np.sqrt(len(x)) y_sem = y.groupby(x).apply(sem) y_cis = pd.DataFrame(dict(low=y_mean - y_sem, high=y_mean + y_sem), columns=["low", "high"]) p.ci = 68 p.estimator = "mean" index, est, cis = p.aggregate(y, x) assert np.array_equal(index.values, x.unique()) assert est.index.equals(index) assert est.values == pytest.approx(y_mean.values) assert cis.values == pytest.approx(y_cis.values, 4) assert list(cis.columns) == ["low", "high"] p.estimator = np.mean index, est, cis = p.aggregate(y, x) assert np.array_equal(index.values, x.unique()) assert est.index.equals(index) assert est.values == pytest.approx(y_mean.values) assert cis.values == pytest.approx(y_cis.values, 4) assert list(cis.columns) == ["low", "high"] p.seed = 0 _, _, ci1 = p.aggregate(y, x) _, _, ci2 = p.aggregate(y, x) assert np.array_equal(ci1, ci2) y_std = y.groupby(x).std() y_cis = pd.DataFrame(dict(low=y_mean - y_std, high=y_mean + y_std), columns=["low", "high"]) p.ci = "sd" index, est, cis = p.aggregate(y, x) assert np.array_equal(index.values, x.unique()) assert est.index.equals(index) assert est.values == pytest.approx(y_mean.values) assert cis.values == pytest.approx(y_cis.values) assert list(cis.columns) == ["low", "high"] p.ci = None index, est, cis = p.aggregate(y, x) assert cis is None p.ci = 68 x, y = pd.Series([1, 2, 3]), pd.Series([4, 3, 2]) index, est, cis = p.aggregate(y, x) assert np.array_equal(index.values, x) assert np.array_equal(est.values, y) assert cis is None x, y = pd.Series([1, 1, 2]), pd.Series([2, 3, 4]) index, est, cis = p.aggregate(y, x) assert cis.loc[2].isnull().all() p = rel._LinePlotter(x="x", y="y", data=long_df) p.estimator = "mean" p.n_boot = 100 p.ci = 95 x = pd.Categorical(["a", "b", "a", "b"], ["a", "b", "c"]) y = pd.Series([1, 1, 2, 2]) with warnings.catch_warnings(): warnings.simplefilter("error", RuntimeWarning) index, est, cis = p.aggregate(y, x) assert cis.loc[["c"]].isnull().all().all() def test_legend_data(self, long_df): f, ax = plt.subplots() p = rel._LinePlotter(x="x", y="y", data=long_df, legend="full") p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() assert handles == [] # -- ax.clear() p = rel._LinePlotter(x="x", y="y", hue="a", data=long_df, legend="full") p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() colors = [h.get_color() for h in handles] assert labels == ["a"] + p.hue_levels assert colors == ["w"] + [p.palette[l] for l in p.hue_levels] # -- ax.clear() p = rel._LinePlotter(x="x", y="y", hue="a", style="a", 
markers=True, legend="full", data=long_df) p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() colors = [h.get_color() for h in handles] markers = [h.get_marker() for h in handles] assert labels == ["a"] + p.hue_levels == ["a"] + p.style_levels assert colors == ["w"] + [p.palette[l] for l in p.hue_levels] assert markers == [""] + [p.markers[l] for l in p.style_levels] # -- ax.clear() p = rel._LinePlotter(x="x", y="y", hue="a", style="b", markers=True, legend="full", data=long_df) p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() colors = [h.get_color() for h in handles] markers = [h.get_marker() for h in handles] expected_colors = (["w"] + [p.palette[l] for l in p.hue_levels] + ["w"] + [".2" for _ in p.style_levels]) expected_markers = ([""] + ["None" for _ in p.hue_levels] + [""] + [p.markers[l] for l in p.style_levels]) assert labels == ["a"] + p.hue_levels + ["b"] + p.style_levels assert colors == expected_colors assert markers == expected_markers # -- ax.clear() p = rel._LinePlotter(x="x", y="y", hue="a", size="a", data=long_df, legend="full") p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() colors = [h.get_color() for h in handles] widths = [h.get_linewidth() for h in handles] assert labels == ["a"] + p.hue_levels == ["a"] + p.size_levels assert colors == ["w"] + [p.palette[l] for l in p.hue_levels] assert widths == [0] + [p.sizes[l] for l in p.size_levels] # -- x, y = np.random.randn(2, 40) z = np.tile(np.arange(20), 2) p = rel._LinePlotter(x=x, y=y, hue=z) ax.clear() p.legend = "full" p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() assert labels == [str(l) for l in p.hue_levels] ax.clear() p.legend = "brief" p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() assert len(labels) == 4 p = rel._LinePlotter(x=x, y=y, size=z) ax.clear() p.legend = "full" p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() assert labels == [str(l) for l in p.size_levels] ax.clear() p.legend = "brief" p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() assert len(labels) == 4 ax.clear() p.legend = "bad_value" with pytest.raises(ValueError): p.add_legend_data(ax) ax.clear() p = rel._LinePlotter(x=x, y=y, hue=z, hue_norm=mpl.colors.LogNorm(), legend="brief") p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() assert float(labels[2]) / float(labels[1]) == 10 ax.clear() p = rel._LinePlotter(x=x, y=y, size=z, size_norm=mpl.colors.LogNorm(), legend="brief") p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() assert float(labels[2]) / float(labels[1]) == 10 ax.clear() p = rel._LinePlotter( x="x", y="y", hue="f", legend="brief", data=long_df) p.add_legend_data(ax) expected_levels = ['0.20', '0.24', '0.28', '0.32'] handles, labels = ax.get_legend_handles_labels() assert labels == ["f"] + expected_levels ax.clear() p = rel._LinePlotter( x="x", y="y", size="f", legend="brief", data=long_df) p.add_legend_data(ax) expected_levels = ['0.20', '0.24', '0.28', '0.32'] handles, labels = ax.get_legend_handles_labels() assert labels == ["f"] + expected_levels def test_plot(self, long_df, repeated_df): f, ax = plt.subplots() p = rel._LinePlotter(x="x", y="y", data=long_df, sort=False, estimator=None) p.plot(ax, {}) line, = ax.lines assert np.array_equal(line.get_xdata(), long_df.x.values) assert np.array_equal(line.get_ydata(), long_df.y.values) ax.clear() p.plot(ax, {"color": "k", "label": "test"}) line, = ax.lines assert line.get_color() == "k" 
assert line.get_label() == "test" p = rel._LinePlotter(x="x", y="y", data=long_df, sort=True, estimator=None) ax.clear() p.plot(ax, {}) line, = ax.lines sorted_data = sort_df(long_df, ["x", "y"]) assert np.array_equal(line.get_xdata(), sorted_data.x.values) assert np.array_equal(line.get_ydata(), sorted_data.y.values) p = rel._LinePlotter(x="x", y="y", hue="a", data=long_df) ax.clear() p.plot(ax, {}) assert len(ax.lines) == len(p.hue_levels) for line, level in zip(ax.lines, p.hue_levels): assert line.get_color() == p.palette[level] p = rel._LinePlotter(x="x", y="y", size="a", data=long_df) ax.clear() p.plot(ax, {}) assert len(ax.lines) == len(p.size_levels) for line, level in zip(ax.lines, p.size_levels): assert line.get_linewidth() == p.sizes[level] p = rel._LinePlotter(x="x", y="y", hue="a", style="a", markers=True, data=long_df) ax.clear() p.plot(ax, {}) assert len(ax.lines) == len(p.hue_levels) == len(p.style_levels) for line, level in zip(ax.lines, p.hue_levels): assert line.get_color() == p.palette[level] assert line.get_marker() == p.markers[level] p = rel._LinePlotter(x="x", y="y", hue="a", style="b", markers=True, data=long_df) ax.clear() p.plot(ax, {}) levels = product(p.hue_levels, p.style_levels) assert len(ax.lines) == (len(p.hue_levels) * len(p.style_levels)) for line, (hue, style) in zip(ax.lines, levels): assert line.get_color() == p.palette[hue] assert line.get_marker() == p.markers[style] p = rel._LinePlotter(x="x", y="y", data=long_df, estimator="mean", err_style="band", ci="sd", sort=True) ax.clear() p.plot(ax, {}) line, = ax.lines expected_data = long_df.groupby("x").y.mean() assert np.array_equal(line.get_xdata(), expected_data.index.values) assert np.allclose(line.get_ydata(), expected_data.values) assert len(ax.collections) == 1 p = rel._LinePlotter(x="x", y="y", hue="a", data=long_df, estimator="mean", err_style="band", ci="sd") ax.clear() p.plot(ax, {}) assert len(ax.lines) == len(ax.collections) == len(p.hue_levels) for c in ax.collections: assert isinstance(c, mpl.collections.PolyCollection) p = rel._LinePlotter(x="x", y="y", hue="a", data=long_df, estimator="mean", err_style="bars", ci="sd") ax.clear() p.plot(ax, {}) # assert len(ax.lines) / 2 == len(ax.collections) == len(p.hue_levels) # The lines are different on mpl 1.4 but I can't install to debug assert len(ax.collections) == len(p.hue_levels) for c in ax.collections: assert isinstance(c, mpl.collections.LineCollection) p = rel._LinePlotter(x="x", y="y", data=repeated_df, units="u", estimator=None) ax.clear() p.plot(ax, {}) n_units = len(repeated_df["u"].unique()) assert len(ax.lines) == n_units p = rel._LinePlotter(x="x", y="y", hue="a", data=repeated_df, units="u", estimator=None) ax.clear() p.plot(ax, {}) n_units *= len(repeated_df["a"].unique()) assert len(ax.lines) == n_units p.estimator = "mean" with pytest.raises(ValueError): p.plot(ax, {}) p = rel._LinePlotter(x="x", y="y", hue="a", data=long_df, err_style="band", err_kws={"alpha": .5}) ax.clear() p.plot(ax, {}) for band in ax.collections: assert band.get_alpha() == .5 p = rel._LinePlotter(x="x", y="y", hue="a", data=long_df, err_style="bars", err_kws={"elinewidth": 2}) ax.clear() p.plot(ax, {}) for lines in ax.collections: assert lines.get_linestyles() == 2 p.err_style = "invalid" with pytest.raises(ValueError): p.plot(ax, {}) x_str = long_df["x"].astype(str) p = rel._LinePlotter(x="x", y="y", hue=x_str, data=long_df) ax.clear() p.plot(ax, {}) p = rel._LinePlotter(x="x", y="y", size=x_str, data=long_df) ax.clear() p.plot(ax, {}) def 
test_axis_labels(self, long_df): f, (ax1, ax2) = plt.subplots(1, 2, sharey=True) p = rel._LinePlotter(x="x", y="y", data=long_df) p.plot(ax1, {}) assert ax1.get_xlabel() == "x" assert ax1.get_ylabel() == "y" p.plot(ax2, {}) assert ax2.get_xlabel() == "x" assert ax2.get_ylabel() == "y" assert not ax2.yaxis.label.get_visible() def test_lineplot_axes(self, wide_df): f1, ax1 = plt.subplots() f2, ax2 = plt.subplots() ax = rel.lineplot(data=wide_df) assert ax is ax2 ax = rel.lineplot(data=wide_df, ax=ax1) assert ax is ax1 def test_lineplot_smoke(self, flat_array, flat_series, wide_array, wide_list, wide_list_of_series, wide_df, long_df, missing_df): f, ax = plt.subplots() rel.lineplot([], []) ax.clear() rel.lineplot(data=flat_array) ax.clear() rel.lineplot(data=flat_series) ax.clear() rel.lineplot(data=wide_array) ax.clear() rel.lineplot(data=wide_list) ax.clear() rel.lineplot(data=wide_list_of_series) ax.clear() rel.lineplot(data=wide_df) ax.clear() rel.lineplot(x="x", y="y", data=long_df) ax.clear() rel.lineplot(x=long_df.x, y=long_df.y) ax.clear() rel.lineplot(x=long_df.x, y="y", data=long_df) ax.clear() rel.lineplot(x="x", y=long_df.y.values, data=long_df) ax.clear() rel.lineplot(x="x", y="y", hue="a", data=long_df) ax.clear() rel.lineplot(x="x", y="y", hue="a", style="a", data=long_df) ax.clear() rel.lineplot(x="x", y="y", hue="a", style="b", data=long_df) ax.clear() rel.lineplot(x="x", y="y", hue="a", style="a", data=missing_df) ax.clear() rel.lineplot(x="x", y="y", hue="a", style="b", data=missing_df) ax.clear() rel.lineplot(x="x", y="y", hue="a", size="a", data=long_df) ax.clear() rel.lineplot(x="x", y="y", hue="a", size="s", data=long_df) ax.clear() rel.lineplot(x="x", y="y", hue="a", size="a", data=missing_df) ax.clear() rel.lineplot(x="x", y="y", hue="a", size="s", data=missing_df) ax.clear() class TestScatterPlotter(TestRelationalPlotter): def test_legend_data(self, long_df): m = mpl.markers.MarkerStyle("o") default_mark = m.get_path().transformed(m.get_transform()) m = mpl.markers.MarkerStyle("") null_mark = m.get_path().transformed(m.get_transform()) f, ax = plt.subplots() p = rel._ScatterPlotter(x="x", y="y", data=long_df, legend="full") p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() assert handles == [] # -- ax.clear() p = rel._ScatterPlotter(x="x", y="y", hue="a", data=long_df, legend="full") p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() colors = [h.get_facecolors()[0] for h in handles] expected_colors = ["w"] + [p.palette[l] for l in p.hue_levels] assert labels == ["a"] + p.hue_levels assert self.colors_equal(colors, expected_colors) # -- ax.clear() p = rel._ScatterPlotter(x="x", y="y", hue="a", style="a", markers=True, legend="full", data=long_df) p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() colors = [h.get_facecolors()[0] for h in handles] expected_colors = ["w"] + [p.palette[l] for l in p.hue_levels] paths = [h.get_paths()[0] for h in handles] expected_paths = [null_mark] + [p.paths[l] for l in p.style_levels] assert labels == ["a"] + p.hue_levels == ["a"] + p.style_levels assert self.colors_equal(colors, expected_colors) assert self.paths_equal(paths, expected_paths) # -- ax.clear() p = rel._ScatterPlotter(x="x", y="y", hue="a", style="b", markers=True, legend="full", data=long_df) p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() colors = [h.get_facecolors()[0] for h in handles] paths = [h.get_paths()[0] for h in handles] expected_colors = (["w"] + [p.palette[l] for l in 
p.hue_levels] + ["w"] + [".2" for _ in p.style_levels]) expected_paths = ([null_mark] + [default_mark for _ in p.hue_levels] + [null_mark] + [p.paths[l] for l in p.style_levels]) assert labels == ["a"] + p.hue_levels + ["b"] + p.style_levels assert self.colors_equal(colors, expected_colors) assert self.paths_equal(paths, expected_paths) # -- ax.clear() p = rel._ScatterPlotter(x="x", y="y", hue="a", size="a", data=long_df, legend="full") p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() colors = [h.get_facecolors()[0] for h in handles] expected_colors = ["w"] + [p.palette[l] for l in p.hue_levels] sizes = [h.get_sizes()[0] for h in handles] expected_sizes = [0] + [p.sizes[l] for l in p.size_levels] assert labels == ["a"] + p.hue_levels == ["a"] + p.size_levels assert self.colors_equal(colors, expected_colors) assert sizes == expected_sizes # -- ax.clear() sizes_list = [10, 100, 200] p = rel._ScatterPlotter(x="x", y="y", size="s", sizes=sizes_list, data=long_df, legend="full") p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() sizes = [h.get_sizes()[0] for h in handles] expected_sizes = [0] + [p.sizes[l] for l in p.size_levels] assert labels == ["s"] + [str(l) for l in p.size_levels] assert sizes == expected_sizes # -- ax.clear() sizes_dict = {2: 10, 4: 100, 8: 200} p = rel._ScatterPlotter(x="x", y="y", size="s", sizes=sizes_dict, data=long_df, legend="full") p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() sizes = [h.get_sizes()[0] for h in handles] expected_sizes = [0] + [p.sizes[l] for l in p.size_levels] assert labels == ["s"] + [str(l) for l in p.size_levels] assert sizes == expected_sizes # -- x, y = np.random.randn(2, 40) z = np.tile(np.arange(20), 2) p = rel._ScatterPlotter(x=x, y=y, hue=z) ax.clear() p.legend = "full" p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() assert labels == [str(l) for l in p.hue_levels] ax.clear() p.legend = "brief" p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() assert len(labels) == 4 p = rel._ScatterPlotter(x=x, y=y, size=z) ax.clear() p.legend = "full" p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() assert labels == [str(l) for l in p.size_levels] ax.clear() p.legend = "brief" p.add_legend_data(ax) handles, labels = ax.get_legend_handles_labels() assert len(labels) == 4 ax.clear() p.legend = "bad_value" with pytest.raises(ValueError): p.add_legend_data(ax) def test_plot(self, long_df, repeated_df): f, ax = plt.subplots() p = rel._ScatterPlotter(x="x", y="y", data=long_df) p.plot(ax, {}) points = ax.collections[0] assert np.array_equal(points.get_offsets(), long_df[["x", "y"]].values) ax.clear() p.plot(ax, {"color": "k", "label": "test"}) points = ax.collections[0] assert self.colors_equal(points.get_facecolor(), "k") assert points.get_label() == "test" p = rel._ScatterPlotter(x="x", y="y", hue="a", data=long_df) ax.clear() p.plot(ax, {}) points = ax.collections[0] expected_colors = [p.palette[k] for k in p.plot_data["hue"]] assert self.colors_equal(points.get_facecolors(), expected_colors) p = rel._ScatterPlotter(x="x", y="y", style="c", markers=["+", "x"], data=long_df) ax.clear() color = (1, .3, .8) p.plot(ax, {"color": color}) points = ax.collections[0] assert self.colors_equal(points.get_edgecolors(), [color]) p = rel._ScatterPlotter(x="x", y="y", size="a", data=long_df) ax.clear() p.plot(ax, {}) points = ax.collections[0] expected_sizes = [p.size_lookup(k) for k in p.plot_data["size"]] assert 
np.array_equal(points.get_sizes(), expected_sizes) p = rel._ScatterPlotter(x="x", y="y", hue="a", style="a", markers=True, data=long_df) ax.clear() p.plot(ax, {}) expected_colors = [p.palette[k] for k in p.plot_data["hue"]] expected_paths = [p.paths[k] for k in p.plot_data["style"]] assert self.colors_equal(points.get_facecolors(), expected_colors) assert self.paths_equal(points.get_paths(), expected_paths) p = rel._ScatterPlotter(x="x", y="y", hue="a", style="b", markers=True, data=long_df) ax.clear() p.plot(ax, {}) expected_colors = [p.palette[k] for k in p.plot_data["hue"]] expected_paths = [p.paths[k] for k in p.plot_data["style"]] assert self.colors_equal(points.get_facecolors(), expected_colors) assert self.paths_equal(points.get_paths(), expected_paths) x_str = long_df["x"].astype(str) p = rel._ScatterPlotter(x="x", y="y", hue=x_str, data=long_df) ax.clear() p.plot(ax, {}) p = rel._ScatterPlotter(x="x", y="y", size=x_str, data=long_df) ax.clear() p.plot(ax, {}) def test_axis_labels(self, long_df): f, (ax1, ax2) = plt.subplots(1, 2, sharey=True) p = rel._ScatterPlotter(x="x", y="y", data=long_df) p.plot(ax1, {}) assert ax1.get_xlabel() == "x" assert ax1.get_ylabel() == "y" p.plot(ax2, {}) assert ax2.get_xlabel() == "x" assert ax2.get_ylabel() == "y" assert not ax2.yaxis.label.get_visible() def test_scatterplot_axes(self, wide_df): f1, ax1 = plt.subplots() f2, ax2 = plt.subplots() ax = rel.scatterplot(data=wide_df) assert ax is ax2 ax = rel.scatterplot(data=wide_df, ax=ax1) assert ax is ax1 def test_scatterplot_smoke(self, flat_array, flat_series, wide_array, wide_list, wide_list_of_series, wide_df, long_df, missing_df): f, ax = plt.subplots() rel.scatterplot([], []) ax.clear() rel.scatterplot(data=flat_array) ax.clear() rel.scatterplot(data=flat_series) ax.clear() rel.scatterplot(data=wide_array) ax.clear() rel.scatterplot(data=wide_list) ax.clear() rel.scatterplot(data=wide_list_of_series) ax.clear() rel.scatterplot(data=wide_df) ax.clear() rel.scatterplot(x="x", y="y", data=long_df) ax.clear() rel.scatterplot(x=long_df.x, y=long_df.y) ax.clear() rel.scatterplot(x=long_df.x, y="y", data=long_df) ax.clear() rel.scatterplot(x="x", y=long_df.y.values, data=long_df) ax.clear() rel.scatterplot(x="x", y="y", hue="a", data=long_df) ax.clear() rel.scatterplot(x="x", y="y", hue="a", style="a", data=long_df) ax.clear() rel.scatterplot(x="x", y="y", hue="a", style="b", data=long_df) ax.clear() rel.scatterplot(x="x", y="y", hue="a", style="a", data=missing_df) ax.clear() rel.scatterplot(x="x", y="y", hue="a", style="b", data=missing_df) ax.clear() rel.scatterplot(x="x", y="y", hue="a", size="a", data=long_df) ax.clear() rel.scatterplot(x="x", y="y", hue="a", size="s", data=long_df) ax.clear() rel.scatterplot(x="x", y="y", hue="a", size="a", data=missing_df) ax.clear() rel.scatterplot(x="x", y="y", hue="a", size="s", data=missing_df) ax.clear() class TestRelPlotter(TestRelationalPlotter): def test_relplot_simple(self, long_df): g = rel.relplot(x="x", y="y", kind="scatter", data=long_df) x, y = g.ax.collections[0].get_offsets().T assert np.array_equal(x, long_df["x"]) assert np.array_equal(y, long_df["y"]) g = rel.relplot(x="x", y="y", kind="line", data=long_df) x, y = g.ax.lines[0].get_xydata().T expected = long_df.groupby("x").y.mean() assert np.array_equal(x, expected.index) assert y == pytest.approx(expected.values) with pytest.raises(ValueError): g = rel.relplot(x="x", y="y", kind="not_a_kind", data=long_df) def test_relplot_complex(self, long_df): for sem in ["hue", "size", "style"]: g = 
rel.relplot(x="x", y="y", data=long_df, **{sem: "a"}) x, y = g.ax.collections[0].get_offsets().T assert np.array_equal(x, long_df["x"]) assert np.array_equal(y, long_df["y"]) for sem in ["hue", "size", "style"]: g = rel.relplot(x="x", y="y", col="c", data=long_df, **{sem: "a"}) grouped = long_df.groupby("c") for (_, grp_df), ax in zip(grouped, g.axes.flat): x, y = ax.collections[0].get_offsets().T assert np.array_equal(x, grp_df["x"]) assert np.array_equal(y, grp_df["y"]) for sem in ["size", "style"]: g = rel.relplot(x="x", y="y", hue="b", col="c", data=long_df, **{sem: "a"}) grouped = long_df.groupby("c") for (_, grp_df), ax in zip(grouped, g.axes.flat): x, y = ax.collections[0].get_offsets().T assert np.array_equal(x, grp_df["x"]) assert np.array_equal(y, grp_df["y"]) for sem in ["hue", "size", "style"]: g = rel.relplot(x="x", y="y", col="b", row="c", data=sort_df(long_df, ["c", "b"]), **{sem: "a"}) grouped = long_df.groupby(["c", "b"]) for (_, grp_df), ax in zip(grouped, g.axes.flat): x, y = ax.collections[0].get_offsets().T assert np.array_equal(x, grp_df["x"]) assert np.array_equal(y, grp_df["y"]) def test_relplot_hues(self, long_df): palette = ["r", "b", "g"] g = rel.relplot(x="x", y="y", hue="a", style="b", col="c", palette=palette, data=long_df) palette = dict(zip(long_df["a"].unique(), palette)) grouped = long_df.groupby("c") for (_, grp_df), ax in zip(grouped, g.axes.flat): points = ax.collections[0] expected_hues = [palette[val] for val in grp_df["a"]] assert self.colors_equal(points.get_facecolors(), expected_hues) def test_relplot_sizes(self, long_df): sizes = [5, 12, 7] g = rel.relplot(x="x", y="y", size="a", hue="b", col="c", sizes=sizes, data=long_df) sizes = dict(zip(long_df["a"].unique(), sizes)) grouped = long_df.groupby("c") for (_, grp_df), ax in zip(grouped, g.axes.flat): points = ax.collections[0] expected_sizes = [sizes[val] for val in grp_df["a"]] assert np.array_equal(points.get_sizes(), expected_sizes) def test_relplot_styles(self, long_df): markers = ["o", "d", "s"] g = rel.relplot(x="x", y="y", style="a", hue="b", col="c", markers=markers, data=long_df) paths = [] for m in markers: m = mpl.markers.MarkerStyle(m) paths.append(m.get_path().transformed(m.get_transform())) paths = dict(zip(long_df["a"].unique(), paths)) grouped = long_df.groupby("c") for (_, grp_df), ax in zip(grouped, g.axes.flat): points = ax.collections[0] expected_paths = [paths[val] for val in grp_df["a"]] assert self.paths_equal(points.get_paths(), expected_paths) def test_relplot_stringy_numerics(self, long_df): long_df["x_str"] = long_df["x"].astype(str) g = rel.relplot(x="x", y="y", hue="x_str", data=long_df) points = g.ax.collections[0] xys = points.get_offsets() mask = np.ma.getmask(xys) assert not mask.any() assert np.array_equal(xys, long_df[["x", "y"]]) g = rel.relplot(x="x", y="y", size="x_str", data=long_df) points = g.ax.collections[0] xys = points.get_offsets() mask = np.ma.getmask(xys) assert not mask.any() assert np.array_equal(xys, long_df[["x", "y"]]) def test_relplot_legend(self, long_df): g = rel.relplot(x="x", y="y", data=long_df) assert g._legend is None g = rel.relplot(x="x", y="y", hue="a", data=long_df) texts = [t.get_text() for t in g._legend.texts] expected_texts = np.append(["a"], long_df["a"].unique()) assert np.array_equal(texts, expected_texts) g = rel.relplot(x="x", y="y", hue="s", size="s", data=long_df) texts = [t.get_text() for t in g._legend.texts] assert np.array_equal(texts[1:], np.sort(texts[1:])) g = rel.relplot(x="x", y="y", hue="a", legend=False, 
data=long_df) assert g._legend is None palette = color_palette("deep", len(long_df["b"].unique())) a_like_b = dict(zip(long_df["a"].unique(), long_df["b"].unique())) long_df["a_like_b"] = long_df["a"].map(a_like_b) g = rel.relplot(x="x", y="y", hue="b", style="a_like_b", palette=palette, kind="line", estimator=None, data=long_df) lines = g._legend.get_lines()[1:] # Chop off title dummy for line, color in zip(lines, palette): assert line.get_color() == color def test_ax_kwarg_removal(self, long_df): f, ax = plt.subplots() with pytest.warns(UserWarning): g = rel.relplot("x", "y", data=long_df, ax=ax) assert len(ax.collections) == 0 assert len(g.ax.collections) > 0 seaborn-0.10.0/seaborn/tests/test_utils.py000066400000000000000000000325631361256634400206100ustar00rootroot00000000000000"""Tests for plotting utilities.""" import tempfile import shutil import numpy as np import pandas as pd import matplotlib as mpl import matplotlib.pyplot as plt import nose import nose.tools as nt from nose.tools import assert_equal, raises import numpy.testing as npt try: import pandas.testing as pdt except ImportError: import pandas.util.testing as pdt from distutils.version import LooseVersion try: from bs4 import BeautifulSoup except ImportError: BeautifulSoup = None from .. import utils, rcmod from ..utils import get_dataset_names, load_dataset, _network pandas_has_categoricals = LooseVersion(pd.__version__) >= "0.15" a_norm = np.random.randn(100) def test_pmf_hist_basics(): """Test the function to return barplot args for pmf hist.""" out = utils.pmf_hist(a_norm) assert_equal(len(out), 3) x, h, w = out assert_equal(len(x), len(h)) # Test simple case a = np.arange(10) x, h, w = utils.pmf_hist(a, 10) nose.tools.assert_true(np.all(h == h[0])) def test_pmf_hist_widths(): """Test histogram width is correct.""" x, h, w = utils.pmf_hist(a_norm) assert_equal(x[1] - x[0], w) def test_pmf_hist_normalization(): """Test that output data behaves like a PMF.""" x, h, w = utils.pmf_hist(a_norm) nose.tools.assert_almost_equal(sum(h), 1) nose.tools.assert_less_equal(h.max(), 1) def test_pmf_hist_bins(): """Test bin specification.""" x, h, w = utils.pmf_hist(a_norm, 20) assert_equal(len(x), 20) def test_ci_to_errsize(): """Test behavior of ci_to_errsize.""" cis = [[.5, .5], [1.25, 1.5]] heights = [1, 1.5] actual_errsize = np.array([[.5, 1], [.25, 0]]) test_errsize = utils.ci_to_errsize(cis, heights) npt.assert_array_equal(actual_errsize, test_errsize) def test_desaturate(): """Test color desaturation.""" out1 = utils.desaturate("red", .5) assert_equal(out1, (.75, .25, .25)) out2 = utils.desaturate("#00FF00", .5) assert_equal(out2, (.25, .75, .25)) out3 = utils.desaturate((0, 0, 1), .5) assert_equal(out3, (.25, .25, .75)) out4 = utils.desaturate("red", .5) assert_equal(out4, (.75, .25, .25)) @raises(ValueError) def test_desaturation_prop(): """Test that pct outside of [0, 1] raises exception.""" utils.desaturate("blue", 50) def test_saturate(): """Test performance of saturation function.""" out = utils.saturate((.75, .25, .25)) assert_equal(out, (1, 0, 0)) def test_iqr(): """Test the IQR function.""" a = np.arange(5) iqr = utils.iqr(a) assert_equal(iqr, 2) def test_str_to_utf8(): """Test the to_utf8 function: string to Unicode""" s = "\u01ff\u02ff" u = utils.to_utf8(s) assert_equal(type(s), type(str())) assert_equal(type(u), type(u"\u01ff\u02ff")) class TestSpineUtils(object): sides = ["left", "right", "bottom", "top"] outer_sides = ["top", "right"] inner_sides = ["left", "bottom"] offset = 10 original_position = ("outward", 
0) offset_position = ("outward", offset) def test_despine(self): f, ax = plt.subplots() for side in self.sides: nt.assert_true(ax.spines[side].get_visible()) utils.despine() for side in self.outer_sides: nt.assert_true(~ax.spines[side].get_visible()) for side in self.inner_sides: nt.assert_true(ax.spines[side].get_visible()) utils.despine(**dict(zip(self.sides, [True] * 4))) for side in self.sides: nt.assert_true(~ax.spines[side].get_visible()) def test_despine_specific_axes(self): f, (ax1, ax2) = plt.subplots(2, 1) utils.despine(ax=ax2) for side in self.sides: nt.assert_true(ax1.spines[side].get_visible()) for side in self.outer_sides: nt.assert_true(~ax2.spines[side].get_visible()) for side in self.inner_sides: nt.assert_true(ax2.spines[side].get_visible()) def test_despine_with_offset(self): f, ax = plt.subplots() for side in self.sides: nt.assert_equal(ax.spines[side].get_position(), self.original_position) utils.despine(ax=ax, offset=self.offset) for side in self.sides: is_visible = ax.spines[side].get_visible() new_position = ax.spines[side].get_position() if is_visible: nt.assert_equal(new_position, self.offset_position) else: nt.assert_equal(new_position, self.original_position) def test_despine_side_specific_offset(self): f, ax = plt.subplots() utils.despine(ax=ax, offset=dict(left=self.offset)) for side in self.sides: is_visible = ax.spines[side].get_visible() new_position = ax.spines[side].get_position() if is_visible and side == "left": nt.assert_equal(new_position, self.offset_position) else: nt.assert_equal(new_position, self.original_position) def test_despine_with_offset_specific_axes(self): f, (ax1, ax2) = plt.subplots(2, 1) utils.despine(offset=self.offset, ax=ax2) for side in self.sides: nt.assert_equal(ax1.spines[side].get_position(), self.original_position) if ax2.spines[side].get_visible(): nt.assert_equal(ax2.spines[side].get_position(), self.offset_position) else: nt.assert_equal(ax2.spines[side].get_position(), self.original_position) def test_despine_trim_spines(self): f, ax = plt.subplots() ax.plot([1, 2, 3], [1, 2, 3]) ax.set_xlim(.75, 3.25) utils.despine(trim=True) for side in self.inner_sides: bounds = ax.spines[side].get_bounds() nt.assert_equal(bounds, (1, 3)) def test_despine_trim_inverted(self): f, ax = plt.subplots() ax.plot([1, 2, 3], [1, 2, 3]) ax.set_ylim(.85, 3.15) ax.invert_yaxis() utils.despine(trim=True) for side in self.inner_sides: bounds = ax.spines[side].get_bounds() nt.assert_equal(bounds, (1, 3)) def test_despine_trim_noticks(self): f, ax = plt.subplots() ax.plot([1, 2, 3], [1, 2, 3]) ax.set_yticks([]) utils.despine(trim=True) nt.assert_equal(ax.get_yticks().size, 0) def test_despine_moved_ticks(self): f, ax = plt.subplots() for t in ax.yaxis.majorTicks: t.tick1line.set_visible(True) utils.despine(ax=ax, left=True, right=False) for y in ax.yaxis.majorTicks: assert t.tick2line.get_visible() plt.close(f) f, ax = plt.subplots() for t in ax.yaxis.majorTicks: t.tick1line.set_visible(False) utils.despine(ax=ax, left=True, right=False) for y in ax.yaxis.majorTicks: assert not t.tick2line.get_visible() plt.close(f) f, ax = plt.subplots() for t in ax.xaxis.majorTicks: t.tick1line.set_visible(True) utils.despine(ax=ax, bottom=True, top=False) for y in ax.xaxis.majorTicks: assert t.tick2line.get_visible() plt.close(f) f, ax = plt.subplots() for t in ax.xaxis.majorTicks: t.tick1line.set_visible(False) utils.despine(ax=ax, bottom=True, top=False) for y in ax.xaxis.majorTicks: assert not t.tick2line.get_visible() plt.close(f) def test_ticklabels_overlap(): 
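# Note (descriptive comment, not in the original test): long tick labels on a small 2x2-inch figure are expected to be flagged as overlapping on the x axis but not the y axis.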
rcmod.set() f, ax = plt.subplots(figsize=(2, 2)) f.tight_layout() # This gets the Agg renderer working assert not utils.axis_ticklabels_overlap(ax.get_xticklabels()) big_strings = "abcdefgh", "ijklmnop" ax.set_xlim(-.5, 1.5) ax.set_xticks([0, 1]) ax.set_xticklabels(big_strings) assert utils.axis_ticklabels_overlap(ax.get_xticklabels()) x, y = utils.axes_ticklabels_overlap(ax) assert x assert not y def test_categorical_order(): x = ["a", "c", "c", "b", "a", "d"] y = [3, 2, 5, 1, 4] order = ["a", "b", "c", "d"] out = utils.categorical_order(x) nt.assert_equal(out, ["a", "c", "b", "d"]) out = utils.categorical_order(x, order) nt.assert_equal(out, order) out = utils.categorical_order(x, ["b", "a"]) nt.assert_equal(out, ["b", "a"]) out = utils.categorical_order(np.array(x)) nt.assert_equal(out, ["a", "c", "b", "d"]) out = utils.categorical_order(pd.Series(x)) nt.assert_equal(out, ["a", "c", "b", "d"]) out = utils.categorical_order(y) nt.assert_equal(out, [1, 2, 3, 4, 5]) out = utils.categorical_order(np.array(y)) nt.assert_equal(out, [1, 2, 3, 4, 5]) out = utils.categorical_order(pd.Series(y)) nt.assert_equal(out, [1, 2, 3, 4, 5]) if pandas_has_categoricals: x = pd.Categorical(x, order) out = utils.categorical_order(x) nt.assert_equal(out, list(x.categories)) x = pd.Series(x) out = utils.categorical_order(x) nt.assert_equal(out, list(x.cat.categories)) out = utils.categorical_order(x, ["b", "a"]) nt.assert_equal(out, ["b", "a"]) x = ["a", np.nan, "c", "c", "b", "a", "d"] out = utils.categorical_order(x) nt.assert_equal(out, ["a", "c", "b", "d"]) def test_locator_to_legend_entries(): locator = mpl.ticker.MaxNLocator(nbins=3) limits = (0.09, 0.4) levels, str_levels = utils.locator_to_legend_entries( locator, limits, float ) assert str_levels == ["0.00", "0.15", "0.30", "0.45"] limits = (0.8, 0.9) levels, str_levels = utils.locator_to_legend_entries( locator, limits, float ) assert str_levels == ["0.80", "0.84", "0.88", "0.92"] limits = (1, 6) levels, str_levels = utils.locator_to_legend_entries(locator, limits, int) assert str_levels == ["0", "2", "4", "6"] locator = mpl.ticker.LogLocator(numticks=3) limits = (5, 1425) levels, str_levels = utils.locator_to_legend_entries(locator, limits, int) if LooseVersion(mpl.__version__) >= "3.1": assert str_levels == ['0', '1', '100', '10000', '1e+06'] limits = (0.00003, 0.02) levels, str_levels = utils.locator_to_legend_entries( locator, limits, float ) if LooseVersion(mpl.__version__) >= "3.1": assert str_levels == ['1e-07', '1e-05', '1e-03', '1e-01', '10'] if LooseVersion(pd.__version__) >= "0.15": def check_load_dataset(name): ds = load_dataset(name, cache=False) assert(isinstance(ds, pd.DataFrame)) def check_load_cached_dataset(name): # Test the cacheing using a temporary file. 
# With Python 3.2+, we could use the tempfile.TemporaryDirectory() # context manager instead of this try...finally statement tmpdir = tempfile.mkdtemp() try: # download and cache ds = load_dataset(name, cache=True, data_home=tmpdir) # use cached version ds2 = load_dataset(name, cache=True, data_home=tmpdir) pdt.assert_frame_equal(ds, ds2) finally: shutil.rmtree(tmpdir) @_network(url="https://github.com/mwaskom/seaborn-data") def test_get_dataset_names(): if not BeautifulSoup: raise nose.SkipTest("No BeautifulSoup available for parsing html") names = get_dataset_names() assert(len(names) > 0) assert(u"titanic" in names) @_network(url="https://github.com/mwaskom/seaborn-data") def test_load_datasets(): if not BeautifulSoup: raise nose.SkipTest("No BeautifulSoup available for parsing html") # Heavy test to verify that we can load all available datasets for name in get_dataset_names(): # unfortunately @network somehow obscures this generator so it # does not get in effect, so we need to call explicitly # yield check_load_dataset, name check_load_dataset(name) @_network(url="https://github.com/mwaskom/seaborn-data") def test_load_cached_datasets(): if not BeautifulSoup: raise nose.SkipTest("No BeautifulSoup available for parsing html") # Heavy test to verify that we can load all available datasets for name in get_dataset_names(): # unfortunately @network somehow obscures this generator so it # does not get in effect, so we need to call explicitly # yield check_load_dataset, name check_load_cached_dataset(name) def test_relative_luminance(): """Test relative luminance.""" out1 = utils.relative_luminance("white") assert_equal(out1, 1) out2 = utils.relative_luminance("#000000") assert_equal(out2, 0) out3 = utils.relative_luminance((.25, .5, .75)) nose.tools.assert_almost_equal(out3, 0.201624536) rgbs = mpl.cm.RdBu(np.linspace(0, 1, 10)) lums1 = [utils.relative_luminance(rgb) for rgb in rgbs] lums2 = utils.relative_luminance(rgbs) for lum1, lum2 in zip(lums1, lums2): nose.tools.assert_almost_equal(lum1, lum2) def test_remove_na(): a_array = np.array([1, 2, np.nan, 3]) a_array_rm = utils.remove_na(a_array) npt.assert_array_equal(a_array_rm, np.array([1, 2, 3])) a_series = pd.Series([1, 2, np.nan, 3]) a_series_rm = utils.remove_na(a_series) pdt.assert_series_equal(a_series_rm, pd.Series([1., 2, 3], [0, 1, 3])) seaborn-0.10.0/seaborn/utils.py000066400000000000000000000470671361256634400164140ustar00rootroot00000000000000"""Small plotting-related utility functions.""" from __future__ import print_function, division import colorsys import os import numpy as np from scipy import stats import pandas as pd import matplotlib as mpl import matplotlib.colors as mplcol import matplotlib.pyplot as plt from urllib.request import urlopen, urlretrieve from http.client import HTTPException __all__ = ["desaturate", "saturate", "set_hls_values", "despine", "get_dataset_names", "load_dataset"] def remove_na(arr): """Helper method for removing NA values from array-like. Parameters ---------- arr : array-like The array-like from which to remove NA values. Returns ------- clean_arr : array-like The original array with NA values removed. """ return arr[pd.notnull(arr)] def sort_df(df, *args, **kwargs): """Wrapper to handle different pandas sorting API pre/post 0.17.""" try: return df.sort_values(*args, **kwargs) except AttributeError: return df.sort(*args, **kwargs) def ci_to_errsize(cis, heights): """Convert intervals to error arguments relative to plot heights. 
Parameters ---------- cis: 2 x n sequence sequence of confidence interval limits heights : n sequence sequence of plot heights Returns ------- errsize : 2 x n array sequence of error size relative to height values in correct format as argument for plt.bar """ cis = np.atleast_2d(cis).reshape(2, -1) heights = np.atleast_1d(heights) errsize = [] for i, (low, high) in enumerate(np.transpose(cis)): h = heights[i] elow = h - low ehigh = high - h errsize.append([elow, ehigh]) errsize = np.asarray(errsize).T return errsize def pmf_hist(a, bins=10): """Return arguments to plt.bar for pmf-like histogram of an array. Parameters ---------- a: array-like array to make histogram of bins: int number of bins Returns ------- x: array left x position of bars h: array height of bars w: float width of bars """ n, x = np.histogram(a, bins) h = n / n.sum() w = x[1] - x[0] return x[:-1], h, w def desaturate(color, prop): """Decrease the saturation channel of a color by some percent. Parameters ---------- color : matplotlib color hex, rgb-tuple, or html color name prop : float saturation channel of color will be multiplied by this value Returns ------- new_color : rgb tuple desaturated color code in RGB tuple representation """ # Check inputs if not 0 <= prop <= 1: raise ValueError("prop must be between 0 and 1") # Get rgb tuple rep rgb = mplcol.colorConverter.to_rgb(color) # Convert to hls h, l, s = colorsys.rgb_to_hls(*rgb) # Desaturate the saturation channel s *= prop # Convert back to rgb new_color = colorsys.hls_to_rgb(h, l, s) return new_color def saturate(color): """Return a fully saturated color with the same hue. Parameters ---------- color : matplotlib color hex, rgb-tuple, or html color name Returns ------- new_color : rgb tuple saturated color code in RGB tuple representation """ return set_hls_values(color, s=1) def set_hls_values(color, h=None, l=None, s=None): # noqa """Independently manipulate the h, l, or s channels of a color. Parameters ---------- color : matplotlib color hex, rgb-tuple, or html color name h, l, s : floats between 0 and 1, or None new values for each channel in hls space Returns ------- new_color : rgb tuple new color code in RGB tuple representation """ # Get rgb tuple representation rgb = mplcol.colorConverter.to_rgb(color) vals = list(colorsys.rgb_to_hls(*rgb)) for i, val in enumerate([h, l, s]): if val is not None: vals[i] = val rgb = colorsys.hls_to_rgb(*vals) return rgb def axlabel(xlabel, ylabel, **kwargs): """Grab current axis and label it.""" ax = plt.gca() ax.set_xlabel(xlabel, **kwargs) ax.set_ylabel(ylabel, **kwargs) def despine(fig=None, ax=None, top=True, right=True, left=False, bottom=False, offset=None, trim=False): """Remove the top and right spines from plot(s). fig : matplotlib figure, optional Figure to despine all axes of, default uses current figure. ax : matplotlib axes, optional Specific axes object to despine. top, right, left, bottom : boolean, optional If True, remove that spine. offset : int or dict, optional Absolute distance, in points, spines should be moved away from the axes (negative values move spines inward). A single value applies to all spines; a dict can be used to set offset values per side. trim : bool, optional If True, limit spines to the smallest and largest major tick on each non-despined axis. 
Returns ------- None """ # Get references to the axes we want if fig is None and ax is None: axes = plt.gcf().axes elif fig is not None: axes = fig.axes elif ax is not None: axes = [ax] for ax_i in axes: for side in ["top", "right", "left", "bottom"]: # Toggle the spine objects is_visible = not locals()[side] ax_i.spines[side].set_visible(is_visible) if offset is not None and is_visible: try: val = offset.get(side, 0) except AttributeError: val = offset _set_spine_position(ax_i.spines[side], ('outward', val)) # Potentially move the ticks if left and not right: maj_on = any( t.tick1line.get_visible() for t in ax_i.yaxis.majorTicks ) min_on = any( t.tick1line.get_visible() for t in ax_i.yaxis.minorTicks ) ax_i.yaxis.set_ticks_position("right") for t in ax_i.yaxis.majorTicks: t.tick2line.set_visible(maj_on) for t in ax_i.yaxis.minorTicks: t.tick2line.set_visible(min_on) if bottom and not top: maj_on = any( t.tick1line.get_visible() for t in ax_i.xaxis.majorTicks ) min_on = any( t.tick1line.get_visible() for t in ax_i.xaxis.minorTicks ) ax_i.xaxis.set_ticks_position("top") for t in ax_i.xaxis.majorTicks: t.tick2line.set_visible(maj_on) for t in ax_i.xaxis.minorTicks: t.tick2line.set_visible(min_on) if trim: # clip off the parts of the spines that extend past major ticks xticks = ax_i.get_xticks() if xticks.size: firsttick = np.compress(xticks >= min(ax_i.get_xlim()), xticks)[0] lasttick = np.compress(xticks <= max(ax_i.get_xlim()), xticks)[-1] ax_i.spines['bottom'].set_bounds(firsttick, lasttick) ax_i.spines['top'].set_bounds(firsttick, lasttick) newticks = xticks.compress(xticks <= lasttick) newticks = newticks.compress(newticks >= firsttick) ax_i.set_xticks(newticks) yticks = ax_i.get_yticks() if yticks.size: firsttick = np.compress(yticks >= min(ax_i.get_ylim()), yticks)[0] lasttick = np.compress(yticks <= max(ax_i.get_ylim()), yticks)[-1] ax_i.spines['left'].set_bounds(firsttick, lasttick) ax_i.spines['right'].set_bounds(firsttick, lasttick) newticks = yticks.compress(yticks <= lasttick) newticks = newticks.compress(newticks >= firsttick) ax_i.set_yticks(newticks) def _set_spine_position(spine, position): """ Set the spine's position without resetting an associated axis. As of matplotlib v. 1.0.0, if a spine has an associated axis, then spine.set_position() calls axis.cla(), which resets locators, formatters, etc. We temporarily replace that call with axis.reset_ticks(), which is sufficient for our purposes. """ axis = spine.axis if axis is not None: cla = axis.cla axis.cla = axis.reset_ticks spine.set_position(position) if axis is not None: axis.cla = cla def _kde_support(data, bw, gridsize, cut, clip): """Establish support for a kernel density estimate.""" support_min = max(data.min() - bw * cut, clip[0]) support_max = min(data.max() + bw * cut, clip[1]) return np.linspace(support_min, support_max, gridsize) def percentiles(a, pcts, axis=None): """Like scoreatpercentile but can take and return array of percentiles. 
Parameters ---------- a : array data pcts : sequence of percentile values percentile or percentiles to find score at axis : int or None if not None, computes scores over this axis Returns ------- scores : array array of scores at requested percentiles first dimension is the length of the object passed to ``pcts`` """ scores = [] try: n = len(pcts) except TypeError: pcts = [pcts] n = 0 for i, p in enumerate(pcts): if axis is None: score = stats.scoreatpercentile(a.ravel(), p) else: score = np.apply_along_axis(stats.scoreatpercentile, axis, a, p) scores.append(score) scores = np.asarray(scores) if not n: scores = scores.squeeze() return scores def ci(a, which=95, axis=None): """Return a percentile range from an array of values.""" p = 50 - which / 2, 50 + which / 2 return percentiles(a, p, axis) def sig_stars(p): """Return an R-style significance string corresponding to p values.""" if p < 0.001: return "***" elif p < 0.01: return "**" elif p < 0.05: return "*" elif p < 0.1: return "." return "" def iqr(a): """Calculate the IQR for an array of numbers.""" a = np.asarray(a) q1 = stats.scoreatpercentile(a, 25) q3 = stats.scoreatpercentile(a, 75) return q3 - q1 def get_dataset_names(): """Report available example datasets, useful for reporting issues.""" # delayed import to not demand bs4 unless this function is actually used from bs4 import BeautifulSoup http = urlopen('https://github.com/mwaskom/seaborn-data/') gh_list = BeautifulSoup(http) return [l.text.replace('.csv', '') for l in gh_list.find_all("a", {"class": "js-navigation-open"}) if l.text.endswith('.csv')] def get_data_home(data_home=None): """Return the path of the seaborn data directory. This is used by the ``load_dataset`` function. If the ``data_home`` argument is not specified, the default location is ``~/seaborn-data``. Alternatively, a different default location can be specified using the environment variable ``SEABORN_DATA``. """ if data_home is None: data_home = os.environ.get('SEABORN_DATA', os.path.join('~', 'seaborn-data')) data_home = os.path.expanduser(data_home) if not os.path.exists(data_home): os.makedirs(data_home) return data_home def load_dataset(name, cache=True, data_home=None, **kws): """Load a dataset from the online repository (requires internet). Parameters ---------- name : str Name of the dataset (`name`.csv on https://github.com/mwaskom/seaborn-data). You can obtain a list of available datasets using :func:`get_dataset_names` cache : boolean, optional If True, then cache data locally and use the cache on subsequent calls data_home : string, optional The directory in which to cache data. 
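# A minimal sketch of the statistical helpers above (percentiles, ci, iqr,
# sig_stars); it assumes they are importable from seaborn.utils as in the
# released package.
import numpy as np
from seaborn.utils import ci, iqr, percentiles, sig_stars

rng = np.random.RandomState(0)
a = rng.normal(size=1000)

print(percentiles(a, [5, 50, 95]))   # scores at several percentiles at once
print(ci(a, which=95))               # central 95% interval (2.5th and 97.5th percentiles)
print(iqr(a))                        # interquartile range
print(sig_stars(0.003))              # "**"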
By default, uses ~/seaborn-data/ kws : dict, optional Passed to pandas.read_csv """ path = ("https://raw.githubusercontent.com/" "mwaskom/seaborn-data/master/{}.csv") full_path = path.format(name) if cache: cache_path = os.path.join(get_data_home(data_home), os.path.basename(full_path)) if not os.path.exists(cache_path): urlretrieve(full_path, cache_path) full_path = cache_path df = pd.read_csv(full_path, **kws) if df.iloc[-1].isnull().all(): df = df.iloc[:-1] # Set some columns as a categorical type with ordered levels if name == "tips": df["day"] = pd.Categorical(df["day"], ["Thur", "Fri", "Sat", "Sun"]) df["sex"] = pd.Categorical(df["sex"], ["Male", "Female"]) df["time"] = pd.Categorical(df["time"], ["Lunch", "Dinner"]) df["smoker"] = pd.Categorical(df["smoker"], ["Yes", "No"]) if name == "flights": df["month"] = pd.Categorical(df["month"], df.month.unique()) if name == "exercise": df["time"] = pd.Categorical(df["time"], ["1 min", "15 min", "30 min"]) df["kind"] = pd.Categorical(df["kind"], ["rest", "walking", "running"]) df["diet"] = pd.Categorical(df["diet"], ["no fat", "low fat"]) if name == "titanic": df["class"] = pd.Categorical(df["class"], ["First", "Second", "Third"]) df["deck"] = pd.Categorical(df["deck"], list("ABCDEFG")) return df def axis_ticklabels_overlap(labels): """Return a boolean for whether the list of ticklabels has overlaps. Parameters ---------- labels : list of ticklabels Returns ------- overlap : boolean True if any of the labels overlap. """ if not labels: return False try: bboxes = [l.get_window_extent() for l in labels] overlaps = [b.count_overlaps(bboxes) for b in bboxes] return max(overlaps) > 1 except RuntimeError: # Issue on macosx backend raises an error in the above code return False def axes_ticklabels_overlap(ax): """Return booleans for whether the x and y ticklabels on an Axes overlap. Parameters ---------- ax : matplotlib Axes Returns ------- x_overlap, y_overlap : booleans True when the labels on that axis overlap. """ return (axis_ticklabels_overlap(ax.get_xticklabels()), axis_ticklabels_overlap(ax.get_yticklabels())) def categorical_order(values, order=None): """Return a list of unique data values. Determine an ordered list of levels in ``values``. Parameters ---------- values : list, array, Categorical, or Series Vector of "categorical" values order : list-like, optional Desired order of category levels to override the order determined from the ``values`` object. Returns ------- order : list Ordered list of category levels not including null values. 
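# A minimal sketch of load_dataset() above; the first call needs internet
# access, after which the csv is cached under ~/seaborn-data (or the directory
# named by the SEABORN_DATA environment variable).
import seaborn as sns

tips = sns.load_dataset("tips")
print(tips.head())
print(tips["day"].cat.categories)    # ordered categorical levels set up by the loader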
""" if order is None: if hasattr(values, "categories"): order = values.categories else: try: order = values.cat.categories except (TypeError, AttributeError): try: order = values.unique() except AttributeError: order = pd.unique(values) try: np.asarray(values).astype(np.float) order = np.sort(order) except (ValueError, TypeError): order = order order = filter(pd.notnull, order) return list(order) def locator_to_legend_entries(locator, limits, dtype): """Return levels and formatted levels for brief numeric legends.""" raw_levels = locator.tick_values(*limits).astype(dtype) class dummy_axis: def get_view_interval(self): return limits if isinstance(locator, mpl.ticker.LogLocator): formatter = mpl.ticker.LogFormatter() else: formatter = mpl.ticker.ScalarFormatter() formatter.axis = dummy_axis() # TODO: The following two lines should be replaced # once pinned matplotlib>=3.1.0 with: # formatted_levels = formatter.format_ticks(raw_levels) formatter.set_locs(raw_levels) formatted_levels = [formatter(x) for x in raw_levels] return raw_levels, formatted_levels def get_color_cycle(): """Return the list of colors in the current matplotlib color cycle.""" try: cyl = mpl.rcParams['axes.prop_cycle'] try: # matplotlib 1.5 verifies that axes.prop_cycle *is* a cycler # but no garuantee that there's a `color` key. # so users could have a custom rcParmas w/ no color... return [x['color'] for x in cyl] except KeyError: pass except KeyError: pass return mpl.rcParams['axes.color_cycle'] def relative_luminance(color): """Calculate the relative luminance of a color according to W3C standards Parameters ---------- color : matplotlib color or sequence of matplotlib colors Hex code, rgb-tuple, or html color name. Returns ------- luminance : float(s) between 0 and 1 """ rgb = mpl.colors.colorConverter.to_rgba_array(color)[:, :3] rgb = np.where(rgb <= .03928, rgb / 12.92, ((rgb + .055) / 1.055) ** 2.4) lum = rgb.dot([.2126, .7152, .0722]) try: return lum.item() except ValueError: return lum def to_utf8(obj): """Return a Unicode string representing a Python object. Unicode strings (i.e. type ``unicode`` in Python 2.7 and type ``str`` in Python 3.x) are returned unchanged. Byte strings (i.e. type ``str`` in Python 2.7 and type ``bytes`` in Python 3.x) are returned as UTF-8-encoded strings. For other objects, the method ``__str__()`` is called, and the result is returned as a UTF-8-encoded string. Parameters ---------- obj : object Any Python object Returns ------- s : unicode (Python 2.7) / str (Python 3.x) UTF-8-encoded string representation of ``obj`` """ if isinstance(obj, str): try: # If obj is a string, try to return it as a Unicode-encoded # string: return obj.decode("utf-8") except AttributeError: # Python 3.x strings are already Unicode, and do not have a # decode() method, so the unchanged string is returned return obj try: if isinstance(obj, unicode): # do not attemt a conversion if string is already a Unicode # string: return obj else: # call __str__() for non-string object, and return the # result to Unicode: return obj.__str__().decode("utf-8") except NameError: # NameError is raised in Python 3.x as type 'unicode' is not # defined. if isinstance(obj, bytes): return obj.decode("utf-8") else: return obj.__str__() def _network(t=None, url='https://google.com'): """ Decorator that will skip a test if `url` is unreachable. 
Parameters ---------- t : function, optional url : str, optional """ import nose if t is None: return lambda x: _network(x, url=url) def wrapper(*args, **kwargs): # attempt to connect try: f = urlopen(url) except (IOError, HTTPException): raise nose.SkipTest() else: f.close() return t(*args, **kwargs) return wrapper seaborn-0.10.0/seaborn/widgets.py000066400000000000000000000357241361256634400167170ustar00rootroot00000000000000from __future__ import division import numpy as np import matplotlib.pyplot as plt from matplotlib.colors import LinearSegmentedColormap # Lots of different places that widgets could come from... try: from ipywidgets import interact, FloatSlider, IntSlider except ImportError: import warnings # ignore ShimWarning raised by IPython, see GH #892 with warnings.catch_warnings(): warnings.simplefilter("ignore") try: from IPython.html.widgets import interact, FloatSlider, IntSlider except ImportError: try: from IPython.html.widgets import (interact, FloatSliderWidget, IntSliderWidget) FloatSlider = FloatSliderWidget IntSlider = IntSliderWidget except ImportError: pass from .miscplot import palplot from .palettes import (color_palette, dark_palette, light_palette, diverging_palette, cubehelix_palette) __all__ = ["choose_colorbrewer_palette", "choose_cubehelix_palette", "choose_dark_palette", "choose_light_palette", "choose_diverging_palette"] def _init_mutable_colormap(): """Create a matplotlib colormap that will be updated by the widgets.""" greys = color_palette("Greys", 256) cmap = LinearSegmentedColormap.from_list("interactive", greys) cmap._init() cmap._set_extremes() return cmap def _update_lut(cmap, colors): """Change the LUT values in a matplotlib colormap in-place.""" cmap._lut[:256] = colors cmap._set_extremes() def _show_cmap(cmap): """Show a continuous matplotlib colormap.""" from .rcmod import axes_style # Avoid circular import with axes_style("white"): f, ax = plt.subplots(figsize=(8.25, .75)) ax.set(xticks=[], yticks=[]) x = np.linspace(0, 1, 256)[np.newaxis, :] ax.pcolormesh(x, cmap=cmap) def choose_colorbrewer_palette(data_type, as_cmap=False): """Select a palette from the ColorBrewer set. These palettes are built into matplotlib and can be used by name in many seaborn functions, or by passing the object returned by this function. Parameters ---------- data_type : {'sequential', 'diverging', 'qualitative'} This describes the kind of data you want to visualize. See the seaborn color palette docs for more information about how to choose this value. Note that you can pass substrings (e.g. 'q' for 'qualitative. as_cmap : bool If True, the return value is a matplotlib colormap rather than a list of discrete colors. Returns ------- pal or cmap : list of colors or matplotlib colormap Object that can be passed to plotting functions. See Also -------- dark_palette : Create a sequential palette with dark low values. light_palette : Create a sequential palette with bright low values. diverging_palette : Create a diverging palette from selected colors. cubehelix_palette : Create a sequential palette or colormap using the cubehelix system. 
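# A minimal sketch of the idea behind _init_mutable_colormap() above: the
# palette widgets start from a 256-color "Greys" ramp and then rewrite its
# lookup table in place as the sliders move. Building the starting colormap
# needs only public APIs:
import seaborn as sns
from matplotlib.colors import LinearSegmentedColormap

greys = sns.color_palette("Greys", 256)
cmap = LinearSegmentedColormap.from_list("interactive", greys)
# _update_lut() then overwrites the private cmap._lut array, so anything drawn
# with this colormap picks up the new colors on the next redraw.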
""" if data_type.startswith("q") and as_cmap: raise ValueError("Qualitative palettes cannot be colormaps.") pal = [] if as_cmap: cmap = _init_mutable_colormap() if data_type.startswith("s"): opts = ["Greys", "Reds", "Greens", "Blues", "Oranges", "Purples", "BuGn", "BuPu", "GnBu", "OrRd", "PuBu", "PuRd", "RdPu", "YlGn", "PuBuGn", "YlGnBu", "YlOrBr", "YlOrRd"] variants = ["regular", "reverse", "dark"] @interact def choose_sequential(name=opts, n=(2, 18), desat=FloatSlider(min=0, max=1, value=1), variant=variants): if variant == "reverse": name += "_r" elif variant == "dark": name += "_d" if as_cmap: colors = color_palette(name, 256, desat) _update_lut(cmap, np.c_[colors, np.ones(256)]) _show_cmap(cmap) else: pal[:] = color_palette(name, n, desat) palplot(pal) elif data_type.startswith("d"): opts = ["RdBu", "RdGy", "PRGn", "PiYG", "BrBG", "RdYlBu", "RdYlGn", "Spectral"] variants = ["regular", "reverse"] @interact def choose_diverging(name=opts, n=(2, 16), desat=FloatSlider(min=0, max=1, value=1), variant=variants): if variant == "reverse": name += "_r" if as_cmap: colors = color_palette(name, 256, desat) _update_lut(cmap, np.c_[colors, np.ones(256)]) _show_cmap(cmap) else: pal[:] = color_palette(name, n, desat) palplot(pal) elif data_type.startswith("q"): opts = ["Set1", "Set2", "Set3", "Paired", "Accent", "Pastel1", "Pastel2", "Dark2"] @interact def choose_qualitative(name=opts, n=(2, 16), desat=FloatSlider(min=0, max=1, value=1)): pal[:] = color_palette(name, n, desat) palplot(pal) if as_cmap: return cmap return pal def choose_dark_palette(input="husl", as_cmap=False): """Launch an interactive widget to create a dark sequential palette. This corresponds with the :func:`dark_palette` function. This kind of palette is good for data that range between relatively uninteresting low values and interesting high values. Requires IPython 2+ and must be used in the notebook. Parameters ---------- input : {'husl', 'hls', 'rgb'} Color space for defining the seed value. Note that the default is different than the default input for :func:`dark_palette`. as_cmap : bool If True, the return value is a matplotlib colormap rather than a list of discrete colors. Returns ------- pal or cmap : list of colors or matplotlib colormap Object that can be passed to plotting functions. See Also -------- dark_palette : Create a sequential palette with dark low values. light_palette : Create a sequential palette with bright low values. cubehelix_palette : Create a sequential palette or colormap using the cubehelix system. 
""" pal = [] if as_cmap: cmap = _init_mutable_colormap() if input == "rgb": @interact def choose_dark_palette_rgb(r=(0., 1.), g=(0., 1.), b=(0., 1.), n=(3, 17)): color = r, g, b if as_cmap: colors = dark_palette(color, 256, input="rgb") _update_lut(cmap, colors) _show_cmap(cmap) else: pal[:] = dark_palette(color, n, input="rgb") palplot(pal) elif input == "hls": @interact def choose_dark_palette_hls(h=(0., 1.), l=(0., 1.), s=(0., 1.), n=(3, 17)): color = h, l, s if as_cmap: colors = dark_palette(color, 256, input="hls") _update_lut(cmap, colors) _show_cmap(cmap) else: pal[:] = dark_palette(color, n, input="hls") palplot(pal) elif input == "husl": @interact def choose_dark_palette_husl(h=(0, 359), s=(0, 99), l=(0, 99), n=(3, 17)): color = h, s, l if as_cmap: colors = dark_palette(color, 256, input="husl") _update_lut(cmap, colors) _show_cmap(cmap) else: pal[:] = dark_palette(color, n, input="husl") palplot(pal) if as_cmap: return cmap return pal def choose_light_palette(input="husl", as_cmap=False): """Launch an interactive widget to create a light sequential palette. This corresponds with the :func:`light_palette` function. This kind of palette is good for data that range between relatively uninteresting low values and interesting high values. Requires IPython 2+ and must be used in the notebook. Parameters ---------- input : {'husl', 'hls', 'rgb'} Color space for defining the seed value. Note that the default is different than the default input for :func:`light_palette`. as_cmap : bool If True, the return value is a matplotlib colormap rather than a list of discrete colors. Returns ------- pal or cmap : list of colors or matplotlib colormap Object that can be passed to plotting functions. See Also -------- light_palette : Create a sequential palette with bright low values. dark_palette : Create a sequential palette with dark low values. cubehelix_palette : Create a sequential palette or colormap using the cubehelix system. """ pal = [] if as_cmap: cmap = _init_mutable_colormap() if input == "rgb": @interact def choose_light_palette_rgb(r=(0., 1.), g=(0., 1.), b=(0., 1.), n=(3, 17)): color = r, g, b if as_cmap: colors = light_palette(color, 256, input="rgb") _update_lut(cmap, colors) _show_cmap(cmap) else: pal[:] = light_palette(color, n, input="rgb") palplot(pal) elif input == "hls": @interact def choose_light_palette_hls(h=(0., 1.), l=(0., 1.), s=(0., 1.), n=(3, 17)): color = h, l, s if as_cmap: colors = light_palette(color, 256, input="hls") _update_lut(cmap, colors) _show_cmap(cmap) else: pal[:] = light_palette(color, n, input="hls") palplot(pal) elif input == "husl": @interact def choose_light_palette_husl(h=(0, 359), s=(0, 99), l=(0, 99), n=(3, 17)): color = h, s, l if as_cmap: colors = light_palette(color, 256, input="husl") _update_lut(cmap, colors) _show_cmap(cmap) else: pal[:] = light_palette(color, n, input="husl") palplot(pal) if as_cmap: return cmap return pal def choose_diverging_palette(as_cmap=False): """Launch an interactive widget to choose a diverging color palette. This corresponds with the :func:`diverging_palette` function. This kind of palette is good for data that range between interesting low values and interesting high values with a meaningful midpoint. (For example, change scores relative to some baseline value). Requires IPython 2+ and must be used in the notebook. Parameters ---------- as_cmap : bool If True, the return value is a matplotlib colormap rather than a list of discrete colors. 
Returns ------- pal or cmap : list of colors or matplotlib colormap Object that can be passed to plotting functions. See Also -------- diverging_palette : Create a diverging color palette or colormap. choose_colorbrewer_palette : Interactively choose palettes from the colorbrewer set, including diverging palettes. """ pal = [] if as_cmap: cmap = _init_mutable_colormap() @interact def choose_diverging_palette(h_neg=IntSlider(min=0, max=359, value=220), h_pos=IntSlider(min=0, max=359, value=10), s=IntSlider(min=0, max=99, value=74), l=IntSlider(min=0, max=99, value=50), sep=IntSlider(min=1, max=50, value=10), n=(2, 16), center=["light", "dark"]): if as_cmap: colors = diverging_palette(h_neg, h_pos, s, l, sep, 256, center) _update_lut(cmap, colors) _show_cmap(cmap) else: pal[:] = diverging_palette(h_neg, h_pos, s, l, sep, n, center) palplot(pal) if as_cmap: return cmap return pal def choose_cubehelix_palette(as_cmap=False): """Launch an interactive widget to create a sequential cubehelix palette. This corresponds with the :func:`cubehelix_palette` function. This kind of palette is good for data that range between relatively uninteresting low values and interesting high values. The cubehelix system allows the palette to have more hue variance across the range, which can be helpful for distinguishing a wider range of values. Requires IPython 2+ and must be used in the notebook. Parameters ---------- as_cmap : bool If True, the return value is a matplotlib colormap rather than a list of discrete colors. Returns ------- pal or cmap : list of colors or matplotlib colormap Object that can be passed to plotting functions. See Also -------- cubehelix_palette : Create a sequential palette or colormap using the cubehelix system. """ pal = [] if as_cmap: cmap = _init_mutable_colormap() @interact def choose_cubehelix(n_colors=IntSlider(min=2, max=16, value=9), start=FloatSlider(min=0, max=3, value=0), rot=FloatSlider(min=-1, max=1, value=.4), gamma=FloatSlider(min=0, max=5, value=1), hue=FloatSlider(min=0, max=1, value=.8), light=FloatSlider(min=0, max=1, value=.85), dark=FloatSlider(min=0, max=1, value=.15), reverse=False): if as_cmap: colors = cubehelix_palette(256, start, rot, gamma, hue, light, dark, reverse) _update_lut(cmap, np.c_[colors, np.ones(256)]) _show_cmap(cmap) else: pal[:] = cubehelix_palette(n_colors, start, rot, gamma, hue, light, dark, reverse) palplot(pal) if as_cmap: return cmap return pal seaborn-0.10.0/setup.cfg000066400000000000000000000000421361256634400150500ustar00rootroot00000000000000[metadata] license_file = LICENSE seaborn-0.10.0/setup.py000066400000000000000000000057161361256634400147560ustar00rootroot00000000000000#! /usr/bin/env python # # Copyright (C) 2012-2020 Michael Waskom DESCRIPTION = "seaborn: statistical data visualization" LONG_DESCRIPTION = """\ Seaborn is a library for making statistical graphics in Python. It is built on top of `matplotlib `_ and closely integrated with `pandas `_ data structures. 
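# A minimal sketch of the diverging and cubehelix widgets above (Jupyter
# notebook with ipywidgets assumed), next to their non-interactive counterparts.
import seaborn as sns

pal = sns.choose_cubehelix_palette()                   # interactive sliders
helix = sns.cubehelix_palette(8, start=.5, rot=-.75)   # direct equivalent without widgets
div = sns.diverging_palette(220, 10, sep=10, n=9)      # seed hues given in degrees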
Here is some of the functionality that seaborn offers: - A dataset-oriented API for examining relationships between multiple variables - Specialized support for using categorical variables to show observations or aggregate statistics - Options for visualizing univariate or bivariate distributions and for comparing them between subsets of data - Automatic estimation and plotting of linear regression models for different kinds dependent variables - Convenient views onto the overall structure of complex datasets - High-level abstractions for structuring multi-plot grids that let you easily build complex visualizations - Concise control over matplotlib figure styling with several built-in themes - Tools for choosing color palettes that faithfully reveal patterns in your data Seaborn aims to make visualization a central part of exploring and understanding data. Its dataset-oriented plotting functions operate on dataframes and arrays containing whole datasets and internally perform the necessary semantic mapping and statistical aggregation to produce informative plots. """ DISTNAME = 'seaborn' MAINTAINER = 'Michael Waskom' MAINTAINER_EMAIL = 'mwaskom@nyu.edu' URL = 'https://seaborn.pydata.org' LICENSE = 'BSD (3-clause)' DOWNLOAD_URL = 'https://github.com/mwaskom/seaborn/' VERSION = '0.10.0' PYTHON_REQUIRES = ">=3.6" INSTALL_REQUIRES = [ 'numpy>=1.13.3', 'scipy>=1.0.1', 'pandas>=0.22.0', 'matplotlib>=2.1.2', ] PACKAGES = [ 'seaborn', 'seaborn.colors', 'seaborn.external', 'seaborn.tests', ] CLASSIFIERS = [ 'Intended Audience :: Science/Research', 'Programming Language :: Python :: 3.6', 'Programming Language :: Python :: 3.7', 'Programming Language :: Python :: 3.8', 'License :: OSI Approved :: BSD License', 'Topic :: Scientific/Engineering :: Visualization', 'Topic :: Multimedia :: Graphics', 'Operating System :: POSIX', 'Operating System :: Unix', 'Operating System :: MacOS' ] if __name__ == "__main__": from setuptools import setup import sys if sys.version_info[:2] < (3, 6): raise RuntimeError("seaborn requires python >= 3.6.") setup( name=DISTNAME, author=MAINTAINER, author_email=MAINTAINER_EMAIL, maintainer=MAINTAINER, maintainer_email=MAINTAINER_EMAIL, description=DESCRIPTION, long_description=LONG_DESCRIPTION, license=LICENSE, url=URL, version=VERSION, download_url=DOWNLOAD_URL, python_requires=PYTHON_REQUIRES, install_requires=INSTALL_REQUIRES, packages=PACKAGES, classifiers=CLASSIFIERS ) seaborn-0.10.0/testing/000077500000000000000000000000001361256634400147105ustar00rootroot00000000000000seaborn-0.10.0/testing/deps_latest.txt000066400000000000000000000000521361256634400177550ustar00rootroot00000000000000numpy scipy matplotlib pandas statsmodels seaborn-0.10.0/testing/deps_minimal.txt000066400000000000000000000000361361256634400201110ustar00rootroot00000000000000numpy scipy matplotlib pandas seaborn-0.10.0/testing/deps_pinned.txt000066400000000000000000000001121361256634400177330ustar00rootroot00000000000000numpy=1.13.3 scipy=1.0.1 pandas=0.22.0 matplotlib=2.1.2 statsmodels=0.8.0 seaborn-0.10.0/testing/getmsfonts.sh000066400000000000000000000002171361256634400174350ustar00rootroot00000000000000echo ttf-mscorefonts-installer msttcorefonts/accepted-mscorefonts-eula select true | debconf-set-selections apt-get install msttcorefonts -qq seaborn-0.10.0/testing/matplotlibrc_agg000066400000000000000000000000151361256634400201410ustar00rootroot00000000000000backend: Agg 
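# A minimal sketch for checking an installed environment against the runtime
# requirements declared in setup.py above; the exact versions printed depend
# on your environment.
import matplotlib
import numpy
import pandas
import scipy
import seaborn

for mod in (seaborn, numpy, scipy, pandas, matplotlib):
    print(mod.__name__, mod.__version__)   # compare with INSTALL_REQUIRES above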
seaborn-0.10.0/testing/matplotlibrc_qtagg000066400000000000000000000000201361256634400205020ustar00rootroot00000000000000backend: Qt5Agg seaborn-0.10.0/testing/utils.txt000066400000000000000000000000451361256634400166100ustar00rootroot00000000000000pytest!=5.3.4 pytest-cov flake8 nose
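# A minimal sketch of what the matplotlibrc_agg / matplotlibrc_qtagg files
# above select for the test runs; the same choice can be made programmatically
# before pyplot is imported, when no rc file is in place.
import matplotlib
matplotlib.use("Agg")              # headless backend; swap for "Qt5Agg" to mirror the qtagg file
import matplotlib.pyplot as plt    # import pyplot only after the backend is chosen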